My internet connection is getting upgraded to 10 Gbit next week. I’m going to start out with the rental router from the ISP, but my goal is to replace it with a home-built router, since I host a bunch of stuff and want to separate out my home Wi-Fi, etc. onto VLANs. I’m currently using the good old Ubiquiti USG4. I don’t need anything fancy like high-speed VPN tunnels (just enough to run SSH through), just IPv6 routing and IPv4 tunneling (MAP-E with a static IP), as the new connection is IPv6 native.
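
For anyone not familiar with MAP-E: the router basically has to wrap every outbound IPv4 packet in an IPv6 header addressed to the ISP’s border relay (and unwrap inbound traffic). Just to illustrate the extra per-packet work I’m asking the CPU to do, here is a rough Python sketch of the encapsulation step; the function name and addresses are made up, and the real protocol also involves mapping rules, so treat it as a cartoon:

    import socket
    import struct

    def encapsulate_ipv4_in_ipv6(ipv4_packet: bytes, src6: str, dst6: str) -> bytes:
        """Wrap a raw IPv4 packet in a 40-byte IPv6 header (Next Header = 4),
        roughly what a MAP-E CE does for every upstream packet."""
        header = struct.pack(
            "!IHBB16s16s",
            6 << 28,                                  # version 6, traffic class 0, flow label 0
            len(ipv4_packet),                         # payload length = the whole IPv4 packet
            4,                                        # next header 4 = encapsulated IPv4
            64,                                       # hop limit
            socket.inet_pton(socket.AF_INET6, src6),  # tunnel source (my router's IPv6)
            socket.inet_pton(socket.AF_INET6, dst6),  # tunnel destination (ISP border relay)
        )
        return header + ipv4_packet

    # Made-up addresses, purely for illustration
    packet = encapsulate_ipv4_in_ipv6(b"...raw IPv4 packet...", "2001:db8::2", "2001:db8:ffff::1")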

After doing a bit of research, the Lenovo ThinkCentre M720q has caught my eye. There are tons of them available locally, and people online seem to have good luck using them for router duties.

The one thing I haven’t figured out is which CPU option to go for. There’s the Celeron G4900T (2 cores), the Core i3-8100T (4 cores), and the Core i5 (6 cores). The first two are pretty close in price, but the i5 costs twice as much as anything else.

My research turns up really conflicting results, with half of people saying that plain IP routing, even at 10 Gbit, is a piece of cake for any decently modern CPU, and others saying they ran into bottlenecks.
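
Here is the back-of-the-envelope math that I suspect explains the disagreement: at full-size frames the per-packet CPU budget is pretty generous, but with small packets it gets brutal fast (the 3 GHz clock is just an assumed figure):

    # Packets/sec at 10 Gbit/s and the per-packet cycle budget of a single core.
    LINK_BPS = 10e9
    CORE_HZ = 3.0e9          # assumed single-core clock

    for frame in (1518, 64):                 # Ethernet frame size in bytes
        wire_bytes = frame + 20              # + 8 B preamble, + 12 B inter-frame gap
        pps = LINK_BPS / (wire_bytes * 8)
        cycles = CORE_HZ / pps
        print(f"{frame:>5} B frames: {pps / 1e6:5.2f} Mpps, ~{cycles:4.0f} cycles/packet/core")

That works out to roughly 0.8 Mpps (thousands of cycles per packet) at full-size frames but almost 15 Mpps (a couple hundred cycles per packet) at minimum-size frames, so the typical home workload is easy while a flood of tiny packets is a very different story.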

I’ve also seen comments claiming that the BSD-based routing platforms like pfSense perform worse than Linux-based ones like OpenWrt due to a lack of multi-threading in the former, but I don’t know if that’s true.

Does anyone here have any experience routing 10 Gbit on commodity hardware and can share their experiences?

  • InverseParallax@lemmy.world

    Core i3 is fine; a Celeron can route, but you don’t have as much headroom, or room for firewall rules, etc. I’d recommend an Intel X520 or a Mellanox CX3 or newer, though the CX2 is perfectly fine.

    The BS about BSD being slower is maybe 15 years old at best?

    BSD is a monster for routing.

    I run 25 GbE routing; you could still get by on your 4-core, but I throw some serious Xeons at it anyway.

    • kalleboo@lemmy.world (OP)

      Thanks for the Intel x520 recommendation, those are looking like a much better deal right now than the Mellanox cards I was looking at.

      Glad to hear it about the BSD networking!

      I’m still trying to avoid the Xeons for power consumption reasons, hehe, although it would be a lot more fun for sure!

  • MercuryGenisus@lemmy.world

    I am saddened to see that this thread had no mention of how many horses it takes to run a router. What do y’all think? Would one be enough? It would need to work in shifts to keep uptime at 100%. Maybe 3 to be safe?

    • kalleboo@lemmy.world (OP)

      We also need to consider the practical aspects. Who mucks after the horses? Who feeds them? Do we need a stall? Does it need to be air conditioned in the summer/winter?

    • huskypenguin@sh.itjust.works

      This is why I came here. I think you’d need at least three. One to work while the other sleeps, and a spare in case one gets injured.

      • y0din@lemmy.world

        3 horses = 3 horsepower, which translates to a whopping 393.6 Duckpower.

        Honestly, why are we still using horses as the standard here? Ducks are clearly the superior metric. So if you’re like me and prefer a more feathered approach, just remember:

        3 horses = 3 horsepower = 393.6 ducks. You’re welcome.

        (PS: Just imagine 393.6 ducks handling 10Gb… now that’s efficiency.)

        • sugar_in_your_tea@sh.itjust.works

          Hmm, is that waddling or flying power? Swimming?

          Also, the only reason for the 3 horsepower is so the others can rest, so we’d probably need far fewer than 393.6 ducks; I think we could get away with <100, provided we can manage their sleep cycles properly.

          • y0din@lemmy.world

            Alright, let’s get into the nitty-gritty of Duckpower.

            First, let’s settle the “waddling vs. flying vs. swimming” debate. Horses aren’t big on flying, so we’re talking waddling power here. Until someone locates a Pegasus, we’re limited to the traditional land-bound horsepower. If you want swimming power, I guess you’d need to measure a seahorse?

            Now, here’s where it gets serious: according to the brilliant minds at Art of Engineering, we can calculate Duckpower using a clever formula. They took the mass of a duck, compared it to a horse, and ran it through Kleiber’s Law. The answer? One horsepower = 131.2 Duckpower. So, back to our math:

            3 horsepower = 3 x 131.2 Duckpower = 393.6 ducks waddling their hearts out.
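
            If anyone wants to sanity-check that figure, here’s a quick script; the horse and duck masses are my own guesses, so don’t trust the decimals too hard:

                # Kleiber's law: metabolic rate scales with body mass^0.75,
                # so duckpower per horsepower = (horse mass / duck mass)^0.75.
                HORSE_KG = 700.0   # assumed draft-ish horse
                DUCK_KG = 1.05     # assumed mallard

                ducks_per_hp = (HORSE_KG / DUCK_KG) ** 0.75
                print(f"1 horsepower ≈ {ducks_per_hp:.1f} duckpower")
                print(f"3 horsepower ≈ {3 * ducks_per_hp:.1f} ducks waddling flat out")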

            But wait! We probably don’t need all 393.6 ducks if we give them some solid shift schedules. The 3 horses were only there so two could rest at any time; following this logic, we’d only need around 100 well-rested ducks, provided they get naps and stay hydrated.

            So, let’s optimize our duck workforce with a shift schedule. Assuming we only need 100 ducks, here’s the plan:

            Duckpower Shift Schedule:

            Total Ducks: 100

            Working Ducks per Shift: 25

            Shift Duration: 2 hours on, 6 hours off (plenty of time for snacks and naps)

            In a day, we’d run 4 shifts like this:

            1. Shift 1: 25 ducks start strong at 8:00 AM, waddling with purpose.

            2. Shift 2: Fresh 25 ducks take over at 10:00 AM while Shift 1 ducks hit the ducky lounge for snacks and a nap.

            3. Shift 3: At 12:00 PM, another 25 ducks clock in to keep those wheels turning.

            4. Shift 4: Finally, at 2:00 PM, the last 25 ducks take over while the others catch up on R&R.

            With this cycle, each duck works only 2 hours out of every 8, staying energized, waddling at peak efficiency, and ready for action.

            TL;DR: 3 horsepower = 393.6 ducks waddling but if we set up a 4-shift system, we can pull this off with only 100 ducks working 2 hours each, plus snack breaks.

    • sugar_in_your_tea@sh.itjust.works

      Switches and routers are pretty low-power, so we could probably get away with some form of body heat -> electricity thing. Or a battery and put the horse on a treadmill every so often.

    • jubilationtcornpone@sh.itjust.works

      Horsepower is a very rough “average” of work output over a given period of time. It doesn’t really account for spikes in load. For that we’ll have to consider the torque. So the real question is: how many foot-pounds or newton-meters does OP need to handle 10 gigs of throughput?

  • grue@lemmy.world

    I don’t know the answer, but I do know I’d at least start off looking for hardware with a dedicated ASIC for routing, not general-purpose PC hardware doing routing with the CPU.

  • poVoq@slrpnk.net

    If you connect via a 10 Gbit PCIe expansion card, it is often a question of how many PCIe lanes the CPU has, and whether the mainboard you are using has them connected directly to the CPU or routes them through the mainboard chipset, which is much slower.

    • kalleboo@lemmy.world (OP)

      These ThinkCentre M720q machines I’m looking at all seem to have a single PCIe 3.0 x8 slot regardless of the CPU, and that seems to be all that the Mellanox ConnectX cards need according to their spec sheets, so hopefully that is good.

    • cmnybo@discuss.tchncs.de

      For a dual port card, you will want an 8 lane PCIe 3.0 slot connected to the CPU. Almost any desktop CPU will have enough lanes since you won’t be using a graphics card. You can get by with a 4 lane slot, but you won’t be able to max out both ports bidirectionally at the same time.
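
      Rough per-direction numbers if you want to check the math (this only counts line-coding overhead; TLP framing shaves off a bit more, and older Gen2 cards like the X520 mentioned above are where x4 really gets tight):

          # Usable PCIe bandwidth per direction vs. a dual-port 10 GbE card.
          GBIT_PER_LANE = {
              "PCIe 2.0": 5.0 * 8 / 10,      # 5 GT/s with 8b/10b encoding
              "PCIe 3.0": 8.0 * 128 / 130,   # 8 GT/s with 128b/130b encoding
          }
          NEEDED_GBIT = 2 * 10               # both 10 GbE ports saturated in one direction

          for gen, per_lane in GBIT_PER_LANE.items():
              for lanes in (4, 8):
                  total = per_lane * lanes
                  verdict = "enough" if total >= NEEDED_GBIT else "too tight"
                  print(f"{gen} x{lanes}: {total:5.1f} Gbit/s per direction ({verdict})")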

  • just_another_person@lemmy.world

    Your uplink’s rated capability can be way different from what you’ll actually get. Get the service first, do some measurements, then start planning.

    • kalleboo@lemmy.world (OP)

      Yeah, I’m not ordering anything until I have the connection up and running, which is why I opted to rent the ISP router to begin with. But looking at results others on the same ISP have posted online, I can probably expect up to around 7 Gbit real-world, so I’ve been thinking I’ll at least want something better than the standard 1 Gbit or even 2.5 Gbit gear out there. Hence why I’m trying to research what the hardware requirements actually are!

  • Ebby@lemmy.ssba.com

    I have 10Gbit and hunted that whale. But I didn’t build my own router. Electricity is $0.51/kWh. Ouch.

    First, 10Gbit hardware is more available now than it was years ago, so you have more options. I started off with the router my ISP gave me. It worked, but it was 1 Gbit. Not going to do it for me. Plus, basic functions were paywalled. Booooo! Snagged a broken Asus router and got it working great.

    With IDS/IPS enabled, I get about 3.5Gbps. There is newer router tech today that looks interesting with fewer bottlenecks that would have been nice years ago, but not worth the upgrade right now.

    My desktop hits about 2 Gbps downloading Steam games/updates, but my partner’s desktop lags behind with SATA SSD storage. You definitely need NVMe at that speed.

    I will say my experience with 10 Gbit Ethernet cards is not positive. I get a lot of intermittent disconnections, and there are a lot more bugs compared to 1 Gbit switches. They do not like sharing with 2.5 Gbit devices. I keep my server on 1 Gbit connections. It’s plenty fast for my needs though.

    • kalleboo@lemmy.world (OP)

      The low power consumption is one of the reasons I was attracted to the ThinkCentre M720q devices. It definitely wouldn’t be worth it if I had to build some tower PC or run a Xeon server!

      The ISP router I’m getting is 10 Gbit (on WAN and one LAN port, the rest are 1 Gbit), but the configuration seems limited and it’s a $5/mo rental tacked onto the bill.

      I think I can live without IDS/IPS; in all the time I used it on UniFi, it never gave me any actionable info, so hopefully dropping it helps with performance.

      That’s interesting about the 10Gbit ethernet cards. Is that with something like a Mellanox or some other card? My NAS is going to be stuck on 2.5 Gbit since it’s just a Synology.