High-speed 10Gbps full-mesh network based on USB4 for just $47.98

gaudat
69 replies
23h54m

USB4/thunderbolt is a magical protocol. Turns out the fastest way to move data between 2 modern PCs is to connect their thunderbolt ports with a USB-C cable. The connection shows up as an Ethernet port on Windows and I can easily saturate an SSD with its 1GB/s+ transfer rate. And it just works (tm). Reminds me of firewire on these 20 yo Macs.

And what happens if you wire both ports together on the same PC?.. Do you get a broadcast (thunder) storm?

jwells89
22 replies
23h50m

Target disk mode over FireWire was magical back in the day. Nothing like turning someone’s laptop into an oversized external hard drive to rescue data or get a borked OS X installation booting again.

xattt
10 replies
23h10m

FireWire had IP networking stacks in Windows and OS X in 2007. You could daisy chain a bunch of devices together to share a network connection.

FirmwareBurner
9 replies
19h57m

Sure, but it was very expensive compared to USB and Ethernet, so FireWire never caught on with mainstream consumers other than in some niche cases like camcorders.

Thunderbolt was also expensive, which is why adoption was limited, but it's becoming more mainstream since Intel and Apple have been pushing it in recent years, and piggybacking on USB-C makes it an easy sell compared to requiring a separate connector like FireWire.

Still, Thunderbolt peripherals are way more expensive than USB ones, so, like FireWire before it, use is still mostly in the enthusiast/professional space.

GeekyBear
8 replies
19h35m

it was very expensive compared to USB and Ethernet so Firewire never caught on

Compared to Gigabit Ethernet back in that time period? Firewire was a huge bargain.

FirmwareBurner
7 replies
19h31m

No, not gigabit, but 100 Mbit Ethernet was more than enough for what average consumers had to transfer back then, and it was significantly cheaper and more available than FireWire. It was more likely that your HDD would be the bottleneck for faster network transfers.

GeekyBear
6 replies
19h23m

Even the first version of Firewire was four times as fast as that.

Completely loading a 5 Gig iPod with music over that first version of Firewire still took a few minutes.

FirmwareBurner
5 replies
19h20m

>Even the first version of Firewire was four times as fast as that.

Yes and? At what price points? What was the adoption rate? How many mainstream PCs and peripherals worldwide had it?

Wherever you went, whoever you met, you were way more likely to find a USB or ethernet port to hook up for a fast transfer rather than Firewire.

At least in my country at the time; maybe you lived in Cupertino/Palo Alto where everyone had iMacs and FireWire.

Just like VHS over Betamax, USB won because it was cheaper and more convenient despite being technically inferior to FireWire, and consumer tech at the time was a race to the bottom in terms of price.

>Completely loading a 5 Gig iPod with music over that first version of Firewire still took a few minutes.

Only the first gen iPod had FireWire before switching to USB, and even then, what was the point of FireWire 400 on it when the tiny and slow mechanical HDD in it was the real bottleneck?

There was no way the iPod would have been remotely as successful had it stayed on FireWire. Apple didn't have the market share back then to enforce their own less popular standard. Only when it switched to USB and added PC support did the iPod really take off.

GeekyBear
2 replies
19h13m

Yes and?

At the time, USB was still limited to 12 megabits per second and transferring that same 5 Gigs of MP3 files would have taken over an hour. The firewire iPod did it in a couple of minutes.

USB was cheaper, but dog slow.

Gigabit Ethernet was faster but WAY more expensive.

FirmwareBurner
1 replies
19h3m

I see you keep ignoring my arguments so this is the last time I say it.

Again, only the first gen iPod was FireWire exclusive and it was not yet a mainstream product since it was still Mac only, so average consumer demand for FireWire on home computers was lackluster and the iPod didn't change that.

FireWire was niche or nonexistent in the home PC space and it died completely with the launch of USB 2.0, remaining alive only in the prosumer space.

>Gigabit Ethernet was faster but WAY more expensive.

Please show me where I mentioned Gigabit Ethernet as an argument. I said 100 Mbit Ethernet, which was dirt cheap and which almost every PC and Mac had, as opposed to FireWire. If you needed a fast cross-platform transfer, it was your best bet at the time in terms of cost and mass availability, before USB 2.0 and gigabit hit the market.

GeekyBear
0 replies
18h40m

Your claim was that Firewire "was very expensive compared to USB and Ethernet".

Which completely ignores the speed and the costs of the various data transfer standards as they existed at the time.

The cheap 12 megabit USB standard that existed at the time couldn't transfer 5 Gigs of MP3 files in less than an hour.

The cheaper 10 megabit version of Ethernet they sold at the time would also need more than an hour to transfer enough MP3 files to fill an iPod and wouldn't have been cheaper than a FireWire port.

Ethernet with faster speeds than 400 megabit Firewire existed, but was MUCH more expensive.

Speed AND cost both matter.

I said 100 Mbit Ethernet, which was dirt cheap

Back then? It wasn't.

jwells89
1 replies
19h0m

Later gens of iPod gained the ability to connect to USB but still supported FireWire. The majority of my usage of my 20GB 4th gen iPod was with the FireWire cable it shipped with.

xattt
0 replies
18h25m

It’s forgotten to history that Apple supported FW on the 30-pin connector.

RachelF
5 replies
20h41m

Yeah, we built a similar system a generation ago using FireWire on x86 and Linux.

FireWire at 800Mbps beat Gigabit Ethernet in terms of latency for a rather hard real-time system.

rabi_molar
4 replies
20h38m

I remember the "good ol days" when I would always opt for a FireWire audio interface for music production and live performance over USB for exactly this reason. I'd get way better latency and stability.

justnotworthit
2 replies
20h33m

Any ideas what I'm supposed to do with the FireWire mixer I bought 14 years ago?

wmf
0 replies
19h31m

Get a Thunderbolt 3 to Thunderbolt 2 dongle and a Thunderbolt 2 to Firewire dongle?

robes
0 replies
17h54m

You can still use it! I keep an old ThinkPad X61 & T400 around with mini-FireWire ports for my MOTU 828 mkII interface. It also serves as a DAC over SPDIF for my much newer Ryzen desktop. I would like to try Thunderbolt to FW800 to FW400 adapters to see if I can get it working on something more modern, as I learned it has mainline Linux kernel support.

jwells89
0 replies
20h13m

Even now an ancient FW400 HDD enclosure of mine is less flaky than a lot of USB storage I’ve used.

Fnoord
3 replies
20h1m

Way before that there was SCSI.

The key thing is the low cost. Asus, for example, with their ASUS ThunderboltEX 4, lets you add TB4 via a PCIe card.

The neat thing about USB4 is the same as with PATA and later SATA: widely and relatively cheaply available in consumer hardware. SCSI and FireWire were technically superior but were neither cheap nor widely available.

Oh and I don't know about SCSI but FireWire was actually a security risk.

tga_d
1 replies
15h51m

I know thunderbolt at least up through 3 was generally carte blanche DMA, so an obvious security nightmare (strictly speaking no worse than cold boot attacks and the like, but there's a practicality difference between dumping raw DIMMs and just plugging in a thumb drive -- or inter-machine links like TFA, for that matter). Does TB4 bother trying to solve this?

p_l
0 replies
8h3m

Not really. Just like with OHCI 1394, it's the responsibility of the host IOMMU to handle it.

vegardx
0 replies
10h33m

Just a fair warning about these cards, the support is flakey at best. You should research if it works with your motherboard and CPU before going down that route. I did a lot of research on this because I wanted to connect my Gaming-PC to an Apple Studio Display over optical Thunderbolt, but quickly decided against it.

Luckily there are good alternatives. I landed on a solution using a Belkin[0] DisplayPort and USB to Thunderbolt cable. I just get USB 2.0 speeds, but it's enough for my needs. I'm also able to extend it using an active DisplayPort 1.4 extender, for a total of 10 meters of cable.

[0] https://www.belkin.com/support-article/?articleNum=316883

bimguy
0 replies
17h48m

Target disk mode to my workstation and saving someone's whole system with Disk Warrior used to be my favourite and most rewarding task. APFS did away with that joy; if a macOS system fails now, you have almost no chance of saving the system from itself.

vardump
17 replies
23h10m

You need more like 50 Gbps to saturate a modern NVMe SSD.

10 Gbps doesn't come close.

genman
9 replies
22h27m

There was a moment with spinning rust drives when it would have made sense to have storage in a networked device and not locally, but now it rarely makes sense unless an incredibly fast interconnect can be used.

Of course this example is still interesting and cool.

NavinF
7 replies
21h41m

Infiniband can match NVMe bandwidth and its latency is similar. Newer network cards can also present an NVMe-oF drive as a local NVMe drive.
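For anyone curious what the software side of NVMe-oF looks like, here is a minimal initiator-side sketch with nvme-cli (the address, port and NQN below are placeholders, and it assumes a target is already exported over RDMA; the in-NIC variant mentioned above does the same thing in hardware):

    modprobe nvme-rdma
    # see what the target is exporting
    nvme discover -t rdma -a 10.0.0.1 -s 4420
    # attach a remote namespace; it then shows up as a local /dev/nvmeXnY
    nvme connect -t rdma -a 10.0.0.1 -s 4420 -n nqn.2024-01.io.example:nvme1
    nvme list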

mgerdts
5 replies
21h3m

25 Gb Ethernet roughly matches a PCIe Gen 3 NVMe drive's max throughput; 50 Gb will match Gen 4. These are RDMA capable.

It seems 25Gb dual port Mellanox CX-4 cards can be found on eBay for about $50. The cables will be a bit pricey. If not doing back to back, the switch will probably be very pricey.

justinclift
2 replies
15h43m

There are 100Gb/s Intel Omni-Path switches currently on eBay for cheap:

https://www.ebay.com/itm/273064154224

And yep, they do apparently work ok if you're running Linux. :)

https://www.youtube.com/watch?v=dOIXtsjJMYE

Haven't seen info about how much noise they generate though, so not sure if suitable homelab material. :/

mgerdts
1 replies
15h33m

I saw that, but didn't consider it particularly cheap. Also, the power draw of these things is likely a concern if run continuously.

justinclift
0 replies
14h19m

Yeah, power draw could be a problem. :(

Cheap though... it's a small fraction of the price for a new one.

Are there better options around?

Looking at Ebay just now, I'm seeing some Mellanox IB switches around the same price point. Those things are probably super noisy though, and IB means more mucking around (needing an ethernet gateway for my use case).

genman
1 replies
20h51m

It can match a single drive.

mgerdts
0 replies
14h33m

Assuming jumbo packets are used with RoCE, every 4096 bytes of data will have 70 bytes of protocol overhead [1]. This means that a 25 Gb/s Ethernet link can deliver no more than 3.07 GB/s of throughput.

Each lane of PCIe Gen 3 can deliver 985 MB/s [2], meaning the typical drive that uses 4 lanes would max out at 3.9 GB/s. Surely there is some PCIe/NVMe overhead, but 3.5 GB/s is achievable if the drive is fast enough. There are many examples of Gen 4 drives that deliver over 7 GB/s.

Supposing NVMe-oF is used, the NVMe protocol overhead over Ethernet and PCIe will be similar.

1. https://enterprise-support.nvidia.com/s/article/roce-v2-cons...

2. https://en.wikipedia.org/wiki/PCI_Express

genman
0 replies
21h2m

Yes, that was my point - 10Gbps is just way too slow, and even full Thunderbolt bandwidth can be easily saturated in a RAID configuration - NVMe drives are just incredibly fast.

arglebargle123
0 replies
22h12m

For a couple of years I had a Linux NAS box under my desk with like 8 Samsung 850 pros in a big array connected to my desktop over 40GbE. Then NVMe became a common thing and the complexity wasn't worthwhile.

nine_k
6 replies
21h4m

10 Gbps does not, but 10 GBps, as written above, is 80 Gbps, which matches your estimate.

vardump
5 replies
20h57m

I've tried the same method and it's about 10 Gbps, not 10 GBps.

ThePowerOfFuet
4 replies
20h25m

Sounds like you were actually using USB 3.1 Gen 2 or USB 3.2 Gen 2[x1], not Thunderbolt 4.

vardump
2 replies
19h25m

I tried it with M2 Max Macbooks. Definitely TB4/USB4 capable.

walterbell
1 replies
17h51m

With a certified TB4 cable?

vardump
0 replies
13h37m

Yes.

bhaney
0 replies
20h15m

Thunderbolt 4 can't do 80Gbps either

ComputerGuru
12 replies
22h57m

The connection shows up as an Ethernet port on Windows

Do you know if this is the case for all thunderbolt generations (speed differences aside)? Does it apply to thunderbolt using mini DisplayPort too or only over USB PHY?

chx
7 replies
21h52m

It was possible on Thunderbolt 2. On the Mac side, OS X Mavericks enabled it, and this Intel whitepaper from 2014 talks about the Windows side. https://www.thunderbolttechnology.net/sites/default/files/Th...

https://www.gigabyte.com/Press/News/1140 doesn't mention it, and the Intel whitepaper specifically requires TB2, so I would guess TB2 is where it started.

raffraffraff
6 replies
20h15m

Pity Thunderbolt 2 is basically non-existent nowadays. I have a few MacBook Pro 13s (2015) and I'd love to be able to use the Thunderbolt 2 ports, but peripherals were too expensive and the standard short-lived. Try finding a Thunderbolt 2 dock anywhere. Filter out all the false positives (USB-C docks) and there are maybe 10 total on eBay, and they're stupidly expensive for 6+ year old used devices, most without cables or power supplies. Such a pity because they can really extend the useful life of those laptops.

russelg
5 replies
18h30m

TB3 is backwards compatible so you can use the Apple TB2 to 3 adapter in conjunction with a TB2 cable to hook up any TB3 device to your MBP 2015. I had the mid-2015 15" MBP and used the adapter to hook up an external GPU. If that can work I'm sure a TB3 dock will.

ComputerGuru
4 replies
18h22m

It’s the beauty and elegance of the PCIe design. Thunderbolt just provides convenient ported access to those lanes.

chx
3 replies
5h3m

The cardinal sin of USB IF was releasing the 5gbps then 10gbps USB modes.

It should've been PCIe 2.0 x1 and then 3.0 x1. There was absolutely no reason not to do it: PCIe 2.0 came out in January 2007, USB 3.0 came out in November 2008. PCIe 3.0 followed in November 2010 and 10gbps over the USB C connector didn't appear until August 2014.

What USB4 version 2.0 can only do with a complex tunneling architecture we could have gotten "straight": PCIe 5.0 x1 can do 32gbps, which closely matches the 40gbps lane speed defined in USB4 version 2.0 (which again came out years after PCIe 5.0, mind you). It would require two lanes, one for RX and one for TX, and the other two lanes could carry UHBR20 data for display, for a total of 40gbps. This very closely resembles the 80gbps bus speed of USB4 version 2.0, but the architecture is vastly simpler.

We wouldn't have needed dubious-quality separate USB-to-SATA and then USB-to-Ethernet etc. adapters. External 10GbE would be ubiquitous instead of barely existing and expensive. Similarly, eGPUs would not need to be a niche, and DisplayLink simply wouldn't exist because it wouldn't need to exist, and the world would be a better place for it. You could just run a very low wattage, very simple but real GPU instead. Say, the SM750 is like 2W.

ComputerGuru
2 replies
2h54m

I get what you’re saying but not specifically how you’re imagining the implementation. What do you envision the difference between thunderbolt and usb would be in this case? All complex/bandwidth-intensive applications would be better suited to use PCIe directly, but the problem has always been that for various peripherals this imposes a (small) cost manufacturers would rather not pay and would prefer to have the usb spec abstract over.

chx
1 replies
1h45m

There would be no choice: there would be no 5gbps USB mode, so there's nothing you could do but use a PCIe chip. It would've brought down PCIe costs over the years.

ComputerGuru
0 replies
12m

Ok, yeah, I agree. But the world of cost-cutting and penny-saving would never allow that - same reason FireWire lost out to USB. As passive and dumb peripherals as possible won out (for cheaper parts and faster time to market).

davkan
2 replies
21h8m

My TB3 mesh network shows interfaces as thunderbolt0 etc. This is on Linux using thunderbolt_net from the kernel. Latency is worse than regular twisted pair Ethernet.

bobim
1 replies
20h49m

Dang, that wouldn’t play nice for MPI then.

davkan
0 replies
20h41m

I was seeing 1-1.5ms latency using Linux bridges for the mesh. Not a huge issue for Ceph replication, but significantly more than a switched LAN. It may be possible to get it lower with routed instead of bridged, but my understanding is thunderbolt_net on Linux is not perfect in that regard.
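For reference, a rough sketch of the bridged variant on one node (interface names and addresses are made up; with three nodes bridging both of their links you form a triangle loop, so STP has to be enabled):

    ip link add name br0 type bridge stp_state 1   # STP breaks the triangle loop
    ip link set thunderbolt0 master br0
    ip link set thunderbolt1 master br0
    ip link set thunderbolt0 up
    ip link set thunderbolt1 up
    ip addr add 10.99.0.1/24 dev br0
    ip link set br0 up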

vineyardmike
0 replies
22h51m

I think it works for older thunderbolt too. Been years since I tested it though.

justinclift
6 replies
15h34m

Turns out the fastest way to move data between 2 modern PCs ...

With the caveat being that's only if you're not up for adding PCIe cards.

If you are ok with adding some PCIe cards, then you can transfer things a lot faster than 1GB/s. :)

kwanbix
5 replies
14h59m

Care to elaborate?

justinclift
3 replies
14h32m

Network cards that go a lot faster than 10GbE are common.

They're widely used (with many different types) in IT data centres, home labs, and probably other places too.

Heaps of them are on Ebay. As a random search just now for "Mellanox 25GbE" on US Ebay:

* https://www.ebay.com/itm/134435757546

* https://www.ebay.com/itm/355348422765

(there are hundreds of individual results)

Searching for "Mellanox 50GbE":

* https://www.ebay.com/itm/225021493021

* https://www.ebay.com/itm/233915360659

(less results)

There are older generation ones too, doing 40GbE:

* https://www.ebay.com/itm/305046322527

* https://www.ebay.com/itm/166350081025

(hundreds of results again)

With those older generation cards, some care is needed depending upon the OS being run. If you're using Linux you should be fine.

If you're running some other OS though (eg ESXi) then they might have dropped out of the "supported list" for the OS and not have their drivers included.

oso2k
2 replies
13h13m

I bought a bunch of cheap HP 40Gbps NICs [0] when they were $13, but they need PCIe risers to fit a full-height slot. They work fine in pfSense & Fedora.

[0] https://www.ebay.com/itm/333682185870

kridsdale1
0 replies
2h19m

My first job in 2007 had me plugging fiber in to 40Gbps backbone units. I was told the ports cost a million dollars each. Now $13. Amazing.

justinclift
0 replies
8h9m

It's amazing what you can do with some sheet metal and tin snips when there's a strong enough need for the bracket to fit a particular slot height. Homelab environment obviously. :)

---

Oh, if you have even a hobby grade CNC machine around, you can get a fairly professional level result:

https://forums.servethehome.com/index.php?threads/3d-printab...

hatefulmoron
0 replies
14h33m

I assume they're referring to getting a couple of NICs

phyzome
2 replies
19h45m

The connection shows up as a ethernet port

OK but what then? I've had ethernet ports on my computers since I can remember, and that hasn't magically allowed me to transfer data back and forth just by plugging a patch cable into both machines. What software is at work here?

bpye
0 replies
19h37m

I’m guessing if they’re using Windows it’s boring old SMB?

SloopJon
0 replies
19h2m

Depending on what you want to do on what O/S, batteries may or may not be included. I use Moonlight on a Mac to control a Windows 10 PC running Sunshine--lower latency than Remote Desktop (which I don't have anyway in the Home edition), and nicer looking than Parsec.

When you connect a patch cable directly (no crossover cable needed in the 21st century), you'll likely find that each system has a self-assigned IP in the 169.254 network.
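If you'd rather not rely on the self-assigned addresses, a minimal Linux-side sketch looks like this (interface name and addresses are made up; on Windows you'd set the same thing in the adapter's IPv4 settings). After that, any IP-based tool - SMB, rsync, iperf3 - just works over the link:

    # machine A
    ip addr add 10.99.0.1/24 dev thunderbolt0
    ip link set thunderbolt0 up
    iperf3 -s
    # machine B
    ip addr add 10.99.0.2/24 dev thunderbolt0
    ip link set thunderbolt0 up
    iperf3 -c 10.99.0.1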

samstave
1 replies
22h14m

WTF!

I have two identical machines that I need to do this with... lemme test it and see.

...

ARGs usb-3... so nopes. (How FN lame is that)

drewzero1
0 replies
17h20m

Indeed... I just got my first computer with a USB-C port and have been puzzling over what to do with it. Most of the cool tricks seem to require Thunderbolt, which it is not.

SloopJon
1 replies
21h21m

I've been playing with Thunderbolt networking over the past week with mixed results. I can get 16Gbps between a couple of Macs. Between a Mac and a PC running Windows 10 I get similar speeds in one direction, but less than 1Gbps in the other direction.

In terms of scaling this to multiple hosts as the author does, I've read that it is possible to daisy chain, or even use a hub, but it doesn't strike me as the most reliable way to build a network. For an ad hoc connection, though (like null modem cables of yore), it's a great option.

belthesar
0 replies
17h26m

I think reliability is a great metric to evaluate this on. I don't have a lot of experience with USB4/Thunderbolt networking, but as far as ring network principles go, when you have a network with only 3 nodes, a ring topology is also a fully connected topology. This means that connectivity between nodes should never fail due to the failure of a node. That screams reliable to me.

As far as points of failure, there's no additional hub/switch in between the devices, so you have a Thunderbolt controller on each device, two cables, and two ports. If a cable goes bad, so long as there isn't a silent/awkward failure mode, all three nodes can still talk to each other, at degraded speed. If a switch goes bad, the whole network is down, unless you start talking about redundant switch topologies.

To your point though, there does seem to be plenty of shenanigans with performance, especially between devices with different Thunderbolt controllers, that may make this less ideal. But IMO, that's more a question of whether you want to go with a more battle-tested topology, or are okay with a less battle-tested, but still highly performant and "simple" (we won't go into how bonkers the USB/Thunderbolt spec is) topology.

znpy
0 replies
23h6m

And What happen if you wire both ports together on the same PC?

A literal loopback interface? Two of them, most likely.

nickstinemates
22 replies
23h58m

Incredible. Traditional 10Gb network gear is very expensive to buy and to operate. $50 is a lot cheaper.

Palomides
21 replies
23h44m

not really true, a similar setup with 40GbE (three nodes, no switch) would run you less than $100 for PCIe cards and cables

hamandcheese
11 replies
23h33m

The only NICs I see in that price range are old Mellanox cards.

Intel NICs are 5-10x that price. I'm not sure why, but my suspicion is that it has to do with driver support and/or interoperability.

Hikikomori
6 replies
23h10m

Mellanox works fine.

hamandcheese
4 replies
23h1m

Gotcha. Why the price discrepancy then?

Hikikomori
1 replies
22h58m

Are you comparing new Intel cards to old mellanox cards on ebay? If not idk why, I have not compared them myself, some feature maybe? Cost doesn't always make sense either.

hamandcheese
0 replies
17h58m

I'm just comparing the prices I see when I search eBay for "40gbe nic" vs "40gbe nic intel", making no effort to compare features.

petronio
0 replies
19h14m

I can't answer definitively, but I was looking for SFP cards recently and the older cards don't really support ASPM. The cards themselves aren't power hogs, but they keep the CPU from going into lower states during idle.

The cheapest one I found that others reported had ASPM actually working was the Intel X710, and those are much more expensive than the ConnectX-3.

ApolIllo
0 replies
16h56m

I avoided older 10GbE NICs due to power consumption. Maybe that's why?

olavgg
0 replies
22h25m

And with Mellanox you get working RDMA/RoCE.

Palomides
3 replies
23h22m

I run mellanox connectx3 cards, they work immediately with no extra drivers on windows 10/11 and every linux I've tried

mellanox is/was quite good at getting code upstreamed

maybe I need to do my own blog post about my pile of computers...

thefz
0 replies
20h21m

I run mellanox connectx3 cards, they work immediately with no extra drivers on windows 10/11 and every linux I've tried

Own three, can confirm for Windows and Linux.

gregors
0 replies
17h4m

would love to read this!

TacticalCoder
0 replies
20h27m

maybe I need to do my own blog post about my pile of computers...

Yes, several of us would love to read about that! I haven't switched to 10 Gbit/s yet...

nickstinemates
8 replies
23h28m

Can you provide links? I'll upgrade my home lab from 10g to 40g immediately at that price.

willglynn
3 replies
22h58m

In 2024 I would suggest deploying 2x25G instead, via e.g. MCX4121. Pricing is similar (<$30 NICs), but:

* 2x25G throughput is higher than 40G,

* 25G latency is lower than 40G,

* you can use 25G ports as 10G ports, and

* you can use DACs to connect 4x25G <=> 100G

That last point is particularly relevant given the existence of switches like the Mikrotik CRS504, providing 4x100G ports on 25W.

Palomides
2 replies
22h49m

those are all reasonable points, if I were doing mine again I would spend a little more and go up to 100gbe

if you run all older mellanox gear the cx3 can do the kinda nonstandard 56gbe as well

bombela
1 replies
20h52m

What are you doing that you need 100gbe?

I am still on 1gbe... I guess I don't transfer anything bigger than a few GiB time to time.

nickstinemates
0 replies
11h11m

No individual connection other than maybe a central storage server needs 100GbE, at least for me, but a 100GbE backplane is good for a lot of 1GbE PoE devices, as an example. With residential fiber/coax reaching 5Gb, 1Gb is not enough.

Palomides
2 replies
22h47m

cx353a and cx354a, prefer the fcbt models, but you can reflash them all to fcbt

crotchfire
1 replies
22h0m

What does fcbt stand for and why would I want it?

Palomides
0 replies
21h40m

the same hardware was sold at different prices with different capabilities enabled based on the exact model variant, stuff like higher speeds and infiniband support

you can see them all in the datasheet, I believe fcbt is the one with all the stuff enabled
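For reference, the cross-flash is usually done with Mellanox's flint (or the open-source mstflint build); a rough sketch, where the PCI address and firmware file name are placeholders and burning the wrong image can brick the card, so treat it as an outline rather than a recipe:

    # check the card's current PSID and firmware version
    flint -d 04:00.0 query
    # burn an FCBT image, allowing the PSID (model identity) to change
    flint -d 04:00.0 -i fw-ConnectX3-rel-MCX354A-FCBT.bin --allow_psid_change burn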

Hikikomori
0 replies
23h9m

Mellanox connect-x 3 probably.

rahimnathwani
17 replies
23h45m

  average residential electricity rate is 15.34 cents per kWh
This didn't seem right, as I pay more than double that here in San Francisco. (I calculated $0.35/kWh by taking the total I paid for electricity generation and delivery and dividing it by the number of kWh consumed.)

The linked page cites data from over a decade ago (2012).

krallja
4 replies
23h42m

Mine is currently 9.4¢/kWh, no TOU, net metered. PG&E is the problem, not the data.

rahimnathwani
1 replies
23h18m

I was responding to what OP said about California rates.

krallja
0 replies
18h17m

My mistake, sorry. Still seems atrocious to me. It’s not like we have better power generation equipment here in NC.

throwaway-blaze
0 replies
23h24m

Where in CA roughly is this?

delfinom
0 replies
23h28m

32 cents / kWh here in NY.

jtriangle
4 replies
23h14m

That whole section is BS in general. The server linked isn't going to use 1kW of power ever, not on the worst of days. The only real point that they're making is that "it's loud", which is true, very true, but it's designed to run in a rack away from humans.

While their solution is more livable for them, the hardware is vastly inferior for actually hosting serious services on, and they don't seem to understand that because they're software guys who're getting away with it.

g8oz
2 replies
22h8m

Given the electricity rates he mentioned, what would you estimate the monthly running cost to be?

tyrfing
0 replies
21h28m

Depends on load. Somewhere around $20-40? Assuming idle around 80-120w, and another 100w for the e-waste drives in their pictured listing.

alphabettsy
0 replies
21h31m

His numbers are probably 8x higher than actual. So maybe $8-15/mo for electricity for one of the old servers, not $100.

szundi
0 replies
21h14m

It will eat much more than 80W though

pstrateman
2 replies
21h49m

The average rate is only ~15 cents per kWh because of places like SF that are effectively just scamming the consumers.

Electricity in free markets is more like 5-8 cents per kWh.

NavinF
1 replies
21h36m

Yep, electricity prices in CA are set by the gov't. Market rates are typically 10 cents/kWh.

Cheer2171
0 replies
17h8m

Technically the CA government only sets a maximum price, not prices themselves. Power companies can sell at market rate if they want. But that same CA government also allows monopolies, so the price ceiling becomes the floor.

belval
2 replies
23h31m

This didn't seem right, as I pay more than double that here in San Francisco.

Their data might be a bit outdated but the December 2023 average is $0.168/kwh according to [1].

[1] https://www.bls.gov/regions/midwest/data/averageenergyprices...

rahimnathwani
1 replies
23h22m

OP was talking about California, not the entire US.

The page you linked shows four California cities, each with Dec 2023 rates over $0.27/kWh.

JumpCrisscross
0 replies
23h16m

Wow. I guess I won't get mad at my guests for running the heat 24/7 at 5¢ per kWh.

mlyle
0 replies
23h43m

https://www.globalenergyinstitute.org/2022-average-us-electr...

Within the state, there's huge variation. The average is around 25 cents. E.g. if you live in Santa Clara, you pay 16.6 cents per kilowatt hour to Silicon Valley Power, while all surrounding cities pay 45 cents-ish.

https://www.siliconvalleypower.com/residents/rates-and-fees

riku_iki
9 replies
23h59m

15c/kwh estimate in California looks very far from the current situation.

jms703
8 replies
23h54m

Yeah, I think the website the author linked to has old data. It's 3x that in California.

kube-system
7 replies
23h50m

It is 26.72 cents per kwh as of the latest official numbers which are from October 2023.

https://www.eia.gov/electricity/monthly/epm_table_grapher.ph...

riku_iki
6 replies
23h43m

Not clear who those "ultimate customers" are.

Also, in the case of server hardware, the consumption is in excess of the basic tier, so it will hit those 50c/kWh.

kube-system
5 replies
23h41m

Not clear who are those "ultimate customers".

The first column is residential customers.

riku_iki
4 replies
23h38m

It is residential "ultimate" customers.

My current bill for the initial tier is: 42c/kWh delivery + 15c/kWh generation, and even higher beyond the initial consumption.

I guess I am not an "ultimate" customer.

kube-system
3 replies
23h35m

'ultimate customer' means the customer using the power, which you are.

The reason you don't pay the price listed on that page is because you are only one customer. You are not the average customer.

riku_iki
2 replies
23h31m

The fact that I live in the center of a densely populated area and am serviced by a major provider who also holds a monopoly makes me think there is a chance that the table's numbers are just not what you think. They may include just generation cost, and not delivery, or there is something about the "ultimate customer" definition.

Also, there have been several hikes since October, in my understanding.

kube-system
1 replies
23h13m

I read Form EIA-861, which is where the data comes from.

Looks like sales to "ultimate customers" means it excludes electricity that was not sold, electricity that was sold to resellers, energy lost, etc.

The form also collects information about revenue from delivery "and other related charges"

42c/kwh delivery sounds insane. I couldn't find much data about average delivery rates, but I plugged in a few counties here and it looks like many areas have delivery rates significantly lower than that: https://www.cpuc.ca.gov/RateComparison

riku_iki
0 replies
20h56m

I plugged in a few counties here and it looks like many areas have delivery rates significantly lower than that: https://www.cpuc.ca.gov/RateComparison

That page says the numbers are a year old. I entered my zip there and it says delivery is 20c/kWh for me.

If you don't know the story: PG&E is the delivery monopolist in CA, with the exception of some places with local power plants. It was found guilty of causing wildfires, lost in court, and needs to pay $XXB in damages, which it is now shifting to customers through multiple rate hikes in 2023, with more coming in 2024.

42c/kwh is initial tier cost, whatever is over limit will be charged at 50c/kwh for just delivery.

nabilt
9 replies
20h20m

This is a pretty cool solution. I didn't know the capabilities of USB4 before this.

The comparison with the Dell r630 power numbers got me interested since I just purchased a Dell r430 to host my site so I decided to benchmark mine.

Specs:

  * 2x Xeon E5-2680 v3 (same CPU)
  * 64GB RAM
  * 2x power supply (can't remember if it's 500W or 750W each and too lazy to look)
  * 1 SSD & 1 7200 RPM HDD
  * Ubuntu server 22.04.3 LTS
Using a Kill-A-Watt meter I measure ~100 watts after boot. Running sysbench (sysbench --test=cpu --threads=12 --cpu-max-prime=100000 --time=300 run) I get up to ~220 watts.

If my calculations are correct that's 72 kWh per month, or $11.05 per month, at idle:

  0.1 kW * 24 hours * 30 days = 72 kWh
  72 kWh * 15.34 cents/kWh = $11.05 
and 158.4 kWh, or $24.30 per month, under load:

  0.22 kW * 24 hours * 30 days = 158.4 kWh
  158.4 kWh * 15.34 cents/kWh = $24.3
I'm not sure of OP's use case, but these numbers are probably more realistic than using the max wattage of the power supply for most people. I will still be hosting in a co-location for the reliable internet and so I can sleep without the sound of a jet engine taking off. Those fans are loud!

justsomehnguy
8 replies
19h26m

I'm not sure of OP's use case

Lack of understanding. Even comparing 65W to 1000W should have rung some bells, but.

but these numbers are probably more realistic

Almost. It depends on the load (hardware) and load (software). As someone who manages a fleet of 720/730/630s: a standby server eats around 150W, and under load goes up to 300-350W, depending on the package.

Using a Kill-A-Watt meter

You can use built-in measuring in iDRAC.

nabilt
2 replies
17h33m

Thanks, good to know. I may need to request a 3A 1U in that case.

I didn't know iDRAC could measure real time power usage. Pretty amazing.

buffington
1 replies
16h30m

I inherited a few R420s from my work, and the coolest thing about them is the iDRAC. I'm not an SRE or anything of the sort, so I don't get to see stuff like that much, but the utility of the iDRAC is fantastic.

The machines each have 192GB of ram, so I thought I'd set them up as LLM inference machines. I figured that with that much ram, I could load just about any model.

Then I discovered how slow the CPUs on these older machines are. It was so utterly slow. I have a machine I bought from Costco a few years ago that was under $1k and came with an RTX 3060 with 12GB of GPU ram. That machine can run around 20+ tokens per second on 13B models (I actually don't know - I stream the text, and cap it at 9 tokens per second so I can actually read it).

The R420? Its tokens per second were in the 0.005 to 0.01 range.

So, yeah, not a good CPU for that sort of task. For other stuff, sure. I thought I'd set up a small file server with one instead, but the fans are so jet-engine loud that it's intolerable to have in any part of the house, even when managing fan speeds with software.

oso2k
0 replies
13h21m

CPUs are slow in comparison to GPUs for lots of tasks. Comparing a 10 year old CPU vs. a 4 year old GPU only makes that comparison "more offensive." That said, pair your R420 with something like an RTX A2000 and you'll have a much fairer fight.

justinclift
2 replies
15h38m

You can use built-in measuring in iDRAC.

Does anyone know if ~modern HP iLO can show power usage too?

Looking through my Gen8 HP Microservers just now, I'm not seeing power usage info. :(

"Power Information" is just showing the state of the power supplies (aka "OK", "Good, In Use") and not much else.

justsomehnguy
1 replies
13h42m

Since iLO3 (~2010), and an even simpler one (as in momentary readings only) even on iLO2.

It's just that your MicroServers are gutted so they don't compete with the enterprise gear and the ML line (probably).

Try:

SNMP

IPMI

HPE Agents (Linux, Windows)

REST/RIBCLI interface for iLO (https://support.hpe.com/connect/s/softwaredetails?language=e...)

It certainly has some reading of the power, it just doesn't display it in the interface.

At least https://www.storagereview.com/review/hp-proliant-microserver... says it has a Power Meter menu entry?

Also check that you are really running updated firmware for the iLO.
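For the IPMI route, something like this tends to work (address and credentials are placeholders; DCMI power reading support varies by BMC):

    # over the network against the iLO/BMC
    ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret dcmi power reading
    # or from the host OS, if the ipmi drivers are loaded
    ipmitool sensor | grep -i -E 'watt|power'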

justinclift
0 replies
8h3m

Thanks. The Power Meter menu entry is there, but clicking on it just gives a blank page with this text:

    The Power Manager is unavailable for this configuration.
That's with iLO "Advanced" too. :(

Haven't bothered with setting up SNMP nor IPMI on the Microservers though, as it seemed like a bunch of effort for no real benefit (for a homelab). The HPE agents thing is an idea. I'll have to take a look at what that needs, can hook into, etc. :)

These Microservers are definitely running the latest firmware, and the latest iLO firmware too (released last year).

evanreichard
1 replies
17h11m

Another data point - I'm running about 500W for my small rack. NAS (20 x 3.5" HDDs), ICX6610 (3 PoE APs), 2 x R630, and some small devices.

According to iDRAC, my R630s are drawing almost exactly 100W each. Each with about 75 pods (k8s nodes).

oso2k
0 replies
13h1m

My R730 with 14x 2TB NVMe, 8x 6TB SAS HDDs, 2x E5-2680 v4 eats 250W.

stoltzmann
7 replies
22h53m

While the machine itself isn’t expensive, it is not cheap if you consider the cost to operate. A machine like this is very power-hungry. Suppose the power consumption is 1000W per hour.

Alright, a server will be more power hungry than e.g. your desktop, but... This specific Dell R630 has 2x 750W PSUs in it. That 750W is the maximum rating of one power supply, and there are 2 of them for redundancy - not for increased power intake. That server will run at 750W maximum - but that is the absolute, absolute maximum power it should draw. It's when you have all the rails loaded to the limit and are running the server into the ground.

A more realistic scenario would be e.g. 100W or so on average.

The worse problem if running this server at home would be the terrible small high-RPM fans they have in 1RU servers. The loud high-pitched whine of them will drive you nuts. A better idea would be to get either a lower-power 1RU pizza box, or something larger that can take larger fans - replacing the fans with something quieter and adjusting the fan controller to spin at lower RPMs.

1000W per hour

This is just wrong.
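On the fan-controller point: on many Dell PowerEdge generations of that era you can take manual control of the fan duty cycle over IPMI with the well-known raw commands (values below are illustrative; do this at your own risk and keep an eye on temperatures):

    ipmitool raw 0x30 0x30 0x01 0x00       # disable the automatic fan curve
    ipmitool raw 0x30 0x30 0x02 0xff 0x14  # set all fans to roughly 20% (0x14 = 20)
    ipmitool raw 0x30 0x30 0x01 0x01       # hand control back to the iDRAC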

mysteria
1 replies
20h39m

I run rack servers at home and I 100% call BS on that 1000W number. My older Ivy Bridge Xeon servers only consume 50W on idle, 100W with some load, and <200W with all cores fired running P95 or something. The noise is a concern but it's not an issue if you put it in your basement.

buffington
0 replies
16h20m

I couldn't do it with a single R420. When that first booted up in my basement - good lord I wasn't prepared. There wasn't a soul in my house who wasn't suddenly in there with me trying to understand what had just happened.

Adjusting the RPMs in software helped a little, but even at the lowest speeds, it was a hovercraft. It had to go.

strobe
0 replies
21h37m

The noise is actually not that bad with some BIOS configuration tricks - I used to run a Dell R720 and R420 together with ~20 HDDs near my desk, and most of the day the noise was close to a non-silent gaming PC/workstation. The only problem was if there was some need to reboot; in that case it just blows the fans at full power and it's as loud as a vacuum cleaner that you for sure don't want to use in the middle of the night.

november84
0 replies
21h56m

In addition, you have the ability to chain the PSUs for increased total power at the loss of redundancy.

mmastrac
0 replies
21h18m

I swapped out the screamer fans for Noctuas on a 1U server and it's significantly better. Needed a custom mini PCB and a BIOS setting to ignore fan speed feedback, but it's extremely quiet and runs cool.

I don't think it uses much power when idle either. I think that rack servers being expensive is a myth.

livueta
0 replies
22h28m

Yeah, I have a bunch of fully loaded R720XDs and with 12 3.5" SATA drives they sit at about 120-140W, both according to the lights-out console and wall wart meters.

justsomehnguy
0 replies
19h9m

and there's 2 of them for redundancy - not for increased power intake

Ahem:

    System Headroom
        Statistic       Reading 
        Instantaneous   1528 W | 5215 BTU/hr 
        Peak            1346 W | 4594 BTU/hr
        
That's R720 with dual "PWR SPLY,750WP,RDNT,FLX" PSUs. You can configure them for the redundancy mode and you can cap the maximum power per PSU:

    Hot spare is a power supply feature that configures redundant Power Supply Units (PSUs) to turn off depending on the server load. 
    *This allows the remaining PSUs to operate at a higher load and efficiency.*
    This requires PSUs that support this feature, so that it quickly powers ON when needed.

    Redundancy Policy:

         Not Redundant — In this mode, failure of a single PSU can power off the system.

         Input Power Redundant — In this mode, the system is functional in the event of failure of a PSU input circuit,
         provided the PSUs are connected to different input circuits. This is also called AC redundancy. 
> This is just wrong.

But yes, these guys idle at 150W and have around 300W under load with dual CPUs and a lot of RAM.

Fb24k
7 replies
23h44m

I still remember connecting two MS-DOS computers together with a parallel cable; that was the easiest way to do "large" file transfers for a while (used a program called LapLink, iirc)...

myself248
5 replies
22h26m

You could also run PLIP over the same cable. It's like SLIP (which is like PPP) but much faster.

Where Laplink isn't really a network, just a file transfer thing that requires Laplink to be running on both PCs, PLIP is a network driver that lets you do all the usual network things over the connection.

And since PCs can have up to 3 parallel ports before things start getting stupid, it's pretty straightforward to have a row of machines with PLIP links going both ways, bridging or routing the interfaces. Or, do PLIP-SLIP-PLIP-SLIP without adding any ports, and you could have a functional-but-brittle-and-slow network for pennies.

TacticalCoder
3 replies
20h36m

You could also run PLIP over the same cable.

I was running that in the nineties: my main desktop, running Linux, and an ultra old, ultra crappy laptop, running Linux too. They'd be connected using PLIP, and the desktop, being more powerful, was running its own X server but also applications that displayed on the laptop's X server.

So my brother and I could both be using Netscape to surf the net (we'd call it that back then) at the same time, over the 33.6 modem connection.

It was really easy to run PLIP, and it saved me the trouble of trying to get a network card running under Linux on my desktop and, most importantly, the trouble of trying to get the PCMCIA crap to work on my laptop.

Fun times...

P.S: and, yup, back then laptops had a full parallel port!

rubatuga
1 replies
20h20m

Using slip over minimodem over FM receiver/transceiver was quite an interesting experiment. With the low power exemptions, you can design your own radio networking protocol without FCC approval. They may require the FM radio signals to be music/audio, in that case, have you heard of noisecore?

TacticalCoder
0 replies
16h9m

They may require the FM radio signals to be music/audio, in that case, have you heard of noisecore?

Nope but I would loved that back in the days: we created a LAN between our house and the (attached) neighbors' house (so we could play Warcraft II against each other) but... We couldn't create a LAN with the neighbor across the street!

myself248
0 replies
17h23m

The Xircom PE3 was my other favorite way to get a laptop online, also through its full parallel port.

Once around 2005-ish, I scored an 802.11b client-bridge real cheap because .11g stuff had been out a while. Velcroed it to the lid of my Zenith Supersport, and made a ten-inch ethernet cable to connect it to the PE3. An unholy abomination allowed both units to tap power from the keyboard port; the less said about that, the better.

What felt like thirty hours of hair-pulling later, I had a 720k DOS boot disk with packet drivers and a telnet client, and I could MUD from my lap, wirelessly. Ahh, the sweet smell of useless success.

Then like an idiot, I sent all that stuff to the recycler around 2008.

LargoLasskhyfv
0 replies
14h53m

I did this with a https://en.wikipedia.org/wiki/Breakout_box#/media/File:KL_Br... in between. With ECP-DMA, or some such. Red-green blinking lightshow at about 4Mb/s! :-)

Edit: Yes, I'm aware that box says RS-232, but with gender-changers, the dip-switches and some cable-bridges, you could abuse it for 'parallel'.

farkanoid
0 replies
23h11m

Laplink! That part of my brain hasn't activated in decades...

My favourite was bootstrapping using 'COPY /B COM1: C:LL3.EXE' to get the Laplink executable to the target machine over Serial, when you didn't have a spare floppy.

jms703
6 replies
23h55m

Neat.

I stopped reading at "California’s average residential electricity rate is 15.34 cents per kWh" and checked my electricity bill. Here in California, I'm paying 40.82 cents per kWh. The website https://www.electricitylocal.com/ seems wildly off.

Edit: I did read the entire article. Just stopped to check what I was paying.

alchemist1e9
3 replies
23h53m

At that rate how are there any data centers in California?

kube-system
1 replies
23h38m

Industrial and commercial users of power almost always get different pricing than residential rates in the US.

alchemist1e9
0 replies
23h19m

Different yes but not 4x different. I know in Illinois standard generic commercial is around 9 cents and residential is like 12 cents.

diggan
0 replies
23h38m

Not everyone has the same contracts and contacts.

zamadatix
1 replies
23h49m

Probably just older data based on looking at the trends https://data.bls.gov/timeseries/APUS49E72610?amp%253bdata_to... and https://data.bls.gov/timeseries/APUS49A72610?amp%253bdata_to...

That said, mentioning it is one thing (it makes the case for low power devices even stronger), but why stop reading after the first error you identify? That doesn't guarantee you only get accurate information; it just guarantees you'll miss out on good information, like the cheap high speed connectivity the article is actually demonstrating.

jms703
0 replies
23h37m

I meant that I stopped reading to check.

I read the entire article.

arghwhat
6 replies
20h42m

Nit: This is not a mesh. With only 3 nodes, it's not even a ring but just a direct connection between servers.

A "real" ring is formed when you have more than 3 nodes and so some destinations require more than one hop. A mesh implies a network formed by somewhat arbitrary point-to-point connections with multiple possible routes between two points.

tssva
3 replies
19h46m

This may not be a ring, but a “real” ring doesn’t require more than 3 nodes. A ring can be formed with as few as 2 nodes.

If routing was enabled this would definitely be a mesh network. It would be a full mesh network where every node is directly connected to each other and a secondary path through another node is available should the direct path fail.

arghwhat
2 replies
8h25m

I was speaking from the perspective of a network, where you'd want to distinguish between a direct connection between two computers, and a network where you need to cooperate for routing.

This is not a ring because it is not configured as such: Rather, each network card has exactly one peer, and there is no knowledge of other peers, making it no different from just a direct connection between two computers.

If you had redundant paths, it would be a ring yes. A 3-node routable ring is in theory a special-case of a fully-connected mesh, but you do not call ring networks mesh networks. Mesh is usually reserved for when you truly mean to support arbitrary, disorganized network topologies

See https://en.wikipedia.org/wiki/Mesh_networking

tssva
1 replies
6h14m

A network where all nodes are fully connected to every other node and data can be routed through secondary indirect paths if the primary direct path fails is absolutely a mesh network.

And a 3 node ring is absolutely still a ring network.

arghwhat
0 replies
4h23m

A network where all nodes are fully connected to every other node and data can be routed through secondary indirect paths if the primary direct path fails is absolutely a mesh network.

Yes, that is a classical partially connected mesh because it functions without a direct connection between all nodes. That you started out fully connected does not matter.

If the network relies on direct connection between source and destination, it is not a mesh.

And a 3 node ring is absolutely still a ring network.

Only if it can route when you break the loop, and only if you extend it by inserting more nodes into the ring.

The example in this article does not route, which is why it is absolutely not a ring - If you disconnect server A and B, they cannot talk even if they both connect to C. There are just 3 entirely independent point to point networks.

toast0
1 replies
20h24m

Full-mesh typically means each node connects directly to each other node (for some value of directly), so IMHO, a three node system with each node connected to the other two fits the name. I do agree that it's not very interesting with only three nodes.

arghwhat
0 replies
8h18m

A network where every node has a direct connection to every other node is a point-to-point network. Yes, it is a special case called a fully-connected mesh, but these networks are rarely made (a fully-connected mesh of 32 servers requires 31 NICs in each server and 496 cables) and it is not a common term.

What is usually described as a mesh network (technically: "partially connected mesh") is a network where every node has a route to every other node, and that every node may have a direct connection to any number of nodes, but where there is no guarantee that any two nodes have a direct connection and may need to route over other nodes. The ability to handle this sort of completely arbitrary network topology is the core to mesh networks.

See https://en.wikipedia.org/wiki/Mesh_networking. There's also the old WiFi mesh spec (https://en.wikipedia.org/wiki/IEEE_802.11s, what the OLPC used), not to mention Zigbee and Bluetooth Mesh.

toyg
5 replies
23h56m

> I don’t understand why it can only hit 11Gbps [...] other people building similar networks were able to hit 20Gbps. [...] So it could simply be that the machine only supports up to this speed [...] it might compete with [Intel's] network controllers, so they capped the speed

I'm bad at low-level networking, but could it be a routing issue? Effectively every machine is also a router, so there might be some waste going on.

adrian_b
1 replies
21h13m

IIRC the specification of Thunderbolt from Intel (which was inherited by USB 4) limits the Ethernet emulation mode to 10 Gb/s.

Why they set the limit so low is not known, but the supposition made by another poster that this is a market segmentation feature may be right, because such policies have always been typical for Intel.

wmf
0 replies
19h22m

This post is getting almost 12 Gbps and someone mentioned getting 16 Gbps on Mac so I don't think there's any intentional limit. Thunderbolt is really 22 Gbps anyway and networking over Thunderbolt is just inefficient.

r1ch
0 replies
23h31m

It's measuring single TCP connection performance which is already difficult to optimize. With jumbo frames and tuned buffer sizes I'd expect it to get higher, but it will likely be serialized to a single core's worth of CPU. Using multiple connections should give a better representation of available link bandwidth.
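Concretely, the two knobs mentioned look something like this (interface name and address are placeholders; the MTU has to be raised on both ends):

    ip link set dev thunderbolt0 mtu 9000   # jumbo frames
    iperf3 -c 10.99.0.1 -P 4                # several parallel TCP streams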

arghwhat
0 replies
23h44m

The machines are not configured as "routers" in that example. It is just a point-to-point layer 2 network, where each network link serves exactly one possible destination. As simple as it can be.

DannyBee
0 replies
18h1m

It's more likely to be offloads and processing time - none of these have any of the offloads a regular network card does. Even if the CPU can handle it, it increases latency, and with a single stream like this, that affects throughput badly.

At 10 gigabits/second, with 1518-byte packets, it's 823,451.91 packets per second.

So in a single stream, you have to process each packet within 1.21microseconds to keep up

At 20gbps, you have 600nanoseconds per packet.

There are also almost certainly timing/synchronization issues between different stacks like this. It's horribly inefficient.

Network cards achieve >10gbps in part by offloading a lot of stuff.

Even if the CPU can handle the load, just going through different stacks like they are may add enough latency to throw single stream throughput off.

The posited reason of "not compete with network cards" is beyond stupid. It can't because you can only do a few meters this way, max.

That's not interesting at all. 25gbps network cards are cheap, and for a few meters, a $10-15 25gbps DAC will suffice.

For more than that, 25gbps transceivers are 25 bucks a pop for 25gbps-SR, which will do 100 meters.

With none of the problems of trying to use longer thunderbolt cables, too.

Intel's 25gbps SKUs are not where they make their money anyway.
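An easy way to see the offload gap mentioned above is to compare the feature list of the Thunderbolt interface against a real NIC (interface names are examples):

    ethtool -k thunderbolt0 | grep -E 'segmentation|checksum|scatter'
    ethtool -k eth0 | grep -E 'segmentation|checksum|scatter'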

thekombustor
4 replies
23h28m

10Gbps mesh for only $48, assuming you already have very modern (expensive) USB4 capable machines.

The article also seems to make the assumption that a server would be pulling 1000W all the time, 24/7, which is rarely the case (of course, I can't comment on what their workload might be, but it would be quite unlikely)

Still, I like the direct networking being done here. But saying "you can build a 10Gbps mesh for $50" when you have 3x $750+ machines seems a bit disingenuous. It is not unreasonable to get 10Gb SFP+ NICs on ebay for ~$50 a pop ($150 for 3)

E39M5S62
3 replies
22h4m

$50 would be roughly all-in for the SFP+ NIC, two optics and a generous length of multi-mode fiber. I just did this, and here's the breakdown of my costs from eBay:

1x Juniper EX3300-24p - $75

7x SFP+ optics - $7/each

3x Intel X520-DA2 NIC - $20/each

4x 3 meter OM3 LC-LC fiber - $6/each

1x 30 meter OM3 LC-LC fiber - $24

-----

Total: $232

The EX3300-24p has 24x 1gb copper ports with PoE+ on them, and 4x SFP+ ports. If you need more SFP+ ports you'll want to find a different switch - but for a small multi-use home network the EX3300-24p nicely matched my requirements.

justsomehnguy
2 replies
19h1m

If you need more SFP+ ports you'll want to find a different switch

CRS309-1G-8S+IN, Suggested price $269.00

> Desktop switch with one Gigabit Ethernet port and eight SFP+ 10Gbps ports

https://mikrotik.com/product/crs309_1g_8s_in

E39M5S62
1 replies
16h44m

Mikrotik switches/products are pretty intriguing. Do you have any experience with that model? I'm interested in how stable it is and if the known bugs are anything to be concerned about.

justsomehnguy
0 replies
15h51m

Nope, but if you saw/used one Mikrotik then you saw them all.

Personally I have a love/hate relationship with MT, with a little love and a lot of hate, but at their price range they are unbeatable and work 99% of the time.

https://www.servethehome.com/mikrotik-crs309-1g-8sin-review-...

mrkstu
4 replies
23h54m

On actual network gear you'd typically drop the interfaces into a subnet/broadcast domain, but the blogger here has put everything in /32s and it looks like it's creating a peer connection without a shared domain.

Any particular advantage/disadvantage to doing it this way?

ComputerGuru
2 replies
23h4m

It doesn’t scale. Number of connections/hardware is exponential to the number of nodes.

cma
1 replies
20h9m

Isn't it just N^2?

ComputerGuru
0 replies
19h58m

Absolutely. Sorry, I meant quadratic but had a brain fart.

dfox
0 replies
22h35m

In this case the primary reason for configuring it as point-to-point links is that it matches the hardware topology. In theory there is a standards-compliant way to create an Ethernet-style network from this, but there are two critical software components for that that are not implemented in mainline kernels (namely raw Ethernet over TB and SPB).

The main disadvantage inherent in the physical topology is that you are always going to do switching/routing decisions on the CPU. But as long as you use that as a virtualization platform or run k8s on that you are going to do that anyway and the additional overhead is probably irrelevant. (This assumes the full mesh topology, which with this hardware is not scalable over 3 nodes, might be to like 5 with somewhat more expensive consumer grade HW and is not really scalable to more than 8 nodes due to the sheer amount of cabling required)
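For reference, the /32 peer style the blogger uses maps to something like this per link (addresses and interface name are made up):

    # node A, on the link that goes to node B
    ip addr add 10.0.12.1 peer 10.0.12.2/32 dev thunderbolt0
    # equivalent: a /32 on the interface plus an explicit host route to the peer
    #   ip addr add 10.0.12.1/32 dev thunderbolt0
    #   ip route add 10.0.12.2/32 dev thunderbolt0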

bhouston
4 replies
23h55m

I saw this mini-computer recently, the UM790 Pro, and I almost bought one as well. Its CPU, the AMD Ryzen 9 7940HS, has a rating of ~30,000 on the CPU Benchmarks page (https://www.cpubenchmark.net/high_end_cpus.html), which is quite high-end for the size/cost.

The only caveat is that I am running a 10Gbit ethernet network at home and it wasn't that costly to set up. A 10GigE switch costs around $500 CAD right now and that is all you need.

Liftyee
3 replies
21h54m

A 10GigE switch costs around $500 CAD right now and that is all you need.

Surely your machines all need 10GbE NICs as well? Admittedly my hardware isn't the newest (2.5GbE at most), but from a quick search 10Gig PCIe cards are around $150 each. Meanwhile, 10-20 Gbps USB 3.1/3.2 is reasonably common already. Though using Ethernet still has many other advantages over running USB-C cables everywhere.

robes
0 replies
15h18m

server-grade 2x SFP+ 10G NIC pulls are pretty easy to pick up from eBay for about $20 each

With a multigig capable SFP+ module it can handle 2.5G/5G copper as well.

bhouston
0 replies
20h47m

My Mac Mini M2 has one built in. My other machines, yeah, I did buy some, which go for around $100 CAD on Amazon (e.g. the TX401). Ethernet is advantageous because my house is wired, so I can have my switching in a central location.

adrian_b
0 replies
21h17m

There are also adapters from USB4 to 10 Gb/s Ethernet, for the case when it is desired to use a switch or long cables.

Sweepi
3 replies
23h42m

8 Zen 4 cores don't beat 16 Zen 2 cores in multicore. Either Geekbench 6 does not utilize all cores, or the 3950X setup is borked.

wtallis
0 replies
23h19m

Geekbench 5 pretended that all multi-core workloads were embarrassingly parallel by running N independent copies of the single-core test across N cores, with zero communication between threads. Geekbench 6 takes a more realistic approach so Amdahl's Law applies and it doesn't scale linearly with more CPU cores.

jltsiren
0 replies
23h26m

Geekbench 6 multi-core is a single-task benchmark. It runs one task at a time, using multiple threads when possible. This makes it scale worse and favor single-core performance more than benchmarks that run multiple independent tasks in parallel.

diamondlovesyou
0 replies
23h40m

GB6 will use Zen 4's AVX-512, which Zen 2 doesn't support.

Havoc
3 replies
21h58m

Would it not make sense to put the k8s network on 2.5GbE and give the three nodes a USB4 connection to a storage NAS instead? Assuming you can find a NAS with 3x USB4, I suppose.

alphabettsy
2 replies
21h28m

A NAS typically doesn't use or support connecting to a PC over USB; that would be DAS (direct-attached storage). But even then, they don't support connecting to multiple PCs simultaneously.

robes
0 replies
15h17m

Assuming the NAS can do the same networking over USB4 that the author used, it could work.

Havoc
0 replies
17h46m

The PCs in the article are each connected to multiple other devices, so it seems possible at least in principle.

I don't see why the same wouldn't be possible for a NAS. Stuff like TrueNAS can serve across multiple interfaces.

z3t4
2 replies
23h42m

I suspect there will be some extra network latency using USB-C compared to an RJ45 hardware switch.

dfox
1 replies
22h53m

It is almost certainly the other way around. Even a 1000BASE-T PHY is a complex block of analog magic that draws significant power and adds measurable latency over a direct SGMII/optical link. In comparison, USB4 is even more directly connected to the CPU than the whatever-MII interface of a PCIe NIC.

johnwalkr
0 replies
22h27m

For a while at work I was trying to use ethernet for embedded stuff because it seemed more modern than CAN, I2C, etc. I couldn't figure out why so few microcontrollers support it. It turns out just normal copper 100Mbps ethernet uses about 0.25W per port (0.5W for one point-to-point ethernet link), so it doesn't make sense for embedded applications. Gigabit uses more but I forget how much more.

theogravity
2 replies
21h22m

Given California’s average residential electricity rate is 15.34 cents per kWh

In the Bay Area it's 2.5x-3x that rate.

I stopped using my rack server for that reason. I was dumb and didn't consider the electricity costs when buying mine at the time years ago. It sits around collecting dust now.

The author's alternative idea is really good considering 8W idle and 80W peak, and I doubt it hits peak for what the author wants to do.

matmatmatmat
1 replies
21h13m

It's actually even worse in SDG&E territory. Generation alone costs around 32 cents per kWh, and "transmission" and "distribution" add roughly twice that again. The end result is about $1.00/kWh.

thinkerswell
0 replies
17h47m

I’m convinced SDGE is the most hated company in the country. The southeast is so much cheaper for electricity.

myself248
2 replies
22h29m

I recall reading some networking books that mentioned interesting ancient network structures a long time ago, such as ring topology networks or daisy chain networks.

IP-over-SCSI was great, you could throw 8 PCs on one SCSI chain. Put two SCSI controllers in each machine, do rows-and-columns, and you could have 64 hosts a maximum of 2 hops from each other, at U320 speeds, in the 1990s.

https://www.linuxjournal.com/article/2344

Imagine a beowulf cluster of hot grits! Er, sorry...

yonatan8070
1 replies
8h10m

IP-over-SCSI

I've never heard of it, but given that it's possible, could you theoretically do IP-over-SATA?

myself248
0 replies
1h36m

It's been talked about but as far as I'm aware, nobody's ever gotten it working:

https://lore.kernel.org/linux-ide/4D6A5B72.6040600@teksavvy....

robocat
1 replies
23h35m

The full mesh is three mini-PCs where each PC connects to the other two by USB4; unfortunately the image crops out the USB4 cable that links the top PC to the bottom PC, which makes it a bit unclear.

langarus
0 replies
23h20m

It's linked in the article. It's this cable: https://www.amazon.com/dp/B0BLXLSDN5?th=1

mcoliver
1 replies
23h15m

Re speed, it looks like the cables are the right ones. It would be nice to find a wiring diagram of the motherboard to see how the PCIe lanes are allocated, but that's hard to find for these consumer devices. Perhaps each USB4 port is a single-lane Gen 2 link, which would top out around 10Gbps. You could try parallelizing iperf3 to give you a bit more info. Also check the tx_speed and rx_speed in /sys/bus/thunderbolt/devices to see what it is negotiating.
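
For example (a rough sketch; the exact device paths vary per machine):

    # print the negotiated Thunderbolt/USB4 link speed for each connected device
    grep -H . /sys/bus/thunderbolt/devices/*/tx_speed /sys/bus/thunderbolt/devices/*/rx_speed 2>/dev/null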

Company specific TB driver/firmware updates can be found here https://www.thunderbolttechnology.net/updates

ralonso
0 replies
12h9m

I believe you might be right regarding the port itself being the issue here. I have a Minisforum UM775 (basically the previous gen of the 790 Pro mentioned in the OP) and despite having two "USB4" ports, only one is rated for 40Gbps while the second one is 10Gbps.

I cannot find any specifics regarding the ports on this 790 Pro one, but I'm certain that's ultimately the issue: the bottleneck is on that second (the one on the right) USB4 port.

daxfohl
1 replies
23h42m

This would only work for close range, right? To do a whole-house thing you'd need to get a bunch of USB-cat6 adapters which would end up being more expensive than a switch?

manmal
0 replies
23h30m

Yes, a couple of meters max at full speed. The full-speed cables are also quite expensive.

amelius
1 replies
18h41m

Does USB4 have the galvanic isolation that ethernet has?

ApolIllo
0 replies
17h7m

Thunderbolt 3 had a fiber option for longer runs, but I don't see it for Tbolt4.

ThinkBeat
1 replies
23h45m

My problem with setups like these is storage. Not much room for expansion in the small boxes, and I am not thrilled with external drives. Still, I'd use a QNAP or other NAS, and hope to get one with a compatible USB4 port.

jmrm
0 replies
19h22m

AFAIK those can take 2-3 NVMe drives, and you can expand them further with a PCI Express card that adds more NVMe slots.

BloodyIron
1 replies
21h37m

Didn't read the article, but the math they did about the R730 power draw is NOWHERE near realistic. A 1000-watt power supply unit (PSU) will only draw what is actually being used, not its full capacity 100% of the time. I have a fleet of R720s (the generation before the example R730s) and their typical at-wall draw when running many VMs is about 150 watts. So their math for that aspect is 100% WRONG.
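
For rough scale, using the ~15 cents/kWh rate quoted elsewhere in this thread:

    1000 W x 720 h/month = 720 kWh/month -> roughly $110/month
     150 W x 720 h/month = 108 kWh/month -> roughly $17/month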

That being said, neat concept IMO.

armhole2625
0 replies
21h23m

I also have a fleet of R720s and can confirm BloodyIron's numbers. Average draw is around 150 watts.

zamadatix
0 replies
23h57m

This is a great use of the Thunderbolt hardware that comes built into consumer devices.

For throughput try sticking "--parallel 4" or shorthand "-P 4" on the iPerf3 command and see if total throughput changes.
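
Something along these lines (the peer address here is just an example):

    iperf3 -c 10.0.0.2 -P 4 -t 30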

notorandit
0 replies
23h45m

Maybe he can get 20 Gbps between 2 servers. You get half because of USB limitations (2 ports on the same hub).

nope96
0 replies
18h11m

How good is error handling on USB4? The rare times I've had data corruption it was almost always when transferring lots of data over USB (this happened to me on multiple drives, computers, USB ports, and cables; this was in the early USB 3 days).

neilalexander
0 replies
18h46m

But still, I don’t understand why it can only hit 11Gbps at this moment

The interface/bridge MTUs might want increasing to a larger value. I notice a pretty big difference when connecting Macs together using Thunderbolt with an MTU of 9000 vs 1500.
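
On Linux that would be something along the lines of the following, assuming the link shows up as thunderbolt0 (the name varies by setup); both ends of the link need the larger MTU:

    ip link set dev thunderbolt0 mtu 9000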

lathiat
0 replies
14h33m

I recently wanted to do this and was surprised to learn that USB3 did not have a point-to-point mode like Thunderbolt - some controllers had "dual mode" ability but the vast majority don't. I was doing this on Macs with Thunderbolt 10 years ago but still couldn't do it on a PC today.

Despite being great in most ways, AMD desktop systems in general rarely have thunderbolt (unlike Intel) - a few niche motherboards have it. You also can't just add it with a PCIe card, it requires special support from the motherboard.

Hopefully this might change with USB4, but I'm not sure. It will be interesting to see what the lower-end motherboards end up shipping support for.

johnwalkr
0 replies
20h48m

I use a shorter, similar cable when I travel with two MacBooks (work and personal). I like to use Barrier (a software KVM of sorts) to share my mouse and keyboard between them and work on one, while using the other for music and reference material. I get work/personal separation with the convenience of a single mouse and keyboard.

I simply set up static IPs on each one and also setup internet sharing from one to the other. Surprisingly barrier, internet sharing and even charging always works with this arrangement. I only have to connect 1 macbook to wifi and power.
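
For anyone wanting to replicate the static-IP part, a minimal sketch (the bridge0 interface name and the addresses are assumptions; macOS usually exposes the link as a "Thunderbolt Bridge", and settings made this way don't persist across reboots):

    # on the first MacBook
    sudo ifconfig bridge0 inet 10.10.10.1 netmask 255.255.255.0
    # on the second MacBook
    sudo ifconfig bridge0 inet 10.10.10.2 netmask 255.255.255.0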

johng
0 replies
1d

This is neat.

jauntywundrkind
0 replies
16h53m

The speed here is concerning to me. This is way short of what I'd expect or hope for. I hope in the future reviewers start testing this sort of thing when doing in-depth reviews; it should be a very visible objective with real pressure behind it.

Hopefully there are wins available and this isn't some silly unadvertised hardware gotcha. A higher MTU and trying some parallelism are two good suggestions I heard.

One other thing I'd note: it's only been very recently that Linux learned how to let USB and DisplayPort negotiate/allocate bandwidth; it was evenly split between the two until March ("Linux 6.3 Adds Thunderbolt/USB4 DisplayPort Bandwidth Allocation Mode").

It's unclear how automatic this bandwidth management is and on which systems; users might need the new thunderbolt-utils from July to adjust it manually ("Intel Rolls Out thunderbolt-utils To Manage USB4/Thunderbolt Devices On Linux", https://www.phoronix.com/news/Intel-Linux-thunderbolt-utils).

I really want to hope much better is already possible. I don't own any USB4 systems though, however awesome they are. I hope we see some cool all-in-ones with multiple USB4 inputs; the upcoming Minisforum V3 tablet, for example, has at least one USB4 port that can do DisplayPort In, if I understand correctly, and that capability feels like it should be coming for free now on PCs' USB4 ports.

dgacmu
0 replies
21h31m

Somewhat related, I recently wanted to fix a slow NFS problem (1 server, 1 client - basically, a file server and a machine with some GPUs). It turns out that used previous-generation Mellanox 100-gig NICs are quite cheap now, and you can direct-connect two machines for about $600.

(What I really wanted was the low latency, but the bandwidth is handy to have sometimes)

Yukonv
0 replies
22h49m

Related, Intel was showing off Thunderbolt Share at CES[1]. It allows Thunderbolt 4/5 device-to-device transfer of files, with theoretical speeds of 20Gbps and 40Gbps for Thunderbolt 4 and 5 respectively.

One idea for why they were only able to reach 11Gbps is having only one Thunderbolt/USB4 controller[2], meaning the two USB4 ports split the 40Gbps PCIe lane. Throw in a full-duplex connection and you get 10Gbps in one direction.

[1] https://youtu.be/GqCwLjhb4YY?t=81

[2] Just a theory, but it seems like a sane assumption.

PeterStuer
0 replies
10h55m

Side note about the author's distrust of cheap switches 'from China': I have installed many switches from different brands over the years, and the most reliable by a large margin were from TP-Link, easily beating the likes of HP, Netgear, Linksys, and Cisco.

AceJohnny2
0 replies
17h39m

This was a selling point for the tras^H^H cylinder Mac Pros (2013), with Thunderbolt 2. I hear some teams constructed compute clusters based on that...