USB4/Thunderbolt is a magical protocol. Turns out the fastest way to move data between 2 modern PCs is to connect their Thunderbolt ports with a USB-C cable. The connection shows up as an Ethernet port on Windows and I can easily saturate an SSD with its 1GB/s+ transfer rate. And it just works (tm). Reminds me of FireWire on those 20-year-old Macs.
And what happens if you wire both ports together on the same PC?.. Do you get a broadcast (thunder) storm?
Target disk mode over FireWire was magical back in the day. Nothing like turning someone’s laptop into an oversized external hard drive to rescue data or get a borked OS X installation booting again.
FireWire had IP networking stacks in Windows and OS X in 2007. You could daisy chain a bunch of devices together to share a network connection.
Sure, but it was very expensive compared to USB and Ethernet, so FireWire never caught on with mainstream consumers outside of some niche cases like camcorders.
Thunderbolt was also expensive, which is why adoption was limited, but it's becoming more mainstream since Intel and Apple have been pushing it in recent years, and piggybacking on USB-C makes it an easy sell compared to requiring a separate connector like FireWire did.
Still, Thunderbolt peripherals are way more expensive than USB ones, so like FireWire before it, use is still mostly in the enthusiast/professional space.
Compared to Gigabit Ethernet back in that time period? Firewire was a huge bargain.
No, not gigabit, but 100 Mbit Ethernet was more than enough for what average consumers had to transfer back then, and it was significantly cheaper and more widely available than FireWire. Your HDD was more likely to be the bottleneck for faster network transfers.
Even the first version of Firewire was four times as fast as that.
Completely loading a 5 Gig iPod with music over that first version of Firewire still took a few minutes.
>Even the first version of Firewire was four times as fast as that.
Yes and? At what price points? What was the adoption rate? How many mainstream PCs and peripherals worldwide had it?
Wherever you went, whoever you met, you were way more likely to find a USB or ethernet port to hook up for a fast transfer rather than Firewire.
At least in my country at the time; maybe you lived in Cupertino/Palo Alto where everyone had iMacs and FireWire.
Just like VHS over Betamax, USB won because it was cheaper and more convenient despite being technically inferior to FireWire, and consumer tech at the time was a race to the bottom in terms of price.
>Completely loading a 5 Gig iPod with music over that first version of Firewire still took a few minutes.
Only the first gen iPod had FireWire before switching to USB, and even then, what was the point of FireWire 400 on it when the tiny, slow mechanical HDD inside was the real bottleneck?
There was no way the iPod would have been remotely as successful had it stayed on FireWire. Apple didn't have the market share back then to enforce their own less popular standard. Only when it switched to USB and started supporting PCs did the iPod really take off.
At the time, USB was still limited to 12 megabits per second, and transferring that same 5 Gigs of MP3 files would have taken over an hour. The FireWire iPod did it in a couple of minutes.
USB was cheaper, but dog slow.
Gigabit Ethernet was faster but WAY more expensive.
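Rough back-of-the-envelope numbers for those claims (nominal link rates only; real-world throughput would be lower, so these are best cases):

    # Time to move ~5 GB of MP3s over period-correct links (nominal rates).
    GIB = 1024 ** 3
    payload_bits = 5 * GIB * 8

    links_mbps = {
        "USB 1.1 (12 Mbit/s)": 12,
        "10 Mbit Ethernet": 10,
        "100 Mbit Ethernet": 100,
        "FireWire 400": 400,
    }

    for name, mbps in links_mbps.items():
        minutes = payload_bits / (mbps * 1_000_000) / 60
        print(f"{name}: ~{minutes:.0f} min")   # USB 1.1 ~60 min, FireWire 400 ~2 min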
I see you keep ignoring my arguments, so this is the last time I say it.
Again, only the first gen iPod was FireWire exclusive, and it was not yet a mainstream product since it was still Mac only, so average consumer demand for FireWire on home computers was lackluster and the iPod didn't change that.
FireWire was niche or non-existent in the home PC space, and it died completely with the launch of USB 2.0, remaining alive only in the prosumer space.
>Gigabit Ethernet was faster but WAY more expensive.
Please show me where I mentioned Gigabit Ethernet as an argument. I said 100 Mbit Ethernet, which was dirt cheap and almost every PC and Mac had it, as opposed to FireWire. If you needed a fast cross-platform transfer, it was your best bet at the time in terms of cost and mass availability, before USB 2.0 and gigabit hit the market.
Your claim was that Firewire "was very expensive compared to USB and Ethernet".
Which completely ignores the speed and the costs of the various data transfer standards as they existed at the time.
The cheap 12 megabit USB standard that existed at the time couldn't transfer 5 Gigs of MP3 files in less than an hour.
The cheaper 10 megabit version of Ethernet they sold at the time would also need more than an hour to transfer enough MP3 files to fill an iPod, and wouldn't have been cheaper than a FireWire port.
Ethernet with faster speeds than 400 megabit Firewire existed, but was MUCH more expensive.
Speed AND cost both matter.
Back then? It wasn't.
Later gens of iPod gained the ability to connect to USB but still supported FireWire. The majority of my usage of my 20GB 4th gen iPod was with the FireWire cable it shipped with.
It’s largely forgotten now that Apple supported FW on the 30-pin connector.
Yeah, we built a similar system a generation ago using FireWire on x86 and Linux.
FireWire at 800Mbps beat Gigabit Ethernet in terms of latency for a rather hard real-time system.
I remember the "good ol days" when I would always opt for a FireWire audio interface for music production and live performance over USB for exactly this reason. I'd get way better latency and stability.
Any ideas what I'm supposed to do with the FireWire mixer I bought 14 years ago?
Get a Thunderbolt 3 to Thunderbolt 2 dongle and a Thunderbolt 2 to Firewire dongle?
You can still use it! I keep an old ThinkPad X61 & T400 around with mini-FireWire ports for my MOTU 828 mkII interface. It also acts as a DAC over S/PDIF for my much newer Ryzen desktop. I'd like to try Thunderbolt to FW800 to FW400 adapters to see if I can get it working on something more modern, as I learned it has mainline Linux kernel support.
Even now an ancient FW400 HDD enclosure of mine is less flaky than a lot of USB storage I’ve used.
Way before that there was SCSI.
The nice thing is the low cost. Asus, for example, with their ThunderboltEX 4, lets you add TB4 via a PCIe card.
The neat thing about USB4 is the same as with PATA and later SATA: widely available and relatively cheap in consumer hardware. SCSI and FireWire were technically superior, but they were neither cheap nor widely available.
Oh, and I don't know about SCSI, but FireWire was actually a security risk.
I know thunderbolt at least up through 3 was generally carte blanche DMA, so an obvious security nightmare (strictly speaking no worse than cold boot attacks and the like, but there's a practicality difference between dumping raw DIMMs and just plugging in a thumb drive -- or inter-machine links like TFA, for that matter). Does TB4 bother trying to solve this?
Not really. Just like with OHCI 1394, it's the host IOMMU's responsibility to handle it.
Just a fair warning about these cards: the support is flaky at best. You should research whether it works with your motherboard and CPU before going down that route. I did a lot of research on this because I wanted to connect my gaming PC to an Apple Studio Display over optical Thunderbolt, but quickly decided against it.
Luckily there are good alternatives. I landed on a solution using a Belkin [0] DisplayPort and USB to Thunderbolt cable. I just get USB 2.0 speeds, but it's enough for my needs. I'm also able to extend it using an active DisplayPort 1.4 extender, for a total of 10 meters of cable.
[0] https://www.belkin.com/support-article/?articleNum=316883
Target disk mode to my workstation and saving someone's whole system with DiskWarrior used to be my favourite and most rewarding task. APFS did away with that joy; if a macOS system fails now, you have almost no chance of saving the system from itself.
You need more like 50 Gbps to saturate a modern Nvme SSD.
10 Gbps doesn't come close.
There was a moment with spinning rust drives where it would have made sense to have storage in a networked device rather than locally, but now it rarely makes sense unless an incredibly fast interconnect can be used.
Of course this example is still interesting and cool.
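As a rough sanity check on the 50 Gbps figure (the drive throughputs here are just typical spec-sheet numbers, not measurements):

    # Link bandwidth needed to keep up with a drive, ignoring protocol overhead.
    drives_gb_per_s = {
        "SATA SSD (~0.55 GB/s)": 0.55,
        "PCIe Gen 3 x4 NVMe (~3.5 GB/s)": 3.5,
        "PCIe Gen 4 x4 NVMe (~7 GB/s)": 7.0,
    }
    for name, rate in drives_gb_per_s.items():
        print(f"{name}: needs ~{rate * 8:.0f} Gbit/s")
    # A 10 Gbit/s link tops out around 1.25 GB/s, so it only covers the SATA case.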
InfiniBand can match NVMe bandwidth, and its latency is similar. Newer network cards can also present an NVMe-oF drive as a local NVMe drive.
25 Gb Ethernet roughly matches a PCIe Gen 3 NVMe drive's max throughput; 50 Gb will match Gen 4. These are RDMA capable.
It seems 25Gb dual port Mellanox CX-4 cards can be found on eBay for about $50. The cables will be a bit pricey. If not doing back to back, the switch will probably be very pricey.
There are 100Gb/s Intel Omni-Path switches currently on eBay for cheap:
https://www.ebay.com/itm/273064154224
And yep, they do apparently work ok if you're running Linux. :)
https://www.youtube.com/watch?v=dOIXtsjJMYE
Haven't seen info about how much noise they generate though, so not sure if suitable homelab material. :/
I saw that, but didn’t consider it particularly cheap. Also, the power draw of these things is likely a concern if run continuously.
Yeah, power draw could be a problem. :(
Cheap though... it's a small fraction of the price for a new one.
Are there better options around?
Looking at Ebay just now, I'm seeing some Mellanox IB switches around the same price point. Those things are probably super noisy though, and IB means more mucking around (needing an ethernet gateway for my use case).
It can match a single drive.
Assuming jumbo packets are used with RoCE, every 4096 bytes of data will have 70 bytes of protocol overhead [1]. This means that a 25 Gb/s Ethernet link can deliver no more than 3.07 GB/s of throughput.
Each lane of PCIe Gen 3 can deliver 985 MB/s [2], meaning the typical drive that uses 4 lanes would max out at 3.9 GB/s. Surely there is some PCIe/NVMe overhead, but 3.5 GB/s is achievable if the drive is fast enough. There are many examples of Gen 4 drives that deliver over 7 GB/s.
Supposing NVMe-oF is used, the NVMe protocol overhead over Ethernet and PCIe will be similar.
1. https://enterprise-support.nvidia.com/s/article/roce-v2-cons...
2. https://en.wikipedia.org/wiki/PCI_Express
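The arithmetic behind those figures, for anyone who wants to check it (just restating the overhead numbers from [1] and [2]):

    # RoCE v2 with jumbo frames: ~70 bytes of headers per 4096-byte payload [1].
    payload, overhead = 4096, 70
    line_rate_gbit = 25
    usable = line_rate_gbit / 8 * payload / (payload + overhead)
    print(f"25 GbE usable: ~{usable:.2f} GB/s")        # ~3.07 GB/s

    # PCIe Gen 3: ~985 MB/s per lane after 128b/130b encoding [2].
    print(f"Gen 3 x4 ceiling: ~{4 * 0.985:.1f} GB/s")  # ~3.9 GB/s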
Yes, that was my point - 10Gbps is just way too slow; even full Thunderbolt bandwidth can be easily saturated in a RAID configuration - NVMe drives are just incredibly fast.
For a couple of years I had a Linux NAS box under my desk with like 8 Samsung 850 pros in a big array connected to my desktop over 40GbE. Then NVMe became a common thing and the complexity wasn't worthwhile.
10 Gbps does not, but 10 GBps, as written above, is 80 Gbps, which matches your estimate.
I've tried the same method and it's about 10 Gbps, not 10 GBps.
Sounds like you were actually using USB 3.1 Gen 2 or USB 3.2 Gen 2[x1], not Thunderbolt 4.
I tried it with M2 Max Macbooks. Definitely TB4/USB4 capable.
With a certified TB4 cable?
Yes.
Thunderbolt 4 can't do 80Gbps either
Do you know if this is the case for all thunderbolt generations (speed differences aside)? Does it apply to thunderbolt using mini DisplayPort too or only over USB PHY?
It was possible on Thunderbolt 2. On the Mac side, OS X Mavericks enabled it; this Intel whitepaper from 2014 talks about the Windows side. https://www.thunderbolttechnology.net/sites/default/files/Th...
https://www.gigabyte.com/Press/News/1140 doesn't mention it, and the Intel whitepaper specifically requires TB2, so I would guess TB2 is where it started.
Pity Thunderbolt 2 is basically non-existent nowadays. I have a few MacBook Pro 13s (2015) and I'd love to be able to use the Thunderbolt 2 ports, but peripherals were too expensive and the standard short-lived. Try finding a Thunderbolt 2 dock anywhere. Filter out all the false positives (USB-C docks) and there are maybe 10 total on eBay, and they're stupidly expensive for 6+ year old used devices, most without cables or power supplies. Such a pity, because they can really extend the useful life of those laptops.
TB3 is backwards compatible so you can use the Apple TB2 to 3 adapter in conjunction with a TB2 cable to hook up any TB3 device to your MBP 2015. I had the mid-2015 15" MBP and used the adapter to hook up an external GPU. If that can work I'm sure a TB3 dock will.
It’s the beauty and elegance of the PCIe design. Thunderbolt just provides convenient ported access to those lanes.
The cardinal sin of USB IF was releasing the 5gbps then 10gbps USB modes.
It should've been PCIe 2.0 x1 and then 3.0 x1. There was absolutely no reason not to do it: PCIe 2.0 came out in January 2007, USB 3.0 came out in November 2008. PCIe 3.0 followed in November 2010 and 10gbps over the USB C connector didn't appear until August 2014.
What USB4 version 2.0 can only do with a complex tunneling architecture we could get "straight": PCIe 5.0 x1 can do 32gbps which closely matches the 40gbps lane speed defined in USB4 version 2.0 (which again came out years after PCIe 5.0 mind you). It would require two lanes, one for RX one for TX and the other two lanes could carry UHBR20 data for display, for a total of 40gbps. This very closely resembles the 80gbps bus speed of USB4 version 2.0 but the architecture is vastly simpler.
We wouldn't have needed dubious-quality separate USB-to-SATA, then USB-to-Ethernet, etc. adapters. External 10GbE would be ubiquitous instead of barely existing and expensive. Similarly, eGPUs would not need to be niche, and DisplayLink simply wouldn't exist because it wouldn't need to exist, and the world would be a better place for it. You could just run a very low wattage, very simple, but real GPU instead. Say, the SM750 is like 2W.
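For reference, the per-lane rates being compared line up roughly like this (published signalling rates only, with PCIe adjusted for its line coding; not measured throughput):

    # Nominal per-lane rates, Gbit/s. PCIe usable rate accounts for line coding
    # (8b/10b for Gen 2, 128b/130b from Gen 3 onward).
    comparisons = [
        ("USB 5 Gbit/s mode",  5.0,  "PCIe 2.0 x1", 5.0 * 8 / 10),
        ("USB 10 Gbit/s mode", 10.0, "PCIe 3.0 x1", 8.0 * 128 / 130),
        ("USB4 v2 lane",       40.0, "PCIe 5.0 x1", 32.0 * 128 / 130),
    ]
    for usb_name, usb_rate, pcie_name, pcie_rate in comparisons:
        print(f"{usb_name}: {usb_rate:g}  vs  {pcie_name}: ~{pcie_rate:.1f}")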
I get what you’re saying but not specifically how you’re imagining the implementation. What do you envision the difference between thunderbolt and usb would be in this case? All complex/bandwidth-intensive applications would be better suited to use PCIe directly, but the problem has always been that for various peripherals this imposes a (small) cost manufacturers would rather not pay and would prefer to have the usb spec abstract over.
There would be no choice; there would be no 5gbps USB mode, so there's nothing you can do but use a PCIe chip. It would've brought down PCIe costs over the years.
Ok, yeah, I agree. But the world of cost-cutting and penny-saving would never allow that - same reason FireWire lost out to USB. As passive and dumb peripherals as possible won out (for cheaper parts and faster time to market).
My TB3 mesh network shows interfaces as thunderbolt0, etc. This is on Linux using thunderbolt_net from the kernel. Latency is worse than regular twisted-pair Ethernet.
Dang, that wouldn’t play nice for MPI then.
I was seeing 1-1.5ms latency using Linux bridges for the mesh. Not a huge issue for Ceph replication, but significantly more than a switched LAN. It may be possible to get it lower with routing instead of bridging, but my understanding is thunderbolt_net on Linux is not perfect in that regard.
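If anyone wants to reproduce that kind of comparison without setting up a full benchmark, a crude UDP echo timer is enough to see the difference between a thunderbolt0 link and a regular NIC; the peer address below is just a placeholder:

    # Crude UDP round-trip probe: run with --serve on one host, point the
    # other host at the address assigned to the interface under test.
    import argparse, socket, time

    def serve(port):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("0.0.0.0", port))
        while True:
            data, addr = s.recvfrom(64)
            s.sendto(data, addr)  # echo straight back

    def probe(host, port, count=200):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(1.0)
        rtts = []
        for i in range(count):
            t0 = time.perf_counter()
            s.sendto(str(i).encode(), (host, port))
            s.recvfrom(64)
            rtts.append((time.perf_counter() - t0) * 1000)
        print(f"min/avg/max RTT: {min(rtts):.3f}/{sum(rtts)/len(rtts):.3f}/{max(rtts):.3f} ms")

    if __name__ == "__main__":
        p = argparse.ArgumentParser()
        p.add_argument("--serve", action="store_true")
        p.add_argument("--host", default="10.0.0.2")  # placeholder peer address
        p.add_argument("--port", type=int, default=9999)
        args = p.parse_args()
        serve(args.port) if args.serve else probe(args.host, args.port)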
I think it works for older thunderbolt too. Been years since I tested it though.
With the caveat being that's only if you're not up for adding PCIe cards.
If you are ok with adding some PCIe cards, then you can transfer things a lot faster than 1GB/s. :)
Care to elaborate?
Network cards that go a lot faster than 10GbE are common.
They're widely used (with many different types) in IT data centres, home labs, and probably other places too.
Heaps of them are on Ebay. As a random search just now for "Mellanox 25GbE" on US Ebay:
* https://www.ebay.com/itm/134435757546
* https://www.ebay.com/itm/355348422765
(there are hundreds of individual results)
Searching for "Mellanox 50GbE":
* https://www.ebay.com/itm/225021493021
* https://www.ebay.com/itm/233915360659
(fewer results)
There are older generation ones too, doing 40GbE:
* https://www.ebay.com/itm/305046322527
* https://www.ebay.com/itm/166350081025
(hundreds of results again)
With those older generation cards, some care is needed depending upon the OS being run. If you're using Linux you should be fine.
If you're running some other OS though (eg ESXi) then they might have dropped out of the "supported list" for the OS and not have their drivers included.
I bought a bunch of cheap HP 40Gbps NICs [0] when they were $13, but they need PCI-E risers to make them fit a full-height slot. They work fine in pfSense & Fedora.
[0] https://www.ebay.com/itm/333682185870
My first job in 2007 had me plugging fiber in to 40Gbps backbone units. I was told the ports cost a million dollars each. Now $13. Amazing.
It's amazing what you can do with some sheet metal and tin snips when there's a strong enough need for the bracket to fit a particular slot height. Homelab environment obviously. :)
---
Oh, if you have even a hobby grade CNC machine around, you can get a fairly professional level result:
https://forums.servethehome.com/index.php?threads/3d-printab...
I assume they're referring to getting a couple of NICs
OK but what then? I've had ethernet ports on my computers since I can remember, and that hasn't magically allowed me to transfer data back and forth just by plugging a patch cable into both machines. What software is at work here?
I’m guessing if they’re using Windows it’s boring old SMB?
Depending on what you want to do on what O/S, batteries may or may not be included. I use Moonlight on a Mac to control a Windows 10 PC running Sunshine--lower latency than Remote Desktop (which I don't have anyway in the Home edition), and nicer looking than Parsec.
When you connect a patch cable directly (no crossover cable needed in the 21st century), you'll likely find that each system has a self-assigned IP in the 169.254 network.
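Once both ends have addresses (the self-assigned link-local ones, or ones you set by hand), anything that speaks TCP will work over the direct link. A minimal one-shot copy as a sketch, with the address, port, and filename purely as placeholders:

    # Receiver:  python xfer.py recv 9000 > out.bin
    # Sender:    python xfer.py send 169.254.83.107 9000 big_file.bin
    import socket, sys, shutil

    def recv(port):
        srv = socket.create_server(("", port))   # listen on all interfaces
        conn, _ = srv.accept()
        shutil.copyfileobj(conn.makefile("rb"), sys.stdout.buffer)

    def send(host, port, path):
        with socket.create_connection((host, int(port))) as conn, open(path, "rb") as f:
            conn.sendfile(f)                      # stream the file over the socket

    if __name__ == "__main__":
        if sys.argv[1] == "recv":
            recv(int(sys.argv[2]))
        else:
            send(*sys.argv[2:5])

In practice, of course, plain SMB, scp, or rsync over that link is what most people actually use.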
WTF!
I have two identical machines that I need to do this with... lemme test it and see.
...
Argh, USB 3... so nope. (How FN lame is that)
Indeed... I just got my first computer with a USB-C port and have been puzzling over what to do with it. Most of the cool tricks seem to require Thunderbolt, which it is not.
I've been playing with Thunderbolt networking over the past week with mixed results. I can get 16Gbps between a couple of Macs. Between a Mac and a PC running Windows 10 I get similar speeds in one direction, but less than 1Gbps in the other direction.
In terms of scaling this to multiple hosts as the author does, I've read that it is possible to daisy chain, or even use a hub, but it doesn't strike me as the most reliable way to build a network. For an ad hoc connection, though (like null modem cables of yore), it's a great option.
I think reliability is a great metric to evaluate this on. I don't have a lot of experience with USB4/Thunderbolt networking, but as far as ring network principles go, when you have a network with only 3 nodes, a ring topology is also a fully connected topology. This means that connectivity between nodes should never fail due to the failure of a node. That screams reliable to me.
As far as points of failure, there's no additional hub/switch in between the devices, so you have a Thunderbolt controller on each device, two cables, and two ports. If a cable goes bad, as long as there isn't a silent/awkward failure mode, all three nodes can still talk to each other, at degraded speed. If a switch goes bad, the whole network is down, unless you start talking about redundant switch topologies.
To your point though, there does seem to be plenty of shenanigans with performance, especially between devices with different Thunderbolt controllers, that may make this less ideal. But IMO, that's more a question of whether you want to go with a more battle-tested topology, or are okay with a less battle-tested, but still highly performant and "simple" (we won't go into how bonkers the USB/Thunderbolt spec is) one.
A literal loopback interface? Two of them, most likely.