Cloning a Laptop over NVMe TCP

mrb
48 replies
11h18m

In the author's scenario, there are zero benefits in using NVMe/TCP, as he just ends up doing a serial block copy using dd(1) so he's not leveraging concurrent I/O. All the complex commands can be replaced by a simple netcat.

On the destination laptop:

  $ nc -l -p 1234 | dd of=/dev/nvme0nX bs=1M
On the source laptop:

  $ nc x.x.x.x 1234 </dev/nvme0nX
The dd on the destination is just to buffer writes so they are faster/more efficient. Add a gzip/gunzip on the source/destination and the whole operation is a lot faster if your disk isn't full, ie. if you have many zero blocks. This is by far my favorite way to image a PC over the network. I have done this many times. Be sure to pass "--fast" to gzip as the compression is typically a bottleneck on GigE. Or better: replace gzip/gunzip with lz4/unlz4 as it's even faster. Last time I did this was to image a brand new Windows laptop with a 1TB NVMe. Took 20 min (IIRC?) over GigE and the resulting image was 20GB as the empty disk space compresses to practically nothing. I typically back up that lz4 image and years later when I donate the laptop I restore the image with unlz4 | dd. Super convenient.
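
For reference, the lz4 variant looks roughly like this (device names are placeholders; adjust for your disks). On the destination:

  $ nc -l -p 1234 | unlz4 | dd of=/dev/nvme0nX bs=1M
On the source:

  $ lz4 < /dev/nvme0nX | nc x.x.x.x 1234
To keep a compressed image instead, point the receiving netcat at a file ("nc -l -p 1234 > laptop.img.lz4") and restore later with "unlz4 < laptop.img.lz4 | dd of=/dev/nvme0nX bs=1M".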

That said I didn't know about that Linux kernel module nvme-tcp. We learn new things every day :) I see that its utility is more for mounting a filesystem over a remote NVMe, rather than accessing it raw with dd.

Edit: on Linux the maximum pipe buffer size is 64kB so the dd bs=X argument doesn't technically need to be larger than that. But bs=1M doesn't hurt (it buffers the 64kB reads until 1MB has been received) and it's future-proof if the pipe size is ever increased :) Some versions of netcat have options to control the input and output block size, which would alleviate the need to use dd bs=X, but on rescue discs the netcat binary is usually a version without these options.

bayindirh
13 replies
10h31m

As a sysadmin, I'd rather use NVMe TCP or Clonezilla to do a slow write rather than trying to go 5% faster with more moving parts and a chance of corrupting my drive in the process.

Plus, it'd be a well-deserved coffee break.

Considering I'd be going at GigE speeds at best, I'd add "oflag=direct" to bypass caching on the target. A bog standard NVMe can write >300MBps unhindered, so trying to cache is moot.

Lastly, parted can do partition resizing, but given the user is not a power user to begin with, it's just me nitpicking. Nice post otherwise.

mrb
9 replies
9h57m

NVMe/TCP or Clonezilla are vastly more moving parts and chances to mess up the options, compared to dd. In fact, the author's solution exposes his NVMe to unauthenticated remote write access by any number of clients(!) By comparison, the dd on the source is read-only, and the dd on the destination only accepts the first connection (yours) and no one else on the network can write to the disk.

I strongly recommend against oflag=direct as in this specific use case it will always degrade performance. Read the O_DIRECT section in open(2). Or try it. Basically using oflag=direct locks the buffer so dd will have to wait for the block to be written by the kernel to disk until it can start reading the data again to fill the buffer with the next block, thereby reducing performance.

bayindirh
8 replies
9h48m

the author's solution exposes his NVMe to unauthenticated remote write access by any number of clients(!)

I won't be bothered in a home network.

Clonezilla are vastly more moving parts

...and one of these moving parts is image integrity and write integrity verification, allowing byte-by-byte verification during imaging and after the write.

I strongly recommend against oflag=direct as in this... [snipped for brevity]

Unless you're getting a bottom of the barrel NVMe, all of them have DRAM caches and do their own write caching independent of O_DIRECT, which only bypasses OS caches. Unless the pipe you have has higher throughput than your drive, caching in the storage device's controller ensures optimal write speeds.

I can hit theoretical maximum write speeds of all my SSDs (internal or external) with O_DIRECT. When the pipe is fatter or the device can't sustain those speeds, things go south, but this is why we have knobs.

When you don't use O_DIRECT in these cases, you may see an initial speed surge, but the total time doesn't decrease.

TL;DR: When you're getting your data at 100MBps at most, using O_DIRECT on an SSD with 1GBps write speeds doesn't affect anything. You're not saturating anything on the pipe.

Just did a small test:

    dd if=/dev/zero of=test.file bs=1024kB count=3072 oflag=direct status=progress 
    2821120000 bytes (2.8 GB, 2.6 GiB) copied, 7 s, 403 MB/s
    3072+0 records in
    3072+0 records out
    3145728000 bytes (3.1 GB, 2.9 GiB) copied, 7.79274 s, 404 MB/s
Target is a Samsung T7 Shield 2TB, with 1050MB/sec sustained write speed. Bus is USB 3.0 with 500MBps top speed (so I can go 50% of drive speed). Result is 404MBps, which is fair for the bus.

If the drive didn't have its own cache, caching on the OS side would have a more profound effect, since I could queue more writes to the device and pool them in RAM.

mrb
5 replies
9h27m

Your example proves me right. Your drive should be capable of 1000 MB/s but O_DIRECT reduces performance to 400 MB/s.

This matters in the specific use case of "netcat | gunzip | dd" as the compressed data rate on GigE will indeed be around 120 MB/s but when gunzip is decompressing unused parts of the filesystem (which compress very well), it will attempt to write 1+ GB/s or more to the pipe to dd and it would not be able to keep up with O_DIRECT.

Another thing you are doing wrong: benchmarking with /dev/zero. Many NVMe do transparent compression so writing zeroes is faster than writing random data and thus not a realistic benchmark.

PS: to clarify I am very well aware that not using O_DIRECT gives the impression initial writes are faster as they just fill the buffer cache. I am talking about sustained I/O performance over minutes as measured with, for example, iostat. You are talking to someone who has been doing Linux sysadmin and perf optimizations for 25 years :)

PPS: verifying data integrity is easy with the dd solution. I usually run "sha1sum /dev/nvme0nX" on both source and destination.

PPPS: I don't think Clonezilla is even capable of doing something similar (copying a remote disk to local disk without storing an intermediate disk image).

bayindirh
3 replies
8h53m

Your example proves me right. Your drive should be capable of 1000 MB/s but O_DIRECT reduces performance to 400 MB/s.

I noted that the bus I connected the device to has a theoretical bandwidth of 500MBps, no?

To cite myself:

Target is a Samsung T7 Shield 2TB, with 1050MB/sec sustained write speed. Bus is USB 3.0 with 500MBps top speed (so I can go 50% of drive speed). Result is 404MBps, which is fair for the bus.
mrb
2 replies
8h27m

Yes, USB 3.0 is 500 MB/s, but are you sure your bus is 3.0? It would imply your machine is 10+ years old. Most likely it's 3.1 or newer, which is 1000 MB/s. And again, benchmarking /dev/zero is invalid anyway, as I explained (transparent compression).

doubled112
0 replies
5h7m

TIL they have been sneaking versions of USB in while I haven't been paying attention. Even on hardware I own. Thanks for that.

crote
0 replies
4h4m

No, it wouldn't imply the machine is 10+ years old. Even a state-of-the-art motherboard like the Gigabyte Z790 D AX (which became available in my country today) has more USB 3 gen1 (5Gbps) ports than gen2 (10Gbps).

The 5Gbps ports are just marketed as "USB 3.1" instead of "USB 3.0" these days, because USB naming is confusing and the important part is the "gen x".

ufocia
0 replies
4h40m

I wonder how using tee to compute the hash in parallel would affect the overall performance.
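
A sketch of what I mean, using bash process substitution on the receiving end (untested; names are illustrative):

  nc -l -p 1234 | tee >(sha1sum > /tmp/image.sha1) | dd of=/dev/nvme0nX bs=1M
The hash would then be computed in the same pass as the write, instead of re-reading the whole disk afterwards.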

Dylan16807
1 replies
9h19m

...and one of these moving parts is image integrity and write integrity verification, allowing byte-by-byte verification during imaging and after the write.

dd followed by sha1sum on each end is still very few moving parts and should still be quite fast.

bayindirh
0 replies
8h51m

Yes, in the laptop and one-off case, that's true.

In a data center it's not (this is when I use clonezilla 99.9% of the time, tbf).

iforgotpassword
2 replies
10h20m

I don't see how you can consider the NVMe over TCP version to have fewer moving parts.

dd is installed on every system, and if you don't have nc you can still use ssh and sacrifice a bit of performance.

  dd if=/dev/foo | ssh dest@bar "cat > /dev/moo"

bayindirh
1 replies
10h16m

NVMe over TCP encapsulates and shows me the remote device as is. Just a block device.

I just copy that block device with "dd", that's all. It's just a dumb pipe encapsulated with TCP, which is already battle tested enough.

Moreover, if I have a fatter pipe, I can tune dd for better performance with a single command.

darkwater
0 replies
9h52m

netcat encapsulates data just the same (although in a different manner), and it's even more battle-tested. NVMe over TCP's use case is to actually use the remote disk over the network as if it were local. If you just need to dump a whole disk like in the article, dd+netcat (or even just netcat, as someone pointed out) will work just the same.

tripflag
9 replies
10h46m

This use of dd may cause corruption! You need iflag=fullblock to ensure it doesn't truncate any blocks, and (at the risk of cargo-culting) conv=sync doesn't hurt either. I prefer to just nc -l -p 1234 > /dev/nvme0nX.

dezgeg
4 replies
10h5m

Isn't `nc -l -p 1234 > /dev/nvme0nX` working by accident (relying on that netcat is buffering its output in multiples of disk block size)?

jasomill
2 replies
9h34m

No — the kernel buffers non-O_DIRECT writes to block devices to ensure correctness.

Larger writes will be more efficient, however, if only due to reduced system call overhead.

While not necessary when writing an image with the correct block size for the target device, even partial block overwrites work fine:

  # yes | head -c 512 > foo
  # losetup /dev/loop0 foo
  # echo 'Ham and jam and Spam a lot.' | dd bs=5 of=/dev/loop0
  5+1 records in
  5+1 records out
  28 bytes copied, 0.000481667 s, 58.1 kB/s
  # hexdump -C /dev/loop0
  00000000  48 61 6d 20 61 6e 64 20  6a 61 6d 20 61 6e 64 20  |Ham and jam and |
  00000010  53 70 61 6d 20 61 20 6c  6f 74 2e 0a 79 0a 79 0a  |Spam a lot..y.y.|
  00000020  79 0a 79 0a 79 0a 79 0a  79 0a 79 0a 79 0a 79 0a  |y.y.y.y.y.y.y.y.|
  *
  00000200
Partial block overwrites may (= will, unless the block to be overwritten is in the kernel's buffer cache) require a read/modify/write operation, but this is transparent to the application.

Finally, note that this applies to most block devices, but tape devices work differently: partial overwrites are not supported, and, in variable block mode, the size of individual write calls determines the resulting tape block sizes.

neuromanser
0 replies
2h3m

# yes | head -c 512 > foo

How about `truncate -s 512 foo`?

dezgeg
0 replies
8h56m

Somehow I had thought even in buffered mode the kernel would only accept block-aligned and sized I/O. TIL.

mrb
0 replies
9h42m

Your exact command works reliably but is inefficient. And it works by design, not accident. For starters, the default block size in most netcat implementations is tiny like 4 kB or less. So there is a higher CPU and I/O overhead. And if netcat does a partial or small read less than 4 kB, when it writes the partial block to the nvme disk, the kernel would take care of reading a full 4kB block from the nvme disk, updating it with the partial data block, and rewriting the full 4kB block to the disk, which is what makes it work, albeit inefficiently.

adrian_b
1 replies
10h14m

According to the documentation of dd, "iflag=fullblock" is required only when dd is used with the "count=" option.

Otherwise, i.e. when dd has to read the entire input file because there is no "count=" option, "iflag=fullblock" does not have any documented effect.

From "info dd":

"If short reads occur, as could be the case when reading from a pipe for example, ‘iflag=fullblock’ ensures that ‘count=’ counts complete input blocks rather than input read operations."

tripflag
0 replies
9h51m

Thank you for the correction -- it is likely that I did use count= when I ran into this some 10 years ago (and have been paranoid about it ever since). I thought a chunk of data was missing in the middle of the output file, causing everything after that to be shifted over, but I'm probably misremembering.

M95D
0 replies
9h48m

I would include bs=1M and oflag=direct for some extra speed.

_flux
8 replies
10h30m

there are zero benefits in using NVMe/TCP, as he just ends up doing a serial block copy using dd(1) so he's not leveraging concurrent I/O

I guess most people don't have a local network faster than their SSD can transfer.

I wonder though, for those people who do, does a concurrent I/O block device replicator tool exist?

Btw, you might also want to use pv in the pipeline to see an ETA, although it might have a small impact on performance.

crote
6 replies
4h13m

I doubt it makes a difference. SSDs are an awful lot better at sequential writes than random writes, and concurrent IO would mainly speed up random access.

Besides, I don't think anyone really has a local network which is faster than their SSD. Even a 4-year-old consumer Samsung 970 Pro can sustain full-disk writes at 2,000 MB/s, easily saturating a 10Gbit connection.

If we're looking at state-of-the-art consumer tech, the fastest you're getting is a USB4 40Gbit machine-to-machine transfer - but at that point you probably have something like the Crucial T700, which has a sequential write speed of 11,800 MB/s.

The enterprise world probably doesn't look too different. You'd need a 100Gbit NIC to saturate even a single modern SSD, but any machine with such a NIC is more likely to have closer to half a dozen SSDs. At that point you're starting to be more worried about things like memory bandwidth instead. [0]

[0]: http://nabstreamingsummit.com/wp-content/uploads/2022/05/202...

JoeAltmaier
3 replies
4h9m

Can't find corroboration for the assertion 'SSDs are an awful lot better at sequential writes than random writes'.

Doesn't make sense at first glance. There's no head to move, as in an old-style hard drive. What else could make random write take longer on an SSD?

PeterisP
1 replies
2h3m

The key aspect is that such memory generally works on a "block" level so making any smaller-than-block write on a SSD requires reading a whole block (which can be quite large), erasing that whole block and then writing back the whole modified block; as you physically can't toggle a bit without erasing the whole block first.

So if large sequential writes mean that you only write full whole blocks, that can be done much faster than writing the same data in random order.

wtallis
0 replies
30m

In practice, flash based SSDs basically never do a full read-modify-write cycle to do an in-place update of an entire erase block. They just write the new data elsewhere and keep track of the fragmentation (consequently, sequential reads of data that wasn't written sequentially may not be as fast as sequential reads of data that was written sequentially).

RMW cycles (though not in-place) are common for writes smaller than a NAND page (eg. 16kB) and basically unavoidable for writes smaller than the FTL's 4kB granularity

wtallis
0 replies
4h0m

The main problem is that random writes tend to be smaller than the NAND flash erase block size, which is in the range of several MB.

You can check literally any SSD benchmark that tests both random and sequential IO. They're both vastly better than a mechanical hard drive, but sequential IO is still faster than random IO.

wolrah
0 replies
2h12m

Besides, I don't think anyone really has a local network which is faster than their SSD. Even a 4-year-old consumer Samsung 970 Pro can sustain full-disk writes at 2,000 MB/s, easily saturating a 10Gbit connection.

You might be surprised if you take a look at how cheap high speed NICs are on the used market. 25G and 40G can be had for around $50, and 100G around $100. If you need switches things start to get expensive but for the "home lab" crowd since most of these cards are dual port a three-node mesh can be had for just a few hundred bucks. I've had a 40G link to my home server for a few years now mostly just because I could do it for less than the cost of a single hard drive.

BeeOnRope
0 replies
1h33m

In EC2, most of the "storage optimized" instances (which have the largest/fastest SSDs) generally have more advertised network throughput than SSD throughput, by a factor usually in the range of 1 to 2 (though it depends on exactly how you count it, e.g., how you normalize for the full-duplex nature of network speeds and same for SSD).

Palomides
0 replies
6h25m

dd has status=progress to show bytes read/written now, I just use that

exceptione
2 replies
6h22m

That said I didn't know about that Linux kernel module nvme-tcp. We learn new things every day :) I see that its utility is more for mounting a filesystem over a remote NVMe, rather than accessing it raw with dd.

Aside, I guess nvme-tcp would result in fewer writes, as you only copy files instead of writing the whole disk over?

NoahKAndrews
0 replies
3h35m

Not if you use it with dd, which will copy the blank space too

Multrex
1 replies
9h19m

Seems awesome. Can you please tell us how to use gzip or lz4 to do the imaging?

zbentley
0 replies
7h27m

If you search for “dd gzip” or “dd lz4” you can find several ways to do this. In general, interpose a gzip compression command between the sending dd and netcat, and a corresponding decompression command between the receiving netcat and dd.

For example: https://unix.stackexchange.com/questions/632267
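
A minimal sketch of the gzip variant (the lz4 one has the same shape; device names are placeholders):

  # destination:
  nc -l -p 1234 | gunzip | dd of=/dev/nvme0nX bs=1M
  # source:
  gzip --fast < /dev/nvme0nX | nc x.x.x.x 1234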

wang_li
0 replies
3h25m

Would be much better to hook this up to dump and restore. It'll only copy used data and you can do it while the source system is online.

For compression the rule is that you don't do it if the CPU can't compress faster than the network.

vient
0 replies
8h9m

on Linux the maximum pipe buffer size is 64kB

Note that you can increase pipe buffer, I think default maximum size is usually around 1MB. A bit tricky to do from command line, one possible implementation being https://unix.stackexchange.com/a/328364

szundi
0 replies
9h18m

This is exactly what I usually do; it works like a charm

loeg
0 replies
2h11m

Agree, but I'd suggest zstd instead of gzip (or lz4 is fine).

khaki54
0 replies
5h57m

Yep I've done this and it works in a pinch. 1Gb/s is also a reasonable fraction of SATA speeds too.

fransje26
0 replies
4h35m

I am not often speechless, but this hit the spot. Well done Sir!

Where does one learn this black art?

ersamsa
0 replies
9h18m

Just cat the blockdev to a bash socket
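
E.g. (bash's /dev/tcp paths are connect-only, so the receiving end still needs a listener; device names are placeholders):

  # destination:
  nc -l -p 1234 > /dev/nvme0nX
  # source (no netcat needed):
  cat /dev/nvme0nX > /dev/tcp/x.x.x.x/1234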

alpenbazi
0 replies
6h35m

Sir, i dont upvote much, but your post deserves a double up, at least

HankB99
0 replies
1h21m

I came here to suggest similar. I usually go with

    dd if=/dev/device | mbuffer to Ethernet to mbuffer dd of=/dev/device
(with some switches to select better block size and tell mbuffer to send/receive from a TCP socket)
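
Spelled out, something like this (flags from memory -- check mbuffer(1); sizes are just what I tend to use):

    # destination: listen on TCP 1234 with a 1 GiB buffer
    mbuffer -I 1234 -s 1M -m 1G | dd of=/dev/nvme0nX bs=1M
    # source:
    dd if=/dev/nvme0nX bs=1M | mbuffer -s 1M -m 1G -O x.x.x.x:1234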

If it's on a system with a fast enough processor I can save considerable time by compressing the stream over the network connection. This is particularly true when sending a relatively fresh installation where lots of the space on the source is zeroes.

BuildTheRobots
0 replies
2h20m

It's a little grimy, but if you use `pv` instead of `dd` on both ends you don't have to worry about specifying a sensible block size and it'll give you nice progression graphs too.
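
Something like (add "-s <disk size>" to the sending pv if you want a meaningful ETA):

  # destination:
  nc -l -p 1234 | pv > /dev/nvme0nX
  # source:
  pv /dev/nvme0nX | nc x.x.x.x 1234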

wjnc
14 replies
11h18m

So this just bit-for-bit dumps an NVMe device to another location. That's clear. So all encryption is just transferred and not touched. But doesn't the next machine go into a panic when you boot? There are probably many changes in the underlying machine? (Okay, now I read the other post. The author really knows the way. This is at least intermediate Linux.)

jauntywundrkind
13 replies
11h10m

A Linux install is often remarkably hardware agnostic.

Windows would panic, certainly (because so much drivers & other state is persisted & expected), but the Linux kernel when it boots kind of figures out afresh what the world is every time. That's fine.

The main thing you ought to do is generate a new systemd/dbus machine-id. But past this, I fairly frequently instantiate new systems by taking a btrfs snapshot of my current machine & send that snapshot over to a new drive. Chroot onto that drive, and use bootctl to install systemd-boot, and then I have a second Linux ready to go.
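
Roughly, assuming a read-only snapshot and with paths that are purely illustrative:

  # from the old system: send the snapshot to the new drive
  btrfs send /snapshots/root-ro | btrfs receive /mnt/newdrive
  # inside the chroot on the new drive (with its ESP mounted):
  rm /etc/machine-id && systemd-machine-id-setup   # fresh machine-id
  bootctl install                                  # systemd-boot onto the new ESP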

therein
5 replies
11h0m

I expected Windows to panic as well but Win 10 handles it relatively gracefully. It says Windows detected a change to your hardware and does a relatively decent job.

netsharc
4 replies
10h53m

Yeah, I've done several such transplants of disks onto different hardware.. maybe even from an AMD to an Intel system once (different mainboard chipsets), and of Windows 7. As you say, it will say new hardware detected and a lot of the time it will get the drivers for them from Windows Update. There are also 3rd party tools to remove no-longer-existing devices from the registry, to clean the system up a bit.

matrss
3 replies
7h59m

Windows will, however, no longer acknowledge your license key. At least that is what happened when I swapped a Windows installation from an Intel system into an AMD system.

raudette
1 replies
4h32m

With Windows 10, I have unplugged the primary storage from an old laptop, and plugged it into a new one, without issue.

The system booted, no license issues.

wtallis
0 replies
3h50m

Likely because on a laptop there's usually a Windows license embedded in an ACPI table. As long as you weren't moving a Pro edition installation to a machine that only had a key for Home edition, it would silently handle any re-activation necessary.

toast0
0 replies
5h58m

If you switch to a Microsoft account instead of a local account before you move your install, you can often recover your license once Windows processes the hardware change (sometimes it takes several attempts over a few reboots, their license server is special), and once that happens, you can go back to a local account, if Microsoft didn't remove that flow.

lnxg33k1
3 replies
10h52m

If you moved the wrong stuff Linux would give you a bad time too, try /proc, /dev

fellerts
2 replies
10h18m

Those are pseudo-filesystems though, they aren't part of the install.

lnxg33k1
1 replies
9h55m

Not sure where I said they were part of the install, but I meant that if you don’t exclude them bad things would happen to the cloning process

Edit: or backup process

account42
0 replies
7h30m

With a block device copy approach like in TFA you don't need to worry about that because those filesystems do not exist on the block device.

When copying files (e.g. with rsync) you need to watch out for those, yes. The best way to deal with other mounts is not to copy directly from / but instead non-recursively bind-mount / to another directory and then copy from that.
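
Something like this, with illustrative paths (a plain bind mount is non-recursive, so /proc, /dev, and other mounts under / don't come along):

  mount --bind / /mnt/rootonly
  rsync -aHAX /mnt/rootonly/ root@newhost:/mnt/target/
  umount /mnt/rootonly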

redleader55
0 replies
3h2m

To me the most complicated thing would be to migrate an encrypted disk with the key stored in TPM. The article doesn't try to do that, but I wonder if there is a way.

justsomehnguy
0 replies
10h30m

Windows would panic, certainly (because so much drivers & other state is persisted & expected)

Nope.

Stop error 7B (INACCESSIBLE_BOOT_DEVICE) is because back in the WinNT days it made sense to disable the drivers for which there are no devices in the system. Because memory, and because people can't write drivers.

Nowadays it's AHCI or NVMe, which are pretty vendor agnostic, so you have a 95% chance of a successful boot. And if your boot is successful then Windows is fine and yes, it can grab the remaining drivers from the WU.

Nexxxeh
0 replies
3h12m

Unless you're doing a BIG jump, like from legacy to UEFI, or SATA to NVMe, Windows will generally just figure it out.

There may be an occasional exception for when you're doing something weird (iirc 11th-13th gen Intel RST needs slipstreamed or manually added drivers unless you change controller settings in the BIOS, which may bite on laptops at the moment if you're unaware of having to do it).

But even for big jumps you can usually get it working with a bit of hackery pokery. Most recently I had to jump from a legacy C2Q system running Windows 10 to run bare metal on a 9th gen Core i3.

I ended up putting it onto a VM to run the upgrade from legacy to UEFI so I'd have something that'd actually boot on this annoyingly picky 9th gen i3 Dell system, but it worked.

I generally (ab)use Macrium Reflect, and have a copy of Parted Magic to hand, but it's extremely rare for me to find a machine I can't clone to dissimilar hardware.

The only one that stumped me recently was an XP machine running specific legacy software from a guy who died two decades ago. That I had to P2V. Worked though!

roomey
13 replies
11h15m

I recently had to set up a new laptop (xubuntu).

Previously I cloned but I this time I wanted to refresh some of the configs.

Using a usb-c cable to transfer at 10gb/s is so useful (as my only other option was WiFi).

When you plug the computers together they form an ad-hoc network and you can just rsync across. As far as I could tell the link was saturated so using anything else (other protocols) would be pointless. Well not pointless, it's really good to learn new stuff, maybe just not when you are cloning your laptop (joke)!

baq
10 replies
10h20m

Did it ‘just work’?

Serious question since last time I tried a direct non-Ethernet connection was sometime in the 90s ;)

adrian_b
9 replies
10h5m

I assume that those USB-C connectors were USB 4 or Thunderbolt, not USB 3.

With Thunderbolt and operating systems that support Ethernet over Thunderbolt, a virtual network adapter is automatically configured for any Thunderbolt connector, so connecting a USB C cable between 2 such computers should just work, as if they had Ethernet 10 Gb/s connectors.

With USB 3 USB C connectors, you must use USB network adapters (up to 2.5 Gb/s Ethernet).

roomey
7 replies
6h35m

Yes, both were modern enough laptops. Although the ideapad didn't advertise thunderbolt in the lspci, connecting that and the dell precision "just worked" (tm).

It's very useful for sending large data using minimal equipment. No need for two cat6 cables and a router for example.

greenshackle2
6 replies
4h30m

You just need a single Ethernet cable really, if the devices are reasonably modern. With Auto MDI-X the days of needing a crossover cable or a switch are over.

roomey
5 replies
4h6m

I'm not sure, first off the precision doesn't have an ethernet port at all!

Secondly, I'm not sure if a crossover cable setup will autoconfigure the network; as the poster above says, it's been since the 90s that I last bothered trying something like that!

greenshackle2
4 replies
3h54m

Right, that plan is somewhat foiled by most laptops not having ethernet ports anymore.

You don't need crossover cables anymore. You can just connect a regular patch cable directly between 2 devices. Modern devices can swap RX/TX as needed.

As for auto-configuration, that's up to the OS, but yeah you probably have to set up static IPs.
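
On Linux that's just a couple of commands on each side (interface names will differ):

  # machine A:
  ip addr add 10.0.0.1/24 dev enp0s31f6 && ip link set enp0s31f6 up
  # machine B:
  ip addr add 10.0.0.2/24 dev enp0s31f6 && ip link set enp0s31f6 up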

vetinari
3 replies
3h1m

They would receive an APIPA (169.254.0.0/16) address for IPv4 and a link-local address for IPv6, if the interface were brought up. Now, that second part is the question; Windows would do it, but Linux probably not.

greenshackle2
2 replies
2h50m

If both ends got APIPA addresses would they be able to talk to each other?

I was under the impression you have to set up the devices as each other's default gateway, but maybe I'm the one not up to modern standards this time.

adrian_b
0 replies
1h55m

Their randomly chosen addresses are within the same /16 subnetwork.

Therefore they can talk directly, without using a gateway.

Their corresponding Ethernet MAC addresses will be resolved by ARP.

The problem is that in many cases you would have to look at the autoconfigured addresses and manually enter the peer address on each computer.

For file sharing, either in Windows or using Samba on Linux, you could autodiscover the other computer and just use the name of the shared resource.

namibj
0 replies
7h14m

USB C is perfectly capable of connecting two equals, even with USB-2.

It merely requires one side to be capable of behaving as a device, with the other side behaving as a host.

I.e., unlike PCIe hubs, you won't get P2P bandwidth savings on a USB hub.

It just so happens that most desktop xhci controllers don't support talking "device".

But where you can, you can set up a dumb bidirectional stream fairly easily, over which you can run SLIP or PPP. It's essentially just a COM port/nullmodem cable. Just as a USB endpoint instead of as a dedicated hardware wire.

mixmastamyk
0 replies
2h14m

I did this also, but had to buy a ~$30+ Thunderbolt 4 cable to get the networking to work. Just a USB3-C cable was not enough.

The transfer itself screamed and I had a terabyte over in a few mins. Also I didn't bother with encryption on this one, so that simplified things a lot.

fransje26
0 replies
4h32m

Were you transferring the entire filesystem, after booting from a live disk, or were you transferring files over after having set up a base system?

dmos62
12 replies
10h36m

I recently had to copy around 200gb of files over wifi. I used rsync to make sure a connection failure doesn't mean I have to start over and so that nothing is lost, but it took at least 6 hours. I wonder what I could have done better.

Btw, what kinds of guarantees do you get with the dd method? Do you have to compare md5s of the resulting block level devices after?

maccard
3 replies
7h45m

I wonder what I could have done better.

6 hours is roughly 10MB/s, so you likely could have gone much much quicker. Did you compress with `-z`? If you could use Ethernet you probably could have done it at closer to 100MB/s on most devices, which would have been 35 minutes.

dmos62
2 replies
2h58m

No, I didn't use compression. Would it be useful over a high-bandwidth connection? I presume that it wasn't wifi bandwidth that was bottlenecking, though I've not really checked.

One thing I could have done is found a way to track total progress, so that I could have noticed that this is going way too slow.

skinner927
0 replies
2h44m

I’ve had compression noticeably speed things up even over a wired lan

mixmastamyk
0 replies
2h11m

One of the options of rsync is to print out transfer speed, --progress or verbose or similar.

throwawaaarrgh
1 replies
6h27m

If your transport method for rsync was ssh, that is often a bottleneck, as openssh has historically had some weird performance limits that needed obscure patches to get around. Enabling compression helps too if your CPU doesn't become a bottleneck

dmos62
0 replies
2h57m

Yeah, it was SSH. Thanks for the heads up.

klabb3
1 replies
4h55m

200gb of files over wifi […] took at least 6 hours

I wonder what I could have done better.

Used an Ethernet cable? That's not an impressive throughput amount over local. WiFi has like a million more sources of perf bottlenecks. Btw, just using a cable on ONE leg of the device => router => device path can help a lot.

dmos62
0 replies
3h2m

Yeah, I did that. One of the devices didn't have an ethernet port though.

jiripospisil
1 replies
8h27m

...but it took at least 6 hours...

Rsync cannot transfer more than one file at a time so if you were transferring a lot of small files that was probably the bottleneck. You could either use xargs/parallel to split the file list and run multiple instances of rsync or use something like rclone, which supports parallel transfers on its own.
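
A rough sketch of the xargs approach (assumes top-level entries without spaces in their names; rclone handles this more gracefully):

  # one rsync per top-level entry, up to 4 in parallel
  ls /src | xargs -P4 -I{} rsync -aR /src/./{} dest:/backup/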

mkesper
0 replies
2h21m

You could tar and zip the files into a netcat-pipe
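
E.g. something along these lines:

  # receiver:
  nc -l -p 1234 | tar xzf - -C /destination
  # sender:
  tar czf - /path/to/files | nc x.x.x.x 1234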

vetinari
0 replies
2h57m

200 GB in 6 hours is too slow for wifi, too.

dd doesn't skip empty blocks, like clonezilla would do.

downrightmike
0 replies
37m

Wifi shares the medium (air) with all other radios. It has a random time it waits after it stops if it sees a collision.

"Carrier-sense multiple access with collision avoidance (CSMA/CA) in computer networking, is a network multiple access method in which carrier sensing is used, but nodes attempt to avoid collisions by beginning transmission only after the channel is sensed to be "idle".[1][2] When they do transmit, nodes transmit their packet data in its entirety.

It is particularly important for wireless networks, where the alternative with collision detection CSMA/CD, is not possible due to wireless transmitters desensing (turning off) their receivers during packet transmission.

CSMA/CA is unreliable due to the hidden node problem.[3][4]

CSMA/CA is a protocol that operates in the data link layer."

https://en.wikipedia.org/wiki/Carrier-sense_multiple_access_...

jeroenhd
4 replies
11h14m

I'm sure there are benefits to this approach, but I've transferred laptops before by launching an installer on both and then combining dd and nc on both ends. If I recall correctly, I also added gzip to the mix to make transferring large null sections faster.

With the author not having access to an ethernet port on the new laptop, I think my hacky approach might've even been faster because of the slight boost compression would've provided, given that the network speed is nowhere near the speed limit compression would add to a fast network link.

Moneysac
2 replies
11h12m

could you explain how you do that exactly?

jeroenhd
0 replies
1h4m

Basically https://news.ycombinator.com/item?id=39676881, but also adding a pipe through gzip/gunzip in the middle for data compression.

I think I did something like `nc -l -p 1234 | gunzip | dd status=progress of=/dev/nvme0n1` on the receiving end and `dd if=/dev/nvme0n1 bs=40M status=progress | gzip | nc 10.1.2.3:1234` on the sending end, after plugging an ethernet cable into both devices. In theory I could've probably also used the WiFi cards to set up a point to point network to speed up the transmission, but I couldn't be bothered with looking up how to make nc use mptcp like that.

rincebrain
0 replies
10h55m

Well, if it's whole-disk encrypted, unless they told LUKS to pass TRIM through, you'd not be getting anything but essentially random data for the way the author described it.

rwmj
3 replies
10h12m

A lot of hassle compared to:

  nbdkit file /dev/nvme0n1
  nbdcopy nbd://otherlaptop localfile

throwawaaarrgh
2 replies
6h34m

This is actually much better because nbdcopy can handle sparse files, you can set the number of connections and threads to number of cores, you can force a flush before exit, and enable a progress bar. For unencrypted drives it also supports TLS.

verticalscaler
1 replies
4h50m

If you really want a progress bar chuck in a 'pv' somewhere into the command posted at the top of the thread.

rwmj
0 replies
1h27m

Or add nbdcopy -p option :-)

coretx
3 replies
10h27m

Doing this without systemd or directly from the netboot environment would be interesting.

bayindirh
2 replies
10h25m

The user didn't do it from systemd actually. Instead they booted GRML, which is not very different from a netboot environment, and hand-exported the device themselves.

coretx
1 replies
5h6m

GRML is a Debian live image, much different from a netboot environment. Look at https://ipxe.org/: boot firmware doing Fibre Channel over Ethernet, iSCSI, HTTP; booting from virtually everything but NVMe over TCP.

bayindirh
0 replies
4h56m

I use GRML almost daily, sometimes booting it over the network and loading it to RAM. We don't use iPXE specifically, but we use "vanilla" PXE regularly, too, generally to install systems with Kickstart or xCAT, depending on the cluster we're working on.

I'll be using GRML again tonight to rescue some laptop drives which were retrieved from a fire, to see what can be salvaged, for example. First in forensic mode to see what's possible, then to use dd_rescue on a N95 box.

MeteorMarc
2 replies
11h14m

How can this work if the laptops have different hardware, and consequently different device driver requirements?

TheNewAndy
0 replies
10h53m

Linux ships with all the drivers installed (for a fairly high value of "all") (typically)

AndroTux
0 replies
9h30m

With modern systems this rarely is a problem. Like Andy said, Linux comes with most drivers in the kernel. Windows just installs the appropriate drivers as soon as you boot it up with new hardware. Sometimes it takes a minute or two for it to boot up after the initial swap, but then it should be fine.

M95D
2 replies
9h43m

I don't understand why he didn't pipe btrfs over the network. Do a btrfs snapshot first, then btrfs send => nc => network => nc => btrfs receive. That way only blocks in use are sent.
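
Roughly (assuming / is a btrfs subvolume and the new disk is mounted at /mnt/new on the other machine):

  # destination:
  nc -l -p 1234 | btrfs receive /mnt/new
  # source (send needs a read-only snapshot):
  btrfs subvolume snapshot -r / /root-ro
  btrfs send /root-ro | nc x.x.x.x 1234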

tarruda
1 replies
8h22m

That's the first thing I thought when I saw he used btrfs. I use btrfs send/receive all the time via SSH and it works great. He could easily have set up an SSH server in the GRML live session.

There's one caveat though: With btrfs it is not possible to send snapshots recursively, so if he had lots of recursive snapshots (which can happen in Docker/LXD/Incus), it is relatively hard to mirror the same structure on a new disk. I like btrfs, but recursive send/receive is one aspect where ZFS is just better.

streb-lo
0 replies
4h55m

Yea, there needs to be a `snapshot -r` option or something. I like using subvolumes to manage what gets a snapshot but sometimes you want the full disk.

Gabrys1
2 replies
11h14m

I usually set up an initial distro and then copy over my /home. Later, I just need to install the debs I'm missing, but this has the benefit of not installing stuff I don't need anymore.

That said, I didn't know you could export NVMe over TCP like that, so still a nice read!

fransje26
0 replies
4h26m

Same here.

The only problem with that approach is that it also copies over the .config and .cache folders, most of which are possibly not needed anymore. Or worse, they might contain configs that override better/newer parameters set by the new system.

Hikikomori
0 replies
3h2m

~/oldhome/oldhome/oldhome ...

znpy
1 replies
8h48m

I love to see such progress, brought by nvme-over-tcp and systemd!

Not so many years ago doing something similar (exporting a block device over network, and mounting it from another host) would have meant messing with iscsi which is a very cumbersome thing to do (and quite hard to master).

pak9rabid
0 replies
3h0m

If you don't need routing, you could always use the vastly simpler ATA-over-Ethernet protocol.

jurgenkesker
1 replies
7h40m

Or just use Clonezilla? Then you also copy only the actual data blocks, and it can autoresize your partitions as well. That's how I always do it.

True, I do always just take the NVME disk out of the laptop and put it in a highspeed dock.

1970-01-01
0 replies
4h16m

Clonezilla is great. It's got one job and it usually succeeds the first time. My only complaint is that the initial learning curve requires tinkering. It's still not at the trust level of fire and forget. Experimenting is recommended, as a backup is never the same thing as a backup and restore, and even Clonezilla will have issues recreating partitions on disks that are very different from their source.

hcfman
1 replies
11h24m

Very nice read! I didn't know it existed. Love it.

hcfman
0 replies
11h20m

I love your explanation of your disk install with cryptsetup etc. I do a manual install as well, as I always install with both mirrored and encrypted disks. The combination of the two I didn't find as an easy install option (I think not at all) on the Ubuntus I installed. Good to see a lot of this low level OS talk here.

HenryBemis
1 replies
8h59m

This could also be titled: How to make one's life difficult 101 and/or I love experimenting.

I've bought a few hard disks in my lifetime. Many years ago one such disk came bundled with Acronis TrueImage. Back in the day you could just buy the darn thing; after a certain year they switched it to a subscription model. Anyhoo.. I got the "one-off" TrueImage and I've been using it ever since. I've 'migrated' it from the CD (physical mini-CD) to a USB flash disk, and have been using it ever since.

I was curious about the solution, as I recently bought a WinOS tablet and would like to clone my PC to it, but this looks like one too many hoops to jump through for something that TrueImage can do in 2 hours (while watching a movie).

greenshackle2
0 replies
4h9m

Next time, I'll be sure to do it the easy way by going back in time and getting a free copy of Acronis TrueImage in 1998.

tristor
0 replies
1h31m

This is very clever, although somewhat unnecessary, but still useful to know. The one thing I would call out is the author used WiFi because one of the laptops didn't have Ethernet. I've encountered this situation several times myself and I've found that nearly every modern laptop supports USB in both modes, so you can simply use a USB-C cable to connect the two laptops and get a pseudo-Ethernet device this way to interconnect them, no need to use WiFi. This is hundreds of times faster than WiFi.

transpute
0 replies
11h25m

Thanks AWS/Annapurna/Nitro/Lightbits for bringing NVMe-over-TCP to Linux.

https://www.techtarget.com/searchstorage/news/252459311/Ligh...

> The NVM Express consortium ratified NVMe/TCP as a binding transport layer in November 2018. The standard evolved from a code base originally submitted to NVM Express by Lightbits' engineering team.

https://www.lightbitslabs.com/blog/linux-distributions-nvme-...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

timetraveller26
0 replies
25m

I was in a situation like this recently, and just want to say that if opening the PC/laptop and connecting the drive to the new machine is possible, you should go that route; it's a little hassle compared to waiting for a transfer over the network, and waiting times can be deceptive.

This reminds me one of my favorite articles

https://blog.codinghorror.com/the-infinite-space-between-wor...

throwaway81523
0 replies
8h15m

Oh man I thought this was going to be about super fast networking by somehow connecting the PCI buses of two laptops. Instead it's the opposite, tunneling the NVMe protocol through regular Ethernet. Sort of like iSCSI back in the day.

Yeah there are much simpler approaches. I don't bother with container images but can install a new machine from an Ansible playbook in a few minutes. I do then have to copy user files (Borg restore) but that results in a clean new install.

telotortium
0 replies
2h42m

copy took seven hours because the new laptop didn't have an Ethernet port

Buy a USB to Ethernet adapter. It will come in handy in the future.

slicktux
0 replies
9m

Nothing like using Linux pipes and TCP to clone a rooted device for low level data recovery… ;)

ptman
0 replies
9h28m

Pipe with zstd (or similar)!

justsomehnguy
0 replies
10h25m

> Since the new laptop didn't have an Ethernet port, I had to rely only on WiFi, and it took about 7 and a half hours to copy the entire 512GB

That's because bs=40M and no status=progress.

jesprenj
0 replies
10h44m

and I'm not so familiar with resizing LUKS.

LUKS2 does not care about the size of the disk. If its JSON header is present, it will by default treat the entire underlying block device as the LUKS-encrypted volume/partition (sans the header), unless specified otherwise on the command line.
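
So after growing the underlying partition, refreshing an already-open mapping is a one-liner (mapping and filesystem names are illustrative):

  cryptsetup resize cryptroot       # grow the mapping to fill the enlarged partition
  btrfs filesystem resize max /     # then grow the filesystem on top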

jaxr
0 replies
6h37m

Today a new nvme is arriving and I need to do EXACTLY this. Thanks Mr poster :)

everfrustrated
0 replies
8h41m

Pretty cool.

Is there any way to mount a remote volume on Mac OS using NVMe-TCP?

danw1979
0 replies
7h35m

Couldn’t they have just used netcat ?

baq
0 replies
10h21m

If you directly connect devices over WiFi without an intermediate AP you should be able to double your transfer speed. In this scenario it might have been worth it.

account42
0 replies
7h54m

I haven't actually "installed" an OS on my desktops/laptops in decades, always just copy over the files and adjust as needed. Usually just create a new filesystem though to use the opportunity to update the file system type / parameters (e.g. block size), encryption etc. and then rsync the files over.

Still, if I was the planning ahead kind then using something more declarative like NixOS where you only need to copy your config and then automatically reinstall everything would probably be the better approach.

Levitating
0 replies
7h1m

I've switched laptops like 5 times in the past 2 years. Last time was yesterday.

I just take my nvme drive out of the last one and I put it into the new one.

I don't even lose my browser session.

Edit: If you do this (and you should) you should configure predictable network adapter names (wlan0,eth0).