
Timeshift: System Restore Tool for Linux

pixelmonkey
50 replies
18h36m

I've probably spent way too much time thinking about Linux backup over the years. But thankfully, around 2018 I found a setup that works really well for me, I've used it for the last few years, and I wrote up a detailed blog post about it just a month ago:

https://amontalenti.com/2024/06/19/backups-restic-rclone

The tools I use on Linux for backup are restic + rclone, storing my restic repo on a speedy USB3 SSD. For offsite, I use rclone to incrementally upload the entire restic repository to Backblaze B2.

The net effect: I have something akin to Time Machine (macOS) or Arq (macOS + Windows), but on my Linux laptop, without needing to use ZFS or btrfs everywhere.

Using restic + some shell scripting, I get full support for de-duplicated, encrypted, snapshot-based backups across all my "simpler" source filesystems. Namely: across ext4, exFAT, and (occasionally) FAT32, which is where my data is usually stored. And pushing the whole restic repo offsite to cloud storage via rclone + Backblaze completes the "3-2-1" setup straightforwardly.
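In shell terms, the two halves boil down to roughly this (repo path and bucket name are placeholders, and it assumes an rclone remote for B2 is already configured):

    # local: deduplicated, encrypted snapshot into the restic repo on the USB3 SSD
    restic -r /media/ssd/restic-repo backup ~/

    # offsite: incrementally sync the whole restic repo up to Backblaze B2
    rclone sync /media/ssd/restic-repo b2:my-bucket/restic-repo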

ratorx
24 replies
17h12m

One problem with file-based backups is that they are not atomic across the filesystem. If you ever back up a database (or really any application that expects atomicity while it's running), the backed-up copy might be corrupt and you could lose data. This might not seem like a big problem, but it can affect e.g. SQLite, which is quite popular as a file format.

Then again, the likelihood that the backup will be inconsistent is fairly low for a desktop, so it’s probably fine.

I think the optimal solution is:

1) Filesystem-level atomic snapshot (ZFS, btrfs, etc.)

2) Back up that snapshot at the file level (restic, borg, etc.)

This way you get atomicity as well as a file-based backup, which also protects you if the filesystem itself gets corrupted.
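On btrfs, for example, that combination might look roughly like this (assuming /home is a subvolume; paths are placeholders):

    # 1) take a read-only, atomic snapshot of the subvolume
    btrfs subvolume snapshot -r /home /home/.snapshots/backup-tmp

    # 2) back up the snapshot at the file level
    restic -r /media/ssd/restic-repo backup /home/.snapshots/backup-tmp

    # drop the snapshot once the backup has finished
    btrfs subvolume delete /home/.snapshots/backup-tmp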

magicalhippo
11 replies
15h54m

Windows' Volume Shadow Copy Service[1] allows applications like databases to be informed[2] when a snapshot is about to be taken, so they can ensure their files are in a safe state. They also participate in the restore.

While Linux is great at many things, backups are one area I find lacking compared to what I'm used to from Windows. There I take frequent incremental whole-disk backups. The backup program uses the Volume Shadow Copy Service to provide a consistent state (as much as possible). Being incremental, they don't take much space.

If my disk crashes I can be back up and running like (almost) nothing happened in less than an hour. Just swap out the disk and restore. I know, as I've had to do that twice.

[1]: https://learn.microsoft.com/en-us/windows/win32/vss/the-vss-...

[2]: https://learn.microsoft.com/en-us/windows/win32/vss/overview...

lmz
10 replies
12h34m

LVM snapshots are copy on write and can be used the same way.

magicalhippo
9 replies
10h29m

Any backup software that utilizes LVM in this way?

I.e. something that automatically creates a snapshot and sends the incremental changes since the previous snapshot to a backup destination like a NAS or S3 blob storage.

_flux
3 replies
7h57m

I think block-level snapshots would be very difficult to use this way.

I just make full deduplicated backups from LVM snapshots with kopia, but I've set that up only on one system; on the others I just use kopia as-is.

It takes some time, but that's fine for me. Previous backup of 25 GB an hour ago took 20 minutes. I suppose if it only walked files it knew were changed it would be a lot faster.

magicalhippo
2 replies
3h4m

Thanks, sounds interesting. So you create a snapshot, then let kopia process that snapshot rather than the live filesystem, and then remove the snapshot?

I suppose if it only walked files it knew were changed it would be a lot faster.

Right, for me I'd want to set it up to do the full disk, so it could be millions of files and hundreds of GB. But this trick should work with other backup software, so perhaps it's a viable option.

_flux
1 replies
2h47m

Exactly so.

Here's the script, should it be of benefit to someone, even if it of course needs to be modified:

    #!/bin/sh
    success=false
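    # teardown: unmount everything and remove the snapshots; unless called
    # with "no-exit", exit with the recorded success/failure status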
    teardown() {
      umount /mnt/backup/var/lib/docker || true
      umount /mnt/backup/root/.cache || true
      umount /mnt/backup/ || true
      for lv in root docker-data; do
        lvremove --yes /dev/hass-vg/$lv-snapshot || true
      done
    
      if [ "$1" != "no-exit" ]; then
        $success
        exit $?
      fi
    }
    
    set -x
    set -e
    teardown no-exit
    trap teardown EXIT
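    # create small copy-on-write snapshots of each logical volume to back up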
    for lv in root docker-data; do
      lvcreate --snapshot -L 1G -n $lv-snapshot /dev/hass-vg/$lv
    done
    
    mount /dev/hass-vg/root-snapshot /mnt/backup
    mount /dev/hass-vg/docker-data-snapshot /mnt/backup/var/lib/docker
    mount /root/.cache /mnt/backup/root/.cache -o bind
    
    chroot /mnt/backup kopia --config-file="/root/.config/kopia/repository.config" --log-dir="/root/.cache/kopia" snap create / /var/lib/docker
    kopia --config-file="/root/.config/kopia/repository.config" --log-dir="/root/.cache/kopia" snap create /boot /boot/efi
    success=true

magicalhippo
0 replies
3m

Awesome, thanks!

abbbi
2 replies
5h38m

wyng-backup does this. It uses the device mapper's thin_dump tools to allow for incremental backups between snapshots, too:

https://github.com/tasket/wyng-backup

edit: requires LVM thin-provisioned volumes

There is also thin-send-recv, which basically does the same as zfs send/recv, just with LVM:

https://github.com/LINBIT/thin-send-recv

It uses the same device mapper functionality to allow incremental sync of LVM thin volumes.

magicalhippo
1 replies
3h12m

Thanks for the pointers, looks very relevant.

It's just such low-effort peace of mind. Just a few clicks and I know that regardless of what happens to my disk or my system, I can be up and running in very little time with very little effort.

On Linux it's always a bit more work, but backup and restore is one of those things I prefer not be too complicated: the stress level is usually high enough when you need to do a restore, without also having to worry about forgetting some incantation steps.

abbbi
0 replies
2h53m

It depends. Doing a complete disaster recovery of a Windows system can IMHO be a real struggle, especially if you have to restore the system to different hardware, which the system state backup that Microsoft offers does not support AFAIK.

Backing up a Linux system in combination with ReaR:

https://github.com/rear/rear

and a backup utility of your choice for the regular backups has never failed me so far. I've used it to restore Linux systems to completely different hardware without any trouble.

lmz
1 replies
9h57m

I don't think the diffs are usable that way. They're actually more like an "undo log", in that the snapshot space is filled with "old blocks" as the actual volume takes writes. It's useful for the same reasons as volume shadow copy: a consistent snapshot of the block device. (Also, this can be very bad for write performance, as all writes are doubled: once to the snapshot and once to the real device.)

magicalhippo
0 replies
3h2m

Yeah ok, that makes sense. Write performance is a concern, but usually the backups run when there's little activity.

hashworks
6 replies
12h37m

While I do that, is that really the case? I can imagine database snapshots are consistent most of the time, but it can't be guaranteed, right? In the end it's like a server crash, the database suddenly stops.

lmz
4 replies
12h35m

Your DB is supposed to guarantee consistency even in server crashes. (The Consistency, Durability part of ACID).

mdavidn
3 replies
12h3m

That consistency is built on assumptions about the filesystem that may not hold true of a copy made concurrently by a backup tool.

E.g. the database might append to its write-ahead logs in a different order than the order in which the backup tool reads them.

grumbelbart2
2 replies
10h57m

That's why you do a filesystem snapshot before the backup, something supported by all systems. The snapshot appears constant to the backup tool, so read order or subsequent writes don't matter.

The main difference is that Windows and MacOS have a mechanism that communicates with applications that a snapshot is about to be taken, allowing the applications (such as databases) to build a more "consistent" version of their files.

In theory, of course, database files should always be in a logically consistent state (what if power goes out?).

Sakos
1 replies
10h15m

something supported by all systems

Well, supported by Windows and MacOS. Linux only if you happen to use zfs or btrfs, and also only if the backup tool you use happens to rely on those snapshots.

c45y
0 replies
7h22m

I believe basically any filesystem will work if you have it on LVM. Bonus: LV snapshots can be thin snapshots, too.

jlokier
0 replies
1h27m

That works if the backup uses a snapshot of the filesystem or a point in time. Then the backup state is equivalent to what you'd get if the server suddenly lost power, which all good ACID databases handle.

The GP is talking about when the backup software reads database files gradually from the live filesystem at the same time as the database is writing the same files. This can result in an inconsistent "sliced" state in the backup, which is different from anything you get if the database crashes or the system crashes or loses power.

The effect is a bit like when "fsync" and write barriers are not used before a server crash, and an inconsistent mix of things end up in the file. Even databases that claim to be append-only and resistant to this form of corruption usually have time windows where they cannot maintain that guarantee, e.g. when recycling old log space if the backup process is too slow.

_flux
2 replies
13h58m

You can also use LVM2, and then you get atomic snapshots with any filesystem (I think it needs to support fsfreeze, which I guess all of them do).
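The basic flow is something like this (VG/LV names are placeholders; the snapshot only needs enough space to absorb writes made while it exists):

    # create a copy-on-write snapshot of the logical volume
    lvcreate --snapshot --size 5G --name root-snap /dev/vg0/root

    # mount it read-only and back it up with your file-level tool of choice
    mount -o ro /dev/vg0/root-snap /mnt/snap
    restic -r /media/ssd/restic-repo backup /mnt/snap

    # clean up
    umount /mnt/snap
    lvremove --yes /dev/vg0/root-snap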

pixelmonkey
0 replies
6h56m

I never knew this. Thanks for sharing!

Am4TIfIsER0ppos
0 replies
25m

LVM requires unallocated space in the volume group, which makes it kind of garbage to use for snapshots.

pixelmonkey
1 replies
17h4m

I agree with you, of course. On macOS, Arq uses APFS snapshots, and on Windows, it uses VSS. It'd be nice to use something similar on Linux with restic.

In my linked post above, I wrote about this:

"You might think btrfs and zfs snapshots would let you create a snapshot of your filesystem and then backup that rather than your current live filesystem state. That’s a good idea, but it’s still an open issue on restic for something like this to be built-in (link). There’s a proposal about how you could script it with ZFS in this nice article (link) on the snapshotting problem for backups."

The post contains the links with further information.
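A ZFS version of that idea is roughly (dataset and repo names are placeholders):

    zfs snapshot tank/home@restic
    # snapshots are exposed read-only under the hidden .zfs directory
    restic -r /media/ssd/restic-repo backup /tank/home/.zfs/snapshot/restic
    zfs destroy tank/home@restic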

My imperfect personal workaround is to run the restic backup script from a virtual console (TTY) occasionally with my display server / login manager service stopped.

vladvasiliu
0 replies
13h18m

I run this from a ZFS snapshot. What I want backed up from my home dir lives on the same volume, so I don't have to launch restic multiple times. I have dedicated volumes for what I specifically want excluded from backups and ZFS snapshots (~/tmp, ~/Downloads, ~/.cache, etc).

I've been thinking of somehow triggering restic by zrepl whenever it takes a snapshot, but I haven't figured out a way of securely grabbing credentials for it to unlock the repository and upload to S3 without requiring user intervention.

carderne
5 replies
5h35m

Enjoyed the post, thanks. One question: why don’t you use restic+rclone on macOS? They both support it and I’d assume you could simplify your system a bit…

pixelmonkey
4 replies
5h24m

I only have one macOS system (a Mac Mini) and Arq works well for me. Also I prefer to use Time Machine for the local backups (to a USB3 SSD) on macOS since Apple gives Time Machine all sorts of special treatment in the OS, especially when it comes time to do a hardware upgrade.

setopt
3 replies
4h23m

I’ve also found Arq to be brilliant on MacOS. It’s especially nice on laptops, where you can e.g. set it to pause on battery and during working hours. Also, APFS snapshots is a nice thing given how many Mac apps use SQLite databases under the hood (Photos, Notes, Mail, etc.).

On Linux, the system I liked best was rsnapshot: I love its brutal simplicity (cron + rsync + hardlinks), and how easy it is to browse previous snapshots (each snapshot is a real folder with real files, so you can e.g. ripgrep through a date range). But when my backups grew larger I eventually moved to Borg to get better deduplication + encryption.
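The core trick behind that simplicity is essentially one rsync flag (paths are placeholders): unchanged files in the new snapshot are hard-linked to the previous snapshot instead of copied, and rsnapshot handles the rotation and scheduling around it.

    rsync -a --delete \
      --link-dest=/backups/daily.1 \
      /home/ /backups/daily.0/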

pixelmonkey
2 replies
3h24m

rsnapshot was definitely my favorite Linux option before restic. I find that restic gives me the benefits of chunk-based deduplication and encryption, but via `restic find` and `restic mount` I can also get many of the benefits of rsnapshot's simplicity. If you use `restic mount` against a local repo on a USB3 SSD, the FUSE filesystem is actually pretty fast.
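For example (repo path is a placeholder):

    # locate a file across all snapshots in the repo
    restic -r /media/ssd/restic-repo find 'notes-2023*'

    # browse every snapshot as a read-only FUSE filesystem, then grep as usual
    restic -r /media/ssd/restic-repo mount /mnt/restic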

setopt
1 replies
2h35m

Thanks for the info, I’ll have a closer look at Restic then. Borg also has a FUSE interface, but last time I tried it I found it abysmally slow – much slower than just restoring a folder to disk and then grepping through it. I used a Raspberry Pi as my backup server though, so the FUSE was perhaps CPU bound on my system.

pixelmonkey
0 replies
1h42m

Yea, I don't want to oversell it. The restic FUSE mount isn't anywhere near "native" performance. But, it's fast enough that if you can narrow your search to a directory, and if you're using a local restic repo, using grep and similar tools is do-able. To me, using `restic mount` over a USB3 SSD repo makes the mount folder feel sorta like a USB2 filesystem rather than a USB3 one.

bongobingo1
5 replies
12h57m

Do you have much of an opinion on why you went with Restic over Borg? The single Go binary is an obvious one, perhaps that alone is enough. I remember some people having unbounded memory usage with Restic, but that might have been a very old version.

_flux
0 replies
7h54m

This was basically one big reason why I went with https://kopia.io . The other might have been its native S3 support.

pixelmonkey
0 replies
6h59m

For me, these traits made restic initially attractive:

- encrypted, chunk-deduped, snapshotted backups

- single Go binary, so I could even back up the binary used to create my backups

- reasonable versioning and release scheme

- I could read, and understand, its design document: https://github.com/restic/restic/blob/master/doc/design.rst

I then just tried using it for a year and never hit any issues with it, so kept going, and now it's 6+ years later.

marcus0x62
0 replies
6h47m

I use both to try to mitigate the risk of losing data due to a backup format/program bug[1]. If I wasn't worried about that, I'd probably go with Borg but only because my offsite backup provider can be made to enforce append-only backups with Borg, but not Restic, at least not that I could find.[2] Otherwise, I have not found one to be substantially better than the other in practice.

1 - some of my first experiences with backup failures were due to media problems -- this was back in the days when "backup" pretty much meant "pipe tar to tape" and while the backup format was simple, tape quality was pretty bad. These days, media -- tape or disk -- is much more reliable, but backup formats are much more complex, with encryption, data de-dup, etc. Therefore, I consider the backup format to be at least as much of a risk to me now as the media. So, anyway, I do two backups: the local one uses restic, the cloud backup uses borg.

2 - I use rsync.net, which I generally like a lot. I wrote up my experiences with append-only backups, including what I did to make them work with rsync.net here: https://marcusb.org/posts/ransomware-resistant-backups/

hashworks
0 replies
12h41m

I use both, and I've never had problems with either of them. Restic has the advantage that it supports a lot more endpoints than ssh/borg, e.g. S3 (or anything that rclone supports). Also, borg might be a little bit more complicated to get started with than restic.
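For instance, restic can use any configured rclone remote directly as a repository backend (remote and bucket names are placeholders):

    restic -r rclone:b2:my-bucket/restic-repo init
    restic -r rclone:b2:my-bucket/restic-repo backup ~/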

tlavoie
2 replies
18h0m

One question, why use rclone for the Backblaze B2 part? I use restic as well, configured with autorestic. One command backs up to the local SSD, local NAS, and B2.

pixelmonkey
1 replies
17h38m

I explain in the post. Here's a copypasta of the relevant paragraph:

"My reasoning for splitting these two processes — restic backup and rclone sync — is that I run the local restic backup procedure more frequently than my offsite rclone sync cloud upload. So I’m OK with them being separate processes, and, what’s more, rclone offers a different set of handy options for either optimizing (or intentionally throttling) the cloud-based uploads to Backblaze B2."

tlavoie
0 replies
15h20m

So you did! Sorry, hadn't read the post beforehand. Oh, and I too mourned the loss of CrashPlan. Being in Canada, I didn't have the option of having a restore drive sent if needed, but I thought it was a brilliant idea. On the other hand, I think Backblaze might offer that!

PhilippGille
2 replies
12h14m

Do you only back up your home directory, or also others? I didn't find info about that in your post.

pixelmonkey
1 replies
6h57m

I back up everything except for scratch/tmp/device-style directories. Bytes are cheap to store, my system is a rounding error vs my /home, and deduping goes a long way.
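A restic invocation along those lines might look like this (the exclude list is illustrative):

    restic -r /media/ssd/restic-repo backup / \
      --exclude /dev --exclude /proc --exclude /sys \
      --exclude /run --exclude /tmp --exclude /var/tmp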

PhilippGille
0 replies
21m

I'm less worried about the size and more about something breaking when doing a recovery.

Let's say you're running Fedora with Gnome and you want to switch to KDE without doing a fresh install. You make a backup, then go through the dozens of commands to switch, with new packages installed, some removed, display managers changed etc. Now something doesn't work. Would recovering from the restic backup reliably bring the system back in order?

The tool from the original post seems to be geared towards that, while most Restic and rclone examples seem to be geared towards /home backup, so I wonder how much this is actually an alternative.

e12e
1 replies
6h36m

I've been mulling over setting up restic/kopia backups - and having recently discovered that httm[1] supports restic directly, in addition to zfs and more - I think I finally will.

[1] https://github.com/kimono-koans/httm

pixelmonkey
0 replies
6h9m

I only discovered httm thanks to this thread, and I'll definitely be trying it out for the first time today. Maybe I'll add an addendum to my blog post about it.

kmarc
0 replies
11h1m

For home backup, I have a similar setup with dedup, local+remote backups.

Borgbackup + rclone (or aws) [1]

It works so well, I even use this same script on my work laptop(s). rclone enables me to use whatever quirky file sharing solution the current workplace has.

[1]: https://github.com/kmARC/dotfiles/blob/master/bin/backup.sh

bulletmarker
0 replies
5h36m

I have used pretty much the same setup for the last 6 years. I run borg to a small server then rclone the encrypted backup nightly to B2 storage.

bobek
0 replies
12h47m

I have ended up with something very similar. Restic/rclone is awesome combo. https://bobek.cz/restic-rclone/

tombert
16 replies
20h39m

This reminds me of the default behavior of NixOS. Whenever you make a change in the configuration for NixOS and rebuild it, it takes a snapshot of the system configurations and lets you restore after a reboot if you screw something up.

Similarly, it doesn't do anything in regards to user files.

choward
15 replies
20h22m

I can't tell you the number of times I see a project and think to myself "NixOS already solves that problem but better."

alfalfasprout
5 replies
18h51m

The problem, unfortunately, is that Nix often finds itself in a chicken and egg scenario where nixpkgs fails to provide a lot of important packages or has versions that are old(er). But for there to be more investment in adding more packages, etc. you need more people using the ecosystem.

arianvanp
2 replies
10h41m

Nixpkgs is the largest and most up to date package repository according to https://repology.org/

I'm honestly curious what packages you have a problem with

SAI_Peregrinus
1 replies
6h8m

Proprietary package vendors often provide a .deb that assumes Ubuntu. Maybe also an .rpm for Red Hat if you're lucky.

tombert
0 replies
3h31m

That's definitely true, but maybe I've just been lucky, pretty much every proprietary program I've wanted to install in NixOS has been in Nixpkgs.

Skype, Steam, and Lightworks are all directly available in the repos and seem to work fine as far as I can tell. I'm sure there are proprietary packages that don't work or aren't in the repo, but I haven't really encountered them.

atlintots
0 replies
17h52m

Luckily Nix is also an excellent build system, and does provide escape hatches here and there when you really need them (e.g. nix-ld).

NoThisIsMe
0 replies
16h9m

What are you talking about? Nixpkgs is one of the largest and most up-to-date distro package repos out there.

fallingsquirrel
4 replies
20h17m

In fairness, this app supports snapshotting your home directory as well, and that's not solvable with Nix alone. In fact, I'm running NixOS and I've been meaning to set up Timeshift or Snapper for my homedir, but alas, I haven't found the time.

__MatrixMan__
3 replies
20h4m

Is there something about your home directory that you'd want to back up that is not covered by invoking home manager as a nix module as part of nixos-rebuild?

https://nix-community.github.io/home-manager/index.xhtml#sec...

To me, it's better than a filesystem-backup because the things that make it into home manager tend to be exactly the things that I want to back up. The rest of it (e.g. screenshots, downloads) aren't something I'd want in a backup scheme anyhow.

SAI_Peregrinus
1 replies
5h58m

Data (documents, pictures, source code, etc.) is not handled by home-manager. Backing up home.nix saves your config, but the data is just as if not more important.

__MatrixMan__
0 replies
36m

Hmm, different strokes I guess. Maybe it's just that too much kubernetes has gone to my head, but I see files as ephemeral.

Code and docs are in source control. My phone syncs images to PCloud when I take them. Anything I download is backed up... wherever I downloaded it from.

fallingsquirrel
0 replies
19h46m

I want to keep snapshots of my work. I run nightly backups which have come in handy numerous times, but accessing the cloud storage is always slow, and sometimes I've even paid a few cents in bandwidth to download my own files. It would be a lot smoother if everything was local and I could grep through /.snapshots/<date>/<project>.

pmarreck
1 replies
16h21m

Imagine installing an entirely new window manager without issue, and then undoing it without issue.

NixOS does that. And I'm pretty sure that no other flavor of Linux does. First time I realized I could just blithely "shop around window managers" simply by changing a couple of configuration lines, I was absolutely floored.

NixOS is the first Linux distro that made me actually feel like I was free to enjoy and tinker with ALL of Linux at virtually no risk.

There is nothing else like it. (Except Guix. But I digress.)

tombert
0 replies
3h12m

Completely agree; being able to transparently know what the system is going to do by just looking at a few lines of text is sort of game-changing. It's trivial to add and remove services, and you can be assured that you actually added and removed them, instead of just being "pretty sure" about it.

Obviously this is just opinion (no need for someone to supply nuance) but from my perspective the NixOS model is so obviously the "correct" way of doing an OS that it really annoys me that it's not the standard for every operating system. Nix itself is an annoying configuration language, and there are some more arcane parts of config that could be smoothed over, but the model is so obviously great that I'm willing to put up with it. If nothing else, being able to trivially "temporarily" install a program with nix-shell is a game-changer to me; it changes the entire way of how I think about how to use a computer and I love it.

Flakes mostly solve my biggest complaint with NixOS, which was that it was kind of hard to add programs that weren't merged directly into the core nixpkgs repo.

autoexecbat
1 replies
20h6m

I've seen the configuration.nix file; it doesn't look like it captures specific versions. How does it handle snapshotting?

somnic
0 replies
18h59m

For managing your configuration.nix file itself you can just use whichever VCS you want; it's a text file that describes one system configuration, and managing multiple versions and snapshots within that configuration file is out of scope.

For the system itself, each time you run "nixos-rebuild switch" it builds a system out of your configuration.nix, including an activation script which sets environment variables and symlinks and stops and starts services and so on, adds this new system to the grub menu, and runs the activation script. It specifically doesn't delete any of your old stuff from the nix store or grub menu, including all your older versions of packages, and your old activation scripts. So if your new system is borked you can just boot into a previous one.
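On the command line those retained systems show up as generations, and you can also roll back without rebooting (a sketch, using the default NixOS system profile path):

    # list all retained system generations
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

    # switch back to the previous generation without rebooting
    sudo nixos-rebuild switch --rollback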

LorenDB
9 replies
18h47m

I prefer using openSUSE, which is tightly integrated with snapper[0], making it simple to recover from a botched update. I've only ever had to use it when an update broke my graphics drivers, but when you need it, it's invaluable.

Snapper on openSUSE is integrated with both zypper (package manager) and YaST (system configuration tool) [1], so you get automatic snapshots before and after destructive actions. Also, openSUSE defaults to btrfs, so the snapshots are filesystem-native.
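From a shell, those automatic snapshots look roughly like this (snapshot numbers are placeholders):

    # list the automatic pre/post snapshots zypper and YaST have created
    snapper list

    # see what changed between two snapshots, or roll the system back to one
    snapper status 42..43
    sudo snapper rollback 42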

[0]: http://snapper.io/

[1]: https://en.opensuse.org/Portal:Snapper

Arnavion
4 replies
18h40m

And it's also integrated into the bootloader (if you use one of the supported ones). The bootloader shows you one boot entry per snapshot so you can boot an old snapshot directly.

Spunkie
2 replies
18h18m

This is a feature I've really been missing since switching from grub to systemd-boot.

Has anyone figured out an easy way to get this back with systemd-boot?

Arnavion
0 replies
17h23m

Some time ago they did add systemd-boot as a supported option and apparently it also generates one entry per snapshot.

https://news.opensuse.org/2024/03/05/systemd-boot-integratio...

https://en.opensuse.org/Systemd-boot#Installation_with_full_...

https://github.com/openSUSE/sdbootutil

I haven't tried it though so I don't know for sure. (I have my own custom systemd-boot setup that predates theirs, and since my setup uses signed UKIs and theirs doesn't, I don't care to switch to theirs. I can still switch snapshots manually with `btrfs subvol` anyway; it just might require a live CD in case the default snapshot doesn't boot.)

jwrallie
0 replies
18h33m

Very nice, sometimes people claim that the only difference between distros is the repository and package management tools.

It is when the defaults make the parts integrate nicely like this that "the whole is greater than the sum of its parts" comes into play.

Barrin92
1 replies
13h3m

openSUSE honestly is so criminally underrated. I've been using Tumbleweed for a few years for my dev/work systems and YaST is just great. Also that they ship fully tested images for their rolling release is just so much saner. OBS is another fantastic tool that I see so few people talking about, despite software distribution still being such a sore point in the linux ecosystem.

Rinzler89
0 replies
9h23m

>openSUSE honestly is so criminally underrated

Because it's not very popular in the US, which has mostly cemented around Fedora/Ubuntu/Arch, so you don't hear much about any other distros; and most other countries around the world tend to just adopt what they learn from the US, due to the massively influential gravitational field the US has on the tech field.

But in the German-speaking world many know about it. It's a shame that despite the internet being relatively borderless, it's still quite insular and divided. I'm not a native German speaker, but it helps to know the language since there's a lot of good Linux content out there written in German.

whiztech
0 replies
9h33m

I use btrfs-assistant with Kubuntu because I can't get Timeshift to work properly. It's basically some kind of front-end for snapper and btrfsmaintenance.

[0]: https://gitlab.com/btrfs-assistant/btrfs-assistant

Shorel
7 replies
5h41m

My system is different and simpler:

The root partition / and the home partition /home are different.

There's a /home/etc/ folder with a very small set of configuration files I want to save; everything else is nuked on reinstall.

When I do a reinstall, the root partition is formatted, the /home partition is not.

This allows me to test different distros and not be tied to any particular distro or any particular backup tool. If I test a distro and I don't like it, it is very easy to change.

ijustlovemath
4 replies
5h37m

/home/etc or ~/etc?

birdiesanders
3 replies
5h25m

Those are equivalent.

michaelmior
0 replies
5h14m

On most systems, that is not the case. Typically a user's home directory is `/home/USERNAME` so `~/etc` would be `/home/USERNAME/etc`.

ijustlovemath
0 replies
4h30m

Try it for yourself:

[ /home/etc = ~/etc ] || echo theyre different

execat
0 replies
5h13m

No. ~etc is equivalent to /home/etc. ~/etc is the same as /home/<current user>/etc.

dataflow
1 replies
5h31m

The implication here is that your home directory can actually work across distros? How in the world do you do that? Surely you have to encounter errors sometimes when cached data or configs point to nonexistent paths, or other incompatibilities come up?

ijustlovemath
0 replies
4h19m

Typically ~ contains user specific config files for applications, which are (usually) programmed to be distro agnostic. If you're installing the same applications across distros, I don't see why this wouldn't work without too much effort. After all, most distros are differentiated by just two things:

- their package management tooling

- their filesystem layout (eg where do libraries etc go)

phoe-krk
4 replies
20h37m

I'd like some sort of a comparison with Duplicity/Déjà Dup that seems to be the default on Gnome/Cinnamon.

mkesper
1 replies
8h2m

Is that usable nowadays? Last time I checked it was hellishly slow compared to borg.

phoe-krk
0 replies
8h1m

Usable enough for me. I don't mind since it's running in the background anyway.

fallingsquirrel
1 replies
20h15m

Different categories of app. Duplicity is geared toward backing up files to a separate machine, and this tool snapshots your filesystem on the same machine.

phoe-krk
0 replies
20h13m

OK, thanks. I was confused because Time Machine is capable of backing up to a remote device.

metadat
4 replies
20h39m

Can timeshift work with ext4 filesystems?

I know it won't have the atomicity of a CoW fs, but I'd be fine with that, as the important files on my systems aren't often modified, especially during a backup - I'd configure it to disable the systemd timers while the backup process is running.

mbreese
1 replies
19h34m

Can’t you also snapshot LVM volumes directly? So if you have an LVM volume, it shouldn’t matter what the filesystem is, provided it is sync’d… in theory.

(I’ve only done this on VMs that could be paused before the snapshot, so YMMV.)

nijave
0 replies
19h26m

Yeah, you can take live snapshots with LVM. You can use wyng-backup to incrementally take and back them up somewhere outside LVM. This has been working pretty well for me to backup libvirt domains backed by LVs

tamimio
0 replies
18h42m

Yep, been using it for a while, including with ext4. You can have scheduled snapshots too. It saved my arse a few times, especially when you install something that cannot be easily uninstalled, like Hyprland or similar.

gballan
0 replies
20h32m

Just getting started with it--but I think so, using rsync.

OldMatey
4 replies
18h55m

I adore Timeshift. It has made my time on Linux so much more trouble free.

I have used Linux for 10+ years, but over the years I have spent hours, days and weeks trying to undo or fix little issues I introduced by tinkering around with things. Often I seem to break things at the worst times, right as I am starting to work on some new project or something that is time sensitive.

Now, I can just roll back to an earlier stable version if I don't want to spend the time right then on troubleshooting.

I've enabled this on all my family members' machines and taught them to just roll back when Linux goes funky.

pmarreck
2 replies
16h39m

While it's not quite average-user-friendly (YET), one of the reasons I switched to NixOS is because it provides this out-of-the-box. I was frustrated with every other Linux for the reasons you cite, but NixOS I can deal with, since 1) screwing up the integrity of a system install is hard to begin with, 2) if you DO manage to do it, you can reboot into any of N previous system updates (where you set N).

Linux is simultaneously the most configurable and the most brittle OS IMHO. NixOS takes away all the brittleness and leaves all the configurability, with the caveat that you have to declaratively configure it using the Nix DSL.

rrix2
1 replies
13h41m

NixOS also has out-of-the-box support for ZFS auto snapshots, where you can tell it to keep 3 monthly, four weekly, 24 hourly, and frequent snapshots every fifteen minutes, so you can time-shift your home directory, too.

pmarreck
0 replies
5h6m

I'm zfs on root and haven't set that up yet! I should

gooseyman
0 replies
15h38m

I enabled this four months ago and I have had the same experience.

It’s not that I couldn’t retype the config file I accidentally wrote over while tinkering, but I like the safety that comes with Timeshift to try and fail a few times.

Hard lessons come hard. This softens those lessons a little while maintaining the learning.

sieve
3 replies
13h4m

ZFS Snapshots + Sanoid and Syncoid to manage and trigger them is what people should be doing. Unfortunately, booting from ZFS volumes seems to be some form of black art unless things have changed over the last couple of years.

The license conflict, and OpenZFS always having to chase kernel releases (often resulting in delayed releases for new kernels), mean I cannot confidently use it with rolling-release distros on the boot drive. If I muck something up, the data drives will be offline for a few minutes till I fix the problem. Doing the same with the boot drive is pain I can live without.

rabf
2 replies
6h9m

Best option to date: https://github.com/zbm-dev/zfsbootmenu

A shame most distros' installers don't support it natively, but an encrypted rootfs on ZFS is great once you get it set up.

sieve
1 replies
5h23m

Yeah.

I am somewhat wary of trying this, mucking something up and wasting a lot of time wrestling with it. I will probably play around with it in a VM and use it during the next SSD upgrade.

Would have been so much better if the distros showed more interest in ZFS.

aeadio
0 replies
3h4m

In principle there's no reason you can't install this next to GRUB in case you're wary. If you're not using ZFS native encryption, and make sure not to enable some newer zpool features, GRUB booting should work for ZFS-on-root.

That said, I've been using the tool for a while now and it's been really rock solid. And once you have it installed and working, you don't really have to touch it again, until some hypothetical time when a new backward-incompatible zpool feature gets added that you want to use, and you need a newer ZFSBootMenu build to support it.

Because it's just an upstream Linux kernel with the OpenZFS kmod, and a small dracut module to import the pool and display a TUI menu, it's mechanically very simple, and it relies on core ZFS support in the Linux kernel module and userspace that's already pretty battle-tested.

After seeing people in IRC try to diagnose recent GRUB issues with very vanilla setups (like ext4 on LVM), I'm becoming more and more convinced that the general approach used by ZFSBootMenu is the way to go for modern EFI booting. Why maintain a completely separate implementation of all the filesystems, volume managers, disk encryption technologies, when a high quality reference implementation already exists in the kernel? The kernel knows how to boot itself, unlock and mount pretty much any combination of filesystem and volume manager, and then kexec the kernel/initrd inside.

The upsides to ZFSBootMenu, OTOH:

- Supports all ZFS features from the most recent OpenZFS versions, since it uses the OpenZFS kmod

- Select a boot environment (and change the default boot environment) right from the boot loader menu

- Select specific kernels within each boot environment (and change the default kernel)

- Edit the kernel command line temporarily

- Roll back boot environments to a previous snapshot

- Rewind to a pool checkpoint

- Create, destroy, promote and orphan boot environments

- Diff boot environments against some previous snapshot to see all file changes

- View pool health / status

- Jump into a chroot of a boot environment

- Get a recovery shell with a full suite of tools available, including zfs and zpool, in addition to many helper scripts for managing your pool/datasets and getting things back into a working state before either relaunching the boot menu or just directly booting into the selected dataset/kernel/initrd pair

- Even supports user-mode SecureBoot signing: you just need to pass the embedded dracut config the right parameters to produce a unified image, and sign it with your key of choice. No need to mess around with shim and separate kernel signing.

nurettin
3 replies
18h58m

Timeshift saved my system so many times over the past 6-7 years: botched upgrades, experimenting with desktop environments, destroying configuration defaults. It works and does what it says on the tin.

prmoustache
1 replies
10h26m

How can you "botch" upgrades so many times?

I may have had only one update that went wrong in 30 years of using Linux and that was just a bug introduced by a gfx driver in a new minor kernel version. I downgraded it and waited for the bug to be fixed upstream and that was it.

nurettin
0 replies
3h49m

bravo, I guess?

tamimio
0 replies
18h41m

Can’t agree more with this, it does what it says!

jenscow
3 replies
18h18m

I use BackInTime, which works in a similar way but is much more configurable. I have hourly backups of all my code for the past day, then a single daily for the past week, etc.

Saved my ass a few times.

Springtime
1 replies
14h43m

Sounds like rsnapshot (rsync with hardlinks and scheduling) but the BackInTime repo doesn't mention any comparison of how it's different, though Timeshift says they're similar. Anyone have experience with BiT vs rsnapshot?

bayindirh
0 replies
10h47m

BackInTime works similarly to Apple's Time Machine. It uses hardlinks + new files. Plus, it keeps the settings for that backup inside the repository itself, so you can install the tool, point it at the folder, and start restoring.

On top of that, BiT supports network backups and multiple profiles. I've been using it on my desktop systems with multiple profiles for years and it's very reliable.

However, it's a GUI-first application, so for server use Borg is a much better choice.

raudette
0 replies
4h3m

I've used BackInTime since 2010. I loved that, even without using the tool, you could just poke through the file structure, and get an old version of any backed up file.

dmitrygr
3 replies
20h22m

similar to the System Restore feature in Windows and the Time Machine tool in Mac OS

This makes no sense! System Restore is a useless wart that just wastes time making "restore points" at every app/driver install and can rarely (if ever) produce a working system when used to "restore" anything. It does not back up user data at all. Time Machine is a whole-system backup solution that seems to work quite well and does back up user data.

To me the quoted statement might as well read "a tool similar to knitting needles (in hobby shops) and dremels (in machine shops)"

Reading their description further, it seems like they are implementing something similar to Time Machine (within the confines of what Linux makes possible), and not at all like "System Restore". This seems sane, as it implements something that is actually useful. They, sadly, seem to gloss over the consequences of using a non-btrfs FS with this tool, only mentioning that btrfs is needed for byte-exact snapshots. They do not mention what sort of byte-inexactness ext4 users should expect...

twodave
0 replies
19h39m

My main use of system restore was to return to a “clean” install + just the bare minimum installs I needed back when windows was more likely to atrophy over time. I agree it is mostly useless today.

nijave
0 replies
20h5m

I believe System Restore takes a registry backup and can recover from a bad driver install but it's been years since I used it last. I think just about anything System Restore does can be replicated by "just fixing it" in Safe Mode but I think System Restore is geared for less technical folks.

Newer versions of Windows have File History to backup user data (I don't think they have an integrated system/file solution quite like Time Machine though).

However it makes some sense to keep system/user data separate. You don't want to lose your doc edits because you happened to have a bad driver upgrade at the same time. Likewise, you don't want to roll your entire system back to get an old version of a doc.

Time Machine is trivial to implement (without the UI) with disk snapshots (that's what it does--store disk snapshots to an external disk)

magicalhippo
0 replies
18h17m

They're talking about the Volume Shadow Copy Service[1], which effectively provides snapshots[2] of the filesystem.

Which files are part of a shadow copy is determined by the one creating a shadow copy, so it could include user data.

You can view and access the files in a shadow copy using ShadowExplorer[3] if you don't have the pro versions.

[1]: https://learn.microsoft.com/en-us/windows-server/storage/fil...

[2]: https://learn.microsoft.com/en-us/windows/win32/vss/the-vss-...

[3]: https://www.shadowexplorer.com/

yuumei
2 replies
20h50m

Has the btrfs subvolume quota bug been fixed yet? I always had issues when using it.

sschueller
1 replies
19h20m

I don't know, but Synology uses btrfs now as well, and if something crucial like that were broken, I don't think they would support it on a NAS.

marcus0x62
0 replies
19h3m

Synology uses custom extensions to BTRFS for much of their functionality.

gchamonlive
2 replies
18h2m

I use a series of scripts to make daily Borg backups to a local repository: https://github.com/gchamon/borg-automated-backups

Currently the local folder is a samba mount so it's off-site.

The only tip I'd have for people using Borg is to verify your backups frequently. A repo can get corrupted without much warning. Also, if you want quick and somewhat easy monitoring that backups are being created, you can use webmin to watch for modifications in the backup folder and send an email if a backup hasn't shown up in a while. Similarly, you can regularly scan the Borg repo and send an email in case of failures for manual investigation.

This is low tech, at least lower tech than elastic stack or promstack, but it gets the job done.
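The verification itself is basically a periodic (repo path is a placeholder):

    # check repository metadata and archive consistency
    borg check /path/to/repo

    # optionally also re-read and verify every data chunk (much slower)
    borg check --verify-data /path/to/repo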

khimaros
1 replies
10h23m

I've had a positive experience with borgmatic, which is available in the Debian repos.

gchamonlive
0 replies
8h48m

Neat! I'll take a look, thanks!

exe34
2 replies
20h31m

Oh, this brings back memories. I found a script that did this about 15 years ago; it kept three versions of backups using rsync and hard links to avoid duplication.

exe34
0 replies
11h30m

rsnapshot was originally based on an article called Easy Automated Snapshot-Style Backups with Linux and Rsync, by Mike Rubel.

must have been this one :-D thanks for finding it!

umvi
1 replies
19h34m

Creates filesystem snapshots using rsync+hardlinks

Sounds like it works similarly to git fork on GitHub? That is, if no files have changed, the snapshot doesn't take up any extra room?

Izkata
0 replies
15h34m

Directories and hardlinks take up space, just very little.

It would make sense to hardlink a directory if everything in that tree was unchanged, but no filesystem will allow hardlinking a directory due to the risk of creating a loop (hardlinking to a parent directory), so directories are always created new and all files in the tree get their own hardlink.

Apple's Time Machine was given an exception in their filesystem to allow it, since they have control over it and can ensure no such loops are created. So it doesn't pay the penalty of creating hardlinks for every single individual file every time.

nubinetwork
1 replies
4h3m

Isn't timeshift what apple calls their snapshot/backup thingy?

aaronmdjones
0 replies
35m

No, that's Time Machine.

e12e
1 replies
18h58m

Hmm, this doesn't appear to be what I hoped it was:

Timeshift is similar to applications like rsnapshot, BackInTime and TimeVault but with different goals. It is designed to protect only system files and settings. User files such as documents, pictures and music are excluded.

On the other hand, a quick search looking for "that zfs based time machine thing" did reveal a new (to me) project that looks very interesting:

https://github.com/kimono-koans/httm

tamimio
0 replies
18h38m

You can include the user files too in the home directory. I have some snapshots that include them and some that do not, so you are covered both ways.

croniev
1 replies
12h29m

Timeshift does not work for me because I encrypted my SSD and decrypt it on boot, but Linux sees every file twice, once encrypted and once decrypted, and thinks my storage is full, so Timeshift refuses to make backups due to no storage. At least that's as far as I understand it atm.

sulandor
0 replies
12h2m

linux sees every file twice, once encrypted and once decrypted

fixing this should prove profitable

ThinkBeat
1 replies
20h12m

A bit of a side note and a bit of an old-man reveal: it would be nifty to have the backup system write the snapshots to CD/DVD/Blu-ray discs.

I remember working in a company that had a robot WORM system. It would grab a disc, the disc would be processed, then it would take it out and place it among the archives. If a restore was needed, the robot would find the backup and read off the data.

I never worked directly on the system, and I seem to remember there was a window that the system could keep track of (naturally) but older disks were stored off site somewhere for however long that window was.

(Everything was replicated to a fully 100% duplicate system geographically highly separated from the production system.)

gballan
0 replies
19h53m

AFAIK timeshift can use any mount. I tried a USB stick, but it was too slow. Now I'm experimenting with a partition on a second drive.

Lord_Zero
1 replies
16h8m

I just switched from windows to Mint and the first thing it asked me was to configure backups and snapshots and stuff. Pretty cool!

Groxx
0 replies
13h46m

Mint's first-launch welcome-list is excellent. It's a relatively small thing but it helps a lot.

trinsic2
0 replies
13h31m

Don't forget Aptik, great for migrating a system to a new distro.

tracker1
0 replies
1h58m

I've just got a simple script that uses rclone to copy most of my home directory to my NAS. For nearly everything else, I don't mind if I have to start mostly from scratch.

stevefan1999
0 replies
15h13m

Can someone recommend a solution that works well with immutable distros such as Project Bluefin or Fedora Kinoite/Silverblue? We just need to back up maybe /etc and dotfiles. It would also be great if it could back up NixOS too.

pmarreck
0 replies
16h41m

Yet another solution that is wholly unnecessary in NixOS. Nice idea, though, since you can too easily screw up every other Linux.

kkfx
0 replies
9h41m

Nice UI :-)

Random notes/suggestions

- rsync is not a snapshot tool, so while in most cases we can rsync a live volume without issues on a desktop, it's not a good idea to do so

- zfs support in 2024 is a must; btrfs honestly is the proof of how NOT to manage storage, like stratis

- it seems to be not so much a backup tool, which is perfectly fine, but since the target seems to be end users who are not very IT literate, that should be stated clearly...

ivanjermakov
0 replies
7h6m

The magical thing about Timeshift is that you can use it straight from a live CD. It will find your root partition and your backups, and restore them together with the boot partition.

crabbone
0 replies
1h19m

My first "real" experience with Linux was with Wubi (Ubuntu packaged as a Windows program). I think it was based on Ubuntu version 6 or 8.

I also tried to update it when the graphical shell displayed a message saying that an update was available. Of course, it bricked the system.

I've switched from Ubuntu to Mint to Debian to Fedora to Arch to Manjaro for personal use and had to support a much wider variety of distributions professionally. My experience so far has been that upgrades inevitably damage the system. Most don't survive even a single upgrade. Arch-like systems survive several major package upgrades, but also start falling apart with time. Every few years enough problems accumulate that merit either a complete overhaul or just starting from scratch.

With this lesson learned, I don't try to work with backups for my own systems. When the inevitable happens, I try to push forward to the next iteration, and if some things are lost, then so be it. To complement this, I try to make the personal data as small and as simple to replicate and modify going forward as possible. I.e. I would rule against using filesystem snapshots in favor of storing the file contents. I wouldn't use symbolic links (in that kind of data) because they can either break or not be supported by the archive tool. I wouldn't rely on file ownership or permissions (god forbid ACLs!). I try to remove as much "formatting" information as possible... so I end up with either text files or images.

This is not to discourage someone from building automated systems that can preserve much richer assembly of data. And for some data my approach would simply be impossible due to requirements. But, on a personal level... I think it's less of a software problem and more of a strategy about how not to accumulate data that's easy to lose.

8organicbits
0 replies
5h5m

I've found Debian Stable to be extremely stable, especially in recent years. I honestly don't think about system restore as much as I worry about a drive crashing or a laptop getting stolen. I assumed Linux Mint LTS was similarly stable.

Folks who have run into issues, what was the root cause?