Hey all, I would like to get some feedback on my backup strategy.
I have a Debian webserver with a ZFS pool running Nextcloud AIO, Immich and Jellyfin. I'm thinking about adding other services as well, but Nextcloud and Immich are the most important ones. The Docker volumes of these services of course point to the ZFS pool. My backup strategy would be to use the internal backup solutions of Nextcloud and Immich to back up their databases, then stop the Docker containers and do a Borg backup of the ZFS pool. The backups would be stored on an external hard drive (I want to expand on this, but for now it's all I can afford). Is this a viable approach, or am I missing something? Could there be problems with the databases etc. during a backup? The Docker compose files are also stored on another machine together with my server documentation.
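Roughly the flow I'd automate, as a sketch (paths, compose locations and the repo location are placeholders, and the repo needs a one-time borg init --encryption=repokey first):

```bash
#!/bin/sh
# Sketch only -- paths and compose project locations are made up.
set -eu

export BORG_REPO=/mnt/backup-hdd/borg    # repo on the external drive
# BORG_PASSPHRASE should come from a root-only file in a real script

docker compose -f /opt/nextcloud/compose.yaml stop   # quiesce the DBs
docker compose -f /opt/immich/compose.yaml stop

# One archive per run, named by date, covering the pool's app data
borg create --stats --compression zstd \
    ::'services-{now:%Y-%m-%d}' /tank/appdata

docker compose -f /opt/immich/compose.yaml start
docker compose -f /opt/nextcloud/compose.yaml start

# Thin out old archives on the external drive
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```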


But the snapshots are only on the ZFS filesystem? Or do you mean I should replace the Borg backup with ZFS snapshots?
If you are storing everything on the ZFS filesystem, taking a snapshot in ZFS will include all that data. So if you keep hourly snapshots for the past 24 hours, daily snapshots for the past week, and then monthly ones for 3 months, for example, you can often dip into those for recovery when the issue is “oops, I deleted something I didn’t want to” rather than going to your huge backup and restoring the entire system.
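For example, a crontab along those lines (the dataset name is just an illustration; you'd still want a small script to destroy snapshots that age out of each window):

```
# /etc/crontab -- % must be escaped as \% inside cron entries
0  * * * *  root  /sbin/zfs snapshot tank/appdata@hourly-$(date +\%Y\%m\%d-\%H)
5  0 * * *  root  /sbin/zfs snapshot tank/appdata@daily-$(date +\%Y\%m\%d)
10 0 1 * *  root  /sbin/zfs snapshot tank/appdata@monthly-$(date +\%Y\%m)
```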
Yes, but ZFS snapshots are not an off-site backup, which is more what I am looking for. Besides that, ZFS snapshots are of course something I want to implement.
If you use ZFS for Docker, DBs and VMs, you don’t have to shut down anything. Just snapshot and send/recv to sync the snapshots to another ZFS drive.
You can even mount and copy the latest snapshot to the cloud with rsync/rclone, and probably also borg/restic/kopia. Each application’s state will be internally consistent if the snapshot is taken for all data at the same instant. If you’re paranoid you can stop everything for a few minutes to perform the snapshot, but it’s not really necessary.
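Manually, that whole dance is only a few commands (pool/dataset names and the rclone remote are invented):

```bash
NAME=$(date +%Y-%m-%d)

# Snapshot every child dataset at the same instant
zfs snapshot -r tank/appdata@"$NAME"

# Replicate the snapshot tree to a pool on another ZFS drive
zfs send -R tank/appdata@"$NAME" | zfs recv -F backup/appdata

# Or push the snapshot's read-only view to the cloud
rclone sync /tank/appdata/.zfs/snapshot/"$NAME" b2:my-bucket/appdata
```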
You can use the zfs send command to copy snapshots from one dataset to another. Your backup could be a ZFS dataset stored on external drive(s), which would contain the snapshots of your online dataset. You could then encrypt and compress the backup dataset (by setting the appropriate ZFS dataset properties) for size efficiency and security.
To restore the backup, you would use zfs send again (piped into zfs recv) to move your backed-up snapshots into a new dataset on your new, un-disastered hardware. Since this is all done via the CLI, you could write a bash script to create periodic snapshots, one to back up snapshots to the external dataset, and another to delete old snapshots in your dataset. Toss 'em in your cron service of choice (or use systemd timers) and you’ve got a whole ZFS-native backup system.
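A minimal version of those scripts could look like this (names are placeholders; a real script wants locking and error handling):

```bash
#!/bin/bash
set -euo pipefail

SRC=tank/appdata     # online dataset (placeholder)
DST=backup/appdata   # dataset on the external drive (placeholder)
NOW=auto-$(date +%Y%m%d-%H%M)

zfs snapshot "$SRC@$NOW"

# Newest snapshot already present on the backup side, if any
LAST=$(zfs list -H -t snapshot -o name -s creation "$DST" 2>/dev/null \
       | tail -n1 | cut -d@ -f2 || true)

if [ -n "$LAST" ]; then
    # Incremental: only blocks written since $LAST cross the wire
    zfs send -i "@$LAST" "$SRC@$NOW" | zfs recv -F "$DST"
else
    zfs send "$SRC@$NOW" | zfs recv "$DST"   # first run: full send
fi

# Crude retention: keep only the 24 newest snapshots on the source
# (head -n -24 is GNU coreutils, fine on Debian)
zfs list -H -t snapshot -o name -s creation "$SRC" |
    head -n -24 | xargs -r -n1 zfs destroy
```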
There may be backup software that’ll do this for you. I’ve seen that Timeshift supports snapshot-based backups for btrfs, so you can probably find a GUI app to handle the automation.
Yes, the ZFS snapshots are on the same disk, but the most common scenario when you need backups is getting back a handful of files, in which case the ZFS snapshots are super convenient and use very little space. I use restic + (B2 | sftp) and ZFS snapshots. I may literally go years without needing to restore from restic because most of the time I can get what I need from the ZFS snapshots.
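That convenience is literally just a cp, since every snapshot is browsable read-only under the dataset's .zfs directory (the paths and file names here are made up):

```bash
# .zfs is reachable even when the snapdir property is set to hidden
ls /tank/appdata/.zfs/snapshot/                  # list available snapshots

# Pull one deleted file straight out of yesterday's snapshot
cp /tank/appdata/.zfs/snapshot/daily-20250114/photos/IMG_0042.jpg \
   /tank/appdata/photos/
```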
You did not mention if you are using a single disk or more. If you can afford it and the machine allows it, mirroring or RAID-Z1 (the equivalent of RAID 5) is a good option.
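For four disks that's a single command at pool creation time (device names are placeholders; /dev/disk/by-id paths are more robust than sdX):

```bash
# One RAID-Z1 vdev of four disks: survives any single disk failure
zpool create tank raidz1 \
    /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 \
    /dev/disk/by-id/ata-disk3 /dev/disk/by-id/ata-disk4
```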
I have 4 HDDs running in RAID-Z1. But yeah, the ZFS snapshots on the pool itself are a no-brainer :) but good to hear that they work so well.
If you wanted to get really into the weeds with ZFS, you can use zfs send to send copies of your snapshots into a dataset that you store on your external drive.
You can enable encryption and compression on the external dataset as well.
This would use snapshots, give you the ability to make block-level incremental backups and allow encryption and compression using only ZFS tooling.
You’d have to script it though (it’s possible someone has already done this in some other backup application).
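The encryption/compression part is one-time setup on the external pool, e.g. an encrypted, compressed parent dataset that received children inherit from (names and property values are just examples; zstd needs OpenZFS 2.0+):

```bash
# Anything received below backup/encrypted is compressed and
# encrypted with this dataset's key, even from a plain send stream
zfs create -o compression=zstd \
           -o encryption=on -o keyformat=passphrase \
           backup/encrypted

# e.g.: zfs send tank/appdata@snap | zfs recv backup/encrypted/appdata
```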