Hey all, I'd like to get some feedback on my backup strategy.
I have a Debian web server with a ZFS pool running Nextcloud AIO, Immich and Jellyfin. I'm thinking about adding other services as well, but Nextcloud and Immich are the most important ones. The Docker volumes of these services point, of course, to the ZFS pool. My backup strategy would be to use the built-in backup solutions of Nextcloud and Immich to back up their databases, then stop the Docker containers and do a Borg backup of the ZFS pool. The backups would be stored on an external hard drive (I want to expand on this, but for now it's all I can afford). Is this a viable approach, or am I missing something? Could there be problems with the databases during a backup? The Docker Compose files are also stored on another machine together with my server documentation.
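For what it's worth, the routine you describe could look roughly like this. All paths, compose file locations and retention numbers below are placeholders for illustration, not your actual setup:

```shell
#!/usr/bin/env bash
# Sketch of the described routine: stop the stacks, borg the pool, restart.
# POOL_MOUNT, BORG_REPO and the compose paths are assumed examples.
set -euo pipefail

POOL_MOUNT=/tank                 # mountpoint of the ZFS pool
BORG_REPO=/mnt/external/borg     # Borg repo on the external drive

# Stop the containers so the on-disk data is consistent
docker compose -f /opt/nextcloud/compose.yaml stop
docker compose -f /opt/immich/compose.yaml stop

# One archive per run, named by date
borg create --stats --compression zstd \
    "$BORG_REPO::webserver-{now:%Y-%m-%d}" "$POOL_MOUNT"

# Bring the services back up
docker compose -f /opt/nextcloud/compose.yaml start
docker compose -f /opt/immich/compose.yaml start

# Keep a bounded history on the external drive
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 "$BORG_REPO"
```

The `stop`/`start` around `borg create` is the part that matters for database consistency; the database dumps from Nextcloud/Immich are then a second, belt-and-braces layer.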


You can use the `zfs send` command to copy snapshots from one dataset to another. Your backup could be a ZFS dataset stored on an external drive (or drives) containing the snapshots of your online dataset. You could then encrypt and compress the backup dataset (by setting the appropriate ZFS dataset properties) for size efficiency and security.
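Concretely, that flow might look like this. The pool/dataset names (`tank/data`, `backup/webserver`) and snapshot names are made-up examples:

```shell
# One-time setup: an encrypted, compressed dataset on the external pool.
# Plain (non-raw) sends received under an encrypted parent get encrypted
# on write, so the backup side handles encryption for you.
zfs create -o encryption=on -o keyformat=passphrase \
    -o compression=zstd backup/webserver

# Snapshot the live dataset and send it to the external drive
zfs snapshot tank/data@2024-06-01
zfs send tank/data@2024-06-01 | zfs recv backup/webserver/data

# Later runs send only the difference between two snapshots (-i)
zfs snapshot tank/data@2024-06-02
zfs send -i tank/data@2024-06-01 tank/data@2024-06-02 \
    | zfs recv backup/webserver/data
```

The incremental `-i` form is what keeps repeat backups fast: only blocks changed between the two snapshots cross the wire.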
To restore the backup you would use `zfs send` to move your backed-up snapshots into a new dataset on your new, un-disastered hardware. Since this is all done via the CLI, you could write a bash script to create periodic snapshots, one to back up snapshots to the external dataset, and another to delete old snapshots from your source dataset. Toss 'em in your cron service of choice (or use systemd timers) and you've got a whole ZFS-native backup system.
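As a rough sketch, those scripts could be collapsed into one snapshot-replicate-prune job like the following. Dataset names and the retention count are placeholder assumptions:

```shell
#!/usr/bin/env bash
# Snapshot the source, replicate incrementally to the backup dataset,
# then prune old source snapshots. SRC/DST names are examples.
set -euo pipefail

SRC=tank/data
DST=backup/webserver/data
NOW=$(date +%Y-%m-%d-%H%M)

# Most recent snapshot already present on the backup side, if any
LAST=$(zfs list -H -t snapshot -o name -S creation "$DST" 2>/dev/null \
        | head -n1 | cut -d@ -f2 || true)

zfs snapshot "${SRC}@${NOW}"

if [ -n "$LAST" ]; then
    # Incremental send from the last replicated snapshot
    zfs send -i "${SRC}@${LAST}" "${SRC}@${NOW}" | zfs recv "$DST"
else
    # First run: full send
    zfs send "${SRC}@${NOW}" | zfs recv "$DST"
fi

# Keep the 14 newest source snapshots, destroy the rest
zfs list -H -t snapshot -o name -S creation "$SRC" \
    | tail -n +15 | xargs -r -n1 zfs destroy
```

Restoring is the same mechanism in reverse: `zfs send backup/webserver/data@<snap> | zfs recv newpool/data` on the rebuilt machine. Point a systemd timer or cron entry at the script and it runs hands-off.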
There may be backup software that'll do this for you. I've seen that Timeshift supports snapshot-based backups for Btrfs, so you can probably find a GUI app to handle the automation for ZFS too.