• 1 Post
  • 23 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • So, extra background: I was put off by proxmox’s weird steps to get ISOs onto the system via USB, so I figured “I am not touching the backup stuff” and just rolled my own (I treat the VMs/containers on my proxmox server like individual servers and back them up accordingly; I do not back up the underlying proxmox instance itself).

    I see proxmox has a similar pruning setting to Restic, and it exports the files like incus does. So I’d say yes, proxmox is a one-stop shop for backups, while with incus you have to combine its container-export options with restic and put that in a cron job.

    Still hard to say what I’d definitively tell a newbie to go with. I found (and still find) the proxmox UI daunting and difficult, while the incus UI makes much more sense to me and is easier (it has an ISO-pulling system built in, for instance). But as you’ve pointed out, proxmox gives you an easy way to have robust backups, which takes much more effort on the incus side.

    As backups are paramount: proxmox for a total newbie. If someone is familiar with scripting, then incus, because it needs scripted backups to be as robust as proxmox’s. @barnaclebill@lemmy.dbzer0.com this conclusion should help you choose proxmox (most likely)!


  • https://linuxcontainers.org/incus/docs/main/howto/instances_backup/#instances-backup-export

    A bit down from the snapshots section is the export section. What I do is export to a staging location, then back that up with Restic. I do not compress on export; instead I compress it myself with the --rsyncable flag added to zstd (the flag applies to gzip too). With the --rsyncable flag, incremental backups still work on the compressed file, so it’s space-efficient despite being compressed. I don’t worry about collating individual archives; instead I rely on Restic’s built-in versioning to get a specific version of the VM/container if I ever need it.
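
    A rough sketch of that flow, with a placeholder instance name (“mycontainer”) and paths; double-check the `incus export` flags against the docs for your version (zstd’s `--rm` just deletes the tar after a successful compress):

    ```sh
    # Export uncompressed, compress with --rsyncable, then let restic
    # dedupe the result incrementally. Instance name and paths are placeholders.
    incus export mycontainer /backups/mycontainer.tar --compression none
    zstd --rsyncable --rm /backups/mycontainer.tar -o /backups/mycontainer.tar.zst
    restic -r /path/to/repo --password-file /path/to/passfile \
           backup /backups/mycontainer.tar.zst
    ```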

    Also, for a few of my containers I linked the real file system (big ole data drive) into the container, and I just snapshot the big ole data drive and send said snapshot using the BTRFS/ZFS methods, since that seemed easier. Those containers are easy enough to stand up on a whim and then just need said data hooked back up.

    I also restic the sent snapshot, since snapshots are read-only and restic can read from them at its leisure. Restic is the final backup orchestrator for all of my data. One restic call == one “restic snapshot”, so I call it monolithically, with one call covering several data sources.
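
    The BTRFS flavor of that could look roughly like this (subvolume and repo paths are placeholders):

    ```sh
    # Take a read-only snapshot, send it to the backup disk, then let restic
    # read the static copy at its leisure.
    btrfs subvolume snapshot -r /data /data/.snap/data-today
    btrfs send /data/.snap/data-today | btrfs receive /mnt/backupdisk/snaps/
    # One monolithic restic call == one restic snapshot covering several sources
    restic -r /path/to/repo --password-file /path/to/passfile \
           backup /mnt/backupdisk/snaps/data-today /backups/exports
    ```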

    Hope that helps!




  • Since you’re not using proxmox as an OS install, why not check out Incus? It accomplishes the same goals as proxmox but is easier to use (for me at least). Make sure you install incus’ web UI; it makes it ez pz. Incus does VMs and containers just like proxmox, but it isn’t clustering-first, it’s machine-first. It does do clustering, but the default UI starts with your own machine, so it makes more sense to me. The forums are very useful and questions get answered quickly, and since Incus is a fork of LXD (Canonical’s original, which is now Ubuntu-only), LXD threads expand the available pool of answers (for now, almost all commands are the same between Incus and LXD). I run the incus stable release from the Zabbly package repo; I think the long-term release doesn’t have the web UI yet (I could be wrong). Never have had a problem. When Debian 13 hits I’ll switch to whatever is included there and should be set.

    https://linuxcontainers.org/incus/docs/main/installing/#installing-from-package
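
    For reference, the Zabbly stable-repo install looks roughly like the sketch below; verify the key URL, its fingerprint, the sources line, and the web-UI package name against the linked docs before running anything (all of them are my assumptions here):

    ```sh
    # Add the Zabbly signing key and the Incus stable repo, then install.
    sudo mkdir -p /etc/apt/keyrings
    sudo curl -fsSL https://pkgs.zabbly.com/key.asc \
         -o /etc/apt/keyrings/zabbly.asc
    echo "deb [signed-by=/etc/apt/keyrings/zabbly.asc] https://pkgs.zabbly.com/incus/stable bookworm main" \
      | sudo tee /etc/apt/sources.list.d/zabbly-incus-stable.list
    sudo apt update
    sudo apt install -y incus incus-ui-canonical  # web UI package name may differ
    ```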

    I use incus for VMs and LXC containers. I also have Docker on the Debian system. Many types of containers for every purpose!

    I installed incus on a Debian system that I encrypted with LUKS. It unlocks after reboots with a USB drive; basically I use it like a YubiKey, but you could leave it plugged in so the system always reboots no problem. There’s also a network unlock option, but I didn’t try to figure that out. Without the USB drive or network unlock, you’ll have to enter the encryption key on every reboot.
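
    A sketch of one way to do the USB-drive unlock on Debian, using the `passdev` keyscript that ships with cryptsetup there; the device paths, UUIDs, and keyfile location are all placeholders:

    ```sh
    # Generate a random keyfile on the USB stick and enroll it as an extra
    # LUKS key slot (the passphrase still works as a fallback).
    sudo dd if=/dev/urandom of=/media/usbstick/luks.key bs=512 count=8
    sudo cryptsetup luksAddKey /dev/sda3 /media/usbstick/luks.key
    # Then point /etc/crypttab at the keyfile on the stick, e.g.:
    #   cryptroot UUID=<luks-uuid> /dev/disk/by-uuid/<usb-uuid>:/luks.key luks,keyscript=passdev
    sudo update-initramfs -u
    ```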



  • I got my parents to get a NAS box and stuck it in their basement; they need to back up their stuff anyway. I put in two 18 TB drives (mirrored BTRFS RAID1) from Server Part Deals (peeps have said that site has jacked up their prices, so look for alts). They only need like 4 TB at most. I made a backup samba share for myself. It’s the cheapest Synology box possible; I used their software to make a samba share with a quota.

    I then set up a WireGuard connection on an RPi and taped it to the NAS. With a script I WireGuard into their local network, mount the samba share, and then use restic to back up my data. It works great. Restic is encrypted, I don’t have to pay for storage monthly, their electricity is cheap af, they have backups, I keep tabs on it: everyone wins.
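
    The backup run boils down to something like this sketch; the interface name, share path, and credential files are placeholders:

    ```sh
    #!/bin/bash
    # Tunnel into their LAN, mount the share, back up, tear down.
    set -euo pipefail
    wg-quick up wg-parents
    mount -t cifs //nas.parents.lan/backup /mnt/nas \
          -o credentials=/root/.smbcreds
    restic -r /mnt/nas/restic-repo --password-file /root/.restic-pass \
           backup /home /etc
    umount /mnt/nas
    wg-quick down wg-parents
    ```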

    Next step is to go the opposite way for them, but there’s no rush on that goal: I don’t think their basement would get totaled in a fire, and I don’t think their house (other than the basement) would get totaled in a flood.

    If you don’t have a friend or relative to do a box-at-their-house (peeps might be enticed with reciprocal backups), restic still fits the bill: the destination is encrypted, and it has simple commands to check the data for validity.

    Rclone crypt is not good enough. Too many issues (path-length limits, the password is “obscured” but otherwise right there, file structure preserved even if names are encrypted). On a VPS I use rclone as a pass-through for restic to back up a small amount of data to a goog drive. Works great. Just don’t fuck with rclone crypt for major stuff.

    Lastly, I do use rclone crypt to upload a copy of the restic binary to the destination: the crypt means the binary can’t be fucked with, and having the binary there means it’s all you need to recover the data (in addition to the restic password you stored safely!).






  • I trust `restic check`: `restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose`. The `--read-data-subset` run also does the structural-integrity check while verifying a chunk of the actual data. If I had more bandwidth, I’d check more.

    When I set up a new repo, I restore some stuff to make sure it’s there with `restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'`.

    You could automate that and regularly make sure some essential-but-not-often-changing files match, by restoring them and comparing them. I would do that if I weren’t lazy, I guess, just to make sure I’m not missing some key-but-slowly-changing files. Slowly/not-often changing because a diff would fail if the file changes hourly and you back up daily, etc.
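
    That automation could be as small as this sketch; the canary file and repo paths are placeholders, and you’d pick a file that rarely changes:

    ```sh
    #!/bin/bash
    # Restore a known file from the latest snapshot and diff it against
    # the live copy; any mismatch means the backup needs a look.
    set -euo pipefail
    restic -r /path/to/repo --password-file /path/to/passfile \
           restore latest --target /tmp/restic-verify \
           --include /etc/fstab
    diff -q /etc/fstab /tmp/restic-verify/etc/fstab \
      && echo "canary file matches" \
      || echo "MISMATCH: investigate the backup!"
    rm -rf /tmp/restic-verify
    ```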

    Or you could do as others have suggested: mount it locally and just traverse it to make sure some key stuff works and is there: `sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'`.


  • I have my router (OPNsense) redirect all DNS requests to Pi-hole/AdGuard Home. AdGuard Home is easier for this since you can have it redirect a wildcard `*.local.domain`, while Pi-hole wants every single one individually (uptime.local.domain, dockage.local.domain). With that combo of the router not letting DNS out to upstream servers and my local DNS servers set up to redirect `*.local.domain` to the correct location(s), DNS requests inside my local network never get out where an upstream DNS can tell you to kick rocks.
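
    If you’re stuck on Pi-hole, one common workaround for the no-wildcard limitation is a dnsmasq drop-in, since Pi-hole’s FTL reads those; the domain and IP here are placeholders:

    ```
    # /etc/dnsmasq.d/99-wildcard.conf
    # Answer every *.local.domain query with the reverse proxy's address
    address=/local.domain/192.168.1.10
    ```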

    I combined the above with a (hella cheap for 10 yr) paid domain, got a wildcard certificate for the domain without exposure to the WAN (via a DNS challenge: no IP recorded, but the cert is accepted by devices), and have all `*.local.domain` requests redirect to a single-server Caddy instance that does the final redirecting to the specific services.
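
    A Caddyfile sketch of that single entry point; the domain, IPs, service names, and the DNS-provider plugin are all assumptions (Caddy needs a build with a DNS-provider module to do the wildcard cert’s DNS challenge):

    ```
    *.local.domain {
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }
        @uptime host uptime.local.domain
        handle @uptime {
            reverse_proxy 192.168.1.20:3001
        }
        handle {
            respond "unknown service" 404
        }
    }
    ```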

    I’m not fully sure what you’ve got cooking, but I hope typing out what works for me can help you figure it out on your end! Basically the router doesn’t let any DNS get by to be fucked with by the ISP.


  • I’m surprised no one’s mentioned Incus: it’s a hypervisor like Proxmox, but it’s designed to install onto Debian no prob. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a repo to get it.

    Proxmox isn’t reliable if you’re not paying them; the free users are the test users, and a bit back they pushed a bad update that broke shit. If I’d updated before they pulled it, I’d have been hosed.

    Basically you want a device whose updates you don’t have to worry about, because updating regularly is good for security. And Proxmox ain’t that.

    On top of their custom kernel and stuff, it just has fewer eyes on it than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness just for making VMs.

    So to summarize: Debian with Incus installed. BTRFS if you’re happy with one drive or two RAID1 drives; BTRFS gets you scrubbing and bitrot detection (and protection, with RAID1). ZFS for more drives. Toss on Cockpit too.

    If you want less hands-on, go with OpenMediaVault. No room for Proxmox in my view, especially with no clustering needed.

    Also, the iGPU on the 6600K is likely good enough for whatever transcoding you’d do (especially if it’s rare and 1080p; it’ll do 4K no prob, and multiple streams at once). The Nvidia card is just wasting power.


  • I do this but with root Docker: every service gets a `user: ####:####`, and that #### is tied to an account I made with useradd. Chown the data directory the container is given and it just works. In Docker this does not work for LinuxServer images, but Podman has way more `user:` support, so I have a feeling LinuxServer images will work there with the user restrictions.
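
    A sketch of that per-service account scheme; the service name, UID/GID, image, and paths are all examples:

    ```sh
    # One dedicated unprivileged account per container, then chown its data dir.
    sudo useradd --system --shell /usr/sbin/nologin --uid 2001 svc-myapp
    sudo chown -R 2001:2001 /srv/myapp-data
    # In the compose file:  user: "2001:2001"
    # Or imperatively:
    docker run -d --user 2001:2001 -v /srv/myapp-data:/data example/myapp:latest
    ```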

    For something like Gotenberg, which is part of paperless-ngx, I gave Gotenberg its own user too: it runs Chromium and might ingest a malicious PDF somehow or something. Might as well keep Gotenberg from being able to hose the rest of paperless!

    I do plan to move to Podman once 5.0+ is in Debian 13; that will remove the Docker-daemon attack surface and the occasional firewall issues that come with Docker. So I’m not advocating for Docker over Podman here.



  • glizzyguzzler@lemmy.blahaj.zone to Selfhosted@lemmy.world: Advantages of rootless podman?

    This answers all of your questions: https://github.com/containers/podman/discussions/13728 (link was edited; I accidentally linked a Red Hat blog post that didn’t answer your question directly, but it does make clear that specifying a user in rootless podman is important for the security of the user running the container, if that user does more than just run the container).

    So the best defense plus ease of use is root Podman assigning non-root UIDs to the containers. You can do the same with Docker, but Docker with non-root UIDs assigned still carries the risk of the root-level Docker daemon being hacked and exploited. Podman does not have a daemon to be hacked and exploited, meaning root Podman with non-root UIDs assigned has no such downside!
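
    A minimal sketch of that setup, with an example UID and a stock Alpine image; the `id` output inside the container should show the non-root UID:

    ```sh
    # Root podman (no daemon), container process pinned to a non-root UID/GID.
    sudo podman run --rm --user 2001:2001 docker.io/library/alpine id
    ```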