Also known as Schrödinger’s Backup.


ZFS can become painfully slow if you don’t have enough RAM for it. I tried to run ZFS on my old setup with 64GB of RAM and a moderate number of virtual hosts, and it was nearly useless under heavier I/O loads. I didn’t try to tweak its settings, so there might be workarounds to make it behave better; I just repartitioned all the storage drives into an mdadm RAID5 array with lvm-thin on top. ZFS will work with limited memory in the sense that you don’t risk losing data because of it, but as mentioned, performance might drop significantly. Now that I have a system with enough memory to run RAIDZ2 it’s pretty damn good, but on limited hardware I would not recommend it.
LVM itself is pretty trivial to move to another system; most modern kernels just autodetect volume groups and you can use them like any normal filesystem. If you move a full, intact mdadm array to a new system (and have the necessary utils installed) it should be autodetected too, but especially with a degraded array manual reassembly might be needed. I don’t know what kind of issues you’ve been getting, but in general moving both LVM and mdadm drives between systems is pretty painless. Instead of mdadm you could also run LVM mirroring on the drives, which drops one layer from your setup and potentially makes rebuilding the array a bit simpler on another system, but neither approach should prevent moving drives to another host.
LVM-thin is more flexible, and while it might be slightly slower in some scenarios I’d still recommend it. Maybe the biggest benefit you’ll get from it is the option to take snapshots of VMs. Mounting plain directories will work too, but if your storage is only used by Proxmox I don’t see any point in that over an LVM setup.
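As a rough sketch, the mdadm + lvm-thin layering described above looks something like this. The device names and sizes are made up, and these commands wipe the listed drives, so treat it as illustration only:

```shell
# Build a RAID5 array from three (hypothetical) drives
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Layer LVM on top of the array
pvcreate /dev/md0
vgcreate vmdata /dev/md0

# Create a thin pool, then carve thin volumes for VMs out of it
lvcreate --type thin-pool -l 90%FREE -n thinpool vmdata
lvcreate --thin -V 100G -n vm-100-disk-0 vmdata/thinpool
```

Thin volumes allocate space on demand, which is also what makes the cheap VM snapshots possible.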


For whatever reason ISPs (at least around here) tend to be pretty bad at keeping their DNS services up and running, and that could cause the issues you’re having. An easy test is to switch your laptop’s DNS servers to Cloudflare (1.1.1.1, 1.0.0.1) or OpenDNS (208.67.222.222, 208.67.220.220) and see if the problem goes away. Or, even faster, do single queries from the terminal, like ‘dig a google.com @1.1.1.1’.
If that helps, you can change your router’s WAN DNS servers to something other than what the operator offers you via DHCP. I personally use the OpenDNS servers, but Cloudflare or Google (8.8.8.8, 8.8.4.4) are common and pretty decent choices too.
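A quick way to compare resolvers from the terminal is to point dig at each one in turn and look at the status and query time; the resolver IPs below are the public ones mentioned above:

```shell
# Compare response status and latency across a few public resolvers
for ns in 1.1.1.1 8.8.8.8 208.67.222.222; do
  echo "== $ns =="
  dig +time=2 +tries=1 a google.com @$ns | grep -E 'status|Query time'
done
```

If your ISP’s resolver times out or is consistently much slower here, that points at DNS rather than your connection.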
Depends on what you’re looking for, but for server use even a bit older hardware is just fine. My Proxmox server has a Xeon 2620v3 CPU and it’s plenty for my needs. For storage I went with a SAS controller; they’re relatively cheap, and if you happen to have a friend in some IT department you might get lucky when they replace hardware. RAM is a pain in the rear, but 8GB DDR4 RDIMMs still work just fine (if someone is interested, I have a few around).
Personally I wouldn’t pay current prices for new hardware, especially if it’s for hosting. A bit older, but server-rated, components give a lot more value for your money.


This, in turn, is different from APT, which is not Debian’s repository but Debian’s package manager. So, technically, I could run “sudo apt install (anything)” to get a piece of software from Debian’s repository, but I could also use that command to get software from somewhere else, also in the form of a .deb package, which would not have come from Debian itself.
With apt (and Discover, which uses apt/dpkg in the background) you can install anything from the repositories configured on your system. So, if you want to use apt to install packages not built by the Debian team, you’ll need to add those repositories to your system; they don’t just appear out of nowhere.
Some software vendors offer a .deb package you can install which adds their own repository to your system; after that you can ‘apt install’ their product just like native Debian software, and the same upgrade process that keeps your system up to date will cover that third-party software as well. Some also offer instructions for adding their repository manually, but with a downloaded .deb it might be a bit easier to add a repository without really paying attention to it.
Spotify is one of the big vendors with their own repository for Debian and Ubuntu. Ubuntu additionally has “PPA” repositories, which are basically just random individuals offering their packages for everyone to use, and those generally don’t go through the same scrutiny as the official repositories.
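The manual route usually boils down to trusting the vendor’s signing key and adding one sources entry. The vendor name, key URL and suite below are placeholders, so always follow the vendor’s own instructions:

```shell
# Fetch and store the vendor's signing key (placeholder URL)
curl -fsSL https://example.com/vendor-pubkey.gpg \
  | gpg --dearmor -o /usr/share/keyrings/vendor-archive-keyring.gpg

# Register the repository, pinned to that key
echo 'deb [signed-by=/usr/share/keyrings/vendor-archive-keyring.gpg] https://example.com/debian stable main' \
  > /etc/apt/sources.list.d/vendor.list

# From now on the vendor's packages upgrade alongside everything else
apt update && apt install vendor-product
```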


the joke was that in the USA you can be multi millionaire+ wealthy and pay 0% tax
That is the actual joke here, agreed. If, and that’s a pretty damn big if, there were any sense in the US government, they could just adopt our progressive tax brackets, drop everything above 35%, and still have a crapload of budget to actually make their country great again.
But spending 100 million bucks per hour to demolish schools halfway across the world is cool too, I guess.


You don’t need to be Elon-wealthy to hit those percentages. A salary over 500 000€/year puts you in a nice 50% tax bracket. You absolutely are not poor if your taxes are that high, but you don’t need to be the CEO of Google either.
No deduplication, encryption, or support for non-Linux operating systems, for a start.
Not from my wife if I lose our photo collection which has been building up since we got our first digital camera 20ish years ago.


I have a dynamic DNS address and a handful of CNAME records on my domains pointing at that dyndns address, so I can use ‘proper’ names with my services. When my public IP changes it takes a few minutes for the records to update, but that usually happens only when my router reboots, so it’s been good enough for me.
Also, I use two separate dyndns providers, so there’s likely always at least one working DNS entry pointing at my network.
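In zone-file terms the setup is just CNAMEs hanging off the dyndns name; the names below are made up:

```
; hypothetical zone fragment
home       IN CNAME myhome.dyndns.example.
jellyfin   IN CNAME myhome.dyndns.example.
nextcloud  IN CNAME myhome.dyndns.example.
```

Only the dyndns record has to track the changing public IP; every service name follows it automatically.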


I’ve always liked the way Slashdot handles comment rating. It’s a bit complicated, so maybe that’s why it hasn’t been adopted elsewhere, but it gives much more fine-grained options than a plain up/downvote.


The ISP obviously doesn’t see the traffic inside your own network, regardless of the router used. But as soon as you open any kind of connection over the internet, incoming or outgoing, your ISP has to have some information about it to route the traffic. DNS over TLS doesn’t hide the fact that your browser opens connections to servers; they can see if you use WireGuard to access your services (not which ones, just that there’s traffic coming and going); and even if you use a VPN for everything, they can still see the encrypted VPN traffic and, at least technically, apply pattern recognition to it to figure out what you’re doing. And if you use a VPN, then your VPN provider can do the same as your last-mile internet provider, so you’ve just moved the goalposts.
The last-mile ISP is going to be a middleman in your network usage no matter what you use, and they’ll always have at least some information about your usage patterns.


The ISP can see your traffic anyway, regardless of whether their router is at your end or not. Around here any kind of ‘user behavior monitoring’, or whatever they call it, is illegal, but the routers ISPs give out are as cheap as they come, so they’re generally not too reliable and tend to have pretty limited features.
Also, depending on the ISP, they might roll out updates to your device, which may or may not reset its configuration. That’s usually (at least around here) done with the ISP’s own account on the router, and if you disable/remove that account their automation can’t access your router anymore.
So, as a rule of thumb, your own router is likely better for any kind of self-hosting or other tinkering, but there are exceptions too.
Pretty much all ‘major’ distributions (Debian, Ubuntu, Mint, Fedora, openSUSE…) have 20+ years under their belt, and none of them is likely to go away any time soon. Some niche variants might vanish, but the main distributions will be there.


You are on the right track. Installing Debian packages doesn’t require a password to access shared libraries, but to write into system-wide directories. That way you don’t need to install every piece of software separately for every user. Flatpaks are ‘self-sufficient’ packages and thus often way bigger, since they generally don’t share resources.
From a security point of view there’s not much difference in everyday use for the average user. Sandboxed Flatpaks can be more secure in the sense that, if you harden your system properly, they have limited access to the underlying system, but they can be equally unsafe if you just pull random software from a shady website and run it without any precautions.
Flatpaks tend to have more recent versions of software, as they can ‘skip’ the official build chain and don’t need to worry about system-wide libraries. The tradeoff is that the installations are bigger, and since Flatpaks run in their own little sandbox you may need to tinker with the Flatpak environment to get access to files or devices. Also, if you install Flatpaks only for your own user on a multi-user setup, other users of the machine can’t access your software, which might be exactly what you want, depending on your use case.
Personally I stick with good old Debian packaging whenever possible; I don’t see the benefit of containers like Flatpak on my own workstation. Newer software releases, or software not included in the official repository, are pretty much the only cases where Flatpaks make more sense to me.
But there’s a ton of nuance to this, so someone might disagree with me and have perfectly valid reasons to do so; for me, on my personal computer, Flatpaks just don’t offer much.
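For the file/device access tinkering mentioned above, Flatpak permissions can be inspected and widened per app; the app ID here is a placeholder:

```shell
# Show what the (hypothetical) app is currently allowed to touch
flatpak info --show-permissions org.example.App

# Grant read-only access to a directory outside its sandbox
flatpak override --user --filesystem=~/Media:ro org.example.App
```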


I agree. It’s a bit tedious to configure, but rock solid, and it has all the features you could ask of a proxy.


I’d guess there are some tools which rely on RSS feeds or something to update seeds automatically, but that’s just a gut feeling. It shouldn’t be too difficult to write your own either, but I don’t know if anything ‘production ready’ is out there.


Discoverability is one issue, and trust in longevity is another. No bigger distribution is going to point their official download links at an individual’s home lab, which can disappear overnight. There’s also the question of whether the images are provided as-is, without anyone adding/removing their own ‘extras’, but that’s what checksums are for.
And this is obviously on a general level; I’m not trying to suggest that xana is untrustworthy :) But torrent seeding is a helpful thing for the community, and easy/safe to set up.
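Checksum verification is a one-liner on both ends; here the ‘image’ is a stand-in file so the example is self-contained:

```shell
# Stand-in for a downloaded image file
printf 'example image contents' > image.iso

# Publisher side: generate the checksum list
sha256sum image.iso > SHA256SUMS

# Downloader side: verify the file is intact and unmodified
sha256sum -c SHA256SUMS
```

`sha256sum -c` prints `image.iso: OK` when the hash matches; a mirror or seeder that altered the image would fail this check, as long as the SHA256SUMS file itself comes from the official site.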
Noise and power consumption. At least in my case those are important, since I’d be storing the data at my mother’s house. Power consumption might not matter that much, but HDD noise definitely does. And even with spinning rust the hardware cost would be somewhere around 250€, compared to ~20€/month for cloud storage.
YMMV; in my scenario it’s just easier to use a cloud provider.
Fixed the headaches with my Proxmox Backup Server. It has a SAS controller and 4 spinning drives running backups in a detached garage, and the old Fujitsu desktop I dug out of the office dumpster pile just kept crashing. I flashed the controller to IT firmware, updated the BIOS on the motherboard and did everything else I could figure out, but the system still lost the drives pretty much daily and required a hard reset. Turns out, or at least that’s my conclusion, that the PSU in the machine just didn’t have enough juice for the whole setup, and that caused the instability. I dug an old (2010 or so) desktop out of my own pile and threw a 600W PSU in the box; it’s now been stable for at least a week.
I would’ve liked to keep the Fujitsu machine, as it has a more compact case and a couple of generations newer CPU, but that thing has a proprietary power supply, so it was easier to swap out the whole system and just move the drives from one to the other. The current setup consumes maybe a bit more electricity, but at least it’s doing what it’s supposed to.