After months of dealing with problems trying to get the things I want to host working on my Raspberry Pi and Synology, I’ve given up and decided I need a real server with an x86_64 processor and a standard Linux distro. To avoid running into the same problems after spending a bunch more money, I want to think seriously about what I need hardware-wise. What considerations should I keep in mind?
Initially, the main things I want to host are Nextcloud, Immich (or similar), and my own Node bot @DailyGameBot@lemmy.zip (which uses Puppeteer to take screenshots—the big issue that prevents it from running on a Pi or Synology). I’ll definitely want to expand to more things eventually, though I don’t know what. Probably all/most in Docker.
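For the curious: the Puppeteer problem seems to come down to its bundled Chromium build, which isn’t available for the Pi’s ARM Linux, so on a standard x86_64 distro the bot should just work, or can be pointed at a distro-packaged browser. Roughly the first thing I’d try on the new box (the package name and env-var names vary by distro and Puppeteer version, and `bot.js` is just a stand-in for the bot’s actual entry point):

```sh
# Hypothetical smoke test on the new server (assumes Debian/Ubuntu):
# use a distro-packaged Chromium instead of Puppeteer's bundled build.
sudo apt install chromium
export PUPPETEER_SKIP_DOWNLOAD=true                 # older Puppeteer versions: PUPPETEER_SKIP_CHROMIUM_DOWNLOAD
export PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium  # path depends on the distro package
node bot.js                                         # placeholder for the bot's entry point
```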
For now I’m likely to keep using Synology’s reverse proxy and built-in Let’s Encrypt certificate support, unless there are good reasons to avoid that. And, as much as possible, I’ll want the actual files (used by Nextcloud, Immich, etc.) to be stored on the Synology, to take advantage of its large capacity and RAID 5 redundancy.
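In case it matters for the hardware advice: the plan for the Synology storage would be the usual NFS approach, exporting a shared folder and mounting it on the server, then pointing the containers’ volumes at the mount. A minimal sketch, assuming an NFS export is enabled on the NAS; the hostname and both paths are placeholders:

```sh
# Hypothetical: mount a Synology NFS export on the new server.
# 'synology.lan' and both paths stand in for my actual setup.
sudo apt install nfs-common
sudo mkdir -p /mnt/synology
sudo mount -t nfs synology.lan:/volume1/docker-data /mnt/synology
# To make it permanent, an equivalent /etc/fstab line:
# synology.lan:/volume1/docker-data  /mnt/synology  nfs  defaults,_netdev  0  0
```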
Is a second-hand Intel-based mini PC likely to be suitable? I’ve read that they can have serious thermal-throttling issues because they don’t have great airflow. Is that a problem that matters for a home server, or is it more of an issue with desktops where people try to run games? Is there a particular reason to prefer Intel over AMD? And are there things I should consider when looking at RAM, CPU power, internal storage, etc. that might not be immediately obvious?
Bonus question: what’s a good distro to use? My experience so far has mostly been with desktop distros, primarily Kubuntu/Ubuntu, or with niche distros like Raspbian. But all Debian-based. Any reason to consider something else?


In layman’s terms, it’s a Debian-based distro that makes managing your virtual machines and LXC containers easier. Thanks to its web interface, you can set up most things graphically, monitor and control your VMs and containers at a glance, and generally take the pain out of managing it all.
It’s just so much better when you see everything important straight away.
I guess I have the same question for you as I did for curbstickle. What’s the advantage of doing things that way with VMs, vs running Docker containers? How does it end up working?
Proxmox can work with VMs and LXC containers.
When you need resources permanently reserved for a given task, VMs are very handy. A VM will always have access to the resources allocated to it, and can run any OS and any piece of software without special preparation or custom images. Proxmox manages VMs efficiently (using KVM), with near-native performance.
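To make that concrete, here’s a rough sketch of creating a VM with reserved resources from the Proxmox shell; the ID, names, ISO filename, and sizes are all placeholders, and most people would do this through the web UI instead:

```sh
# Hypothetical example: a VM with 2 cores and 4 GiB of RAM reserved to it.
# '100' is an arbitrary VM ID; 'local-lvm' is a common default storage name.
qm create 100 --name docker-host --memory 4096 --cores 2 \
    --net0 virtio,bridge=vmbr0
qm set 100 --scsi0 local-lvm:32             # allocate a 32 GiB virtual disk
qm set 100 --cdrom local:iso/debian-12.iso  # attach an installer ISO (filename is illustrative)
qm start 100
```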
When you want to run many services in parallel with minimal resource usage at idle, you go with containers.
LXC containers are very efficient, even more so than Docker, but they’re limited to Linux software, since they share the kernel with the host. Proxmox lets you manage LXC containers in a very straightforward way, as if each were a standalone installation, while handling the rest behind the scenes.
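Same idea as above, sketched from the shell; the template filename, ID, and sizes are placeholders, and the web UI walks you through the same steps:

```sh
# Hypothetical example: a lightweight Debian LXC container.
pveam update                                 # refresh the container template catalogue
pveam download local debian-12-standard_12.2-1_amd64.tar.zst  # exact template name varies
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
    --hostname nextcloud --memory 1024 --cores 1 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp \
    --rootfs local-lvm:8                     # 8 GiB root disk
pct start 200
```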