So after months of dealing with problems trying to get the stuff I want to host working on my Raspberry Pi and Synology, I’ve given up and decided I need a real server with an x86_64 processor and a standard Linux distro. To avoid running into the same problems after spending a bunch more money, I want to think seriously about what I need hardware-wise. What should I be considering here?

Initially, the main things I want to host are Nextcloud, Immich (or similar), and my own Node bot @DailyGameBot@lemmy.zip (which uses Puppeteer to take screenshots—the big issue that prevents it from running on a Pi or Synology). I’ll definitely want to expand to more things eventually, though I don’t know what. Probably all/most in Docker.
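To give a concrete picture of what I mean by “probably all in Docker”, here’s a rough sketch of the kind of setup I have in mind (the Nextcloud image is the official one; ports and paths are just placeholders):

```bash
# Minimal example of one of the services I'd run, using the official
# Nextcloud image. Immich and the bot would be separate containers,
# most likely all managed together with docker compose.
docker run -d \
  --name nextcloud \
  -p 8080:80 \
  -v /srv/nextcloud:/var/www/html \
  --restart unless-stopped \
  nextcloud
```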

For now I’m likely to keep using Synology’s reverse proxy and built-in Let’s Encrypt certificate support, unless there are good reasons to avoid that. And as much as possible, I’ll want the actual files (used by Nextcloud, Immich, etc.) to be stored on the Synology to take advantage of its large capacity and RAID 5 redundancy.
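My rough plan for that part (just an assumption on my end, nothing set up yet) is to export a shared folder from the Synology over NFS and mount it on the new server; the hostname, export path, and mount point below are placeholders:

```bash
# Debian/Ubuntu: install the NFS client tools, then mount the Synology share
sudo apt install nfs-common
sudo mkdir -p /mnt/synology
sudo mount -t nfs synology.local:/volume1/docker-data /mnt/synology

# To make it persist across reboots, an /etc/fstab entry along these lines:
# synology.local:/volume1/docker-data  /mnt/synology  nfs  defaults,_netdev  0  0
```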

Is a second-hand Intel-based mini PC likely suitable? I read one thing saying that they can have serious thermal throttling issues because they don’t have great airflow. Is that a problem that matters for a home server, or is it more of an issue with desktops where people try to run games? Is there a particular reason to look at Intel vs AMD? Any particular things I should consider when looking at RAM, CPU power, or internal storage, etc. which might not be immediately obvious?

Bonus question: what’s a good distro to use? My experience so far has mostly been with desktop distros, primarily Kubuntu/Ubuntu, or with niche distros like Raspbian. But all Debian-based. Any reason to consider something else?

  • Dran@lemmy.world · 5 days ago

    All of those would be perfectly cromulent nodes for small containers. The first issue you’ll run into is the low RAM. Some homelab projects would cause you to exceed 8 GB, but the good news is that if you’re using an external storage backend via NFS, you can always scale out (more nodes) or up (more compute per node) later with minimal headache.
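    As an example of what I mean by an external backend via NFS (the hostname and export path here are placeholders), Docker’s built-in local driver can create a named volume that actually lives on the NAS:

    ```bash
    # Named Docker volume backed by an NFS export on the Synology
    docker volume create \
      --driver local \
      --opt type=nfs \
      --opt o=addr=synology.local,rw \
      --opt device=:/volume1/docker-data \
      nas-data
    ```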

    If you’re going to be memory-constrained, don’t waste 1-2 GB on a GUI; install Ubuntu/Debian/whatever headless.
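    (Quick illustrative check, Debian/Ubuntu-style, of what you actually ended up with after installing:)

    ```bash
    # "multi-user.target" means headless/console-only; "graphical.target" means a GUI will start
    systemctl get-default
    ```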

    • Zagorath@aussie.zone (OP) · 3 days ago

      Thanks! I genuinely wasn’t sure how much RAM would be necessary, and probably would have considered 8 GB enough if I hadn’t gotten this feedback.

      • Dran@lemmy.world · 2 days ago

        That wasn’t quite the takeaway I was going for. You can get a lot done on 8 GB of RAM. I was just trying to point out that it would probably be your first bottleneck as you started to scale out, and that you should consider running the server headless to make the RAM you have go that much further.
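        Once things are up, it’s also easy to see where the memory is actually going, e.g.:

        ```bash
        free -h          # overall RAM usage on the host
        docker stats     # live per-container CPU/memory usage
        ```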

        • Zagorath@aussie.zone (OP) · 23 hours ago

          Oh yeah, the “run headless” tip was great too! I would never have used a desktop environment, so in effect I would have been running it headless anyway. But had you and others not specifically suggested running it headless, it probably wouldn’t have occurred to me that that’s a choice I’d need to make while installing it.

          • Dran@lemmy.world · 13 hours ago

            It can definitely be disabled post-install, but it’s much simpler to just not install it at install time, and that has the added benefit of not pulling in 2-5 GB of other things that won’t be relevant to your use case. It’s not that the disk waste is that big of a deal, but any issues you run into will be that much easier to troubleshoot with fewer moving parts.
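            If you do end up with a desktop already installed, something along these lines (Debian/Ubuntu-style; the exact metapackage name depends on the flavour, e.g. ubuntu-desktop vs kubuntu-desktop) is the usual way to go headless after the fact:

            ```bash
            # Boot to the console instead of the login screen from now on
            sudo systemctl set-default multi-user.target
            # Optionally remove the desktop metapackage and its now-unused dependencies
            sudo apt purge ubuntu-desktop && sudo apt autoremove
            ```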