I’m considering starting a Lemmy instance with a limited federation model, and from the start I’m thinking about how to support and maintain it as it grows while spending as little attention as possible on infrastructure management itself.

Because of that, I’m especially interested in hearing from admins who host Lemmy instances, particularly larger ones. I’d like to understand what your actual workflow looks like in practice: how you organize administration, what methodologies you use, how you handle backups, data recovery, upgrades, monitoring, and infrastructure maintenance in general. I’m also interested in whether there are any best practices or operational patterns that have proven reliable over time.

From what I’ve found so far, the official Lemmy documentation on backup and restore seems reasonably good for small instances, but as the instance grows, more nuances and complications appear. So ideally, I’d like to find or assemble something closer to a real guideline or runbook based on practices that are actually used by admins running larger instances.
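
For reference, my reading of the documented baseline is roughly the sketch below. The container name, paths, and compose layout are my assumptions, not from the docs; the script runs in echo mode so the commands can be reviewed before actually enabling them:

```shell
#!/bin/sh
# Hypothetical backup sketch for a docker-compose Lemmy deployment.
# Container/volume names and paths are assumptions -- adjust to your setup.
RUN=echo   # set RUN='' to actually execute the commands

DB_CONTAINER=lemmy_postgres_1
BACKUP_DIR=/var/backups/lemmy
STAMP=$(date +%Y-%m-%d)

# 1. Logical dump of the Postgres database.
#    Custom format (-Fc) allows parallel restore with pg_restore -j.
$RUN docker exec "$DB_CONTAINER" pg_dump -U lemmy -Fc lemmy -f "/tmp/lemmy-$STAMP.dump"
$RUN docker cp "$DB_CONTAINER:/tmp/lemmy-$STAMP.dump" "$BACKUP_DIR/"

# 2. Copy the pictrs media volume -- uploaded media is not inside Postgres.
$RUN rsync -a /srv/lemmy/volumes/pictrs/ "$BACKUP_DIR/pictrs-$STAMP/"

# 3. Keep the config alongside the data so a restore is self-contained.
$RUN cp /srv/lemmy/lemmy.hjson "$BACKUP_DIR/lemmy.hjson-$STAMP"
```

My understanding is that the nuances at scale are mostly about the first step: dump duration, locking, and whether you dump from a replica instead of the live database.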

If you run or have run a Lemmy instance, especially one that had to scale beyond a small personal or experimental setup, I’d really appreciate hearing about your experience. Even brief notes, links to documentation, internal checklists, or descriptions of what has and hasn’t worked for you would be very useful.

  • nachitima@lemmy.ml (OP) · 1 day ago

    Hey, this is really useful.

    I wanted to ask a few follow-ups, because the jump from 16 GB to 64 GB sounds pretty dramatic:

    • What kind of storage were you using when it was struggling — HDD, SSD, NVMe?
    • Did you only increase RAM, or did storage / CPU / other settings change too?
    • Roughly what kind of workload was this? Number of users, subscribed communities, amount of federated traffic, image-heavy browsing, etc.
    • Do you remember what the actual bottleneck looked like — high RAM use, swap, I/O wait, Postgres getting slow, pictrs, federation queue buildup?
    • When you say disabling image proxying helped, how much did it help in practice?
    • Was this on a recent Lemmy version, or a while back?

    I’m trying to separate “Lemmy really needs big hardware” from “a specific part of the stack was the real problem”.

    Sorry if some of these questions are a bit basic or oddly specific — I’m using AI to help gather as much real-world Lemmy hosting experience as possible, and it generated most of these follow-up questions for me.

    • Björn@swg-empire.de · 1 day ago

      I was and still am on HDD. The CPU was upgraded as well. I migrated to a new server.

      The main culprit was the database. As far as I’m aware, Lemmy is missing some indexes, and because of the ORM they used, the queries weren’t always optimised. Now, with 64 GB of RAM, the whole database (almost 30 GB) fits in memory, which fixes most of those issues.
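
      If anyone wants to check whether they’re in the same situation, something like the sketch below should show it, assuming direct psql access to the lemmy database (user/database names are assumptions). It’s in echo mode, so it just prints the invocations for review:

      ```shell
      # Hypothetical checks; adjust user/database names to your setup.
      RUN=echo   # set RUN='' to actually run the queries

      # Total on-disk size of the database. If this is comfortably below
      # RAM (minus headroom for Lemmy, pictrs, and the OS), the working
      # set can live entirely in shared buffers / page cache.
      $RUN psql -U lemmy -d lemmy -c "SELECT pg_size_pretty(pg_database_size('lemmy'));"

      # Buffer cache hit ratio -- persistently below ~0.99 usually means
      # the hot data does not fit in memory.
      $RUN psql -U lemmy -d lemmy -c "SELECT sum(blks_hit)::float / nullif(sum(blks_hit) + sum(blks_read), 0) FROM pg_stat_database WHERE datname = 'lemmy';"

      # Tables with many sequential scans and few index scans hint at
      # missing indexes.
      $RUN psql -U lemmy -d lemmy -c "SELECT relname, seq_scan, idx_scan FROM pg_stat_user_tables ORDER BY seq_scan DESC LIMIT 10;"
      ```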

      The real fix will probably come with Lemmy 1.0. They radically changed the database layout and queries.

      Image proxying wasn’t bad for performance, just for storage space: it was growing really, really fast. Now that it only stores the pictures I uploaded myself, it is still much too large (24 GB). But its directory structure is so convoluted that I can’t really debug it; my stuff really shouldn’t take up more than a few hundred MB.
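
      A first step at debugging that kind of growth might be a plain disk-usage scan of the volume. The path below is an assumption; point it at wherever your pictrs volume actually lives:

      ```shell
      # Where is the space in the pictrs volume actually going?
      # /srv/lemmy/volumes/pictrs is a placeholder path -- override PICTRS.
      PICTRS=${PICTRS:-/srv/lemmy/volumes/pictrs}
      # Largest subdirectories first, two levels deep.
      du -h --max-depth=2 "$PICTRS" 2>/dev/null | sort -rh | head -20
      ```

      That at least separates “many small originals” from “a few huge cached files”, even if it can’t map hashed filenames back to uploads.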

      I am the only one using this instance. I am subscribed to a hundred communities or so. I am always pretty up to date with my Lemmy versions.