What’s going on on your servers?

I had to bite the bullet and buy new drives after the old ones filled up. I went for used enterprise SSDs on eBay and eventually found some at an okay price, though it was much more than the last time I bought some. Combined with Hetzner’s hefty price increase some months ago, my hobby has become a bit more expensive again, thanks to the ever-growing appetite of companies building more data centers to churn through more energy.

Anyway, the drives are in, and my Ansible playbook to properly encrypt them and make them available in Proxmox worked, so that went smoothly (ignoring the part where I pulled the Lenovo Tiny out of the rack, opened it, swapped the SSD, closed it, and racked it again, only to realize I had put the old SSD back in).
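For anyone curious what that looks like under the hood, here’s a rough sketch of the manual steps such a playbook would automate (device name, keyfile path, and storage ID are placeholders of mine, not from my actual setup):

```shell
# Assumed device and keyfile; adjust to your hardware.
DISK=/dev/sdb
KEYFILE=/root/keys/sdb.key

# Encrypt the new drive with LUKS2 and open it as a mapped device
cryptsetup luksFormat --type luks2 "$DISK" "$KEYFILE"
cryptsetup open --key-file "$KEYFILE" "$DISK" crypt-sdb

# Put a filesystem on it, mount it, and register it as
# directory storage in Proxmox so VMs/containers can use it
mkfs.ext4 /dev/mapper/crypt-sdb
mkdir -p /mnt/crypt-sdb
mount /dev/mapper/crypt-sdb /mnt/crypt-sdb
pvesm add dir encrypted-ssd --path /mnt/crypt-sdb --content images,rootdir
```

In Ansible this maps onto the `community.crypto.luks_device` module plus a `pvesm` command task, with a crypttab entry (or a systemd unit) to unlock on boot.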

Any changes in your hardware setups? Did the price increase make you reconsider some design decisions? Let us know!

  • 0x0f@piefed.social · 7 hours ago
    Currently planning to add a third hardware node to my setup, adding proper distributed storage and enabling HA for all my applications.

    • tofu@lemmy.nocturnal.garden (OP) · 7 hours ago
      What kind of distributed storage do you want to use, Ceph? And what orchestration/hypervisor do you use? I also have two nodes currently (Proxmox) with pseudo-shared storage (ZFS replication).

      • 0x0f@piefed.social · 6 hours ago
        I’m planning on using Proxmox with Ceph. I’m already using Proxmox to virtualize my k8s nodes, but with three hardware nodes it’s finally time to enable Ceph as well as HA.

        • tofu@lemmy.nocturnal.garden (OP) · 6 hours ago
          HA is possible with two nodes (+ QDevice) and ZFS replication, but I’ll look for a third one as well sooner or later. I haven’t used Ceph, but everyone tells me how much overhead it has.
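          For reference, the two-node setup is roughly this (a sketch; node names, VM ID, and the QDevice IP are placeholders, assuming a third small machine runs `corosync-qnetd`):

          ```shell
          # On the quorum host: apt install corosync-qnetd
          # On both PVE nodes:  apt install corosync-qdevice
          pvecm qdevice setup 192.0.2.10   # add the external tie-breaker vote

          # Replicate VM 100's ZFS volumes to the other node every 15 minutes
          pvesr create-local-job 100-0 nodeB --schedule '*/15'

          # Let the HA manager restart/relocate VM 100 on node failure
          ha-manager add vm:100 --state started
          ```

          The catch versus real shared storage: on failover you can lose up to one replication interval of writes.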

          • 0x0f@piefed.social · 6 hours ago
            Yeah, I’m aware of the overhead and hope I won’t have to scale back, but if it turns out to be too demanding, I’ll fall back to NFS or think of something else.