Since 2016, I’ve had a fileserver mostly just for backups. The system is on one drive, files are on RAID6, and I do a semi-annual cold backup.

I was playing with Photoprism, and their docs say “we recommend placing the storage folder on a local SSD drive for best performance.” In this case, the storage folder holds basically everything except the pictures themselves, such as the database files.

Up until now, if I lost any database files, it was just a matter of rebuilding them by re-indexing my photos or whatever, but I’m looking for something more robust since I’ll have some friends/family using Pixelfed, Matrix, etc.

So my question is: Is it a valid strategy to keep database files on the SSD with some kind of nightly backup to RAID, or should I just store the whole lot on the RAID from the get-go? Or does it even matter if all of these databases can fit in RAM anyway?
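
For concreteness, this is roughly what I mean by a nightly backup: a minimal sketch assuming the service keeps its index in SQLite (PhotoPrism can also use MariaDB, which would want mysqldump instead), with made-up paths.

```python
#!/usr/bin/env python3
"""Nightly copy of a service's SQLite database from the SSD to the RAID array.

Sketch only: the paths are made up, and it assumes an SQLite backend; a
MariaDB/Postgres setup would shell out to mysqldump/pg_dump instead.
"""
import sqlite3
from datetime import date
from pathlib import Path

SRC = Path("/mnt/ssd/photoprism/index.db")      # hypothetical SSD location
DST_DIR = Path("/mnt/raid/backups/photoprism")  # hypothetical RAID location

def backup_sqlite(src: Path, dst_dir: Path) -> Path:
    """Copy the live database with sqlite3's online backup API, which stays
    consistent even if the service is writing at the same time."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / f"index-{date.today().isoformat()}.db"
    source = sqlite3.connect(src)
    target = sqlite3.connect(dst)
    try:
        source.backup(target)   # online backup API, Python 3.7+
    finally:
        source.close()
        target.close()
    return dst

if __name__ == "__main__":
    print(f"Backed up to {backup_sqlite(SRC, DST_DIR)}")
```

Something like that run from cron or a systemd timer each night is the idea.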

edit: I’m just now learning of ZFS caching which might be my answer.

  • mlg@lemmy.world · 2 points · 12 hours ago

    I have run Photoprism straight from mdadm RAID5 on some ye olde SAS drives with only a reduction in indexing speed (about 30K photos, which took ~2 hours to index with GPU TensorFlow).

    That being said, I’m in a similar boat doing an upgrade, and I have a few warnings I’ve found helpful:

    1. Consumer-grade NVMe drives are not designed for tons of write ops, so they should optimally only be used in RAID 0/1/10. RAID 5/6 will literally start with a massive parity rip across the drives, and the default timer for RAID checks on Linux is 1 week. The same goes for ZFS and mdadm caching; just proceed with caution (i.e. 3-2-1 backups) if you go that route. Even if you end up doing RAID 5/6, make sure you get quality hardware with decent TBW, as server-grade NVMe drives often have triple the TBW rating.
    2. ZFS is a load of pain if you’re running anything related to Fedora or Red Hat, and after lots and lots of testing the performance implications are still arguably inconclusive for a NAS/home-lab setup. Unless you rely on its specific feature set or are building an actual hefty storage node, stock mdadm and LVM will probably fulfill your needs.
    3. Btrfs has all the features you need, but its performance is a load of trash; I’d highly recommend XFS for its file-integrity features plus built-in data dedup, with mdadm/LVM for the rest.

    I’m personally going with scheduled backups from the NVMe to the RAID, because the caching just doesn’t seem worth it when I’m gonna be slamming huge media files around all day along with running VMs and other crap. For context, the 2TB NVMe brand I have is only rated for 1200 TBW. That’s probably more than enough for a file server, but for my homelab server it would just be caching constantly with whatever workload I’m throwing at it. It would still probably last a few years no issues, but SSD pricing has just been awful these past few years.
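
    On the TBW point, here’s a rough way to see how far into a drive’s rating you already are. Just a sketch: it assumes smartmontools 7+ (for --json output) and a drive at /dev/nvme0, and the 1200 figure is only the example rating from above.

    ```python
    #!/usr/bin/env python3
    """Rough check of lifetime writes on an NVMe drive vs. its TBW rating.

    Sketch only: assumes smartmontools 7+ (for --json output) and that the
    drive shows up as /dev/nvme0; swap in your own device and TBW rating.
    """
    import json
    import subprocess

    DEVICE = "/dev/nvme0"   # assumption -- adjust to your drive
    RATED_TBW = 1200        # manufacturer rating in terabytes written (example figure)

    def written_terabytes(device: str) -> float:
        """Read the NVMe 'Data Units Written' counter via smartctl.
        Per the NVMe spec, one data unit is 1000 * 512 bytes = 512,000 bytes."""
        # smartctl uses a bitmask exit status, so don't treat nonzero as fatal.
        out = subprocess.run(["smartctl", "--json", "-a", device],
                             capture_output=True, text=True).stdout
        log = json.loads(out)["nvme_smart_health_information_log"]
        return log["data_units_written"] * 512_000 / 1e12

    if __name__ == "__main__":
        tb = written_terabytes(DEVICE)
        print(f"{tb:.1f} TB written ({tb / RATED_TBW:.1%} of a {RATED_TBW} TBW rating)")
    ```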

    On a related note, Photoprism needs to upgrade to TensorFlow 2 so I don’t have to compile an antiquated binary for CUDA support.

    • ch00f@lemmy.world (OP) · 1 point · 9 hours ago

      Thanks for the tips. I’ll definitely at least start with mdadm since that’s what I’ve already got running, and I’ve got enough other stuff to worry about.

      Are you worried at all about bit rot? I hear that’s one drawback of mdadm/plain RAID vs. ZFS.
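
      The closest thing I can think of on plain mdadm is a periodic checksum sweep to at least detect rot. Rough sketch with made-up paths, and it’s only a stand-in for the per-block checksums ZFS does automatically, since it can flag a corrupted file but not heal it:

      ```python
      #!/usr/bin/env python3
      """Periodic checksum sweep to *detect* (not repair) silent corruption.

      Sketch only: paths are made up, and it only works well for mostly-static
      data like a photo archive -- a legitimately edited file will also flag.
      """
      import hashlib
      import json
      from pathlib import Path

      DATA_DIR = Path("/mnt/raid/photos")          # hypothetical data directory
      MANIFEST = Path("/mnt/raid/checksums.json")  # hypothetical manifest location

      def sha256(path: Path) -> str:
          """Hash a file in 1 MiB chunks so large media files don't blow up RAM."""
          h = hashlib.sha256()
          with path.open("rb") as f:
              for chunk in iter(lambda: f.read(1 << 20), b""):
                  h.update(chunk)
          return h.hexdigest()

      def sweep() -> None:
          """Compare every file against the last sweep's manifest, then refresh it."""
          old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
          new = {}
          for p in sorted(DATA_DIR.rglob("*")):
              if p.is_file():
                  rel = str(p.relative_to(DATA_DIR))
                  new[rel] = sha256(p)
                  if rel in old and old[rel] != new[rel]:
                      print(f"CHECKSUM MISMATCH: {rel}")
          MANIFEST.write_text(json.dumps(new, indent=2))

      if __name__ == "__main__":
          sweep()
      ```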

      Also, any word on when photoprism will support the Coral TPU? I’ve got one of those and haven’t found much use for it.