

I tend to use /opt/[service]/, for example /opt/forgejo/. It’s outside of any user’s home directory, and it seems to fit what the FHS 3.0 (Filesystem Hierarchy Standard) defines.


My, what could that possibly be?


This is the way! At least install security upgrades nightly using unattended-upgrades, and reboot from time to time to get the latest kernel version.
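On Debian/Ubuntu, the whole thing boils down to two small config files; roughly like this (from memory, so double-check the option names against your distro’s docs):

```
// /etc/apt/apt.conf.d/20auto-upgrades
// (dpkg-reconfigure -plow unattended-upgrades can generate this for you)
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt)
// optional: reboot automatically when an upgrade (e.g. a new kernel) requires it
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "04:00";
```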
Want to get REAL technical? Try a Ceph multi-site setup. I’ve only heard about it quite recently myself, so I don’t have any experience yet, but I think it might fit your needs. It would replicate your data across all locations though, so you’d have to have enough storage everywhere.


mate
Hey, this is a coffee joke! We don’t allow mate tea here!!!1!
Not OP, and haven’t done that (yet), but I think we really all should.


Oh yeah, there really was, thank you. :)


“…and I’d guess, given your username, that you speak German haha” Yup, that’s right. :D
I’ll still stick to English, so it can be understood in this English-language thread.
Yeah, you’re right, I wasn’t particularly on-topic there. :D I tried to address your underlying assumptions as well as the actual file format question, and it kinda derailed from there.
Sooo, file format… I think you’re restricting yourself too much if you only consider the formats included in binutils. You also have two conflicting goals here: compression (make the most of your storage) vs. resilience (have a format that is stable in the long term). Someone here recommended lzip, which is a solid answer on the compression side. The Wikipedia article I linked features a table comparing compressed archive formats, so that might be a good starting point for finding resilient formats. Look out for formats with at least an integrity check and possibly a recovery record, as these matter more than compression ratio here.
Once you have settled on a format, run some tests to find the best compression algorithm for your material. You might also want to measure throughput/time while you’re at it, to find variants that offer a reasonable compromise between compression and performance. If you’re so inclined, read a few format specs to find suitable candidates.
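If you want to script that comparison, here’s a quick sketch using only the Python standard library (the sample file name is a placeholder; feed it a representative, uncompressed chunk of your actual material):

```python
import bz2, gzip, lzma, time
from pathlib import Path

# placeholder: a representative, uncompressed sample of your data
data = Path("sample.tar").read_bytes()

codecs = {
    "gzip": gzip.compress,
    "bzip2": bz2.compress,
    "xz/lzma": lzma.compress,
}

for name, compress in codecs.items():
    start = time.perf_counter()
    compressed = compress(data)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(data)
    throughput = len(data) / (1024 * 1024) / elapsed
    print(f"{name:8} ratio={ratio:.3f}  throughput={throughput:.1f} MiB/s")
```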
You’re generally looking for formats that:

- are openly specified and have been stable for a long time,
- include integrity checks over the payload,
- ideally offer a recovery record, so small corruptions can be repaired,
- are widely supported, so tooling will still exist in a decade.
You can read up on how an actual archive handles these challenges at https://slubarchiv.slub-dresden.de/technische-standards-fuer-die-ablieferung-von-digitalen-dokumenten and in the specification PDFs linked there (all in German).


They all will, if the filesystem images aren’t pre-compressed themselves, and if OP is archiving raw image formats (DNG, CR2, …).


You’re asking the right questions, and there have been some great answers on here already.
I work at the crossover between IT and digital preservation in a large GLAM institution, so I’d like to offer my perspective. Sorry if there are any peculiarities in my comment; English is my 2nd language.
First of all (and as you’ve correctly realized), compression is an antipattern in DigiPres and adds risk that you should only accept if you know what you’re doing. Some formats do offer integrity information (MKV/FFV1 for video comes to mind, or the BagIt archival information package structure), including formats that use lossless compression, and these should be preferred.
You might want to check this list to find a suitable format: https://en.wikipedia.org/wiki/List_of_archive_formats -> Containers and compression
Depending on your file formats, it might not even be beneficial to use a compressed container, e.g. if you’re archiving photos/videos that already exist in compressed formats (JPEG/JFIF, h.264, …).
You can make your data more resilient by choosing appropriate formats not only for the compressed container but also for the payload itself. Find the significant properties of your data and pick formats accordingly, not the other way round. Convert before archival if necessary (the term is normalization).
You might also want to reduce the risk of losing the entirety of your archive by compressing each file individually instead of putting everything into one big archive. Bit rot is a real threat, and you probably want to limit the impact of flipped bits. Error rates for spinning HDDs are well studied and understood, and even relatively small archives tend to be within the size range for bit flips. I can’t seem to find the sources just now, but IIRC it was something like 1 bit in 1.5 TB for disks at write time; at that rate, even a modest 3 TB archive would statistically already contain a couple of flipped bits.
Also, there’s only so much you can do against bit rot on the format side, so consider using a filesystem that allows you to run regular scrubs, and do actually run them; ZFS or Btrfs come to mind. If you use a more “traditional” filesystem like ext4, you could at least add checksum files for all of your archival data that you can then use as a baseline for more manual checks, but these won’t help you repair damaged payload files. You can also create BagIt bags for your archive contents, because bags come with fixity mechanisms included; see RFC 8493 (https://datatracker.ietf.org/doc/html/rfc8493). There are even libraries and tools that help you verify the integrity of bags, so that may be helpful.
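To make the BagIt part concrete: there’s the bagit-python library from the Library of Congress. A minimal sketch, assuming a hypothetical archive directory /archive/photos-2024 that you want to protect with sha256 fixity values:

```python
import bagit  # pip install bagit

# one-time: turn the directory into a bag (the payload moves into data/,
# and manifests with sha256 checksums are written alongside it)
bag = bagit.make_bag("/archive/photos-2024", checksums=["sha256"])

# later, e.g. from a cron job: recompute all checksums and compare
bag = bagit.Bag("/archive/photos-2024")
if bag.is_valid():
    print("fixity intact")
else:
    print("checksum mismatch, time to check the second copy!")
```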
The disk hardware itself is a risk as well; having your disk lying around for prolonged periods of time might have an adverse effect on bearings etc. You don’t have to keep it running every day, but regular scrubs might help you detect early signs of hardware degradation. Enable SMART monitoring if possible. Don’t skimp on disk quality. If at all possible, purchase two disks (different make & model) to store the information.
DigiPres is first and foremost a game of risk reduction and an organizational process, even if we tend to prioritize the technical aspects of it. Keep that in mind at all times.
And finally, I want to leave you with some reading material on DigiPres and personal archiving in general.
I’ve probably forgotten a few things (it’s late…), but if you have any further questions, feel free to ask.
EDIT: I answered a similar thread a few months ago; see https://sh.itjust.works/comment/13922388


Very much this.


(…) 'cause it was quarter past eleven
on a Saturday in 1999
🎶🎶
To answer your questions: I stay on Bash, because it’s what’s largely used at work and I don’t have the nerve to constantly make the switch in my head. I tried nushell for a few minutes a few months ago, and I think it might actually be great as a human interface, but maybe not so much for scripting, idk.
It’s been a while for me and I can’t try things out atm, but I think vSphere SSH access is only for managing the appliance itself, not objects like VMs in a vSphere cluster. For those, you would have to use the Python SDK or PowerCLI.
If you run VMware, you can use PowerCLI to interact with your vSphere servers; PowerCLI requires PowerShell and uses similar syntax. I haven’t tried it on Linux yet, but I would assume that’s a valid use case.
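If you go the Python route, the SDK is pyVmomi. A rough sketch for listing the VMs in an inventory (host and credentials are placeholders, and you’d want proper certificate verification outside a lab):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect  # pip install pyvmomi
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)

# walk the inventory and collect all VM objects
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True
)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)

view.Destroy()
Disconnect(si)
```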


+1 for Fractal Design Node cases. I’ve used the Node 304 (Mini-ITX/Mini-DTX) for years as a NAS case and I love it!
I run a J5040 ITX board for my homelab needs; it was released a few years ago and has served me well, even though I run it with more RAM than the board specs allow. The natural successors of that are the Atom N100/N105 and the i3 N300/N305 (all one generation newer than the J5040) and AFAIK the Atom N150 and i3 N350 (two generations newer), all of which are available on ITX boards. Boards for the latest chips might be a bit rare though, and you might have to go to AliExpress to get one, but for the N100/105/300/305 there’s a wide variety available. Just make sure to get one with enough SATA ports for all your disks, so you can use it for NAS duty as well.
Disclaimer: I’m quite sure this is enough for your homelab/NAS use-case, but I’m not familiar with Minecraft requirements, and you might need beefier hardware for that. However, the above boards leave enough room in your budget for RAM, NVMe and HDDs, should deliver quite some bang for the little buck you have, and will barely sip energy, making cooling easy.


I’m not much of an expert on Bluetooth, but I would expect that you can create an override for the corresponding systemd service (bluetoothd perhaps, or some Logitech daemon) and make it depend on a target that is reached earlier in the boot process.
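Untested, but the drop-in might look something like this (service name and targets are guesses; check systemctl list-dependencies to see what is actually reached when on your system):

```
# sudo systemctl edit bluetooth.service
# (the drop-in ends up in /etc/systemd/system/bluetooth.service.d/override.conf)
[Unit]
# order the service earlier in boot: after a target that is reached early,
# but before the one it currently seems to wait for
After=sysinit.target
Before=basic.target
```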
Sorry that I can’t be more helpful…


Interesting option, I’m familiar with Git, YAML and yq. Thank you!
Yes, that’s what I meant, thanks for the clarification.