Not interested in an MIT-licensed coreutils. Thanks, but no thanks!
My media server, which is really just my one general-purpose server, is an old ThinkPad from 2014. For media I use Jellyfin, and I make sure the content is already in a format that won't require transcoding on any device I care to serve to (typically 1080p HEVC + AAC in an mp4 container).
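To get files into that shape ahead of time, something like this ffmpeg invocation works (a sketch: the filenames are placeholders, and the CRF and audio bitrate are just my assumptions, so tune to taste):

```shell
# Re-encode to 1080p HEVC video + AAC audio in an mp4 container,
# so Jellyfin can direct-play instead of transcoding on the fly.
# input.mkv / output.mp4 are placeholder names.
ffmpeg -i input.mkv \
  -c:v libx265 -preset medium -crf 23 -vf "scale=-2:1080" \
  -c:a aac -b:a 160k \
  -movflags +faststart \
  output.mp4
```

The `+faststart` flag moves the index to the front of the file, which helps playback start quickly over the network.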
If you look at the used computer market, there are endless options to attain what you are asking for. My only real advice is make sure the computer doesn’t draw much power and, if possible, doesn’t emit much or any fan noise. A laptop is a decent choice because the battery kind of serves as an uninterruptible power supply. I just cap my charge limit at 80% since I never unplug it.


Interesting writing. But my concern is that, as he said, social responsibility will be abandoned once the cost factor wins out. Anything GPL-licensed is under threat from an AI-based reimplementation. The cost of doing that only seems artificially low right now (these businesses are in their investment-hype phase, not their ROI phase), so it's not really the idea that anyone could do it that concerns me. The concerning part is that, whatever the price, bigger companies can absorb it, and can now direct their resources toward undoing the GPL everywhere while simultaneously replacing the labor needed to do it.


I’ve been self-hosting for years, but with a recent move comes a recent opportunity to do my network a bit differently. I’m now running a capable OpenWRT router, and support for AdGuard Home is practically built into OpenWRT. I just needed to install it and configure it right, and the documentation was comprehensive enough.
For years I had kept a Debian VM running Pi-Hole. I kept it ultra lean, with a cloud kernel, 3 GB of disk, and 160 MB of RAM, just so it could control its own network stack, and I’d set devices to use its IP address manually to be covered. AGH seems to be much the same thing as Pi-Hole. With my new setup the entire network is covered automatically, without having to configure any device. And yes, I know I could’ve done the same before by forwarding the DNS lookups to the Pi-Hole, but I was always afraid it would cause a problem for me and I’d need an easy way to back out of the adblocking. Subjectively, over about 6 years, only a couple of worthless websites ever blocked me.
I haven’t yet gotten to the point where I’m also trying to intercept hardcoded DNS lookups, but soon… It’s not urgent for me because I don’t have sinister devices that do that.
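For what it’s worth, when I do get to it, I expect it to look something like this firewall redirect on OpenWRT (a sketch, with a rule name of my own choosing; it NATs any LAN client’s port-53 traffic back to the router, where AGH is listening):

```
# /etc/config/firewall (fragment) -- 'Intercept-DNS' is my own rule name
config redirect
        option name 'Intercept-DNS'
        option src 'lan'
        option src_dport '53'
        option proto 'tcp udp'
        option target 'DNAT'
        # no dest_ip given, so traffic is redirected to the router itself
```

This only catches plain port-53 lookups; devices using DNS-over-HTTPS would need to be handled separately.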


Gavin Gruesome at it again


It even has RSS! Hell yeah let’s go


Wow how have I not seen these weekly roundups before? Cool little news digest
Sorry but I find this claim irreconcilable with how SLES and Fedora default to btrfs with their installations, or how a company like Meta uses it across their entire fleet.
I don’t know if Meta uses the raid feature directly or if they use, as you suggested, mdraid with btrfs on top. I know that that’s what Synology does.
I’m not sure what you are suggesting as the alternative. Nor do I know what silent btrfs corruption bug you are referring to. Btrfs has been widely deployed in enterprise and personal environments for years, and I cannot find evidence of data loss due to the file system itself.


Absolutely correct. I used to maintain rigorous whole-disk backups, and made sure my MacBook also had regular Time Machine backups and that kind of thing.
Then I realized there are actually tiers of important data. The most important stuff would be on the order of megabytes (tax documents, my lease, historical records of that stuff, and config files that I’ve built up over time).
Then I have my vacation photos and videos. Family photos. A few gigabytes. That’s not that much in the grand scheme and it’s still easy to back these up to a cloud service for minimal to no cost.
The rest of the data on my computer is easily recoverable or can be reconstructed with minimal effort. The OS install. The games. Media from online. I would not bother backing up this stuff.
Once this stuff is in perspective it’s very easy to devise a backup solution that fits your needs at an appropriate price. Not everyone has usage like mine and maybe their important data is much larger than mine is, but the point is we should think about which of the data is actually important, and not blindly duplicate pointless data.
Interesting. Looks like I have some reading to do.
Damn. And I thought 3 disks was risky…
three disks to get 6 TB / 2 = 3 TB of available space
Exactly! A happy outcome when using this FS.
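For anyone puzzled by the arithmetic: btrfs raid1 keeps exactly two copies of every chunk no matter how many drives are in the pool, so with equal-sized drives the usable space is roughly half the raw total (a sketch, assuming three 2 TB drives as in the example above):

```shell
# btrfs raid1 stores 2 copies of each chunk regardless of drive count,
# so usable capacity is about raw total / 2 (equal-sized drives assumed)
raw_tb=$((2 + 2 + 2))          # three 2 TB drives
usable_tb=$((raw_tb / 2))
echo "${usable_tb} TB usable"  # prints "3 TB usable"
```

This is unlike classic RAID1 mirroring across N drives, where usable space would be the size of a single drive.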
My concern with ZFS is that I use Fedora, so the kernel updates really frequently. I know ZFS kicks ass, but I just like having the filesystem built straight into the kernel I already have installed, so that I never have to deal with a “If kernel module can not be loaded, your kernel version might be not yet supported by OpenZFS. An option is to [switch to] an LTS kernel from COPR, provided by a third-party. Use it at your own risk” situation. (https://openzfs.github.io/openzfs-docs/Getting%20Started/Fedora/index.html)
I was unfamiliar with single mode. What advantage does it give me over RAID0 in terms of combining the drives’ capacities?
That’s a good point about scrubbing on RAID5. I don’t think I really want to spend time on that ever. RAID1 at least sounds less complex both in terms of setup and down-the-line maintenance.
With three drives, RAID1 doesn’t make sense.
It’s perfectly usable with a btrfs setup. If one drive fails, you can mount in a degraded state.
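For reference, a sketch of the kind of setup being discussed (the device names are placeholders; do not run this against disks you care about):

```shell
# PLACEHOLDER device names -- adjust for your system, data will be destroyed.
# -d raid1 / -m raid1: two copies of data and metadata across the pool,
# so with three equal drives roughly half the raw space is usable.
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc

# If one drive later fails, the survivors can still be mounted:
mount -o degraded /dev/sdb /mnt
```

The degraded mount is what makes the single-drive-failure case recoverable: you mount, add a replacement device, and rebalance.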


Yes it does. But I would guess it’s not yet as powerful as LaTeX either, though I couldn’t cite you specific examples.
The distinction is that BSD coreutils are not attempting to be a drop-in 1:1 compatible replacement of GNU coreutils. The Rust coreutils has already accomplished this with its inclusion into Ubuntu 26.04.
If I wanted a permissively licensed system, I’d use BSD. I don’t, so I primarily use Linux. I think citing a proprietary OS like macOS as a reason why permissively licensed coreutils are OK is kind of funny. It’s easy to forget that before the GPL there were many incompatible UNIX systems developed by different companies, and IMO the GPL has kept MIT- and BSD-licensed projects honest, so to speak. Without the GPL to keep things in check, we’d be back to how things were in the 80s.
So what’s next on the docket for Ubuntu? A permissively licensed libc?