

Your stuff is more likely to get scanned sitting in a VPS with no firewall than behind a firewall on a home network
Yeah, Stalwart seems to have a lot of momentum. I’ll probably be setting up a server on my kubernetes+ceph cluster this month.
Check out NixOS. It can build qcow images from scratch for you to import into Proxmox.
https://github.com/nix-community/nixos-generators
I have 8 bare-metal servers and I do everything automated with NixOS, I rarely ever access the servers directly.
Here are the nixos configs for my DHCP server and kubernetes servers that you can use as a base.
https://codeberg.org/jlh/h5b/src/branch/main/porygonz
https://codeberg.org/jlh/h5b/src/branch/main/nodes
For what it’s worth, I’ve been using Ansible off and on at work for 8 years, and I think it’s pretty outdated and clunky these days; there are much smarter ways to manage workloads now, such as Kubernetes, cloud-init, Terraform, and NixOS. If you don’t want to get into Kubernetes then definitely learn NixOS.
Not to mention there are 48 and 64GB DIMMs out now too that work with basically all Alder Lake Atoms.
Yeah, what you’re talking about is called GitOps. Using git as the single source of truth for your infrastructure. I have this set up for my home servers.
nodes has the NixOS configuration for my 5 kubernetes servers and a script that builds a flash drive for each of them to use as a boot drive (same setup for porygonz, but that’s my dedicated DHCP/DNS/NTP mini server).
mikrotik has a dump of my Mikrotik router config and a script that deploys the config from the git repo.
applications has all my kubernetes config: containers, proxies, load balancers, config files, certificate renewal, databases, clustered raid, etc. It’s all super automated. A pretty typical “operator” container to run in Kubernetes is ArgoCD, which watches a git repo and automatically deploys any changes or drift back to the Kubernetes API so it’s always in sync with git (there’s a sketch of an Application manifest below). I don’t use any GUI or console commands to deploy or update a container, I just edit git and commit.
The kubernetes cluster runs about 400 containers, most of them just automatic replicas of services for high availability. Of course there’s always some manual setup outside of git, like partitioning drives, joining the nodes to the cluster, writing hardware-specific config, and bootstrapping ArgoCD to watch git. But overall, my house could burn down tomorrow and I would have everything I need to redeploy using this git repo, the secrets git repo, and my backups of my databases and container /data dirs.
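For reference, a minimal Argo CD Application manifest looks something like this (the repo URL, path, and names here are placeholders, not my actual config):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                # placeholder name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://codeberg.org/example/homelab.git   # placeholder repo
    targetRevision: main
    path: applications/my-app                           # placeholder path within the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete cluster resources that were removed from git
      selfHeal: true   # revert manual changes back to whatever git says
```

With automated sync plus prune and selfHeal turned on, the cluster converges back to git even if someone pokes at it by hand.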
I think Portainer supports doing GitOps on Docker compose? Never used it.
https://docs.portainer.io/user/docker/stacks/add
Argocd is really the gold standard for GitOps though. I highly recommend trying out k3s on a server and running ArgoCD on it, it’s super easy to use.
https://argo-cd.readthedocs.io/en/stable/getting_started/
Kubernetes is definitely different than Docker Compose, and tutorials are usually written for a Docker compose.yml, not Kubernetes Deployments, but it’s super powerful and automated. Very hard to crash once you have it running. I don’t think it’s as scary as a lot of people think, and you definitely don’t need more than one server to run it.
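To give a feel for the difference, a single service from a compose.yml maps roughly onto a Deployment like this (image and names are just examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # example name
spec:
  replicas: 2                  # Kubernetes keeps this many copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27    # same image line you'd put in compose.yml
          ports:
            - containerPort: 80
```

More YAML than compose, but in exchange you get rolling updates, restarts, and replicas for free.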
nah you’re probably not going to get any benefits from it. The best way to make your setup more maintainable is to start putting your compose/kubernetes configuration in git, if you’re not already.
Ah, no, Kopia uses a shared bucket.
Seems like a good way to do it.
Keep in mind Kopia has some weirdness when it comes to transferring repos between filesystem and S3, so you’d probably want to only keep one repo.
https://kopia.discourse.group/t/exported-s3-storage-backup/3560
Backblaze B2 is a cheap S3 provider. Hetzner storage box is even cheaper, but it doesn’t support S3 natively, so you’re likely to run into issues with the kopia repo compatibility I mentioned.
PHP does actually scale better than something like Lemmy, which is written in Rust
But sure, you can act like you know more than the Nextcloud devs
Isn’t Opencloud just extended Nextcloud? (Still PHP)
Also, Nextcloud’s core components are written in Rust; the PHP just handles incoming requests.
https://nextcloud.com/blog/nextcloud-faster-than-ever-introducing-files-high-performance-back-end/
Does antimatter have mass?
There is some contention about whether this can necessarily be attributed to the tariff. The Great Depression was already in motion before Smoot-Hawley, mainly due to financial instability, falling demand, and poor banking practices. However, the tariff worsened the crisis by shrinking global trade, hurting farmers, and reducing employment in export-dependent industries. Had it not passed, the Depression still would have occurred, but perhaps with less severity.
Monetarists, such as Milton Friedman, who emphasized the central role of the money supply in causing the depression, considered the Smoot–Hawley Act to be only a minor cause of the Great Depression in the United States.
https://en.wikipedia.org/wiki/Smoot–Hawley_Tariff_Act
yeah maybe my nuance leaned too much to the no side, but I wanted to explain tariffs a bit. Trump tariffs are not protectionism or coercion, they’re just stupid.
There is some nuance here. Smoot-Hawley didn’t cause the Great Depression, and there are a lot of economists who say it didn’t have that much of an effect at all.
Tariffs can have some useful effects when used for protectionism, diplomatic coercion, or trade-barrier-reduction coercion. However, Trump’s tariffs are way dumber than anything that came before, because he’s trying to do all three of these at once. All of these have conflicting effects on each other, and it is literally impossible to design a tariff strategy that accomplishes all three, since raising a tariff for one purpose means that you need to lower tariffs for the other purposes. All he’s doing by raising tariffs across the board is causing instability in the economy and convincing all partners to ditch the US.
Buy used Samsung PM983s on eBay. Super cheap, super fast, and they have power-loss protection. Only downside is that they’re M.2 22110, not M.2 2280. There’s also a bunch of cheap Samsung and HGST U.2 drives on eBay, but you’ll need an adapter.
The SQLite database that Jellyfin uses tends to get corrupted easily, especially if the disk gets full.
The main big feature that the Jellyfin devs are working on right now is a complete overhaul of the internal database system.
Yeah, I think you pick up things from all over the place as a consultant. I see lots of different environments and learn from them.
Ah yeah, the external-dns operator is great! It’s maybe a bit basic at times, but it’s super convenient to just have A/AAAA records appear for all your loadbalancer svcs and HTTPRoutes. Saves a ton of time.
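For anyone who hasn’t used it, this is roughly what it looks like on a plain LoadBalancer Service — external-dns reads the hostname annotation and creates the record (the service name and hostname here are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: whoami                 # made-up service
  annotations:
    # external-dns picks this up and creates the A/AAAA record for the LB IP
    external-dns.alpha.kubernetes.io/hostname: whoami.example.com
spec:
  type: LoadBalancer
  selector:
    app: whoami
  ports:
    - port: 80
      targetPort: 8080
```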
That’s super unfortunate that the certs are siloed off. Maybe they can give you an NS record for a subdomain for you to use ACME on? I’ve seen that at some customers. Super important that all engineers have access to self-service certs, imo.
Rook is great! It definitely can be quite picky about hardware and balancing, as I’ve learned from trying to set it up with two nodes at home with spare hdds and ssds 😅 Very automated once it’s all set up and you understand its needs, though. NFS provisioner is also a good option for a storageclass as a first step, that’s what I used in my homelab from 2021 to 2023.
Here’s my Rook config:
https://codeberg.org/jlh/h5b/src/branch/main/argo/external_applications/rook-ceph-helm.yaml
https://codeberg.org/jlh/h5b/src/branch/main/argo/custom_applications/rook-ceph
Up to 3 nodes and 120TiB now, and I’m about to add 4 more nodes. I probably would recommend just automatically adding disks instead of manually adding them; I’m just a bit more cautious and manual with my homelab “pets”.
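For anyone comparing, the knob I mean lives in the CephCluster CR. A rough sketch (not my actual config — node and device names are placeholders, and the rest of the cluster spec is omitted):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # cephVersion, mon/mgr settings, dashboard, etc. omitted for brevity
  storage:
    useAllNodes: false     # set both of these to true and Rook consumes every empty disk automatically
    useAllDevices: false
    nodes:
      - name: node-a       # placeholder hostnames
        devices:
          - name: sdb      # placeholder devices, listed explicitly ("pet" style)
      - name: node-b
        devices:
          - name: nvme0n1
```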
I’m not very far on my RHCE yet tbh 😅 Red hat courses are a bit hard to follow 😅 But hopefully will make some progress before the summer.
The CKA and CKS certs are great! Some really good courses for those on udemy and acloudguru, there’s a good lab environment on killer.sh, and the practice exams are super useful. I definitely recommend those certs, you learn a lot and it’s a good way to demonstrate your expertise.
Well, my point was to explain how Kubernetes simplifies devops to the point of being simpler than most proxmox or Ansible setups. That’s especially true if you have a platform/operations team managing the cluster for you.
Some details missed here are that the external-dns and cert-manager operators usually handle the DNS records and certs for you in k8s; you just have to specify the hostname in the HTTPRoute/VirtualService and in the Certificate. For storage, Ansible probably simplifies some of this away, but LVM is likely more manual to set up and manage than pointing a PVC at a storageclass and saying “100Gi”.
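As a rough sketch of what I mean (hostname, issuer, and storageclass names are placeholders):

```yaml
# cert-manager: you declare the cert, the operator requests and renews it
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-tls                 # placeholder
spec:
  secretName: myapp-tls
  dnsNames:
    - myapp.example.com
  issuerRef:
    name: letsencrypt             # assumes a ClusterIssuer with this name exists
    kind: ClusterIssuer
---
# storage: point a PVC at a storageclass and say "100Gi"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data                # placeholder
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-block    # placeholder storageclass
  resources:
    requests:
      storage: 100Gi
```

The CSI driver behind the storageclass carves out the volume; no LVM commands involved.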
Either way, I appreciate the discussion, it’s always good to compare notes on production setups. No hard feelings even in the case that we disagree on things. I’m a Red Hat Openshift consultant myself these days, working on my RHCE, so maybe we’ll cross paths some day in a Red Hat environment!
You’re not using a reverse proxy on RHEL, so you’ll also need to make sure the ports you want are available, set up a DNS record for it, and set up certbot.
On k8s, I believe Istio gateways are meant to be reused across services. You’re using a reverse proxy, so the ports will already be open and there’s no need for firewall-cmd. What would be wrong with the Service included in the elasticsearch chart?
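To illustrate the reuse point, with the Gateway API an HTTPRoute just attaches to a gateway that already exists — the shared-gateway name and namespace here are assumptions, not anything from your setup:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: elasticsearch             # placeholder
spec:
  parentRefs:
    - name: shared-gateway        # assumed pre-existing Gateway, reused by many routes
      namespace: istio-ingress    # placeholder namespace
  hostnames:
    - elastic.example.com         # placeholder hostname
  rules:
    - backendRefs:
        - name: elasticsearch     # the Service from the chart
          port: 9200
```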
It’s also worth looking at the day 2 implications.
For backups you’re looking at bespoke cronjobs to either rsync your database or clone your entire 100gb disk image, compared to either using velero or backing up your underlying storage.
For updates, you need to run system updates manually on rhel, likely requiring a full reboot of the node, while in kubernetes, renovate can handle rolling updates in the background with minimal downtime. Not to mention the process required to find a new repo when rhel 11 comes out.
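For the backup side, a Velero Schedule is about this much YAML (the name, namespace list, and timings are placeholders):

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly                   # placeholder
  namespace: velero
spec:
  schedule: "0 3 * * *"           # plain cron syntax, 03:00 every night
  template:
    includedNamespaces:
      - my-app                    # placeholder namespace
    ttl: 720h                     # keep backups for 30 days
```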
There’s much more tooling for containerd containers than there is for LXC
All home routers have NAT, which functions as a firewall, but VPSes don’t come with any firewall by default, so you’d have to set one up. Also, VPS IP ranges seem to be hotter targets for scanning.