

I’d love to believe this is to weed out the bad applicants.
People who answer “lol, I just want a job” actually get the interviews


3x Minisforum MS-01


Running a NAS on bare metal makes sense.
It can then correctly interact with the raw disks.
You could pass an entire HBA card through to a VM, but I feel like it should be horses for courses.
Let a storage device be a storage device, and let a hypervisor be a hypervisor.


“especially once a service does fail or needs any amount of customization.”
A failed service gets killed and restarted. It should then work correctly.
If it fails to recover after being killed, then it’s not a service that’s fully ready for containerisation.
So, either build your recovery process to account for this… or fix it so it can recover.
That’s often why databases are run separately from the services: databases can recover from this, and the services stay stateless - it doesn’t matter how many you run or restart.
As for customisation, if it isn’t exposed via env vars then it can’t be altered.
If you need something beyond the env vars, then you use that container as a starting point and make your customisation part of your own container build process via a Dockerfile (or equivalent).
It’s a bit like saying “chisels are great. But as soon as you need to cut a fillet steak, you need to sharpen a side of the chisel instead of the tip of the chisel”.
It’s using a chisel incorrectly.
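A minimal sketch of the Dockerfile route (the base image, file name and path here are hypothetical):

```dockerfile
# Use the upstream image as the starting point and bake the customisation
# into your own build, rather than poking at a running container.
FROM nginx:1.27

# Hypothetical tweak that isn't exposed via env vars:
# ship your own config as part of the image.
COPY custom-nginx.conf /etc/nginx/conf.d/default.conf
```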


I would always run Proxmox to set up Docker VMs.
Then I found Talos Linux, a dedicated distro for Kubernetes, which aligned with my desire to learn k8s.
It was great. I ran it bare metal on a 3-node cluster. I learned a lot, I got my project completed, everything went fine.
I will use Talos Linux again.
However, next time I’m running Proxmox with 2 VMs per node: 3 Talos control-plane VMs and 3 Talos worker VMs.
I imagine running 6 Talos servers is the way to go. Running them hyperconverged (control plane and workloads on the same nodes) was a massive pain. Separating the control plane from the worker/data plane makes sense; it’s the way k8s is designed.
It wasn’t the hardware that had issues, but various workloads. And being able to restart or wipe a control node or a worker node would’ve made things so much easier.
Also, why wouldn’t I run Proxmox?
The overhead is minimal, I get a nice overview, a nice UI, and I get snapshots and backups


I’ve never installed a package on Proxmox.
I’ve BARELY interacted with the CLI on Proxmox (I have a script that creates a nice Debian VM template, and occasionally I have to forcibly kill a VM).
What would you even install on Proxmox?!


“God will protect us. He has sent judgement on those unworthy” also contributes. Not directly eugenics, but damn fucking close


What a fantastic reply. Thank you


Hmm, telescope pointing downwards, huh?


What about liquid particles in the flatulence phase-changing and lowering the temperature? (Like how an evaporative swamp cooler works)


I’d still run k8s inside a Proxmox VM. Even if basically all resources are dedicated to that VM, Proxmox gives you a huge amount of oversight and additional tooling.
Proxmox doesn’t have to do much (or even anything) beyond provide a virtual machine.
I’ve run Talos OS (a dedicated k8s distro) bare metal. It was fine, but I wish I’d had a hypervisor. I was lucky that my project could be wiped and rebuilt with ease. Having a hypervisor would have meant I could just roll back to a snapshot, and separate worker/master nodes without running additional servers.
This was sorely missed when I was both learning the deployment of k8s, and k8s itself.
For the next similar project, I’ll run Talos inside Proxmox VMs.
As far as “how does Cloudflare work in k8s”… however you want?
You could manually deploy the example manifests provided by Cloudflare.
Or perhaps there are some helm charts that can make it all a bit easier?
Or you could install an operator, which will watch for its custom resources (or specific metadata on standard resources), then deploy and configure the additional resources needed to make it all work.
https://github.com/adyanth/cloudflare-operator seems popular?
I’d look to reduce the amount of YAML you have to write/configure by hand, which is why I like operators
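For the manual-manifest route, the core of it is roughly a Deployment running cloudflared with a tunnel token. This is only a sketch: the names and the Secret are assumptions, and Cloudflare’s own example manifests are the better reference.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:latest
          args: ["tunnel", "--no-autoupdate", "run"]
          env:
            - name: TUNNEL_TOKEN           # remotely-managed tunnel token
              valueFrom:
                secretKeyRef:
                  name: cloudflared-token  # hypothetical Secret holding the token
                  key: token
```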


The bad news is you have Towerful Inclusion Syndrome, where you try to add to an excellent joke but end up making it not funny by beating a dead horse and over explaining things and failing to feel included in the social occasion


That’s it re-stoking the internal combustion engine. It’s perfectly fine


I really wish there was a way to enforce transparency of Docker env vars.
I get that it’s impossible to make it part of Docker itself: env vars get parsed by application code and turned into variables, so there’s no way Docker can enforce it. A null/undefined check with a default value is all it would take to subvert any check Docker could do, and every language reads env vars in its own way (e.g. .env files, environment init scripts, whatever).
And even then, the env var value could be passed through a ridiculous chain of assignments and checks.
And, some of those ‘get env var’ routines could be conditional. Not all projects capture all env vars during some initial routine.
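As a hypothetical sketch (TypeScript, all names made up), this is all it takes for an env var to exist without ever being visible to Docker or documented anywhere:

```typescript
// If CACHE_TTL_SECONDS is unset, the fallback silently kicks in and the
// variable never shows up in image metadata, logs or errors.
const cacheTtlSeconds = Number(process.env.CACHE_TTL_SECONDS ?? "300");

// Worse: a conditional read means the env var is only consulted on some code
// paths, so even reading the startup routine won't surface it.
function resolveUpstreamUrl(region?: string): string {
  if (region === "eu") {
    return process.env.EU_UPSTREAM_URL ?? "https://eu.upstream.internal";
  }
  return process.env.UPSTREAM_URL ?? "https://upstream.internal";
}
```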
I’ve spent hours (maybe days) trawling through undocumented env vars trying to figure out their purpose, in order to leverage them in docker/k8s stacks.
I wish there was something.
Thankfully, a bit of time spent with a FOSS project and reviewing the code does shed light on hidden env vars.
And a PR or 2 gets comments and documentation updated.
Open source is awesome


Interesting, I might check them out.
I liked garden because it was “for kubernetes”. It was a horse and it had its course.
I had the wrong assumption that all those CD tools were specifically tailored to run as workers in a deployment pipeline.
I’m willing to re-evaluate my deployment stack, tbh.
I’ll definitely dig more into flux and ansible.
Thanks!


Oh, operators are absolutely the way for “released” things.
But on bigger projects with lots of different pods etc, it’s a lot of work to write all the CRDs, hook all the events, and write all the code that actually deploys the pods etc.
As with helm charts, I don’t see the point for personal projects. I’m not sharing it with anyone, so I don’t need the helm/operator abstraction for it.
And something like cdk8s will generate the yaml for you to inspect. So you can easily validate that you are “doing the right thing” before slinging it into k8s.


Everyone talks about helm charts.
I tried them and hate writing them.
I found garden.io, and it makes a really nice way to consume repos (of helm charts, manifests etc) and apply them in a sensible way to a k8s cluster.
Only thing is, it seems to be very tailored to a team of developers. I kinda muddled through with it, and it made everything so much easier.
I massively appreciate that helm charts are used by most projects, and they make sense for something you are going to share.
But for a solo project, or just consuming other people’s projects, I don’t think they really solve a problem.
Which is why I used garden.io. It’s designed for deploying Kubernetes manifests, and I found it had just enough tooling to make things easier.
Though, if you are used to ansible, it might make more sense to use ansible.
Pretty sure ansible will be able to do it all in a way you are familiar with.
As for writing the manifests themselves, I find it rare I need to (unless it’s something I’ve made myself). Most software has a k8s helm chart. So I just reference that in a garden file, set any variables I need to, and all good.
If there aren’t helm charts or kustomize files, then it’s adapting a docker compose file into manifests. Which is manual.
Occasionally I have to write some custom resources, config maps or secrets (CMs and secrets are easily made in garden).
I also prefer to install operators instead of the raw service. For example, I use CloudNativePG to set up Postgres databases.
I create a Cluster resource that defines the database, and CNPG automatically provisions all the storage, pods, services, config maps and secrets.
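Roughly what one of those CNPG resources looks like (the name and sizes here are made up; the CNPG docs cover the full spec):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: app-db          # hypothetical database cluster name
spec:
  instances: 2          # primary plus one replica
  storage:
    size: 5Gi           # CNPG provisions the PVCs for you
```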
The way I use Kubernetes for my projects is:
Apply all the infrastructure stuff (gateways, MetalLB, storage provisioners etc) from helm charts (or similar).
Then apply all my pods, services, certificates etc from hand-written manifests.
Using garden, I can make sure things are deployed in the correct order: operators are installed before their custom resources are applied, secrets/CMs are created before being referenced, etc.
If I ever have to wipe and reinstall a cluster, it takes me 30 minutes or so from a clean TalosOS install to the project up and running, with just 3 or 4 commands.
Any on-the-fly changes I make, I ensure I back port to the project configs so when I wipe, reset, reinstall I still get what I expect.
However, I have recently found https://cdk8s.io/ and I’m meaning to investigate that for creating the manifests themselves.
Write code in a typed language, and have cdk8s generate the raw YAML manifests. Seems like a dream!
I hate writing YAML. Autocomplete is useless (the editor has no idea what shape the YAML doc should take), auto-formatting is useless (mostly because YAML is whitespace-sensitive, and the editor has no idea what is a child and what is a new parent). It just feels ugly and clunky.
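A rough sketch of what that looks like (assuming cdk8s with the typed bindings generated by `cdk8s import`; all the names here are made up):

```typescript
import { Construct } from 'constructs';
import { App, Chart } from 'cdk8s';
// Typed Kubernetes bindings generated by `cdk8s import`
import { KubeDeployment } from './imports/k8s';

class MyChart extends Chart {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    // A typed construct instead of hand-written YAML: the editor can
    // autocomplete and type-check every field.
    new KubeDeployment(this, 'web', {
      spec: {
        replicas: 2,
        selector: { matchLabels: { app: 'web' } },
        template: {
          metadata: { labels: { app: 'web' } },
          spec: {
            containers: [
              { name: 'web', image: 'nginx:1.27', ports: [{ containerPort: 80 }] },
            ],
          },
        },
      },
    });
  }
}

const app = new App();
new MyChart(app, 'my-chart');
app.synth(); // writes the rendered YAML manifest into dist/
```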


So uplink is 500/500.
LAN speed tests at 1000/1000.
WAN is 100/400.
VPN is 8/8.
I’m guessing the VPN is part of your homelab? Or do you mean a generic commercial VPN (like PIA or Proton)?
How does the domain resolve on the LAN? Is it split horizon (so a local IP on the LAN, and the public IP on public DNS)?
Is the homelab on a separate subnet/vlan from the computer you ran the speed test from? Or the same subnet?
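One quick way to check the split-horizon question (hypothetical domain and resolver IPs): ask your LAN resolver and then a public one, and see if the answers differ.

```
dig +short home.example.com @192.168.1.1   # your LAN DNS server
dig +short home.example.com @1.1.1.1       # a public resolver
```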


I hear that the US has oil and WMDs