

O.o I had no idea the specs had improved so much on these things. Range of over 80 miles? That’s insane.
Thank you for the sauce, I really appreciate it! (°▽°)/
Whhhaaattt?? 60mph?! What brand, I must know!
That's just how IPv6 works. Your ISP delegates a prefix to your router, and then any device within it gets its own unique address. Considering how large the pool is, all addresses are globally unique. No NAT means no port forwarding needed!
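To put rough numbers on it, here's a tiny sketch with Python's ipaddress module (the /56 delegation is a made-up documentation-range prefix, not anything an ISP would actually hand out):

```python
import ipaddress

# Hypothetical delegated prefix (from the 2001:db8::/32 documentation range)
delegated = ipaddress.ip_network("2001:db8:abcd:100::/56")

# A /56 delegation lets the router carve out 2^(64-56) = 256 separate /64 LANs
lans = list(delegated.subnets(new_prefix=64))
print(len(lans))   # 256

# Every device on a LAN gets its own globally unique address -- no NAT needed
device = lans[0][1]
print(device)      # 2001:db8:abcd:100::1
```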
Right? My flake is pretty complex at this point. I use it for over 6 computers: my storage server, compute servers, a VPS, etc. It's been perfectly stable for over 3 years. I update with the release cycle every 6 months. I've never needed more than a small change here or there, and it usually warns me of deprecations ahead of time.
Thankfully I've only needed to roll back twice, and it worked perfectly both times. Lost no data and kept working while I waited for a fix. If my flake ever blows up completely I'll switch… but I doubt that will happen lol
The rules still apply to the host, just not to the container. Docker simply bypasses the rules. If you block all ports but have port 81 published like you do in that section of your docker-compose, you'd think UFW would block it, but that's not the case: going to http://yourip:81/ will show the NPM GUI even if you specifically use UFW to block 81. If you only publish ports 80 and 443, you should be fine. Your NPM container would have to be compromised first, and then they'd have to break out of the container.
Also, I think your issue is with your DNS. You should have an A record for example.com pointing to your IP, and then a CNAME record for sub.example.com pointing to example.com.
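In zone-file terms it'd look something like this (203.0.113.10 is just a placeholder documentation IP):

```
example.com.      IN  A      203.0.113.10
sub.example.com.  IN  CNAME  example.com.
```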
Docker completely ignores UFW rules. If you check your iptables you'll see Docker's rules are inserted before UFW's. For the 504 though, it sounds like traffic is not reaching NPM. Have you published ports 80 and 443 to the docker container?
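For reference, a rough sketch of the ports section with the jc21 NPM image, where only 80/443 are published publicly and the admin UI is pinned to loopback (image tag and layout are assumptions, adjust to your compose file):

```yaml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"       # public HTTP
      - "443:443"     # public HTTPS
      # bind the admin UI to loopback only, so Docker never publishes
      # it on the WAN interface -- regardless of what UFW says
      - "127.0.0.1:81:81"
```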
I use headscale on a VPS as an ingress point into my network and I love it. On top of headscale, I use two instances of traefik to tie the network together. One instance of traefik runs on the VPS and serves a couple of services that I want running 24/7 (headscale-ui is nice). It pulls a TLS certificate for its subdomain, so any services under, say, *.vps.example.com get routed to the VPS.
Then I have a wildcard TCP rule pointing the rest of the network traffic to my home server through headscale. My home server is running another instance of traefik where all my services are running. This pulls another wildcard cert for the rest of the *.example.com subdomains.
The cool thing about this setup is I can have my DNS server rewrite *.example.com to my server's LAN IP. Now when my device is home, it works even when the WAN is out. But when I'm out and about, it hits public DNS and goes through my VPS. With traefik I can write a negated !ClientIP rule and essentially block the VPS. Now I can host a service at home but block it from being accessed publicly. And if I need access to the LAN remotely, I can just use a tailscale client, get into headscale, and see everything.
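As a rough sketch of that rule (Traefik v3 dynamic config; the hostname, tailnet IP, and backend URL are all made up for illustration):

```yaml
http:
  routers:
    private-app:
      # match the host, but refuse anything that arrived via the VPS's
      # tailnet address (hypothetical IP) -- LAN clients still get through
      rule: "Host(`private.example.com`) && !ClientIP(`100.64.0.1`)"
      service: private-app
  services:
    private-app:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8080"
```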
It's an odd network, but it's super flexible and works very well for my use case. If you have any questions I'd love to help you set something like this up :D
The overlap between docker containers needs to happen from the perspective inside the containers. If you tell Radarr to pull a movie from bittorrent, they both need to "be in the same spot": if bittorrent thinks it's saving a movie to /data/torrent, then Radarr also needs to see the movie at /data/torrent.
That's why so many guides use the /data/ layout. It's just easy to use and implement. Side note: for hard links to work, all the folders need to be on the same drive (same filesystem, really). Can't hard link between different drives.
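A minimal sketch of that in docker-compose (the host path /srv/data is an assumption; the point is both containers mount the same host folder at the same container path):

```yaml
services:
  qbittorrent:
    volumes:
      - /srv/data:/data   # saves to /data/torrents inside the container
  radarr:
    volumes:
      - /srv/data:/data   # sees the exact same /data/torrents, so hard
                          # links work (same filesystem underneath)
```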
Ah sorry to hear that. Did you find something better that works for you? I’m open to suggestions :D
I followed along with the NixOS wiki for kubernetes, and creating the "master" kubelet is super easy when you set easyCerts = true. Problem is, it spits out files to /var/lib/kubernetes/secrets/ that are owned by root. Specifically, the cluster-admin.pem file. If I want to push commands to the cluster using kubectl, I have to elevate to a root shell. I could just chmod or chown the file, but that seems like a security risk.
Now I'm not familiar with k8s at all. This is my first go-through, so I could be doing something wrong or missing a step. I saw something about role-based access control but I haven't jumped down that rabbit hole yet. Any tips for running kubectl without root?
I'm working on my first kubernetes cluster. I'm trying to set the systems up with NixOS. I can get a kubelet and a control plane running, but I'm getting permission errors when trying to use kubectl rootless on the system running the control plane. I think I figured out which file I need to change; now I just want to record that change in my configuration.nix.
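The change I'm experimenting with recording in configuration.nix looks roughly like this (group name and username are placeholders, and I'm not sure it's the blessed way to do it):

```nix
# Grant a non-root user read access to the admin cert via a dedicated
# group, instead of chown-ing the file by hand after every rebuild
users.groups.k8s-admin = { };
users.users.alice.extraGroups = [ "k8s-admin" ];
systemd.tmpfiles.rules = [
  # "z" adjusts mode/ownership of an existing file
  "z /var/lib/kubernetes/secrets/cluster-admin.pem 0640 root k8s-admin -"
];
```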
Ohhh come on now, you've got to see the irony here. Don't you get tired of repeatedly adding that license? No, of course not. You just like the attention, it's okay lol I won't tell anyone your secret ;)
I wish I had set up an identity management system sooner. I've been self-hosting for years, and about a year ago I took the full plunge into putting all my services behind Authentik. It's a game changer not having to deal with all the usernames and passwords.
In a similar vein, before Authentik I used Vaultwarden to manage all my credentials. That was also a huge game changer with my significant other. Being able to have them set up their own account and then share credentials through an organization is super handy.