I created a short tutorial on using subdomains to access services hosted within my home network. Thought I would share it here in case anyone finds it useful.
This is the first time I’ve made a technical tutorial, so apologies if there are mistakes or it’s confusing. Feedback would be appreciated.
I did similar with caddy. I own a domain and my server runs pihole, which is configured as the DNS server. So what I did was set up caddy to create local subdomains that are only reachable from my network. For example, subdomain.mydomain.com works only from home. It works with SSL as well.
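As a rough sketch, a Caddyfile entry for that kind of LAN-only subdomain could look like the following (the hostname, upstream address, and port are made-up examples; tls internal uses Caddy's own local CA, whereas a DNS-01 challenge would be the way to get a publicly trusted cert for a name that never resolves outside the LAN):

    # Caddyfile sketch; names and addresses are placeholders
    subdomain.mydomain.com {
        # issue the cert from Caddy's internal CA instead of a public one
        tls internal
        # hand the request to the service running on the LAN
        reverse_proxy 192.168.0.11:8123
    }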
I use caddy as well, but with headscale too so I can access it from wherever I am.
Thanks for sharing!
I can recommend nginx-proxy-manager for people who do not want to fiddle with proxy config files.
It works well, but if you want to do ‘custom stuff’ (like hosting a matrix instance) you’ll be out of luck
I am once again recommending that you not expose any services to the internet except a VPN
Good as a general recommendation.
I also feel like the risk levels are very different. If it’s something that performs a function but doesn’t save/serve any custom data (e.g. bentopdf), that’s a lot easier to decide to do than something complicated like Jellyfin.
I do have public addresses for Matrix, overleaf, AppFlowy, immich because they would be much less useful otherwise. Haven’t had any problems yet, but wouldn’t necessarily recommend it to others.
I’d never host any stuff with “Linux ISOs” on a public address, that seems like it’d be looking for trouble.
Doesn’t matter. Any exposure risks compromise. From there, an attacker could pivot to read your data, mine cryptocurrency on your device(s), serve objectionable material, or carry out other unsavory activities.
Even if you have authentication enabled, not all APIs require authentication. Jellyfin in particular is not designed to be internet-facing. And even if it does require authentication, authentication bypass attacks are a thing.
If you really want to secure your computer, encase that puppy in concrete (after disconnecting it from power).
What do you mean Jellyfin is not designed to be internet-facing??? https://jellyfin.org/docs/general/post-install/networking/
Designed, in this case, meaning intended to be exposed.
More of an internal thing.
VPNs, on the other hand, are designed to be exposed. Same with some SSH servers or reverse proxies like Traefik, nginx, etc.
So you mean the Jellyfin ports should not be directly exposed, but self-hosting and exposing nginx to forward the traffic locally to Jellyfin is fine?
Better rather than worse, yes.
Just need to be aware of what you expose, and how, and where.
So it’s also not designed to be exposed via nginx?
… sure. Nothing here is wrong, but there are ways to try to mitigate that. And then it’s kind of an arms race, and vigilance.
I’m not sure that’s gonna work with my Jellyfin 🤔🫤
I mean it WOULD work, you would just need a VPN on every device you wanted to use.
The REAL answer is never host them DIRECTLY, always use a reverse proxy like nginx. Many projects (I believe Jellyfin is one of them) explicitly recommend this for better security. Which it looks like you did, so congrats.
For extra bonus points you can set up nginx to run as a non-privileged user and use iptables to forward the lower ports (80/443). A pain, but it closes out a large chunk of nginx as a risk.
Throw in fail2ban as well.
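For reference, a minimal sketch of an SSH jail (the thresholds are arbitrary examples, not recommendations):

    # /etc/fail2ban/jail.local (sketch; values are examples)
    [sshd]
    enabled  = true
    maxretry = 5
    # ban for an hour after 5 failures within 10 minutes
    findtime = 600
    bantime  = 3600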
Eh, I just use a pubkey-only auth config (so passwords are entirely disabled as an option) and put SSH on a non-standard port to reduce script-kiddie noise. (And no, 2222 is not non-standard; it may as well be the default.)
Fail2ban triggers false positives too often for my taste in a high-traffic environment.
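For anyone wanting to copy the pubkey-only, non-standard-port setup above, it boils down to a few sshd_config lines, roughly like this (the port number is an arbitrary example; keep an existing session open while testing):

    # /etc/ssh/sshd_config (relevant lines only)
    Port 41235
    PubkeyAuthentication yes
    PasswordAuthentication no
    # ChallengeResponseAuthentication on older OpenSSH releases
    KbdInteractiveAuthentication no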
Did you learn that about my jellyfin by looking at my post history or connecting to my server? 😀
It’d struggle a bit on some setups, like when I’m using the Jellyfin app on my GF’s smart TV… Or explaining to a friend in Europe how to set up a VPN without breaking anything else on his network… It is a risk for sure.
I’m too tired right now to parse what you mean about the port forwarding. I guess the idea is to reduce the impact if Jellyfin were breached or exploited. If you’re up for it, can you explain more about why that relates to port forwarding?
If you ran nginx as a non-privileged user it wouldn’t be able to bind to 80/443, as those are privileged ports. So you would need to use iptables to forward them to an unprivileged port.
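Concretely, that could look something like this, assuming nginx listens on 8080/8443 as the unprivileged user and eth0 is the WAN-facing interface (both assumptions):

    # redirect the privileged ports to the unprivileged ones nginx actually binds
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j REDIRECT --to-ports 8080
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j REDIRECT --to-ports 8443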
Ah, gotcha! Thanks.
I feel like that information could have multiple implications in future. Thank you!
I personally use caddy as a reverse proxy
Yeah, it was honestly changing the router settings that was the hardest part for me, exposing ports 22 and 80. Caddy was really easy to use.
You seem to have described your port forwards backwards. It is the router forwarding the ports to the gateway Pi (and potentially other devices), not the gateway Pi and other devices forwarding to the router. The forwards to servers are incoming from the internet.
(Theoretically you could have your Pi physically between the router and the internet (modem), acting as a sort of pre-router, but this would be unusual. Perhaps you could describe your physical setup more clearly: what is physically/wirelessly connected to what, and to the internet.)
He does refer to the pi as a gateway, so you would be right about it coming before the router. In that case, the pi would be the device handling NAT and forwarding ports.
So I think he’s describing it accurately… it’s just not a common setup to see these days.
Very cool, great work!
Worth noting about this approach is that the global list of subdomains is publicly searchable. So, you’ll see vulnerability and AI scans on those endpoints.
If that’s a concern for you, using path-based routing (e.g. within a single Apache VirtualHost) allows you to use difficult-to-guess paths to your cloud.
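Since the tutorial already uses nginx, here is a sketch of the same idea there (the path, address, and port are made-up examples):

    server {
        listen 80;
        server_name mydomain.com;

        # hard-to-guess path instead of a guessable subdomain
        location /cloud-7f2a9c/ {
            proxy_pass http://192.168.0.12:8080/;
        }

        # everything else gets nothing useful
        location / {
            return 404;
        }
    }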
Worth noting about this approach is that the global list of subdomains is publicly searchable.
Can you expand on this? What is it that you call the “global list of subdomains”?
Certificate transparency, unless you use wildcard certs
It’d be better and more accurate to say the list of certificates, then.
Subdomains aren’t public unless your DNS server has zone transfers (XFER) enabled.
Interesting! I’m going to look into this. Not sure my provider has this in their UI
It’s trivial to get a list of all registered domains and subdomains and the IP addresses they map to. There are any number of paid services to make it easy (e.g. https://subdomainfinder.c99.nl/), but I’m pretty sure there’s also a way to do it yourself.
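One do-it-yourself option is querying the certificate transparency logs directly; crt.sh offers a free search frontend (mydomain.com is a placeholder, and jq is only there to tidy the output):

    # list certificate names logged for a domain and its subdomains
    curl -s 'https://crt.sh/?q=%25.mydomain.com&output=json' | jq -r '.[].name_value' | sort -u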
This is kind of why I have a wildcard on my main domain… Nothing on www.
Except it isn’t. Saying it is trivial is just a gross generalization. It’s trivial to configure bind to have internal zones that aren’t resolvable publicly. It all depends on configuration, such as reverse NS entries, zone accessibility, etc.
You can have (sub)domains that are listed in the certificate lists and yet aren’t resolvable externally as well.
Actually, wait. Something you said might actually be just what I’m looking for: you mean that I can have a DNS entry for mydomain.com and no additional CNAMEs, and have a cert for nextcloud.mydomain.com (or a wildcard maybe?) and somehow still be able to use name-based virtual servers?
Hmmm. I thought I was going to be limited to path-based.
Explain more?
Absolutely. Simply use ACME with the DNS validation method. Using bind, you’ll want to create keys and allow TXT access for those keys to the validation domains. Fear not, this isn’t exclusive to bind; ACME tools support dozens of other backends. That’s all you need; the actual domain doesn’t need to be resolvable with an A/CNAME record. Internally you can run an entirely different DNS server to resolve your hosts, use hosts files, or use bind zones.
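A concrete sketch of that with certbot’s RFC 2136 plugin against bind (assuming the certbot-dns-rfc2136 plugin is installed; the server address, key name, and secret are placeholders, and the TSIG key itself comes from tsig-keygen plus a matching update policy in named.conf):

    # /etc/letsencrypt/rfc2136.ini (chmod 600; values are placeholders)
    dns_rfc2136_server = 192.168.0.53
    dns_rfc2136_name = acme-key
    dns_rfc2136_secret = <base64 secret from tsig-keygen>
    dns_rfc2136_algorithm = HMAC-SHA512

    # then request the cert via the DNS-01 challenge; no public A/CNAME record is needed
    certbot certonly --dns-rfc2136 \
        --dns-rfc2136-credentials /etc/letsencrypt/rfc2136.ini \
        -d nextcloud.mydomain.com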
Okay. Yup, that’s probably true. I’m not that deep into network stuff. But if you’re just doing the basic ‘ha.mydomain.com => 121.41.38.9’ that works out of the box with name-based virtual hosts and a reverse proxy, then yeah, you’ll get traffic on that within 24 hours.
I reckon if a person understands what you’re talking about though, they’re already doing better than most.
Oh excellent, thanks for sharing.
Not positive, but I think you left in a reference to real info (twilightparadox.com) instead of “example-fying” it (mydomain.com), in the paragraph just before section 4:
For example say I have home-assistant running on a Pi with the local address 192.168.0.11, I could create a subdomain named ha that has the value mysub.twilightparadox.com then create the following nginx config
    server {
        listen 80;
        server_name ha.mydomain.com;
        resolver 192.168.0.1;
        location / {
            proxy_pass http://192.168.0.11/;
        }
    }

When nginx sees a request for ha.mydomain.com it passes it to the address 192.168.0.11, port 80.
I’m not sure which reference you are referring to; twilightparadox.com is a domain on the dynamic DNS service, and mysub is also an example.
Section 1 says you’re using freedns.afraid.org though.
freedns.afraid.org is the site you use to manage the dynamic DNS; twilightparadox.com is one of the default free domains it offers for creating DNS records, so I used that as an example.
Oh, I thought it was subdomains for localhost. I actually wonder now if that’s possible.
It is, although they all go… to localhost
nice job