I have a few self-hosted services, but I’m slowly adding more. Currently they’re all on subdomains, like linkding.sekoia.example etc. However, that means more DNS records to fetch and more setup. Is there some reason I shouldn’t put all my services under a single subdomain with paths (using a reverse proxy), like selfhosted.sekoia.example/linkding?
Everyone is saying subdomains, so I’ll try to give a reason for paths: subdomains make local access a bit harder. With paths you can use https://192.168.x.x/example, but with subdomains there’s no way to reach a specific service over HTTPS via the raw IP. Https://example.192.168.x.x won’t work, since you can’t mix an IP address with hostname resolution, so you’re stuck with http://192.168.x.x:port and no HTTPS for internal access. I got around this by hosting AdGuard Home as a local DNS server and adding an override so my domain resolves to the local IP. But this won’t work if you’re connected to a VPN, as it’ll capture your DNS requests; with paths you could just exclude the IP from the VPN.
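A sketch of what the path-based version could look like, assuming nginx as the reverse proxy (the upstream ports are hypothetical and the cert directives are omitted):

```nginx
# One server block answers on the LAN IP itself, so
# https://192.168.x.x/linkding works with no DNS involved.
server {
    listen 443 ssl;

    location /linkding/ {
        proxy_pass http://127.0.0.1:9090/;   # hypothetical linkding backend
    }
    location /adguard/ {
        proxy_pass http://127.0.0.1:3000/;   # hypothetical AdGuard Home UI
    }
}
```

One caveat with paths: some web apps assume they live at the root of the domain, so they need a base-URL/sub-path setting (or rewrite rules) before they behave correctly behind /linkding/.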
Edit: not sure what you mean by “more setup”; you should be using a reverse proxy either way.
If your router supports NAT reflection (hairpin NAT), then the problem you describe is nonexistent. I use the same domain/protocol both inside and outside my network.
Does NAT reflection still work if your PC is connected to a VPN?
Depends:
If you have your VPN set up to send all traffic to the internet, then your request will pass through the VPN server and come back to your location from the internet.
If you have your VPN set up to exempt LAN traffic, then specifying a local IP keeps your traffic on your LAN. If you specify the domain, however, the VPN will almost certainly still treat it as internet-bound traffic and route it through its servers. You may be able to avoid this by also putting your own public IP on the exempt list, if that’s a feature your VPN offers.
This is not really correct. Using http implies that you want to connect to port 80 without encryption, while https implies an SSL/TLS connection to port 443. You can still use HTTPS on a different port; Proxmox, for example, exposes itself on https://proxmox-ip:8006 by default. It’s still better to use (sub)domains, though, so you don’t have to remember strings of numbers.
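To make the port point concrete, a quick sketch with curl against a hypothetical Proxmox host on the LAN (-k skips verification of Proxmox’s default self-signed certificate, and the fallback message just covers the host being unreachable):

```shell
# The scheme picks the protocol (TLS here); the explicit :8006
# overrides the scheme's default port of 443.
curl -k --connect-timeout 5 https://192.168.1.50:8006/ \
  || echo "Proxmox host not reachable from this machine"
```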
I understand, though if the services you’re hosting are plain HTTP themselves and only get HTTPS from a reverse proxy, then connecting to the reverse proxy by IP will only serve the root/default service. I’m not aware of a way to reach the subdomain-routed services through the proxy when you hit it locally via IP.
Generally, a hostname-based reverse proxy routes requests based on the Host header, which some tools let you set explicitly. For example, with curl:
curl -H 'Host: my.local.service.com' http://192.168.1.100
here 192.168.1.100 is the LAN IP address of your reverse proxy and my.local.service.com is the service behind the proxy you are trying to reach. This can be helpful for tracking down network routing problems.
If TLS (HTTPS) is in the mix and you care about it being fully secure even locally, it can get a little tricky depending on whether the route is pass-through (the application handles certs) or terminate-and-re-encrypt (the reverse proxy handles certs). Most commonly you’ll run into the client not trusting the server, because the “hostname” (the LAN IP address when accessing directly) doesn’t match what the certificate says (the DNS name). There are lots of ways around that as well, for example adding the service’s LAN IP address to the cert’s subject alternative names (SAN), which feels wrong but it works.
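Another way around the hostname/IP mismatch, without touching DNS or the cert, is curl’s --resolve option. This is a sketch reusing the example IP and hostname from above (the fallback message just covers the proxy being unreachable):

```shell
# --resolve pins hostname:port to a given IP for this request only, so the
# TLS handshake still uses the real hostname for SNI and cert validation.
curl --connect-timeout 5 \
     --resolve my.local.service.com:443:192.168.1.100 \
     https://my.local.service.com/ \
  || echo "proxy not reachable from this machine"
```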
Personally I just run a little DNS server so I can resolve the various services to their LAN IP addresses and TLS still works properly. You can use your /etc/hosts file for a quick and dirty “DNS server” for your dev machine.
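For the /etc/hosts route, the entries could look like this (the IP and hostnames reuse the examples from this thread):

```
# /etc/hosts: point the service hostnames straight at the reverse proxy's LAN IP.
# TLS keeps working because the browser still connects using the real hostname.
192.168.1.100   linkding.sekoia.example   selfhosted.sekoia.example
```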