Hi folks,
TL;DR: my remaining issue seems to be Firefox-specific; I’ve otherwise made it work on other browsers and other devices, so I’ll consider this issue resolved. Thank you very much for all your replies and help! (Edit: this was also solved now, see EDIT-4.)
I’m trying to set up HTTPS for my local services on my home network. I’ve gotten a domain name, mydomain.tld, and my homeserver is running at home on, let’s say, 192.168.10.20. I’ve set up Nginx Proxy Manager (NPM) and I can access it using its local IP address, as I’ve forwarded ports 80 and 443 to it.
Hence, when I navigate on my computer to http://192.168.10.20/ I am greeted with the NPM default Congratulations screen, confirming that it’s reachable. Great!
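For a quick sanity check outside the browser, the same reachability test can be done from a shell (IP from my setup above):

```
# HEAD request to the proxy; a 200 response confirms the port
# forwarding works and nginx is answering
curl -I http://192.168.10.20/
```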
Next, I’ve set up an A record at my registrar pointing to 192.168.10.20. I think I’ve been able to confirm this works: when I check with an online DNS lookup tool like https://centralops.net/CO/Traceroute, it says 192.168.10.20 is a special address that is not allowed for this tool. Great!
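The record can also be checked locally with dig, querying a public resolver directly (a quick sketch; 1.1.1.1 is just one example resolver):

```
# The public A record should come back with the internal address
dig @1.1.1.1 mydomain.tld A +short   # expect: 192.168.10.20
```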
Now, what I’m having trouble with is the following: making it so that when I navigate to http://mydomain.tld/ I get the NPM welcome screen served at http://192.168.10.20/. When I try this, I get the Firefox message:
Hmm. We’re having trouble finding that site.
We can’t connect to the server at mydomain.tld.
Strangely, whenever I try to navigate to http://mydomain.tld/ it redirects me to https://mydomain.tld/, so I’ve tried solving this with a certificate: I ran the DNS-01 challenge from NPM and set up a reverse proxy from https://mydomain.tld/ to http://192.168.10.20/ using the wildcard certificate from that challenge, but it hasn’t changed anything.
I’m unsure how to keep debugging from here. Any advice or help? I’m clearly missing something in my understanding of how this works. Thanks!
EDIT: It seems several people are confused by my use of internal IP addresses in this way; yes, it is entirely possible. Multiple people report using exactly this kind of setup, here are some examples.
EDIT-2: I’ve made progress. It seems I’m having two issues simultaneously. The first was that I was trying to test my NPM instance by reaching the Congratulations page, served on port 80. That didn’t work because it ended in an infinite resolution loop; exposing the admin page instead (default port 81) works in some cases. “Some cases” is due to the second issue: with some browsers / some DNS settings the endpoint can be reached, but not with others. For some reason I’m unable to make it work on Firefox, but on Chromium (or even on Vanadium on my phone) it works just fine. I’m still trying to understand what’s preventing it from working on Firefox; I’ve attempted multiple DNS settings, but it seems there’s something else at play as well.
EDIT-3: While I have not made it work in all the situations I wanted, I will consider this “solved”, because I believe the remaining issue is Firefox-specific. My errors so far, which I’ve addressed: I should not have tried to expose the NPM Congratulations page on port 80, because that led to a resolution loop; exposing the actual admin page on port 81 was a more realistic test of whether it worked. Then, it was necessary to forward that page using something like https://npm.mydomain.tld/, linking it to the internal IP address of my NPM instance and port 81, while using the wildcard certificate for my public domain. Finally, I was testing exclusively on Firefox. While I also made no progress using dig, curl, or host as suggested in the comments (which are still useful tools in general!), I managed to access my NPM admin page using other browsers and other devices, all from my home network (the only use case I was interested in). I’ll keep digging to figure out what specific issue remains with my Firefox. I’ve verified multiple things, including changing the DNS in Firefox (which seems not to work, showing Status: Not active (TRR_BAD_URL) in the Firefox DNS page, e.g. with base.dns.mullvad.net). Yet LibreWolf works just fine when changing DNS. Go figure…
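For reference, the checks suggested in the comments looked roughly like this (hostname from my setup; adjust to yours):

```
dig npm.mydomain.tld +short           # should print 192.168.10.20
host npm.mydomain.tld                 # same lookup with a different tool
curl -vkI https://npm.mydomain.tld/   # -k skips cert verification, -I fetches headers only
```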
EDIT-4: I have now solved it in Firefox too, thanks to @non_burglar@lemmy.world! So it turns out Firefox has set up a validation system for DNS settings, called TRR. You can read more about it here: https://wiki.mozilla.org/Trusted_Recursive_Resolver Firefox has a number of TRR configurations that prevent the complete customization of DNS, with specific defaults that block my use case. Open the Firefox config page at about:config, search for network.trr.allow-rfc1918, and set it to true. This solved it for me: it allows the resolution of local (private) IP addresses. You can read more about RFC1918 here: https://datatracker.ietf.org/doc/html/rfc1918
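If you’d rather persist this than flip it in about:config each time, the same pref can go in a user.js file in your Firefox profile directory (a sketch; same pref as above):

```
// user.js: applied on every Firefox startup
user_pref("network.trr.allow-rfc1918", true);  // allow TRR answers containing private RFC1918 addresses
```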
I’ll probably still look into making other DNS servers usable, such as base.dns.mullvad.net, which is impossible to use on Firefox by default…
This is a really good idea that I see dismissed a lot here. People should not access things over their LAN via HTTP (especially if you connect to and use these services via WG/Tailscale). If you’re self-hosting a vital service that requires authentication, your details are transmitted in plaintext. Imagine the scenario where you lose your Tailscale connection on someone else’s WiFi and your clients try to make a connection over HTTP. This is terrible opsec.
Setting up Let’s Encrypt via DNS is super simple.
Setting up an A record to your internal IP address is really easy; it can be done via /etc/hosts, on your router (if it supports it, most do), in your tailnet DNS records, or on a self-hosted DNS resolver like Pi-hole.
After this, you’d simply access everything via HTTPS after reverse proxying your services. Works well locally, and via Tailscale. A sketch of both steps is below.
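A minimal sketch of both steps, reusing the names from the original post and assuming a DNS provider with a certbot plugin (Cloudflare here, purely as an example):

```
# 1) LAN name resolution: add to /etc/hosts (or the equivalent record
#    on your router / Pi-hole / tailnet DNS)
#      192.168.10.20  npm.mydomain.tld

# 2) Certificate via a DNS-01 challenge: no inbound ports needed
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d mydomain.tld -d '*.mydomain.tld'
```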
Can you point me in the right direction?
So far I’ve been installing my Caddy certs manually (because, as you mention, the idea that anyone on my network or tailscale can see all traffic unencrypted is bonkers), which works in the browser, but then when I go to use curl or 90% of command-line tools, they don’t verify the certificate correctly. I’ve had this problem on macOS and Linux. I don’t even know the right words to search for to learn more about this right now.
Edit: found this: https://tailscale.com/kb/1190/caddy-certificates
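For anyone else hitting this: when the proxy serves a certificate from a private CA (e.g. Caddy’s internal CA), CLI tools need to be pointed at that CA explicitly, or it needs to be installed in the system trust store. A sketch, with a placeholder path for the CA root:

```
# Trust the CA for a single request instead of the system store
curl --cacert /path/to/root.crt https://npm.mydomain.tld/

# Inspect what certificate chain the server actually presents
openssl s_client -connect npm.mydomain.tld:443 -servername npm.mydomain.tld </dev/null
```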
I’m not sure how Caddy works, but if curl says it’s insecure, to me it sounds like the certs are not installed correctly.
I just set up Caddy to work correctly (had to add the Tailscale socket to the container). The certs were fine before, just a janky installation on the clients.
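For reference, in a Docker setup that roughly means mounting the tailscaled socket into the Caddy container so Caddy can request certs through Tailscale (a sketch; the socket path is the default one referenced in the KB article linked above):

```
docker run -d --name caddy \
  -v /var/run/tailscale/tailscaled.sock:/var/run/tailscale/tailscaled.sock \
  -v "$PWD/Caddyfile:/etc/caddy/Caddyfile" \
  -p 443:443 \
  caddy:latest
```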
People sleep on the DNS-01 challenge option for TLS. You don’t need an internet-accessible site to generate a Let’s Encrypt/ZeroSSL certificate if you can use DNS-01 challenges instead. And a lot of common DNS providers (often also your domain registrar, by default) are supported by the common tools for doing this.
Whether you’re doing purely LAN connections or a mix of both LAN and internet, it’s better to have TLS set up consistently.
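As an illustration, acme.sh is one such common tool; with a supported provider (Cloudflare shown here, credentials passed via an environment variable) a wildcard issuance is a one-liner:

```
# dns_cf is acme.sh's Cloudflare DNS hook; CF_Token is an API token
export CF_Token="your-cloudflare-api-token"
acme.sh --issue --dns dns_cf -d mydomain.tld -d '*.mydomain.tld'
```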
💯 Generally I see the dismissal from people who use their services purely through LAN. But I think it’s good practice to just set up HTTPS/SSL/TLS for everything. You never know when your needs might change to where you need to access things via VPN/WG/Tailnet, and the moment you do, without killswitches everywhere, your OPSEC has diminished dramatically.
I usually combine this with client certificate authentication as well, for anything that isn’t supposed to be world-accessible, just internet-accessible for me. Even if the site has its own login.
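A rough sketch of what that looks like in practice: the proxy requires a certificate from every client, and clients present theirs with each request (nginx-style directives shown as comments, since NPM is nginx underneath; names are placeholders):

```
# Client side: present the client certificate with the request
curl --cert client.crt --key client.key https://service.mydomain.tld/

# Server side, nginx-style (for illustration):
#   ssl_client_certificate /etc/nginx/client-ca.crt;  # CA that signed the client certs
#   ssl_verify_client on;                             # reject requests without a valid client cert
```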
Also good to do. I think using HTTPS, even over LAN, is just table stakes at this point. And people dismissing that are doing more harm than good.
If you lose connection, I would imagine that the connection to these servers would not be established and therefore no authentication information would be sent, no?
Generally the tokens and credentials are sent along with the request, which is plaintext if you don’t use HTTPS. If you lose connection, you’re sending the details along regardless of whether it connects (and if you’re on someone else’s network, they can track and log).
(It’s also plaintext if the auth method itself isn’t secure, e.g. credentials in a GET query string or sent through unencrypted HTTP headers.)
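To make that concrete, here’s roughly what such a request looks like (hypothetical host and token); over plain HTTP the Authorization header travels in cleartext for anyone on the path to read:

```
# -v prints the request as sent; over http:// none of it is encrypted
curl -v http://service.mydomain.tld/api \
  -H "Authorization: Bearer not-actually-secret"
# The same request over https:// wraps everything, including this header, in TLS.
```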