Hi folks,
TL;DR: my remaining issue seems to be Firefox-specific; I’ve otherwise made it work on other browsers and other devices, so I’ll consider this issue resolved. Thank you very much for all your replies and help! (Edit: this was also solved now, in EDIT-4.)
I’m trying to set up HTTPS for my local services on my home network. I’ve gotten a domain name mydomain.tld, and my homeserver is running at home on, let’s say, 192.168.10.20. I’ve set up Nginx Proxy Manager (NPM) and I can access it using its local IP address, as I’ve forwarded ports 80 and 443 to it.
Hence, when I navigate on my computer to http://192.168.10.20/ I am greeted with the NPM default Congratulations screen confirming that it’s reachable. Great!
Next, I’ve set up an A record on my registrar pointing to 192.168.10.20. I think I’ve been able to confirm this works: when I check with an online DNS lookup tool like https://centralops.net/CO/Traceroute, it says 192.168.10.20 is a special address that is not allowed for this tool. Great!
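To double-check the record from the command line as well, dig can ask a public resolver directly (1.1.1.1 is just an example resolver; the name and address are the ones from above):

```
# Ask a public resolver for the A record
dig +short mydomain.tld A @1.1.1.1
# Expected output if the record is live: 192.168.10.20
```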
Now, what I’m having trouble with is the following: making it such that when I navigate to http://mydomain.tld/ I get to the NPM welcome screen at http://192.168.10.20/. When I try this, I’m getting the Firefox message:
Hmm. We’re having trouble finding that site.
We can’t connect to the server at mydomain.tld.
Strangely, whenever I try to navigate to http://mydomain.tld/ it redirects me to https://mydomain.tld/, so I’ve tried solving this with a certificate: I used the DNS-01 challenge from NPM and set up a reverse proxy from https://mydomain.tld/ to http://192.168.10.20/ with the wildcard certificate from the challenge, but it hasn’t changed anything.
I’m unsure how to keep debugging from here. Any advice or help? I’m clearly missing something in my understanding of how this works. Thanks!
EDIT: It seems several are confused by my use of internal IP addresses in this way; yes, it is entirely possible. There are multiple people reporting that they use exactly this kind of setup; here are some examples.
EDIT-2: I’ve made progress. It seems I’m having two issues simultaneously. The first was that I was trying to test my NPM instance by attempting to reach the Congratulations page, served on port 80. That in itself was not working, as it ended in an infinite resolution loop; trying to expose the admin page instead (default port 81) seems to work in some cases. And that’s due to the next issue, which is that on some browsers / with some DNS settings the endpoint can be reached, but not on others. For some reason I’m unable to make it work on Firefox, but on Chromium (or even on Vanadium on my phone) it works just fine. I’m still trying to understand what’s preventing it from working on Firefox; I’ve attempted multiple DNS settings, but it seems there’s something else at play as well.
EDIT-3: While I have not made it work in all the situations I wanted, I will consider this “solved”, because I believe the remaining issue is a Firefox-specific one. My errors so far, which I’ve addressed: I could not test by exposing the NPM Congratulations page served on port 80, because that led to a resolution loop; exposing the actual admin page on port 81 was a more realistic test of whether it worked. Then, I needed to set up the forwarding of that page using something like https://npm.mydomain.tld/, linking that to the internal IP address of my NPM instance and port 81, while using the wildcard certificate for my public domain. Finally, I was testing exclusively on Firefox. While I also made no progress when using dig, curl or host, as suggested in the comments (which are still useful tools in general!), I managed to access my NPM admin page using other browsers and other devices, all from my home network (the only use-case I was interested in). I’ll keep digging to figure out what specific issue remains with my Firefox. I’ve verified multiple things, from changing the DNS in Firefox (which seems not to work, showing Status: Not active (TRR_BAD_URL) in the Firefox DNS page, e.g. with base.dns.mullvad.net). Yet LibreWolf works just fine when changing DNS. Go figure…
EDIT-4: I have now solved it in Firefox too, thanks to @non_burglar@lemmy.world! So it turns out, Firefox has set up a validation system for DNS settings, called TRR. You can read more about it here: https://wiki.mozilla.org/Trusted_Recursive_Resolver Firefox has a number of TRR configurations, preventing the complete customization of DNS, but also with specific defaults that prevent my use-case. Open the Firefox config page at about:config, search for network.trr.allow-rfc1918, and set it to true. This solved it for me: it allows the resolution of local IP addresses. You can read more about RFC 1918 here: https://datatracker.ietf.org/doc/html/rfc1918 I’ll probably still look into making other DNS servers usable, such as base.dns.mullvad.net, which is impossible to use on Firefox by default…
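As a side note, if you want to persist that pref outside of about:config, Firefox also reads a user.js file from the profile directory at startup; a minimal sketch (the pref name is the one above, the user.js mechanism is standard Firefox behavior):

```
// user.js in the Firefox profile directory, applied at every startup
user_pref("network.trr.allow-rfc1918", true);
```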
This is a really good idea that I see dismissed a lot here. People should not access things over their LAN via HTTP (especially if you connect and use these services via WG/Tailscale). If you’re self-hosting a vital service that requires authentication, your details are transmitted in plaintext. Imagine the scenario where you lose connection to your Tailscale on someone else’s WiFi and your clients try to make a connection over HTTP. This is terrible opsec.
Setting up letsencrypt via DNS is super simple.
Setting up an A record to your internal IP address is really easy; it can be done via /etc/hosts, on your router (if it supports it, most do), in your tailnet DNS records, or on a self-hosted DNS resolver like Pi-hole.
After this, you’d simply access everything via HTTPS after reverse-proxying your services. Works well locally and via Tailscale.
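For the /etc/hosts route mentioned above, a minimal sketch (the name and address are the example values from this thread):

```
# Map the service name to the server's LAN address (works with no DNS at all)
echo '192.168.10.20  npm.mydomain.tld' | sudo tee -a /etc/hosts
```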
Can you point me in the right direction?
So far I’ve been installing my caddy certs manually (because, as you mention, the idea that anyone on my network or tailscale can see all traffic unencrypted is bonkers), which works in the browser, but then when I go to use curl or 90% of command-line tools, they don’t verify the certificate correctly. I’ve had this problem on macOS and Linux. I don’t even know the right words to search for to learn more about this right now.
Edit: found this: https://tailscale.com/kb/1190/caddy-certificates
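In case it helps someone else debugging the same thing, inspecting what the server actually presents usually narrows it down; a sketch, with a hypothetical service name and CA path:

```
# Show issuer/subject/expiry of the certificate the server actually serves
openssl s_client -connect service.mydomain.tld:443 \
  -servername service.mydomain.tld </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject -dates

# If the chain checks out but curl still complains, point curl at the CA explicitly
curl --cacert /path/to/ca.pem https://service.mydomain.tld/
```

A common culprit is the server sending only the leaf certificate without the intermediate chain; browsers often paper over that, while curl and most CLI tools will not.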
I’m not sure how Caddy works, but if curl says it’s insecure, to me it sounds like the certs are not installed correctly.
People sleep on the DNS-01 challenge option for TLS. You don’t need an internet-accessible site to generate a LetsEncrypt/ZeroSSL certificate if you can use DNS-01 challenges instead. And a lot of common DNS providers (often also your domain registrar by default) are supported by the common tools for doing this.
Whether you’re doing purely LAN connections or a mix of both LAN and internet, it’s better to have TLS setup consistently.
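As a concrete sketch of what a DNS-01 run can look like with certbot, assuming your DNS happens to be at Cloudflare and the certbot-dns-cloudflare plugin is installed (other providers have equivalent plugins):

```
# Wildcard cert via DNS-01: certbot publishes a TXT record, no inbound traffic needed
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
  -d 'mydomain.tld' -d '*.mydomain.tld'
```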
💯 Generally I see the dismissal from people who use their services purely through LAN. But I think it’s good practice to just set up HTTPS/SSL/TLS for everything. You never know when your needs might change to where you need to access things via VPN/WG/Tailnet, and the moment you do, without killswitches everywhere, your OPSEC has diminished dramatically.
I usually combine this with client certificate authentication as well, for anything that isn’t supposed to be world-accessible, just internet-accessible for me. Even if the site has its own login.
Also good to do. I think using HTTPS, even over LAN, is just table stakes at this point. And people dismissing that are doing more harm than good.
If you lose connection, I would imagine that the connection to these servers would not be established and therefore no authentication information would be sent, no?
Generally the tokens and credentials are sent along with the request, which is plaintext if you don’t use HTTPS. If you lose connection, you’re sending the details along regardless of whether it connects (and if you’re on someone else’s network, they can track and log).
(It’s also plaintext if the auth method isn’t secure; e.g. using a GET request or sending auth through HTTP headers unencrypted.)
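To see just how readable that is, anyone on the path can dump plain-HTTP traffic; a sketch (the interface name is an assumption):

```
# Print HTTP payloads in ASCII; auth headers and cookies show up verbatim
sudo tcpdump -l -A -s 0 -i eth0 'tcp port 80' | grep -iE 'authorization:|cookie:'
```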
The easy answer is to enable NAT loopback (also sometimes called NAT hairpinning) on your edge router.
This was not required in my case, but maybe it solves other issues?
It solves all your issues. No weird, non-standard DNS records. Just turn it on and everything both on your local network and external (if you want it to) works via domain name.
Given your setup, I presume you’re trying to access your server via a domain name, only from within your home network? That’s what the linked blog posts are talking about.
EDIT: It seems several are confused by my use of internal IP addresses in this way; yes, it is entirely possible. There are multiple people reporting that they use exactly this kind of setup; here are some examples.
Or maybe your example IP address is just confusing. IP addresses in the ranges 192.168.0.0/16, 172.16.0.0/12, and 10.0.0.0/8 are all reserved for private use and are not routable on the larger internet. Your home will have devices with those IP addresses because it’s a private LAN that uses Network Address Translation (NAT) at the boundary with your ISP. Your ISP might also have its own NAT, called Carrier-Grade NAT (CGNAT), that adds another translation boundary where it reaches the internet. If your ISP doesn’t have CGNAT and allows incoming connections on your desired ports, you might be able to use the IP address your ISP assigned your router as the public IP; if not, you’ll need to figure out some other routing method (e.g. a VPS hosting a private VPN exit point, with routing rules to allow incoming traffic and an entry point somewhere in your network with routing rules to reply through that VPN).
EDIT: Added quote
If you’re just trying to do this within your home network, you’re doing what’s called “split DNS”, where the DNS in your home network is different from the global DNS.
I do this for services I host, though usually I can also access them remotely as well, just from a different IP address. The easiest route for the TLS certificates (TLS is what gives you the S in HTTPS) is to use DNS-01 challenges for your LetsEncrypt/ZeroSSL certificate generation, because the CA doesn’t have to actually reach your domain’s site to prove you own the domain; it has you put in extra temporary DNS records instead.
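You can even watch the temporary record during a DNS-01 run; the token shown below is a placeholder, the real one is generated per order:

```
# The CA looks for a TXT record at this well-known name under your domain
dig +short TXT _acme-challenge.mydomain.tld
# e.g. "gfj9Xq...Rg85nM"
```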
Thanks for your response. Indeed, this is only for myself within my home network. No split DNS required: the public DNS record mentions my local private IP address, which of course will only resolve to my homeserver from within my home network and will not lead anywhere for anyone else on any other network. That’s what makes this great. Yes, I did the DNS challenge as I mentioned in my OP and retrieved a wildcard certificate for all my local needs :)
Yes, I did the DNS challenge as I mentioned in my OP and retrieved a wildcard certificate for all my local needs :)
Saw that, I just wasn’t sure if you knew why it worked, which is why I mentioned it again. Glad you figured it out.
Ah, that’s why it’s not working with Firefox then too. Firefox comes with one of the secure DNS options turned on by default (DoH), which guarantees it will always reach a public DNS server and not get trapped by one from your home router, a cafe’s router, or your ISP. Since it knows the DNS will always be public, it also knows that the 192.168.10.20 address is not routable on the internet where it found it. Some malicious sites can use a DNS record with a non-public IP address like this to get you to run JavaScript in your browser from the site you visited, to attack a device on your home network. So Firefox blocks that IP address from public DNS replies.
Generally people will have a home router that allows them to run their own recursive DNS, where they can insert their own records for things within their home network, and will disable the DoH or DoT (“secure DNS”) settings in their browsers as the way to do this. Putting the private IP in the public DNS record doesn’t hurt though; it just might get stopped by various modern security protections is all.
Since it knows the DNS will always be public, it also knows that the 192.168.10.20 address is not routable on the internet where it found it.
That is in fact not it. I left the default Firefox DNS setting. I simply enabled network.trr.allow-rfc1918 from within about:config, which allows the resolution of local IP addresses. It now works. All my DNS servers are public; I make no use of any private, local DNS.
You set the A record to your internal IP address from within your router?
Nginx configs have a lot of options; you can route differently depending on the source context.
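For instance, plain nginx can allow or refuse based on the client’s source address (the subnet is the example LAN from this thread; NPM generates similar config from its UI):

```
# Inside a server block: only LAN clients may reach this location
location /admin/ {
    allow 192.168.10.0/24;
    deny  all;
    proxy_pass http://127.0.0.1:81;
}
```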
So a couple questions:
- Do you only want to access this from your local network? If so, setting up a domain name on the broader internet makes no sense; you’re telling the whole world which local IP within your switch/router is your server. Make your own DNS or something if you just want an easier way to hit your local resources.
- Do you want to access this from the internet, like when you’re away from home? Then the IP address you add to your A record should be the public IP address your ISP assigned you (it will not start with 192.168), and then you have your modem forward the port to your local system (nginx).
Unless you know what you are doing and have a good firewall setup, do not make this service public; you will receive tons and tons of attacks just for creating a public A record.
The A record was set on my registrar, so on a public DNS, so to speak.
- It allows me to use HTTPS on a private service without setting up any custom DNS locally, without using any self-signed certificates, and with all my IP addresses staying private. It’s a good solution for me: I get the real certificates using the default public infrastructure while keeping everything private. What’s the danger of sharing with the external world that my private server is accessible at 192.168.10.20? What could they do with that information?
- I use my Tailscale network, to which I expose my local network, to allow remote access. Works great for me.
Then next I would examine the redirect and check your stack: is it a 302, 304, etc.? Is there a service-identifying header with the redirect?
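curl will show the status code and any identifying headers without following the redirect; a quick sketch:

```
# -s quiet, -I headers only; check the status line plus Location and Server headers
curl -sI http://mydomain.tld/
```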
After that I would try to completely change your setup for testing purposes: greatly simplify things, removing as many variables as possible; maybe set up an API server with a single route on Express or something, and see if that can be faithfully served (see the sketch below).
If you can’t serve even a simple setup, then you need to go back to the drawing board and try a different option.
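A quick stand-in for that single-route test server, swapping the suggested Express app for Python’s built-in module (same idea, fewer moving parts):

```
# Serve the current directory on port 8080, reachable from the LAN
python3 -m http.server 8080 --bind 0.0.0.0
# Then point an NPM proxy host at 192.168.10.20:8080 and see if it's served faithfully
```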
Opening up the network developer tools in Firefox, I’m seeing the following error: NS_ERROR_UNKNOWN_HOST, though I haven’t been able to determine how to solve this yet. It does make sense, because it would also explain why curl is unable to resolve it, if the nameserver is unreachable. I’m still confused though, because Cloudflare, Google and most other DNS providers I’ve tried work without issue. Even setting Google’s DNS in Firefox does not resolve it.
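One way to split the problem in half is to hand curl the answer DNS should have given, taking resolution out of the picture entirely:

```
# Pin the hostname to the LAN address for this one request only
curl -v --resolve mydomain.tld:443:192.168.10.20 https://mydomain.tld/
# If this works, the proxy and certificate are fine and the failure is purely DNS
```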
The obvious question: Do you want to access your server only from within your network or also from anywhere else?
Good question. I’m only interested in accessing it from my home network and through my tailscale network.
Then you don’t need to inform the rest of the world about your domain. Just use the hostname of the server on your tailnet and it should work all the time
Wouldn’t that require me to use Tailscale even at home on my home network? It also does not provide HTTPS unless you maybe use MagicDNS, but then we’re back to using a public domain, I guess.
It’s very likely that DNS servers aren’t going to propagate that A record because it’s an internal IP. What DNS settings are you using for Tailscale? You could also check that the address is resolving locally with the command host mydomain.tld, which should return mydomain.tld has address 192.168.10.20 if things are set up correctly.
Edit: you can also do a reverse lookup with host 192.168.10.20, which should spit out 20.10.168.192.in-addr.arpa domain name pointer mydomain.tld.
Sorry, this will most definitely not work with your local IP address on an external DNS. That is not routable over the internet. I have a 192.168.10.20 IP address in my home network as well. You need to go to whatsmyip.com or ipchicken.com, get your external IP, and put that in the DNS at your registrar. Most likely you will need a Dynamic DNS provider, as your ISP probably gives you a dynamic public IP address that will change occasionally.
If you just want to resolve mydomain.tld INTERNALLY, so you can use a mydomain.tld HTTPS certificate, then you just need to add mydomain.tld to your INTERNAL DNS server, pointing at your INTERNAL IP address for your server. Likely your router is set up as a DNS server, but it just forwards all requests to the external DNS, which is why you just get sent to mydomain.tld instead of your internal server.
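On anything dnsmasq-based (Pi-hole, many home routers), that internal override is a one-liner; a sketch, with an arbitrary file name:

```
# Answer mydomain.tld and all its subdomains with the LAN address
echo 'address=/mydomain.tld/192.168.10.20' | sudo tee /etc/dnsmasq.d/99-internal.conf
sudo systemctl restart dnsmasq   # the service is pihole-FTL on Pi-hole
```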
It does work. In my first edit I’m sharing multiple examples of others making it work, and I’ve made it work in some cases, which I explain in my second edit. I’m not using an HTTP challenge but a DNS challenge, which is not specific to any IP address and does not require the IP address to be reachable from outside my network. I only care about accessing the endpoint from within my home network. The use of a real domain lets me make use of the public chain-of-trust infrastructure and DNS, allowing me to reach my homeserver from any device without having to set up any specific local DNS or install any custom certificate on any of my devices.
Try turning off WiFi on your phone and see if you can connect from there. Connecting from a device within your home network to another device in your home network is different from connecting from a device out on the internet to a device in your home network. A phone using mobile data is a good way to check that “internet device to home network” case.
Works flawlessly with my Tailscale setup :) Thanks for asking! I’m not trying to expose anything to the open internet. Just for me personally, from home or remotely using my VPN.
No, it is not fully working.
Many have tried to explain to you that your setup only works for YOU on YOUR subnet.
You are then asking public tools, meant to look up public IPs via publicly available DNS names, to resolve your internal addresses, which they obviously don’t know anything about. You’re getting those errors from tools that follow the RFCs because you are putting the equivalent of “bedroom” on the outside of an envelope and expecting the post office to know that it means YOUR bedroom.
For DNS to work properly, the authoritative DNS server should be able to create a reverse lookup record for every A record, allowing a DNS client to ask “what record do you have for this IP?” and get a coherent response. Since 192.168.10.0/24 is a non-routable network, you will never have such a reverse record.
Wolfgang has done you a disservice by giving you a shortcut that works as a side effect of DNS before you fully understood how DNS works.
No, it is not fully working. Many have tried to explain to you that your setup only works for YOU on YOUR subnet.
That’s exactly what I want. I don’t know why you thought I wanted something else? I’m trying to reach services in my home network from home, using HTTPS, without requiring a local DNS or loading self-signed certificates.
EDIT: I realize I maybe could’ve done a better job of explaining that the intention was for it to work exclusively for me on my home network.
I know what you’re trying to do, and what those tutorials don’t tell you is that you are shortcutting normal DNS flow, which most apps are expecting.
DNS isn’t designed to work that way, so some apps (like Firefox) with internal hard-coded DNS functions are going to balk at private RFC 1918 IPs in a DNS record, or at a lack of a reverse record.
Again, slow down and think about what you’re trying to do here. You are complicating your stack for no reason other than that you don’t want to set up a local DNS handler.
so some apps (like Firefox) with internal hard-coded DNS functions
Thank you! This was the information I needed! It landed me on this page https://support.mozilla.org/en-US/kb/firefox-dns-over-https which says:
When DoH is enabled, Firefox by default directs DoH queries to DNS servers that are operated by a trusted partner, which has the ability to see users' queries
and that led me to this page https://wiki.mozilla.org/Trusted_Recursive_Resolver where I was able to read more about it. That explains why it does not work. I appreciate the insight!
Glad you figured it out.
Yes, I now managed to make it fully work on Firefox too. I needed to set network.trr.allow-rfc1918 to true in the about:config settings! :)
Try a different browser, or the curl command in another comment (but while on the LAN). Your understanding so far is correct, though unusual; typically it’s not recommended to put LAN records in WAN DNS.
But if you’ve ever run HTTPS there before, Firefox might remember that and try to use it automatically. I think there’s a setting in Firefox. You might also try the function to forget site information, both for the name and IP. I assume you haven’t turned on any HTTP-to-HTTPS redirect in nginx.
Also verify that nginx is set up with a site for that name, or has a default site. If it doesn’t, then it doesn’t know what to do and will fail.
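In plain nginx terms, “a site for that name” means a server block whose server_name matches what you type in the browser; a sketch (NPM writes the equivalent config from its UI; the cert paths are hypothetical):

```
server {
    listen 443 ssl;
    server_name npm.mydomain.tld;                    # must match the requested host
    ssl_certificate     /etc/ssl/mydomain/fullchain.pem;   # hypothetical paths
    ssl_certificate_key /etc/ssl/mydomain/privkey.pem;
    location / {
        proxy_pass http://192.168.10.20:81;          # the NPM admin UI from the thread
    }
}
```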
This was a good suggestion, indeed other browsers seem to work just fine; I updated my post with a new edit. I’m making progress, it seems I’m having some specific issue with Firefox, my default browser. And your last point was also spot-on, though I only understand what you meant now that I’ve figured out the port-80 resolution-loop trap.
Yeah, either check for that setting I mentioned or clear the site data.
I do this exact thing on my network so I know it works, but why are you trying to downgrade HTTPS to HTTP? If you’ve set up DNS-01 properly, it should just work with HTTPS.
How did you configure DNS-01?
Yes, it was an attempt at doing one step at a time, but I realize I’ve been able to make it work in some browsers and with some DNS settings using HTTPS, as hoped. I’m now mostly trying to solve specific DNS issues, trying to understand why there are some cases where it’s not working (i.e. in Firefox regardless of DNS setting, or when calling dig, curl or host).
You can’t point to 192.168.X.X; that’s your local network IP address. You need to point to your public IP address, which you can find by just searching ‘what is my IP’. Note that you can’t be behind CGNAT for this, and you either need a static IP or a dynamic DNS configuration. Be aware of the risks involved in exposing your home server to the internet in this manner.
You can’t point to 192.168.X.X that’s your local network IP address. You need to point to your public IP address
That’s not true at all. That is exactly how I have my setup. A wildcard record at Porkbun pointing to the private IP of my home server so when I am home I have zero issues accessing things.
A wildcard record at Porkbun pointing to the private IP of my home server
Which can not be 192.168.X.X
read: https://en.wikipedia.org/wiki/IP_address#Private_addresses
And yet, that is exactly what I am doing and it is working.
RFC 1918 addresses are absolutely usable with DNS in this fashion.
If I were to try to access it while I wasn’t home, it absolutely wouldn’t work, but that is not what I do.
You are technically correct. I assumed that it was for external access, because why would you pay Porkbun for something internal?
You can just self-host a DNS server with that entry, like https://technitium.com/dns/ (near the bottom of the feature list); it has a web UI that allows you to manage DNS records through it.
That’s true, but then I would have to deal with PKI, cert chains, and DNS, when now all I need to do is get Traefik to grab a wildcard Let’s Encrypt cert and everything is peachy.
No, you’d just need to deal with running DNS locally; you can still use LE for internal certs.
But you still need to pass one of their challenges. Public DNS works for that. You don’t need to have any records in public DNS though.
That doesn’t make any sense
I think I can see where they’re going with it, but it is a bit hard to write out
Say I set up my favorite service in-house, and said service has a client app. If I create my own DNS at home and point the client to the entry, and the service is running an encrypted connection with a self-signed cert, it can give the client app fits for being untrusted.
Compare that to putting NPM in front of the app, using it to get a LetsEncrypt cert via the DNS record option (no need to have LE reach the service publicly), and now you have a trusted cert signed by a public CA for the client app to connect to.
I actually do the same for a couple of internal things where I want the local traffic secured, because I don’t want creds to be sniffable on the wire, but they’re not public-facing. I already have a domain for other public things, so it doesn’t cost anything extra to do it this way.
You sure can. You can see someone doing just that here successfully:
Okay sure, for a specific use case you can point a record to a private IP; however, this explicitly doesn’t expose your homelab to the web. I misunderstood OP’s intention.
One thing you probably forgot to check is whether your TLD registrar supports dynamic DNS (DDNS) and whether you have it set on both sides of the route.
Would you mind explaining further what you mean by “setting it up on both sides of the route”? Much appreciated!
The IP address you’ve used as an example would not work. That is a ‘local’ address, i.e. a home address. If you want DNS to resolve your public domain name to your home server, you need to set the A record to your ‘public’ IP address, i.e. the external address of your modem/router. Find this by going to whatismyip.com or something similar.
That will connect your domain name with your router. You then set up port forwarding on the router to pass requests to the server.
Why do you need a domain on an internet-facing DNS if you can just define it with your local DNS? Unless you want to access your services via the internet, in which case you would need a public IP.
To have HTTPS without additional setup on all the devices I use to access my services, and without having to set up my own DNS server.
Your issue is using a non-routable IP on a public DNS provider; some home routers will assume it’s a misconfiguration and drop it.
If you’re only going to use the domain over a VPN and the local network, I would use something like Pi-hole to do the DNS.
If you want access from the internet at large, you will need your public IP in your DNS provider.
Have you considered using a mesh VPN instead of opening a port to the public? Nebula and Tailscale are both great options with a free tier that’s more than enough for most home use cases. With Nebula you can even self-host your discovery node so nothing is cloud-based, but then you’re back to opening firewall ports again.
Anyway, it’s going to be more secure than even a properly configured reverse proxy setup, and way less hassle.