Hey all, I’m relatively new to the self-hosting game. The most I’ve done to date is own and maintain a Plex server for the last few years, but that mainly handles all of the networking for me, so I’d say it doesn’t really count.

Recently, due in part to the ongoing controversy with Audible’s royalty and streaming model, I’ve decided to try my hand at setting up an Audiobookshelf server of my own. For reference, I’m running on a machine with Ubuntu 20.04. I’ve managed to get Audiobookshelf and nginx running through Docker and accessible via localhost:port, but now I feel like I’m missing some key understandings.

I assume I need to have a domain name through a DNS service like Cloudflare in order to make use of it, but I’m not sure what to do after that, and the documentation I’ve read doesn’t outright answer my questions.

Once I have my DNS set up, how do I associate it with my server or point it through the nginx reverse proxy?

I know I’ll have to set up a .conf file for nginx at some point, and I found the example .conf in the Audiobookshelf documentation, but I feel like I’m missing the step between getting a domain name and establishing the reverse proxy.

Any help would be greatly appreciated, thanks!

  • mic_check_one_two@lemmy.dbzer0.com · 5 hours ago

    I assume I need to have a domain name through a DNS service like cloudflare in order to make use of it

    Yes, you’re correct here.

    Once I have my DNS setup, how do I associate it with my server or point it through the nginx reverse proxy?

You begin by forwarding ports 80 and 443 on your router to the machine running your Nginx proxy. These are the standard ports for HTTP and HTTPS requests, respectively, so Nginx can immediately tell whether a request is HTTP or HTTPS based on which port it comes in on.

Next, you would set an A record with your domain manager. An A record points a name to a specific IPv4 address. So for instance, maybe the name is “abs” and the IP is your home WAN IP; then “abs.{your domain}” resolves to your WAN IP, and any HTTP or HTTPS request for that name gets sent there. If you wanted to use IPv6, that would be an AAAA record instead… But if this is your first foray into self-hosting, you probably don’t want to use IPv6.
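
    To make that concrete, here’s what such a record looks like in zone-file notation (the domain and IP are placeholders; most DNS dashboards just give you name/type/value fields for the same thing):

    ```
    abs.example.com.    300    IN    A    203.0.113.10
    ```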

On Nginx’s side, it receives all of those incoming HTTP and HTTPS requests because the ports are forwarded to it. You configure it to take requests for those subdomains and route them to your various services accordingly. You’ll also need to do some config for SSL certificates, which allow HTTPS requests to resolve successfully. You can either use a single wildcard certificate for the whole domain, or an individual certificate for each subdomain. Neither is “more” correct for your needs (though I’m sure people will argue about that in responses to this).
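
    As a rough sketch of that routing (the hostname, certificate paths, and upstream port are all placeholders; Audiobookshelf’s own docs have the canonical example, so cross-check against it):

    ```nginx
    server {
        listen 443 ssl;
        server_name abs.example.com;

        # Placeholder cert paths; these match certbot's default layout.
        ssl_certificate     /etc/letsencrypt/live/abs.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/abs.example.com/privkey.pem;

        location / {
            # Wherever your ABS container's port is published.
            proxy_pass http://127.0.0.1:13378;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # ABS uses websockets, so the connection upgrade headers matter:
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
    ```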

So for instance, you send a request to https://abs.{your domain}. DNS resolves this to your WAN IP, and your router forwards it to Nginx on port 443. Nginx receives the request, handles the SSL negotiation, and forwards the request to the device running ABS. So your ABS instance isn’t directly accessible from the net, and traffic needs to bounce off of Nginx as a valid HTTPS request in order to reach it.

    You’ll want to run something like Fail2Ban or Crowdsec to try and prevent intrusion. Fail2Ban listens to your various services’ log files, and IP-bans repeated login failures. This is to help avoid bots that find common services (like ABS) and try to brute-force them by spamming common passwords. You can configure it to do timeouts with increasing periods. So maybe the first ban is only 5 minutes, then 10, then 20, etc…
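
    The increasing-ban-period behavior is built into Fail2Ban (as `bantime.increment`, available since 0.11). A minimal jail.local sketch, assuming you want to watch Nginx’s auth failures (the log path is a placeholder):

    ```ini
    [DEFAULT]
    bantime           = 5m
    bantime.increment = true
    bantime.factor    = 2      ; bans grow 5m, 10m, 20m, ...

    [nginx-http-auth]
    enabled = true
    logpath = /var/log/nginx/error.log
    ```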

Lastly, you would probably want to run something like Cloudflare-DDNS to keep that WAN IP updated. I’m assuming you don’t have a static IP, and you don’t want your connections to break every time your IP address changes. A DDNS client checks your WAN IP every few minutes and pushes an update to your DNS provider if it has changed. So if your IP address changes, you’ll only be down for (at most) a few minutes. This will require some extra config on your provider’s side, to get an API key and to configure the DDNS service to point at your various A records.

    If you need any help setting the individual services up, let me know. I personally suggest docker-compose for setting up the entire thing (Nginx, DDNS, and Fail2Ban) as a single stack, but that’s purely because it’s what I know and it makes updates easy. But this comment is already long enough, and each individual module could be just as long.
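
    For a flavor of what that stack could look like, here’s a hedged docker-compose.yml sketch (Fail2Ban omitted for brevity; the image names, tags, env vars, and volume paths are examples you should verify against each project’s docs before using):

    ```yaml
    services:
      nginx:
        image: nginx:stable
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./nginx/conf.d:/etc/nginx/conf.d
          - ./certs:/etc/letsencrypt:ro
        restart: unless-stopped

      audiobookshelf:
        image: ghcr.io/advplyr/audiobookshelf
        ports:
          - "13378:80"
        volumes:
          - ./audiobooks:/audiobooks
          - ./config:/config
          - ./metadata:/metadata
        restart: unless-stopped

      ddns:
        # One of several Cloudflare DDNS images; check its docs for the
        # exact environment variables it expects.
        image: oznu/cloudflare-ddns
        environment:
          - API_KEY=${CF_API_KEY}
          - ZONE=example.com
          - SUBDOMAIN=abs
        restart: unless-stopped
    ```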

    • TheBlindPew@lemmy.dbzer0.com (OP) · 8 hours ago

Thank you so much, this is very helpful; I’ll definitely take a run at it with all of this advice in mind this week. When you mention running the whole thing as a single stack, does that mean getting all of it running inside a single Docker container, such that it only takes one docker run command? Is it a requirement for them to be able to talk to each other, or just a more elegant way to have the entirety of the server running in one place instead of spread across several containers?

      • mic_check_one_two@lemmy.dbzer0.com · 4 hours ago

A stack isn’t a single container; it’s a group of containers that are all started together via a docker-compose.yml file. You can name the stack and have all of its containers grouped below it. Compose is simply a straightforward way to ensure your containers all boot with the same parameters each time.

        Instead of needing to remember all of the various arguments for each container, you simply note them in the compose file and run that. Docker Compose reads the file and runs the containers with the various arguments.
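
        As a minimal sketch, a docker-compose.yml for just Audiobookshelf might look like this (the host paths and published port are placeholders; double-check the image and volume names against the ABS docs):

        ```yaml
        services:
          audiobookshelf:
            image: ghcr.io/advplyr/audiobookshelf
            ports:
              - "13378:80"
            volumes:
              - ./config:/config
              - ./metadata:/metadata
              - ./audiobooks:/audiobooks
            restart: unless-stopped
        ```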

Moving from docker to docker-compose is the single largest ease-of-use change I made in my setup. If you want some help in how to use it, I can post a quick example and some instructions on setting it up. You would use `cd [directory with your docker-compose.yml]` to select the proper directory, then `docker-compose up -d` to run the compose file.

Updating would be `docker-compose down` to stop the stack, `docker-compose pull` to pull updated images, `docker-compose up -d` to start your stack again, then `docker image prune -f` to delete the old (now outdated and unused) images.