Every 15 minutes exactly, in whatever terminal window(s) I have connected to my server, I’m getting these system-wide broadcast messages:

Broadcast message from systemd-journald@localhost (Sun 2026-02-15 00:45:00 PST):  

systemd[291622]: Failed to allocate manager object: Too many open files  


Message from syslogd@localhost at Feb 15 00:45:00 ...  
 systemd[291622]:Failed to allocate manager object: Too many open files  

Broadcast message from systemd-journald@localhost (Sun 2026-02-15 01:00:01 PST):  

systemd[330416]: Failed to allocate manager object: Too many open files  


Message from syslogd@localhost at Feb 15 01:00:01 ...  
 systemd[330416]:Failed to allocate manager object: Too many open files  

Broadcast message from systemd-journald@localhost (Sun 2026-02-15 01:15:01 PST):  

systemd[367967]: Failed to allocate manager object: Too many open files  


Message from syslogd@localhost at Feb 15 01:15:01 ...  
 systemd[367967]:Failed to allocate manager object: Too many open files  

The only thing I found online that’s kind of similar is this forum thread, but it doesn’t seem like this is an OOM issue. I could totally be wrong about that, but I have plenty of available physical RAM and swap. I have no idea where to even begin troubleshooting this, but any help would be greatly appreciated.

ETA: I’m not even sure this is necessarily a bad thing, but it definitely doesn’t look good, so I’d rather figure out what it is now before it bites me in the ass later.

  • just_another_person@lemmy.world · 36 points · 3 days ago

    You have a process holding open a bunch of FDs. Instead of just blindly increasing the system limits, try to find the culprit with something like: lsof | awk '{print $1}' | sort | uniq -c | sort -nr

    That will give you a count of open descriptors per process name. See which are the worst offenders and try to fix the issue.

    You COULD just increase the fd open max, but then you will more than likely run into OOMkill issues, because you aren’t fixing the problematic process.
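
    A rough /proc-based equivalent (a sketch: lsof prints one row per task for multithreaded processes, so its counts can look inflated, and you’ll need root to see every process’s fd directory):

```shell
# Tally open file descriptors per process, straight from /proc.
for d in /proc/[0-9]*; do
    n=$(ls "$d"/fd 2>/dev/null | wc -l)   # 0 if we lack permission
    if [ "$n" -gt 0 ]; then
        printf '%6d %s\n' "$n" "$(cat "$d"/comm 2>/dev/null)"
    fi
done | sort -nr | head -15
```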

    • guynamedzero@piefed.zeromedia.vip (OP) · 14 points · 3 days ago

      Running this, I find that qbittorrent fluctuates around 19,000 and python3 is steady around 18,000, with my arrs a bit behind. In this case, I’m not sure there’s anything I can easily do without stopping seeding:

        18460 qbittorre
        18424 python3
        14056 docker-pr
        11424 Sonarr
        11072 Radarr
         9440 Prowlarr
      
      • just_another_person@lemmy.world · 20 points · 3 days ago

        Reduce the number of active connections, or the total number of active transfers available at once, and that will lower that number.

        If you’re POSITIVE your memory situation is in good shape (meaning you’re not running out of memory), then you can increase the max number of open files allowed for your user, or globally: https://www.howtogeek.com/805629/too-many-open-files-linux/

        Again: if you do this, you will likely start hitting OOMkill situations, which is going to be worse. The file limits set right now are preventing that from happening.
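
        For reference, the linked article’s approach boils down to something like this (the values and username are placeholders, not recommendations):

```shell
# Per-user limits, applied to new login sessions via PAM:
#   /etc/security/limits.conf
#     youruser  soft  nofile  8192
#     youruser  hard  nofile  65536
# systemd services ignore limits.conf; set the limit in the unit instead:
#   [Service]
#   LimitNOFILE=65536
ulimit -Sn   # verify the soft limit from a fresh login shell
```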

        • guynamedzero@piefed.zeromedia.vip (OP) · 8 points · 3 days ago

          I just reduced my global max connections from infinity to 500 (somehow that increased my upload speed?) and it reduced qbittorrent’s count by a couple thousand, but it’s still in the many thousands. I’m assuming the number on the left is literally just how many files it’s using; in that case, how could it ever get below 1024? Not to mention I have many other services that are also above 1024 (see below). In any case, I’m only using 14 GB of my 32 GB of RAM, plus 16 GB of swap, so I think it would be fine to increase the limit, but that does worry me a bit.

            18841 python3
            16294 qbittorre
            14064 docker-pr
            11900 Sonarr
             8940 Radarr
             8441 Cleanupar
             8246 Prowlarr
             6130 java
             5836 postgres
             3532 container
             2766 gunicorn
             1980 dockerd
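
          One thing worth noting: the default 1024 is a per-process limit, not a system-wide total, so each service gets its own budget. A quick sketch to compare one process’s real fd count against its own limit (the PID here is a placeholder):

```shell
pid=$$    # substitute e.g. qbittorrent's PID
nfd=$(ls /proc/"$pid"/fd | wc -l)
lim=$(awk '/Max open files/ {print $4}' /proc/"$pid"/limits)  # soft limit
echo "PID $pid: $nfd open fds, soft limit $lim"
```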
          
          • graycube@lemmy.world · 11 points · 3 days ago

            Your number of python file descriptors went up after that change. Have you looked at what python stuff is running? Something isn’t closing files or sockets after it is done with them.

            • graycube@lemmy.world · 7 points · 3 days ago

              You can also look at how many network sockets you have open and where they are connecting. netstat -an will give you a quick look. lsof can help you figure out what is using those ports.
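
              Counting sockets straight from /proc also works, even on a box without netstat or ss installed (a sketch; every line after each file’s single header line is one socket):

```shell
for f in /proc/net/tcp /proc/net/tcp6 /proc/net/udp; do
    if [ -r "$f" ]; then
        echo "$f: $(($(wc -l < "$f") - 1)) sockets"
    fi
done
```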

              • graycube@lemmy.world · 8 points · 3 days ago

                If you really need that many connections, there are some TCP tunables you can adjust to make them more efficient.

                • guynamedzero@piefed.zeromedia.vip (OP) · 2 points · 2 days ago

                  I went to sleep last night after posting this and left an SSH connection open to see what it would do by morning. When I woke up and checked, I found that, coincidentally, the broadcasts stopped almost exactly when I went to sleep, though I don’t think I did anything that would make that happen. I have no idea why it stopped, but it hasn’t started again either.

          • [object Object]@lemmy.world · 5 points · edited · 3 days ago

            “somehow that increased my upload speed?”

            Network hardware is sensitive to lots of small packets going over many connections — some cheap routers can straight up overheat from that. And especially if your WiFi router doesn’t support full-duplex connection, uploads will compete with downloads over the bandwidth — which includes metadata communication like “hey, how much of that torrent have you got?”

    • [object Object]@lemmy.world · 4 points · 3 days ago

      How much memory does an open descriptor really use? If it’s a whole kilobyte, 20000 open files would take 20 MB of memory.

      • just_another_person@lemmy.world · 3 points · 3 days ago

        Depends on how the code is using it. You could look deeper, but that’s not what OP is asking for help with.

        It’s not really about how big they are, it’s about how many can be open at a time. Without sane limits, anything is a ticking time bomb.

  • ShawiniganHandshake@sh.itjust.works · 9 points · 3 days ago

    If it’s happening every 15 minutes, it’s probably a systemd timer trying to kick off a unit on a schedule. Check for .timer files in your system and user systemd configuration and see if there are any configured to run every 15 minutes.
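
    A quick sketch of that check (the systemctl calls are guarded with || true since not every box runs systemd; the grep covers classic cron jobs on the same cadence):

```shell
# Show every timer, its schedule, and the unit it triggers; look for
# anything firing on a 15-minute cadence (OnCalendar=*:0/15 or similar).
systemctl list-timers --all || true         # system-level timers
systemctl --user list-timers --all || true  # user-level timers are separate
# Classic cron jobs can also fire every 15 minutes:
grep -r '\*/15' /etc/cron* /var/spool/cron 2>/dev/null || true
```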

    Whatever process is trying to start is probably exceeding the open files ulimit. ulimits can be set system-wide, per user, and per cgroup.

    The ulimit may be too low, there may be some process leaking file handles (opening files periodically but never closing them), or the unit might be configured to run under the wrong user or cgroup.
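
    Each layer can be inspected; the limit that actually matters is the one attached to the failing process. A sketch (the PID and unit name are placeholders):

```shell
pid=$$    # substitute the PID from the error message, e.g. 291622
grep 'Max open files' /proc/"$pid"/limits   # what that process actually got
ulimit -Sn                                  # current shell, soft limit
ulimit -Hn                                  # current shell, hard limit
# A systemd unit's configured limit:
#   systemctl show -p LimitNOFILE cron.service
```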

    If a reboot gets rid of the problem temporarily, it’s most likely a file handle leak. Remember that objects like network sockets also count as files for the purposes of the open files ulimit.

    A tool like lsof can help you track down processes with a lot of open file handles.

    • guynamedzero@piefed.zeromedia.vip (OP) · 2 points · 2 days ago

      I went to sleep last night after posting this and left an SSH connection open to see what it would do by morning. When I woke up and checked, I found that, coincidentally, the broadcasts stopped almost exactly when I went to sleep, though I don’t think I did anything that would make that happen. I have no idea why it stopped, but it hasn’t started again either.

    • guynamedzero@piefed.zeromedia.vip (OP) · 1 point · 2 days ago

      I was running the same things I always have, which is just a large collection of services in docker. I went to sleep last night after posting this and left an SSH connection open to see what it would do by morning. When I woke up and checked, I found that, coincidentally, the broadcasts stopped almost exactly when I went to sleep, though I don’t think I did anything that would make that happen. I have no idea why it stopped, but it hasn’t started again either.

      • 0xtero@beehaw.org · 2 points · 2 days ago

        “large collection of services in docker.”

        OK, so the error message indicates a lack of available file descriptors. That probably depends on what your docker services are doing at the time, and on whether you’ve set any ulimits on your containers.

        So what are your host vs. docker ulimits? If you haven’t set any container ulimits, it’ll just use the global from the host.

        ulimit -Hn

        Longer explanation: https://www.baeldung.com/linux/limit-file-descriptors
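
        A sketch of those checks (the container name is a placeholder; the docker commands are commented out so the snippet runs anywhere):

```shell
# docker inspect -f '{{.HostConfig.Ulimits}}' qbittorrent   # [] = daemon default
# docker exec qbittorrent sh -c 'ulimit -Hn'                # limit seen inside
# Raising it per container (compose equivalent: the "ulimits:" key):
#   docker run --ulimit nofile=65536:65536 ...
ulimit -Hn   # host-side hard limit, for comparison
```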

  • CallMeAl (Not AI)@piefed.zip · 6 points · 3 days ago

    Are you running anything that would cause a large number of files to be open?

    Was it always like this? If not, when did it start?

    • guynamedzero@piefed.zeromedia.vip (OP) · 1 point · 2 days ago

      I have no idea when it started; I just happened to notice it last night. I went to sleep after posting this and left an SSH connection open to see what it would do by morning. When I woke up and checked, I found that, coincidentally, the broadcasts stopped almost exactly when I went to sleep, though I don’t think I did anything that would make that happen. I have no idea why it stopped, but it hasn’t started again either.