• cecilkorik@lemmy.ca · 11 hours ago

    S3 compatibility is nice I guess if you need S3 compatibility but also… why would you need that?

    sshfs does everything I need, and it behaves almost like a native filesystem.

    • dan@upvote.au · 3 hours ago

      SSHFS is very unreliable. At least use NFSv4 or even SMB/CIFS.
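
      For a homelab, either alternative is a one-line fstab entry. A minimal sketch, assuming hypothetical hostnames, export paths, and share names:

```
# hypothetical /etc/fstab entries replacing an sshfs mount
# NFSv4:
server:/export   /mnt/data  nfs4  rw,noatime,_netdev                            0 0
# or SMB/CIFS:
//server/share   /mnt/data  cifs  credentials=/etc/samba/cred,uid=1000,_netdev  0 0
```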

    • ExLisper@lemmy.curiana.net · 2 hours ago

      why would you need that?

      So you can switch to S3 if needed? Using compatible solutions means you have choice. Choice is good.

    • qaz@lemmy.world · 2 hours ago

      Many cloud providers offer S3-compatible storage, so it’s a common protocol to use in applications. There are even some databases, like SlateDB, that rely entirely on object storage. Supporting more APIs is extra work (unless you’re using OpenDAL), so most people pick S3-compatible APIs because they’re the most widely supported across cloud platforms.
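
      As a concrete illustration of that portability (a sketch, assuming a hypothetical self-hosted MinIO endpoint): with rclone, switching providers means pointing the same s3-type remote at a different endpoint, and the rest of the setup stays the same:

```ini
# hypothetical rclone remote; any S3-compatible backend works by
# swapping the endpoint, while the client-side usage is unchanged
[myremote]
type = s3
provider = Minio
endpoint = https://minio.example.com
access_key_id = REDACTED
secret_access_key = REDACTED
```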

        • cecilkorik@lemmy.ca · 7 hours ago

        So enlighten me then: save me from my terrible hack that is working fine for me, and tell me what it DOES have to do with. I thought S3 was a remote filesystem you can use, essentially Amazon’s proprietary version of WebDAV, where you get an HTTP bucket you can only access with AWS’s proprietary tools. What’s the attraction? Clearly people love it, and I am getting dunked on for asking an honest question, which feels a bit unhealthy and unpleasant for the self-hosting community.

        Am I supposed to be familiar with AWS infrastructure as a prerequisite for being here?

          • Wispy2891@lemmy.world · 2 hours ago

          S3 is designed to be used by applications via an API; for example, you can easily save and retrieve files from it even from a JavaScript application. It is much more difficult to do the same with sshfs.

          If you instead mount it on a computer, S3 is worse, because each time you list its contents that’s an API request; if you have hundreds of thousands of files, that’s thousands of API requests.
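
          To put rough numbers on that (a back-of-the-envelope sketch; S3’s ListObjectsV2 call returns at most 1,000 keys per request, and the bucket size here is hypothetical):

```python
import math

# ListObjectsV2 returns at most 1000 keys per request, so fully
# listing a large bucket costs many sequential round-trips
files = 300_000        # hypothetical number of objects in the bucket
keys_per_page = 1_000  # ListObjectsV2 page limit
print(math.ceil(files / keys_per_page))  # → 300 list requests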

          • Eager Eagle@lemmy.world · 7 hours ago

          OK, to start with: if you need a POSIX interface to the filesystem, you already have an SSH connection to that server, and you don’t need much stability across multiple clients, SSHFS may do just fine. For a homelab, that is likely the case.

          Now, if you’re hosting a web server that needs data distributed across drives/nodes, data redundancy, and the usage is primarily programmatic, closer to a CDN’s or a machine learning pipeline’s than a single user browsing files, then you want an S3-compatible solution. The S3 API makes it easier to plug it into your application while still allowing you to migrate to a different backend, which is actually what I’m currently doing for a MinIO deployment at work.

            • dan@upvote.au · 43 minutes ago

            if you need a POSIX interface

            SSHFS isn’t POSIX compliant. It lacks hard links, file locking, atomic renames, full support for changing file permissions, umasks, and probably other things.
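
            A quick way to see this for yourself is to probe a mount point for a few of those operations; run this sketch against a local directory and then against an sshfs mount point (pass your own, hypothetical, mount path) and compare:

```python
import fcntl
import os
import tempfile

def probe(mountpoint):
    """Check a few POSIX features that SSHFS commonly lacks."""
    results = {}
    with tempfile.TemporaryDirectory(dir=mountpoint) as d:
        src = os.path.join(d, "a")
        open(src, "w").close()
        try:  # hard links
            os.link(src, os.path.join(d, "b"))
            results["hard_link"] = True
        except OSError:
            results["hard_link"] = False
        try:  # advisory file locking
            with open(src) as f:
                fcntl.flock(f, fcntl.LOCK_EX)
                fcntl.flock(f, fcntl.LOCK_UN)
            results["flock"] = True
        except OSError:
            results["flock"] = False
        try:  # rename (atomic on POSIX filesystems)
            os.rename(src, os.path.join(d, "c"))
            results["rename"] = True
        except OSError:
            results["rename"] = False
    return results

# a local filesystem should pass everything; an sshfs mount may not
print(probe(tempfile.gettempdir()))
```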