New design sets a high standard for post-quantum readiness.

    • lemmee_in@lemmy.world · 6 days ago

      Signal puts a lot of effort into a threat model that assumes a hostile host (i.e. AWS). That’s the whole point of end-to-end encryption: even if the host is compromised, the attackers get no information. They even go as far as padding out the lengths of encrypted messages so that everyone looks like they are sending identical blocks of data.
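      The length-padding idea described above can be sketched roughly as follows. This is an illustration only, not Signal’s actual wire format: the 160-byte bucket size and the ISO/IEC 7816-4-style padding scheme are assumptions chosen for the example.

```python
# Sketch: pad plaintexts up to the next fixed-size "bucket" before encryption,
# so ciphertext lengths reveal only the bucket count, not the message length.
# BUCKET = 160 is an assumed value, not Signal's real parameter.

BUCKET = 160

def pad(plaintext: bytes) -> bytes:
    # ISO/IEC 7816-4 style: append a 0x80 marker byte, then zero bytes
    # up to the next multiple of BUCKET.
    padded_len = ((len(plaintext) + 1 + BUCKET - 1) // BUCKET) * BUCKET
    return plaintext + b"\x80" + b"\x00" * (padded_len - len(plaintext) - 1)

def unpad(padded: bytes) -> bytes:
    # The last 0x80 byte is always the padding marker, since everything
    # after it is zeros.
    return padded[: padded.rindex(b"\x80")]
```

      With this scheme a 2-byte and a 150-byte message both encrypt to the same-length ciphertext, which is what makes traffic look like “identical blocks of data” to the host.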

      • shortwavesurfer@lemmy.zip · 6 days ago

        I’m assuming they were referring to the outage that occurred today, which pulled a ton of internet services, including Signal, offline temporarily.

        You can have all the encryption in the world, but if the centralized data point that allows you to access the service is down, then you’re fucked.

        • Pup Biru@aussie.zone · 6 days ago

          no matter where you host, outages are going to happen… AWS really doesn’t have many… it’s just that it’s so big that everyone notices - it causes internet-wide issues

          • Pup Biru@aussie.zone · edited · 6 days ago

              that’s pretty disingenuous though… individual lemmy instances go down or have issues regularly… they’re different, but not necessarily worse in terms of stability… for robustness of the system as a whole there’s perhaps an argument in favour of distributed, but “the system as a whole” isn’t a particularly helpful measure when you’re trying to access your specific account

              centralised services are just inherently more stable for the same type of workload: they tend to be less complex, with less network interconnectedness to cause issues, and you can focus a lot more energy on building out automation and recovery instead of repeatedly building the same things… in a distributed system that energy is spread out, but it’s still human effort: centralised systems are likely to be more stable because they’ve had significantly more work put into stability, detection, and recovery