Post reads: "❓ Do you know who are the inventors of the hardware-level kill switch for smartphones?

🤫 Stay tuned! We’re teaming up with them to offer you more privacy.

👇 Share your ideas in the comments! "

  • solrize@lemmy.ml · 7 points · edited · 22 hours ago

    Not commenting on the other stuff, but people should get used to the fact that anonymized private data is still private, so a so-called privacy app should not be leaking, disclosing, or selling it. It might be LESS invasive than personally identifiable data, but it’s not NON-invasive.

    Who is willing to pay for it after all? Almost certainly, someone who is up to no good. And if you can think of a way it can possibly be misused, then enabling that misuse is invasive.

    • utopiah@lemmy.ml · 1 point · edited · 13 hours ago

      I’ll preface my answer by clarifying that I’m against surveillance capitalism and privacy Zuckering. I say that openly: I don’t use Google services or Amazon, I run my own PeerTube instance, my home IoT is HomeAssistant with ZigBee, etc. So my goal here is NOT to cut anyone any slack.

      I started with that because I’m not actually sure what you’re referring to. Since my initial comment was about Murena STT, I’ll assume it’s that, but if not please correct me. This specific service is not a compromise I would accept, so I’m in NO way advocating for it. The only thing I’m clarifying is that this service is not something one can “stumble upon” and enable without paying attention; that’s why I put such recurring emphasis on it. It’s not consistent with “sharing all data”, or with a scenario where somebody buys an /e/OS phone from Murena and somehow ends up with their data leaked to OpenAI (due to the potentially imperfect anonymization). One has to activate it, and to do so one must be a paying Murena services customer. This is not the case when “just” installing /e/OS. So once again, I’m not saying Murena is perfect, nor even that it made the right choice (according to my own privacy preferences) by relying on OpenAI, and yet that problem is not relevant to most people who use /e/OS.

      To make a quick analogy, it’s like installing WhatsApp on a privacy-focused OS. Sure, you technically can do that, but if you do and then complain about Meta collecting your data, you brought it on yourself; you can’t blame the OS developers.

      • solrize@lemmy.ml · 1 point · edited · 11 hours ago

        “due to the potentially imperfect anonymization”

        I don’t understand what you’re saying above, but my point is that disclosing any info to adversaries is invasive even if the anonymization is 100% perfect. The potential imperfection makes it worse, but that’s a side issue.

        An example is polling. Some terrible politician X wants to know what voters think of issue Y, like “35% in favor”. So she hires a polling firm to call people and ask their opinions about Y, with the result being completely anonymized and aggregated, again, like “35% in favor”. What will X do with that info? Something bad, of course! We said at the beginning that they are terrible!

        So do you want to cooperate with such a poll, that X commissioned to serve an evil purpose? Of course not! Or at least, I hope of course not. In that case, what do you think of software that effectively enrolls you in such a poll against your wishes?
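        The poll argument can be made concrete with a toy calculation (a hypothetical sketch, not something from the thread; the function name and numbers are illustrative): even a perfectly anonymized aggregate narrows the adversary’s uncertainty about the population, and the more people cooperate, the narrower it gets.

        ```python
        import math

        def poll_margin_of_error(p: float, n: int, z: float = 1.96) -> float:
            """Approximate 95% margin of error for an anonymized poll aggregate.

            p: observed fraction in favor (e.g. 0.35)
            n: number of anonymous respondents

            The aggregate contains zero PII, yet its information content grows
            with n: the adversary's uncertainty shrinks roughly as 1/sqrt(n).
            """
            return z * math.sqrt(p * (1 - p) / n)

        # With 100 anonymous respondents the adversary learns "35% +/- ~9.3%";
        # with 10,000 it's "35% +/- ~0.9%". Perfect anonymization, real information.
        for n in (100, 10_000):
            print(f"n={n}: 35% +/- {poll_margin_of_error(0.35, n):.1%}")
        ```

        Each additional cooperating respondent tightens that interval, which is the sense in which cooperating with the poll hands X information even though no individual answer is ever revealed.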

        If your private activity is being statistically reported to your adversaries, your privacy is being invaded even if there is zero PII in what the adversary gets. This is infosec 101. A quotation due to Silvio Micali: “a good disguise does not reveal the person’s height”. Statistically summarized information is still information, and calling it otherwise is self-serving nonsense. You want to give the adversary NO information. Anonymization is irrelevant.