• chrash0@lemmy.world · 4 days ago

    my point is that it’s hard to program someone’s subjective point of view, however carefully it’s written out in legalese, into a detection system, especially when those same detection systems can be used to great effect to train other systems to bypass them. any such detection system would likely be an “AI” in the same sense as the ones they ban, and it would be similarly prone to mistakes and to reflecting the values of the company (read: Jack Dorsey) rather than enforcing any objective ethical boundary.
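
    to make the arms-race point concrete: any detector that exposes a score hands an evader a training signal. here’s a minimal PyTorch sketch, purely hypothetical (random vectors stand in for post features, and both networks are toy MLPs, not anyone’s real system):

    ```python
    # sketch of "the detector trains its own evader". everything here is
    # hypothetical: random vectors stand in for post features, and both
    # networks are toy MLPs, not any real moderation system.
    import torch
    import torch.nn as nn

    DIM = 32  # hypothetical feature dimension for a "post"

    detector = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    detector.requires_grad_(False)  # the platform's frozen classifier

    evader = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, DIM))
    opt = torch.optim.Adam(evader.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(1_000):
        raw = torch.randn(128, DIM)   # stand-in for AI-generated content
        disguised = evader(raw)       # rewrite it to slip past the detector
        logit = detector(disguised)   # detector's "this is AI" score
        # train the evader to push the detector's output toward "human" (label 0)
        loss = loss_fn(logit, torch.zeros_like(logit))
        opt.zero_grad()
        loss.backward()
        opt.step()
    ```

    even a black-box detector that only returns accept/reject leaks enough signal to do the same thing, just more slowly.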

    • Chronographs@lemmy.zip · 4 days ago

      In every single comment I’ve said that detecting them would be the hard part; I’ve been talking about defining the type of content that is allowed/banned, not the part where they actually have to filter it.

      • chrash0@lemmy.world · edited · 4 days ago

        i guess the point that’s being missed is that when i say “hard” i mean practically impossible.

        • Chronographs@lemmy.zip · 4 days ago

          Yeah, I’m basically treating implementation as a separate issue from definition, and the definition is the part I’m saying is objective. Given a definition of what type of content they want to ban, you should be able to figure out whether something you’re about to post is allowed or not; that’s why I’m saying it’s not subjective. Detecting it if you post it anyway would probably have to be based on reports, human reviewers, and strict account bans if caught, with the burden of proof on the accused to prove it isn’t AI for it to have any chance of working at all. This would get abused and be super obnoxious (and expensive), but it would probably work to a point.
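
          To sketch what I mean by “work to a point” (every threshold and name here is made up, it’s just the shape of the workflow):

          ```python
          # hypothetical report-driven moderation flow: enough reports queue a
          # post for human review, and a reviewer ruling of "AI" bans the
          # author unless they rebut it. thresholds and names are invented.
          from dataclasses import dataclass, field

          REPORT_THRESHOLD = 3  # hypothetical: reports before a human looks

          @dataclass
          class Post:
              post_id: int
              author: str
              reports: int = 0
              queued: bool = False

          @dataclass
          class ModQueue:
              banned: set[str] = field(default_factory=set)
              pending: list[Post] = field(default_factory=list)

              def report(self, post: Post) -> None:
                  post.reports += 1
                  if post.reports >= REPORT_THRESHOLD and not post.queued:
                      post.queued = True
                      self.pending.append(post)  # hand off to a human reviewer

              def rule(self, post: Post, reviewer_says_ai: bool,
                       author_rebutted: bool) -> None:
                  # burden of proof on the accused: the ruling sticks unless
                  # the author successfully proves it isn't AI
                  if reviewer_says_ai and not author_rebutted:
                      self.banned.add(post.author)
          ```

          The human behind rule() is the expensive, abusable part, which is exactly why it only scales so far.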