• floofloof@lemmy.ca · 65 points · edited · 20 hours ago

    Previously they would have had to encounter a person who wanted to manipulate them. Now there’s a widely marketed technology that will reliably chew these vulnerable people up.

    • Steve@startrek.website · 47 points · 19 hours ago

      Chew them up for no reason at all. No goal, no scam, just a shitty word salad machine doing what it does.

      • paraphrand@lemmy.world · 13 points · 16 hours ago

        And there are countless AI hype bros who will just dismiss all of this and call the people who fall into this morons.

        It’s really insidious.

        • Amnesigenic@lemmy.ml · 2 points · 4 hours ago

          Tbf the people who fall for this are morons, but that doesn’t mean they deserve to be fucked over

          • paraphrand@lemmy.world · 1 point · edited · 4 hours ago

            I don’t know that they always are. It’s easy from our nerd bubble to dismiss AI and LLMs because we understand their limitations and how they work to an extent.

            We shouldn’t look down on anyone who takes the advertising, and the idea that these are “intelligence,” at face value. The disclaimers saying the intelligence is fallible, just like us, are never as strongly worded as they should be. If the AI companies made things clearer, they would be de-hyping their products.

            I dunno, this whole thing is unprecedented. And the hype around it all, taken at face value, is irresponsible and misleading.