Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • pkjqpg1h@lemmy.zip · 8 hours ago

    It’s not just about bad Web data or Reddit data; even old books carry some unconscious bias.

    And even if you could find every “wrong” or “bad” piece of data (which you can’t, because some things are just subjective) and remove it, you still couldn’t be sure you got it all.

    • rumba@lemmy.zip · 5 hours ago

      What is your fixation with trying to tell me I’m saying you can remove all bias?