• TheTechnician27@lemmy.world
    link
    fedilink
    arrow-up
    56
    ·
    edit-2
    18 hours ago

    Can we please stop treating AI “confessions” like they mean jack shit? It’s just giving genAI companies the self-seriousness they crave and making anti-AI people look like hypocritical morons.

    • Zedstrian@sopuli.xyz
      link
      fedilink
      arrow-up
      11
      ·
      17 hours ago

      It’s just a way for them to shift the blame for corporate negligence from either company onto an AI model.

      • TheJesusaurus@piefed.ca
        link
        fedilink
        English
        arrow-up
        7
        ·
        17 hours ago

        Exactly. “Our busted ass untested software deleted our own database” doesn’t fill investors with confidence

      • [object Object]@lemmy.ca
        link
        fedilink
        arrow-up
        1
        ·
        17 hours ago

        And honestly the negligence was Railway hosting backuos on the same volume as production data.

        I don’t know if that was Railway’s fault, but it was definitely this companies fault to use a company who followed that pattern.

  • circuitfarmer@lemmy.world
    link
    fedilink
    arrow-up
    23
    ·
    17 hours ago

    We, as a society, need to stop pretending LLMs are conscious.

    This is vectors between numbers. We humans ascribe value to it.

    • mindbleach@sh.itjust.works
      link
      fedilink
      arrow-up
      4
      ·
      17 hours ago

      It is possible for vectors between numbers to be conscious - these just aren’t.

      The Chinese Room isn’t real. John Searle pointed to a hard drive and said “processor.” The whole argument is Cartesian dualism, except instead of a soul, you need Steve to pay attention. If he gets the same answers while distracted then they don’t count.

  • LogicOverFeelings@piefed.ca
    link
    fedilink
    English
    arrow-up
    10
    ·
    17 hours ago

    It’s kinda funny where tech is going. We are going from programming the machines to do exactly what we want to saying what we want in natural language to some model hoping they are gonna do it right.

    When technology become more magic then science.

    Maybe he should have tried saying please. 😆

    • Snot Flickerman@lemmy.blahaj.zone
      link
      fedilink
      arrow-up
      7
      ·
      edit-2
      17 hours ago

      I’ve been saying it a lot lately, we finally built a computer that’s as unreliable as a human. I’m pretty sure that’s not a good thing.

  • [object Object]@lemmy.ca
    link
    fedilink
    arrow-up
    6
    ·
    edit-2
    17 hours ago

    The AI confession neither has internal monolog or access to the thinking tokens.

    LLMs are incapable of introspection, they can’t playback their attention weights, or review them, or recall what they thought.

    Even thinking tokens are just a reinforcement learning based loop to anneal the models thinking back to a solution. And again, Claude hides the thinking tokens so they don’t get used for model distillation.

    This article, and all the articles like it, are pandering bullshit written by morons hoping to fool morons. It is a fiction written by the model to contrast <bad thing happened> to the chat log and system prompt.

    Good day.

  • pelespirit@sh.itjust.works
    link
    fedilink
    arrow-up
    3
    ·
    17 hours ago

    So correct me if I’m wrong, but the following happens for AI:

    • Company gives guidelines and parameters for the project
    • Company trains AI on whatever data
    • No matter the data, AI still gives a general answer or summary.
    • The answers are sometimes confidently incorrect
    • The AI is uncontrollable because it considers the data general or loosely based guidelines
    • There is no way to control the AI after a certain tipping point, because its learning is based in a fuzzy math way of thinking

    What I don’t get is, even if the data wasn’t shitty like reddit’s info, would it still go off the rails? It sure seems like it.