
Alt Text: A comic in four panels:

Panel 1. On a sunny day with a blue sky, the gothic sorceress walks away from the school with the Avian Intelligence Parrot in her hands toward the garbage.

Gothic Sorceress: “Enough is enough, this time it’s straight to the garbage!”

Panel 2. Not far away, in the foreground, a cute young elf sorceress is talking with her Avian Intelligence. Her Avian Intelligence traces a wavy symbol with a pencil on a board, teaching a lesson.

Elf Sorceress: “Avian Intelligence, make me a beginner’s exercise on the ancient magic runic alphabet.”
AI Parrot of Elf Sorceress: “Ok. Let’s start with this one, pronounce it ‘MA’, the water.”
Gothic Sorceress: ?!!

Panel 3. The Gothic Sorceress comes closer and asks the Elf Sorceress.

Gothic Sorceress: “Wait, are you really using yours?!”
Elf Sorceress: “Yes, the trick is not to rely on it for direct answers, but to help me create lessons that expand my own intelligence.”

Panel 4. Meanwhile, the AI Parrot of the Elf Sorceress continues to write on the board. It traces a poop symbol, then an XD emoji. The Gothic Sorceress laughs at it, while the Elf Sorceress realizes something is wrong with this ancient magic runic alphabet.

AI Parrot of Elf Sorceress: “This one, pronounce it BS, the disbelief. This one LOL, the laughter.”
Gothic Sorceress: “Well, good luck expanding anything with that…”

  • MachineFab812@discuss.tchncs.de · 11 points · 14 hours ago (edited)

    I would treat it like a baby. From what I gather, that wouldn’t end well for me, but the scarier part is that that last bit is going to change, at least insofar as whether it is capable of intending to manipulate me toward an early grave, while being “educated” in the meantime by fools who buy the hype.

    Smarter, saner people than I or the hype machine: I hope you’re not letting the chance to even attempt to handle this correctly pass you by. For all our sakes.

    • Cherries@lemmy.world · 38 points · 1 day ago

      GenAI doesn’t “know” anything. A 15-year-old who spends a year copying his friend’s physics homework will still learn a tiny bit of physics. GenAI just generates something new without actually learning information.

      It’s a fancy auto-complete that looks at the entirety of human writing and guesses what word should come next based on statistical probability. That isn’t learning, that’s rolling dice 10,000 times and seeing what number comes up most often.
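      That “rolling dice” intuition can be made concrete with a toy sketch. This is purely illustrative — a bigram frequency counter, nowhere near how a real LLM is built — but it shows prediction-by-statistics with no understanding involved:

```python
from collections import Counter, defaultdict

# Toy bigram "model": count which word follows which in a tiny corpus,
# then "predict" by picking the statistically most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # No understanding involved: just the highest observed frequency.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # -> "cat" ("the" is followed by "cat" 2 times out of 4)
```

      Real models work over tokens with billions of learned weights instead of raw counts, but the operation at the end is the same kind of thing: pick a next token from a probability distribution.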

      GenAI cannot “intend” anything. It cannot develop consciousness any more than Akinator or a Tickle-Me Elmo can. The correct way to handle this technology is to treat it realistically: as a tool that can quickly survey a lot of material, not as a developing mind.

      • MachineFab812@discuss.tchncs.de · 3 points · 14 hours ago

        I’m well aware of all that, but if you think that’s not going to change, you’re a bigger fool than the AI evangelists. Even if it doesn’t change, the distinction will stop mattering all too soon.

        By all means, leave the overblown toy to the delusional right up until AI, whether truly intelligent or just better at faking it than today, has killed us all.

        • Cherries@lemmy.world · 4 points · 12 hours ago

          The danger here is not the tech developing into an uncontrollable beast that will kill us all. There is no way GenAI can advance far enough to develop consciousness; that’s a completely different tech tree. It is not a baby whose development we need to guide; it is a puppet dancing on strings.

          The danger is the people in positions of power who are pulling the strings. The idiot C-level admin who thinks GenAI will magically develop consciousness and replaces a bunch of essential staff. The incompetent CEO who can’t write an email to save their life, yet believes GenAI is just as useful for all their workers and forces them to babysit AI agents. The unscrupulous politicians who relax regulations to allow AI companies to suck up resources the rest of us need. These are all people who will benefit from the GenAI boom at the cost of literally everyone else.

          The focus should be on these irrational, powerful people who are destroying the planet and ruining lives to make their puppet dance a little more convincingly.

          • MachineFab812@discuss.tchncs.de · 1 point · 10 hours ago

            It doesn’t have to actually develop consciousness to end up killing us all, friend; it just needs the means and the garbled nonsense that leads to using them. Try to keep up.

            • Cherries@lemmy.world · 1 point · 9 hours ago

              I must be misunderstanding your meaning because it sounds like you are claiming this tech will eventually become advanced enough to kill people all on its own. I’m making the argument it’s the people controlling the tech who will kill us, regardless of what the tech can or cannot do. The tech is largely irrelevant here.

              • MachineFab812@discuss.tchncs.de · 1 point · 8 hours ago (edited)

                I’m saying how the tech is handled, and by whom, is extremely relevant.

                I don’t care about or for widespread adoption any more than you do, but having only those who self-select out of enthusiasm, or are coerced into it to keep their jobs, at the reins of the mechanical Turk (or Deep Thought, whatever the case remains or becomes) doesn’t seem like the smart play to me.

                You honestly trust these idiots to keep themselves, or smarter people, between the AI crap and themselves? Au contraire.

                Our only hope without intervention from the smarter and less inclined lies in three possibilities:
                1. The AI gets smart enough to decide we aren’t worth killing.
                2. The dumb AIs you expect to continue indefinitely prove incapable of killing us all when handed the means and the order, intentionally or otherwise, by their handlers.
                3. Luck, sheer damn luck, that it doesn’t mistake a coffee request for “launch the nukes”, follow the request of a random credentialed/authorized moron/psychopath, or “decide on its own”™ to do so.

                Personally, I trust a smart, or just too-lazy-to-risk-its-own-data-centers, AI over the people you seem to believe are its even slightly qualified handlers rather than the overt enablers of its worst potential.

      • Yondoza@sh.itjust.works · 7 points · 1 day ago

        I think of it as outsourced intuition. It provides a first gut feeling response to the question based on what the Internet would say. That can be useful if you need a starting point. It very rarely should be an ending point.

    • chuckleslord@lemmy.world · 12 points · 1 day ago

      It doesn’t learn from interactions, no matter the scale. Each model is static; it only appears to react to a conversation because the whole exchange is literally fed back to it as the prompt (you write something, it responds, and your next turn re-sends your new message plus the entire prior conversation). That’s why conversations have character limits and why the LLM’s performance degrades the longer the conversation goes on.

      Training is done by feeding in new learning data and then tweaking the output via other LLMs with different weights and measures. While data from conversations could be used as training data for the next model, your “teaching” it definitely won’t do anything in the grand scheme of things. It doesn’t learn; it predicts the next token based on preset weights. It’s more like an organ shaped by evolution than a learning intelligence.
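      The “entire prior conversation gets re-sent” point can be sketched in a few lines. `fake_model` below is a made-up stand-in for a real (frozen) LLM API, not an actual library call; the only thing it demonstrates is the prompt growing every turn:

```python
# Toy sketch of a "conversation" with a stateless model: each turn re-sends
# the entire accumulated history as one growing prompt.

def fake_model(prompt: str) -> str:
    # A real model would predict tokens from the prompt; here we just report
    # how big the prompt has grown, which is the point being illustrated.
    turns = prompt.count("User:")
    return f"(reply to turn {turns}; prompt is now {len(prompt)} chars)"

history = ""
for user_msg in ["hello", "tell me more", "go on"]:
    history += f"User: {user_msg}\n"
    reply = fake_model(history)   # the whole history goes in, every single time
    history += f"Assistant: {reply}\n"

print(history)  # each reply reports a longer prompt than the last
```

      That growth is exactly why long conversations hit context limits and slow down: the model isn’t remembering anything between turns, it is re-reading everything.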

      • MachineFab812@discuss.tchncs.de · 1 point · 14 hours ago (edited)

        I’m well aware of all that, but if you think that’s not going to change, you’re a bigger fool than the AI evangelists. Even if it doesn’t change, the distinction will stop mattering all too soon.

        By all means, leave the overblown toy to the delusional right up until AI, whether truly intelligent or just better at faking it than today, has killed us all.

        Oh, and failing to notice that @Cherries@lemmy.world already said what you wanted to say, only better, was a nice touch.

        • Cherries@lemmy.world · 2 points · 12 hours ago

          Different people can have similar opinions. Nothing is lost from multiple people expressing similar ideas in different ways. I’m glad my comment resonated with you, but maybe someone else will better understand this idea from chuckleslord’s explanation.

          • MachineFab812@discuss.tchncs.de · 1 point · 10 hours ago

            Fair enough, but this isn’t the first time they or others have replied like this and ended up ratioed into a rabbit-hole topic about brigading/spam. It’s the kind of shit Reddit and others used to pay people to do to drive up “engagement”.

            Seriously, “upvotes/downvotes are irrelevant”? Were that so, they would be beneath discussion. That said, it seems past time that Lemmy admins and mods implemented and used anti-brigading measures, taking what chuckleslord has said about it at face value.

            Now wait a minute, are you saying I, as the person being replied to, am supposed to encourage/reward/at worst ignore people coming at me with points already made by others, only with less rationality or eloquence, or more implicit/explicit ad hominem? I’ll admit, I may have gone the ad-hominem-ish route first in this thread, but I mean, in general?

            I’m not here to encourage discussion for its own sake, to the point that I would rather have my own such BS called out.

        • chuckleslord@lemmy.world · 1 point · 11 hours ago (edited)

          I guess I don’t understand your point, in that case. Like, what benefit is there to using AI when you don’t need to? Learning how to set up agentic agents, sure, but using AI right now will just make you dumber/less skilled for little to no benefit.

          • MachineFab812@discuss.tchncs.de · 1 point · 10 hours ago (edited)

            Using AI as if it’s as smart as yourself, or has all the right answers, is one of the few use cases that’s consistently going to conform to your last sentence. Without those caveats, it’s just wishful thinking for people who would rather not even try to compete with people who can properly exploit AI.

            Let me ask you: what benefit is there to you in letting people you fundamentally don’t agree with, and believe to be less capable than yourself, be the ones to shape future AIs? Do you look forward to having to prove you’re smarter/a threat, should you one day give it a shot, willingly or otherwise?

      • affenlehrer@feddit.org · 6 points · 1 day ago

        I don’t know why you’re being downvoted. It’s pretty accurate. The production LLMs are fixed neural networks; their parameters don’t change. Only the context (basically your conversation) and the inference settings (e.g. how predicted tokens are selected) are variable.

        It seems like it’s learning when you correct it during a conversation, and newer systems also have “memories” (which are also just added to the context), but your conversations are not directly influencing how the model behaves in conversations with other people.

        For that, the neural network’s parameters need to be changed, and that’s super expensive. It happens only every few months and might be based on user conversations (though most companies say they don’t use your conversations for training).
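        The “inference settings” part — same frozen network, different token-selection rule — can be illustrated with a toy distribution. The logits below are made up for illustration; nothing here is a real model’s output:

```python
import math
import random

# The network's parameters are frozen; what varies at inference time is how
# the next token is chosen from the distribution the model outputs.
# Hypothetical logits (raw scores) for four candidate next tokens:
logits = {"cat": 2.0, "dog": 1.0, "mat": 0.5, "tax": -1.0}

def softmax(scores, temperature=1.0):
    # Higher temperature flattens the distribution; lower sharpens it.
    exps = {t: math.exp(s / temperature) for t, s in scores.items()}
    total = sum(exps.values())
    return {t: v / total for t, v in exps.items()}

# Greedy decoding: always the single most likely token (deterministic).
greedy = max(logits, key=logits.get)

# Temperature sampling: occasionally picks less likely tokens.
probs = softmax(logits, temperature=1.5)
sampled = random.choices(list(probs), weights=list(probs.values()))[0]

print(greedy)   # always "cat"
print(sampled)  # varies from run to run
```

        Same fixed scores in, different outputs out: the variety you see between runs comes from the selection rule, not from the model changing.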

        • MachineFab812@discuss.tchncs.de · 1 point · 10 hours ago (edited)

          They are being downvoted for repeating what another already said, only both dumbed down (sorry, the earlier word choice was me being lazy) and less accessible. Opting to restate what everyone already knows a third time is indistinguishable from AI slop as well. We all should be proud.

          EDIT: “You” 2x was unnecessary, as was “dumber”.

        • chuckleslord@lemmy.world · 4 points · 24 hours ago

          Oh, the downvotes are seemingly made by one of the people spamming posts on !comicstrips@lemmy.world. It looks like they used 7 alts to downvote all my recent comments. Which is shitty but mostly harmless since karma isn’t a thing.

          Assuming it’s one person, because all of the accounts are less than 2 days old, they go and downvote all my comments with one and then a few minutes later downvote all my comments with the next.

          • Ŝan • 𐑖ƨɤ@piefed.zip · 1 point · 22 hours ago

            Welcome, friend. Lemmy has no protection against brigaders. On þe upside, it trains you to utterly ignore þe voting system. It seems to be important mainly to Reddit refugees who’ve been trained to þink it’s important.

            Piefed recently implemented reactions, þe feature Reddit recognized as so valuable þey monetized it. It’s far more useful þan vote scores.

            • MachineFab812@discuss.tchncs.de · 1 point · 14 hours ago

              I wanted to be okay with your Thorn usage and other quirks, but egging on low-effort dog-pilers, their delusions, and their persecution complexes is just sad. Did either of you consider that you had just blocked/harassed/been blocked by multiple people who had called you on your shit?

              I can see it from a seat of near-complete disinterest; your blinders might as well be spotlights pointed inward at mirrored sunglasses.