• nonentity@sh.itjust.works · 16 points · 14 hours ago

    I’ll never understand how an explosively imprecise, statistically lukewarm, grey goo extrusion sphincter could ever be mistaken for intelligence.

    AI doesn’t exist, it’s a vacuous marketing term.

    LLMs have vanishingly narrow legitimate, defensible use cases, but their output is intrinsically inaccurate, and should never be used without supervision from relevant domain experts.

    • texture@lemmy.world · 1 point · 50 minutes ago

      There are plenty of legitimate use cases. Your comment just sounds like you’re repeating what everyone else says about it.

      • nonentity@sh.itjust.works · 1 point · 7 minutes ago

        There could be many use cases, and some of them may even be legitimate, but I have yet to observe any with broad applicability, and they should only ever be wielded by a responsible, expert adult.

    • dogzilla@masto.deluma.biz · 3 points · 14 hours ago

      @nonentity @technology I think the problem with your framing is it implies that humans are not also “explosively imprecise, statistically luke-warm, grey goo extrusion sphincter(s)”. We weren’t exactly living in a perfect world prior to AI, and all AI does is regurgitate what humans created. AI isn’t really changing the character of anything - and in several domains I’d argue it’s improving the baseline (coding for one).

      • nonentity@sh.itjust.works · 7 points · 13 hours ago

        It’s telling that you assumed the description applied exclusively to LLMs.

        No one who persists in labelling LLMs as ‘AI’ should be treated as an authority on the subject, and I’d argue it’s one of the greatest indicators of how little they comprehend the situation.

        • astronaut_sloth@mander.xyz · 2 points · 13 hours ago

          THANK YOU! I studied AI in school, and it always bothers me when people think that LLMs are the only facet of AI. From 2022 to 2024, I had a knee-jerk reaction of explaining that AI is more than LLMs and that LLMs are really a small subset of the entire universe of AI, yadda yadda yadda. Now I’ve given up and roll my eyes as someone tries to tell me about the cool new Claude skill they built.

          What’s funnier is people think I hate LLMs. That couldn’t be further from the truth; they are a fantastically interesting and innovative technology! “Attention is All You Need” is a great paper, and super impactful. I just hate that people are outsourcing their thinking to a chatbot and neglecting the rest of my field of study.

          • howrar@lemmy.ca · 1 point · 10 hours ago

            LLMs are still a facet of AI though. It sounds like they’re saying it shouldn’t be categorized as AI at all.

        • Em Adespoton@lemmy.ca · 1 point · 12 hours ago

          I’m confused. Aren’t you the one who referred to LLMs in a thread that was conflating LLMs with AI? The parent’s comment seems to be right on point.

          It’s kind of like how we’ve lost the war on hacking.

          Large language models fall under the current definition of artificial intelligence just as much as Cyc or Cog did in their day, or various expert systems and machine learning models, diffusion models, etc.

          Pretty much any non-deterministic inference engine can be classified as an AI, including LLMs.