• Hawke@lemmy.world · 11 hours ago

    That was a different “AI”

    Machine learning is different from large language models.

      • NeilNuggetstrong@lemmy.world · 9 hours ago

        You’re right, but there are still a lot of different kinds of AI, and the hype is all about LLMs. Of course there are similarities between them: most AIs use artificial neural networks, for instance. The training process of these networks, and their architecture, is usually what separates them.

        Reinforcement learning is often used to play video games or control robots. The agent learns by taking actions in an environment and getting feedback for those actions.
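        That loop can be sketched in a few lines. This is a toy two-armed bandit, not real game-playing or robotics code — the payout numbers and the epsilon-greedy strategy are just illustrative choices:

```python
import random

# Toy sketch of the reinforcement-learning loop: take an action,
# get a reward from the environment, update your value estimates.
random.seed(0)

true_rewards = [0.3, 0.7]   # hidden payout probability of each action
estimates = [0.0, 0.0]      # the agent's learned value estimates
counts = [0, 0]

for step in range(5000):
    # epsilon-greedy: mostly exploit the best-looking action, sometimes explore
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_rewards[action] else 0.0
    counts[action] += 1
    # incremental average update of the chosen action's estimated value
    estimates[action] += (reward - estimates[action]) / counts[action]

best = max(range(2), key=lambda a: estimates[a])
print(best)  # the agent learns that action 1 pays off more
```

        No labels anywhere — the only training signal is the reward coming back from the environment.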

        Supervised learning can be used for object detection in images, for instance. You manually create a dataset, labeling each image (or even each pixel) that contains a dog, and you end up with a dataset of labeled dog images. Then you train the neural network to spot the similarities. A few years back, one researcher trained such a network to separate dogs from wolves. But when they performed explainability analysis on the output, they could see that the AI had instead noticed that snow was present in most images with wolves, so it classified every image containing snow as a wolf lol.
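        That dogs-vs-wolves failure is easy to reproduce with a toy model. The (snow, pointy_ears) features and the tiny dataset below are made up for this sketch; the point is just that a simple learner latches onto whatever feature happens to separate the labels:

```python
# Supervised learning sketch: a perceptron trained on labeled examples.
# Feature vector: (snow_in_image, pointy_ears). Because "snow" correlates
# with the "wolf" label in this invented dataset, the model learns snow.
data = [
    # (snow, ears) -> label: 1 = wolf, 0 = dog
    ((1, 1), 1), ((1, 1), 1), ((1, 0), 1),   # wolves, mostly in snow
    ((0, 1), 0), ((0, 0), 0), ((0, 0), 0),   # dogs, mostly indoors
]

w = [0.0, 0.0]
b = 0.0
for _ in range(20):                          # a few perceptron epochs
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                   # supervised signal: label vs prediction
        w[0] += err * x1
        w[1] += err * x2
        b += err

# A dog photographed in snow gets classified as a wolf:
dog_in_snow = 1 if w[0] * 1 + w[1] * 0 + b > 0 else 0
print(dog_in_snow)  # 1 — the model learned "snow", not "wolf"
```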

        Then you have unsupervised learning, which basically just lets the neural network learn the structure and relationships in the data on its own.
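        A classic sketch of that idea is k-means clustering: no labels at all, the algorithm groups the data by itself. The numbers below are invented for the example:

```python
# Unsupervised learning sketch: 1-D k-means with two clusters.
# No labels — the structure (two groups) is discovered from the data alone.
points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]   # two obvious groups, unlabeled
centers = [0.0, 5.0]                       # arbitrary starting guesses

for _ in range(10):
    # assignment step: each point joins its nearest center
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda c: abs(p - centers[c]))
        clusters[nearest].append(p)
    # update step: move each center to the mean of its cluster
    for c in range(2):
        if clusters[c]:
            centers[c] = sum(clusters[c]) / len(clusters[c])

print(sorted(round(c, 1) for c in centers))  # ≈ [1.0, 8.1]
```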

        Of course there are many variations of training, oftentimes multiple stages consisting of some, or even all, of the above. It’s a mess.

        • Rhaedas@fedia.io · 2 hours ago

          The other side of the hype is graphical AI: diffusion models and related. Thanks to the marketing around LLMs, it’s all been shoved under the AI label and blurred together. Not all AI is bad; I’d say the bad part is how the tool is being pushed for use anywhere and everywhere, when it’s only best for certain applications. The joke was always that when all you have is a hammer, everything looks like a nail, and yet here we are doing exactly that.

      • TheFriendlyDickhead@feddit.org · 10 hours ago

        They use machine learning heavily, and that’s the main “magic part”. But that’s still only one part of the LLM structure. Machine learning on its own basically just tries to maximize a value.
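        “Maximize a value” at its simplest looks like this — toy gradient ascent on a one-variable function, chosen purely for illustration (real training does the same basic thing over millions of parameters):

```python
# Gradient ascent sketch: maximize f(x) = -(x - 3)^2, whose maximum is x = 3.
x = 0.0
lr = 0.1                       # learning rate

for _ in range(100):
    grad = -2 * (x - 3)        # derivative of f at the current x
    x += lr * grad             # step uphill

print(round(x, 3))  # ≈ 3.0
```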

    • TheLeadenSea@sh.itjust.works · 11 hours ago

      Large language models are a thing that uses machine learning; they’re more a subcategory than a totally separate thing.

  • ryannathans@aussie.zone · 10 hours ago

    They are, it just takes time for those advancements to make it into patients’ hands.

    Look up AlphaFold and have your mind blown.

    • ikt@aussie.zone · 9 hours ago

      yeah lol I was gonna say they do, but this place downvotes every “AI did something good” post to -100

  • mycodesucks@lemmy.world · 9 hours ago

    Those things will come back when the scientists get reestablished in China and start developing good things instead of plagiarism machines and self-driving murder pods.