• 0 Posts
  • 86 Comments
Joined 3 years ago
Cake day: June 14th, 2023

  • CheeseNoodle@lemmy.world to Comic Strips@lemmy.world · 20 points · 10 days ago

    Wasn’t his job trying to figure out how Muggle stuff worked? Most wizards don’t even have a high-school-level comprehension of the basics of technology. The guy’s job was probably the most important of all, given that technology advances exponentially while magic in the setting appears to be almost completely stagnant.



  • So the two biggest examples I am currently aware of are Google’s AI for protein folding (AlphaFold) and a startup using one to optimize rocket engine geometry, but AI models in general can be highly efficient when focused on niche tasks. As far as I understand it they’re still very similar in underlying function to LLMs, but the approach is far less scattershot, which makes them dramatically more efficient.
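
    A minimal sketch of what such a niche model can look like (assuming a supervised surrogate-model setup; NozzleSurrogate, its dimensions, and the thrust target are all hypothetical, not the startup’s actual code):

    ```python
    # Hypothetical single-purpose model: a tiny MLP mapping a handful of
    # rocket-nozzle geometry parameters to a predicted thrust score.
    import torch
    import torch.nn as nn

    class NozzleSurrogate(nn.Module):
        def __init__(self, n_params: int = 6):
            super().__init__()
            # ~4.7k weights total -- an LLM has billions.
            self.net = nn.Sequential(
                nn.Linear(n_params, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, 1),  # single output: predicted thrust
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    model = NozzleSurrogate()
    print(sum(p.numel() for p in model.parameters()))  # 4673 parameters
    ```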

    A good way to think of it: even the earliest versions of ChatGPT or the simplest local models are all equally good at actually talking, but language has a ton of secondary requirements, like understanding context, remembering things, and the fact that not every grammatically valid banana is always a useful one. So an LLM has to actually be a TON of things at once, while an AI designed for a specific technical task only has to be good at that one thing.
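
    To put rough numbers on the size gap (illustrative round figures, not benchmarks for any specific model):

    ```python
    # Back-of-the-envelope weight-storage comparison: a general chat LLM
    # vs. the tiny single-task surrogate sketched above (fp16 weights).
    FP16_BYTES = 2
    llm_params = 70_000_000_000   # illustrative size for a large chat model
    niche_params = 4_673          # the toy surrogate above

    print(f"LLM:   {llm_params * FP16_BYTES / 1e9:.0f} GB of weights")
    print(f"niche: {niche_params * FP16_BYTES / 1e3:.0f} KB of weights")
    # LLM:   140 GB of weights
    # niche: 9 KB of weights
    ```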

    Extension: The problem is our models are not good at talking to each other, because they don’t ‘think’; they just optimize an output from an input and a set of rules, so they don’t share any common rules or internal framework. So we can’t, say, take an efficient rocket-engine-designing AI and plug it into an efficient basic chatbot and have that chatbot talk knowledgeably about rockets. Instead we have to try to make the chatbot memorise a ton about rockets (and everything else), which it was never initially designed to do, and that leads to immense bloat.
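
    A toy illustration of the ‘no common internal framework’ point (the two encoders are hypothetical stand-ins for the rocket AI and the chatbot, not real models):

    ```python
    # Two networks with identical shapes but independent training
    # histories develop unrelated internal representations, so feeding
    # one's hidden state into the other is meaningless.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    enc_rocket = nn.Linear(8, 16)   # stand-in for the rocket model
    enc_chat = nn.Linear(8, 16)     # stand-in for the chatbot

    x = torch.randn(1, 8)           # the same input to both
    h_rocket, h_chat = enc_rocket(x), enc_chat(x)

    # Cosine similarity is typically near 0: the two hidden vectors
    # share no common coordinate system even though both processed x.
    print(nn.functional.cosine_similarity(h_rocket, h_chat).item())
    ```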