Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported.
World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone.
Sounds reasonable.
That being said, I am willing to believe that an LLM could be part of an AGI. It might well be an efficient way to incorporate a lot of knowledge about the world. Wikipedia helps provide me with a lot of knowledge, for example, though I don’t have a direct brain link to it. It’s just that I don’t expect an AGI to be an LLM.
EDIT: Also, IIRC from past reading, Meta has separate groups aimed at near-term commercial products (and I can very much believe that there might be plenty of room for LLMs here) and at advanced AI. It’s not clear to me from the article whether he just wants more focus on advanced AI or whether he disagrees with an LLM focus in their advanced AI group.
I do think that if you’re a company building a lot of parallel compute capacity now, then to make a return on it you need to take advantage of existing or quite-near-future stuff, even if it’s not AGI. It doesn’t make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.
https://datacentremagazine.com/news/why-is-meta-investing-600bn-in-ai-data-centres
So Meta probably can’t be doing only AGI work.
Look, AGI would require basically a human brain. LLMs are a very specific subset mimicking an (important) part of the brain: our language module. There’s more, but I got interrupted by a drunk guy who needs my attention, I’ll be back.
WHAT HAPPENED WITH THE DRUNK DUDE?
LLMs are just fast sorting and probability, they have no way to ever develop novel ideas or comprehension.
The system he’s talking about is more about using NNL, which builds new relationships to things that persist. It’s differential relationship learning and data-path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. Also more likely to be the thing that kills all humans.
https://blog.google/technology/ai/google-gemma-ai-cancer-therapy-discovery/ How did it do this?
Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.
I’ll never speak to this topic again since I’ve clearly been bested with your knowledge from a Google Blog.
Yes, Google reported that their AI discovered a novel cancer treatment; of course they did.
Now tell me how it isn’t true. Do you have anything of substance to discredit this?
This reeks of confirmation bias. Did you even try to invalidate your preconceived notions?
I sure do. Knowledge, and being in the space for a decade.
Here’s a fun one: go ask your LLM why it can’t create novel ideas, it’ll tell you right away 🤣🤣🤣🤣
LLMs have ZERO intentional logic that allows them to even comprehend an idea, let alone craft a new one and create relationships between others.
I can already tell from your tone that you’re mostly driven by bullshit PR hype from people like Sam Altman, and are an “AI” fanboy, so I won’t waste my time arguing with you. You’re in love with human-made logic loops and datasets, bruh. There is not now, nor was there ever, a way for any of it to become some supreme being of ideas and knowledge as you’ve been pitched. It’s super fast sorting from static data. That’s it.
You’re drunk on Kool-Aid, kiddo.
You sound drunk on Kool-Aid. This is a validated scientific report from Yale; tell me a problem with the methodology or anything else of substance.
So what if that’s how it works? It clearly is capable of novel things.
🤦🤦🤦 No…it really isn’t:
Teams at Yale are now exploring the mechanism uncovered here and testing additional AI-generated predictions in other immune contexts.
Not only is there no validation, they have only begun even looking at it.
Again: LLMs can’t make novel ideas. This is PR, and because you’re unfamiliar with how any of it works, you assume MAGIC.
Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through thousands of millions of iterative segments, testing outcomes of various combinations of things that would take humans years to do. It’s not that it’s intelligent or making “discoveries”; it’s just moving really fast.
You feed it 102 combinations of amino acids, and it’s eventually going to find new chains needed for protein folding. The thing you’re missing there is:
All the logic programmed by humans
The data collected and sanitized by humans
The task groups set by humans
The output validated by humans
It’s a tool for moving fast through data, a.k.a. A REALLY FAST SORTING MECHANISM
Nothing at any stage of development is novel output, or validated by any models, because…they can’t do that.
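For what it’s worth, the brute-force picture being described here, enumerate human-defined candidates and rank them by a human-written score, can be sketched in a few lines. This is purely illustrative (toy alphabet, made-up scoring function), not how any real structure-prediction model works:

```python
from itertools import product

# A caricature of "really fast sorting": exhaustively score every combination
# of building blocks against an objective a human wrote. The toy alphabet and
# scoring rule below are invented for the example.
AMINO = "ACDE"  # tiny toy alphabet, not the real 20 amino acids

def score(chain):
    # Hypothetical human-written objective: reward chains whose adjacent
    # residues differ.
    return sum(1 for a, b in zip(chain, chain[1:]) if a != b)

# Enumerate all 4**5 = 1024 candidate chains and keep the best-scoring one.
best = max(product(AMINO, repeat=5), key=score)
# Every candidate was enumerated and ranked by a human-defined score;
# nothing in this loop invents its own objective.
```

Note that `max` here just returns the first candidate that attains the highest human-defined score; the “search” is nothing but enumeration plus ranking.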
Wow, if you really do know something about this subject, you’re being a real asshole about it 🙄
You addressed that they haven’t tested the hypothesis completely while completely overlooking the fact that an AI suggested a novel hypothesis… even if it turns out to be wrong, it is still undeniably a novel hypothesis. This is what was validated by Yale…
you have still failed to answer the question. You’re also neglecting to include an explanation of temperature in your argument, which may be relevant here.
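For readers unfamiliar with the term: temperature rescales the model’s next-token probability distribution before sampling, which is one concrete reason identical prompts can produce different outputs. A minimal sketch with made-up logits (no real model involved; the numbers are invented for illustration):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random.Random(0)):
    """Sample an index from softmax(logits / temperature)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

# Toy "next-token" logits: one likely token, two less likely ones.
logits = [2.0, 0.5, 0.1]
cold = [sample_with_temperature(logits, 0.1) for _ in range(1000)]
hot = [sample_with_temperature(logits, 5.0) for _ in range(1000)]
# Low temperature collapses onto the argmax token; high temperature
# spreads the samples across all tokens.
```

At temperature near zero the output is effectively deterministic; raising it flattens the distribution and lets lower-probability tokens through, which is part of why “ask the LLM and it’ll tell you” is not a reproducible experiment.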
And how do you think animal brains develop comprehension…?
Animal brains have pliable neuron networks and synapses to build and persist new relationships between things. LLMs do not. This is why they can’t have novel or spontaneous ideation. They don’t “learn” anything, no matter what Sam Altman is pitching you.
Now…if someone develops this ability, then they might be able to move more towards that…which is the point of this article and why the guy is leaving to start his own project doing this thing.
So you sort of sarcastically answered your own stupid question 🤌
This Nobel prize winner seems to disagree with you.
Animal brains have pliable neuron networks and synapses to build and persist new relationships between things. LLMs do not. This is why they can’t have novel or spontaneous ideation
Neural nets do indeed learn new relationships. Maybe you are thinking of the fact that most architectures make training a separate process from interacting; that is not the case for all architectures.
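A toy illustration of that last point, a model whose parameters change during interaction rather than only in a separate offline training phase (all names here are invented for the example; this is a sketch of online learning in general, not any particular architecture):

```python
# Toy online learner: the model updates its weights on every "interaction",
# so learning is not a separate offline phase.
class OnlineLinear:
    def __init__(self, lr=0.1):
        self.w = 0.0
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return self.w * x + self.b

    def interact(self, x, y):
        """Answer a query, then immediately learn from the feedback."""
        pred = self.predict(x)
        err = pred - y
        # One SGD step on squared error: the new relationship persists
        # in the parameters w and b.
        self.w -= self.lr * err * x
        self.b -= self.lr * err
        return pred

model = OnlineLinear()
for _ in range(200):
    for x in (0.0, 1.0, 2.0):
        model.interact(x, 2.0 * x + 1.0)  # target relationship: y = 2x + 1
# After interacting, the relationship y = 2x + 1 is encoded in the weights.
```

The point is only that “training vs. interacting” is an engineering choice, not a law of nature: nothing prevents a network from taking gradient steps while it is being used.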
From your own linked paper:
To design a neural long-term memory module, we need a model that can encode the abstraction of the past history into its parameters. An example of this can be LLMs that are shown to be memorizing their training data [98, 96, 61]. Therefore, a simple idea is to train a neural network and expect it to memorize its training data. Memorization, however, has almost always been known as an undesirable phenomena in neural networks as it limits the model generalization [7], causes privacy concerns [98], and so results in poor performance at test time. Moreover, the memorization of the training data might not be helpful at test time, in which the data might be out-of-distribution. We argue that, we need an online meta-model that learns how to memorize/forget the data at test time. In this setup, the model is learning a function that is capable of memorization, but it is not overfitting to the training data, resulting in a better generalization at test time.
Literally what I just said. This is specifically addressing the problem I mentioned, and it goes on with exacting specificity about why it does not exist in production tools for the general public (it’ll never make money, and it’s slow, honestly). In fact, there is a minor argument later on that developing a separate supporting system negates even referring to the outcome as an LLM, and the referenced papers linked at the bottom dig even deeper into the exact thing I mentioned about the limitations of said models used in this way.
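For the curious, the memorize-at-test-time idea being argued over can be caricatured as a linear associative memory that takes a gradient step on each incoming key/value pair, with a decay term playing the role of “forgetting.” This is a loose sketch of the general technique under those stated assumptions, not the linked paper’s actual architecture:

```python
import numpy as np

# Sketch of a tiny memory whose parameters are updated at test time
# (illustrative only): a linear map M nudged to associate each incoming
# key with its value, plus a decay ("forget") term.
class TestTimeMemory:
    def __init__(self, dim, lr=0.05, decay=0.01):
        self.M = np.zeros((dim, dim))
        self.lr = lr
        self.decay = decay

    def write(self, key, value):
        err = self.M @ key - value       # "surprise": prediction error
        grad = np.outer(err, key)        # gradient of 0.5 * ||M k - v||^2
        self.M = (1 - self.decay) * self.M - self.lr * grad

    def read(self, key):
        return self.M @ key

rng = np.random.default_rng(0)
dim = 16
k = rng.standard_normal(dim)
v = rng.standard_normal(dim)
mem = TestTimeMemory(dim)
for _ in range(50):
    mem.write(k, v)   # each write happens "at test time", no training phase
# The association k -> v now lives in the memory's parameters.
```

The write step is just online gradient descent on reconstruction error, so new associations persist in the parameters while the decay term slowly erases stale ones, which is the rough shape of what the quoted passage is proposing.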
I saw a short interview with him by France 24, and he mainly said he thinks the current direction of the research teams at Meta is wrong. He contrasted a top-down, push-to-deliver org with a long-leash one that leaves the researchers to experiment with things. He said Meta shifted from the latter to the former and he doesn’t agree with the approach.
Does it, though? Feels like we’re just rewriting the sales manual without thinking about what “learning from video” would actually entail.
Doesn’t make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.
There’s an old book from back in 2008 - Killing Sacred Cows: Overcoming the Financial Myths That Are Destroying Your Prosperity - that a lot of the modern Techbros took perhaps too closely to heart. It posited that chasing the next generation of technological advancement was more important than keeping your existing revenue streams functional, and that you really should kill the golden goose if it means you’ve got a shot at a new one in the near future.
What these Tech Companies are chasing is the Next Big Thing, even when they don’t really understand what that is. And they’re so blindly devoted to advancing the technological curve that they really will blow a trillion dollars (mostly of other people’s money) on whatever it is they think that might be.
The real problem is that these guys are, largely, uncreative and incurious and not particularly intelligent. So they leap on fads rather than pursuing meaningful Blue Sky Research. And that gives us this endless recycling of sci-fi tropes as a stand-in for material investment in productive next-generation infrastructure.
He’s been salty about this for years now, frustrated at companies throwing training and compute scaling at LLMs hoping for another emergent breakthrough like GPT-3. I believe he’s the one who really tried to push the Llama models toward multimodality.