So what’s the angle? The Internet is getting flooded by AI slop. AI needs fresh REAL content to train with. That’s the angle. You are there to provide fresh and original content to feed the AI.
Omfg this is so awful it’s likely either accidental truth or a damn good prediction
That’s a very good point and probably exactly the idea. Dorsey has always just been an actor who says one thing and thinks another.
Maybe the angle is just that people hate AI? Seriously, especially young people…
Are you a youngin? Cause no product under the control of a billionaire is free. If it’s free, you are the product. AI is hated, and they’re trying to make a product using that hate as the basis for their target audience.
Nothing is free. If they can sell ads to people who don’t like AI, they will. They’re rebooting it with about the same intent it was originally designed to have.
Again with this idea of the ever-worsening AI models. It just isn’t happening in reality.
It has been proven over and over that this is exactly what happens. I don’t know if it’s still the case, but ChatGPT was strictly limited to training data from before a certain cutoff date because the volume of AI-generated content after that date had negative effects on the output.
This is easy to see because an AI is simply regurgitating patterns learned from its training data. Any biases or flaws in that data become ingrained in the model, causing it to output more flawed data. That output is then used to train more AI, which further exacerbates the issues as they become even more deeply ingrained, and so on, until the outputs are bad enough that nobody wants to use them.
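The feedback loop described above (what the research literature calls “model collapse”) can be sketched with a toy simulation. This is purely illustrative, not any real training pipeline: here a “model” learns by resampling its predecessor’s output, and since resampling with replacement can never reintroduce a token the previous generation dropped, diversity can only ratchet downward.

```python
import random

# Toy "model collapse" sketch (illustrative only, not a real training
# pipeline): each generation "trains" by resampling from the previous
# generation's output. Resampling with replacement can never produce a
# token that isn't already present, so diversity only shrinks.
random.seed(42)

def generations(vocab_size=20, n_gens=200):
    data = list(range(vocab_size))      # gen 0: "real" data, fully diverse
    distinct = [len(set(data))]
    for _ in range(n_gens):
        # next generation's corpus is drawn entirely from the previous output
        data = [random.choice(data) for _ in range(len(data))]
        distinct.append(len(set(data)))
    return distinct

distinct = generations()
print("distinct tokens per generation:", distinct[:5], "...", distinct[-3:])
```

With enough generations the corpus coalesces onto a handful of tokens (and eventually a single one), which is the same dynamic the comment describes: each round of training on model output locks in and amplifies whatever the previous round happened to emphasize.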
Did you ever hear that story about the researchers who had 2 LLMs talk to each other and they eventually began speaking in a language that nobody else could understand? What really happened was that their conversation started to turn more and more into gibberish until they were just passing random letters and numbers back and forth. That’s exactly what happens when you train AI on the output of AI. The “AI created their own language” thing was just marketing.
JPEG artifacts but language
Not only is it actually happening, it’s well researched and mathematically proven.
The same reality where GPT-5’s launch a couple of months back was a massive failure with users and showed a lot of regression to less reliable output than GPT-4? Or perhaps the reality where, per reports this year, most corporations that have used AI found no benefit and have given up?
LLMs are good tools for some uses, but those uses are quite limited and niche. They are, however, a square peg being crammed into the round hole of ‘AGI’ by Altman etc. while they put their hands out for another $10 billion - or, more accurately, while they make trade-swap deals with MS or Nvidia or any of the other AI-ouroboros trade partners that hype up the bubble for their own benefit.
You may want to use AIs some time, for the sake of science. They are many times worse than they were a year ago.
People really latched onto that idea, which was shared with the media by people actively working on how to solve the problem.
Oh god, we have an AI incest flood ahead of us, don’t we?