Take away a calculator halfway through an exam, and suddenly, people are surly and unmotivated about simple long division.
That’s how every ‘AI makes you stupid!’ article works. Like, ‘doctors used AI to detect more cancer, but when we took it away, they were worse at eyeballing it.’ Sorry, can we go back to the part about detecting cancer better?
The doctors were worse, not just taking longer. So this would be more like people unlearning division.
While people using calculators may occasionally be unlearning division, this seems less problematic than doctors unlearning how to spot cancer on their own (in which case I’m guessing you won’t have AI training data anymore, since apparently AI can’t feed into AI without collapse) or software engineers unlearning how to write correct code.
You also wouldn’t want a mathematician unlearning how to do division.
The doctors were better, until someone yanked the tool away. That’s how every tool works! Even going from a handsaw to a table saw and back will make you lose some skill with the handsaw, because your brain focused on higher-level goals and finer motions. That’s not proof a table saw is bad for woodworking. The problem is “and back.”
> since apparently AI can’t feed into AI without collapse
Have you checked on that narrative? It’s been a while. Things stopped getting yellow. Improvements continued.
> Have you checked on that narrative?
The only workaround known so far seems to be making sure enough of the data is fresh: https://www.inria.fr/en/collapse-ia-generatives https://en.wikipedia.org/wiki/Model_collapse But read for yourself.
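To make that mitigation concrete, here’s a rough sketch of what “keep enough fresh data” amounts to (my own illustration, not code from either link, and fresh_fraction is a made-up knob, not a published threshold): every retraining round keeps a guaranteed share of newly collected real data instead of feeding the model purely on the previous model’s output.

    import random

    def build_training_pool(real_samples, synthetic_samples, fresh_fraction=0.3):
        # fresh_fraction is illustrative only. The point is that every
        # generation trains on some anchor of real data rather than
        # purely on the previous generation's output.
        pool_size = len(real_samples) + len(synthetic_samples)
        n_real = min(len(real_samples), max(1, int(pool_size * fresh_fraction)))
        n_synth = min(len(synthetic_samples), pool_size - n_real)
        pool = (random.sample(real_samples, n_real)
                + random.sample(synthetic_samples, n_synth))
        random.shuffle(pool)
        return pool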
That’s a lot of “could” and “will” from an article a year old, primarily about concerns from two years ago, while image models today keep getting smaller and better. They didn’t find a second internet’s worth of JPEGs. Better training on the same data, or even better labels on less data, beats a simple obsession with scale.
Yes, photocopying a photocopy will degrade, but diffusion is a denoising algorithm. Un-degrading an image is its central function. ‘Make it look less AI’ is how you get generative adversarial networks.
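To be concrete about “denoising is its central function”: here’s a toy version of one reverse step of a DDPM-style sampler. predict_noise stands in for the trained network (epsilon-theta in the paper); the update itself is the standard one, simplified for the example.

    import numpy as np

    def ddpm_denoise_step(x_t, t, predict_noise, betas):
        # Standard DDPM reverse update: the network guesses the noise
        # in x_t, and we partially remove it to get a cleaner x_{t-1}.
        alphas = 1.0 - betas
        alpha_bar_t = np.prod(alphas[: t + 1])
        eps = predict_noise(x_t, t)  # stand-in for the trained noise predictor
        mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alphas[t])
        if t == 0:
            return mean  # final step: the denoised image, no fresh noise added
        return mean + np.sqrt(betas[t]) * np.random.randn(*x_t.shape)

Run that from t = T down to 0 starting from pure Gaussian noise and you’ve un-degraded your way to an image; that loop is the whole sampler.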
Anyway, the grim truth is that the central concern is mistaken. Training data for cancer screening does not require that the patient lived.
The article links a study. What’s your study that collapse isn’t a concern?
For what it’s worth, my worry was never focused on cancer; the doctors were just one measured example of what is likely a universal unlearning effect.