- Researchers demonstrate that misleading text in the real-world environment can hijack the decision-making of embodied AI systems without hacking their software.
- Self-driving cars, autonomous robots and drones, and other AI systems that use cameras may be vulnerable to these attacks.
- The study presents the first academic exploration of environmental indirect prompt injection attacks against embodied AI systems.
Photos



Misleading text in the physical world can hijack AI-enabled robots, cybersecurity study shows
Eh, yes? Hasn’t that been like that since day 1?
This whole article reminds me so much of the rogue AI sign found in Portal 2

the first academic exploration
I have read about it, years ago. And there are jokes about it that are many years old.
This one against speed cams, for example:

Bobby Tables grew up and bought a car?!
An article about the dangers of AI starts off with an AI generated comic about the dangers of AI.
deleted by creator
And it can’t even get the colour of the drone or number of propellers it has consistent between two panels
Irony must not be their strong suite.
Or is it??

That’s how I read that as well.
that use cameras
Make it embossed letters for all other types of sensors.
One time I saw a 30mph sign spray painted to say 88mph speed limit. Good thing it was before self driving cars of that would have been crazy.
As long as it didn’t say “minimum speed”

Could it really be as simpe as that? yes, according to the article. AI sucks so hard, who let it out of a laboratory?
That is actually hilarious 😂






