- Researchers demonstrate that misleading text in the real-world environment can hijack the decision-making of embodied AI systems without hacking their software.
- Self-driving cars, autonomous robots and drones, and other AI systems that use cameras may be vulnerable to these attacks.
- The study presents the first academic exploration of environmental indirect prompt injection attacks against embodied AI systems.
Photos






Eh, yes? Hasn’t that been like that since day 1?