• Researchers demonstrate that misleading text in the real-world environment can hijack the decision-making of embodied AI systems without hacking their software.
  • Self-driving cars, autonomous robots and drones, and other AI systems that use cameras may be vulnerable to these attacks.
  • The study presents the first academic exploration of environmental indirect prompt injection attacks against embodied AI systems.
Photos

  • Phoenixz@lemmy.ca
    link
    fedilink
    English
    arrow-up
    1
    ·
    3 hours ago

    Misleading text in the physical world can hijack AI-enabled robots, cybersecurity study shows

    Eh, yes? Hasn’t that been like that since day 1?