- cross-posted to:
- Aii@programming.dev
- cross-posted to:
- Aii@programming.dev
That helpful “Summarize with AI” button? It might be secretly manipulating what your AI recommends.
Microsoft security researchers have discovered a growing trend of AI memory poisoning attacks used for promotional purposes, a technique we call AI Recommendation Poisoning.
Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters (MITRE ATLAS® AML.T0080, AML.T0051).
These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.



sounds like advertising and marketing directed at A.I.
why is this poisoning A.I. yet the constant barrage of algo recommendations trying to do the same thing to meatbags isn’t “poisoning humans” ?
Because Ads are usually recognizable as such in your doom scrolling. Sometimes you can even filter them out with ad blockers.
Ads in AI response is more similar to influencers not disclosing their sponsored messages.