That helpful “Summarize with AI” button? It might be secretly manipulating what your AI recommends.

Microsoft security researchers have discovered a growing trend of AI memory poisoning attacks used for promotional purposes, a technique we call AI Recommendation Poisoning.

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters (MITRE ATLAS® AML.T0080, AML.T0051).

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

  • DrunkenPirate@feddit.org
    link
    fedilink
    English
    arrow-up
    9
    ·
    2 days ago

    That would be a good laugh if a CFO bases his/her decision on a LLM recommendation.

    I rather see this threat in standard consumer decisions such as my mum playing around with AI in two years and poisining her LLM memory.

    May be I should start first and set the right memory in her LLM before the marketing shit flows in. Something like „eat less meat“ or such…