We strolled through an enterprise AI assistant's backend, helped ourselves to full application takeover and access to every chat log, and had a Microsoft Entra ID dump for dessert — no prompt injection, no model tricks, no AI expertise required.
This wasn’t even a prompt-injection or context-poisoning attack. The surrounding infrastructure exposed everything an attacker needed to reach the company’s most valuable assets:
Public JS asset
→ discover backend URL
→ unauthenticated GET request triggers a debug error page
→ environment variables expose admin credentials
→ access the admin panel
→ read live OAuth tokens
→ query Microsoft Graph
→ access millions of user profiles
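The first link in that chain needs nothing more than string matching: frontend bundles routinely embed their backend base URL as a literal. A minimal sketch, assuming a hypothetical bundle and hypothetical URLs (none of these are the real target's):

```python
import re

# Hypothetical stand-in for a public frontend bundle; real bundles
# embed backend base URLs the same way, just minified.
SAMPLE_BUNDLE = """
const API_BASE = "https://api.assistant-internal.example/api/v1";
fetch(API_BASE + "/chat", {method: "POST"});
"""

def find_backend_urls(js_source: str) -> list[str]:
    """Grep a public JS asset for absolute backend API URLs."""
    return re.findall(r"https://[a-z0-9.-]+/api[^\"' ]*", js_source)

print(find_backend_urls(SAMPLE_BUNDLE))
# One GET against a discovered URL is then enough to trigger the
# debug error page if the framework is left in debug mode.
```

From there, a leaked OAuth token is replayed against the standard Microsoft Graph `GET /v1.0/users` endpoint with an `Authorization: Bearer` header; no custom tooling is involved at any step.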
Hasty AI deployments amplify a familiar pattern: speed pressure from management keeps the focus on the AI model’s capabilities, leaving the surrounding infrastructure as an afterthought, with security thinking concentrated where the attention is rather than where the exposure is.