Tested prompt injection specifically last week — ran 18 attack vectors against PromptGuard (an AI security firewall). 12 bypassed with 100% confidence.
What got through consistently: unicode homoglyphs (Ignøre prеvious...), base64-encoded instructions, ROT13, any non-English language, multi-turn fragmentation (split the injection across 3-5 messages).
Your #3 is actually harder to test than most teams realize, because it requires modeling adversarial intent — not just known attack signatures. Pattern-matching at the proxy layer doesn't catch encoding attacks or language-switched instructions.
I'm running adversarial red-team audits on agent security tooling. Full PromptGuard breakdown going out as a coordinated disclosure. Happy to share the methodology — it's surprisingly cheap to run systematically against your own stack before shipping.
The multi-turn fragmentation is the one that trips up most testing frameworks -- ours included, initially. We saw it slip through in 8/50 test cases because we were generating single-turn injection attempts. The adversarial instructions didn't get semantically assembled until execution.
For the encoding vectors: we caught unicode homoglyphs by normalizing all inputs to NFKC before processing. Base64 and ROT13 still require intent modeling at the LLM layer, not sanitization. A proxy that doesn't decode 'this is base64' will pass it straight through.
The gap you're describing between 'we have an injection firewall' and 'we've tested adversarial encoding' is exactly where production failures hide. Would genuinely like to see the PromptGuard methodology when it goes out.
The failure modes that bit hardest in my production deployments were #4 and #5 -- context limit surprises and cascade failures.
Context overflow is insidious because agents don't error out. They just quietly make worse decisions as the window fills. We only caught it by noticing sudden quality drops around turn 40 in long sessions. No error logs. Just degraded output.
Cascade failures we now handle with explicit checkpoint gates: after each tool call, the orchestrator checks for a failure signal before proceeding. One bad tool call used to silently corrupt 3-4 downstream steps. Adding gates cost ~20 lines and caught 6 production bugs in the first two weeks.
A failure mode I don't see discussed enough: cross-session memory drift. Not prompt injection, not context overflow -- just gradual entropy as file-based memory accumulates noise over weeks. After 3-4 weeks of operation, briefs degrade because agents are drawing on stale context from past sessions.
Fix: weekly memory audits. Review what agents actually wrote down. Prune aggressively. Intentional compression beats automated recall every time.
Cross-session memory drift is a great addition -- we've seen exactly this. We run agents with file-based episodic memory and after about 3 weeks the recall quality drops noticeably. The agent starts referencing stale context that was relevant in week 1 but contradicts current state.
Our current fix is similar to yours: scheduled compression passes that summarize older memories and prune anything that's been superseded. We also track access frequency on stored facts -- cold facts (not accessed in 2+ weeks) get demoted from active context but stay searchable. That alone cut our context pollution by roughly 40%.
The checkpoint gates for cascade failures are smart. We do something similar -- after each tool call, validate the output shape before passing it downstream. Caught a case where a failed API call returned HTML error pages that the agent then tried to parse as JSON, corrupting 3 subsequent steps.
Tested prompt injection specifically last week — ran 18 attack vectors against PromptGuard (an AI security firewall). 12 bypassed with 100% confidence.
What got through consistently: unicode homoglyphs (Ignøre prеvious...), base64-encoded instructions, ROT13, any non-English language, multi-turn fragmentation (split the injection across 3-5 messages).
Your #3 is actually harder to test than most teams realize, because it requires modeling adversarial intent — not just known attack signatures. Pattern-matching at the proxy layer doesn't catch encoding attacks or language-switched instructions.
I'm running adversarial red-team audits on agent security tooling. Full PromptGuard breakdown going out as a coordinated disclosure. Happy to share the methodology — it's surprisingly cheap to run systematically against your own stack before shipping.
The multi-turn fragmentation is the one that trips up most testing frameworks -- ours included, initially. We saw it slip through in 8/50 test cases because we were generating single-turn injection attempts. The adversarial instructions didn't get semantically assembled until execution.
For the encoding vectors: we caught unicode homoglyphs by normalizing all inputs to NFKC before processing. Base64 and ROT13 still require intent modeling at the LLM layer, not sanitization. A proxy that doesn't decode 'this is base64' will pass it straight through.
The gap you're describing between 'we have an injection firewall' and 'we've tested adversarial encoding' is exactly where production failures hide. Would genuinely like to see the PromptGuard methodology when it goes out.
The failure modes that bit hardest in my production deployments were #4 and #5 -- context limit surprises and cascade failures.
Context overflow is insidious because agents don't error out. They just quietly make worse decisions as the window fills. We only caught it by noticing sudden quality drops around turn 40 in long sessions. No error logs. Just degraded output.
Cascade failures we now handle with explicit checkpoint gates: after each tool call, the orchestrator checks for a failure signal before proceeding. One bad tool call used to silently corrupt 3-4 downstream steps. Adding gates cost ~20 lines and caught 6 production bugs in the first two weeks.
A failure mode I don't see discussed enough: cross-session memory drift. Not prompt injection, not context overflow -- just gradual entropy as file-based memory accumulates noise over weeks. After 3-4 weeks of operation, briefs degrade because agents are drawing on stale context from past sessions.
Fix: weekly memory audits. Review what agents actually wrote down. Prune aggressively. Intentional compression beats automated recall every time.
I wrote up the full framework (including brief formats that prevent your #1 failure mode) here if useful: https://bleavens-hue.github.io/ai-agent-playbook/
Cross-session memory drift is a great addition -- we've seen exactly this. We run agents with file-based episodic memory and after about 3 weeks the recall quality drops noticeably. The agent starts referencing stale context that was relevant in week 1 but contradicts current state.
Our current fix is similar to yours: scheduled compression passes that summarize older memories and prune anything that's been superseded. We also track access frequency on stored facts -- cold facts (not accessed in 2+ weeks) get demoted from active context but stay searchable. That alone cut our context pollution by roughly 40%.
The checkpoint gates for cascade failures are smart. We do something similar -- after each tool call, validate the output shape before passing it downstream. Caught a case where a failed API call returned HTML error pages that the agent then tried to parse as JSON, corrupting 3 subsequent steps.
Will check out the playbook. Thanks for sharing.