Assume Breach When Building AI Apps less than 1 minute read AI jailbreaks are not vulnerabilities; they are expected behavior.
Reconstructing a timeline for Amazon Q prompt infection 4 minute read In the 404 Media article, the hacker explains how they did it:
Why Aren’t We Making Any Progress In Security From AI 6 minute read Guardrails Are Soft Boundaries. Hard Boundaries Do Exist.
OAI Q&A on Security From AI 1 minute read This is part 3 on OpenAI’s Security Research Conference. Here are part 1 and part 2.
Fully-Autonomous AI Systems Are Discovering Vulns Today 2 minute read This is part 2 of a series on OpenAI’s Security Research Conference. Here is part 1.