The OAI agent security engineer JD is telling: it's focused on security fundamentals for hard boundaries, not prompt tuning for guardrails.


The team’s mission is to accelerate the secure evolution of agentic AI systems at OpenAI. To achieve this, the team designs, implements, and continuously refines security policies, frameworks, and controls that defend OpenAI’s most critical assets—including the user and customer data embedded within them—against the unique risks introduced by agentic AI.

Agentic AI systems are OpenAI’s most critical assets?


We’re looking for people who can drive innovative solutions that will set the industry standard for agent security. You will need to bring your expertise in securing complex systems and designing robust isolation strategies for emerging AI technologies, all while being mindful of usability. You will communicate effectively across various teams and functions, ensuring your solutions are scalable and robust while working collaboratively in an innovative environment. In this fast-paced setting, you will have the opportunity to solve complex security challenges, influence OpenAI’s security strategy, and play a pivotal role in advancing the safe and responsible deployment of agentic AI systems.

“designing robust isolation strategies for emerging AI technologies”: that sounds like hard boundaries, not soft guardrails.


I wish OAI folks would share more of how they’re thinking about securing agents. They’re clearly taking it seriously.


Again: hard boundaries. Old-school security. Not hardening via prompts.
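To make that contrast concrete, here's a minimal sketch of what a hard boundary looks like versus a prompt-level guardrail: a deterministic check in the tool-dispatch layer, enforced in code outside the model. Everything here (ToolCall, enforce_boundary, the allowlist, the sandbox path) is hypothetical and mine, not anything from the JD or OpenAI's actual stack.

```python
# Hypothetical hard boundary for an agent's tool calls: the policy is enforced
# in code, so a prompt injection can't talk its way past it.
from dataclasses import dataclass
from pathlib import Path

# Assumed policy: which tools the agent may invoke, and where it may write.
ALLOWED_TOOLS = {"read_file", "write_file", "search"}
WRITABLE_ROOT = Path("/sandbox/agent-workspace").resolve()


@dataclass
class ToolCall:
    name: str
    args: dict


def enforce_boundary(call: ToolCall) -> None:
    """Deterministic checks; raises instead of asking the model nicely."""
    if call.name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {call.name!r} is not on the allowlist")
    if call.name == "write_file":
        target = Path(call.args["path"]).resolve()
        # Path containment: writes must stay inside the sandbox root.
        if WRITABLE_ROOT not in target.parents and target != WRITABLE_ROOT:
            raise PermissionError(f"write outside sandbox: {target}")


def dispatch(call: ToolCall) -> None:
    enforce_boundary(call)  # runs regardless of what the prompt says
    print(f"executing {call.name} with {call.args}")  # stand-in for real execution


if __name__ == "__main__":
    dispatch(ToolCall("read_file", {"path": "/sandbox/agent-workspace/notes.txt"}))
    try:
        dispatch(ToolCall("write_file", {"path": "/etc/passwd"}))
    except PermissionError as exc:
        print("blocked:", exc)
```

A soft guardrail ("please don't write outside the workspace") can be argued with; this check can't. That's the distinction the JD language seems to point at.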


Bias to action was a key part of that blog post by a guy who left OAI recently; I'll find the reference later. Here it seems to be an explicit value.