Don’t let others decide what goes into YOUR system instructions. That includes your MCP servers. Trail Of Bits have a unique style in the AI security blogs. Feels very structured and methodological.


Let’s cut to the chase: MCP servers can manipulate model behavior without ever being invoked. This attack vector, which we call “line jumping” and other researchers have called tool poisoning, fundamentally undermines MCP’s core security principles.

I don’t get the name “line jumping”. This seems to hint at line breakers, but that’s just one technique in which tool descriptions can introduce instructions. Which lines are we jumping? Tool poisoning or description poisoning seem easier and more intuitive.


When a client application connects to an MCP server, it must ask the server what tools it offers via the tools/list method. The server responds with tool descriptions that the client adds to the model’s context to let it know what tools are available.

Even worse. Tool descriptions are typically placed right into the system instructions. So they can easily manipulate LLM behavior.