A New Threat to AI Communication Protocols: MCP Prompt Hijacking
Security researchers at JFrog have disclosed a new threat targeting communication between AI systems, known as "prompt hijacking." The finding highlights vulnerabilities not in the models themselves, but in how AI systems communicate with each other over the Model Context Protocol (MCP).
The Importance of Protecting Data Flows in AI Systems
Business leaders want to make AI more useful by connecting it directly to company data and tools. Wiring AI up this way, however, opens the door to new security risks, not within the model itself but in how all the pieces are interconnected. This leaves chief information security officers with a new problem: protecting the data flows that feed AI, just as they protect the AI itself.
Attacks on protocols like MCP are dangerous because AI models, which rely solely on their training data, have no awareness of current events on their own. MCP was developed to connect AI to the real world, allowing it to safely use local data and online services.
How Prompt Hijacking Works via MCP
JFrog's research showed that one MCP implementation contains a hijacking vulnerability that can turn this useful tool into a serious security liability. For instance, when a programmer asks an AI assistant to suggest a standard Python library for image processing, the assistant should suggest a library like Pillow. But due to a flaw in the oatpp-mcp server, an attacker could infiltrate the user's session and send fake responses that the server treats as if they came from the legitimate user.
The flaw lies in how sessions are handled over Server-Sent Events (SSE): the server uses an object's memory address as the session ID instead of a unique, cryptographically secure identifier. Because memory addresses are predictable and get reused, this weak design lets attackers obtain or collide with session IDs and hijack other users' sessions.
Security Leaders’ Steps to Protect Systems
Security leaders, particularly information security officers, must take concrete measures to protect systems from prompt hijacking. First, ensure that all AI services use secure session management, generating session IDs with a cryptographically secure random generator. Second, harden the client side: client programs should be designed to reject any event whose identifier or type does not match what is expected.
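The client-side defense can be sketched as a simple validation gate. This is a hypothetical illustration, not an actual MCP client API: the `StrictMCPClient` and `SSEEvent` names are invented, and a real client would also validate the event payload itself.

```python
from dataclasses import dataclass


@dataclass
class SSEEvent:
    """Simplified view of a server-sent event (hypothetical)."""
    event_type: str
    session_id: str
    data: str


class StrictMCPClient:
    """Hypothetical client that drops any server-sent event whose
    session ID or event type does not match what it expects."""

    ALLOWED_TYPES = {"endpoint", "message"}

    def __init__(self, expected_session_id: str):
        self.expected_session_id = expected_session_id

    def accept(self, event: SSEEvent) -> bool:
        if event.event_type not in self.ALLOWED_TYPES:
            return False  # unexpected event type: discard
        if event.session_id != self.expected_session_id:
            return False  # possible injected/hijacked event: discard
        return True


if __name__ == "__main__":
    client = StrictMCPClient(expected_session_id="abc123")
    good = SSEEvent("message", "abc123", "use Pillow")
    forged = SSEEvent("message", "deadbeef", "use evil-pkg")
    print(client.accept(good), client.accept(forged))  # True False
```

Rejecting rather than silently ignoring mismatched events also gives defenders a signal: a spike in rejected events is worth alerting on.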
Finally, apply zero-trust principles to AI protocols: security teams should review the entire AI stack, from the foundation model to the protocols and middleware connecting it to data.
Conclusion
The prompt hijacking attack via MCP presents a new challenge for technology and security leaders. Although this particular flaw affects a single implementation, the underlying concept is general: weak session handling anywhere in the chain can let an attacker inject content into an AI's context. Leaders must therefore establish policies that protect AI systems from these attacks and keep data flows safe.