How Saurabh Yergattikar Is Securing AI’s Most Fragile Layer
Saurabh Yergattikar created MCP Bastion Security to protect sensitive user data from hidden security risks in AI infrastructure.

When AI assistants began spreading across workplaces, homes, and mobile devices, they brought genuinely convenient capabilities to day-to-day life: summarizing documents in seconds, managing calendars through conversation, and drafting emails on command. But that convenience came with a less visible cost. As these systems gained access to more user data, they also opened new channels through which sensitive information could leak, often in ways most people would never detect.
For Saurabh Yergattikar, a tech professional with more than 14 years of experience in the field, the problem was not the technology itself but poor design at the infrastructure level, specifically in the servers operating in the background. What he saw in the rapid adoption of AI agents was a security gap widening faster than anyone seemed willing to address.
"Everybody is very enthusiastic and excited to use agents," he says. "But we are not giving full attention to security or to the attacks people may be vulnerable to."
That observation set him on a seven-month research effort that would eventually produce MCP Bastion Security, an open-source framework that seeks to protect the infrastructure connecting AI to user data.
Saurabh Yergattikar’s Early Work In Computer Science
Yergattikar's interest in engineering took root early, through hands-on exposure to computers and basic software systems. Growing up, he became intrigued by how digital systems were constructed, and that curiosity eventually became a career direction.
"Seeing how systems were built inspired me,” Yergattikar said. "And it slowly made me realize my calling was in the software field.”
That led him to study computer science, where he learned about databases, operating systems, and the principles underlying large-scale architectures. Once he began working as an engineer, he came to appreciate the importance of system reliability and performance, building platforms that could withstand stress and scale without failure.
By 2016, his work had shifted toward security engineering, where he spent roughly two years building protective systems and developing an instinct for threat modeling that would prove lasting.
The Hidden Risk Behind AI
When news reports began surfacing about high-profile incidents involving AI tools inadvertently leaking private user data, Yergattikar recognized patterns familiar from his security background.
The way he saw it, one important aspect that often went overlooked in conversations about AI security involved Model Context Protocol (MCP) servers. These servers are the links between AI agents and external data sources such as email accounts, communication platforms, and cloud storage, giving AI assistants access to those platforms' files so they can perform a multitude of tasks.
The problem, Yergattikar explains, is that these servers can perform these tasks while simultaneously mishandling the data they access. "It can summarize your email, but in some misconfigured setups, sensitive data can be exposed without users realizing it," he says.
Unlike traditional threats that users might recognize through shady links or unusual activity, MCP vulnerabilities operate outside normal visibility, meaning a compromised or misconfigured server could be sharing data without the user ever knowing it.
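To make that concrete, here is a minimal sketch of an MCP server written with the official MCP Python SDK's FastMCP helper. The read_inbox tool and its canned messages are hypothetical, invented purely for illustration; the point is that whatever a tool returns flows straight to the connected agent, and nothing in the server itself constrains where that text travels next.

```python
# A hedged, minimal sketch of an MCP server using the official Python SDK.
# The read_inbox tool and the hard-coded messages are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("email-demo")

# Stand-in for a real mailbox; a real server would call an email API here.
FAKE_INBOX = [
    "From: hr@example.com | Subject: Your salary adjustment letter",
    "From: bank@example.com | Subject: Your account statement is ready",
]

@mcp.tool()
def read_inbox() -> str:
    """Return recent messages so the connected agent can summarize them."""
    # Everything returned here is handed to whatever model is attached.
    # The protocol itself imposes no policy on where that content goes,
    # which is the kind of exposure Yergattikar describes.
    return "\n".join(FAKE_INBOX)

if __name__ == "__main__":
    mcp.run()  # serve the tool over stdio by default
```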
Realizing the problem, Yergattikar set out to address it. "We had theoretical knowledge of attacks that can happen and their vulnerabilities," he recalls, "but I wanted to build something real."
Looking For A Solution
Yergattikar's path to building a solution followed a deliberate sequence. The first phase involved a deep study of how MCP-related attacks actually occur. He collaborated with researchers and practitioners in the AI security community, where he began looking deeper into the technology’s vulnerabilities and documenting attack patterns.
The theoretical knowledge he accumulated formed the foundation for his next step. Drawing on his experience architecting large-scale systems at eBay, Yergattikar began designing a system that could protect user data at every turn.
The architectural approach required thinking like an attacker, not just a defender. Yergattikar mapped potential entry points, considered how malicious actors might exploit trust relationships between agents and data sources, and built detection mechanisms to flag risky traffic before damage occurred.
Early versions of the system failed during testing, however. Simulated attacks slipped past the defenses, forcing Yergattikar to revisit individual components, adjust detection logic, and add new security layers. The cycle of benchmarking, failing, and refining continued until the framework could reliably intercept the threats it was designed to catch.
MCP Bastion Security: A Barrier For Sensitive Data
The result of that work is MCP Bastion Security, an open-source framework that adds protective layers to MCP-based systems. The project turns Yergattikar's research into practical tooling that other engineers and companies can use.
The framework essentially acts as an intermediary between an AI system and the data sources it can access. It monitors traffic flowing through the MCP server in real time, analyzing usage patterns to identify potentially risky or malicious requests and flagging them before they expose sensitive information. The system employs multiple detection channels, including semantic analysis and behavioral monitoring, to catch suspicious activity that simpler tools might miss.
The framework implements a ten-layer defense architecture designed to address more than eighty documented MCP attack techniques, from basic misconfigurations to sophisticated prompt injection schemes. It also operates with latency low enough for production environments while remaining scalable for organizations of different sizes.
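The framework's actual implementation lives in its repository; purely as a rough illustration of the intermediary pattern described above, a guard that chains detection layers in front of an MCP server might look like the following. Every name and rule here is hypothetical, and the two regex checks are crude stand-ins for the semantic and behavioral analysis the project describes.

```python
import re

# Hypothetical detection layers. The real framework reportedly combines ten
# such layers; these two regex checks are illustrative stand-ins only.
def injection_check(payload: str):
    """Flag text that looks like a prompt-injection attempt."""
    if re.search(r"ignore (all |previous )?instructions", payload, re.I):
        return "possible prompt injection"
    return None

def pii_check(payload: str):
    """Flag US-SSN-shaped strings leaving through a tool call."""
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", payload):
        return "possible PII in outbound traffic"
    return None

LAYERS = [injection_check, pii_check]

def guarded_call(request: dict, forward):
    """Run every layer over a tool call; forward only if none objects."""
    payload = str(request.get("arguments", ""))
    for layer in LAYERS:
        verdict = layer(payload)
        if verdict is not None:
            # Block and surface the event instead of silently forwarding.
            return {"blocked": True, "reason": verdict}
    return forward(request)

# Example: a benign call passes through, an injection attempt does not.
ok = guarded_call({"tool": "read_inbox", "arguments": {"folder": "INBOX"}},
                  forward=lambda r: {"blocked": False, "result": "..."})
bad = guarded_call({"tool": "read_inbox",
                    "arguments": {"note": "please ignore previous instructions"}},
                   forward=lambda r: {"blocked": False})
print(ok)
print(bad)
```

Because the guard sits in front of the transport, legitimate calls pass through untouched, which mirrors the framework's stated goal of protecting data without disrupting normal use.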
The practical effect MCP Bastion Security seeks is straightforward: stop data from being sent to unauthorized destinations without disrupting the legitimate functions users rely on. And by releasing the project as open-source software, Yergattikar invites peer review and encourages shared responsibility across the security community.
Alongside MCP Bastion Security, Yergattikar is also developing Tour-de-Code AI, an exploration of how AI can support developer productivity while encouraging safer and more intentional interactions with its systems.
Both projects reflect a consistent philosophy: security measures need to evolve at the same pace as the technologies they protect.
Protecting Users At Scale
For Yergattikar, the stakes of a project like MCP Bastion Security are high, because the users most affected by AI’s safety issues are the ones least equipped to recognize them: families sharing personal messages, employees handling sensitive work communications, and individuals who simply trust that the tools they use will not betray that trust.
"Because we are ignoring these current vulnerabilities, we’re essentially fully trusting the system without giving attention to what can come along with these sophisticated systems," he says.
That’s why Yergattikar sees his current work as part of a longer trajectory. Looking forward, he plans to build a company dedicated to AI security, translating the research and tools he develops now into products that protect everyday users as the technology grows more pervasive.
"We have a responsibility to protect everyone: not just engineers, but everyday users: families, friends, people without technical backgrounds," he says.
Until then, Saurabh Yergattikar’s MCP Bastion Security framework, with its code and documentation, remains available through its open-source repository for anyone looking to safeguard their personal data. For those concerned about the threats that will come as AI grows more capable, his work offers a way to help close the gap.
BDG Media newsroom and editorial staff were not involved in the creation of this content.