This project focuses on vulnerability detection in Model Context Protocol (MCP) servers for large language models (LLMs), aiming to identify security weaknesses that could expose sensitive data, enable prompt injection, or allow unauthorized access. By systematically analyzing server implementations, communication protocols, and integration points with LLMs, the project develops methods to detect and mitigate risks before exploitation. The ultimate goal is to enhance the reliability and security of MCP servers, ensuring safer deployment of LLM-powered applications in real-world environments.

School

Computer Science and Engineering

Research Area

Computer science | AI | Security

Suitable for recognition of Work Integrated Learning (industrial training)? 

No

Any OS

Vulnerabilities and maybe some bounties.