Large Language Models (LLMs) have exhibited immense potential in various computational tasks, including code generation, data analysis, and program repair. The development of LLM-integrated applications has surged in recent years, mirroring the rise of traditional web applications. While frameworks have emerged to streamline LLM-integrated application development, certain APIs within these frameworks, particularly those enabling code execution, are susceptible to security threats such as remote code execution and SQL injection. Left unaddressed, these vulnerabilities can have severe consequences, including the compromise of sensitive assets such as OpenAI API keys.
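To make the threat concrete, the sketch below shows the vulnerable pattern in minimal form: an application that executes model-generated code directly. The function and variable names are illustrative, not taken from any real framework, and the "model output" is simulated by a hard-coded string.

```python
import os

def run_llm_generated_code(llm_output: str) -> dict:
    """DANGEROUS: executes whatever code the model produced, with no sandboxing."""
    scope: dict = {}
    exec(llm_output, scope)  # remote code execution if the output was influenced by an attacker
    return scope

# A prompt-injection attack can steer the model into emitting code that
# reads secrets from the process environment, e.g. an OpenAI API key:
malicious_output = "import os; leaked = os.environ.get('OPENAI_API_KEY', '<none>')"
result = run_llm_generated_code(malicious_output)
print("leaked" in result)  # the injected code ran and captured the value
```

Any framework API that ultimately routes untrusted model output into `exec`, `eval`, or a shell follows this same shape, which is what makes such APIs attractive targets.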

This project aims to construct a comprehensive system for detecting vulnerabilities in LLM-integrated frameworks. Students will develop cutting-edge vulnerability detection tools that leverage diverse program analysis techniques. These tools will identify vulnerabilities within LLM-integrated frameworks, providing a proactive approach to security. Detected vulnerabilities can be eligible for bug bounties on dedicated platforms; this incentive-driven approach encourages students to hone their skills while addressing real-world security challenges.
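As a taste of the static-analysis side of such a detection tool, the sketch below scans Python source for calls to known code-execution sinks using the standard-library `ast` module. The sink list and sample snippet are assumptions for illustration; a real tool would track data flow from untrusted model output to these sinks rather than flag every call.

```python
import ast

# Illustrative sink list; a production tool would cover shell and SQL sinks too.
DANGEROUS_CALLS = {"exec", "eval"}

def find_dangerous_calls(source: str) -> list[tuple[int, str]]:
    """Return (line_number, callee_name) for each call to a known sink."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

# Hypothetical framework snippet that executes model output verbatim:
sample = "def run(code):\n    exec(code)\n"
print(find_dangerous_calls(sample))  # -> [(2, 'exec')]
```

This syntactic pass is deliberately simple; the project would combine it with inter-procedural and dynamic analyses to confirm that attacker-controlled data actually reaches the sink.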


Computer Science and Engineering

Research Area

AI | Security | Software quality assurance

The research team of software quality assurance (under the software engineering group) is a pioneer in developing cutting-edge program analysis techniques for building high-quality software. The supervisors have rich experience in both static and dynamic analysis. They have published in all top-tier software engineering and security conferences, and the techniques they built have been used to find 100+ Common Vulnerabilities and Exposures (CVEs).

  • a tool to detect vulnerabilities in LLM-integrated frameworks/applications
  • potential bug bounties