The Bounty Hunter project leverages Large Language Model (LLM)-based tools to systematically discover and exploit vulnerabilities in popular machine learning frameworks such as TensorFlow, PyTorch, and scikit-learn. By analyzing source code and documentation, the automated system identifies security flaws and generates detailed, reproducible reports optimized for bug bounty submissions. This approach accelerates vulnerability discovery, makes security auditing of complex ML libraries efficient and engaging, and bridges the gap between security researchers, ML practitioners, and the bug bounty community.
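To make the pipeline concrete, the following is a minimal sketch of what such an automated audit loop might look like. All names in it (query_llm, scan_repository, VULN_PROMPT, Finding) are hypothetical illustrations rather than the project's actual code, and a real system would replace the stubbed query_llm with a call to an actual LLM API:

```python
# Illustrative sketch only: query_llm, scan_repository, VULN_PROMPT, and
# Finding are hypothetical names, not the project's real implementation.
from dataclasses import dataclass
from pathlib import Path

VULN_PROMPT = (
    "You are a security auditor. Review the following code for unsafe "
    "deserialization, missing input validation, and memory-safety issues. "
    "For each finding, name the function, describe the flaw, and sketch a "
    "reproduction. Reply 'no issues' if the code looks safe.\n\n{code}"
)

@dataclass
class Finding:
    path: str      # file where the suspected flaw lives
    summary: str   # LLM-produced description of the flaw

def query_llm(prompt: str) -> str:
    """Stub for an LLM call; a real system would use an actual API client."""
    return "no issues"  # placeholder response so the sketch runs end to end

def scan_repository(root: Path) -> list[Finding]:
    """Ask the model to triage each Python source file in a checkout."""
    findings = []
    for src in root.rglob("*.py"):
        code = src.read_text(errors="ignore")[:8000]  # stay within context limits
        answer = query_llm(VULN_PROMPT.format(code=code))
        if "no issues" not in answer.lower():
            findings.append(Finding(path=str(src), summary=answer))
    return findings

def write_report(findings: list[Finding], out: Path) -> None:
    """Render findings as a report formatted for a bug bounty submission."""
    lines = ["Automated audit findings", ""]
    for f in findings:
        lines += [f.path, f.summary, ""]
    out.write_text("\n".join(lines))

if __name__ == "__main__":
    write_report(scan_repository(Path(".")), Path("audit_report.txt"))
```

In practice, the triage prompt, file filtering, and report format would all be tuned to the target framework; the sketch only shows the overall scan-then-report structure described above.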

School

Computer Science and Engineering

Research Area

AI | Software vulnerability

Key features of the research environment include:


  • The opportunity to work in the Software and AI Security research group under the supervision of Yuekang Li and other experts in security, vulnerability detection, and large language models.
  • Access to cutting-edge facilities and resources provided by the School of Computer Science and Engineering, fostering an innovative environment for cybersecurity and AI research.
  • A collaborative atmosphere where students actively engage with industry partners and bug bounty communities, providing practical experience in real-world security challenges and solutions.
  • Interaction with fellow research students and security professionals to enhance skills in vulnerability analysis, software auditing, and ethical hacking practices.
  • An automated workflow for vulnerability detection and reporting.
  • The opportunity to earn bug bounty rewards for successfully reported findings.
Lecturer Yuekang Li