The transition from passive Large Language Models (LLMs) to Agentic AI systems capable of autonomous reasoning, tool use, and environmental interaction marks a significant shift in the artificial intelligence landscape. While these agents offer transformative potential for productivity and complex problem solving, they introduce a novel attack surface and unique safety challenges that traditional AI frameworks are ill-equipped to handle. This project will (1) explore state-of-the-art methods for building trustworthy agentic AI, and (2) implement a prototype domain-specific language that only permits the implementation of trustworthy agentic workflows over untrustworthy LLMs.
Computer Science and Engineering
Formal methods, security, and machine learning
No
- Research environment
- Expected outcomes
- Supervisory team
- Reference material/links
The selected student will have the opportunity to work closely with researchers with expertise in formal methods, cybersecurity, and AI.
- A language-based framework for writing agentic systems that can be provably trusted.
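To illustrate the flavour of such a language-based framework, the sketch below shows one possible enforcement idea (it is not the project's actual design; all names are hypothetical): raw LLM output is wrapped in an untrusted `Tainted` type, and a tool call only accepts a `Trusted` value, which can be obtained solely by passing an explicit validator. This mirrors the information-flow discipline a DSL could enforce statically.

```python
# Illustrative sketch only: a tiny embedded "DSL" in Python enforcing that
# untrusted LLM output cannot reach a tool call until it passes an explicit
# validation step. All names (Tainted, Trusted, validate, run_tool) are
# hypothetical, not part of the project's proposed language.
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class Tainted(Generic[T]):
    """Wraps untrusted data, e.g. raw LLM text; tools refuse it."""
    value: T

@dataclass(frozen=True)
class Trusted(Generic[T]):
    """Intended to be constructed only via `validate`."""
    value: T

def validate(t: Tainted[T], check: Callable[[T], bool]) -> Trusted[T]:
    """Promote a tainted value to trusted only if the check passes."""
    if not check(t.value):
        raise ValueError("validation failed; value remains untrusted")
    return Trusted(t.value)

def run_tool(arg: Trusted[str]) -> str:
    """A tool call that refuses anything but validated, trusted input."""
    assert isinstance(arg, Trusted), "tool refused untrusted input"
    return f"tool ran with {arg.value!r}"

# Example: the LLM proposes a filename; accept it only if it is alphanumeric.
llm_output = Tainted("report2024")
safe = validate(llm_output, str.isalnum)
print(run_tool(safe))  # prints: tool ran with 'report2024'
```

In a real DSL these guarantees would be checked at compile time rather than at runtime, so that a workflow routing unvalidated LLM output into a tool simply fails to typecheck.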