The transition from passive Large Language Models (LLMs) to Agentic AI systems capable of autonomous reasoning, tool use, and environmental interaction marks a significant shift in the artificial intelligence landscape. While these agents offer transformative potential for productivity and complex problem solving, they introduce a novel attack surface and unique safety challenges that traditional AI frameworks are ill-equipped to handle. This project will (1) explore state-of-the-art methods for building trustworthy agentic AI, and (2) implement a prototype domain-specific language that only permits the implementation of trustworthy agentic workflows over untrustworthy LLMs.

School

Computer Science and Engineering

Research Area

Formal methods, security, and machine learning

Suitable for recognition of Work Integrated Learning (industrial training)?

No

The selected student will have the opportunity to work closely with researchers with expertise in formal methods, cybersecurity, and AI.

  1. A language-based framework for writing agentic systems that can be provably trusted.
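To make the goal concrete, one possible design for such a language-based framework is information-flow control: every value carries a trust label, LLM output is always labeled untrusted, and sensitive tools refuse untrusted inputs unless an explicit, auditable validation step endorses them. The following is a minimal Python sketch of that idea, not the project's actual design; the names (`Labeled`, `llm`, `endorse`, `send_email`) and the dynamic checks are illustrative assumptions standing in for guarantees a DSL would enforce statically.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Labeled:
    """A value tagged with a trust label (hypothetical)."""
    value: str
    trusted: bool  # False for anything produced by the LLM

def llm(prompt: str) -> Labeled:
    # Stand-in for a real model call; its output is always untrusted.
    return Labeled(f"response to: {prompt}", trusted=False)

def endorse(v: Labeled, check: Callable[[str], bool]) -> Labeled:
    # Explicit declassification point: the only way to upgrade a label,
    # gated by a validator so every endorsement is auditable.
    if not check(v.value):
        raise PermissionError("validation failed; value stays untrusted")
    return Labeled(v.value, trusted=True)

def send_email(body: Labeled) -> str:
    # Sensitive tool: the discipline a DSL would enforce at compile time
    # is checked dynamically in this sketch.
    if not body.trusted:
        raise PermissionError("untrusted data cannot reach send_email")
    return f"sent: {body.value}"

draft = llm("write a status update")
# send_email(draft) would raise PermissionError here, because the
# draft came straight from the untrusted model.
approved = endorse(draft, lambda s: s.startswith("response"))
print(send_email(approved))
```

In a real DSL the `trusted`/`untrusted` distinction would live in the type system, so a workflow that routes raw LLM output into a sensitive tool simply fails to compile rather than failing at runtime.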