From driverless cars to robotic caregivers for the elderly, we live in a world where ‘smart’ systems are increasingly part of our everyday lives. Autonomous systems operate in complex and open-ended environments, using artificial intelligence (AI) to respond to feedback and unforeseen changes in the environment. These intelligent systems shift workload away from humans, letting us harness information technology to deliver significant benefits. They have huge potential to positively impact our society and improve our quality of life in industries like health, education and defence.
As the application of autonomous systems becomes more widespread, the emphasis on a cooperative relationship between humans and machines becomes more significant. Safety, accuracy and security are always major concerns in the adoption of autonomous systems. For people to feel comfortable using these systems, they need to be trustworthy. But what does trustworthiness mean when it comes to artificial intelligence? And how can AI be designed to produce trusted autonomous systems?
Trusted autonomy is an emerging field of research focused on understanding and designing the interaction space between entities, each of which exhibits a level of autonomy. These entities can be humans, computer-controlled machines or a mix of the two. Our aim is to integrate humans and machines seamlessly, naturally and efficiently, creating a trusted, cooperative team that can solve complex problems in uncontrolled, uncertainty-rich environments.
We have expertise in traditional machine learning, the navigation and control of autonomous vehicles, developmental robotics, computational motivation and computational red teaming. This mix of expertise makes us unique in Australia. We have the ability to innovate, taking concepts from ideation through to real-world applications that raise productivity, improve resource use and enhance human safety.