Safeguarding the expanding world of artificial intelligence
From smart home devices to the imminent arrival of self-driving cars, the role that artificial intelligence (AI) plays in our everyday lives is rapidly expanding.
Experts from a range of disciplines must be brought together to address the implications of these technologies, according to UNSW Canberra AI Hub Deputy Director Matt Garratt.
“AI cuts across all aspects of technology, such as engineering, science, business logistics and digital humanities,” Professor Garratt said.
“Moreover, there are strong ethical and legal implications to the use of AI, and humanities-based discussion around its use is incredibly important to ensure all members of society are aware of issues such as machine bias, legal liability of autonomous systems, privacy and human connectedness.”
About 80 academics from across UNSW Canberra Business, Engineering, Science, Humanities and Social Sciences are exploring the many facets of artificial intelligence.
For Public Service Research Group Director and Business Professor Helen Dickinson, that means investigating the role that AI will have in our public service and care sectors.
“We’ve always looked at the future of the public service workforce, including workplaces becoming more flexible,” Professor Dickinson said.
“We’re now looking at the tools that governments and public servants are using. We’re seeing that AI and robotics are starting to be used, but there really hasn’t been much written in the public management and administration literature – the technology is running ahead of the literature and regulation.”
Professor Dickinson cites the use of home automation in the care industries, and government failures such as the robodebt scandal, as examples of the disconnect between the designers of AI technology and the people who use it.
“We are in urgent need of a conversation about how we want to see these technologies used and where some of the limitations are,” Professor Dickinson said.
“With something as important as this, we need to bring together ethicists, technologists and political scientists, so we can collectively think about how we can gain the maximum benefits from AI while guarding against any negative implications.”
Business efficiency is among the areas where AI may prove to be particularly useful. Associate Professor Omar Hussain is leading a team investigating the use of AI in supply chain risk management.
“UNSW Canberra’s research into using AI to assist business decision-making is at the forefront of the field,” Associate Professor Hussain said.
“The research team works on using cutting-edge technologies that enable businesses to increase the accuracy and efficiency of their operations and save time.
“An essential requirement is to have a cross-disciplinary link between businesses and AI to produce industry-relevant research.”
With jobs across administration, transport, health, education and legal sectors among those likely to be changed by artificial intelligence, Professor Garratt said a human-centred approach to AI is more important than ever.
“Automation of work by AI technologies has the potential to greatly reduce the burden of everyday living for humans,” Professor Garratt said.
“It can make our lives safer, fairer and provide a better work-life balance. However, this will not happen unless safeguards and social programs are enacted to ensure the benefits are evenly distributed amongst all humans.
There is a risk that people, organisations and governments will over-trust AI, leading to bias, incorrect decisions and unfair treatment of individuals. All new technology must be thoroughly tested by skilled practitioners to ensure that AI systems are not deployed where they are unreasonably biased, or where risks are not properly understood and managed.”