AI weaponry should be banned from the battlefield
UNSW Sydney artificial intelligence expert says autonomous systems being used in the current Ukraine conflict need to be prohibited in the same way as chemical and biological weapons.
Lethal autonomous weapons need to be added to the UN’s Convention on Certain Conventional Weapons, the open-ended treaty regulating new forms of weaponry.
That is the view of Scientia Professor Toby Walsh, chief scientist at UNSW’s AI Institute, in discussion as part of UNSW’s ‘Engineering the Future’ podcast series.
The rules of war, widely accepted under the Geneva Convention that was first established in 1864, dictate what can and cannot be done during armed conflicts and aim to curb the most brutal aspects of war by setting limits on weapons and tactics that can be employed.
Chemical and biological weapons have been banned for use in conflict since 1925, following the horrors of the First World War, and Prof. Walsh says AI-powered autonomous weapons should now also be prohibited.
The UNSW academic is banned from Russia after questioning claims that an AI-powered anti-personnel land mine being developed there was more humanitarian.
In addition to his concerns about the morality of such weapons, Prof. Walsh says other autonomous weapons that are starting to be used in the Ukraine conflict should be banned.
“AI is transforming all aspects of our life and so, not surprisingly, it's starting to transform warfare. I'm pretty sure historians will look back at the Ukrainian conflict and say how drones and autonomy and AI started to transform the way we fought war – and not in a good way,” he says.
“I'm very concerned that we will completely change the character of war if we hand over the killing to machines.
“From a legal perspective, it violates international humanitarian law – in particular, various principles like distinction and proportionality. We can't build machines that can make those sorts of subtle distinctions.
“Law is about holding people accountable. But you notice I said the word 'people'. Only people are held accountable. You can't hold machines accountable.”
Prof. Walsh says that in the fog of war, the use of non-human-controlled weaponry is far from ideal.
“The battlefield is a contested, adversarial setting where people are trying to fool you and you have no control over a lot of things that are going on. So it's the worst possible place to put a robot,” he says.
“And then the moral perspective is actually perhaps the most important and strongest argument against AI in warfare.
“War is sanctioned because it's one person's life against another. The fact that the other person may show empathy to you, that there is some dignity between soldiers, those features do not exist when you hand over the killing to machines that don't have empathy, don't have consciousness, can't be held accountable for their decisions.
“I'm quite hopeful that we will, at some point, decide that autonomous weapons should also be added to the list of terrible ways to fight war, like chemical weapons and biological weapons. What worries me is that in most cases, we've only regulated various technologies for fighting after we've seen the horrors of them being misused in battle.”
Joining Prof. Walsh on the ‘Engineering the Future of AI’ podcast was Stela Solar, director of the National Artificial Intelligence Centre hosted by CSIRO's Data61, as they discussed the fascinating potential uses of AI in a wide variety of areas such as education, health and transportation.
Solar is involved in the Responsible AI Network, a world-first cross-ecosystem collaboration aimed at uplifting the practice of responsible AI across Australia's commercial sector.
And she agrees it is important that the ever-increasing development of AI is done in the right way.
“There is a need for us to really understand that AI is a tool that we're deciding how we use. So whether that's for positive impact or for negative consequences, it is very much about the human accountability of how we use the technology,” she says.
“AI is only as good as we lead it, and that is why the area of responsible AI is so important right now.
“There is a need for governance of AI systems that we're just discovering. AI systems generally are potentially more agile. They are continually updated, continually changing. And so we're just discovering what those governance models look like in order to ensure responsible use of AI tools and technologies.
“It's also one of the reasons why we've established the Responsible AI Network, to help more of Australia's industry take on some of those best practices for implementing AI responsibly.”
* Professor Toby Walsh and Stela Solar were in conversation as part of the Engineering the Future Podcast series.