Chances are you have heard the term 'killer robots', but what exactly are they? Are they inevitable, or do they already exist? We sat down with UNSW Canberra academic Austin Wyatt to find out more.


What is a Lethal Autonomous Weapon System (LAWS)?

There is no universally accepted definition. The most commonly used comes from a US Department of Defense Directive, which defines a weapon as fully autonomous if, when activated, it “can select and engage targets without further intervention by a human operator”.

In my research I use a function-based definition: "Killer robots" are fully autonomous weapon systems. These are weapons that operate without meaningful human control, meaning the weapon itself can make decisions about where and how it is used, what or whom it is used against, and the effects of its use.


Where did the term ‘Killer Robot’ come from?

The term 'killer robot' is a negative one that has been used consistently to focus the discourse on the lethal capability of LAWS, particularly in media appearances and published materials, as well as in the central questions of the public surveys commissioned by the Campaign to Stop Killer Robots over the past three years.

I am not totally certain who coined the term, but it is prominently used by non-governmental organisations. 


How involved are humans in LAWS? Is there still an element of human control?

To date, no military has reported developing fully autonomous (human-out-of-the-loop) weapons. Current and in-development systems generally retain some level of human supervision or control.

A major component of the current international debate is how to maintain a sufficient level of human involvement without losing the efficiency benefits of uninhabited systems. While the term 'Meaningful Human Control' is commonly used, there is no agreement on the specifics of what that would look like.


Do we have reason to be fearful of LAWS?

The prospect of robotic systems ending human life without a direct human decision certainly contributes to fear of LAWS. They raise important moral, ethical and legal concerns that deserve serious and transparent consideration before they could be legitimately deployed.

However, these concerns should not be used to justify a push for a pre-emptive ban on an inherently dual-use technology at the expense of responsible, technically grounded debate.


Are there strict regulations around LAWS? If not, should there be international regulations?

There are currently no regulations specific to LAWS, although their use would fall under the tenets of International Humanitarian Law. The debate centres on whether specific international regulation is needed, whether there should be a pre-emptive international ban on their development, or whether current international law is already sufficient.

For specific regulation to work, the international community would need to establish measurable technical standards for objectively determining whether a given system is 'autonomous' or merely highly automated. Unfortunately, the negotiation process under way at the United Nations has focused principally on the question of a pre-emptive ban on autonomous weapons.


Austin Wyatt is a Research Associate in the Values in Defence & Security Technology group at UNSW Canberra. Read more about his work on this subject here.