Impact

Ask an Ethicist: Is it ethical to use robots to kill in a war?

Credit: © Antoniooo / Shutterstock. All Rights Reserved.

In partnership with the Rock Ethics Institute, Penn State Today’s feature column, "Ask an Ethicist," aims to shed light on ethical questions from our readers. Each article in this column will feature a different ethical question answered by a Penn State ethicist. We invite you to ask a question by filling out and submitting this form. An archive of the columns can be found on the Rock Ethics Institute website.

Question:

I recently read an article online about something called the “Campaign to Stop Killer Robots.” At first I thought this was a joke, but after looking into it, I learned that the group really is working to ban the use of robotic weapons systems. This made me wonder: is it OK for a robot to kill in a war?

An ethicist responds:

War has traditionally raised a number of important ethical dilemmas. The advent of autonomous (self-controlled) robots presents important new questions for those who study robotics and ethics. Most people who study these topics believe that recent advances in autonomous robots and artificial intelligence will fundamentally change warfare. Autonomous robots, because they are not physiologically limited, can operate without sleep or food, perceive things that people cannot, and move in ways that humans cannot. These abilities suggest that using robots in war offers an important tactical advantage, and militaries around the world are making significant investments in robot-related research and development.

There are a number of rules and conventions that dictate right from wrong during war, and many of them govern the use of technology. On the one hand, the Hague Conventions, for example, limit the use of chemical and biological weapons. On the other hand, military necessity is often invoked as a means of justifying certain actions during war. Broadly, military necessity holds that armed forces may take whatever actions are needed to achieve legitimate military objectives, provided those actions are not illegal under humanitarian law. In some cases, military necessity has been successfully used to justify setting aside the existing laws and conventions of war.

It is important to point out that the laws and conventions governing the use of a technology in war are typically made after the fact. It is generally hard to predict whether and how a particular technology will be used and what impact it will have. Further, the use of some technologies may change as a war progresses. For example, bombing raids during World War II initially focused only on military targets, but errant raids and the reprisals they provoked opened the door to carpet bombing cities.

In order to best answer your question, I will focus only on robots that are capable of making decisions. By making decisions I mean that the robotic system evaluates its surroundings and uses information such as prior experience or commands from authority figures to decide, without direct human intervention, which enemy to engage, when to engage them, and how to engage them. Imagine an autonomous robotic soldier sent on a mission just as a human soldier would be.

Some scholars passionately argue that the use of robots to kill in war is unethical. A variety of arguments against the use of robots have been made, but two predominate. One argument is that the use of robot soldiers will cheapen the cost of war, making future wars more likely. In most conflicts, human casualties generate political pressure, which pushes leaders to end the war. Robotic soldiers, because they shift risk away from a nation’s soldiers, may upset that calculus. A second argument, directed more specifically at robots that kill, is that robots are currently not as capable as humans of distinguishing between civilians and enemy combatants. As such, these robots have the potential to target and kill innocent civilians.

Other scholars suggest that using robots in warfare may actually be more ethical than not using them. For one thing, not using robots ensures that people will be put in harm’s way. Assuming that a war is already being conducted, military and political leaders are ethically obligated to reduce casualties, and using robots may be one way of doing so. Another argument for robotic soldiers is that, because they lack emotions and are immune to the stress of war, these machines are less likely to commit atrocities and more likely to follow the rules of war, the Hague Conventions, and other declarations delimiting how a war should be fought.

One important point of contention between the two sides is decision-making. Should an autonomous robot be allowed to decide whether or not to target an enemy? Current United States military doctrine states that only a human can make the decision to kill, but it is easy to imagine this doctrine eroding as military necessity dictates. There is an obvious military advantage, in terms of speed, to allowing robots to decide whether a person should be targeted. Robots that can target and kill without calling their command centers will be more efficient killers than those that must request permission from a human. Some environments, such as the demilitarized zone separating North and South Korea, are devoid of civilians, and, because of the size of the North Korean army and its proximity to South Korean population centers, rapid response to an invasion is a critical military necessity for South Korea. Autonomous machine guns that can fire without human confirmation have already been deployed along this border.

From an ethical standpoint, our goal should be to decrease both the number of wars and their violence. A debate exists as to how autonomous robots will affect this goal. Unfortunately, as with the advent of many previous technologies, we are not likely to know the answer until the next major war. That limitation should not, however, prevent us from debating and discussing this important topic.

Alan R. Wagner is an assistant professor in the Department of Aerospace Engineering and a research associate in the Rock Ethics Institute. His research and teaching interests focus on human-robot trust and the creation of robots that learn ethical behavior. Wagner’s research has won several awards, including an Air Force Young Investigator Award, and Time magazine described his work on deception as the 13th most important invention of 2010.

Have a question? Submit it here.

Note: The "Ask an Ethicist" column is a forum to promote ethical awareness and inquiry across the Penn State community. These articles represent the interests and judgments of each author as an individual scholar and are not official positions of either the Rock Ethics Institute or Penn State University. They are designed to offer a possible approach to a subject and are not intended as definitive statements on what is or is not ethical in any given situation. Read the full disclaimer.

Last Updated February 24, 2017