Mason Smithers[1] is a student of robotics and aviation. He has taken part in building and programming robots for various purposes and is seeking a career as a pilot.
Jason Criss Howk[2] is an adjunct professor of national security and Islamic studies and was Mason’s guest instructor during the COVID-19 quarantine.
Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.
National Security Situation: The deployment of robots on the battlefield raises many questions for nations that desire to employ them.
Date Originally Written: April 5, 2020.
Date Originally Published: June 24, 2020.
Author and / or Article Point of View: This paper is based on the assumption that robots will one day become the predominant actors on the battlefield as artificial intelligence (AI) and robotics technology advance. The authors believe it is the moral duty of national and international policy-makers to debate and establish the rules for this future now.
Background: Robots on the battlefield in large quantities, where they make up the majority of the combatants making direct contact with a nation’s enemies, will raise new concerns for national leaders and human rights scholars. Whether they are tethered to a human decision-maker or not, when robots become the primary resource that a nation puts at risk during war, there will be an avalanche of new moral and ethical questions to debate.
This shift in the “manning” of warfighting organizations could increase the chances that nations will go to war, because robots can be easily and affordably replaced. Without a cost in human lives, citizens may be less eager to demand that a war be ended or avoided.
Significance: While the U.S. currently uses human-operated ground and air robots (armed unmanned aircraft, a.k.a. drones; reconnaissance robots; bomb technicians’ assistants; etc.), a robust debate about whether robots can be safely untethered from humans is currently underway. If the United States or other nations decide to mass produce infantry robots that can act without a human controlling them and making critical decisions for them, what are the associated costs and risks? The answers to these questions about the future matter now to every leader involved in warfare and peace preservation.
Option #1: The U.S. continues to deploy robots in the future with current requirements for human decision-making (a.k.a. human in the loop) in place. In this option, humans in any military force will continue to make all decisions for robots with the capability to use deadly force.
Risk: If other nations choose to use robots with their own non-human decision capability, or in larger numbers, U.S. technology and moral limits may leave the U.S. force smaller and possibly outnumbered. Requiring a human in the loop will further stretch U.S. armed forces that are already hurting in the areas of retention and readiness. Humans in the loop, due to eventual distraction or fatigue, will be slower in making decisions when compared to robots. If other nations perfect this technology before the U.S., there may not be time to catch up in a war and regain the advantage. The U.S. alliance system may be challenged by differing views on whether or not to keep a human in the loop.
Gain: Having a human in the loop will decrease the risk of international incidents that cause wars, due to the human’s assumed greater capacity for discretion. A human can make decisions that are “most correct” and not simply the fastest or most logical. Humans stand the best chance of making choices that create positive strategic impacts when a gray area presents itself.
Option #2: The U.S. transitions to a military force that is predominantly robotic and delegates decision-making to the robots at the lowest, possibly individual robot, level.
Risk: Programmers cannot account for every situation on the battlefield. When robots encounter new techniques from the enemy (human innovations), the robots may become confused and be easily defeated until they are reprogrammed. Robots may be more likely to mistake civilians for lawful combatants. Robots can be hacked, and then either stopped or turned against their owners. Robots could be reprogrammed to ignore the laws of warfare in order to frame a nation for war crimes. There is an increased risk for nations when the rules of warfare are broken by robots. Laws will be needed to determine who bears the blame for war crimes (i.e. designers, owners, programmers, elected officials, senior commanders, or the closest user). There will be a requirement to develop rights for robots in warfare. There could be prisoner-of-war status issues, and discussions about how shutdown and maintenance requirements work so robots are not operated until they malfunction and die. This option leads to the question: “If robots can make decisions, are they sentient/living beings?” Sentient status would require nations to consider minimum requirements for living standards of robots. This could create many questions about the ethics of sending robots to war.
Gain: This option has a lower cost than human manning of military units. The ability to mass produce robots means the U.S. can quickly keep pace with nations that field large human or robotic militaries. Robots may be more accurate with weapons systems, which may reduce civilian casualties.
Other Comments: While this may seem like science fiction to some policy-makers, this future is coming, likely faster than many anticipate.
Recommendation: None.
Endnotes:
[1] Mason Smithers is a 13-year-old 7th-grade Florida student. He raised this question with his guest instructor Jason Howk during an impromptu national security class. When Mason started to explain in detail all the risks and advantages of robots in future warfare, Jason asked him to write a paper about the topic. Ninety percent of this paper is from Mason’s 13-year-old mind and his view of the future. We can learn a lot from our students.
[2] Mason’s mother has given permission for the publication of his middle school project.