Options for the Deployment of Robots on the Battlefield

Mason Smithers[1] is a student of robotics and aviation. He has taken part in building and programming robots for various purposes and is seeking a career as a pilot. 

Jason Criss Howk[2] is an adjunct professor of national security and Islamic studies and was Mason’s guest instructor during the COVID-19 quarantine.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The deployment of robots on the battlefield raises many questions for nations that desire to do so.

Date Originally Written:  April 5, 2020.

Date Originally Published:  June 24, 2020.

Author and / or Article Point of View:  This paper is based on the assumption that robots will one day become the predominant actor on a battlefield, as AI and robotics technology advance. The authors believe it is the moral duty of national and international policy-makers to debate and establish the rules for this future now.

Background:  Robots on the battlefield in large quantities, where they make up the majority of the combatants making direct contact with a nation’s enemies, will raise new concerns for national leaders and human rights scholars. Whether they are tethered to a human decision-maker or not, when robots become the primary resource that a nation puts at risk during war, there will be an avalanche of new moral and ethical questions to debate.

This shift in the “manning” of warfighting organizations could increase the chances that nations will go to war because they can afford to easily replace robots; without a human-life cost, citizens may be less eager to demand that a war be ended or avoided.

Significance:  While the U.S. currently uses human-operated ground and air robots (armed unmanned aircraft, also known as drones, reconnaissance robots, bomb technicians’ assistants, etc.), a robust debate about whether robots can be safely untethered from humans is currently underway. If the United States or other nations decide to mass produce infantry robots that can act without a human controlling them and making critical decisions for them, what are the associated costs and risks? The answers to these questions about the future matter now to every leader involved in warfare and peace preservation.

Option #1:  The U.S. continues to deploy robots in the future with current requirements for human decision-making (aka human in the loop) in place. In this option the humans in any military force will continue to make all decisions for robots with the capability to use deadly force.

Risk:  If other nations choose to use robots with their own non-human decision capability or in larger numbers, U.S. technology and moral limits may leave the U.S. force smaller and possibly outnumbered. Requiring a human in the loop will stretch a U.S. armed force that is already hurting in the areas of retention and readiness. Humans in the loop, due to eventual distraction or fatigue, will be slower in making decisions when compared to robots. If other nations perfect this technology before the U.S., there may not be time to catch up in a war and regain the advantage. The U.S. alliance system may be challenged by differing views on whether or not to have a human in the loop.

Gain:  Having a human in the loop decreases the risk of international incidents that cause wars, due to the human’s assumed greater capacity for discretion. A human can make decisions that are “most correct” and not simply the fastest or most logical. Humans stand the best chance of making choices that create positive strategic impacts when a gray area presents itself.
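
To make the “human in the loop” requirement in Option #1 concrete, a minimal sketch follows. It assumes an entirely hypothetical interface; the names, data structure, and console prompt are illustrative only and are not drawn from any real weapons system. The point is simply that the robot may nominate a target, but only a human operator can authorize deadly force.

```python
# Hypothetical sketch of Option #1's "human in the loop" requirement: the robot
# may nominate a target, but only a human operator can authorize deadly force.
# Nothing here is modeled on a real weapons system.

from dataclasses import dataclass


@dataclass
class TargetNomination:
    target_id: str
    confidence: float  # robot's own estimate that the target is a lawful combatant


def human_authorizes(nomination: TargetNomination) -> bool:
    """Stand-in for a human operator reviewing the robot's nomination."""
    answer = input(f"Engage {nomination.target_id} "
                   f"(confidence {nomination.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"


def decide(nomination: TargetNomination) -> str:
    # Under Option #1 the robot never fires on its own authority.
    if human_authorizes(nomination):
        return f"engaging {nomination.target_id}"
    return "holding fire"


if __name__ == "__main__":
    print(decide(TargetNomination(target_id="vehicle-07", confidence=0.92)))
```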

Option #2:  The U.S. transitions to a military force that is predominantly robotic and delegates decision-making to the robots at the lowest, possibly individual robot, level.

Risk:  Programmers cannot account for every situation on the battlefield. When robots encounter new techniques from the enemy (human innovations) the robots may become confused and be easily defeated until they are reprogrammed. Robots may be more likely to mistake civilians for legal combatants. Robots can be hacked, and then either stopped or turned on the owner. Robots could be reprogrammed to ignore the Laws of Warfare to frame a nation for war crimes. There is an increased risk for nations when rules of warfare are broken by robots. Laws will be needed to determine who gets the blame for the war crimes (i.e. designers, owners, programmers, elected officials, senior commanders, or the closest user).  There will be a requirement to develop rights for the robots in warfare. There could be prisoner of war status issues and discussions about how shutdown and maintenance requirements work so robots are not operated until they malfunction and die.  This option can lead to the question, “if robots can make decisions, are they sentient/living beings?” Sentient status would require nations to consider minimum requirements for living standards of robots. This could create many questions about the ethics of sending robots to war.

Gain:  This option has a lower cost than human manning of military units. The ability to mass produce robots means the U.S. can quickly keep up with nations that produce large human or robotic militaries. Robots may be more accurate with weapons systems, which may reduce civilian casualties.

Other Comments:  While this may seem like science fiction to some policy-makers, this future is coming, likely faster than many anticipate.

Recommendation:  None.


Endnotes:

[1] Mason Smithers is a 13-year-old, 7th grade Florida student. He raised this question with his guest instructor Jason Howk during an impromptu national security class. When Mason started to explain in detail all the risks and advantages of robots in future warfare, Jason asked him to write a paper about the topic. Ninety percent of this paper is from Mason’s 13-year-old mind and his view of the future. We can learn a lot from our students.

[2]  Mason’s mother has given permission for the publication of his middle school project.


Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Marijn Pronk is a Master Student at the University of Glasgow, focusing on identity politics, propaganda, and technology. Currently Marijn is finishing her dissertation on the use of populist propaganda tactics by the Far-Right online. She can be found on Twitter @marijnpronk9. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Date Originally Written:  April 1, 2020.

Date Originally Published:  May 18, 2020.

Author and / or Article Point of View:  The Author is a Master Student in Security, Intelligence, and Strategic Studies at the University of Glasgow. The Author believes that a nuanced perspective towards the influence of Artificial Intelligence (AI) on technical communication services is paramount to understanding its threat.

Summary:  AI has greatly impacted communication technology worldwide. Computational propaganda is an example of the unregulated use of AI weaponized for malign political purposes. Changing online realities through botnets distorts online environments and could affect voters and democracies’ ability to function. However, this type of AI is currently limited to Big Tech companies and governmental powers.

Text:  A cornerstone of the democratic political structure is media; an unbiased, uncensored, and unaltered flow of information is paramount to sustaining the health of the democratic process. In a fluctuating political environment, digital spaces and technologies offer great platforms for political action and civic engagement[1]. Currently, more people use Facebook as their main source of news than any news organization[2]. Therefore, manipulating the flow of information in the digital sphere could pose a great threat not only to the democratic values that the internet was founded upon, but also to the health of democracies worldwide. Imagine a world where those pillars of democracy can be artificially altered, where people can manipulate the digital information sphere, from the content to the exposure range of information. In this scenario, one would be unable to distinguish real from fake, making critical perspectives obsolete. One practical embodiment of this phenomenon is computational propaganda, which describes the process of digital misinformation and manipulation of public opinion via the internet[3]. Generally, these practices range from the fabrication of messages and the artificial amplification of certain information to the highly influential use of botnets (networks of software applications programmed to perform certain tasks). With the emergence of AI, computational propaganda could be enhanced, and the outcomes could become qualitatively better and more difficult to spot.

Computational propaganda is defined as “the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion[3].” AI has the power to enhance computational propaganda in various ways, such as increased amplification and reach of political disinformation through bots. Qualitatively, AI can also increase the sophistication and automation quality of bots. AI already plays an intrinsic role in the gathering process, being used in datamining of individuals’ online activity and in monitoring and processing large volumes of online data. Datamining combines tools from AI and statistics to recognize useful patterns and handle large datasets[4]. These technologies and databases are often grounded in the digital advertising industry. With the help of AI, data collection can be done in a more targeted and thus more efficient way.
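
As a rough illustration of how data mining supports this kind of targeted collection, the sketch below clusters users into audience segments from a handful of behavioural features. The features, numbers, and segment interpretations are invented for illustration; a real advertising or propaganda pipeline would operate on vastly larger and messier data.

```python
# Toy sketch of data mining for audience targeting: cluster users into segments
# from a few behavioural features so that messages can be tailored per segment.
# All features and numbers are invented for illustration.

import numpy as np
from sklearn.cluster import KMeans

# Each row: [posts per day, share of political content, average session minutes]
users = np.array([
    [ 2, 0.1, 10],
    [ 3, 0.2, 12],
    [20, 0.8, 45],
    [18, 0.9, 50],
    [ 5, 0.5, 25],
    [ 6, 0.4, 30],
])

# Group the users into three audience segments.
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(users)
for user, segment in zip(users, segments):
    print(f"user {user} -> segment {segment}")
```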

Concerning the malicious use of these techniques in the realm of computational propaganda, these improvements in AI can enhance “[..] the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently[4].” Botnets are still relatively reliant on human input for their political messages, but AI can also improve the capabilities of bots interacting with humans online, making them seem more credible. Though the self-learning capabilities of some chat bots are relatively rudimentary, improved automation through computational propaganda tools aided by AI could be a powerful means of influencing public opinion. The self-learning aspect of AI-powered bots, and the increasing volume of data that can be used for training, give rise to concern: “[..] advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning, and computer vision algorithms are paving the way for the rise in AI-powered bots, that are faster, getting better at understanding human interaction and can even mimic human behaviour[5].” With this improved automation and data gathering power, computational propaganda tools aided by AI could act more precisely by improving the data gathering process quantitatively and qualitatively. Consequently, this hyper-specialized data and the increasing credibility of bots online, due to their growing contextual understanding, can greatly enhance the capabilities and effects of computational propaganda.

However, AI’s capabilities should be put into perspective in three areas: the data, the power of the AI, and the quality of the output. Starting with AI and data, technical knowledge is necessary to work with the massive databases used for audience targeting[6]. This quality of AI is within the capabilities of a nation-state or big corporation, but still stays out of reach for the masses[7]. Secondly, the level of entrenchment and strength of the AI will determine its final capabilities. One must distinguish between ‘narrow’ and ‘strong’ AI to consider the possible threat to society. Narrow AI is simply rule based, meaning that the data runs through multiple levels coded with algorithmic rules for the AI to come to a decision. Strong AI means that the rules-model can learn from the data and adapt its set of pre-programmed rules itself, without human interference (this is called ‘Artificial General Intelligence’). Currently, such strong AI is still a concept of the future. Human labour still creates the content for the bots to distribute, simply because the AI is not powerful enough to think outside its pre-programmed box of rules and therefore cannot (yet) create its own content solely based on the data fed to the model[7]. So, computational propaganda is dependent on narrow AI, which requires a relatively large amount of high-quality data to yield accurate results; deviating from its programmed path or task severely affects its effectiveness[8]. Thirdly, the output, or the propaganda produced by computational propaganda tools, varies greatly in quality. The real danger lies in the quantity of information that botnets can spread. Regarding chatbots, which are supposed to be high quality and indistinguishable from humans, these models often fail when tried outside their training data environments.
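
The contrast the author draws between rule-based systems and systems that learn from data can be made concrete with a small sketch. Both examples below are toy content filters on invented data; neither is a real propaganda-detection tool, and the keywords, posts, and labels are illustrative assumptions only. Note that even the learned model remains narrow: it only performs the single task it was trained for.

```python
# Toy illustration of the rule-based versus learned distinction drawn above.
# The first filter applies fixed, hand-written rules; the second fits its
# parameters from labelled examples but is still narrow. All data is invented.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB


def rule_based_flag(post: str) -> bool:
    """Purely rule-based: a human wrote the rules; nothing is learned."""
    return any(word in post.lower() for word in ("rigged", "hoax", "traitor"))


# A learned (but still narrow) alternative: parameters come from training data.
train_posts = ["the election was rigged", "lovely weather today",
               "they are traitors to the nation", "new recipe for dinner"]
train_labels = [1, 0, 1, 0]  # 1 = propaganda-like, 0 = benign (invented labels)

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(train_posts), train_labels)

post = "the election was rigged, what a hoax"
print("rule-based flag:", rule_based_flag(post))
print("learned model flag:", bool(classifier.predict(vectorizer.transform([post]))[0]))
```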

Several responses to this emerging threat are underway. First, policy changes across the media ecosystem are being made to mitigate the effects of disinformation[9]. Second, researchers have recently investigated the possibility of AI assisting in combating falsehoods and bots online[10]. One proposal is to build automated and semi-automated systems on the web for fact-checking and content analysis (a toy sketch of such a tool follows at the end of this section). Eventually, these bottom-up solutions could considerably help counter the effects of computational propaganda. Third, the influence that Big Tech companies have on these issues cannot be negated, and their accountability for the creation of these problems, and their potential power to mitigate them, will have to be considered. Top-to-bottom co-operation between states and the public will be paramount. “The technologies of precision propaganda do not distinguish between commerce and politics. But democracies do[11].”
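
As an illustration of the kind of semi-automated content-analysis aid mentioned above, the sketch below scores how bot-like an account looks so a human reviewer can prioritize what to inspect. The features and thresholds are illustrative assumptions, not a validated detection method.

```python
# Naive heuristic for flagging bot-like accounts for human review.
# Features and thresholds are illustrative assumptions only.

def bot_likeness(posts_per_day: float, followers: int, following: int,
                 duplicate_post_ratio: float) -> float:
    """Return a score between 0 and 1; higher means more bot-like."""
    score = 0.0
    if posts_per_day > 50:                             # inhuman posting volume
        score += 0.4
    if following > 0 and followers / following < 0.1:  # follows far more accounts than follow back
        score += 0.3
    score += 0.3 * duplicate_post_ratio                # mostly repeats the same message
    return round(min(score, 1.0), 2)


# Example: a high-volume account that mostly reposts identical content.
print(bot_likeness(posts_per_day=120, followers=30, following=900,
                   duplicate_post_ratio=0.8))  # prints 0.94
```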


Endnotes:

[1] Vaccari, C. (2017). Online Mobilization in Comparative Perspective: Digital Appeals and Political Engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), pp. 69-88. doi:10.1080/10584609.2016.1201558

[2] Majo-Vazquez, S., & González-Bailón, S. (2018). Digital News and the Consumption of Political Information. In G. M. Forthcoming, & W. H. Dutton, Society and the Internet. How Networks of Information and Communication are Changing Our Lives (pp. 1-12). Oxford: Oxford University Press. doi:10.2139/ssrn.3351334

[3] Woolley, S. C., & Howard, P. N. (2018). Introduction: Computational Propaganda Worldwide. In S. C. Woolley, & P. N. Howard, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (pp. 1-18). Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.003.0001

[4] Wardle, C. (2018, July 6). Information Disorder: The Essential Glossary. Retrieved December 4, 2019, from First Draft News: https://firstdraftnews.org/latest/infodisorder-definitional-toolbox

[5] Dutt, D. (2018, April 2). Reducing the impact of AI-powered bot attacks. CSO. Retrieved December 5, 2019, from https://www.csoonline.com/article/3267828/reducing-the-impact-of-ai-powered-bot-attacks.html

[6] Bolsover, G., & Howard, P. (2017). Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda. Big Data, 5(4), pp. 273–276. doi:10.1089/big.2017.29024.cpr

[7] Chessen, M. (2017). The MADCOM Future: how artificial intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… and what can be done about it. Washington DC: The Atlantic Council of the United States. Retrieved December 4, 2019

[8] Davidson, L. (2019, August 12). Narrow vs. General AI: What’s Next for Artificial Intelligence? Retrieved December 11, 2019, from Springboard: https://www.springboard.com/blog/narrow-vs-general-ai

[9] Hassan, N., Li, C., Yang, J., & Yu, C. (2019, July). Introduction to the Special Issue on Combating Digital Misinformation and Disinformation. ACM Journal of Data and Information Quality, 11(3), 1-3. Retrieved December 11, 2019

[10] Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufacturing Consensus Online. Oxford, UK: Project on Computational Propaganda. Retrieved December 5, 2019

[11] Ghosh, D., & Scott, B. (2018, January). #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. Retrieved December 11, 2019, from New America: https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit


U.S. Options to Combat Chinese Technological Hegemony

Ilyar Dulat, Kayla Ibrahim, Morgan Rose, Madison Sargeant, and Tyler Wilkins are Interns at the College of Information and Cyberspace at the National Defense University.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  China’s technological rise threatens U.S. interests both on and off the battlefield.

Date Originally Written:  July 22, 2019.

Date Originally Published:  February 10, 2020.

Author and / or Article Point of View:  This article is written from the point of view of the United States Government.

Background:  Xi Jinping, the Chairman of China’s Central Military Commission, affirmed in 2012 that China is acting to redefine the international world order through revisionist policies[1]. These policies foster an environment open to authoritarianism, thus undermining Western liberal values. The Chinese Communist Party (CCP) utilizes emerging technologies to restrict the individual freedoms of Chinese citizens, in and out of cyberspace. Subsequently, Chinese companies have exported this freedom-restricting technology to other countries, such as Ethiopia and Iran, for little cost. These technologies, which include Artificial Intelligence-based surveillance systems and nationalized Internet services, allow authoritarian governments to effectively suppress political dissent and discourse within their states. By essentially monopolizing the tech industry through low prices, China hopes to gain the loyalty of these states and obtain the political clout necessary to overcome the United States as the global hegemon.

Significance:  Among the technologies China is pursuing, 5G is of particular interest to the U.S.  If China becomes the leader of 5G network technologies and artificial intelligence, this will allow for opportunities to disrupt the confidentiality, integrity, and availability of data. China has been able to aid regimes and fragmented democracies in repressing freedom of speech and restricting human rights using “digital tools of surveillance and control[2].” Furthermore, China’s National Security Law of 2015 requires all Chinese tech companies’ compliance with the CCP. These Chinese tech companies are legally bound to share data and information housed on Chinese technology, both in-state and abroad. They are also required to remain silent about their disclosure of private data to the CCP. As such, information about private citizens and governments around the world is provided to the Chinese government without transparency. By deploying hardware and software for countries seeking to expand their networks, the CCP could use its authority over domestic tech companies to gain access to information transferred over Chinese built networks, posing a significant threat to the national security interests of the U.S. and its Allies and Partners. With China leading 5G, the military forces of the U.S. and its Allies and Partners would be restricted in their ability to rely on indigenous telecoms abroad, which could cripple operations critical to U.S. interests [3]. This risk becomes even greater with the threat of U.S. Allies and Partners adopting Chinese 5G infrastructure, despite the harm this move would do to information sharing with the United States.

If China continues its current trajectory, the U.S. and its advocacy for personal freedoms will grow increasingly marginal in the discussion of human rights in the digital age. In light of the increasing importance of the cyber domain, the United States cannot afford to assume that its global leadership will seamlessly transfer to, and maintain itself within, cyberspace. The United States’ position as a leader in cyber technology is under threat unless it vigilantly pursues leadership in advancing and regulating the exchange of digital information.

Option #1:  Domestic Investment.

The U.S. government could facilitate a favorable environment for the development of 5G infrastructure through domestic telecom providers. Thus far, the Chinese companies Huawei and ZTE have been able to outbid major European companies for 5G contracts. American companies that are developing 5G infrastructure are not large enough to compete at this time. By investing in 5G development domestically, the U.S. and its Allies and Partners would have 5G options other than Huawei and ZTE available to them. This option provides American companies with a level playing field relative to their Chinese counterparts.

Risk:  Congressional approval to fund 5G infrastructure development will prove to be a major obstacle. Funding a development project can quickly become a partisan issue. Fiscal conservatives might argue that markets should drive development, while those who believe in strong government oversight might argue that the government should spearhead 5G development. Additionally, government-subsidized projects have failed before, and there is no guarantee 5G will be different.

Gain:  By investing in domestic telecommunication companies, the United States can remain independent from Chinese infrastructure by mitigating further Chinese expansion. With the U.S. investing domestically and giving subsidies to companies such as Qualcomm and Verizon, American companies can develop their technology faster in an attempt to compete with Huawei and ZTE.

Option #2:  Foreign Subsidization.

The U.S. supports the European competitors Nokia and Ericsson, through loans and subsidies, against Huawei and ZTE. By providing loans and subsidies to these European companies, the United States gives them a means to produce 5G technology at more competitive prices and possibly outbid Huawei and ZTE.

Risk:  The American people may be hostile towards a policy that provides U.S. tax dollars to foreign entities. While the U.S. can provide stipulations that come with the funding provided, the U.S. ultimately sacrifices much of the control over the development and implementation of 5G infrastructure.

Gain:  Supporting European tech companies such as Nokia and Ericsson would help deter allied nations from investing in Chinese 5G infrastructure. This option would reinforce the U.S.’s commitment to its European allies, and serve as a reminder that the United States maintains its position as the leader of the liberal international order. Most importantly, this option makes friendlier telecommunications companies more competitive in international markets.

Other Comments:  Both options above would also include the U.S. defining regulations and enforcement mechanisms to promote the fair usage of cyberspace. This fair use would be a significant deviation from a history of loosely defined principles. In pursuit of this fair use, the United States could join the Cyber Operations Resilience Alliance, and encourage legislation within the alliance that invests in democratic states’ cyber capabilities and administers clearly defined principles of digital freedom and the cyber domain.

Recommendation:  None.


Endnotes:

[1] Economy, Elizabeth C. “China’s New Revolution.” Foreign Affairs. June 10, 2019. Accessed July 31, 2019. https://www.foreignaffairs.com/articles/china/2018-04-17/chinas-new-revolution.

[2] Chhabra, Tarun. “The China Challenge, Democracy, and U.S. Grand Strategy.” Democracy & Disorder, February 2019. https://www.brookings.edu/research/the-china-challenge-democracy-and-u-s-grand-strategy/.

[3] “The Overlooked Military Implications of the 5G Debate.” Council on Foreign Relations. Accessed August 01, 2019. https://www.cfr.org/blog/overlooked-military-implications-5g-debate.


Does Rising Artificial Intelligence Pose a Threat?

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Does Rising Artificial Intelligence Pose a Threat?

Date Originally Written:  February 3, 2019.

Date Originally Published:  February 18, 2019. 

Summary:  Artificial Intelligence, or A.I., has been a long-standing subject of science fiction that usually ends badly for the human race in some way. From the ‘Terminator’ films to ‘Wargames,’ a dangerous A.I. is a common theme. The reality, though, is that A.I. could go either way depending on the circumstances. However, in its present state and uses, A.I. is more of a danger than a boon on battlefields both political and military.

Text:  Artificial intelligence (A.I.) has been a staple of science fiction over the years, but recently the technology has become a more probable reality[1]. The use of semi-intelligent computer programs and systems has made our lives a bit easier with regard to certain things, like turning the lights on in a room with an Alexa, playing some music, or answering questions for you. However, other uses for such technologies have already been planned, and in some cases implemented, within the military and private industry for security-oriented and offensive purposes.

Automated or A.I. systems that could find weaknesses in networks and systems, as well as automated A.I.s with fire control over certain remotely operated vehicles, are on the near horizon. Just as Google and others have made automated self-driving cars with an A.I. component that makes decisions in emergency situations, like crash scenarios with pedestrians, the same technologies are already being talked about in warfare. In the case of automated cars with rudimentary A.I., we have already seen deaths and mishaps because the technology is not truly aware and capable of handling every permutation that is put in front of it[2].

Conversely, if one were to hack or program these technologies to disregard safety heuristics, a very lethal outcome is possible. This is where we have the potential for A.I. that is not fully aware and able to determine right from wrong, leading to the possibility of abuse of these technologies and fears of this happening with devices like Alexa and others[3]. In one recent case a baby was put in danger after a Nest device was hacked through poor passwords and the temperature in the room was set above 90 degrees. In another recent instance an Internet of Things device was hacked in much the same way and used to scare the inhabitants of the home with an alert that North Korea had launched nuclear missiles at the U.S.

Both of the previous cases were low-level attacks on semi-dumb devices —  now imagine one of these devices with access to networked weapons systems that perhaps have a weakness that could be subverted[4]. In another scenario, A.I. programs such as those discussed in cyber warfare could also be copied or subverted and unleashed not only by nation-state actors but also by a smart teen or a group of criminals for their own ends. Such programs are a thing of the near future, but if you want an analogy, you can look at open source hacking tools or platforms like MetaSploit, which have automated scripts and are now used by adversaries as well as our own forces.

Hackers and crackers today have already begun using A.I. technologies in their attacks, and as the technology becomes more stable and accessible, there will be a move toward whole campaigns being carried out by automated systems attacking targets all over the world[5]. This automation will cause collateral issues at the nation-state level when trying to attribute the actions of such systems and determine who may have set them upon the victim. How will attribution work when the attacking system is actually self-sufficient and perhaps not under the control of anyone?

Finally, the trope of a true A.I. that goes rogue is not just a trope. It is entirely possible that a program or system that is truly sentient might consider humans an impediment to its own existence and attempt to eradicate us from everything it can access. This is of course a long-distant possibility, but let us leave you with one thought: in the last presidential election and the 2020 election cycle to come, automated and A.I. systems have been and will be deployed to game social media and perhaps election systems themselves. This technology is not just a far-flung possibility; rudimentary systems are extant and being used.

The only difference between now and tomorrow is that at the moment, people are pointing these technologies at the problems they want to solve. In the future, the A.I. may be the one choosing the problem in need of solving and this choice may not be in our favor.


Endnotes:

[1] Cummings, M. (2017, January 1). Artificial Intelligence and the Future of Warfare. Retrieved February 2, 2019, from https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

[2] Levin, S., & Wong, J. C. (2018, March 19). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. Retrieved February 2, 2019, from https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe

[3] Menn, J. (2018, August 08). New genre of artificial intelligence programs take computer hacking… Retrieved February 2, 2019, from https://www.reuters.com/article/us-cyber-conference-ai/new-genre-of-artificial-intelligence-programs-take-computer-hacking-to-another-level-idUSKBN1KT120

[4] Jowitt, T. (2018, August 08). IBM DeepLocker Turns AI Into Hacking Weapon | Silicon UK Tech News. Retrieved February 1, 2019, from https://www.silicon.co.uk/e-innovation/artificial-intelligence/ibm-deeplocker-ai-hacking-weapon-235783

[5] Dvorsky, G. (2017, September 12). Hackers Have Already Started to Weaponize Artificial Intelligence. Retrieved February 1, 2019, from https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425


Options for Lethal Autonomous Weapons Systems and the Five Eyes Alliance

Dan Lee is a government employee who works in Defense, and has varying levels of experience working with Five Eyes nations (US, UK, Canada, Australia, New Zealand).  He can be found on Twitter @danlee961.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Options for Lethal Autonomous Weapons Systems and the Five Eyes Alliance

Date Originally Written:  September 29, 2018.

Date Originally Published:  October 29, 2018.

Author and / or Article Point of View:  The article is written from the point of view of Five Eyes national defense organizations. 

Background:  The Five Eyes community consists of the United Kingdom (UK), the United States (US), Canada, Australia and New Zealand; its origins can be traced to the requirement to cooperate in Signals Intelligence after World War Two[1]. Arguably, the alliance is still critical today in dealing with terrorism and other threats[2].

Autonomous systems may provide the Five Eyes alliance an asymmetric advantage, or ‘offset’, to counter its strategic competitors that are on track to field larger and more technologically advanced military forces. The question of whether or not to develop and employ Lethal Autonomous Weapons Systems (LAWS) is currently contentious due to the ethical and social considerations involved with allowing machines to choose targets and apply lethal force without human intervention[3][4][5]. Twenty-six countries are calling for a prohibition on LAWS, while three Five Eyes partners (Australia, UK and the US) as well as other nations including France, Germany, South Korea and Turkey do not support negotiating new international laws on the matter[6]. When considering options, at least two issues must also be addressed.

The first issue is defining what LAWS are; a common lexicon is required to allow Five Eyes partners to conduct an informed discussion as to whether they can come to a common policy position on the development and employment of these systems. Public understanding of autonomy is mostly derived from the media or from popular culture, and this may have contributed to the hype around the topic[7][8][9]. Currently there is no universally accepted definition of what constitutes a fully autonomous lethal weapon system, which has in turn disrupted discussions at the United Nations (UN) on how these systems should be governed by the Convention on Certain Conventional Weapons (CCWUN)[10]. The US and UK have different definitions, which makes agreement on a common position difficult even amongst like-minded nations[11][12]. This lack of a common lexicon is further complicated by some strategic competitors using more liberal definitions of LAWS, allowing them to support a ban while simultaneously developing weapons that do not require meaningful human control[13][14][15][16].

The second issue is one of agreeing how autonomous systems might be employed within the Five Eyes alliance. For example, as a strategic offset technology, the use of autonomous systems might mitigate the relatively small size of the alliance’s military forces relative to an adversary’s force[17]. Tactically, they could be deployed completely independently of humans to remove personnel from danger, as swarms to overwhelm the enemy with complexity, or as part of a human-machine team to augment human capabilities[18][19][20].

A failure of Five Eyes partners to come to a complete agreement on what is and is not permissible in developing and employing LAWS does not necessarily mean a halt to progress; indeed, partial agreement may allow some partners to cover the capability gaps of others. If some members of the alliance choose not to develop lethal systems, it may free their resources to focus on autonomous Intelligence, Surveillance, and Reconnaissance (ISR) or logistics capabilities. In a Five Eyes coalition environment, the members who chose not to develop lethal systems could provide support to the LAWS-enabled forces of other partners, providing lethal autonomy to the alliance as a whole, if not to individual member states.

Significance:  China and Russia may already be developing LAWS; a failure on the part of the Five Eyes alliance to actively manage this issue may put it at a relative disadvantage in the near future[21][22][23][24]. Further, dual-use civilian technologies already exist that may be adapted for military use, such as the Australian COTSbot and the Chinese Mosquito Killer Robot[25][26]. If the Five Eyes alliance does not either disrupt the development of LAWS by its competitors, or attain relative technological superiority, it may find itself starting in a position of disadvantage during future conflicts or deterrence campaigns.

Option #1:  Five Eyes nations work with the UN to define LAWS and ban their development and use; diplomatic, economic and informational measures are applied to halt or disrupt competitors’ LAWS programs. Technological offset is achieved by Five Eyes autonomous military systems development that focuses on logistics and ISR capabilities, such as Boston Dynamics’ LS3 AlphaDog and the development of driverless trucks to free soldiers from non-combat tasks[27][28][29][30].

Risk:  In the event of conflict, allied combat personnel would be more exposed to danger than the enemy as their nations had, in essence, decided to not develop a technology that could be of use in war. Five Eyes militaries would not be organizationally prepared to develop, train with and employ LAWS if necessitated by an existential threat. It may be too late to close the technological capability gap after the commencement of hostilities.

Gain:  The Five Eyes alliance’s legitimacy regarding human rights and the just conduct of war is maintained in the eyes of the international community. A LAWS arms race and subsequent proliferation can be avoided.

Option #2:  Five Eyes militaries actively develop LAWS to achieve superiority over their competitors.

Risk:  The Five Eyes alliance’s legitimacy may be undermined in the eyes of the international community and organizations such as The Campaign to Stop Killer Robots, the UN, and the International Committee of the Red Cross. Public opinion in some partner nations may increasingly disapprove of LAWS development and use, which could fragment the alliance in a similar manner to the Australia, New Zealand and United States Security Treaty[31][32].

The declared development and employment of LAWS may catalyze a resource-intensive international arms race. Partnerships between government and academia and industry may also be adversely affected[33][34].

Gain:  Five Eyes nations avoid a technological disadvantage relative to their competitors; the Chinese information campaign to outmanoeuvre Five Eyes LAWS development through the manipulation of CCWUN will be mitigated. Once LAWS development is accepted as inevitable, proliferation may be regulated through the UN.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1] Tossini, J.V. (November 14, 2017). The Five Eyes – The Intelligence Alliance of the Anglosphere. Retrieved from https://ukdefencejournal.org.uk/the-five-eyes-the-intelligence-alliance-of-the-anglosphere/

[2] Grayson, K. Time to bring ‘Five Eyes’ in from the cold? (May 4, 2018). Retrieved from https://www.aspistrategist.org.au/time-bring-five-eyes-cold/

[3] Lange, K. 3rd Offset Strategy 101: What It Is, What the Tech Focuses Are (March 30, 2016). Retrieved from http://www.dodlive.mil/2016/03/30/3rd-offset-strategy-101-what-it-is-what-the-tech-focuses-are/

[4] International Committee of the Red Cross. Expert Meeting on Lethal Autonomous Weapons Systems Statement (November 15, 2017). Retrieved from https://www.icrc.org/en/document/expert-meeting-lethal-autonomous-weapons-systems

[5] Human Rights Watch and Harvard Law School’s International Human Rights Clinic. Fully Autonomous Weapons: Questions and Answers. (October 2013). Retrieved from https://www.hrw.org/sites/default/files/supporting_resources/10.2013_killer_robots_qa.pdf

[6] Campaign to Stop Killer Robots. Report on Activities Convention on Conventional Weapons Group of Governmental Experts meeting on lethal autonomous weapons systems – United Nations Geneva – 9-13 April 2018. (2018) Retrieved from https://www.stopkillerrobots.org/wp-content/uploads/2018/07/KRC_ReportCCWX_Apr2018_UPLOADED.pdf

[7] Scharre, P. Why You Shouldn’t Fear ‘Slaughterbots’. (December 22, 2017). Retrieved from https://spectrum.ieee.org/automaton/robotics/military-robots/why-you-shouldnt-fear-slaughterbots

[8] Winter, C. (November 14, 2017). ‘Killer robots’: autonomous weapons pose moral dilemma. Retrieved from https://www.dw.com/en/killer-robots-autonomous-weapons-pose-moral-dilemma/a-41342616

[9] Devlin, H. Killer robots will only exist if we are stupid enough to let them. (June 11, 2018). Retrieved from https://www.theguardian.com/technology/2018/jun/11/killer-robots-will-only-exist-if-we-are-stupid-enough-to-let-them

[10] Welsh, S. Regulating autonomous weapons. (November 16, 2017). Retrieved from https://www.aspistrategist.org.au/regulating-autonomous-weapons/

[11] United States Department of Defense. Directive Number 3000.09. (November 21, 2012). Retrieved from https://www.hsdl.org/?view&did=726163

[12] Lords AI committee: UK definitions of autonomous weapons hinder international agreement. (April 17, 2018). Retrieved from http://www.article36.org/autonomous-weapons/lords-ai-report/

[13] Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects – Geneva, 9–13 April 2018 (first week) Item 6 of the provisional agenda – Other matters. (11 April 2018). Retrieved from https://www.unog.ch/80256EDD006B8954/(httpAssets)/E42AE83BDB3525D0C125826C0040B262/$file/CCW_GGE.1_2018_WP.7.pdf

[14] Welsh, S. China’s shock call for ban on lethal autonomous weapon systems. (April 16, 2018). Retrieved from https://www.janes.com/article/79311/china-s-shock-call-for-ban-on-lethal-autonomous-weapon-systems

[15] Mohanty, B. Lethal Autonomous Dragon: China’s approach to artificial intelligence weapons. (Nov 15 2017). Retrieved from https://www.orfonline.org/expert-speak/lethal-autonomous-weapons-dragon-china-approach-artificial-intelligence/

[16] Kania, E.B. China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems. (April 17, 2018). Retrieved from https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-weapons-systems

[17] Tomes, R. Why the Cold War Offset Strategy was all about Deterrence and Stealth. (January 14, 2015) Retrieved from https://warontherocks.com/2015/01/why-the-cold-war-offset-strategy-was-all-about-deterrence-and-stealth/

[18] Lockie, A. The Air Force just demonstrated an autonomous F-16 that can fly and take out a target all by itself. (April 12, 2017). Retrieved from https://www.businessinsider.com.au/f-16-drone-have-raider-ii-loyal-wingman-f-35-lockheed-martin-2017-4?r=US&IR=T

[19] Schuety, C. & Will, L. An Air Force ‘Way of Swarm’: Using Wargaming and Artificial Intelligence to Train Drones. (September 21, 2018). Retrieved from https://warontherocks.com/2018/09/an-air-force-way-of-swarm-using-wargaming-and-artificial-intelligence-to-train-drones/

[20] Ryan, M. Human-Machine Teaming for Future Ground Forces. (2018). Retrieved from https://csbaonline.org/uploads/documents/Human_Machine_Teaming_FinalFormat.pdf

[21] Perrigo, B. Global Arms Race for Killer Robots Is Transforming the Battlefield. (Updated: April 9, 2018). Retrieved from http://time.com/5230567/killer-robots/

[22] Hutchison, H.C. Russia says it will ignore any UN ban of killer robots. (November 30, 2017). Retrieved from https://www.businessinsider.com/russia-will-ignore-un-killer-robot-ban-2017-11/?r=AU&IR=T

[23] Mizokami, K. Kalashnikov Will Make an A.I.-Powered Killer Robot – What could possibly go wrong? (July 20, 2017). Retrieved from https://www.popularmechanics.com/military/weapons/news/a27393/kalashnikov-to-make-ai-directed-machine-guns/

[24] Atherton, K. Combat robots and cheap drones obscure the hidden triumph of Russia’s wargame. (September 25, 2018). Retrieved from https://www.c4isrnet.com/unmanned/2018/09/24/combat-robots-and-cheap-drones-obscure-the-hidden-triumph-of-russias-wargame/

[25] Platt, J.R. A Starfish-Killing, Artificially Intelligent Robot Is Set to Patrol the Great Barrier Reef Crown of thorns starfish are destroying the reef. Bots that wield poison could dampen the invasion. (January 1, 2016) Retrieved from https://www.scientificamerican.com/article/a-starfish-killing-artificially-intelligent-robot-is-set-to-patrol-the-great-barrier-reef/

[26] Skinner, T. Presenting, the Mosquito Killer Robot. (September 14, 2016). Retrieved from https://quillorcapture.com/2016/09/14/presenting-the-mosquito-killer-robot/

[27] Defence Connect. DST launches Wizard of Aus. (November 10, 2017). Retrieved from https://www.defenceconnect.com.au/key-enablers/1514-dst-launches-wizard-of-aus

[28] Pomerleau, M. Air Force is looking for resilient autonomous systems. (February 24, 2016). Retrieved from https://defensesystems.com/articles/2016/02/24/air-force-uas-contested-environments.aspx

[29] Boston Dynamics. LS3 Legged Squad Support Systems. The AlphaDog of legged robots carries heavy loads over rough terrain. (2018). Retrieved from https://www.bostondynamics.com/ls3

[30] Evans, G. Driverless vehicles in the military – will the potential be realised? (February 2, 2018). Retrieved from https://www.army-technology.com/features/driverless-vehicles-military/

[31] Hambling, D. Why the U.S. Is Backing Killer Robots. (September 15, 2018). Retrieved from https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/

[32] Ministry for Culture and Heritage. ANZUS treaty comes into force 29 April 1952. (April 26, 2017). Retrieved from https://nzhistory.govt.nz/anzus-comes-into-force

[33] Shalal, A. Researchers to boycott South Korean university over AI weapons work. (April 5, 2018). Retrieved from https://www.reuters.com/article/us-tech-korea-boycott/researchers-to-boycott-south-korean-university-over-ai-weapons-work-idUSKCN1HB392

[34] Shane, S & Wakabayashi, D. ‘The Business of War’: Google Employees Protest Work for the Pentagon. (April 4, 2018). Retrieved from https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html

 


An Assessment of the Likely Roles of Artificial Intelligence and Machine Learning Systems in the Near Future

Ali Crawford has an M.A. from the Patterson School of Diplomacy and International Commerce where she focused on diplomacy, intelligence, cyber policy, and cyber warfare.  She tweets at @ali_craw.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of the Likely Roles of Artificial Intelligence and Machine Learning Systems in the Near Future

Date Originally Written:  May 25, 2018.

Date Originally Published:  July 16, 2018.

Summary:  While the U.S. Department of Defense (DoD) continues to experiment with Artificial Intelligence (AI) as part of its Third Offset Strategy, questions regarding levels of human participation, ethics, and legality remain.  Though a battlefield in the future will likely see autonomous decision-making technology as a norm, the transition between modern applications of artificial intelligence and potential applications will focus on incorporating human-machine teaming into existing frameworks.

Text:   In an essay titled Centaur Warfighting: The False Choice of Humans vs. Automation, author Paul Scharre concludes that the best warfighting systems will combine human and machine intelligence to create hybrid cognitive architectures that leverage the advantages of each[1].  There are three potential partnerships.  The first potential partnership pegs humans as essential operators, meaning AI cannot operate without its human counterpart.  The second potential partnership tasks humans as moral agents who make the value-based decisions that prevent or promote the use of AI in combat situations.  Finally, the third potential partnership, in which humans are fail-safes, gives more operational authority to AI systems; the human operator only interferes if the system malfunctions or fails.  Artificial intelligence, and specifically autonomous weapons systems, are controversial technologies that have the capacity to greatly improve human efficiency while reducing potential human burdens.  But before the Department of Defense embraces intelligent weapons systems or programs with full autonomy, more human-machine partnerships to test the viability, legality, and ethical implications of artificial intelligence will likely occur.
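
One way to see how the three partnership models differ is to encode them as a single engagement gate, as in the minimal sketch below.  The enum names follow Scharre’s three models as summarized above; everything else (the function, its arguments, and the malfunction check) is an illustrative assumption rather than anything taken from the essay.

```python
# Minimal sketch of the three human-machine partnership models as one gate.
# Enum names follow the text; the rest is an illustrative assumption.

from enum import Enum, auto


class PartnershipModel(Enum):
    ESSENTIAL_OPERATOR = auto()  # AI cannot operate without its human counterpart
    MORAL_AGENT = auto()         # human makes the value-based go / no-go decision
    FAIL_SAFE = auto()           # AI operates; human steps in only on malfunction


def may_engage(model: PartnershipModel, human_approves: bool,
               system_malfunctioning: bool) -> bool:
    """Decide whether the system may act, given the partnership model in force."""
    if model in (PartnershipModel.ESSENTIAL_OPERATOR, PartnershipModel.MORAL_AGENT):
        # In both of the first two models nothing happens without the human,
        # though their roles differ: the essential operator is part of running
        # the system, while the moral agent vets whether acting is permissible.
        return human_approves
    # FAIL_SAFE: the system proceeds unless the human has had to intervene.
    return not system_malfunctioning


print(may_engage(PartnershipModel.FAIL_SAFE, human_approves=False,
                 system_malfunctioning=False))  # True: the AI acts on its own
```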

To better understand why artificial intelligence is controversial, it is necessary to distinguish between the arguments for and against using AI with operational autonomy.  In 2015, prominent artificial intelligence experts, including Stephen Hawking and Elon Musk, penned an open letter in which the potential benefits of AI are highlighted but weighed against short-term questions of ethics and the applicability of law[2].  A system with an intelligent, decision-making brain carries significant consequences.  What if the system targets civilians?  How does international law apply to a machine?  Will an intelligent machine respond to commands?  These are questions with which military and ethical theorists grapple.

For a more practical thought problem, consider the Moral Machine project from the Massachusetts Institute of Technology[3].  You, the judge, are presented with dilemmas involving intelligent, self-driving cars.  The car suffers brake failure and must decide what to do next.  If the car continues straight, it will strike and kill x number of men, women, children, elderly, or animals.  If the car swerves, it will crash into a barrier, causing the immediate deaths of the passengers, who are also x number of men, women, children, or elderly.  Although you are the judge in Moral Machine, the simulation is indicative of the ethical and moral dilemmas that may arise when employing artificial intelligence in, say, combat.  In these scenarios, the ethical theorist takes issue with the machine having the decision-making capacity to place value on human life, and to potentially make irreversible and damaging decisions.

Assuming autonomous weapons systems do have a place in the future of military operations, what would precede them?  Realistically, human-machine teaming would be introduced before a fully-autonomous machine.  What exactly is human-machine teaming, and why is it important when discussing the future of artificial intelligence?  To gain and maintain superiority in operational domains, both past and present, the United States has ensured that its conventional deterrents are powerful enough to dissuade great powers from going to war with the United States[4].  Thus, an offset strategy focuses on gaining advantages over enemy powers and capabilities.  Historically, the First Offset occurred in the early 1950s upon the introduction of tactical nuclear weapons.  The Second Offset manifested a little later, in the 1970s, with the implementation of precision-guided weapons after the Soviet Union gained nuclear parity with the United States[5].  The Third Offset, a relatively modern strategy, generally focuses on maintaining technological superiority among the world’s great powers.

Human-machine teaming is part of the Department of Defense’s Third Offset strategy, as are deep learning systems and cyber weaponry[6].  Machine learning systems relieve humans of a breadth of burdensome tasks or augment operations to decrease potential risks to the lives of human fighters.  For example, in 2017 the DoD began working with an intelligent system called “Project Maven,” which uses deep learning technology to identify objects of interest in drone surveillance footage[7].  Terabytes of footage are collected each day from surveillance drones.  Human analysts spend significant amounts of time sifting through this data to identify objects of interest, and only then do they begin their analytical processes[8].  Project Maven’s deep-learning algorithm allows human analysts to spend more time practicing their craft to produce intelligence products and less time processing information.  Despite Google’s recent departure from the program, Project Maven will continue to operate[9].  Former Deputy Defense Secretary Bob Work established the Algorithmic Warfare Cross-Functional Team in early 2017 to work on Project Maven.  In the announcement, Work described artificial intelligence as necessary for strategic deterrence, noting “the [DoD] must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors[10].”
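
The division of labor described above can be sketched in a few lines: a detector scans each frame of footage and only high-confidence detections are queued for a human analyst.  This is a hypothetical outline of that workflow, not Project Maven’s actual algorithm or code; `run_detector`, the dictionary format, and the 0.8 threshold are all assumptions for illustration.

```python
# Hypothetical outline of an analyst-relief workflow: run a detector over each
# frame and queue only high-confidence detections for human review.
# Not Project Maven's actual algorithm; names and thresholds are assumptions.

from typing import Any, Dict, Iterable, List


def run_detector(frame: Any) -> List[Dict[str, Any]]:
    """Placeholder for a trained object detector returning labelled detections."""
    # A real implementation would run a neural network over the frame.
    return []


def triage(frames: Iterable[Any], threshold: float = 0.8) -> List[Dict[str, Any]]:
    """Keep only high-confidence detections so analysts review far less footage."""
    review_queue = []
    for frame in frames:
        for detection in run_detector(frame):
            if detection.get("confidence", 0.0) >= threshold:
                review_queue.append({"frame": frame,
                                     "label": detection.get("label"),
                                     "confidence": detection["confidence"]})
    return review_queue
```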

This article collectively refers to human-machine teaming as processes in which humans interact in some capacity with artificial intelligence.  However, human-machine teaming transcends multiple technological fields and is not limited to being a prerequisite for autonomous weaponry[11].  Human-robot teaming may begin to appear in the immediate future given developments in robotics.  Boston Dynamics, a premier engineering and robotics company, is well-known for its videos of human- and animal-like robots completing everyday tasks.  Imagine a machine like BigDog working alongside human soldiers or rescue workers, or even navigating inaccessible terrain[12].  These robots are not fully autonomous, yet the unique partnership between human and robot offers a new set of opportunities and challenges[13].

Before fully-autonomous systems or weapons have a place in combat, human-machine teams need to be assessed as successful and sustainable.  These teams have the potential to improve human performance, reduce risks to human counterparts, and expand national power – all goals of the Third Offset Strategy.  However, there are challenges to procuring and incorporating artificial intelligence.  The DoD will need to seek out deeper relationships with technological and engineering firms, not just defense contractors.

Using humans as moral agents and fail-safes allows the problems of ethical and lawful applicability to be tested while opening the debate on the future use of autonomous systems.  Autonomous weapons will likely not see combat until these challenges, coupled with ethical and lawful considerations, are thoroughly regulated and tested.


Endnotes:

[1] Paul Scharre, Temp. Int’l & Comp. L.J., “Centaur Warfighting: The False Choice of Humans vs. Automation,” 2016, https://sites.temple.edu/ticlj/files/2017/02/30.1.Scharre-TICLJ.pdf

[2] Daniel Dewey, Stuart Russell, Max Tegmark, “Research Priorities for Robust and Beneficial Artificial Intelligence,” 2015, https://futureoflife.org/data/documents/research_priorities.pdf?x20046

[3] Moral Machine, http://moralmachine.mit.edu/

[4] Cheryl Pellerin, Department of Defense, Defense Media Activity, “Work: Human-Machine Teaming Represents Defense Technology Future,” 8 November 2015, https://www.defense.gov/News/Article/Article/628154/work-human-machine-teaming-represents-defense-technology-future/

[5] Ibid.

[6] Katie Lange, DoDLive, “3rd Offset Strategy 101: What It Is, What the Tech Focuses Are,” 30 March 2016, http://www.dodlive.mil/2016/03/30/3rd-offset-strategy-101-what-it-is-what-the-tech-focuses-are/; and Mackenzie Eaglen, RealClearDefense, “What is the Third Offset Strategy?,” 15 February 2016, https://www.realcleardefense.com/articles/2016/02/16/what_is_the_third_offset_strategy_109034.html

[7] Cheryl Pellerin, Department of Defense News, Defense Media Activity, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” 21 July 2017, https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

[8] Tajha Chappellet-Lanier, “Pentagon’s Project Maven responds to criticism: ‘There will be those who will partner with us’” 1 May 2018, https://www.fedscoop.com/project-maven-artificial-intelligence-google/

[9] Tom Simonite, Wired, “Pentagon Will Expand AI Project Prompting Protests at Google,” 29 May 2018, https://www.wired.com/story/googles-contentious-pentagon-project-is-likely-to-expand/

[10] Cheryl Pellerin, Department of Defense, Defense Media Activity, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” 21 July 2017, https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

[11] Maj. Gen. Mick Ryan, Defense One, “How to Plan for the Coming Era of Human-Machine Teaming,” 25 April 2018, https://www.defenseone.com/ideas/2018/04/how-plan-coming-era-human-machine-teaming/147718/

[12] Boston Dynamic Big Dog Overview, March, 2010, https://www.youtube.com/watch?v=cNZPRsrwumQ

[13] Richard Priday, Wired, “What’s really going on in those Boston Dynamics robot videos?,” 18 February 2018, http://www.wired.co.uk/article/boston-dynamics-robotics-roboticist-how-to-watch
