Simple Lethality: Assessing the Potential for Agricultural Unmanned Aerial and Ground Systems to Deploy Biological or Chemical Weapons

William H. Johnson, CAPT, USN/Ret, holds a Master of Aeronautical Science (MAS) from Embry-Riddle Aeronautical University and an MA in Military History from Norwich University. He is currently an Adjunct Assistant Professor at Embry-Riddle in the College of Aeronautics, teaching unmanned system development, control, and interoperability. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Simple Lethality: Assessing the Potential for Agricultural Unmanned Aerial and Ground Systems to Deploy Biological or Chemical Weapons

Date Originally Written:  August 15, 2020.

Date Originally Published:  November 18, 2020.

Author and / or Article Point of View:  The author is a retired U.S. Naval Flight Officer who held command of the Navy’s sole unmanned air system squadron between 2001 and 2002. He has presented technical papers on unmanned systems, published on the same in professional journals, and has taught unmanned systems since 2016. The article is written from the point of view of an American analyst considering military force vulnerability to small, improvised, unmanned aerial or ground systems, hereafter collectively referred to as UxS, equipped with existing technology for agricultural chemical dispersal over a broad area.

Summary:  Small, locally built unmanned vehicles, similar to those used in agriculture, can easily be configured to release a chemical or biological payload. Air-dispersed agents could be released over a populated area with low likelihood of either interdiction or traceability. Domestic counter-UxS capabilities can not only eliminate unwanted imagery collection, but also mitigate the growing potential for an inexpensive chemical or biological weapon attack on U.S. soil.

Text:  The ongoing development and improvement of UxS – primarily aerial, but also ground-operated – to optimize efficiency in the agricultural arena are matters of pride among manufacturers.  These developments and improvements are of interest to regulatory bodies such as the Federal Aviation Administration, and they offer an opportunity to those seeking to conduct chemical or biological attacks on U.S. soil. While the latter note concerning opportunity for enemies may appear flippant and simplistic at first blush, it is the most important of the three. Accepting the idea that hostile entities consider environment and objective(s) when choosing physical or cyber attack platforms, the availability of chemical-dispersing unmanned vehicles with current system control options makes such weapons not only feasible, but ideal[1].

Commercially available UxS, such as the Yamaha RMAX[2] or the DJI Agras MG-1[3], can be launched remotely and, with a simple, available autopilot, fly a pre-programmed course until fuel exhaustion. These capabilities offer the opportunity for an insurgent to recruit a similarly minded, hobbyist-level UAS builder to acquire the necessary parts and assemble the vehicle in private. The engineering of such a small craft, even one as large as the RMAX, is quite simple, and the parts could be innocuously and anonymously acquired by anyone with a credit card. Even assembling a 25-liter dispersal tank and setting a primitive timer for release would not be complicated.

With such a simple, garage-built craft, the dispersal tank could be filled with either chemical or biological material and launched at any time from a suburban convenience store parking lot.  The craft could then execute a straight-and-level flight path over an unaware downtown area and disperse its tank contents at a predetermined time-of-flight. This is clearly not a precision mission, but it would be quite easy to fund and execute[4].

The danger lies in the simplicity[5]. As a historical example, Nazi V-1 “buzz bombs” in World War II were occasionally pointed at a target and fueled to match the rough, desired time of flight needed to cross the planned distance. The V-1 would then simply fall out of the sky once out of fuel. Existing autopilots for any number of commercially available UxS are far more sophisticated than that, and easy to obtain. The attack previously described would be difficult to trace and almost impossible to predict, especially if assembly were done with simple parts from a variety of suppliers. The extrapolated problem is that without indication or warning, even presently available counter-UxS technology would have no reason to be brought to bear until after the attack. The cost, given the potential for terror and destabilization, would be negligible to an adversary. The ability to fly such missions simultaneously over a number of metropolitan areas could create devastating consequences in terms of panic.

The current mitigations to UxS threats are few, but they do pose some challenge to an entity planning such a mission. Effective chemical or weaponized biological material is well-tracked by a variety of global organizations.  As such, movement of any amount of such material into the United States would be quite difficult for even the best-resourced individuals or groups. Additionally, there are some unique parts necessary for construction of a heavier-lift rotary vehicle.  With some effort, those parts could be cataloged under processes similar to existing import-export control policies and practices.

Finally, the expansion of machine-learning-driven artificial intelligence, the ongoing improvement in battery storage, and the ubiquity of UxS hobbyists and their products make this type of threat more feasible by the day. Current domestic counter-UxS technologies have been developed largely in response to safety threats posed by small UxS to manned aircraft, and also because of the potential for unapproved imagery collection and privacy violation. To those rationales, it will soon be time to add small-scale counter-Weapons of Mass Destruction.


Endnotes:

[1] Ash Rossiter, “Drone usage by militant groups: exploring variation in adoption,” Defense & Security Analysis, 34:2, 113-126, https://doi.org/10.1080/14751798.2018.1478183

[2] Elan Head, “FAA grants exemption to unmanned Yamaha RMAX helicopter.” Verticalmag.com, online: https://www.verticalmag.com/news/faagrantsexemptiontounmannedyamaharmaxhelicopter Accessed: August 15, 2020

[3] One example of this vehicle is available online at https://ledrones.org/product/dji-agras-mg-1-octocopter-argriculture-drone-ready-to-fly-bundle Accessed: August 15, 2020

[4] “FBI: Man plotted to fly drone-like toy planes with bombs into school” (2014). CBS News. Retrieved from https://www.cbsnews.com/news/fbi-man-in-connecticut-plotted-to-fly-drone-like-toy-planes-with-bombs-into-school Accessed: August 10, 2020

[5] Wallace, R. J., & Loffi, J. M. (2015). Examining Unmanned Aerial System Threats & Defenses: A Conceptual Analysis. International Journal of Aviation, Aeronautics, and Aerospace, 2(4). https://doi.org/10.15394/ijaaa.2015.1084


Assessing the Chinese People’s Liberation Army’s Surreptitious Artificial Intelligence Build-Up

Editor’s Note:  This article is part of our Below Threshold Competition: China writing contest which took place from May 1, 2020 to July 31, 2020.  More information about the contest can be found on the Divergent Options website.


Richard Tilley is a strategist within the Office of the Secretary of Defense. Previously, Richard served as a U.S. Army Special Forces Officer and a National Security Advisor in the U.S. House of Representatives. He is on Twitter @RichardTilley6 and on LinkedIn. The views contained in this article are the author’s alone and do not represent the views of the Department of Defense or the United States Government.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization or any group.


Title:  Assessing the Chinese People’s Liberation Army’s Surreptitious Artificial Intelligence Build-Up

Date Originally Written:  July 6, 2020.

Date Originally Published:  August 14, 2020.

Author and / or Article Point of View:  The author is an unconventional warfare scholar and strategist. He believes renewed American interest in great power competition and Chinese approaches to unrestricted warfare require the United States national security apparatus to better appreciate the disruptive role advanced technology will play on the future battlefield.

Summary:  China’s dreams of regional and global hegemony require a dominant People’s Liberation Army that faces the dilemma of accruing military power while not raising the ire of the United States. To meet this challenge, the Chinese Communist Party has bet heavily on artificial intelligence as a warfighting game-changer that it can acquire surreptitiously and remain below-the-threshold of armed conflict with the United States.

Text:  President Xi Jinping’s introduction of the “The China Dream” in 2013 offers the latest iteration of the Chinese Communist Party’s (CCP) decades-long quest to establish China in its rightful place atop the global hierarchy. To achieve this goal, Xi calls for “unison” between China’s newfound soft power and the People’s Liberation Army’s (PLA) hard power[1]. But, by the CCP’s own admission, “The PLA still lags far behind the world’s leading militaries[2].” Cognizant of this capability deficit, Beijing adheres to the policy of former Chairman Deng Xiaoping, “Hide your strength, bide your time” until the influence of the Chinese military can match that of the Chinese economy.

For the PLA, Deng’s maxim presents a dilemma: how to build towards militarily eclipsing the United States while remaining below the threshold of eliciting armed response. Beijing’s solution is to bet heavily on artificial intelligence (AI) and its potential to upend the warfighting balance of power.

In simple terms, AI is the ability of machines to perform tasks that normally require human intelligence. AI is not a piece of hardware but rather a technology that can be integrated into nearly any system, enabling it to compute more quickly, accurately, and intuitively. AI works by combining massive amounts of data with powerful, iterative algorithms to identify new associations and rules hidden therein. By applying these associations and rules to new scenarios, scientists hope to produce AI systems with reasoning and decision-making capabilities matching or surpassing those of humans.
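
To make the data-plus-iteration idea concrete, the sketch below learns a decision rule hidden in a handful of labeled data points. It is a toy illustration only: the data, the hidden rule, and the threshold are invented, and no resemblance to any fielded system, Chinese or American, is implied.

```python
# Toy illustration of "iterative algorithms finding rules hidden in data."
# The dataset is invented: the hidden rule is "label is 1 when x > 5."
data = [(1.0, 0), (2.5, 0), (4.0, 0), (6.0, 1), (7.5, 1), (9.0, 1)]

weight, bias, learning_rate = 0.0, 0.0, 0.1

for epoch in range(100):                      # iterate over the data many times
    for x, label in data:
        prediction = 1 if weight * x + bias > 0 else 0
        error = label - prediction            # compare output to ground truth
        weight += learning_rate * error * x   # nudge the parameters toward
        bias += learning_rate * error         # the rule hidden in the data

# The learned parameters now encode the association; apply it to a new case.
print(1 if weight * 8.0 + bias > 0 else 0)    # prints 1: the rule generalizes
```

Scaled up from one invented feature to millions of real ones, this same loop is what lets an AI system extract associations no human analyst specified in advance.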

China’s quest for regional and global military dominance has led to a search for a “Revolution in Military Affairs (RMA) with Chinese characteristics[3].” An RMA is a game-changing evolution in warfighting that upends the balance of power. In his seminal work on the subject, former Under Secretary of Defense Michael Vickers found eighteen such innovations in history, including massed infantry, artillery, the railroad, the telegraph, and atomic weapons[4]. In each case, a military power introduced a disruptive technology or tactic that rapidly and enduringly changed warfighting. The PLA believes that AI can be its game-changer in the next conflict.

Evidence of the PLA’s confidence in AI abounds. Official PRC documents from 2017 called for “The use of new generation AI technologies as a strong support to command decision-making, military deductions [strategy], and defense equipment, among other applications[5].” Beijing matched this rhetoric with considerable funding, which the U.S. Department of Defense estimated as $12 billion in 2017 and growing to as much as $70 billion in 2020[6].

AI’s potential impact in a Western Pacific military confrontation is significant. Using AI, PLA intelligence systems could detect, identify, and assess the possible intent of U.S. carrier strike groups more quickly and with greater accuracy than traditional human analysis. Then, PLA strike systems could launch swarming attacks coordinated by AI that overwhelm even the most advanced American aerial and naval defenses. Adding injury to insult, the PLA’s AI systems will learn from this engagement to strike the U.S. Military with even more efficacy in the future.

While pursuing AI, the CCP must still address the dilemma of staying below the threshold of armed conflict – thus the CCP masterfully conceals moves designed to give it an AI advantage. In the AI arms race, there are two key components: technology and data. To surpass the United States, China must dominate both, but it must do so surreptitiously.

AI systems require several technical components to operate optimally, including the talent, algorithms, and hardware on which they rely. Though Beijing is pouring untold resources into developing first-rate domestic capacity, it still relies on offshore sources for AI tech. To acquire this foreign know-how surreptitiously, the CCP engages in insidious foreign direct investment, joint ventures, cyber espionage, and talent acquisition[7] as a shortcut while it builds domestic AI production.

Successful AI also requires access to mountains of data. Generally, the more data input the better the AI output. To build these data stockpiles, the CCP routinely exploits its own citizens. National security laws passed in 2014 and 2017 mandate that Chinese individuals and organizations assist the state security apparatus when requested[8]. The laws make it possible for the CCP to easily collect and exploit Chinese personal data that can then be used to strengthen the state’s internal security apparatus – powered by AI. The chilling efficacy seen in controlling populations in Xinjiang and Hong Kong can be transferred to the international battlefield.

Abroad, the CCP leverages robust soft power to gain access to foreign data. Through programs like the Belt and Road Initiative, China offers low-cost modernization to tech-thirsty customers. Once installed, the host’s upgraded security, communication, or economic infrastructure allows Beijing to capture overseas data that reinforces its AI data sets and increases its understanding of the foreign environment[9]. This data enables the PLA to better train AI warfighting systems to operate anywhere in the world.

If the current trends hold, the United States is at risk of losing the AI arms race and hegemony in the Western Pacific along with it. Despite proclaiming that, “Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States[10],” Washington is only devoting $4.9 billion to unclassified AI research in fiscal year 2020[11], just seven percent of Beijing’s investment.
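
The seven-percent figure follows directly from the two investment estimates this article has already cited (endnotes [6] and [11]):

```latex
\frac{\text{U.S. unclassified AI research, FY2020}}{\text{estimated PRC AI investment, 2020}}
    = \frac{\$4.9\ \text{billion}}{\$70\ \text{billion}} = 0.07 = 7\%
```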

To keep pace, the United States can better comprehend and appreciate the consequences of allowing the PLA to dominate AI warfighting in the future. The stakes of the AI race are not dissimilar to the race for nuclear weapons during World War II. Only by approaching AI with the same interest, investment, and intensity of the Manhattan Project can U.S. military hegemony hope to be maintained.


Endnotes:

[1] Page, J. (2013, March 13). For Xi, a ‘China Dream’ of Military Power. Wall Street Journal. Retrieved June 20, 2020 from https://www.wsj.com/articles/SB10001424127887324128504578348774040546346

[2] The State Council Information Office of the People’s Republic of China. (2019). China’s National Defense in the New Era. (p. 6)

[3] Ibid.

[4] Vickers, M. G. (2010). The structure of military revolutions (Doctoral dissertation, Johns Hopkins University) (pp. 4-5). UMI Dissertation Publishing.

[5] PRC State Council, (2017, July 17). New Generation Artificial Intelligence Plan. (p. 1)

[6] Pawlyk, O. (2018, July 30). China Leaving the US behind on Artificial Intelligence: Air Force General. Military.com. Retrieved June 20, 2020 from https://www.military.com/defensetech/2018/07/30/china-leaving-us-behind-artificial-intelligence-air-force-general.html

[7] O’Conner, S. (2019). How Chinese Companies Facilitate Technology Transfer from the United States. U.S. – China Economic and Security Review Commission. (p. 3)

[8] Kharpal, A. (2019, March 5). Huawei Says It Would Never Hand Data to China’s Government. Experts Say It Wouldn’t Have a Choice. CNBC. Retrieved June 20, 2020 from https://www.cnbc.com/2019/03/05/huawei-would-have-to-give-data-to-china-government-if-asked-experts.html

[9] Chandran, N. (2018, July 12). Surveillance Fears Cloud China’s ‘Digital Silk Road.’ CNBC. Retrieved June 20, 2020 from https://www.cnbc.com/2018/07/11/risks-of-chinas-digital-silk-road-surveillance-coercion.html

[10] Trump, D. (2019, February 14). Executive Order 13859 “Maintaining American Leadership in Artificial Intelligence.” Retrieved June 20, 2020 from https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence

[11] Cornillie, C. (2019, March 28). Finding Artificial Intelligence Research Money in the Fiscal 2020 Budget. Bloomberg Government. Retrieved June 20, 2020 from https://about.bgov.com/news/finding-artificial-intelligence-money-fiscal-2020-budget


Alternative Future: The Perils of Trading Artificial Intelligence for Analysis in the U.S. Intelligence Community

John J. Borek served as a strategic intelligence analyst for the U.S. Army and later as a civilian intelligence analyst in the U.S. Intelligence Community.  He is currently an adjunct professor at Grand Canyon University where he teaches courses in governance and public policy. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Alternative Future: The Perils of Trading Artificial Intelligence for Analysis in the U.S. Intelligence Community

Date Originally Written:  June 12, 2020.

Date Originally Published:  August 12, 2020.

Author and / or Article Point of View:  The article is written from the point of view of a U.S. Congressional inquiry excerpt into an intelligence failure and the loss of Taiwan to China in 2035.

Summary:  The growing reliance on Artificial Intelligence (AI) to provide situational awareness and predictive analysis within the U.S. Intelligence Community (IC) created an opportunity for China to execute a deception plan, one which culminated in the sudden and complete loss of Taiwan’s independence in 2035.

Text:  The U.S. transition away from humans performing intelligence analysis to the use of AI was an inevitable progression as the amount of data collected for analysis reached levels humans could not hope to manage[1], while machine learning and artificial neural networks simultaneously developed to the level that they could match, if not outperform, human reasoning[2]. The integration of data scientists with analytic teams, which began in 2020, resulted in the attrition of both regional and functional analysts and the transformation of the duties of those remaining to that of editor and briefer[3][4].

Initial successes in the transition led to increasing trust and complacency. The “Black Box” program demonstrated its first major success in identifying terrorist networks and forecasting terrorist actions by fusing social media, network analysis, and clandestine collection, culminating in the successful preemption of the 2024 Freedom Tower attack. Moving beyond tactical successes, by 2026 Black Box was successfully analyzing climatological data, historical migration trends, and social behavior models to correctly forecast the sub-Saharan African drought and resulting instability, allowing the State Department to build a coalition of concerned nations and respond proactively to the event, mitigating human suffering and unrest.

The cost advantages and successes large and small resulted in the IC transitioning from a community of 17 coordinating analytic centers into a group of user agencies. In 2028, despite the concerns of this Committee, all analysis was centralized at the Office of the Director of National Intelligence under Black Box. Testimony at the time indicated that there was no longer any need for competitive or agency-specific analysis, because the algorithms of Black Box considered all likely possibilities more thoroughly and efficiently than human analysts could. Beginning that Fiscal Year, the data scientists of the different agencies of the IC accessed Black Box for the analysis their decision makers needed. Also that year, the coordination process for National Intelligence Estimates and Intelligence Community Assessments was eliminated; as the intelligence and analysis were uniform across all agencies of government, there was no longer any need for contentious, drawn-out analytic sessions which only delayed delivery of the analysis to policy makers.

Regarding the current situation in the Pacific, there was never a doubt that China sought unification with Taiwan on its own terms, and the buildup and modernization of Chinese forces over the last several decades caused concern within both the U.S. and Taiwan governments[5]. This committee could find no fault with the priority that China had been given within the National Intelligence Priorities Framework. The roots of this intelligence failure lie in the IC’s inability to factor the possibility of deception into the algorithms of the Black Box program[6].

AI relies on machine learning, and it was well known that machines could learn biases based on the data that they were given and their algorithms[7][8]. Given the Chinese lead in AI development and applications, and their experience in using AI to manage people and their perceptions[9][10], the Committee believes that the IC should have anticipated the potential for the virtual grooming of Black Box. As a result of this intelligence postmortem, we now know that four years before the loss of Taiwan, the People’s Republic of China began their deception operation in earnest through the piecemeal release of false plans and strategy through multiple open and clandestine sources. As reported in the National Intelligence Estimate published just six months before the attack, China’s military modernization and procurement plan “confirmed” to Black Box that China was preparing to invade and reunify with Taiwan using overwhelming conventional military forces in 2043 to commemorate the 150th anniversary of Mao Zedong’s birth.

What was hidden from Black Box and the IC was that China was also embarking on a parallel plan of adapting the lessons learned from Russia’s invasions of Georgia and Ukraine. Using their own AI systems, China rehearsed and perfected a plan to use previously infiltrated special operations forces, airborne and heliborne forces, information warfare, and other asymmetric tactics to overcome Taiwan’s military superiority and geographic advantage. Individual training of these small units went unnoticed and was categorized as unremarkable and routine.

We now know that three months prior to the October 2035 attack, North Korea, at China’s request, began a series of escalating provocations in the Sea of Japan which alerted Black Box to a potential crisis and diverted U.S. military and diplomatic resources. At the same time, biometric tracking and media surveillance of key personalities in Taiwan who had previously been identified as crucial to a defense of the island was stepped up, allowing for their quick elimination by Chinese Special Operations Forces (SOF).

While we cannot determine with certainty when the first Chinese SOF infiltrated Taiwan, we know that by October 20, 2035, their forces were in place and Operation Homecoming received the final go-ahead from the Chinese President. The asymmetric tactics, combined with limited precision kinetic strikes and the inability of the U.S. to respond due to its preoccupation 1,300 miles away, resulted in a surprisingly quick collapse of Taiwanese resistance. Within five days, enough conventional forces had been ferried to the island to secure China’s hold on it and make any attempt to liberate it untenable.

Unlike our 9/11 report, which found that human analysts were unable to “connect the dots” of the information they had[11], we find that Black Box connected the dots too well. Deception is successful when it can either increase the “noise,” making it difficult to determine what is happening, or conversely increase the confidence in a wrong assessment[12]. Without community coordination or competing analysis provided by seasoned professional analysts, the assessment Black Box presented to policy makers was a perfect example of the latter.


Endnotes:

[1] Barnett, J. (2019, August 21). AI is breathing new life into the intelligence community. Fedscoop. Retrieved from https://www.fedscoop.com/artificial-intelligence-in-the-spying

[2] Silver, D., et al. (2016). Mastering the game of GO with deep neural networks and tree search. Nature, 529, 484-489. Retrieved from https://www.nature.com/articles/nature16961

[3] Gartin, G. W. (2019). The future of analysis. Studies in Intelligence, 63(2). Retrieved from https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol-63-no-2/Future-of-Analysis.html

[4] Symon, P. B., & Tarapore, A. (2015, October 1). Defense intelligence in the age of big data. Joint Force Quarterly 79. Retrieved from https://ndupress.ndu.edu/Media/News/Article/621113/defense-intelligence-analysis-in-the-age-of-big-data

[5] Office of the Secretary of Defense. (2019). Annual report to Congress: Military and security developments involving the People’s Republic of China 2019. Retrieved from https://media.defense.gov/2019/May/02/2002127082/-1/-1/1/2019_CHINA_MILITARY_POWER_REPORT.pdf

[6] Knight, W. (2019). Tainted data can teach algorithms the wrong lessons. Wired. Retrieved from https://www.wired.com/story/tainted-data-teach-algorithms-wrong-lessons

[7] Boghani, P. (2019). Artificial intelligence can be biased. Here’s what you should know. PBS / Frontline. Retrieved from https://www.pbs.org/wgbh/frontline/article/artificial-intelligence-algorithmic-bias-what-you-should-know

[8] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[9] Fanning, D., & Docherty, N. (2019). In the age of AI. PBS / Frontline. Retrieved from https://www.pbs.org/wgbh/frontline/film/in-the-age-of-ai

[10] Westerheide, F. (2020). China – the first artificial intelligence superpower. Forbes. Retrieved from https://www.forbes.com/sites/cognitiveworld/2020/01/14/china-artificial-intelligence-superpower/#794c7a52f053

[11] National Commission on Terrorist Attacks Upon the United States. (2004). The 9/11 Commission report. Retrieved from https://govinfo.library.unt.edu/911/report/911Report_Exec.htm

[12] Betts, R. K. (1980). Surprise despite warning: Why sudden attacks succeed. Political Science Quarterly 95(4), 551-572. Retrieved from https://www.jstor.org/stable/pdf/2150604.pdf


Options for the Deployment of Robots on the Battlefield

Mason Smithers[1] is a student of robotics and aviation. He has taken part in building and programming robots for various purposes and is seeking a career as a pilot. 

Jason Criss Howk[2] is an adjunct professor of national security and Islamic studies and was Mason’s guest instructor during the COVID-19 quarantine.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The deployment of robots on the battlefield raises many questions for nations that desire to do so.

Date Originally Written:  April, 5, 2020.

Date Originally Published:  June 24, 2020.

Author and / or Article Point of View:  This paper is based on the assumption that robots will one day become the predominant actor on a battlefield, as AI and robotics technology advance. The authors believe it is the moral duty of national and international policy-makers to debate and establish the rules for this future now.

Background:  Robots on the battlefield in large quantities, where they make up the majority of the combatants making direct-contact with a nation’s enemies, will raise new concerns for national leaders and human rights scholars. Whether they are tethered to a human decision-maker or not, when robots become the primary resource that a nation puts at risk during war, there will be an avalanche of new moral and ethical questions to debate.

This shift in the “manning” of warfighting organizations could increase the chances that nations will go to war because they can afford to easily replace robots, and without a human-life cost, citizens may not be as eager to demand a war be ended or be avoided.

Significance:  While the U.S. currently uses human-operated ground and air robots (armed unmanned aircraft, also known as drones; reconnaissance robots; bomb technicians’ assistants, etc.), a robust debate about whether robots can be safely untethered from humans is currently underway. If the United States or other nations decide to mass produce infantry robots that can act without a human controlling them and making critical decisions for them, what are the associated costs and risks? The answers to these questions about the future matter now to every leader involved in warfare and peace preservation.

Option #1:  The U.S. continues to deploy robots in the future with current requirements for human decision-making (aka human in the loop) in place. In this option, the humans in any military force will continue to make all decisions for robots with the capability to use deadly force.

Risk:  If other nations choose to use robots with their own non-human decision capability or in larger numbers, U.S. technology and moral limits may leave the U.S. force smaller and possibly outnumbered. Requiring a human in the loop will stretch U.S. armed forces that are already hurting in the areas of retention and readiness. Humans in the loop, due to eventual distraction or fatigue, will be slower in making decisions than robots. If other nations perfect this technology before the U.S., there may not be time to catch up in a war and regain the advantage. The U.S. alliance system may be challenged by differing views on whether or not to have a human in the loop.

Gain:  Having a human in the loop will decrease the risk of international incidents that cause wars, due to the human’s assumed greater capacity for discretion. A human can make decisions that are “most correct” and not simply the fastest or most logical. Humans stand the best chance at making choices that can create positive strategic impacts when a gray area presents itself.

Option #2:  The U.S. transitions to a military force that is predominantly robotic and delegates decision-making to the robots at the lowest, possibly individual robot, level.

Risk:  Programmers cannot account for every situation on the battlefield. When robots encounter new techniques from the enemy (human innovations), the robots may become confused and be easily defeated until they are reprogrammed. Robots may be more likely to mistake civilians for lawful combatants. Robots can be hacked, and then either stopped or turned on their owner. Robots could be reprogrammed to ignore the Laws of Warfare in order to frame a nation for war crimes. There is an increased risk for nations when the rules of warfare are broken by robots. Laws will be needed to determine who gets the blame for war crimes (e.g., designers, owners, programmers, elected officials, senior commanders, or the closest user).  There will be a requirement to develop rights for robots in warfare. There could be prisoner of war status issues, and discussions about how shutdown and maintenance requirements work so that robots are not operated until they malfunction and die.  This option can lead to the question, “if robots can make decisions, are they sentient/living beings?” Sentient status would require nations to consider minimum requirements for living standards of robots. This could create many questions about the ethics of sending robots to war.

Gain:  This option has a lower cost than human manning of military units. The ability to mass produce robots means the U.S. can quickly keep up with nations that produce large human or robotic militaries. Robots may be more accurate with weapons systems, which may reduce civilian casualties.

Other Comments:  While this may seem like science fiction to some policy-makers, this future is coming, likely faster than many anticipate.

Recommendation:  None.


Endnotes:

[1] Mason Smithers is a 13-year-old, 7th grade Florida student. He raised this question with his guest instructor Jason Howk during an impromptu national security class. When Mason started to explain in detail all the risks and advantages of robots in future warfare, Jason asked him to write a paper about the topic. Ninety percent of this paper is from Mason’s 13-year-old mind and his view of the future. We can learn a lot from our students.

[2]  Mason’s mother has given permission for the publication of his middle school project.


Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Marijn Pronk is a Master’s student at the University of Glasgow, focusing on identity politics, propaganda, and technology. Currently, Marijn is finishing her dissertation on the Far-Right’s use of populist propaganda tactics online. She can be found on Twitter @marijnpronk9. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Date Originally Written:  April 1, 2020.

Date Originally Published:  May 18, 2020.

Author and / or Article Point of View:  The Author is a Master’s student in Security, Intelligence, and Strategic Studies at the University of Glasgow. The Author believes that a nuanced perspective on the influence of Artificial Intelligence (AI) on technical communication services is paramount to understanding its threat.

Summary:  AI has greatly impacted communication technology worldwide. Computational propaganda, the unregulated use of AI weaponized for malign political purposes, is one example. Botnets can distort online environments, and that distortion of online reality could damage the health of the democratic process and democracies’ ability to function. However, this type of AI is currently limited to Big Tech companies and governmental powers.

Text:  A cornerstone of the democratic political structure is media: an unbiased, uncensored, and unaltered flow of information is paramount to the health of the democratic process. In a fluctuating political environment, digital spaces and technologies offer great platforms for political action and civic engagement[1]. Currently, more people use Facebook as their main source of news than any news organization[2]. Therefore, manipulating the flow of information in the digital sphere could pose a great threat not only to the democratic values that the internet was founded upon, but also to the health of democracies worldwide. Imagine a world where those pillars of democracy can be artificially altered, where people can manipulate the digital information sphere, from the content to the exposure range of information. In this scenario, one would be unable to distinguish real from fake, making critical perspectives obsolete. One practical embodiment of this phenomenon is computational propaganda, which describes the process of digital misinformation and manipulation of public opinion via the internet[3]. Generally, these practices range from the fabrication of messages and the artificial amplification of certain information to the highly influential use of botnets (networks of software applications programmed to do certain tasks). With the emergence of AI, computational propaganda could be enhanced, and the outcomes can become qualitatively better and more difficult to spot.

Computational propaganda is defined as “the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion[3].” AI has the power to enhance computational propaganda in various ways, such as increased amplification and reach of political disinformation through bots. Qualitatively, AI can also increase the sophistication and the automation quality of bots. AI already plays an intrinsic role in the data gathering process, being used in datamining of individuals’ online activity and in monitoring and processing large volumes of online data. Datamining combines tools from AI and statistics to recognize useful patterns and handle large datasets[4]. These technologies and databases are often grounded in the digital advertising industry. With the help of AI, data collection can be done in a more targeted and thus more efficient manner.

Concerning the malicious use of these techniques in the realm of computational propaganda, these improvements in AI can enhance “[...] the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently[4].” Botnets are still relatively reliant on human input for their political messages, but AI can also improve the capabilities of bots interacting with humans online, making them seem more credible. Though the self-learning capabilities of some chat bots are relatively rudimentary, improved automation through computational propaganda tools aided by AI could be a powerful means of influencing public opinion. The self-learning aspect of AI-powered bots and the increasing volume of data that can be used for training give rise to concern: “[...] advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning, and computer vision algorithms are paving the way for the rise in AI-powered bots, that are faster, getting better at understanding human interaction and can even mimic human behaviour[5].” With this improved automation and data gathering power, computational propaganda tools aided by AI could act more precisely by affecting the data gathering process quantitatively and qualitatively. Consequently, this hyper-specialized data and the increasing credibility of bots online, due to increasing contextual understanding, can greatly enhance the capabilities and effects of computational propaganda.

However, AI’s capabilities should be kept in perspective in three areas: data, the power of the AI, and the quality of the output. Starting with AI and data, technical knowledge is necessary in order to work with the massive databases used for audience targeting[6]. This quality of AI is within the capabilities of a nation-state or big corporations, but it remains out of reach for the masses[7]. Secondly, the level of entrenchment and strength of AI will determine its final capabilities. One must distinguish between ‘narrow’ and ‘strong’ AI to consider the possible threat to society. Narrow AI is simply rule-based, meaning that the data runs through multiple levels coded with algorithmic rules in order for the AI to come to a decision. Strong AI means that the model can learn from the data and adapt its set of pre-programmed rules itself, without the interference of humans (this is called ‘Artificial General Intelligence’). Currently, such strong AI is still a concept of the future. Human labour still creates the content for the bots to distribute, simply because the AI is not powerful enough to think outside its pre-programmed box of rules, and therefore cannot (yet) create its own content solely based on the data fed to the model[7]. So, computational propaganda is dependent on narrow AI, which requires a relatively large amount of high-quality data to yield accurate results; deviating from its programmed path or task severely affects its effectiveness[8]. Thirdly, the output produced by computational propaganda tools varies greatly in quality. The real danger lies in the quantity of information that botnets can spread. As for the chatbots, which are supposed to be high quality and indistinguishable from humans, these models often fail when tried outside their training data environments.
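
To make the narrow-versus-strong distinction concrete, below is a minimal sketch of a ‘narrow,’ purely rule-based filter of the kind described above. Everything in it is invented for illustration; the point is structural: the decision can only come from rules a human coded in advance, so an input outside those rules defeats the system.

```python
# Toy "narrow AI": fixed, human-written rules only. The rules and example
# posts are invented; a real moderation system would be far larger, but the
# structural limitation is identical.

RULES = [
    (lambda post: "breaking" in post.lower(), "flag: sensational framing"),
    (lambda post: post.count("!") >= 3,       "flag: excessive emphasis"),
]

def classify(post: str) -> str:
    for rule, verdict in RULES:
        if rule(post):
            return verdict       # the decision comes straight from a coded rule
    return "no rule matched"     # outside its rules, the system is helpless

print(classify("BREAKING: you will not believe this!!!"))   # caught by rule 1
print(classify("Calm, novel phrasing slips straight past")) # "no rule matched"
```

A ‘strong’ AI, by contrast, would be able to rewrite RULES itself from incoming data; as noted above, that capability does not yet exist.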

To address this emerging threat, policy changes across the media ecosystem are underway to mitigate the effects of disinformation[9]. Additionally, researchers have recently investigated the possibility of AI assisting in combating falsehoods and bots online[10]. One proposal is to build automated and semi-automated systems on the web, purposed for fact-checking and content analysis. Eventually, these bottom-up solutions will considerably help counter the effects of computational propaganda. Finally, the influence that Big Tech companies have on these issues cannot be ignored; their accountability for creating these problems, and their considerable power to mitigate them, will have to be considered. Top-to-bottom co-operation between states and the public will be paramount. “The technologies of precision propaganda do not distinguish between commerce and politics. But democracies do[11].”
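
A minimal sketch of the semi-automated fact-checking idea follows, assuming a database of already-verified claims exists. The claims, verdicts, and similarity threshold are invented for illustration; real research systems are far more sophisticated.

```python
# Toy fact-checker: match an incoming claim against pre-verified claims using
# only the Python standard library. All entries here are invented examples.
from difflib import SequenceMatcher

VERIFIED = {
    "the moon landing was staged in a studio": "false",
    "the earth orbits the sun": "true",
}

def check(claim: str, threshold: float = 0.6) -> str:
    best_score, verdict = 0.0, "unverified"
    for known_claim, label in VERIFIED.items():
        score = SequenceMatcher(None, claim.lower(), known_claim).ratio()
        if score >= threshold and score > best_score:
            best_score, verdict = score, label
    return verdict

print(check("The moon landing was staged in a Hollywood studio"))  # "false"
print(check("A claim the database has never seen"))                # "unverified"
```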


Endnotes:

[1] Vaccari, C. (2017). Online Mobilization in Comparative Perspective: Digital Appeals and Political Engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), pp. 69-88. doi:10.1080/10584609.2016.1201558

[2] Majo-Vazquez, S., & González-Bailón, S. (2018). Digital News and the Consumption of Political Information. In G. M. Forthcoming, & W. H. Dutton, Society and the Internet. How Networks of Information and Communication are Changing Our Lives (pp. 1-12). Oxford: Oxford University Press. doi:10.2139/ssrn.3351334

[3] Woolley, S. C., & Howard, P. N. (2018). Introduction: Computational Propaganda Worldwide. In S. C. Woolley, & P. N. Howard, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (pp. 1-18). Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.003.0001

[4] Wardle, C. (2018, July 6). Information Disorder: The Essential Glossary. Retrieved December 4, 2019, from First Draft News: https://firstdraftnews.org/latest/infodisorder-definitional-toolbox

[5] Dutt, D. (2018, April 2). Reducing the impact of AI-powered bot attacks. CSO. Retrieved December 5, 2019, from https://www.csoonline.com/article/3267828/reducing-the-impact-of-ai-powered-bot-attacks.html

[6] Bolsover, G., & Howard, P. (2017). Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda. Big Data, 5(4), pp. 273–276. doi:10.1089/big.2017.29024.cpr

[7] Chessen, M. (2017). The MADCOM Future: how artificial intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… and what can be done about it. Washington DC: The Atlantic Council of the United States. Retrieved December 4, 2019

[8] Davidson, L. (2019, August 12). Narrow vs. General AI: What’s Next for Artificial Intelligence? Retrieved December 11, 2019, from Springboard: https://www.springboard.com/blog/narrow-vs-general-ai

[9] Hassan, N., Li, C., Yang, J., & Yu, C. (2019, July). Introduction to the Special Issue on Combating Digital Misinformation and Disinformation. ACM Journal of Data and Information Quality, 11(3), 1-3. Retrieved December 11, 2019

[10] Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufactoring Consensus Online. Oxford, UK: Project on Computational Propaganda. Retrieved December 5, 2019

[11] Ghosh, D., & Scott, B. (2018, January). #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. Retrieved December 11, 2019, from New America: https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit


U.S. Options to Combat Chinese Technological Hegemony

Ilyar Dulat, Kayla Ibrahim, Morgan Rose, Madison Sargeant, and Tyler Wilkins are Interns at the College of Information and Cyberspace at the National Defense University. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  China’s technological rise threatens U.S. interests both on and off the battlefield.

Date Originally Written:  July 22, 2019.

Date Originally Published:  February 10, 2020.

Author and / or Article Point of View:  This article is written from the point of view of the United States Government.

Background:  Xi Jinping, the Chairman of China’s Central Military Commission, affirmed in 2012 that China is acting to redefine the international world order through revisionist policies[1]. These policies foster an environment open to authoritarianism, thus undermining Western liberal values. The Chinese Communist Party (CCP) utilizes emerging technologies to restrict the individual freedoms of Chinese citizens, in and out of cyberspace. Subsequently, Chinese companies have exported this freedom-restricting technology to other countries, such as Ethiopia and Iran, for little cost. These technologies, which include Artificial Intelligence-based surveillance systems and nationalized Internet services, allow authoritarian governments to effectively suppress political dissent and discourse within their states. By essentially monopolizing the tech industry through low prices, China hopes to gain the loyalty of these states and obtain the political clout necessary to overcome the United States as the global hegemon.

Significance:  Among the technologies China is pursuing, 5G is of particular interest to the U.S.  If China becomes the leader in 5G network technologies and artificial intelligence, it will have opportunities to disrupt the confidentiality, integrity, and availability of data. China has been able to aid regimes and fragmented democracies in repressing freedom of speech and restricting human rights using “digital tools of surveillance and control[2].” Furthermore, China’s National Security Law of 2015 requires all Chinese tech companies’ compliance with the CCP. These Chinese tech companies are legally bound to share data and information housed on Chinese technology, both in-state and abroad. They are also required to remain silent about their disclosure of private data to the CCP. As such, information about private citizens and governments around the world is provided to the Chinese government without transparency. By deploying hardware and software for countries seeking to expand their networks, the CCP could use its authority over domestic tech companies to gain access to information transferred over Chinese-built networks, posing a significant threat to the national security interests of the U.S. and its Allies and Partners. With China leading 5G, the military forces of the U.S. and its Allies and Partners would be restricted in their ability to rely on indigenous telecoms abroad, which could cripple operations critical to U.S. interests[3]. This risk becomes even greater with the threat of U.S. Allies and Partners adopting Chinese 5G infrastructure, despite the harm this move would do to information sharing with the United States.

If China continues its current trajectory, the U.S. and its advocacy for personal freedoms will grow increasingly marginal in the discussion of human rights in the digital age. In light of the increasing importance of the cyber domain, the United States cannot afford to assume that its global leadership will seamlessly transfer to, and maintain itself within, cyberspace. The United States’ position as a leader in cyber technology is under threat unless it vigilantly pursues leadership in advancing and regulating the exchange of digital information.

Option #1:  Domestic Investment.

The U.S. government could facilitate a favorable environment for the development of 5G infrastructure through domestic telecom providers. Thus far, Chinese companies Huawei and ZTE have been able to outbid major European companies for 5G contracts. American companies that are developing 5G infrastructure are not large enough to compete at this time. By investing in 5G development domestically, the U.S. and its Allies and Partners would have 5G options other than Huawei and ZTE available to them. This option would provide American companies with a playing field equal to that of their Chinese counterparts.

Risk:  Congressional approval to fund 5G infrastructure development will prove to be a major obstacle. Funding a development project can quickly become a partisan issue. Fiscal conservatives might argue that markets should drive development, while those who believe in strong government oversight might argue that the government should spearhead 5G development. Additionally, government-subsidized projects have previously failed. As such, there is no guarantee 5G will be different.

Gain:  By investing in domestic telecommunication companies, the United States can remain independent from Chinese infrastructure by mitigating further Chinese expansion. With the U.S. investing domestically and giving subsidies to companies such as Qualcomm and Verizon, American companies can develop their technology faster in an attempt to compete with Huawei and ZTE.

Option #2:  Foreign Subsidization.

The U.S. supports European competitors Nokia and Ericsson, through loans and subsidies, against Huawei and ZTE. In doing so, the United States would offer a conduit for these companies to produce 5G technology at a more competitive price, and possibly to outbid Huawei and ZTE.

Risk:  The American people may be hostile towards a policy that provides U.S. tax dollars to foreign entities. While the U.S. can provide stipulations that come with the funding provided, the U.S. ultimately sacrifices much of the control over the development and implementation of 5G infrastructure.

Gain:  Supporting European tech companies such as Nokia and Ericsson would help deter allied nations from investing in Chinese 5G infrastructure. This option would reinforce the U.S.’s commitment to its European allies, and serve as a reminder that the United States maintains its position as the leader of the liberal international order. Most importantly, this option makes friendlier telecommunications companies more competitive in international markets.

Other Comments:  Both options above would also include the U.S. defining regulations and enforcement mechanisms to promote the fair usage of cyberspace. This fair use would be a significant deviation from a history of loosely defined principles. In pursuit of this fair use, the United States could join the Cyber Operations Resilience Alliance, and encourage legislation within the alliance that invests in democratic states’ cyber capabilities and administers clearly defined principles of digital freedom and the cyber domain.

Recommendation:  None.


Endnotes:

[1] Economy, Elizabeth C. “China’s New Revolution.” Foreign Affairs. June 10, 2019. Accessed July 31, 2019. https://www.foreignaffairs.com/articles/china/2018-04-17/chinas-new-revolution.

[2] Chhabra, Tarun. “The China Challenge, Democracy, and U.S. Grand Strategy.” Democracy & Disorder, February 2019. https://www.brookings.edu/research/the-china-challenge-democracy-and-u-s-grand-strategy/.

[3] “The Overlooked Military Implications of the 5G Debate.” Council on Foreign Relations. Accessed August 01, 2019. https://www.cfr.org/blog/overlooked-military-implications-5g-debate.


Does Rising Artificial Intelligence Pose a Threat?

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Does Rising Artificial Intelligence Pose a Threat?

Date Originally Written:  February 3, 2019.

Date Originally Published:  February 18, 2019. 

Summary:  Artificial Intelligence, or A.I., has been a long-standing subject of science fiction that usually ends badly for the human race in some way. From the ‘Terminator’ films to ‘WarGames,’ a dangerous A.I. is a common theme. The reality, though, is that A.I. could go either way depending on the circumstances. However, in its present state and uses today, A.I. is more of a danger than a boon on battlefields both political and military.

Text:  Artificial intelligence (A.I.) has been a staple in science fiction over the years, but recently the technology has become a more probable reality[1]. The use of semi-intelligent computer programs and systems has made our lives a bit easier with regard to certain things, like turning on the lights in a room with an Alexa, playing some music, or answering questions. However, other uses for such technologies have already been planned, and in some cases implemented, within the military and private industry for security-oriented and offensive means.

Automated or A.I. systems that could find weaknesses in networks and systems, as well as automated A.I.s with fire control over certain remotely operated vehicles, are on the near horizon. Just as Google and others have made automated self-driving cars with an A.I. component that makes decisions in emergency situations, like crash scenarios with pedestrians, the same technologies are already being talked about in warfare. In the case of automated cars with rudimentary A.I., we have already seen deaths and mishaps because the technology is not truly aware and not capable of handling every permutation that is put in front of it[2].

Conversely, if one were to hack or program these technologies to disregard safety heuristics, a very lethal outcome is possible. Because such A.I. is not fully aware and able to determine right from wrong, there is real potential for abuse of these technologies, and fears of this happening with devices like Alexa and others are already warranted[3]. In one recent case, a baby was put in danger after a Nest device was hacked through poor passwords and the temperature in the room was set above 90 degrees. In another recent instance, an Internet of Things device was hacked in much the same way and used to scare the inhabitants of the home with an alert that North Korea had launched nuclear missiles at the U.S.

Both of the previous cases were low-level attacks on semi-dumb devices. Now imagine one of these devices with access to networked weapons systems, perhaps with a weakness that could be subverted[4]. In another scenario, A.I. programs such as those discussed in cyber warfare could also be copied or subverted and unleashed not only by nation-state actors but by a smart teen or a group of criminals for their own ends. Such programs are a thing of the near future, but if you want an analogy, you can look at open source hacking tools or platforms like Metasploit, which have automated scripts and are now used by adversaries as well as our own forces.

Hackers and crackers today have already begun using A.I. technologies in their attacks, and as the technology becomes more stable and accessible, there will be a move toward whole campaigns being carried out by automated systems attacking targets all over the world[5]. This automation will create attribution problems at the nation-state level, as victims try to determine who set such systems upon them. How will attribution work when the attacking system is self-sufficient and perhaps not under anyone’s control?

Finally, the trope of a true A.I. that goes rogue is not just a trope. It is entirely possible that a truly sentient program or system might consider humans an impediment to its own existence and attempt to eradicate us from everything it can reach. That is a distant possibility, but consider one nearer at hand: in the last presidential election, and in the 2020 election cycle to come, automated and A.I. systems have been and will be deployed to game social media and perhaps election systems themselves. This technology is not a far-flung possibility; rudimentary systems already exist and are being used.

The only difference between now and tomorrow is that, at the moment, people are pointing these technologies at the problems they want to solve. In the future, the A.I. may be the one choosing the problem in need of solving, and that choice may not be in our favor.


Endnotes:

[1] Cummings, M. (2017, January 1). Artificial Intelligence and the Future of Warfare. Retrieved February 2, 2019, from https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

[2] Levin, S., & Wong, J. C. (2018, March 19). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. Retrieved February 2, 2019, from https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe

[3] Menn, J. (2018, August 08). New genre of artificial intelligence programs take computer hacking to another level. Retrieved February 2, 2019, from https://www.reuters.com/article/us-cyber-conference-ai/new-genre-of-artificial-intelligence-programs-take-computer-hacking-to-another-level-idUSKBN1KT120

[4] Jowitt, T. (2018, August 08). IBM DeepLocker Turns AI Into Hacking Weapon | Silicon UK Tech News. Retrieved February 1, 2019, from https://www.silicon.co.uk/e-innovation/artificial-intelligence/ibm-deeplocker-ai-hacking-weapon-235783

[5] Dvorsky, G. (2017, September 12). Hackers Have Already Started to Weaponize Artificial Intelligence. Retrieved February 1, 2019, from https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425


Options for Lethal Autonomous Weapons Systems and the Five Eyes Alliance

Dan Lee is a government employee who works in Defense, and has varying levels of experience working with Five Eyes nations (US, UK, Canada, Australia, New Zealand).  He can be found on Twitter @danlee961.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Options for Lethal Autonomous Weapons Systems and the Five Eyes Alliance

Date Originally Written:  September 29, 2018.

Date Originally Published:  October 29, 2018.

Author and / or Article Point of View:  The article is written from the point of view of Five Eyes national defense organizations. 

Background:  The Five Eyes community consists of the United Kingdom (UK), the United States (US), Canada, Australia and New Zealand; its origins can be traced to the requirement to cooperate in Signals Intelligence after World War Two[1]. Arguably, the alliance is still critical today in dealing with terrorism and other threats[2].

Autonomous systems may provide the Five Eyes alliance an asymmetric advantage, or ‘offset’, to counter its strategic competitors that are on track to field larger and more technologically advanced military forces. The question of whether or not to develop and employ Lethal Autonomous Weapons Systems (LAWS) is currently contentious due to the ethical and social considerations involved with allowing machines to choose targets and apply lethal force without human intervention[3][4][5]. Twenty-six countries are calling for a prohibition on LAWS, while three Five Eyes partners (Australia, UK and the US) as well as other nations including France, Germany, South Korea and Turkey do not support negotiating new international laws on the matter[6]. When considering options, at least two issues must also be addressed.

The first issue is defining what LAWS are; a common lexicon is required to allow Five Eyes partners to conduct an informed discussion as to whether they can come to a common policy position on the development and employment of these systems. Public understanding of autonomy is mostly derived from the media or from popular culture, and this may have contributed to the hype around the topic[7][8][9]. Currently there is no universally accepted definition of what constitutes a fully autonomous lethal weapon system, which has in turn disrupted discussions at the United Nations (UN) on how these systems should be governed by the Convention on Certain Conventional Weapons (CCWUN)[10]. The US and UK have different definitions, which makes agreement on a common position difficult even amongst like-minded nations[11][12]. This lack of a common lexicon is further complicated by some strategic competitors using more liberal definitions of LAWS, allowing them to support a ban while simultaneously developing weapons that do not require meaningful human control[13][14][15][16].

The second issue is one of agreeing on how autonomous systems might be employed within the Five Eyes alliance. For example, as a strategic offset technology, the use of autonomous systems might mitigate the small size of the alliance’s military forces relative to an adversary’s[17]. Tactically, they could be deployed completely independently of humans to remove personnel from danger, as swarms to overwhelm the enemy with complexity, or as part of a human-machine team to augment human capabilities[18][19][20].

A failure of Five Eyes partners to come to a complete agreement on what is and is not permissible in developing and employing LAWS does not necessarily mean a halt to progress; indeed, it may allow some partners to cover the capability gaps of others. If some members of the alliance choose not to develop lethal systems, doing so may free their resources to focus on autonomous Intelligence, Surveillance, and Reconnaissance (ISR) or logistics capabilities. In a Five Eyes coalition environment, members who chose not to develop lethal systems could support the LAWS-enabled forces of other partners, providing lethal autonomy to the alliance as a whole, if not to individual member states.

Significance:  China and Russia may already be developing LAWS; a failure on the part of the Five Eyes alliance to actively manage this issue may put it at a relative disadvantage in the near future[21][22][23][24]. Further, dual-use civilian technologies already exist that may be adapted for military use, such as the Australian COTSbot and the Chinese Mosquito Killer Robot[25][26]. If the Five Eyes alliance neither disrupts its competitors’ development of LAWS nor attains relative technological superiority, it may find itself starting at a disadvantage in future conflicts or deterrence campaigns.

Option #1:  Five Eyes nations work with the UN to define LAWS and ban their development and use; diplomatic, economic and informational measures are applied to halt or disrupt competitors’ LAWS programs. Technological offset is achieved through Five Eyes development of autonomous military systems focused on logistics and ISR capabilities, such as Boston Dynamics’ LS3 AlphaDog and driverless trucks that free soldiers from non-combat tasks[27][28][29][30].

Risk:  In the event of conflict, allied combat personnel would be more exposed to danger than the enemy, as their nations had, in essence, decided not to develop a technology that could be of use in war. Five Eyes militaries would not be organizationally prepared to develop, train with, and employ LAWS if necessitated by an existential threat. It may be too late to close the technological capability gap after the commencement of hostilities.

Gain:  The Five Eyes alliance’s legitimacy regarding human rights and the just conduct of war is maintained in the eyes of the international community. A LAWS arms race and subsequent proliferation can be avoided.

Option #2:  Five Eyes militaries actively develop LAWS to achieve superiority over their competitors.

Risk:  The Five Eyes alliance’s legitimacy may be undermined in the eyes of the international community and organizations such as The Campaign to Stop Killer Robots, the UN, and the International Committee of the Red Cross. Public opinion in some partner nations may increasingly disapprove of LAWS development and use, which could fragment the alliance in a manner similar to the strains placed on the Australia, New Zealand and United States Security Treaty[31][32].

The declared development and employment of LAWS may catalyze a resource-intensive international arms race. Partnerships between government, academia, and industry may also be adversely affected[33][34].

Gain:  Five Eyes nations avoid a technological disadvantage relative to their competitors, and the Chinese information campaign to outmanoeuvre Five Eyes LAWS development through manipulation of the CCWUN is mitigated. Once LAWS development is accepted as inevitable, proliferation may be regulated through the UN.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1] Tossini, J.V. (November 14, 2017). The Five Eyes – The Intelligence Alliance of the Anglosphere. Retrieved from https://ukdefencejournal.org.uk/the-five-eyes-the-intelligence-alliance-of-the-anglosphere/

[2] Grayson, K. Time to bring ‘Five Eyes’ in from the cold? (May 4, 2018). Retrieved from https://www.aspistrategist.org.au/time-bring-five-eyes-cold/

[3] Lange, K. 3rd Offset Strategy 101: What It Is, What the Tech Focuses Are (March 30, 2016). Retrieved from http://www.dodlive.mil/2016/03/30/3rd-offset-strategy-101-what-it-is-what-the-tech-focuses-are/

[4] International Committee of the Red Cross. Expert Meeting on Lethal Autonomous Weapons Systems Statement (November 15, 2017). Retrieved from https://www.icrc.org/en/document/expert-meeting-lethal-autonomous-weapons-systems

[5] Human Rights Watch and Harvard Law School’s International Human Rights Clinic. Fully Autonomous Weapons: Questions and Answers. (October 2013). Retrieved from https://www.hrw.org/sites/default/files/supporting_resources/10.2013_killer_robots_qa.pdf

[6] Campaign to Stop Killer Robots. Report on Activities Convention on Conventional Weapons Group of Governmental Experts meeting on lethal autonomous weapons systems – United Nations Geneva – 9-13 April 2018. (2018) Retrieved from https://www.stopkillerrobots.org/wp-content/uploads/2018/07/KRC_ReportCCWX_Apr2018_UPLOADED.pdf

[7] Scharre, P. Why You Shouldn’t Fear ‘Slaughterbots’. (December 22, 2017). Retrieved from https://spectrum.ieee.org/automaton/robotics/military-robots/why-you-shouldnt-fear-slaughterbots

[8] Winter, C. (November 14, 2017). ‘Killer robots’: autonomous weapons pose moral dilemma. Retrieved from https://www.dw.com/en/killer-robots-autonomous-weapons-pose-moral-dilemma/a-41342616

[9] Devlin, H. Killer robots will only exist if we are stupid enough to let them. (June 11, 2018). Retrieved from https://www.theguardian.com/technology/2018/jun/11/killer-robots-will-only-exist-if-we-are-stupid-enough-to-let-them

[10] Welsh, S. Regulating autonomous weapons. (November 16, 2017). Retrieved from https://www.aspistrategist.org.au/regulating-autonomous-weapons/

[11] United States Department of Defense. Directive Number 3000.09. (November 21, 2012). Retrieved from https://www.hsdl.org/?view&did=726163

[12] Lords AI committee: UK definitions of autonomous weapons hinder international agreement. (April 17, 2018). Retrieved from http://www.article36.org/autonomous-weapons/lords-ai-report/

[13] Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects – Geneva, 9–13 April 2018 (first week) Item 6 of the provisional agenda – Other matters. (11 April 2018). Retrieved from https://www.unog.ch/80256EDD006B8954/(httpAssets)/E42AE83BDB3525D0C125826C0040B262/$file/CCW_GGE.1_2018_WP.7.pdf

[14] Welsh, S. China’s shock call for ban on lethal autonomous weapon systems. (April 16, 2018). Retrieved from https://www.janes.com/article/79311/china-s-shock-call-for-ban-on-lethal-autonomous-weapon-systems

[15] Mohanty, B. Lethal Autonomous Dragon: China’s approach to artificial intelligence weapons. (Nov 15 2017). Retrieved from https://www.orfonline.org/expert-speak/lethal-autonomous-weapons-dragon-china-approach-artificial-intelligence/

[16] Kania, E.B. China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems. (April 17, 2018). Retrieved from https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-weapons-systems

[17] Tomes, R. Why the Cold War Offset Strategy was all about Deterrence and Stealth. (January 14, 2015) Retrieved from https://warontherocks.com/2015/01/why-the-cold-war-offset-strategy-was-all-about-deterrence-and-stealth/

[18] Lockie, A. The Air Force just demonstrated an autonomous F-16 that can fly and take out a target all by itself. (April 12, 2017). Retrieved from https://www.businessinsider.com.au/f-16-drone-have-raider-ii-loyal-wingman-f-35-lockheed-martin-2017-4?r=US&IR=T

[19] Schuety, C. & Will, L. An Air Force ‘Way of Swarm’: Using Wargaming and Artificial Intelligence to Train Drones. (September 21, 2018). Retrieved from https://warontherocks.com/2018/09/an-air-force-way-of-swarm-using-wargaming-and-artificial-intelligence-to-train-drones/

[20] Ryan, M. Human-Machine Teaming for Future Ground Forces. (2018). Retrieved from https://csbaonline.org/uploads/documents/Human_Machine_Teaming_FinalFormat.pdf

[21] Perrigo, B. Global Arms Race for Killer Robots Is Transforming the Battlefield. (Updated: April 9, 2018). Retrieved from http://time.com/5230567/killer-robots/

[22] Hutchison, H.C. Russia says it will ignore any UN ban of killer robots. (November 30, 2017). Retrieved from https://www.businessinsider.com/russia-will-ignore-un-killer-robot-ban-2017-11/?r=AU&IR=T

[23] Mizokami, K. Kalashnikov Will Make an A.I.-Powered Killer Robot – What could possibly go wrong? (July 20, 2017). Retrieved from https://www.popularmechanics.com/military/weapons/news/a27393/kalashnikov-to-make-ai-directed-machine-guns/

[24] Atherton, K. Combat robots and cheap drones obscure the hidden triumph of Russia’s wargame. (September 25, 2018). Retrieved from https://www.c4isrnet.com/unmanned/2018/09/24/combat-robots-and-cheap-drones-obscure-the-hidden-triumph-of-russias-wargame/

[25] Platt, J.R. A Starfish-Killing, Artificially Intelligent Robot Is Set to Patrol the Great Barrier Reef: Crown of thorns starfish are destroying the reef. Bots that wield poison could dampen the invasion. (January 1, 2016). Retrieved from https://www.scientificamerican.com/article/a-starfish-killing-artificially-intelligent-robot-is-set-to-patrol-the-great-barrier-reef/

[26] Skinner, T. Presenting, the Mosquito Killer Robot. (September 14, 2016). Retrieved from https://quillorcapture.com/2016/09/14/presenting-the-mosquito-killer-robot/

[27] Defence Connect. DST launches Wizard of Aus. (November 10, 2017). Retrieved from https://www.defenceconnect.com.au/key-enablers/1514-dst-launches-wizard-of-aus

[28] Pomerleau, M. Air Force is looking for resilient autonomous systems. (February 24, 2016). Retrieved from https://defensesystems.com/articles/2016/02/24/air-force-uas-contested-environments.aspx

[29] Boston Dynamics. LS3 Legged Squad Support Systems. The AlphaDog of legged robots carries heavy loads over rough terrain. (2018). Retrieved from https://www.bostondynamics.com/ls3

[30] Evans, G. Driverless vehicles in the military – will the potential be realised? (February 2, 2018). Retrieved from https://www.army-technology.com/features/driverless-vehicles-military/

[31] Hambling, D. Why the U.S. Is Backing Killer Robots. (September 15, 2018). Retrieved from https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/

[32] Ministry for Culture and Heritage. ANZUS treaty comes into force 29 April 1952. (April 26, 2017). Retrieved from https://nzhistory.govt.nz/anzus-comes-into-force

[33] Shalal, A. Researchers to boycott South Korean university over AI weapons work. (April 5, 2018). Retrieved from https://www.reuters.com/article/us-tech-korea-boycott/researchers-to-boycott-south-korean-university-over-ai-weapons-work-idUSKCN1HB392

[34] Shane, S & Wakabayashi, D. ‘The Business of War’: Google Employees Protest Work for the Pentagon. (April 4, 2018). Retrieved from https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html

 


An Assessment of the Likely Roles of Artificial Intelligence and Machine Learning Systems in the Near Future

Ali Crawford has an M.A. from the Patterson School of Diplomacy and International Commerce where she focused on diplomacy, intelligence, cyber policy, and cyber warfare.  She tweets at @ali_craw.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of the Likely Roles of Artificial Intelligence and Machine Learning Systems in the Near Future

Date Originally Written:  May 25, 2018.

Date Originally Published:  July 16, 2018.

Summary:  While the U.S. Department of Defense (DoD) continues to experiment with Artificial Intelligence (AI) as part of its Third Offset Strategy, questions regarding levels of human participation, ethics, and legality remain.  Though the future battlefield will likely see autonomous decision-making technology as a norm, the transition between modern applications of artificial intelligence and potential applications will focus on incorporating human-machine teaming into existing frameworks.

Text:   In an essay titled Centaur Warfighting: The False Choice of Humans vs. Automation, author Paul Scharre concludes that the best warfighting systems will combine human and machine intelligence to create hybrid cognitive architectures that leverage the advantages of each[1].  There are three potential partnerships.  The first pegs humans as essential operators, meaning the AI cannot operate without its human counterpart.  The second tasks humans as moral agents who make the value-based decisions that prevent or promote the use of AI in combat situations.  The third, in which humans are fail-safes, gives more operational authority to AI systems; the human operator interferes only if the system malfunctions or fails.  Artificial intelligence, and specifically autonomous weapons systems, are controversial technologies that have the capacity to greatly improve human efficiency while reducing potential human burdens.  But before the Department of Defense embraces intelligent weapons systems or programs with full autonomy, more human-machine partnerships to test the viability, legality, and ethical implications of artificial intelligence will likely occur.
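
One way to see the difference between the three partnerships is as three gates on the same engagement decision. The short Python sketch below is offered purely as an illustration; the mode names and gating logic are invented here and are not drawn from Scharre’s essay or from any fielded system.

```python
from enum import Enum, auto

class TeamingMode(Enum):
    ESSENTIAL_OPERATOR = auto()  # the system cannot act without a human command
    MORAL_AGENT = auto()         # the system proposes; a human approves value-laden calls
    FAIL_SAFE = auto()           # the system acts; a human intervenes only on malfunction

def authorize_engagement(mode: TeamingMode,
                         human_command: bool,
                         human_approval: bool,
                         malfunction_detected: bool) -> bool:
    """Toy gating logic for the three human-machine partnerships."""
    if mode is TeamingMode.ESSENTIAL_OPERATOR:
        return human_command          # no human command, no action
    if mode is TeamingMode.MORAL_AGENT:
        return human_approval         # the machine may propose, the human decides
    # FAIL_SAFE: the machine proceeds unless the human has pulled it offline
    return not malfunction_detected

# In fail-safe mode the system engages on its own authority:
print(authorize_engagement(TeamingMode.FAIL_SAFE,
                           human_command=False,
                           human_approval=False,
                           malfunction_detected=False))  # True
```

Note how little code separates the moral-agent gate from the fail-safe gate; the policy question is which default the system falls back on when the human says nothing.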

To better understand why artificial intelligence is controversial, it is necessary to distinguish between the arguments for and against using AI with operational autonomy.  In 2015, prominent figures in science and technology, including Stephen Hawking and Elon Musk, penned an open letter that highlights the potential benefits of AI while cautioning that those benefits do not resolve short-term questions of ethics and the applicability of law[2].  A system with an intelligent, decision-making brain does carry significant consequences.  What if the system targets civilians?  How does international law apply to a machine?  Will an intelligent machine respond to commands?  These are questions with which military and ethical theorists grapple.

For a more practical thought problem, consider the Moral Machine project from the Massachusetts Institute of Technology[3].  You, the judge, are presented with dilemmas involving intelligent, self-driving cars.  The car suffers brake failure and must decide what to do next.  If the car continues straight, it will strike and kill some number of men, women, children, elderly people, or animals.  If the car swerves, it will crash into a barrier, causing the immediate deaths of its passengers, who are likewise some number of men, women, children, or elderly people.  Although you are the judge in Moral Machine, the simulation is indicative of the ethical and moral dilemmas that may arise when employing artificial intelligence in, say, combat.  In these scenarios, the ethical theorist takes issue with the machine having the decision-making capacity to place value on human life, and to potentially make irreversible and damaging decisions.
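
The objection becomes vivid as soon as one tries to write such a policy down.  The sketch below uses weights invented purely for illustration (they come from no real system, and certainly not from the MIT project); it shows that a utilitarian decision rule forces someone to assign explicit numbers to categories of life before the system ever runs.

```python
# Hypothetical utility weights. The ethically fraught step is that a
# designer must choose these numbers in advance.
VALUE = {"child": 1.0, "adult": 0.9, "elderly": 0.8, "animal": 0.1}

def casualty_cost(group: list) -> float:
    """Sum the (invented) value weights of everyone harmed by an option."""
    return sum(VALUE[kind] for kind in group)

def choose(straight_victims: list, swerve_victims: list) -> str:
    """Pick the option with the lower total cost, a purely utilitarian rule."""
    if casualty_cost(straight_victims) <= casualty_cost(swerve_victims):
        return "continue straight"
    return "swerve into barrier"

# Two pedestrians ahead versus one passenger in the car:
print(choose(["adult", "child"], ["adult"]))  # "swerve into barrier"
```

Every objection the ethicist raises maps to a line of this code: who chose the weights, why a simple sum, and why a tie favors continuing straight.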

Assuming autonomous weapons systems do have a place in the future of military operations, what would precede them?  Realistically, human-machine teaming would be introduced before a fully autonomous machine.  What exactly is human-machine teaming, and why is it important when discussing the future of artificial intelligence?  To gain and maintain superiority in operational domains, both past and present, the United States has ensured that its conventional deterrents are powerful enough to dissuade great powers from going to war with the United States[4].  Thus, an offset strategy focuses on gaining advantages against enemy powers and capabilities.  Historically, the First Offset occurred in the early 1950s with the introduction of tactical nuclear weapons.  The Second Offset manifested a little later, in the 1970s, with the implementation of precision-guided weapons after the Soviet Union gained nuclear parity with the United States[5].  The Third Offset, a relatively modern strategy, generally focuses on maintaining technological superiority among the world’s great powers.

Human-machine teaming is part of the Department of Defense’s Third Offset strategy, as are deep-learning systems and cyber weaponry[6].  Machine learning systems relieve humans of a breadth of burdensome tasks or augment operations to decrease potential risks to the lives of human fighters.  For example, in 2017 the DoD began working with an intelligent system called “Project Maven,” which uses deep-learning technology to identify objects of interest in drone surveillance footage[7].  Terabytes of footage are collected each day from surveillance drones.  Human analysts spend significant amounts of time sifting through this data to identify objects of interest before they can begin their analytical processes[8].  Project Maven’s deep-learning algorithm allows human analysts to spend more time practicing their craft to produce intelligence products and less time processing information.  Despite Google’s recent departure from the program, Project Maven will continue to operate[9].  Former Deputy Defense Secretary Bob Work established the Algorithmic Warfare Cross-Functional Team in early 2017 to work on Project Maven.  In the announcement, Work described artificial intelligence as necessary for strategic deterrence, noting “the [DoD] must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors[10].”
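
For a sense of what such a triage step might look like, consider the minimal sketch below, which uses an off-the-shelf, publicly available object detector.  It is a stand-in built on assumptions, not Project Maven’s actual pipeline, model, or data: it simply flags frames that contain high-confidence vehicle detections so that analysts review only those frames.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# A COCO-pretrained detector as a generic stand-in for a mission-trained model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# COCO category ids for car, bus, and truck -- a proxy for "objects of interest."
OBJECTS_OF_INTEREST = {3, 6, 8}

def frame_needs_review(path: str, threshold: float = 0.7) -> bool:
    """Return True if a frame contains a high-confidence object of interest."""
    image = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        detections = model([image])[0]  # dict with "boxes", "labels", "scores"
    return any(score.item() >= threshold and label.item() in OBJECTS_OF_INTEREST
               for label, score in zip(detections["labels"], detections["scores"]))

# Analysts would then see only the frames the model flags, e.g.:
# flagged = [f for f in frame_paths if frame_needs_review(f)]
```

The human analyst remains the moral agent and fail-safe here; the model only reorders the queue.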

This article collectively refers to human-machine teaming as processes in which humans interact in some capacity with artificial intelligence.  However, human-machine teaming transcends multiple technological fields and is not limited to serving as a prerequisite for autonomous weaponry[11].  Human-robot teaming may begin to appear in the immediate future given developments in robotics.  Boston Dynamics, a premier engineering and robotics company, is well-known for its videos of human- and animal-like robots completing everyday tasks.  Imagine a machine like BigDog working alongside human soldiers or rescue workers, or even navigating inaccessible terrain[12].  These robots are not fully autonomous, yet the unique partnership between human and robot offers a new set of opportunities and challenges[13].

Before fully autonomous systems or weapons have a place in combat, human-machine teams need to be assessed as successful and sustainable.  These teams have the potential to improve human performance, reduce risks to human counterparts, and expand national power – all goals of the Third Offset Strategy.  However, there are challenges to procuring and incorporating artificial intelligence.  The DoD will need to seek out deeper relationships with technological and engineering firms, not just defense contractors.

Using humans as moral agents and fail-safes allows the problems of ethical and lawful applicability to be tested while opening the debate on the future use of autonomous systems.  Autonomous weapons will likely not see combat until these challenges, coupled with ethical and lawful considerations, are thoroughly regulated and tested.


Endnotes:

[1] Paul Scharre, Temp. Int’l & Comp. L.J., “Centaur Warfighting: The False Choice of Humans vs. Automation,” 2016, https://sites.temple.edu/ticlj/files/2017/02/30.1.Scharre-TICLJ.pdf

[2] Daniel Dewey, Stuart Russell, Max Tegmark, “Research Priorities for Robust and Beneficial Artificial Intelligence,” 2015, https://futureoflife.org/data/documents/research_priorities.pdf?x20046

[3] Moral Machine, http://moralmachine.mit.edu/

[4] Cheryl Pellerin, Department of Defense, Defense Media Activity, “Work: Human-Machine Teaming Represents Defense Technology Future,” 8 November 2015, https://www.defense.gov/News/Article/Article/628154/work-human-machine-teaming-represents-defense-technology-future/

[5] Ibid.

[6] Katie Lange, DoDLive, “3rd Offset Strategy 101: What It Is, What the Tech Focuses Are,” 30 March 2016, http://www.dodlive.mil/2016/03/30/3rd-offset-strategy-101-what-it-is-what-the-tech-focuses-are/; and Mackenzie Eaglen, RealClearDefense, “What is the Third Offset Strategy?,” 15 February 2016, https://www.realcleardefense.com/articles/2016/02/16/what_is_the_third_offset_strategy_109034.html

[7] Cheryl Pellerin, Department of Defense News, Defense Media Activity, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” 21 July 2017, https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

[8] Tajha Chappellet-Lanier, “Pentagon’s Project Maven responds to criticism: ‘There will be those who will partner with us’” 1 May 2018, https://www.fedscoop.com/project-maven-artificial-intelligence-google/

[9] Tom Simonite, Wired, “Pentagon Will Expand AI Project Prompting Protests at Google,” 29 May 2018, https://www.wired.com/story/googles-contentious-pentagon-project-is-likely-to-expand/

[10] Cheryl Pellerin, Department of Defense, Defense Media Activity, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” 21 July 2017, https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

[11] Maj. Gen. Mick Ryan, Defense One, “How to Plan for the Coming Era of Human-Machine Teaming,” 25 April 2018, https://www.defenseone.com/ideas/2018/04/how-plan-coming-era-human-machine-teaming/147718/

[12] Boston Dynamic Big Dog Overview, March, 2010, https://www.youtube.com/watch?v=cNZPRsrwumQ

[13] Richard Priday, Wired, “What’s really going on in those Boston Dynamics robot videos?,” 18 February 2018, http://www.wired.co.uk/article/boston-dynamics-robotics-roboticist-how-to-watch
