Assessing Terrorism and Artificial Intelligence in 2050

William D. Harris is a U.S. Army Special Forces Officer with six deployments for operations in Iraq and Syria and experience working in Jordan, Turkey, Saudi Arabia, Qatar, Israel, and other regional states. He has commanded from the platoon to battalion level and served in assignments with 1st Special Forces Command, 5th Special Forces Group, 101st Airborne Division, Special Operations Command—Central, and 3rd Armored Cavalry Regiment.  William holds a Bachelor of Science from the United States Military Academy, a Master of Arts from Georgetown University’s Security Studies Program, a master’s degree from the Command and General Staff College, and a master’s degree from the School of Advanced Military Studies.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing Terrorism and Artificial Intelligence in 2050

Date Originally Written:  December 14, 2022.

Date Originally Published:  January 9, 2023.

Author and / or Article Point of View:  The author is an active-duty military member who believes that terrorists will pose increasing threats in the future as technology enables their operations.  

Summary:  The proliferation of artificial intelligence (AI) will enable terrorists in at least three ways.  First, they will be able to overcome their current manpower limitations in the proliferation of propaganda to increase recruitment.  Second, they will be able to use AI to improve target reconnaissance.  Third, terrorists can use AI to improve their attacks, including advanced unmanned systems and biological weapons.

Text:  Recent writing about the security implications of artificial intelligence (AI) has focused on the feasibility of a state like China, or others with totalitarian aspirations, building a modern panopticon, combining ubiquitous surveillance with massive AI-driven data processing and pattern recognition[1].  For years, other lines of research have analyzed the application of AI to fast-paced conventional warfare.  Less attention has focused on how AI could help the sub-state actor: the criminal, the insurgent, or the terrorist.  Nevertheless, history shows that new technologies have never given their users an enduring and decisive edge.  Either the technology proliferates or combatants find countermeasures.  Consequently, understanding how AI technology could enable terrorists is a first step in preventing future attacks.

The proliferation of AI has the potential to enable terrorists in much the same way that the proliferation of man-portable weapons and encrypted communications has enabled terrorists to become more lethal[2].  Terrorists, or other sub-state entrepreneurs of violence, may be able to employ AI to solve operational problems.  This preliminary analysis will look at three ways that violent underground groups could use AI in the coming decades: recruitment, reconnaissance, and attack.

The advent of mass media allowed the spread of radical ideological tracts at a pace that led to regional and then global waves of violence.  In 1848, revolutionary movements threatened most of the states in Europe.  Half a century later, a global yet diffuse anarchist movement led to the assassination of five heads of state and the beginning of World War I[3].  Global revolutionary movements during the Cold War and then the global Islamist insurgency against the modern world further capitalized on the increasing bandwidth, range, and volume of communication[4].  The sleek magazines and videos of the Islamic State are the latest iteration of terrorists’ use of modern communications to craft and distribute a message intended to find and radicalize recruits.  If they employ advanced AI, terrorist organizations will be able to increase the production rate of quality materials in multiple languages, far beyond what they are currently capable of producing with their limited manpower.  The recent advances in AI, most notably OpenAI’s ChatGPT, demonstrate that AIs will be capable of producing quality materials.  These materials will be increasingly sophisticated and nuanced in ways that resonate with vulnerable individuals, leading to increased radicalization and recruitment[5].

Once a terrorist organization has recruited a cadre of fighters, it can begin the process of planning and executing a terrorist attack, a key phase of which is reconnaissance.  AI could be an important tool here, enabling increased collection and analysis of data to find patterns of life and security vulnerabilities.  Distributed AI would allow terrorists conducting reconnaissance to collect and process vast quantities of information as opposed to relying on purely physical surveillance[6].  This AI use will speed up the techniques of open source intelligence collection and analysis, enabling the organization to identify the pattern of life of the employees of a targeted facility and to find gaps and vulnerabilities in its security.  Open-source imagery and technical information could provide valuable sources for characterizing targets.  AI could also drive open architecture devices that enable terrorists to collect and process signals across the electromagnetic spectrum, as well as sound waves[7].  In the hands of skilled users, AI will enable the collection and analysis of information that was previously unavailable, or only available to the most sophisticated state intelligence operations.  Moreover, as the systems that run modern societies increase in complexity, that complexity will create new, unanticipated failure modes, as the history of computer hacking or even the recent power grid attacks demonstrate[8].  

After conducting the target reconnaissance, terrorists could employ AI-enabled systems to facilitate or execute the attack.  The clearest example would be autonomous or semi-autonomous vehicles, which will pose increasing problems for facility protection in the future.  However, there are other ways that terrorists could employ AI to enable their attacks.  One idea would be to use AI agents to identify how they are vulnerable to facial recognition or other forms of pattern recognition.  Forewarned, the groups could use AI to generate deception measures to mislead security forces.  Using these AI-enabled disguises, the terrorists could conduct attacks with manned and unmanned teams.  The unmanned teammates could conduct parts of the operation that are too distant, dangerous, difficult, or restricted for their human teammates to action.  More frighteningly, the recent successes in applying machine learning and AI to understanding deoxyribonucleic acid (DNA) and proteins could be turned to making new biological and chemical weapons of increased lethality, transmissibility, or precision[9].  

Not all terrorist organizations will develop the sophistication to employ advanced AI across all phases of their operations.  However, AI will sustain and accelerate the arms race between security forces and terrorists.  Terrorists have applied most other human technologies in their effort to become more effective.  They will be able to apply AI to accelerate their propaganda and recruitment; target selection and reconnaissance; evasion of facial recognition and pattern analysis; unmanned attacks against fortified targets; manned-unmanned teamed attacks; and advanced biological and chemical attacks.  

One implication of this analysis is that the more distributed AI technology and access become, the more they will favor the terrorists.  Unlike the centralized mainframes of early science fiction visions of AI, the current trend is for AI to be distributed and widely available.  The more these technologies proliferate, the more defenders should be concerned.

The policy implications are that governments and security forces will continue their investments in technology to remain ahead of the terrorists.  In the West, this imperative to exploit new technologies, including AI, will increasingly bring security forces into conflict with the need to protect individual liberties and maintain strict limits on the potential for governmental abuse of power.  The balance in that debate between protecting liberty and protecting lives will have to evolve as terrorists grasp new technological powers.


Endnotes:

[1] For example, see “The AI-Surveillance Symbiosis in China: A Big Data China Event,” accessed December 16, 2022, https://www.csis.org/analysis/ai-surveillance-symbiosis-china-big-data-china-event; “China Uses AI Software to Improve Its Surveillance Capabilities | Reuters,” accessed December 16, 2022, https://www.reuters.com/world/china/china-uses-ai-software-improve-its-surveillance-capabilities-2022-04-08/.

[2] Andrew Krepinevich, “Get Ready for the Democratization of Destruction,” Foreign Policy, n.d., https://foreignpolicy.com/2011/08/15/get-ready-for-the-democratization-of-destruction/.

[3] Bruce Hoffman, Inside Terrorism, Columbia Studies in Terrorism and Irregular Warfare (New York: Columbia University Press, 2017).

[4] Ariel Victoria Lieberman, “Terrorism, the Internet, and Propaganda: A Deadly Combination,” Journal of National Security Law & Policy 9, no. 95 (April 2014): 95–124.

[5] See https://chat.openai.com/

[6] “The ABCs of AI-Enabled Intelligence Analysis,” War on the Rocks, February 14, 2020, https://warontherocks.com/2020/02/the-abcs-of-ai-enabled-intelligence-analysis/.

[7] “Extracting Audio from Visual Information,” MIT News | Massachusetts Institute of Technology, accessed December 16, 2022, https://news.mit.edu/2014/algorithm-recovers-speech-from-vibrations-0804.

[8] Miranda Willson, “Attacks on Grid Infrastructure in 4 States Raise Alarm,” E&E News, December 9, 2022, https://www.eenews.net/articles/attacks-on-grid-infrastructure-in-4-states-raise-alarm/; Dietrich Dörner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations (Reading, Mass: Perseus Books, 1996).

[9] Michael Eisenstein, “Artificial Intelligence Powers Protein-Folding Predictions,” Nature 599, no. 7886 (November 23, 2021): 706–8, https://doi.org/10.1038/d41586-021-03499-y.


Assessing the Foundation of Future U.S. Multi-Domain Operations

Marco J. Lyons is a Lieutenant Colonel in the U.S. Army who has served in tactical and operational Army, Joint, and interagency organizations in the United States, Europe, the Middle East, Afghanistan, and in the Western Pacific. He is currently a national security fellow at Harvard Kennedy School where he is researching strategy and force planning for war in the Indo-Pacific. He may be contacted at marco_lyons@hks.harvard.edu. The author thanks David E. Johnson for helpful comments on an earlier draft. Divergent Options’ content does not contain information of an official nature, nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessing the Foundation of Future U.S. Multi-Domain Operations 

Date Originally Written:  February 15, 2022. 

Date Originally Published:  March 14, 2022.

Author and / or Article Point of View:   The author believes that U.S. adversaries pose a greater threat if they outpace the U.S. in both technological development and integration.

Summary:  Both U.S. Joint Forces and potential adversaries are trying to exploit technology to lock in advantage across all domains. Through extensive human-machine teaming and a better ability to exploit both initiative (the human quality augmented by AI/ML) and non-linearity, Army/Joint forces must perform better, even if only marginally better, than adversaries – especially at the operational level – or they will lose the fight. 

Text:  The U.S. Army in Multi-Domain Operations (MDO) 2028, published in 2018, is a future operational concept – not doctrine – and not limited to fielded forces and capabilities[1]. A future operational concept consists of a “problem set,” a “solution set,” and an explanation for why the solution set successfully addresses the problem set[2]. Since 2018, there have been ongoing debates about what MDO are – whether they are a revamped AirLand Battle or the next evolution of joint operations[3]. Before the Army finishes translating the concept into doctrine, a relook at MDO through historical, theoretical, and doctrinal lenses is necessary. 

The historical context is the 1990-1991 Gulf War, the Korean War, and the European and Pacific Theaters of World War Two. The theoretical basis includes Clausewitzian war, combined arms, attrition, and Maneuver Warfare. The doctrinal basis includes not just AirLand Battle, but also AirLand Battle – Future (ALB–F), Non-Linear Operations, and the 2012 Capstone Concept for Joint Operations. ALB–F was meant to replace AirLand Battle as the Army’s operational concept for the 1990s, before the end of the Cold War and dissolution of the Soviet Union interrupted its development. Never incorporated into Field Manual (FM) 100-5, Operations, ALB–F emphasized the nonlinear battlefield and conceived of combat operations through information-based technologies[4]. 

This assessment presupposes the possibility of great power war, defines potential enemies as the Chinese People’s Liberation Army (PLA) and Russian Armed Forces, assumes the centrality of major theater operations, and accepts that the Army/Joint force may still have to operate in smaller-scale contingencies and against enemy forces that represent subsets of the PLA and Russian Armed Forces. By assuming the PLA and Russian Armed Forces, this conceptual examination is grounded in the characteristics of opposing joint forces, mechanized maneuver, and primarily area fires. 

The Army/Joint force faces a core problem at each level of war. At the strategic level, the problem is preclusion, i.e., potential adversaries will use instruments of national power to achieve strategic objectives before the U.S./coalition leaders have the opportunity to respond[5]. At the operational level, the central problem is exclusion[6]. Anti-access/area denial is just one part of operational exclusion. Potential adversaries will use military power to split combined/joint forces and deny U.S./coalition ability to maneuver and mass. The tactical problem is dissolution. By exploiting advantages at the strategic and operational levels, potential adversaries will shape tactical engagements to be close-to-even fights (and potentially uneven fights in their favor), causing attrition and attempting to break U.S./coalition morale, both in fighting forces and among civilian populations. 

The best area to focus conceptual effort combines the determination of alliance/coalition security objectives of the strategic level of warfare with the design of campaigns and major operations of the operational level of warfare. The Army/Joint force will only indirectly influence the higher strategic-policy level. The problem of preclusion will be addressed by national-multinational policy level decisions. The tactical level of warfare and its attendant problems will remain largely the same as they have been since 1917[7]. If Army/Joint forces are not able to win campaigns at the operational level and support a viable military strategy that is in concert with higher level strategy and policy, the outcomes in great power war (and other major theater wars) will remain in doubt. 

The fundamentals of operational warfare have not changed substantially, but the means available have shrunk in capacity and become outdated, capabilities have atrophied, and understanding has become confused. Today’s Unified Land Operations (ULO) doctrine is, not surprisingly, a product of full-dimensional and full-spectrum operations, which were themselves primarily informed by a geopolitical landscape free of great power threats. Applying ULO or even earlier ALB solution sets to great power threats will prove frustrating, or possibly even disastrous in war. 

Given the primary operational problem of contesting exclusion by peer-adversary joint and mechanized forces, using various forms of multi-system operations, future Army/Joint forces will have to move under constant threat of attack, “shoot” at various ranges across multiple domain boundaries, and communicate faster and more accurately than the enemy. One way to look at the operational demands listed above is to see them as parts of command and control (C2) warfare. C2 warfare, which has probably always been part of military operations, has emerged much more clearly since Napoleonic warfare[8]. Looking to the future, C2 warfare will probably evolve into something like “C4ISR warfare” with the advent of more automation, autonomous systems, artificial intelligence, machine learning, and deep neural networks. 

With technological advances, every force – or “node,” i.e., any ship, plane, or battalion – is able to act as “sensor,” “shooter,” and “communicator.” Command and control is a blend of intuition, creativity, and machine-assisted intelligence. Maximum exploitation of computing at the edge, tactical intranets (communication/data networks that grow and shrink depending on their AI-/ML-driven sensing of the environment), on-board deep data analysis, and laser/quantum communications will provide the advantage leaders need to win tactical fights through initiative and seizing the offense. Tactical intranets are also self-defending. Army/Joint forces prioritize advancement of an “internet of battle things” formed on self-sensing, self-organizing, and self-healing networks – the basic foundation of human-machine teaming[9]. All formations are built around cross-domain capabilities and human-machine teaming. To maximize cross-domain capabilities means that Army/Joint forces will accept the opportunities and vulnerabilities of non-linear operations. Linear warfare and cross-domain warfare are at odds. 
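
The “self-sensing, self-organizing, and self-healing” behavior described above can be illustrated at toy scale. The sketch below is a minimal illustration only, assuming a simple heartbeat-based failure detector; the node names and threshold are invented and not drawn from any fielded system. It shows how a node’s view of the network shrinks on its own when peers fall silent and regrows when they reappear:

```python
import time

FAILURE_THRESHOLD = 3.0  # seconds of silence before a peer is presumed lost


class MeshNode:
    """One node in a toy mesh: it tracks peers by their heartbeats and
    prunes any peer that falls silent, so membership 'heals' on its own."""

    def __init__(self, name):
        self.name = name
        self.last_seen = {}  # peer name -> timestamp of last heartbeat

    def receive_heartbeat(self, peer, now=None):
        # Any heartbeat (re)admits the peer to this node's membership view.
        self.last_seen[peer] = now if now is not None else time.time()

    def live_peers(self, now=None):
        # Membership shrinks automatically when peers stop reporting and
        # grows again when they reappear -- no central controller needed.
        now = now if now is not None else time.time()
        return [p for p, t in self.last_seen.items()
                if now - t < FAILURE_THRESHOLD]


# Usage: node "alpha" hears from two peers, then "bravo" goes silent.
alpha = MeshNode("alpha")
alpha.receive_heartbeat("bravo", now=100.0)
alpha.receive_heartbeat("charlie", now=100.5)
print(alpha.live_peers(now=101.0))  # ['bravo', 'charlie']
print(alpha.live_peers(now=103.2))  # ['charlie'] -- bravo presumed lost
```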

Future major operations are cross-domain. So campaigns are built out of airborne, air assault, air-ground and air-sea-ground attack, amphibious, and cyber-ground strike operations – all enabled by space warfare. This conception of MDO allows service forces to leverage unique historical competencies, such as the Navy’s Composite Warfare Commander concept and the Air Force’s concept of multi-domain operations between air, cyberspace, and space. The MDO idea presented here may also be seen – loosely – as a way to scale up DARPA’s Mosaic Warfare concept[10]. Scaling MDO to the operational level against the potential adversaries will also require combined forces for coalition warfare. 

MDO is an evolution of geopolitics, technology, and the character of war – and it will only grow out of a complete and clear-eyed assessment of the same. Army/Joint forces require a future operational concept to expeditiously address emerging doctrine, organization, training, materiel, leadership and education, personnel, facilities, and policy (DOTMLPF-P) demands. This idea of MDO could create a formidable Army/Joint force, but it cannot be based on superiority, let alone supremacy. Great power war holds out the prospect of massive devastation, and Army/Joint forces for MDO are only meant to deter sufficiently (not perfectly). Great power war will still be extended in time and scale, and Army/Joint forces for MDO are principally meant to help ensure the final outcome is never substantially in doubt. 


Endnotes:

[1] U.S. Department of the Army, Training and Doctrine Command, TRADOC Pamphlet 525-3-1, The U.S. Army in Multi-Domain Operations 2028 (Fort Eustis, VA: Government Printing Office, 2018). 

[2] U.S. Department of the Army, TRADOC Pamphlet 71-20-3, The U.S. Army Training and Doctrine Command Concept Development Guide (Fort Eustis, VA: Headquarters, United States Army Training and Doctrine Command, December 6, 2011), 5–6. 

[3] See Dennis Wille, “The Army and Multi-Domain Operations: Moving Beyond AirLand Battle,” New America website, October 1, 2019, https://www.newamerica.org/international-security/reports/army-and-multi-domain-operations-moving-beyond-airland-battle/; and Scott King and Dennis B. Boykin IV, “Distinctly Different Doctrine: Why Multi-Domain Operations Isn’t AirLand Battle 2.0,” Association of the United States Army website, February 20, 2019, https://www.ausa.org/articles/distinctly-different-doctrine-why-multi-domain-operations-isn’t-airland-battle-20. 

[4] Stephen Silvasy Jr., “AirLand Battle Future: The Tactical Battlefield,” Military Review 71, no. 2 (1991): 2–12. Also see Jeff W. Karhohs, AirLand Battle–Future—A Hop, Skip, or Jump? (Fort Leavenworth, KS: Army Command and General Staff College, School of Advanced Military Studies, 1990). 

[5] This of course reverses what the Army identified as a U.S. advantage – strategic preclusion – in doctrinal debates from the late 1990s. See James Riggins and David E. Snodgrass, “Halt Phase Plus Strategic Preclusion: Joint Solution for a Joint Problem,” Parameters 29, no. 3 (1999): 70–85. 

[6] U.S. Joint Forces Command, Major Combat Operations Joint Operating Concept, Version 2.0 (Norfolk, VA: Department of Defense, December 2006), 49–50. The idea of operational exclusion was also used by David Fastabend when he was Deputy Director, TRADOC Futures Center in the early 2000s. 

[7] World War I was a genuine military revolution. The follow-on revolutionary developments, like blitzkrieg, strategic bombing, carrier warfare, amphibious assaults, and information warfare, seem to be essentially operational level changes. See Williamson Murray, “Thinking About Revolutions in Military Affairs,” Joint Force Quarterly 16 (Summer 1997): 69–76. 

[8] See Dan Struble, “What Is Command and Control Warfare?” Naval War College Review 48, no. 3 (1995): 89–98. C2 warfare is variously defined and explained, but perhaps most significantly, it is generally included within broader maneuver warfare theory. 

[9] Alexander Kott, Ananthram Swami, and Bruce J. West, “The Internet of Battle Things,” Computer: The IEEE Computer Society 49, no. 12 (2016): 70–75. 

[10] Theresa Hitchens, “DARPA’s Mosaic Warfare — Multi Domain Ops, But Faster,” Breaking Defense website, September 10, 2019, https://breakingdefense.com/2019/09/darpas-mosaic-warfare-multi-domain-ops-but-faster/. 


Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Zac Rogers is an academic from Adelaide, South Australia. Zac has published in journals including International Affairs, The Cyber Defense Review, Joint Force Quarterly, and Australian Quarterly, and communicates with a wider audience across various multimedia platforms regularly. Parasitoid is his first book.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Date Originally Written:  January 3, 2022.

Date Originally Published:  January 17, 2022.

Author and / or Article Point of View:  The author is an Australia-based academic whose research combines a traditional grounding in national security, intelligence, and defence with emerging fields of social cybersecurity, digital anthropology, and democratic resilience.  The author works closely with industry and government partners across multiple projects. 

Summary:  Military investment in war-gaming, table-top exercises, scenario planning, and future force design is increasing.  Some of this investment focuses on adversary activities in the “cognitive domain.”  While this investment is necessary, it may fail because it anchors on data-driven machine learning and automation, for both offensive and defensive purposes, without a clear understanding of their appropriateness. 

Text:  In 2019 the author wrote a short piece for the U.S. Army’s MadSci website titled “In the Cognitive War, the Weapon is You![1]” This article attempted to spur self-reflection by the national security, intelligence, and defence communities in Australia, the United States and Canada, Europe, and the United Kingdom.  At the time these communities were beginning to incorporate discussion of “cognitive” security/insecurity in their near-future threat assessments and future force design discourses. The article is cited in the North Atlantic Treaty Organization (NATO) Cognitive Warfare document of 2020[2] – either in ways that demonstrate the misunderstanding directly, or as part of a wider context in which the point of that particular title is thoroughly misinterpreted. The author’s desired self-reflection has not been forthcoming. Instead, and not unexpectedly, the discourse on the cognitive aspects of contemporary conflict has consumed and regurgitated a familiar sequence of errors which will continue to perpetuate rather than mitigate the problem if not addressed head-on.  

What the cognitive threat is

The primary cognitive threat is us[3]. The threat is driven by a combination of factors.  First is techno-futurist hubris, which exists as a permanently recycling feature of late-modern military thought.  Second is a precipitous slide into scientism, which military thinkers and the organisations they populate have not avoided[4].  Third is the commercial and financial rent-seeking which overhangs military affairs as a by-product of private-sector-led R&D activities and government dependence on, and cultivation of, those activities, increasingly so since the 1990s[5].  Last is adversary awareness of these dynamics and an increasing willingness and capacity to manipulate and exacerbate them via the multitude of vulnerabilities ushered in by digital hyper-connectivity[6]. In other words, before the cognitive threat is an operational and tactical menace to be addressed and countered by the joint force, it is a central feature of the deteriorating epistemic condition of the late modern societies in which said forces operate and from which their personnel, funding, R&D pathways, doctrine and operating concepts, epistemic communities, and strategic leadership emerge. 

What the cognitive threat is not   

The cognitive threat is not what adversary military organisations and their patrons are doing in and to the information environment with regard to activities other than kinetic military operations. Terms for adversarial activities occurring outside of conventional lethal/kinetic combat operations – such as the “grey-zone” and “below-the-threshold” – describe time-honoured tactics by which adversaries engage in methods aimed at weakening and sowing dysfunction in the social and political fabric of competitor or enemy societies.  These tactics are used to gain advantage in areas not directly involving military conflict, or in areas likely to be critical to military preparedness and mobilization in times of war[7]. A key stumbling block here is obvious: it is often difficult to know which intentions such tactics express. This is not cognitive warfare. It is merely typical of contention across and between cross-cultural communities, and of the permanent unwillingness of contending societies to accord with the other’s rules. Information warfare – particularly influence operations traversing the Internet and exploiting the dominant commercial operations found there – is part of this mix of activities which belong under the normal paradigm of competition between states for strategic advantage. Active measures – influence operations designed to self-perpetuate – have found fertile new ground on the Internet but are not new to the arsenals of intelligence services and, as Thomas Rid has warned, while they proliferate, they are more unpredictable and difficult to control than they were in the pre-Internet era[8]. None of this is cognitive warfare either. Unfortunately, current and recent discourse has lapsed into the error of treating it as such[9], leading to all manner of self-inflicted confusion[10]. 

Why the distinction matters

Two trends emerge from the abovementioned confusion that represent the most immediate threat to the military enterprise[11]. Firstly, private-sector vendors and the consulting and lobbying industry they employ are busily pitching technological solutions based on machine learning and automation which have been developed in commercial business settings in which sensitivity to error is not high[12]. While militaries experiment with this raft of technologies – eager to be seen at the vanguard of emerging tech, to justify R&D budgets and stave off defunding, or simply out of habit – they incur opportunity cost.  This cost is twofold: stultified investment in the human potential which strategic thinkers have long identified as the real key to actualizing new technologies[13], and entry into path dependencies with behemoth corporate actors whose strategic goal is the cultivation of rentier relations, not excluding the ever-lucrative military sector[14]. 

Secondly, to the extent that automation and machine learning technologies enter the operational picture, cognitive debt is accrued as the military enterprise becomes increasingly dependent on fallible tech solutions[15]. Under battle conditions, the first assumption is the contestation of the electromagnetic spectrum on which all digital information technologies depend for basic functionality. Automated data-gathering and analysis tools suffer from heavy reliance on data availability and integrity.  When these tools are unavailable, any joint multinational force will require multiple redundancies, not only in terms of technology but, more importantly, in terms of leadership and personnel competencies. It is unclear where the military enterprise draws the line in terms of the likely cost-benefit ratio when it comes to experimenting with automated machine learning tools and the contexts in which they ought to be applied[16]. Unfortunately, experimentation is never cost-free. When civilian/military boundaries are blurred to the extent they are now as a result of the digital transformation of society, such experimentation requires consideration in light of all of its implications, including for the integrity and functionality of open democracy as the entity being defended[17]. 
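
The redundancy point can be made concrete. An automated analytic pipeline needs an explicit gate on data availability and integrity, with a defined handoff to human judgment when the gate fails. The sketch below is a minimal illustration of that design; the feed structure, thresholds, and function names are invented for illustration and not drawn from any real system:

```python
from dataclasses import dataclass


@dataclass
class Feed:
    """A toy sensor feed: a batch of readings plus a staleness flag."""
    readings: list
    stale: bool


def automated_assessment(feed: Feed) -> str:
    # Stand-in for the machine-learning analysis step.
    return f"auto: assessed pattern over {len(feed.readings)} readings"


def assess(feed: Feed, min_readings: int = 10) -> str:
    # Gate the automated path on availability and integrity; otherwise
    # degrade gracefully to a flagged human-review task rather than
    # silently producing analysis from bad or missing data.
    if feed.stale or len(feed.readings) < min_readings:
        return "degraded: route to human analyst for manual assessment"
    return automated_assessment(feed)


print(assess(Feed(readings=list(range(50)), stale=False)))  # automated path
print(assess(Feed(readings=[1, 2, 3], stale=False)))        # falls back
```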

The first error of misinterpreting the meaning and bounds of cognitive insecurity is compounded by a second mistake: what the military enterprise chooses to invest time, attention, and resources into tomorrow[18]. Path dependency, technological lock-in, and opportunity cost all loom large if digital information age threats are misinterpreted. This is the solipsistic nature of the cognitive threat at work – the weapon really is you! Putting one’s feet in the shoes of the adversary, nothing could be more pleasing than seeing that threat self-perpetuate. As a first step, militaries could organise and invest immediately in a strategic technology assessment capacity[19] free from the biases of rent-seeking vendors and lobbyists who, by definition, will not only not pay the costs of mission failure, but stand to benefit from rentier-like dependencies that emerge as the military enterprise pays the corporate sector to play in the digital age. 


Endnotes:

[1] Zac Rogers, “158. In the Cognitive War – The Weapon Is You!,” Mad Scientist Laboratory (blog), July 1, 2019, https://madsciblog.tradoc.army.mil/158-in-the-cognitive-war-the-weapon-is-you/.

[2] Francois du Cluzel, “Cognitive Warfare” (Innovation Hub, 2020), https://www.innovationhub-act.org/sites/default/files/2021-01/20210122_CW%20Final.pdf.

[3] “us” refers primarily but not exclusively to the national security, intelligence, and defence communities taking up discourse on cognitive security and its threats including Australia, the U.S., U.K., Europe, and other liberal democratic nations. 

[4] Henry Bauer, “Science in the 21st Century: Knowledge Monopolies and Research Cartels,” Journal of Scientific Exploration 18 (December 1, 2004); Matthew B. Crawford, “How Science Has Been Corrupted,” UnHerd, December 21, 2021, https://unherd.com/2021/12/how-science-has-been-corrupted-2/; William A. Wilson, “Scientific Regress,” First Things, May 2016, https://www.firstthings.com/article/2016/05/scientific-regress; Philip Mirowski, Science-Mart (Harvard University Press, 2011).

[5] Dima P Adamsky, “Through the Looking Glass: The Soviet Military-Technical Revolution and the American Revolution in Military Affairs,” Journal of Strategic Studies 31, no. 2 (2008): 257–94, https://doi.org/10.1080/01402390801940443; Linda Weiss, America Inc.?: Innovation and Enterprise in the National Security State (Cornell University Press, 2014); Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths (Penguin UK, 2018).

[6] Timothy L. Thomas, “Russian Forecasts of Future War,” Military Review, June 2019, https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/MJ-19/Thomas-Russian-Forecast.pdf; Nathan Beauchamp-Mustafaga, “Cognitive Domain Operations: The PLA’s New Holistic Concept for Influence Operations,” China Brief, The Jamestown Foundation 19, no. 16 (September 2019), https://jamestown.org/program/cognitive-domain-operations-the-plas-new-holistic-concept-for-influence-operations/.

[7] See Peter Layton, “Social Mobilisation in a Contested Environment,” The Strategist, August 5, 2019, https://www.aspistrategist.org.au/social-mobilisation-in-a-contested-environment/; Peter Layton, “Mobilisation in the Information Technology Era,” The Forge (blog), n.d., https://theforge.defence.gov.au/publications/mobilisation-information-technology-era.

[8] Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare, Illustrated edition (New York: MACMILLAN USA, 2020).

[9] For example see Jake Harrington and Riley McCabe, “Detect and Understand: Modernizing Intelligence for the Gray Zone,” CSIS Briefs (Center for Strategic & International Studies, December 2021), https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/211207_Harrington_Detect_Understand.pdf?CXBQPSNhUjec_inYLB7SFAaO_8kBnKrQ; du Cluzel, “Cognitive Warfare”; Kimberly Underwood, “Cognitive Warfare Will Be Deciding Factor in Battle,” SIGNAL Magazine, August 15, 2017, https://www.afcea.org/content/cognitive-warfare-will-be-deciding-factor-battle; Nicholas D. Wright, “Cognitive Defense of the Joint Force in a Digitizing World” (Pentagon Joint Staff Strategic Multilayer Assessment Group, July 2021), https://nsiteam.com/cognitive-defense-of-the-joint-force-in-a-digitizing-world/.

[10] Zac Rogers and Jason Logue, “Truth as Fiction: The Dangers of Hubris in the Information Environment,” The Strategist, February 14, 2020, https://www.aspistrategist.org.au/truth-as-fiction-the-dangers-of-hubris-in-the-information-environment/.

[11] For more on this see Zac Rogers, “The Promise of Strategic Gain in the Information Age: What Happened?,” Cyber Defense Review 6, no. 1 (Winter 2021): 81–105.

[12] Rodney Brooks, “An Inconvenient Truth About AI,” IEEE Spectrum, September 29, 2021, https://spectrum.ieee.org/rodney-brooks-ai.

[13] Michael Horowitz and Casey Mahoney, “Artificial Intelligence and the Military: Technology Is Only Half the Battle,” War on the Rocks, December 25, 2018, https://warontherocks.com/2018/12/artificial-intelligence-and-the-military-technology-is-only-half-the-battle/.

[14] Jathan Sadowski, “The Internet of Landlords: Digital Platforms and New Mechanisms of Rentier Capitalism,” Antipode 52, no. 2 (2020): 562–80, https://doi.org/10.1111/anti.12595.

[15] For problematic example see Ben Collier and Lydia Wilson, “Governments Try to Fight Crime via Google Ads,” New Lines Magazine (blog), January 4, 2022, https://newlinesmag.com/reportage/governments-try-to-fight-crime-via-google-ads/.

[16] Zac Rogers, “Discrete, Specified, Assigned, and Bounded Problems: The Appropriate Areas for AI Contributions to National Security,” SMA Invited Perspectives (NSI Inc., December 31, 2019), https://nsiteam.com/discrete-specified-assigned-and-bounded-problems-the-appropriate-areas-for-ai-contributions-to-national-security/.

[17] Emily Bienvenue and Zac Rogers, “Strategic Army: Developing Trust in the Shifting Strategic Landscape,” Joint Force Quarterly 95 (November 2019): 4–14.

[18] Zac Rogers, “Goodhart’s Law: Why the Future of Conflict Will Not Be Data-Driven,” Grounded Curiosity (blog), February 13, 2021, https://groundedcuriosity.com/goodharts-law-why-the-future-of-conflict-will-not-be-data-driven/.

[19] For expansion see Zac Rogers and Emily Bienvenue, “Combined Information Overlay for Situational Awareness in the Digital Anthropological Terrain: Reclaiming Information for the Warfighter,” The Cyber Defense Review (Summer 2021), https://cyberdefensereview.army.mil/Portals/6/Documents/2021_summer_cdr/06_Rogers_Bienvenue_CDR_V6N3_2021.pdf?ver=6qlw1l02DXt1A_1n5KrL4g%3d%3d.


Simple Lethality: Assessing the Potential for Agricultural Unmanned Aerial and Ground Systems to Deploy Biological or Chemical Weapons

William H. Johnson, CAPT, USN/Ret, holds a Master of Aeronautical Science (MAS) from Embry-Riddle Aeronautical University, and a MA in Military History from Norwich University. He is currently an Adjunct Assistant Professor at Embry-Riddle in the College of Aeronautics, teaching unmanned system development, control, and interoperability. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Simple Lethality: Assessing the Potential for Agricultural Unmanned Aerial and Ground Systems to Deploy Biological or Chemical Weapons

Date Originally Written:  August 15, 2020.

Date Originally Published:  November 18, 2020.

Author and / or Article Point of View:  The author is a retired U.S. Naval Flight Officer who held command of the Navy’s sole unmanned air system squadron between 2001 and 2002. He has presented technical papers on unmanned systems, published on the same in professional journals, and has taught unmanned systems since 2016. The article is written from the point of view of an American analyst considering military force vulnerability to small, improvised, unmanned aerial or ground systems, hereafter collectively referred to as UxS, equipped with existing technology for agricultural chemical dispersal over a broad area.

Summary:  Small, locally built unmanned vehicles, similar to those used in agriculture, can easily be configured to release a chemical or biological payload. Air-dispersed agents could be released over a populated area with low likelihood of either interdiction or traceability. Domestic counter-UAS efforts can not only eliminate annoying imagery collection, but also mitigate the growing potential for an inexpensive chemical or biological weapon attack on U.S. soil.

Text:  The ongoing development and improvement of UxS – primarily aerial, but also ground-operated – to optimize efficiency in the agricultural arena are matters of pride among manufacturers.  These developments and improvements are of interest to regulatory bodies such as the Federal Aviation Administration, and offer an opportunity to those seeking to conduct easy chemical or biological operations on U.S. soil. While the latter note, concerning opportunity for enemies, may appear flippant and simplistic at first blush, it is the most important one on the list. Accepting the idea that hostile entities consider environment and objective(s) when choosing physical or cyber attack platforms, the availability of chemical-dispersing unmanned vehicles with current system control options makes such weapons not only feasible, but ideal[1].

Commercially available UxS, such as the Yamaha RMAX[2] or the DJI Agras MG-1[3], can be launched remotely and, with a simple, available autopilot, fly a pre-programmed course until fuel exhaustion. These capabilities offer the opportunity for an insurgent to recruit a similarly minded, hobbyist-level UAS builder to acquire the necessary parts and assemble the vehicle in private. The engineering of such a small craft, even one as large as the RMAX, is quite simple, and the parts could be innocuously and anonymously acquired by anyone with a credit card. Even assembling a 25-liter dispersal tank and setting a primitive timer for release would not be complicated.

With such a simple, garage-built craft, the dispersal tank could be filled with either chemical or biological material and launched at any time from a suburban convenience store parking lot.  The craft could then execute a straight-and-level flight path over an unaware downtown area and disperse its tank contents at a predetermined time-of-flight. This is clearly not a precision mission, but it would be quite easy to fund and execute[4].

The danger lies in the simplicity[5]. As an historical example, Nazi V-1 “buzz bombs” in World War II were occasionally pointed at a target and fueled to match the rough, desired time of flight needed to cross the planned distance. The V-1 would then simply fall out of the sky once out of fuel. Existing autopilots for any number of commercially available UxS are far more sophisticated than that, and easy to obtain. The attack previously described would be difficult to trace and almost impossible to predict, especially if assembly were done with simple parts from a variety of suppliers. The extrapolated problem is that without indication or warning, even presently available counter-UxS technology would have no reason to be brought to bear until after the attack. The cost, given the potential for terror and destabilization, would be negligible to an adversary. The ability to fly such missions simultaneously over a number of metropolitan areas could create devastating consequences in terms of panic.

The current mitigations to UxS are few, but somewhat challenging to an entity planning such a mission. Effective chemical or weaponized biological material is well-tracked by a variety of global organizations.  As such, movement of any amount of such material into the United States would be quite difficult for even the best-resourced individuals or groups. Additionally, there are some unique parts necessary for construction of a heavier-lift rotary vehicle.  With some effort, those parts could be cataloged under processes similar to existing import-export control policies and practices.

Finally, the expansion of machine-learning-driven artificial intelligence, the ongoing improvement in battery storage, and the ubiquity of UxS hobbyists and their products make this type of threat more feasible by the day. Current domestic counter-UxS technologies have been developed largely in response to safety threats posed by small UxS to manned aircraft, and also because of the potential for unapproved imagery collection and privacy violation. To those rationales, it will soon be time to add small-scale counter-weapons-of-mass-destruction defense.


Endnotes:

[1] Ash Rossiter, “Drone usage by militant groups: exploring variation in adoption,” Defense & Security Analysis, 34:2, 113-126, https://doi.org/10.1080/14751798.2018.1478183

[2] Elan Head, “FAA grants exemption to unmanned Yamaha RMAX helicopter.” Verticalmag.com, online: https://www.verticalmag.com/news/faagrantsexemptiontounmannedyamaharmaxhelicopter Accessed: August 15, 2020

[3] One example of this vehicle is available online at https://ledrones.org/product/dji-agras-mg-1-octocopter-argriculture-drone-ready-to-fly-bundle Accessed: August 15, 2020

[4] “FBI: Man plotted to fly drone-like toy planes with bombs into school” (2014). CBS News. Retrieved from https://www.cbsnews.com/news/fbi-man-in-connecticut-plotted-to-fly-drone-like-toy-planes-with-bombs-into-school Accessed: August 10, 2020

[5] Wallace, R. J., & Loffi, J. M. (2015). Examining Unmanned Aerial System Threats & Defenses: A Conceptual Analysis. International Journal of Aviation, Aeronautics, and Aerospace, 2(4). https://doi.org/10.15394/ijaaa.2015.1084


Assessing the Chinese People’s Liberation Army’s Surreptitious Artificial Intelligence Build-Up

Editor’s Note:  This article is part of our Below Threshold Competition: China writing contest which took place from May 1, 2020 to July 31, 2020.


Richard Tilley is a strategist within the Office of the Secretary of Defense. Previously, Richard served as a U.S. Army Special Forces Officer and a National Security Advisor in the U.S. House of Representatives. He is on Twitter @RichardTilley6 and on LinkedIn. The views contained in this article are the author’s alone and do not represent the views of the Department of Defense or the United States Government.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization or any group.


Title:  Assessing the Chinese People’s Liberation Army’s Surreptitious Artificial Intelligence Build-Up

Date Originally Written:  July 6, 2020.

Date Originally Published:  August 14, 2020.

Author and / or Article Point of View:  The author is an unconventional warfare scholar and strategist. He believes renewed American interest in great power competition and Chinese approaches to unrestricted warfare require the United States national security apparatus to better appreciate the disruptive role advanced technology will play on the future battlefield.

Summary:  China’s dreams of regional and global hegemony require a dominant People’s Liberation Army that faces the dilemma of accruing military power while not raising the ire of the United States. To meet this challenge, the Chinese Communist Party has bet heavily on artificial intelligence as a warfighting game-changer that it can acquire surreptitiously while remaining below the threshold of armed conflict with the United States.

Text:  President Xi Jinping’s introduction of “The China Dream” in 2013 offers the latest iteration of the Chinese Communist Party’s (CCP) decades-long quest to establish China in its rightful place atop the global hierarchy. To achieve this goal, Xi calls for “unison” between China’s newfound soft power and the People’s Liberation Army’s (PLA) hard power[1]. But, by the CCP’s own admission, “The PLA still lags far behind the world’s leading militaries[2].” Cognizant of this capability deficit, Beijing adheres to the policy of former Chairman Deng Xiaoping, “Hide your strength, bide your time,” until the influence of the Chinese military can match that of the Chinese economy.

For the PLA, Deng’s maxim presents a dilemma: how to build towards militarily eclipsing the United States while remaining below the threshold of eliciting armed response. Beijing’s solution is to bet heavily on artificial intelligence (AI) and its potential to upend the warfighting balance of power.

In simple terms, AI is the ability of machines to perform tasks that normally require human intelligence. AI is not a piece of hardware but rather a technology integrated into nearly any system that enables computing more quickly, accurately, and intuitively. AI works by combining massive amounts of data with powerful, iterative algorithms to identify new associations and rules hidden therein. By applying these associations and rules to new scenarios, scientists hope to produce AI systems with reasoning and decision-making capabilities matching or surpassing those of humans.
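
As a toy illustration of how “iterative algorithms” surface associations hidden in data, the minimal sketch below fits a one-variable logistic model by making thousands of small corrections until it recovers a threshold rule from labeled examples. The numbers are invented for illustration; no military system or real dataset is modeled:

```python
import math

# Toy data: one feature x with a hidden rule (labels switch near x = 2.5).
xs = [0.5, 1.0, 1.5, 2.0, 3.0, 3.5, 4.0, 4.5]
ys = [0,   0,   0,   0,   1,   1,   1,   1]

w, b = 0.0, 0.0        # model parameters, learned from the data
learning_rate = 0.5

for _ in range(2000):  # the "iterative" part: thousands of tiny updates
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
        grad_w += (p - y) * x                     # log-loss gradient terms
        grad_b += (p - y)
    w -= learning_rate * grad_w / len(xs)
    b -= learning_rate * grad_b / len(xs)

# The learned rule approximates the hidden association: predicted
# probabilities climb steeply near the true threshold of about 2.5.
for x in (1.0, 2.0, 3.0, 4.0):
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    print(f"x = {x}: P(y=1) = {p:.2f}")
```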

China’s quest for regional and global military dominance has led to a search for a “Revolution in Military Affairs (RMA) with Chinese characteristics[3].” An RMA is a game-changing evolution in warfighting that upends the balance of power. In his seminal work on the subject, former Under Secretary of Defense Michael Vickers found eighteen such innovations in history, including massed infantry, artillery, the railroad, the telegraph, and atomic weapons[4]. In each case, a military power introduces a disruptive technology or tactic that rapidly and enduringly changes warfighting. The PLA believes that AI can be its game-changer in the next conflict.

Evidence of the PLA’s confidence in AI abounds. Official People’s Republic of China (PRC) documents from 2017 called for “The use of new generation AI technologies as a strong support to command decision-making, military deductions [strategy], and defense equipment, among other applications[5].” Beijing matched this rhetoric with considerable funding, which the U.S. Department of Defense estimated at $12 billion in 2017 and growing to as much as $70 billion in 2020[6].

AI’s potential impact in a Western Pacific military confrontation is significant. Using AI, PLA intelligence systems could detect, identify, and assess the possible intent of U.S. carrier strike groups more quickly and with greater accuracy than traditional human analysis. Then, PLA strike systems could launch swarming attacks coordinated by AI that overwhelm even the most advanced American aerial and naval defenses. Adding injury to insult, the PLA’s AI systems will learn from this engagement to strike the U.S. Military with even more efficacy in the future.

While pursuing AI, the CCP must still address the dilemma of staying below the threshold of armed conflict – thus the CCP masterfully conceals moves designed to give it an AI advantage. In the AI arms race, there are two key components: technology and data. To surpass the United States, China must dominate both, but it must do so surreptitiously.

AI systems require several technical components to operate optimally, including the talent, algorithms, and hardware on which they rely. Though Beijing is pouring untold resources into developing first-rate domestic capacity, it still relies on offshore sources for AI tech. To acquire this foreign know-how surreptitiously, the CCP engages in insidious foreign direct investment, joint ventures, cyber espionage, and talent acquisition[7] as a shortcut while it builds domestic AI production.

Successful AI also requires access to mountains of data. Generally, the more data input the better the AI output. To build these data stockpiles, the CCP routinely exploits its own citizens. National security laws passed in 2014 and 2017 mandate that Chinese individuals and organizations assist the state security apparatus when requested[8]. The laws make it possible for the CCP to easily collect and exploit Chinese personal data that can then be used to strengthen the state’s internal security apparatus – powered by AI. The chilling efficacy seen in controlling populations in Xinjiang and Hong Kong can be transferred to the international battlefield.

Abroad, the CCP leverages robust soft power to gain access to foreign data. Through programs like the Belt and Road Initiative, China offers low-cost modernization to tech-thirsty customers. Once installed, the host’s upgraded security, communication, or economic infrastructure allows Beijing to capture overseas data that reinforces its AI data sets and increases its understanding of the foreign environment[9]. This data enables the PLA to better train AI warfighting systems to operate anywhere in the world.

If the current trends hold, the United States is at risk of losing the AI arms race and hegemony in the Western Pacific along with it. Despite proclaiming that, “Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States[10],” Washington is only devoting $4.9 billion to unclassified AI research in fiscal year 2020[11], just seven percent of Beijing’s investment.

To keep pace, the United States can better comprehend and appreciate the consequences of allowing the PLA to dominate AI warfighting in the future. The stakes of the AI race are not dissimilar to the race for nuclear weapons during World War II. Only by approaching AI with the same interest, investment, and intensity as the Manhattan Project can U.S. military hegemony hope to be maintained.


Endnotes:

[1] Page, J. (2013, March 13). For Xi, a ‘China Dream’ of Military Power. Wall Street Journal Retrieved June 20, 2020 from https://www.wsj.com/articles/SB10001424127887324128504578348774040546346

[2] The State Council Information Office of the People’s Republic of China. (2019). China’s National Defense in the New Era. (p. 6)

[3] Ibid.

[4] Vickers, M. G. (2010). The structure of military revolutions (Doctoral dissertation, Johns Hopkins University) (pp. 4-5). UMI Dissertation Publishing.

[5] PRC State Council, (2017, July 17). New Generation Artificial Intelligence Plan. (p. 1)

[6] Pawlyk, O. (2018, July 30). China Leaving the US behind on Artificial Intelligence: Air Force General. Military.com. Retrieved June 20, 2020 from https://www.military.com/defensetech/2018/07/30/china-leaving-us-behind-artificial-intelligence-air-force-general.html

[7] O’Conner, S. (2019). How Chinese Companies Facilitate Technology Transfer from the United States. U.S. – China Economic and Security Review Commission. (p. 3)

[8] Kharpal, A. (2019, March 5). Huawei Says It Would Never Hand Data to China’s Government. Experts Say It Wouldn’t Have a Choice. CNBC. Retrieved June 20, 2020 from https://www.cnbc.com/2019/03/05/huawei-would-have-to-give-data-to-china-government-if-asked-experts.html

[9] Chandran, N. (2018, July 12). Surveillance Fears Cloud China’s ‘Digital Silk Road.’ CNBC. Retrieved June 20, 2020 from https://www.cnbc.com/2018/07/11/risks-of-chinas-digital-silk-road-surveillance-coercion.html

[10] Trump, D. (2019, February 14). Executive Order 13859 “Maintaining American Leadership in Artificial Intelligence.” Retrieved June 20, 2020 from https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence

[11] Cornillie, C. (2019, March 28). Finding Artificial Intelligence Research Money in the Fiscal 2020 Budget. Bloomberg Government. Retrieved June 20, 2020 from https://about.bgov.com/news/finding-artificial-intelligence-money-fiscal-2020-budget


Alternative Future: The Perils of Trading Artificial Intelligence for Analysis in the U.S. Intelligence Community

John J. Borek served as a strategic intelligence analyst for the U.S. Army and later as a civilian intelligence analyst in the U.S. Intelligence Community.  He is currently an adjunct professor at Grand Canyon University where he teaches courses in governance and public policy. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Alternative Future: The Perils of Trading Artificial Intelligence for Analysis in the U.S. Intelligence Community

Date Originally Written:  June 12, 2020.

Date Originally Published:  August 12, 2020.

Author and / or Article Point of View:  The article is written as an excerpt from a U.S. Congressional inquiry into an intelligence failure and the loss of Taiwan to China in 2035.

Summary:  The growing reliance on Artificial Intelligence (AI) to provide situational awareness and predictive analysis within the U.S. Intelligence Community (IC) created an opportunity for China to execute a deception plan, one that resulted in the sudden and complete loss of Taiwan’s independence in 2035.

Text:  The U.S. transition away from humans performing intelligence analysis to the use of AI was an inevitable progression as the amount of data collected for analysis reached levels humans could not hope to manage[1], while machine learning and artificial neural networks simultaneously developed to the level that they could match, if not outperform, human reasoning[2]. The integration of data scientists with analytic teams, which began in 2020, resulted in the attrition of both regional and functional analysts and the transformation of the duties of those remaining to those of editor and briefer[3][4].

Initial successes in the transition led to increasing trust and complacency. The “Black Box” program demonstrated its first major success in identifying terrorist networks and forecasting terrorist actions by fusing social media, network analysis, and clandestine collection, culminating in the successful preemption of the 2024 Freedom Tower attack. Moving beyond tactical successes, by 2026 Black Box was successfully analyzing climatological data, historical migration trends, and social behavior models to correctly forecast the sub-Saharan African drought and resulting instability, allowing the State Department to build a coalition of concerned nations and respond proactively to the event, mitigating human suffering and unrest.

The cost advantages and successes large and small resulted in the IC transitioning from a community of 17 coordinating analytic centers into a group of user agencies. In 2028, despite the concerns of this Committee, all analysis was centralized at the Office of the Director of National Intelligence under Black Box. Testimony at the time indicated that there was no longer any need for competitive or agency-specific analysis: the algorithms of Black Box considered all likely possibilities more thoroughly and efficiently than human analysts could. Beginning that Fiscal Year, the data scientists of the different agencies of the IC accessed Black Box for the analysis their decision makers needed. Also that year, the coordination process for National Intelligence Estimates and Intelligence Community Assessments was eliminated; as the intelligence and analysis were uniform across all agencies of government, there was no longer any need for contentious, drawn-out analytic sessions which only delayed delivery of the analysis to policy makers.

Regarding the current situation in the Pacific, there was never a doubt that China sought unification with Taiwan on its own terms, and the buildup and modernization of Chinese forces over the last several decades caused concern within both the U.S. and Taiwan governments[5]. This Committee could find no fault with the priority that China had been given within the National Intelligence Priorities Framework. The roots of this intelligence failure lie in the IC’s inability to factor the possibility of deception into the algorithms of the Black Box program[6].

AI relies on machine learning, and it was well known that machines could learn biases based on the data that they were given and their algorithms[7][8]. Given the Chinese lead in AI development and applications, and their experience in using AI to manage people and their perceptions[9][10], the Committee believes that the IC should have anticipated the potential for the virtual grooming of Black Box. As a result of this intelligence postmortem, we now know that four years before the loss of Taiwan the People’s Republic of China began its deception operation in earnest through the piecemeal release of false plans and strategy through multiple open and clandestine sources. As reported in the National Intelligence Estimate published just six months before the attack, China’s military modernization and procurement plan “confirmed” to Black Box that China was preparing to invade and reunify with Taiwan using overwhelming conventional military forces in 2043 to commemorate the 150th anniversary of Mao Zedong’s birth.
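
To make the grooming mechanism concrete, the sketch below shows how systematically mislabeled training data skews a classifier. It is a minimal, entirely synthetic illustration, not a reconstruction of Black Box; the single feature, the labels, and the sample sizes are all invented for the example.

```python
# A minimal, synthetic sketch of training-data poisoning: the adversary
# floods collection with genuine-looking signatures that carry false labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# One invented feature (intensity of observed military preparations);
# label 1 = invasion imminent, label 0 = routine activity.
X_clean = np.vstack([rng.normal(0.0, 1.0, (500, 1)),   # routine activity
                     rng.normal(3.0, 1.0, (500, 1))])  # pre-invasion signatures
y_clean = np.array([0] * 500 + [1] * 500)

# Planted reporting: high-preparation signatures deliberately labeled "routine."
X_poison = rng.normal(3.0, 1.0, (800, 1))
y_poison = np.zeros(800, dtype=int)

model = LogisticRegression().fit(np.vstack([X_clean, X_poison]),
                                 np.concatenate([y_clean, y_poison]))

# A genuine pre-invasion signature now scores as more likely routine.
print(model.predict_proba([[3.0]]))
```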

What was hidden from Black Box and the IC was that China was also embarking on a parallel plan of adapting the lessons learned from Russia’s invasions of Georgia and Ukraine. Using their own AI systems, China rehearsed and perfected a plan to use previously infiltrated special operations forces, airborne and heliborne forces, information warfare, and other asymmetric tactics to overcome Taiwan’s military superiority and geographic advantage. Individual training of these small units went unnoticed and was categorized as unremarkable and routine.

We now know that three months prior to the October 2035 attack, North Korea, at China’s request, began a series of escalating provocations in the Sea of Japan which alerted Black Box to a potential crisis and diverted U.S. military and diplomatic resources. At the same time, biometric tracking and media surveillance of key personalities in Taiwan who had previously been identified as crucial to a defense of the island was stepped up, allowing for their quick elimination by Chinese Special Operations Forces (SOF).

While we cannot determine with certainty when the first Chinese SOF infiltrated Taiwan, we know that by October 20, 2035 their forces were in place and Operation Homecoming received the final go-ahead from the Chinese President. The asymmetric tactics, combined with limited precision kinetic strikes and the inability of the U.S. to respond due to its preoccupation 1,300 miles away, resulted in a surprisingly quick collapse of Taiwanese resistance. Within five days enough conventional forces had been ferried to the island to secure China’s hold on it and make any attempt to liberate it untenable.

Unlike our 9/11 report, which found that human analysts were unable to “connect the dots” of the information they had[11], we find that Black Box connected the dots too well. Deception is successful when it can either increase the “noise,” making it difficult to determine what is happening, or, conversely, increase the confidence in a wrong assessment[12]. Without community coordination or competing analysis provided by seasoned professional analysts, the assessment Black Box presented to policy makers was a perfect example of the latter.
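
The second failure mode can be made precise with a toy Bayesian update. Assuming each planted report is merely three times as likely under the false story as under the truth (an invented likelihood ratio), a fused assessment that takes every report at face value converges on near-certainty:

```python
# A toy illustration of deception that raises confidence in a wrong assessment.
# The prior and likelihood ratio are invented for the example.
prior = 0.5               # P(H), where H is the planted "invasion in 2043" story
likelihood_ratio = 3.0    # each planted report is 3x likelier under H than truth
odds = prior / (1 - prior)

for report in range(1, 11):
    odds *= likelihood_ratio          # one consistent false report at a time
    posterior = odds / (1 + odds)
    print(f"after report {report:2d}: P(H) = {posterior:.6f}")

# After ten consistent plants P(H) exceeds 0.9999: the aggregator has
# "connected the dots too well," with no competing analysis to object.
```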


Endnotes:

[1] Barnett, J. (2019, August 21). AI is breathing new life into the intelligence community. Fedscoop. Retrieved from https://www.fedscoop.com/artificial-intelligence-in-the-spying

[2] Silver, D., et al. (2016). Mastering the game of GO with deep neural networks and tree search. Nature, 529, 484-489. Retrieved from https://www.nature.com/articles/nature16961

[3] Gartin, G. W. (2019). The future of analysis. Studies in Intelligence, 63(2). Retrieved from https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/csi-studies/studies/vol-63-no-2/Future-of-Analysis.html

[4] Symon, P. B., & Tarapore, A. (2015, October 1). Defense intelligence in the age of big data. Joint Force Quarterly 79. Retrieved from https://ndupress.ndu.edu/Media/News/Article/621113/defense-intelligence-analysis-in-the-age-of-big-data

[5] Office of the Secretary of Defense. (2019). Annual report to Congress: Military and security developments involving the People’s Republic of China 2019. Retrieved from https://media.defense.gov/2019/May/02/2002127082/-1/-1/1/2019_CHINA_MILITARY_POWER_REPORT.pdf

[6] Knight, W. (2019). Tainted data can teach algorithms the wrong lessons. Wired. Retrieved from https://www.wired.com/story/tainted-data-teach-algorithms-wrong-lessons

[7] Boghani, P. (2019). Artificial intelligence can be biased. Here’s what you should know. PBS / Frontline. Retrieved from https://www.pbs.org/wgbh/frontline/article/artificial-intelligence-algorithmic-bias-what-you-should-know

[8] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[9] Fanning, D., & Docherty, N. (2019). In the age of AI. PBS / Frontline. Retrieved from https://www.pbs.org/wgbh/frontline/film/in-the-age-of-ai

[10] Westerheide, F. (2020). China – the first artificial intelligence superpower. Forbes. Retrieved from https://www.forbes.com/sites/cognitiveworld/2020/01/14/china-artificial-intelligence-superpower/#794c7a52f053

[11] National Commission on Terrorist Attacks Upon the United States. (2004). The 9/11 Commission report. Retrieved from https://govinfo.library.unt.edu/911/report/911Report_Exec.htm

[12] Betts, R. K. (1980). Surprise despite warning: Why sudden attacks succeed. Political Science Quarterly 95(4), 551-572. Retrieved from https://www.jstor.org/stable/pdf/2150604.pdf

Alternative Futures / Alternative Histories / Counterfactuals Artificial Intelligence / Machine Learning / Human-Machine Teaming Assessment Papers China (People's Republic of China) Information and Intelligence John J. Borek Taiwan

Options for the Deployment of Robots on the Battlefield

Mason Smithers[1] is a student of robotics and aviation. He has taken part in building and programming robots for various purposes and is seeking a career as a pilot. 

Jason Criss Howk[2] is an adjunct professor of national security and Islamic studies and was Mason’s guest instructor during the COVID-19 quarantine.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The deployment of robots on the battlefield raises many questions for nations that desire to do so.

Date Originally Written:  April 5, 2020.

Date Originally Published:  June 24, 2020.

Author and / or Article Point of View:  This paper is based on the assumption that robots will one day become the predominant actor on a battlefield, as AI and robotics technology advance. The authors believe it is the moral duty of national and international policy-makers to debate and establish the rules for this future now.

Background:  Robots on the battlefield in large quantities, where they make up the majority of the combatants making direct-contact with a nation’s enemies, will raise new concerns for national leaders and human rights scholars. Whether they are tethered to a human decision-maker or not, when robots become the primary resource that a nation puts at risk during war, there will be an avalanche of new moral and ethical questions to debate.

This shift in the “manning” of warfighting organizations could increase the chances that nations will go to war because they can afford to easily replace robots; without a human-life cost, citizens may not be as eager to demand that a war be avoided or ended.

Significance:  While the U.S. currently uses human-operated ground and air robots (armed unmanned aircraft, also known as drones; reconnaissance robots; bomb technicians’ assistants; etc.), a robust debate about whether robots can be safely untethered from humans is currently underway. If the United States or other nations decide to mass produce infantry robots that can act without a human controlling them and making critical decisions for them, what are the associated costs and risks? The answers to these questions about the future matter now to every leader involved in warfare and peace preservation.

Option #1:  The U.S. continues to deploy robots in the future with current requirements for human decision-making (aka human in the loop) in place. In this option the humans in any military force will continue to make all decisions for robots with the capability to use deadly force.

Risk:  If other nations choose to use robots with their own non-human decision capability or in larger numbers, U.S. technology and moral limits may leave the U.S. force smaller and possibly outnumbered. Requiring a human in the loop will stretch U.S. armed forces that are already hurting in the areas of retention and readiness. Humans in the loop, due to eventual distraction or fatigue, will be slower in making decisions than robots. If other nations perfect this technology before the U.S., there may not be time to catch up in a war and regain the advantage. The U.S. alliance system may be challenged by differing views on whether or not to have a human in the loop.

Gain:  Having a human in the loop will decrease the risk of international incidents that cause wars, due to the human’s assumed greater capacity for discretion. A human can make decisions that are “most correct” and not simply the fastest or most logical. Humans stand the best chance at making choices that can create positive strategic impacts when a gray area presents itself.

Option #2:  The U.S. transitions to a military force that is predominantly robotic and delegates decision-making to the robots at the lowest, possibly individual robot, level.

Risk:  Programmers cannot account for every situation on the battlefield. When robots encounter new techniques from the enemy (human innovations), the robots may become confused and be easily defeated until they are reprogrammed. Robots may be more likely to mistake civilians for legal combatants. Robots can be hacked, and then either stopped or turned on their owner. Robots could be reprogrammed to ignore the Laws of Warfare to frame a nation for war crimes. There is an increased risk for nations when rules of warfare are broken by robots. Laws will be needed to determine who gets the blame for war crimes (e.g., designers, owners, programmers, elected officials, senior commanders, or the closest user).  There will be a requirement to develop rights for robots in warfare. There could be prisoner of war status issues and discussions about how shutdown and maintenance requirements work so robots are not operated until they malfunction and die.  This option can lead to the question, “if robots can make decisions, are they sentient/living beings?”  Sentient status would require nations to consider minimum requirements for living standards of robots. This could create many questions about the ethics of sending robots to war.

Gain:  This option has a lower cost than human manning of military units. The ability to mass produce robots means the U.S. can quickly keep up with nations that produce large human or robotic militaries. Robots may be more accurate with weapons systems, which may reduce civilian casualties.

Other Comments:  While this may seem like science fiction to some policy-makers, this future is coming, likely faster than many anticipate.

Recommendation:  None.


Endnotes:

[1] Mason Smithers is a 13-year-old, 7th grade Florida student. He raised this question with his guest instructor Jason Howk during an impromptu national security class. When Mason started to explain in detail all the risks and advantages of robots in future warfare, Jason asked him to write a paper about the topic. Ninety percent of this paper is from Mason’s 13-year-old mind and his view of the future. We can learn a lot from our students.

[2]  Mason’s mother has given permission for the publication of his middle school project.

Artificial Intelligence / Machine Learning / Human-Machine Teaming Jason Criss Howk Mason Smithers Option Papers

Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Marijn Pronk is a Master Student at the University of Glasgow, focusing on identity politics, propaganda, and technology. Currently Marijn is finishing her dissertation on the Far-Right’s use of populist propaganda tactics online. She can be found on Twitter @marijnpronk9. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Date Originally Written:  April 1, 2020.

Date Originally Published:  May 18, 2020.

Author and / or Article Point of View:  The Author is a Master Student in Security, Intelligence, and Strategic Studies at the University of Glasgow. The Author believes that a nuanced perspective towards the influence of Artificial Intelligence (AI) on technical communication services is paramount to understanding its threat.

Summary:  AI has greatly impacted communication technology worldwide. Computational propaganda is an example of the unregulated use of AI weaponized for malign political purposes. Distorting online environments through botnets could damage the health of electoral processes and democracies’ ability to function. However, this type of AI is currently limited to Big Tech companies and governmental powers.

Text:  A cornerstone of the democratic political structure is media; an unbiased, uncensored, and unaltered flow of information is paramount to the health of the democratic process. In a fluctuating political environment, digital spaces and technologies offer great platforms for political action and civic engagement[1]. Currently, more people use Facebook as their main source of news than any news organization[2]. Therefore, manipulating the flow of information in the digital sphere could pose a great threat not only to the democratic values that the internet was founded upon, but also to the health of democracies worldwide. Imagine a world where those pillars of democracy can be artificially altered, where people can manipulate the digital information sphere, from the content of information to its exposure range. In this scenario, one would be unable to distinguish real from fake, making critical perspectives obsolete. One practical embodiment of this phenomenon is computational propaganda, which describes the process of digital misinformation and manipulation of public opinion via the internet[3]. Generally, these practices range from the fabrication of messages and the artificial amplification of certain information to the highly influential use of botnets (networks of software applications programmed to do certain tasks). With the emergence of AI, computational propaganda could be enhanced, and the outcomes can become qualitatively better and more difficult to spot.

Computational propaganda is defined as “the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion[3].” AI has the power to enhance computational propaganda in various ways, such as increased amplification and reach of political disinformation through bots. Qualitatively, AI can also increase the sophistication and the automation quality of bots. AI already plays an intrinsic role in the gathering process, being used in datamining of individuals’ online activity and in monitoring and processing large volumes of online data. Datamining combines tools from AI and statistics to recognize useful patterns and handle large datasets[4]. These technologies and databases are often grounded in the digital advertising industry. With the help of AI, data collection can be done in a more targeted and thus more efficient way.
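
As a hedged illustration of that targeting step, the sketch below clusters invented user-engagement data so that a small, highly engaged segment can be singled out for messaging; the features and numbers are fabricated for the example and imply nothing about any real platform.

```python
# A minimal sketch of audience segmentation by datamining: cluster users on
# behavioural features, then target the most receptive cluster. Synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Invented columns: daily political-content shares, engagement with fringe sources.
users = np.vstack([
    rng.normal([1.0, 0.2], 0.3, size=(300, 2)),   # casual users
    rng.normal([5.0, 0.8], 0.5, size=(60, 2)),    # highly engaged users
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)

# The smaller, high-engagement cluster becomes the priority target list.
for label in np.unique(segments):
    members = users[segments == label]
    print(f"segment {label}: {len(members)} users, "
          f"mean engagement {members.mean(axis=0).round(2)}")
```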

Concerning the malicious use of these techniques in the realm of computational propaganda, these improvements in AI can enhance “[…] the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently[4].” Botnets are still relatively reliant on human input for their political messages, but AI can also improve the capabilities of bots interacting with humans online, making them seem more credible. Though the self-learning capabilities of some chat bots are relatively rudimentary, improved automation through AI-aided computational propaganda tools could be a powerful means to influence public opinion. The self-learning aspect of AI-powered bots and the increasing volume of data that can be used for training give rise to concern: “[…] advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning, and computer vision algorithms are paving the way for the rise in AI-powered bots, that are faster, getting better at understanding human interaction and can even mimic human behaviour[5].” With this improved automation and data gathering power, AI-aided computational propaganda tools could act more precisely by improving the data gathering process quantitatively and qualitatively. Consequently, this hyper-specialized data and the increasing credibility of bots online, due to their growing contextual understanding, can greatly enhance the capabilities and effects of computational propaganda.

However, AI’s capabilities should be kept in perspective in three areas: data, the power of the AI, and the quality of the output. Starting with AI and data, technical knowledge is necessary in order to work with the massive databases used for audience targeting[6]. This quality of AI is within the capabilities of a nation-state or big corporations, but it stays out of reach for the masses[7]. Secondly, the level of entrenchment and strength of AI will determine its final capabilities. One must distinguish between ‘narrow’ and ‘strong’ AI to consider the possible threat to society. Narrow AI is simply rule-based, meaning that data runs through multiple levels coded with algorithmic rules for the AI to come to a decision. Strong AI means that the model can learn from the data and can adapt its set of pre-programmed rules itself, without the interference of humans (this is called ‘Artificial General Intelligence’). Currently, such strong AI is still a concept of the future. Human labour still creates the content for the bots to distribute, simply because the AI is not powerful enough to think outside its pre-programmed box of rules and therefore cannot (yet) create its own content solely based on the data fed to the model[7]. So, computational propaganda depends on narrow AI, which requires a relatively large amount of high-quality data to yield accurate results; deviating from its programmed path or task severely affects its effectiveness[8]. Thirdly, the propaganda produced by computational propaganda tools varies greatly in quality; the real danger lies in the quantity of information that botnets can spread. As for chatbots, which are supposed to be high quality and indistinguishable from humans, these models often fail when tested outside their training data environments.
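
The brittleness of narrow, rule-based systems is easy to demonstrate. The sketch below, with invented keywords and replies, shows a bot that performs convincingly on its programmed path and fails the moment input falls outside it:

```python
# A minimal sketch of "narrow" AI as defined above: pre-programmed rules only.
# All keywords and replies are invented for illustration.
RULES = {
    "election": "The election is rigged, share this with your friends!",
    "economy": "Only our movement can fix the economy.",
}

def narrow_bot(message: str) -> str:
    """Return a scripted reply if a keyword matches; otherwise fail."""
    for keyword, reply in RULES.items():
        if keyword in message.lower():
            return reply
    # Outside its programmed path, the bot has no capacity to improvise.
    return "Sorry, I don't understand."

print(narrow_bot("What do you think about the election?"))  # scripted reply
print(narrow_bot("Qu'en est-il du climat ?"))               # off-script: fails
```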

To address this emerging threat, policy changes across the media ecosystem are happening to mitigate the effects of disinformation[9]. Additionally, researchers have recently investigated the possibility of AI assisting in combating falsehoods and bots online[10]. One proposal is to build automated and semi-automated systems on the web, purposed for fact-checking and content analysis. Eventually, these bottom-up solutions will considerably help counter the effects of computational propaganda. Finally, the influence that Big Tech companies have on these issues cannot be ignored, and their accountability, both for helping create these problems and for their power to mitigate them, will have to be considered. Top-to-bottom co-operation between states and the public will be paramount. “The technologies of precision propaganda do not distinguish between commerce and politics. But democracies do[11].”


Endnotes:

[1] Vaccari, C. (2017). Online Mobilization in Comparative Perspective: Digital Appeals and Political Engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), pp. 69-88. doi:10.1080/10584609.2016.1201558

[2] Majo-Vazquez, S., & González-Bailón, S. (2018). Digital News and the Consumption of Political Information. In G. M. Forthcoming, & W. H. Dutton, Society and the Internet. How Networks of Information and Communication are Changing Our Lives (pp. 1-12). Oxford: Oxford University Press. doi:10.2139/ssrn.3351334

[3] Woolley, S. C., & Howard, P. N. (2018). Introduction: Computational Propaganda Worldwide. In S. C. Woolley, & P. N. Howard, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (pp. 1-18). Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.003.0001

[4] Wardle, C. (2018, July 6). Information Disorder: The Essential Glossary. Retrieved December 4, 2019, from First Draft News: https://firstdraftnews.org/latest/infodisorder-definitional-toolbox

[5] Dutt, D. (2018, April 2). Reducing the impact of AI-powered bot attacks. CSO. Retrieved December 5, 2019, from https://www.csoonline.com/article/3267828/reducing-the-impact-of-ai-powered-bot-attacks.html

[6] Bolsover, G., & Howard, P. (2017). Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda. Big Data, 5(4), pp. 273–276. doi:10.1089/big.2017.29024.cpr

[7] Chessen, M. (2017). The MADCOM Future: how artificial intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… and what can be done about it. Washington DC: The Atlantic Council of the United States. Retrieved December 4, 2019

[8] Davidson, L. (2019, August 12). Narrow vs. General AI: What’s Next for Artificial Intelligence? Retrieved December 11, 2019, from Springboard: https://www.springboard.com/blog/narrow-vs-general-ai

[9] Hassan, N., Li, C., Yang, J., & Yu, C. (2019, July). Introduction to the Special Issue on Combating Digital Misinformation and Disinformation. ACM Journal of Data and Information Quality, 11(3), 1-3. Retrieved December 11, 2019

[10] Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufacturing Consensus Online. Oxford, UK: Project on Computational Propaganda. Retrieved December 5, 2019

[11] Ghosh, D., & Scott, B. (2018, January). #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. Retrieved December 11, 2019, from New America: https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit

Artificial Intelligence / Machine Learning / Human-Machine Teaming Assessment Papers Cyberspace Emerging Technology Influence Operations Marijn Pronk

U.S. Options to Combat Chinese Technological Hegemony

Ilyar Dulat, Kayla Ibrahim, Morgan Rose, Madison Sargeant, and Tyler Wilkins are Interns at the College of Information and Cyberspace at the National Defense University.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  China’s technological rise threatens U.S. interests both on and off the battlefield.

Date Originally Written:  July 22, 2019.

Date Originally Published:  February 10, 2020.

Author and / or Article Point of View:  This article is written from the point of view of the United States Government.

Background:  Xi Jinping, the Chairman of China’s Central Military Commission, affirmed in 2012 that China is acting to redefine the international world order through revisionist policies[1]. These policies foster an environment open to authoritarianism, thus undermining Western liberal values. The Chinese Communist Party (CCP) utilizes emerging technologies to restrict the individual freedoms of Chinese citizens, in and out of cyberspace. Subsequently, Chinese companies have exported this freedom-restricting technology to other countries, such as Ethiopia and Iran, for little cost. These technologies, which include Artificial Intelligence-based surveillance systems and nationalized Internet services, allow authoritarian governments to effectively suppress political dissent and discourse within their states. By essentially monopolizing the tech industry through low prices, China hopes to gain the loyalty of these states and obtain the political clout necessary to overcome the United States as the global hegemon.

Significance:  Among the technologies China is pursuing, 5G is of particular interest to the U.S.  If China becomes the leader in 5G network technologies and artificial intelligence, it will gain opportunities to disrupt the confidentiality, integrity, and availability of data. China has been able to aid regimes and fragmented democracies in repressing freedom of speech and restricting human rights using “digital tools of surveillance and control[2].” Furthermore, China’s National Security Law of 2015 requires all Chinese tech companies’ compliance with the CCP. These Chinese tech companies are legally bound to share data and information housed on Chinese technology, both in-state and abroad. They are also required to remain silent about their disclosure of private data to the CCP. As such, information about private citizens and governments around the world is provided to the Chinese government without transparency. By deploying hardware and software for countries seeking to expand their networks, the CCP could use its authority over domestic tech companies to gain access to information transferred over Chinese-built networks, posing a significant threat to the national security interests of the U.S. and its Allies and Partners. With China leading 5G, the military forces of the U.S. and its Allies and Partners would be restricted in their ability to rely on indigenous telecoms abroad, which could cripple operations critical to U.S. interests[3]. This risk becomes even greater with the threat of U.S. Allies and Partners adopting Chinese 5G infrastructure, despite the harm this move would do to information sharing with the United States.

If China continues its current trajectory, the U.S. and its advocacy for personal freedoms will grow increasingly marginal in the discussion of human rights in the digital age. In light of the increasing importance of the cyber domain, the United States cannot afford to assume that its global leadership will seamlessly transfer to, and maintain itself within, cyberspace. The United States’ position as a leader in cyber technology is under threat unless it vigilantly pursues leadership in advancing and regulating the exchange of digital information.

Option #1:  Domestic Investment.

The U.S. government could facilitate a favorable environment for the development of 5G infrastructure through domestic telecom providers. Thus far, Chinese companies Huawei and ZTE have been able to outbid major European companies for 5G contracts, and American companies that are developing 5G infrastructure are not large enough to compete at this time. By investing in 5G development domestically, the U.S. and its Allies and Partners would have 5G options other than Huawei and ZTE available to them. This option provides American companies with a playing field level with that of their Chinese counterparts.

Risk:  Congressional approval to fund 5G infrastructure development will prove to be a major obstacle. Funding a development project can quickly become a partisan issue. Fiscal conservatives might argue that markets should drive development, while those who believe in strong government oversight might argue that the government should spearhead 5G development. Additionally, government-subsidized projects have previously failed, and there is no guarantee 5G will be different.

Gain:  By investing in domestic telecommunication companies, the United States can remain independent from Chinese infrastructure by mitigating further Chinese expansion. With the U.S. investing domestically and giving subsidies to companies such as Qualcomm and Verizon, American companies can develop their technology faster in an attempt to compete with Huawei and ZTE.

Option #2:  Foreign Subsidization.

The U.S. supports the European competitors Nokia and Ericsson, through loans and subsidies, against Huawei and ZTE. By providing loans and subsidies to these European companies, the United States offers them a means to produce 5G technology at more competitive prices and possibly outbid Huawei and ZTE.

Risk:  The American people may be hostile towards a policy that provides U.S. tax dollars to foreign entities. While the U.S. can attach stipulations to the funding it provides, it ultimately sacrifices much of the control over the development and implementation of 5G infrastructure.

Gain:  Supporting European tech companies such as Nokia and Ericsson would help deter allied nations from investing in Chinese 5G infrastructure. This option would reinforce the U.S.’s commitment to its European allies, and serve as a reminder that the United States maintains its position as the leader of the liberal international order. Most importantly, this option makes friendlier telecommunications companies more competitive in international markets.

Other Comments:  Both options above would also include the U.S. defining regulations and enforcement mechanisms to promote the fair usage of cyberspace. This fair use would be a significant deviation from a history of loosely defined principles. In pursuit of this fair use, the United States could join the Cyber Operations Resilience Alliance, and encourage legislation within the alliance that invests in democratic states’ cyber capabilities and administers clearly defined principles of digital freedom and the cyber domain.

Recommendation:  None.


Endnotes:

[1] Economy, Elizabeth C. “China’s New Revolution.” Foreign Affairs. June 10, 2019. Accessed July 31, 2019. https://www.foreignaffairs.com/articles/china/2018-04-17/chinas-new-revolution.

[2] Chhabra, Tarun. “The China Challenge, Democracy, and U.S. Grand Strategy.” Democracy & Disorder, February 2019. https://www.brookings.edu/research/the-china-challenge-democracy-and-u-s-grand-strategy/.

[3] “The Overlooked Military Implications of the 5G Debate.” Council on Foreign Relations. Accessed August 01, 2019. https://www.cfr.org/blog/overlooked-military-implications-5g-debate.

Artificial Intelligence / Machine Learning / Human-Machine Teaming China (People's Republic of China) Cyberspace Emerging Technology Ilyar Dulat Kayla Ibrahim Madison Sargeant Morgan Rose Option Papers Tyler Wilkins United States

Does Rising Artificial Intelligence Pose a Threat?

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Does Rising Artificial Intelligence Pose a Threat?

Date Originally Written:  February 3, 2019.

Date Originally Published:  February 18, 2019. 

Summary:  Artificial Intelligence, or A.I., has been a long-standing subject of science fiction that usually ends badly for the human race in some way. From the ‘Terminator’ films to ‘Wargames,’ an A.I. being dangerous is a common theme. The reality, though, is that A.I. could go either way depending on the circumstances. In its present state and uses, however, A.I. is more of a danger than a boon on the battlefield, both political and military.

Text:  Artificial intelligence (A.I.) has been a staple of science fiction over the years, but recently the technology has become a more probable reality[1]. The use of semi-intelligent computer programs and systems has made our lives a bit easier with regard to certain things, like turning on the lights in a room with an Alexa or perhaps playing some music or answering questions. However, other uses for such technologies have already been planned, and in some cases implemented, within the military and private industry for security-oriented and offensive purposes.

Automated or A.I. systems that could find weaknesses in networks and systems, as well as automated A.I.’s with fire control over certain remotely operated vehicles, are on the near horizon. Just as Google and others have made automated self-driving cars with an A.I. component that makes decisions in emergency situations like crash scenarios with pedestrians, the same technologies are already being talked about in warfare. In the case of automated cars with rudimentary A.I., we have already seen deaths and mishaps because the technology is not truly aware and is incapable of handling every permutation that is put in front of it[2].

Conversely, if one were to hack or program these technologies to disregard safety heuristics, a very lethal outcome is possible. Because such A.I. is not fully aware and cannot determine right from wrong, there is real potential for abuse of these technologies, and fears of this happening with devices like Alexa and others already exist[3]. In one recent case a baby was put in danger after a Nest device was hacked through poor passwords and the temperature in the room was set above 90 degrees. In another recent instance an Internet of Things device was hacked in much the same way and used to scare the inhabitants of the home with an alert that North Korea had launched nuclear missiles at the U.S.
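
A short sketch shows why “poor passwords” were sufficient in these incidents. Everything here, including the device logic, the default credential, and the wordlist, is invented for illustration; the point is only that a handful of guesses defeats an unchanged factory password:

```python
# A minimal sketch of a dictionary attack against a factory-default password.
# The device, credentials, and wordlist are all invented for illustration.
import hashlib

def device_login(password: str, stored_hash: str) -> bool:
    """Mimic a device checking a password against its stored hash."""
    return hashlib.sha256(password.encode()).hexdigest() == stored_hash

# The owner never changed the factory default.
stored = hashlib.sha256(b"admin123").hexdigest()

WORDLIST = ["password", "123456", "admin", "admin123", "letmein"]

for attempt, guess in enumerate(WORDLIST, start=1):
    if device_login(guess, stored):
        print(f"compromised in {attempt} guesses: {guess!r}")
        break
```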

Both of the previous cases were low-level attacks on semi-dumb devices; now imagine one of these devices with access to networked weapons systems and perhaps a weakness that could be subverted[4]. In another scenario, the kinds of A.I. programs discussed in cyber warfare could be copied or subverted and unleashed not only by nation-state actors but also by a smart teen or a group of criminals for their own ends. Such programs are a thing of the near future, but for an analogy, consider open source hacking tools or platforms like Metasploit, which have automated scripts and are now used by adversaries as well as our own forces.

Hackers and crackers today have already begun using A.I. technologies in their attacks, and as the technology becomes more stable and accessible, there will be a move toward whole campaigns being carried out by automated systems attacking targets all over the world[5]. This automation will cause collateral issues at the nation-state level when trying to attribute the actions of such systems to whoever set them upon the victim. How will attribution work when the attacking system itself is actually self-sufficient and perhaps not under the control of anyone?

Finally, the trope of a true A.I. that goes rogue is not just a trope. It is entirely possible that a program or system that is truly sentient might consider humans an impediment to its own existence and attempt to eradicate us. This is of course a long-distant possibility, but consider one thought: in the last presidential election and the 2020 election cycle to come, automated and A.I. systems have been and will be deployed to game social media and perhaps election systems themselves. This technology is not just a far-flung possibility; rudimentary systems are extant and being used.

The only difference between now and tomorrow is that, at the moment, people are pointing these technologies at the problems they want to solve. In the future, the A.I. may be the one choosing the problem in need of solving, and this choice may not be in our favor.


Endnotes:

[1] Cummings, M. (2017, January 1). Artificial Intelligence and the Future of Warfare. Retrieved February 2, 2019, from https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

[2] Levin, S., & Wong, J. C. (2018, March 19). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. Retrieved February 2, 2019, from https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe

[3] Menn, J. (2018, August 08). New genre of artificial intelligence programs take computer hacking… Retrieved February 2, 2019, from https://www.reuters.com/article/us-cyber-conference-ai/new-genre-of-artificial-intelligence-programs-take-computer-hacking-to-another-level-idUSKBN1KT120

[4] Jowitt, T. (2018, August 08). IBM DeepLocker Turns AI Into Hacking Weapon | Silicon UK Tech News. Retrieved February 1, 2019, from https://www.silicon.co.uk/e-innovation/artificial-intelligence/ibm-deeplocker-ai-hacking-weapon-235783

[5] Dvorsky, G. (2017, September 12). Hackers Have Already Started to Weaponize Artificial Intelligence. Retrieved February 1, 2019, from https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425

Artificial Intelligence / Machine Learning / Human-Machine Teaming Assessment Papers Emerging Technology Scot A. Terban

Options for Lethal Autonomous Weapons Systems and the Five Eyes Alliance

Dan Lee is a government employee who works in Defense, and has varying levels of experience working with Five Eyes nations (US, UK, Canada, Australia, New Zealand).  He can be found on Twitter @danlee961.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Options for Lethal Autonomous Weapons Systems and the Five Eyes Alliance

Date Originally Written:  September 29, 2018.

Date Originally Published:  October 29, 2018.

Author and / or Article Point of View:  The article is written from the point of view of Five Eyes national defense organizations. 

Background:  The Five Eyes community consists of the United Kingdom (UK), the United States (US), Canada, Australia and New Zealand; its origins can be traced to the requirement to cooperate in Signals Intelligence after World War Two[1]. Arguably, the alliance is still critical today in dealing with terrorism and other threats[2].

Autonomous systems may provide the Five Eyes alliance an asymmetric advantage, or ‘offset’, to counter its strategic competitors that are on track to field larger and more technologically advanced military forces. The question of whether or not to develop and employ Lethal Autonomous Weapons Systems (LAWS) is currently contentious due to the ethical and social considerations involved with allowing machines to choose targets and apply lethal force without human intervention[3][4][5]. Twenty-six countries are calling for a prohibition on LAWS, while three Five Eyes partners (Australia, UK and the US) as well as other nations including France, Germany, South Korea and Turkey do not support negotiating new international laws on the matter[6]. When considering options, at least two issues must also be addressed.

The first issue is defining what LAWS are; a common lexicon is required to allow Five Eyes partners to conduct an informed discussion as to whether they can come to a common policy position on the development and employment of these systems. Public understanding of autonomy is mostly derived from the media or from popular culture, and this may have contributed to the hype around the topic[7][8][9]. Currently there is no universally accepted definition of what constitutes a fully autonomous lethal weapon system, which has in turn disrupted discussions at the United Nations (UN) on how these systems should be governed by the Convention on Certain Conventional Weapons (CCWUN)[10]. The US and UK have different definitions, which makes agreement on a common position difficult even amongst like-minded nations[11][12]. This lack of lexicon is further complicated by some strategic competitors using more liberal definitions of LAWS, allowing them to support a ban while simultaneously developing weapons that do not require meaningful human control[13][14][15][16].

The second issue is one of agreeing how autonomous systems might be employed within the Five Eyes alliance. For example, as a strategic offset technology, autonomous systems might mitigate the relatively small size of Five Eyes military forces relative to an adversary’s force[17]. Tactically, they could be deployed completely independently of humans to remove personnel from danger, as swarms to overwhelm the enemy with complexity, or as part of a human-machine team to augment human capabilities[18][19][20].

A failure of Five Eyes partners to come to a complete agreement on what is and is not permissible in developing and employing LAWS does not necessarily mean a halt to progress; indeed, it may allow some partners to cover the capability gaps of others. If some members of the alliance choose not to develop lethal systems, it may free their resources to focus on autonomous Intelligence, Surveillance, and Reconnaissance (ISR) or logistics capabilities. In a Five Eyes coalition environment, these members who chose not to develop lethal systems could provide support to the LAWS-enabled forces of other partners, providing lethal autonomy to the alliance as a whole, if not to individual member states.

Significance:  China and Russia may already be developing LAWS; a failure on the part of the Five Eyes alliance to actively manage this issue may put it at a relative disadvantage in the near future[21][22][23][24]. Further, dual-use civilian technologies already exist that may be adapted for military use, such as the Australian COTSbot and the Chinese Mosquito Killer Robot[25][26]. If the Five Eyes alliance does not either disrupt the development of LAWS by its competitors, or attain relative technological superiority, it may find itself starting in a position of disadvantage during future conflicts or deterrence campaigns.

Option #1:  Five Eyes nations work with the UN to define LAWS and ban their development and use; diplomatic, economic and informational measures are applied to halt or disrupt competitors’ LAWS programs. Technological offset is achieved by Five Eyes autonomous military systems development that focuses on logistics and ISR capabilities, such as Boston Dynamics’ LS3 AlphaDog and the development of driverless trucks to free soldiers from non-combat tasks[27][28][29][30].

Risk:  In the event of conflict, allied combat personnel would be more exposed to danger than the enemy as their nations had, in essence, decided to not develop a technology that could be of use in war. Five Eyes militaries would not be organizationally prepared to develop, train with and employ LAWS if necessitated by an existential threat. It may be too late to close the technological capability gap after the commencement of hostilities.

Gain:  The Five Eyes alliance’s legitimacy regarding human rights and the just conduct of war is maintained in the eyes of the international community. A LAWS arms race and subsequent proliferation can be avoided.

Option #2:  Five Eyes militaries actively develop LAWS to achieve superiority over their competitors.

Risk:  The Five Eyes alliance’s legitimacy may be undermined in the eyes of the international community and organizations such as The Campaign to Stop Killer Robots, the UN, and the International Committee of the Red Cross. Public opinion in some partner nations may increasingly disapprove of LAWS development and use, which could fragment the alliance in a manner similar to the rift over the Australia, New Zealand and United States Security Treaty[31][32].

The declared development and employment of LAWS may catalyze a resource-intensive international arms race. Partnerships between government, academia, and industry may also be adversely affected[33][34].

Gain:  Five Eyes nations avoid a technological disadvantage relative to their competitors; the Chinese information campaign to outmanoeuvre Five Eyes LAWS development through the manipulation of CCWUN will be mitigated. Once LAWS development is accepted as inevitable, proliferation may be regulated through the UN.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1] Tossini, J.V. (November 14, 2017). The Five Eyes – The Intelligence Alliance of the Anglosphere. Retrieved from https://ukdefencejournal.org.uk/the-five-eyes-the-intelligence-alliance-of-the-anglosphere/

[2] Grayson, K. Time to bring ‘Five Eyes’ in from the cold? (May 4, 2018). Retrieved from https://www.aspistrategist.org.au/time-bring-five-eyes-cold/

[3] Lange, K. 3rd Offset Strategy 101: What It Is, What the Tech Focuses Are (March 30, 2016). Retrieved from http://www.dodlive.mil/2016/03/30/3rd-offset-strategy-101-what-it-is-what-the-tech-focuses-are/

[4] International Committee of the Red Cross. Expert Meeting on Lethal Autonomous Weapons Systems Statement (November 15, 2017). Retrieved from https://www.icrc.org/en/document/expert-meeting-lethal-autonomous-weapons-systems

[5] Human Rights Watch and Harvard Law School’s International Human Rights Clinic. Fully Autonomous Weapons: Questions and Answers. (October 2013). Retrieved from https://www.hrw.org/sites/default/files/supporting_resources/10.2013_killer_robots_qa.pdf

[6] Campaign to Stop Killer Robots. Report on Activities Convention on Conventional Weapons Group of Governmental Experts meeting on lethal autonomous weapons systems – United Nations Geneva – 9-13 April 2018. (2018) Retrieved from https://www.stopkillerrobots.org/wp-content/uploads/2018/07/KRC_ReportCCWX_Apr2018_UPLOADED.pdf

[7] Scharre, P. Why You Shouldn’t Fear ‘Slaughterbots’. (December 22, 2017). Retrieved from https://spectrum.ieee.org/automaton/robotics/military-robots/why-you-shouldnt-fear-slaughterbots

[8] Winter, C. (November 14, 2017). ‘Killer robots’: autonomous weapons pose moral dilemma. Retrieved from https://www.dw.com/en/killer-robots-autonomous-weapons-pose-moral-dilemma/a-41342616

[9] Devlin, H. Killer robots will only exist if we are stupid enough to let them. (June 11, 2018). Retrieved from https://www.theguardian.com/technology/2018/jun/11/killer-robots-will-only-exist-if-we-are-stupid-enough-to-let-them

[10] Welsh, S. Regulating autonomous weapons. (November 16, 2017). Retrieved from https://www.aspistrategist.org.au/regulating-autonomous-weapons/

[11] United States Department of Defense. Directive Number 3000.09. (November 21, 2012). Retrieved from https://www.hsdl.org/?view&did=726163

[12] Lords AI committee: UK definitions of autonomous weapons hinder international agreement. (April 17, 2018). Retrieved from http://www.article36.org/autonomous-weapons/lords-ai-report/

[13] Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects – Geneva, 9–13 April 2018 (first week) Item 6 of the provisional agenda – Other matters. (11 April 2018). Retrieved from https://www.unog.ch/80256EDD006B8954/(httpAssets)/E42AE83BDB3525D0C125826C0040B262/$file/CCW_GGE.1_2018_WP.7.pdf

[14] Welsh, S. China’s shock call for ban on lethal autonomous weapon systems. (April 16, 2018). Retrieved from https://www.janes.com/article/79311/china-s-shock-call-for-ban-on-lethal-autonomous-weapon-systems

[15] Mohanty, B. Lethal Autonomous Dragon: China’s approach to artificial intelligence weapons. (Nov 15 2017). Retrieved from https://www.orfonline.org/expert-speak/lethal-autonomous-weapons-dragon-china-approach-artificial-intelligence/

[16] Kania, E.B. China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems. (April 17, 2018). Retrieved from https://www.lawfareblog.com/chinas-strategic-ambiguity-and-shifting-approach-lethal-autonomous-weapons-systems

[17] Tomes, R. Why the Cold War Offset Strategy was all about Deterrence and Stealth. (January 14, 2015) Retrieved from https://warontherocks.com/2015/01/why-the-cold-war-offset-strategy-was-all-about-deterrence-and-stealth/

[18] Lockie, A. The Air Force just demonstrated an autonomous F-16 that can fly and take out a target all by itself. (April 12, 2017). Retrieved from https://www.businessinsider.com.au/f-16-drone-have-raider-ii-loyal-wingman-f-35-lockheed-martin-2017-4?r=US&IR=T

[19] Schuety, C. & Will, L. An Air Force ‘Way of Swarm’: Using Wargaming and Artificial Intelligence to Train Drones. (September 21, 2018). Retrieved from https://warontherocks.com/2018/09/an-air-force-way-of-swarm-using-wargaming-and-artificial-intelligence-to-train-drones/

[20] Ryan, M. Human-Machine Teaming for Future Ground Forces. (2018). Retrieved from https://csbaonline.org/uploads/documents/Human_Machine_Teaming_FinalFormat.pdf

[21] Perrigo, B. Global Arms Race for Killer Robots Is Transforming the Battlefield. (Updated: April 9, 2018). Retrieved from http://time.com/5230567/killer-robots/

[22] Hutchison, H.C. Russia says it will ignore any UN ban of killer robots. (November 30, 2017). Retrieved from https://www.businessinsider.com/russia-will-ignore-un-killer-robot-ban-2017-11/?r=AU&IR=T

[23] Mizokami, K. Kalashnikov Will Make an A.I.-Powered Killer Robot – What could possibly go wrong? (July 20, 2017). Retrieved from https://www.popularmechanics.com/military/weapons/news/a27393/kalashnikov-to-make-ai-directed-machine-guns/

[24] Atherton, K. Combat robots and cheap drones obscure the hidden triumph of Russia’s wargame. (September 25, 2018). Retrieved from https://www.c4isrnet.com/unmanned/2018/09/24/combat-robots-and-cheap-drones-obscure-the-hidden-triumph-of-russias-wargame/

[25] Platt, J.R. A Starfish-Killing, Artificially Intelligent Robot Is Set to Patrol the Great Barrier Reef Crown of thorns starfish are destroying the reef. Bots that wield poison could dampen the invasion. (January 1, 2016) Retrieved from https://www.scientificamerican.com/article/a-starfish-killing-artificially-intelligent-robot-is-set-to-patrol-the-great-barrier-reef/

[26] Skinner, T. Presenting, the Mosquito Killer Robot. (September 14, 2016). Retrieved from https://quillorcapture.com/2016/09/14/presenting-the-mosquito-killer-robot/

[27] Defence Connect. DST launches Wizard of Aus. (November 10, 2017). Retrieved from https://www.defenceconnect.com.au/key-enablers/1514-dst-launches-wizard-of-aus

[28] Pomerleau, M. Air Force is looking for resilient autonomous systems. (February 24, 2016). Retrieved from https://defensesystems.com/articles/2016/02/24/air-force-uas-contested-environments.aspx

[29] Boston Dynamics. LS3 Legged Squad Support Systems. The AlphaDog of legged robots carries heavy loads over rough terrain. (2018). Retrieved from https://www.bostondynamics.com/ls3

[30] Evans, G. Driverless vehicles in the military – will the potential be realised? (February 2, 2018). Retrieved from https://www.army-technology.com/features/driverless-vehicles-military/

[31] Hambling, D. Why the U.S. Is Backing Killer Robots. (September 15, 2018). Retrieved from https://www.popularmechanics.com/military/research/a23133118/us-ai-robots-warfare/

[32] Ministry for Culture and Heritage. ANZUS treaty comes into force 29 April 1952. (April 26, 2017). Retrieved from https://nzhistory.govt.nz/anzus-comes-into-force

[33] Shalal, A. Researchers to boycott South Korean university over AI weapons work. (April 5, 2018). Retrieved from https://www.reuters.com/article/us-tech-korea-boycott/researchers-to-boycott-south-korean-university-over-ai-weapons-work-idUSKCN1HB392

[34] Shane, S & Wakabayashi, D. ‘The Business of War’: Google Employees Protest Work for the Pentagon. (April 4, 2018). Retrieved from https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html


Artificial Intelligence / Machine Learning / Human-Machine Teaming Australia Australia, New Zealand, United States Security Treaty (ANZUS) Autonomous Weapons Systems Canada Dan Lee New Zealand Option Papers United Kingdom United States

An Assessment of the Likely Roles of Artificial Intelligence and Machine Learning Systems in the Near Future

Ali Crawford has an M.A. from the Patterson School of Diplomacy and International Commerce where she focused on diplomacy, intelligence, cyber policy, and cyber warfare.  She tweets at @ali_craw.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of the Likely Roles of Artificial Intelligence and Machine Learning Systems in the Near Future

Date Originally Written:  May 25, 2018.

Date Originally Published:  July 16, 2018.

Summary:  While the U.S. Department of Defense (DoD) continues to experiment with Artificial Intelligence (AI) as part of its Third Offset Strategy, questions regarding levels of human participation, ethics, and legality remain.  Though a battlefield in the future will likely see autonomous decision-making technology as a norm, the transition between modern applications of artificial intelligence and potential applications will focus on incorporating human-machine teaming into existing frameworks.

Text:  In an essay titled Centaur Warfighting: The False Choice of Humans vs. Automation, author Paul Scharre concludes that the best warfighting systems will combine human and machine intelligence to create hybrid cognitive architectures that leverage the advantages of each[1].  There are three potential partnerships.  The first pegs humans as essential operators, meaning AI cannot operate without its human counterpart.  The second tasks humans as the moral agents who make the value-based decisions which prevent or promote the use of AI in combat situations.  The third, in which humans are fail-safes, gives more operational authority to AI systems; the human operator only interferes if the system malfunctions or fails.  Artificial intelligence systems, specifically autonomous weapons systems, are controversial technologies that have the capacity to greatly improve human efficiency while reducing potential human burdens.  But before the Department of Defense embraces intelligent weapons systems or programs with full autonomy, more human-machine partnerships will likely occur to test the viability, legality, and ethical implications of artificial intelligence.
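
One way to see the difference between the three partnerships is to model them as an engagement-authorization rule. The sketch below is our own illustrative framing of Scharre's categories, not a DoD specification; the names and decision logic are assumptions made for the example.

```python
# A minimal sketch of the three human-machine partnership models as an
# engagement-authorization flow. Illustrative only.
from enum import Enum, auto

class Partnership(Enum):
    ESSENTIAL_OPERATOR = auto()  # the AI cannot act without the human
    MORAL_AGENT = auto()         # the human makes the value-based call
    FAIL_SAFE = auto()           # the AI acts; the human intervenes on malfunction

def authorize(mode: Partnership, ai_recommends_engage: bool,
              human_approves: bool, malfunction: bool) -> bool:
    """Decide whether an engagement proceeds under a given partnership model."""
    if mode is Partnership.ESSENTIAL_OPERATOR:
        return ai_recommends_engage and human_approves
    if mode is Partnership.MORAL_AGENT:
        return human_approves  # the human owns the value judgment outright
    # FAIL_SAFE: the AI proceeds unless the human pulls it offline.
    return ai_recommends_engage and not malfunction

# Under fail-safe, the AI engages even without explicit human approval.
print(authorize(Partnership.FAIL_SAFE, True, False, malfunction=False))  # True
```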

To better understand why artificial intelligence is controversial, it is necessary to distinguish between the arguments for and against using AI with operational autonomy.  In 2015, prominent figures in science and technology, including Stephen Hawking and Elon Musk, signed an open letter that highlighted the potential benefits of AI while warning that those benefits do not settle the near-term questions of ethics and the applicability of law[2].  A system with an intelligent, decision-making brain carries significant consequences.  What if the system targets civilians?  How does international law apply to a machine?  Will an intelligent machine respond to commands?  These are the questions with which military and ethical theorists grapple.

For a more practical thought problem, consider the Moral Machine project from the Massachusetts Institute of Technology[3].  You, the judge, are presented with dilemmas involving intelligent, self-driving cars.  The car suffers brake failure and must decide what to do next.  If the car continues straight, it will strike and kill x number of men, women, children, elderly people, or animals.  If the car swerves, it will crash into a barrier, causing the immediate deaths of the passengers, who are likewise x number of men, women, children, or elderly people.  Although you are the judge in Moral Machine, the simulation is indicative of the ethical and moral dilemmas that may arise when employing artificial intelligence in, say, combat.  In these scenarios, the ethical theorist takes issue with the machine having the decision-making capacity to place value on human life and to make potentially irreversible and damaging decisions.
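A toy calculation illustrates the ethical theorist's objection: any rule a machine can execute must encode explicit weights on human life.  The categories, weights, and outcome counts below are invented for illustration; Moral Machine itself crowdsources human judgments rather than prescribing values.

```python
# Hypothetical harm weights: every number below is a moral judgment in disguise.
HARM_WEIGHT = {"child": 1.0, "adult": 1.0, "elderly": 1.0, "animal": 0.1}

def expected_harm(victims: dict) -> float:
    """Sum weighted harm over a {category: count} outcome."""
    return sum(HARM_WEIGHT[category] * count for category, count in victims.items())

# Two outcomes from the brake-failure dilemma (counts are arbitrary examples).
go_straight = {"adult": 3}               # strikes three pedestrians
swerve      = {"adult": 1, "child": 1}   # kills the two passengers

choice = "swerve" if expected_harm(swerve) < expected_harm(go_straight) else "go straight"
print(choice)  # the "lesser harm" answer flips whenever the weights change
```

Whether children outweigh adults, or passengers outweigh pedestrians, is decided the moment someone writes the weight table, which is precisely the decision ethicists argue a machine should not inherit silently.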

Assuming autonomous weapons systems do have a place in the future of military operations, what would precede them?  Realistically, human-machine teaming would be introduced before a fully autonomous machine.  What exactly is human-machine teaming, and why is it important when discussing the future of artificial intelligence?  To gain and maintain superiority in operational domains, both past and present, the United States has ensured that its conventional deterrents are powerful enough to dissuade great powers from going to war with it[4].  Thus, an offset strategy focuses on gaining advantages against enemy powers and capabilities.  Historically, the First Offset occurred in the early 1950s with the introduction of tactical nuclear weapons.  The Second Offset manifested a little later, in the 1970s, with the implementation of precision-guided weapons after the Soviet Union gained nuclear parity with the United States[5].  The Third Offset, a relatively modern strategy, generally focuses on maintaining technological superiority among the world's great powers.

Human-machine teaming is part of the Department of Defense's Third Offset Strategy, as are deep-learning systems and cyber weaponry[6].  Machine learning systems relieve humans of a range of burdensome tasks or augment operations to decrease potential risks to the lives of human fighters.  For example, in 2017 the DoD began working with an intelligent system called “Project Maven,” which uses deep-learning technology to identify objects of interest in drone surveillance footage[7].  Terabytes of footage are collected each day from surveillance drones.  Human analysts spend significant amounts of time sifting through this data to identify objects of interest before they can even begin their analytical processes[8].  Project Maven's deep-learning algorithm allows human analysts to spend more time practicing their craft to produce intelligence products and less time processing information.  Despite Google's recent departure from the program, Project Maven will continue to operate[9].  Former Deputy Defense Secretary Bob Work established the Algorithmic Warfare Cross-Functional Team in early 2017 to work on Project Maven.  In the announcement, Work described artificial intelligence as necessary for strategic deterrence, noting “the [DoD] must integrate artificial intelligence and machine learning more effectively across operations to maintain advantages over increasingly capable adversaries and competitors[10].”
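The analyst-relief workflow can be sketched generically: a trained detector scans incoming frames and forwards only those containing confident detections of objects of interest, so analysts start from a short, filtered queue rather than raw footage.  The sketch below assumes a hypothetical detect function, and its object classes and confidence threshold are invented; it does not reflect Project Maven's actual models or parameters.

```python
from typing import Iterable, List, Tuple

OBJECTS_OF_INTEREST = {"vehicle", "person", "structure"}  # illustrative classes
CONFIDENCE_THRESHOLD = 0.8                                # illustrative cutoff

def detect(frame) -> List[Tuple[str, float]]:
    """Stand-in for a trained object-detection model (e.g., a convolutional
    network); a real implementation would return (label, confidence) pairs."""
    return []

def triage(frames: Iterable) -> List[Tuple[int, List[Tuple[str, float]]]]:
    """Forward only frames with a confident detection of an object of interest,
    so analysts review a short queue instead of terabytes of raw footage."""
    queue = []
    for index, frame in enumerate(frames):
        hits = [(label, conf) for label, conf in detect(frame)
                if label in OBJECTS_OF_INTEREST and conf >= CONFIDENCE_THRESHOLD]
        if hits:
            queue.append((index, hits))
    return queue
```

Even in this toy form, the division of labor is visible: the machine handles the high-volume filtering, while the judgment about what the flagged frames mean remains with the human analyst.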

This article collectively refers to human-machine teaming as any process in which humans interact in some capacity with artificial intelligence.  However, human-machine teaming can span multiple technological fields and is not limited to serving as a prerequisite for autonomous weaponry[11].  Human-robot teaming may begin to appear in the immediate future given developments in robotics.  Boston Dynamics, a premier engineering and robotics company, is well-known for its videos of human- and animal-like robots completing everyday tasks.  Imagine a machine like BigDog working alongside human soldiers or rescue workers, or even navigating terrain inaccessible to humans[12].  These robots are not fully autonomous, yet the unique partnership between human and robot offers a new set of opportunities and challenges[13].

Before fully autonomous systems or weapons have a place in combat, human-machine teams will need to prove successful and sustainable.  These teams have the potential to improve human performance, reduce risks to human counterparts, and expand national power, all goals of the Third Offset Strategy.  However, there are challenges to procuring and incorporating artificial intelligence.  The DoD will need to seek out deeper relationships with technological and engineering firms, not just traditional defense contractors.

Using humans as moral agents and fail-safes allows the problems of ethical and lawful applicability to be tested while opening the debate on the future use of autonomous systems.  Autonomous weapons will likely not see combat until these challenges, along with the accompanying ethical and legal considerations, are thoroughly tested and regulated.


Endnotes:

[1] Paul Scharre, Temp. Int’l & Comp. L.J., “Centaur Warfighting: The False Choice of Humans vs. Automation,” 2016, https://sites.temple.edu/ticlj/files/2017/02/30.1.Scharre-TICLJ.pdf

[2] Daniel Dewey, Stuart Russell, Max Tegmark, “Research Priorities for Robust and Beneficial Artificial Intelligence,” 2015, https://futureoflife.org/data/documents/research_priorities.pdf?x20046

[3] Moral Machine, http://moralmachine.mit.edu/

[4] Cheryl Pellerin, Department of Defense, Defense Media Activity, “Work: Human-Machine Teaming Represents Defense Technology Future,” 8 November 2015, https://www.defense.gov/News/Article/Article/628154/work-human-machine-teaming-represents-defense-technology-future/

[5] Ibid.

[6] Katie Lange, DoDLive, “3rd Offset Strategy 101: What It Is, What the Tech Focuses Are,” 30 March 2016, http://www.dodlive.mil/2016/03/30/3rd-offset-strategy-101-what-it-is-what-the-tech-focuses-are/; and Mackenzie Eaglen, RealClearDefense, “What is the Third Offset Strategy?,” 15 February 2016, https://www.realcleardefense.com/articles/2016/02/16/what_is_the_third_offset_strategy_109034.html

[7] Cheryl Pellerin, Department of Defense News, Defense Media Activity, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” 21 July 2017, https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

[8] Tajha Chappellet-Lanier, FedScoop, “Pentagon’s Project Maven responds to criticism: ‘There will be those who will partner with us,’” 1 May 2018, https://www.fedscoop.com/project-maven-artificial-intelligence-google/

[9] Tom Simonite, Wired, “Pentagon Will Expand AI Project Prompting Protests at Google,” 29 May 2018, https://www.wired.com/story/googles-contentious-pentagon-project-is-likely-to-expand/

[10] Cheryl Pellerin, Department of Defense News, Defense Media Activity, “Project Maven to Deploy Computer Algorithms to War Zone by Year’s End,” 21 July 2017, https://www.defense.gov/News/Article/Article/1254719/project-maven-to-deploy-computer-algorithms-to-war-zone-by-years-end/

[11] Maj. Gen. Mick Ryan, Defense One, “How to Plan for the Coming Era of Human-Machine Teaming,” 25 April 2018, https://www.defenseone.com/ideas/2018/04/how-plan-coming-era-human-machine-teaming/147718/

[12] Boston Dynamics, “BigDog Overview,” March 2010, https://www.youtube.com/watch?v=cNZPRsrwumQ

[13] Richard Priday, Wired, “What’s really going on in those Boston Dynamics robot videos?,” 18 February 2018, http://www.wired.co.uk/article/boston-dynamics-robotics-roboticist-how-to-watch
