Options to Bridge the U.S. Department of Defense – Silicon Valley Gap with Cyber Foreign Area Officers

Kat Cassedy is a qualitative analyst with 20 years of work in hard problem solving, alternative analysis, and red teaming.  She currently works as an independent consultant/contractor, with experience in the public, private, and academic sectors.  She can be found on Twitter @Katnip95352013, tweeting on modern #politicalwarfare, #proxywarfare, #NatSec issues, #grayzoneconflict, and a smattering of random nonsense.  Divergent Options’ content does not contain information of any official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The cultural gap between the U.S. Department of Defense and Silicon Valley is significant.  Bridging this gap likely requires more than military members learning tech speak as their primary duties allow.

Date Originally Written:  April 15, 2019. 

Date Originally Published:  April 15, 2019. 

Author and / or Article Point of View:  The author’s point of view is that the cyber sector may be more akin to a foreign culture than a business segment, and that bridging the growing gulf between the Pentagon and Silicon Valley may require sociocultural capabilities as much as, or more than, technical or acquisition skills. 

Background:  As the third decade of the digital revolution nears its end, and nearly a year after U.S. Cyber Command was elevated to a Unified Combatant Command, the gap between the private sector’s most advanced technology talent, intellectual property (IP), services, and products and those of the DoD is wide and widening. Although the Pentagon needs and wants Silicon Valley’s IP and capabilities, the technorati are rejecting DoD’s overtures[1] in favor of enormous new markets such as China. In the Information Age, DoD assesses that it needs Silicon Valley’s technology to maintain U.S. global battlespace dominance, much the way it needed the Middle East’s fossil fuels over the last half century. And Silicon Valley’s tech giants, with market caps rivaling or exceeding the Gross Domestic Product of the globe’s most thriving economies, have such global agency and autonomy that they should arguably be viewed as geo-political power players, not simply businesses.  In that context, perhaps it is time to consider 21st century alternatives to the DoD habit of treating Silicon Valley and its subcomponents as conventional Defense Industrial Base vendors to be managed like routine government contractors. 

Significance:  Many leaders and action officers in the DoD community are concerned that Silicon Valley’s emphasis on revenue and shareholder value is leading it to prioritize relationships with America’s near-peer competitors – most particularly, but not limited to, China[2] – over working with the U.S. DoD and national security community. “In the policy world, 30 years of experience usually makes you powerful. In the technical world, 30 years of experience usually makes you obsolete[3].” Given the DoD’s extreme reliance on and investment in highly networked and interdependent information systems to dominate the modern global operating environment, the possibility that U.S. companies are choosing foreign adversaries as clients and partners over the U.S. government is highly concerning. If this technology shift away from U.S. national security priorities continues: 1) U.S. companies may soon be providing adversaries with advanced capabilities that run counter to U.S. national interests[4]; 2) even where these companies continue to provide products and services to the U.S., there is increased concern about counter-intelligence vulnerabilities in U.S. Government (USG) systems and platforms due to technology supply chain vulnerabilities[5]; and 3) key U.S. tech startups and emerging technology companies are accepting venture capital, seed, and private equity investment from investors whose ultimate beneficial owners trace back to foreign sovereign and private wealth sources of concern to the national security community[6].

Option #1:  To bridge the cultural gap between Silicon Valley and the Pentagon, the U.S. Military Departments will train, certify, and deploy “Cyber Foreign Area Officers” or CFAOs.  These CFAOs would align with DoD Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs[7]” and, within the cyber and Silicon Valley context, do the same as a traditional FAO and “provide expertise in planning and executing operations, to provide liaison with foreign militaries operating in coalitions with U.S. forces, to conduct political-military activities, and to execute military-diplomatic missions.”

Risk:  DoD treating multinational corporations like nation states risks further eroding the recognition of nation states as bearing ultimate authority.  Additionally, there is risk that the checks and balances within the U.S. between the public and private sectors will tip irrevocably towards the tech sector, setting the sector up as a rival to the USG in foreign and domestic relationships. Lastly, success in this approach may lead other business sectors/industries to push for comparable treatment.

Gain:  Having DoD establish a CFAO program would put DoD-centric cyber/techno skills in a socio-cultural context, aid in Silicon Valley sense-making and narrative development/dissemination, and establish mutual trusted agency. In effect, CFAOs would act as translators and relationship builders between Silicon Valley and DoD, with the interests of all the branches of service fully represented. Given the routine depictions, both real-world and fictional, of Silicon Valley and DoD as coming from figuratively different worlds, using a FAO construct to break through this recognized barrier may be a case of USG policy catching up with present reality. Further, considering the national security threats that loom if DoD loses its technological superiority, the potential gains of this option may well outweigh its risks.

Option #2:  Maintain the status quo, in which DoD alternates between treating Silicon Valley as a necessary but sometimes errant supplier and seeking to emulate Silicon Valley’s successes and culture within existing DoD constructs.  

Risk:  Possibly the greatest risk in continuing the current DoD approach to the tech world is losing the advantage of technical superiority through speed of innovation, due to a mutual lack of understanding of priorities, mission drivers, objectives, and organizational design.  Although a number of DoD acquisition reform initiatives are gaining traction, conventional thinking holds that DoD must acquire technology and services through a lengthy competitive bid process which, once awarded, locks both the DoD and the winner into a multi-year relationship. In Silicon Valley, speed-to-market is valued, and concepts pitched one month may be expected to be deployable within a few quarters, before the technology evolves yet again. Continual experimentation, improvisation, adaptation, and innovation are at the heart of Silicon Valley. DoD wants advanced technology, but it wants it scalable, repeatable, controllable, and inexpensive. These are not compatible cultural outlooks.

Gain:  Continuing the current course of action has the advantage of familiarity, where the rules and pathways are well-understood by DoD and where risk can be managed. Although arguably slow to evolve, DoD acquisition mechanisms are on solid legal ground regarding use of taxpayer dollars, and program managers and decision makers alike are quite comfortable in navigating the use of conventional DoD acquisition tools. This approach represents good fiscal stewardship of DoD budgets.

Other Comments:  None. 

Recommendation:  None.  


Endnotes:

[1] Malcomson, S. Why Silicon Valley Shouldn’t Work With the Pentagon. New York Times. 19APR2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/04/19/opinion/silicon-valley-military-contract.html.

[2] Hsu, J. Pentagon Warns Silicon Valley About Aiding Chinese Military. IEEE Spectrum. 28MAR2019. Retrieved 15APR2019, from https://spectrum.ieee.org/tech-talk/aerospace/military/pentagon-warns-silicon-valley-about-aiding-chinese-military.

[3] Zegart, A and Childs, K. The Growing Gulf Between Silicon Valley and Washington. The Atlantic. 13DEC2018. Retrieved 15APR2019, from https://www.theatlantic.com/ideas/archive/2018/12/growing-gulf-between-silicon-valley-and-washington/577963/.

[4] Copestake, J. Google China: Has search firm put Project Dragonfly on hold? BBC News. 18DEC2018. Retrieved 15APR2019, from https://www.bbc.com/news/technology-46604085.

[5] Mozur, P. The Week in Tech: Fears of the Supply Chain in China. New York Times. 12OCT2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/10/12/technology/the-week-in-tech-fears-of-the-supply-chain-in-china.html.

[6] Northam, J. China Makes A Big Play In Silicon Valley. National Public Radio. 07OCT2018. Retrieved 15APR2019, from https://www.npr.org/2018/10/07/654339389/china-makes-a-big-play-in-silicon-valley.

[7] Department of Defense Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs,” April 28, 2005.  Retrieved 15APR2019, from https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/131517p.pdf.

 

Cyberspace Emerging Technology Information Systems Kat Cassedy Option Papers Public-Private Partnerships and Intersections United States

An Assessment of the Role of Unmanned Ground Vehicles in Future Warfare

Robert Clark is a post-graduate researcher at the Department of War Studies at King’s College London, and is a British military veteran. His specialities include UK foreign policy in Asia Pacific and UK defence relations.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of the Role of Unmanned Ground Vehicles in Future Warfare

Date Originally Written:  February 17, 2019.

Date Originally Published:  February 25, 2019.

Summary:  The British Army’s recent land trials of the Tracked Hybrid Modular Infantry System (THeMIS) of Unmanned Ground Vehicles seek to ensure that the British Army retains its lethality in upcoming short- to medium-intensity conflicts.  These trials align with announcements by both the British Army’s Chief of the General Staff, General Carleton-Smith, and the Defence Secretary, Gavin Williamson, regarding the evolving character of warfare.

Text:  The United Kingdom’s (UK) current vision for the future role of Unmanned Ground Vehicles (UGVs) originates from the British Army’s “Strike Brigade” concept, as outlined in the Strategic Defence and Security Review 2015[1]. This review proposed that British ground forces should be capable of self-deployment and self-sustainment at long distances, potentially global in scope. According to this review, by 2025 the UK should be able to deploy “a war-fighting division optimised for high intensity combat operations;” indeed, “the division will draw on two armoured infantry brigades and two new Strike Brigades to deliver a deployed division of three brigades.” Both Strike Brigades should be able to operate simultaneously in different parts of the world and, by incorporating the next-generation autonomous technology currently being trialled by the British Army, will remain combat effective post-Army 2020.

The ability of land forces of this size to self-sustain at long range places increased demands on the logistics and resupply chains of the British Army, which have been shown to be overburdened in recent conflicts[2]. This overburdening is likely to increase due to the evolving character of warfare and of the environments in which conflicts are likely to occur, specifically densely populated urban areas, which are likely to become more cluttered, congested, and contested than ever before. Therefore, a more agile and flexible logistics and resupply system, able to conduct resupply in a more dynamic environment and over greater distances, will likely be required to meet the challenges of warfare from the mid-2020s and beyond.

Sustaining the British Armed Forces more broadly in densely populated areas may represent something of a shift in the UK’s vision for UGV technology, which was previously utilised almost exclusively for Explosive Ordnance Disposal (EOD) and Counter-Improvised Explosive Device tasks for both the military and the police, as opposed to being a true force multiplier strengthening the logistics and resupply chains.

Looking at UGVs as a force multiplier, the Ministry of Defence’s Defence Science and Technology Laboratory (DSTL) is currently leading a three-year research and development programme entitled Autonomous Last Mile Resupply System (ALMRS)[3]. The ALMRS research aims to demonstrate system solutions that reduce the logistical burden on the entire Armed Forces, provide new operational capability, and reduce operational casualties. Drawing on commercial technology as well as conceptual academic ideas – ranging from online delivery systems to unmanned vehicles – more than 140 organisations, from small and medium-sized enterprises to large military-industrial corporations, submitted entries.

The first phase of the ALMRS programme challenged industry and academia to design pioneering technology to deliver vital supplies and support to soldiers on the front line, working with research teams across the UK and internationally. This phase highlights the current direction of the British vision for UGVs, i.e., support-based roles. The second phase of the ALMRS programme started in July 2018 and is due to last approximately twelve months. It included ‘Autonomous Warrior’, the Army Warfighting Experiment 18 (AWE18), a 1 Armoured Infantry Brigade battlegroup-level live fire exercise which took place on Salisbury Plain in November 2018. This exercise saw each of the five projects remaining in the ALMRS programme demonstrate its autonomous capabilities in combined exercises with the British Armed Forces, the end user. The results provided DSTL with user feedback crucial to subsequent development, identifying how the Army can exploit developments in robotics and autonomous systems technology through capability integration.

Among the final five projects short-listed for the second phase of ALMRS and AWE18 was a multi-purpose UGV platform called TITAN, developed by British military technology company QinetiQ in partnership with MILREM Robotics, an Estonian military technology company. Built around MILREM’s Tracked Hybrid Modular Infantry System (THeMIS), the QinetiQ-led programme impressed at AWE18.

The THeMIS platform is designed to support dismounted troops by serving as a transport platform, a remote weapon station, an IED detection and disposal unit, and a surveillance and target acquisition system designed to enhance a commander’s situational awareness. THeMIS is an open-architecture platform, with subsequent models built around a specific purpose or operational capability.

THeMIS Transport is designed to manoeuvre equipment around the battlefield to lighten the burden of soldiers, with a maximum payload weight of 750 kilograms. This 750 kilogram load would be adequate to resupply a platoon’s worth of ammunition, water, rations and medical supplies and to sustain it at 200% operating capacity – in essence, two resupplies in one. In addition, when utilised in battery mode, THeMIS Transport is near-silent and can travel for up to ninety minutes. When operating on the front-line, THeMIS Transport proves far more effective than a quad bike and trailer, which are presently in use with the British Army to achieve the same effect. Resupply is often overseen by the Platoon Sergeant, the platoon’s Senior Non-Commissioned Officer and most experienced soldier. Relieving the Platoon Sergeant of such a burden would create an additional force multiplier during land operations.
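
To see how the “two resupplies in one” claim might work out arithmetically, consider the minimal sketch below. The platoon size and per-soldier supply weights are illustrative assumptions, not figures from the article or British Army doctrine; the point is only that doubling a plausible platoon resupply load can still fit within the 750-kilogram payload.

```python
# Back-of-the-envelope check of the 750 kg "two resupplies in one" claim.
# All per-soldier figures below are illustrative assumptions, not doctrine.

PLATOON_SIZE = 28        # assumed dismounted platoon strength
AMMO_KG = 6.0            # assumed ammunition per soldier, per resupply
WATER_KG = 4.5           # assumed water per soldier, per resupply
RATIONS_KG = 1.8         # assumed rations per soldier, per resupply
MEDICAL_KG = 0.5         # assumed medical supplies per soldier, per resupply
MAX_PAYLOAD_KG = 750.0   # THeMIS Transport maximum payload (from the article)

per_soldier = AMMO_KG + WATER_KG + RATIONS_KG + MEDICAL_KG
one_resupply = PLATOON_SIZE * per_soldier   # a single platoon resupply
two_resupplies = 2 * one_resupply           # "200% operating capacity"

print(f"One resupply:   {one_resupply:.0f} kg")               # ~358 kg
print(f"Two resupplies: {two_resupplies:.0f} kg")             # ~717 kg
print(f"Fits payload:   {two_resupplies <= MAX_PAYLOAD_KG}")  # True
```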

In addition, THeMIS can be fitted to act as a Remote Weapons System (RWS), with the ADDER version equipped with a .50 calibre Heavy Machine Gun outfitted with both day and night optics. Additional THeMIS models include the PROTECTOR RWS, which integrates Javelin anti-tank missile capability. Meanwhile, more conventional THeMIS models include GroundEye, an EOD UGV, and the ELIX-XL and KK-4 LE, which are surveillance platforms that allow for the incorporation of remote drone technology.

By seeking to further understand the roles that artificial intelligence and robotics currently have within the British Armed Forces, what drives those roles, and what challenges them, it is possible to gauge the continued evolution of remote warfare as such technologies emerge – specifically, the UGVs and RWSs trialled extensively by the British Army in 2018. Based upon research conducted on these recent trials, combined with current in-theatre applications of such technology, it is assessed that the use of such equipment will expedite the rise of remote warfare as the preferred method of war for western policy makers in future low- to medium-intensity conflicts, both minimising the physical risks to military personnel and making engagement in conflict more financially viable.


Endnotes:

[1] HM Government. (2015, November). National Security Strategy and Strategic Defence and Security Review 2015. Retrieved February 17, 2019, from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/478933/52309_Cm_9161_NSS_SD_Review_web_only.pdf

[2] Erbel, M., & Kinsey, C. (2015, October 4). Think again – supplying war: Reappraising military logistics and its centrality to strategy and war. Retrieved February 17, 2019, from https://www.tandfonline.com/doi/full/10.1080/01402390.2015.1104669

[3] Defence Science and Technology Laboratory. (2017). Competition document: Autonomous last mile resupply. Retrieved February 17, 2019, from https://www.gov.uk/government/publications/accelerator-competition-autonomous-last-mile-supply/accelerator-competition-autonomous-last-mile-resupply

 

Assessment Papers Capacity / Capability Enhancement Emerging Technology Robert Clark United Kingdom

Does Rising Artificial Intelligence Pose a Threat?

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Does Rising Artificial Intelligence Pose a Threat?

Date Originally Written:  February 3, 2019.

Date Originally Published:  February 18, 2019. 

Summary:  Artificial Intelligence, or A.I., has been a long-standing subject of science fiction that usually ends badly for the human race. From the ‘Terminator’ films to ‘Wargames,’ a dangerous A.I. is a common theme. In reality, A.I. could go either way depending on the circumstances. However, in its present state and uses, A.I. is more of a danger than a boon on the battlefield, both political and military.

Text:  Artificial intelligence (A.I.) has been a staple of science fiction over the years, but recently the technology has become a more probable reality[1]. The use of semi-intelligent computer programs and systems has made our lives a bit easier with regard to certain things, like turning on the lights in a room with an Alexa, playing music, or answering questions. However, other uses for such technologies have already been planned, and in some cases implemented, within the military and private industry for security-oriented and offensive means.

The notion of automated or A.I. systems that could find weaknesses in networks and systems, as well as automated A.I.s with fire control over certain remotely operated vehicles, is on the near horizon. Just as Google and others have built automated self-driving cars with an A.I. component that makes decisions in emergency situations like crash scenarios with pedestrians, the same technologies are already being discussed for warfare. In the case of automated cars with rudimentary A.I., we have already seen deaths and mishaps because the technology is not truly aware and cannot handle every permutation put in front of it[2].

Conversely, if one were to hack or program these technologies to disregard safety heuristics, a very lethal outcome is possible. Because current A.I. is not fully aware and cannot determine right from wrong, these technologies are open to abuse, and fears of such abuse already surround devices like Alexa[3]. In one recent case a baby was put in danger after a Nest device was hacked through poor passwords and the temperature in the room was set above 90 degrees. In another recent instance an Internet of Things device was hacked in much the same way and used to scare the inhabitants of a home with an alert that North Korea had launched nuclear missiles at the U.S.

Both of the previous cases were low-level attacks on semi-dumb devices — now imagine one of these devices with access to networked weapons systems that perhaps have a weakness that could be subverted[4]. In another scenario, A.I. programs such as those discussed in cyber warfare could be copied or subverted and unleashed not only by nation-state actors but also by a smart teen or a group of criminals pursuing their own ends. Such programs are a thing of the near future, but for an analogy, look at open-source hacking tools and platforms like Metasploit, which have automated scripts and are now used by adversaries as well as our own forces.

Hackers and crackers have already begun using A.I. technologies in their attacks, and as the technology becomes more stable and accessible, there will be a move toward whole campaigns being carried out by automated systems attacking targets all over the world[5]. This automation will cause collateral issues at the nation-state level in trying to attribute the actions of such systems to whoever set them upon the victim. How will attribution work when the attacking system is self-sufficient and perhaps not under the control of anyone?

Finally, the trope of a true A.I. that goes rogue is not just a trope. It is entirely possible that a truly sentient program or system might consider humans an impediment to its own existence and attempt to eradicate us from its access. That, of course, is a long-distant possibility, but consider one thought: in the last presidential election, and in the 2020 election cycle to come, automated and A.I. systems have been and will be deployed to game social media and perhaps election systems themselves. This technology is not just a far-flung possibility; rudimentary systems are extant and in use.

The only difference between now and tomorrow is that at the moment, people are pointing these technologies at the problems they want to solve. In the future, the A.I. may be the one choosing the problem in need of solving and this choice may not be in our favor.


Endnotes:

[1] Cummings, M. (2017, January 1). Artificial Intelligence and the Future of Warfare. Retrieved February 2, 2019, from https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

[2] Levin, S., & Wong, J. C. (2018, March 19). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. Retrieved February 2, 2019, from https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe

[3] Menn, J. (2018, August 08). New genre of artificial intelligence programs take computer hacking… Retrieved February 2, 2019, from https://www.reuters.com/article/us-cyber-conference-ai/new-genre-of-artificial-intelligence-programs-take-computer-hacking-to-another-level-idUSKBN1KT120

[4] Jowitt, T. (2018, August 08). IBM DeepLocker Turns AI Into Hacking Weapon | Silicon UK Tech News. Retrieved February 1, 2019, from https://www.silicon.co.uk/e-innovation/artificial-intelligence/ibm-deeplocker-ai-hacking-weapon-235783

[5] Dvorsky, G. (2017, September 12). Hackers Have Already Started to Weaponize Artificial Intelligence. Retrieved February 1, 2019, from https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425

Artificial Intelligence & Human-Machine Teaming Assessment Papers Emerging Technology Scot A. Terban

Assessment of the Role of Cyber Power in Interstate Conflict

Eric Altamura is a graduate student in the Security Studies Program at Georgetown University’s School of Foreign Service. He previously served for four years on active duty as an armor officer in the United States Army.  He regularly writes for Georgetown Security Studies Review and can be found on Twitter @eric_senlu.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the Role of Cyber Power in Interstate Conflict

Date Originally Written:  May 05, 2018 / Revised for Divergent Options July 14, 2018.

Date Originally Published:  September 17, 2018.

Summary:  The targeting of computer networks and digitized information during war can prevent escalation by providing an alternative means for states to create the strategic effects necessary to accomplish limited objectives, thereby bolstering the political viability of the use of force as a lever of state power.

Text:  Prussian General and military theorist Carl von Clausewitz wrote that in reality, a belligerent will use “no greater force, and set himself no greater military aim, than would be sufficient for the achievement of his political purpose.” State actors, thus far, have opted to limit cyberattacks in size and scope pursuant to specific political objectives when choosing to target information for accomplishing desired outcomes. This limiting occurs because as warfare approaches its unlimited form in cyberspace, computer network attacks increasingly affect the physical domain in areas where societies have become reliant upon IT systems for everyday functions. Many government and corporate network servers host data from industrial control systems (ICS) or supervisory control and data acquisition (SCADA) systems that control power generation, utilities, and virtually all other public services. Broader attacks on an adversary’s networks consequently affect the populations supported by these systems, so that the impacts of an attack go beyond simply denying an opponent the ability to communicate through digital networks.

At some point, a threshold exists where it becomes more practical for states to target the physical assets of an adversary directly, by other means, rather than through information systems. Unlimited cyberattacks on infrastructure would come close to replicating warfare in its total form, with the goal of fully disarming an opponent of its means to generate resistance, so states become more willing to expend resources and effort towards accomplishing their objectives. In this case, cyber power decreases in utility relative to the use of physical munitions (i.e. bullets and bombs) as the scale of warfare increases, mainly due to the lower probability of producing enduring effects in cyberspace. As such, the targeting and attacking of an opponent’s digital communication networks tends to occur in a more limited fashion because alternative levers of state power provide more reliable solutions as warfare nears its absolute form. In other words, cyberspace offers much more value to states seeking to accomplish limited political objectives than to those waging total war against an adversary.

To understand how actors attack computer systems and networks to accomplish limited objectives during war, one must first identify what states actually seek to accomplish in cyberspace. Just as the prominent British naval historian Julian Corbett explains that command of the sea does not entail “the conquest of water territory,” states do not use information technology for the purpose of conquering the computer systems and supporting infrastructure that comprise an adversary’s information network. Furthermore, cyberattacks do not occur in isolation from the broader context of war, nor do they need to result in the total destruction of the enemy’s capabilities to successfully accomplish political objectives. Rather, the tactical objective in any environment is to exploit the activity that takes place within it – in this case, the communication of information across a series of interconnected digital networks – in a way that provides a relative advantage in war. Once the enemy’s communication of information is exploited, and an advantage achieved, states can then use force to accomplish otherwise unattainable political objectives.

Achieving such an advantage requires targeting the key functions and assets in cyberspace that enable states to accomplish political objectives. Italian General Giulio Douhet, an airpower theorist, describes command of the air as, “the ability to fly against an enemy so as to injure him, while he has been deprived of the power to do likewise.” Whereas airpower theorists propose targeting airfields alongside destroying airplanes as ways to deny an adversary access to the air, a similar concept prevails with cyber power. To deny an opponent the ability to utilize cyberspace for its own purposes, states can either attack information directly or target the means by which the enemy communicates its information. Once an actor achieves uncontested use of cyberspace, it can subsequently control or manipulate information for its own limited purposes, particularly by preventing the escalation of war toward its total form.

More specifically, the ability to communicate information while preventing an adversary from doing so has a limiting effect on warfare for three reasons. First, access to information through networked communications systems provides a decisive advantage to military forces by allowing for “analyses and synthesis across a variety of domains” that enable rapid and informed decision-making at all echelons. The greater a decision advantage one military force has over another, the less costly military action becomes. Second, the ubiquity of networked information technologies creates an alternative way for actors to affect targets that would otherwise be politically, geographically, or normatively infeasible to target with physical munitions. Finally, actors can mask their activities in cyberspace, which makes attribution difficult. This added layer of ambiguity enables face-saving measures by opponents, who can opt not to respond to attacks overtly without necessarily appearing weak.

In essence, cyber power has become particularly useful for states as a tool for preventing conflict escalation, as an opponent’s ability to respond to attacks becomes constrained when denied access to communication networks. Societies’ dependence on information technology and resulting vulnerability to computer network attacks continues to increase, indicating that interstate violence may become much more prevalent in the near term if aggressors can use cyberattacks to decrease the likelihood of escalation by an adversary.


Endnotes:

[1] von Clausewitz, C. (1976). On War. (M. Howard, & P. Paret, Trans.) Princeton: Princeton University Press.

[2] United States Computer Emergency Readiness Team. (2018, March 15). Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors. (United States Department of Homeland Security) Retrieved May 1, 2018, from https://www.us-cert.gov/ncas/alerts/TA18-074A

[3] Fischer, E. A. (2016, August 12). Cybersecurity Issues and Challenges: In Brief. Retrieved May 1, 2018, from https://fas.org/sgp/crs/misc/R43831.pdf

[4] Corbett, J. S. (2005, February 16). Some Principles of Maritime Strategy. (S. Shell, & K. Edkins, Eds.) Retrieved May 2, 2018, from The Project Gutenberg: http://www.gutenberg.org/ebooks/15076

[5] Ibid.

[6] Douhet, G. (1942). The Command of the Air. (D. Ferrari, Trans.) New York: Coward-McCann.

[7] Singer, P. W., & Friedman, A. (2014). Cybersecurity and Cyberwar: What Everyone Needs to Know. New York: Oxford University Press.

[8] Boyd, J. R. (2010, August). The Essence of Winning and Losing. (C. Richards, & C. Spinney, Eds.) Atlanta.

Aggression Assessment Papers Cyberspace Emerging Technology Eric Altamura

Options to Manage the Risks of Integrating Artificial Intelligence into National Security and Critical Industry Organizations

Lee Clark is a cyber intelligence analyst.  He holds an MA in intelligence and international security from the University of Kentucky’s Patterson School.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  What are the potential risks of integrating artificial intelligence (AI) into national security and critical infrastructure organizations and potential options for mitigating these risks?

Date Originally Written:  May 19, 2018.

Date Originally Published:  July 2, 2018.

Author and / or Article Point of View:  The author is currently an intelligence professional focused on threats to critical infrastructure and the private sector.  This article will use the U.S. Department of Homeland Security’s definition of “critical infrastructure,” referring to 16 public and private sectors that are deemed vital to the U.S. economy and national functions.  The designated sectors include financial services, emergency response, food and agriculture, energy, government facilities, defense industry, transportation, critical manufacturing, communications, commercial facilities, chemical production, civil nuclear functions, dams, healthcare, information technology, and water/wastewater management[1].  This article will examine some broad options to mitigate some of the most prevalent non-technical risks of AI integration, including legal protections and contingency planning.

Background:  The benefits of incorporating AI into the daily functions of an organization are widely championed in both the private and public sectors.  The technology has the capability to revolutionize facets of government and private sector functions like record keeping, data management, and customer service, for better or worse.  Bringing AI into the workplace has significant risks on several fronts, including privacy/security of information, record keeping/institutional memory, and decision-making.  Additionally, the technology carries a risk of backlash over job losses as automation increases in the global economy, especially for more skilled labor.  The national security and critical industry spheres are not facing an existential threat, but these are risks that cannot be dismissed.

Significance:  Real-world examples of these concerns have been reported in open source with clear implications for major corporations and national security organizations.  In terms of record keeping/surveillance-related issues, one need only look to recent court cases in which authorities subpoenaed the records of an Amazon Alexa, an appliance that acts as a digital personal assistant via a rudimentary AI system.  This subpoena situation becomes especially concerning to users given recent reports of Alexa devices being converted into spying tools[2].  Critical infrastructure organizations, especially defense, finance, and energy companies, exist within complex legal frameworks that involve international laws and security concerns, making legal protections of AI data all the more vital.

In the case of issues involving decision-making and information security, the dangers are no less severe.  AIs are susceptible to a variety of methods that seek to manipulate decision-making, including social engineering and, more specifically, disinformation efforts.  Perhaps the most evident case of social engineering against an AI is an instance in which Microsoft’s AI endorsed genocidal statements after a brief conversation with users on Twitter[3].  If it is possible to convince an AI to support genocide, it is not difficult to imagine the potential to convince it to divulge state secrets or turn over financial information with some key information fed in a meaningful sequence[4].  In another public instance, an Amazon Echo device recently recorded a private conversation in an owner’s home and sent the conversation to another user without requesting permission from the owner[5].  Similar instances are easy to foresee in a critical infrastructure organization such as a nuclear energy plant, in which an AI may send proprietary information to an uncleared user.

AI decisions also have the capacity to surprise developers and engineers tasked with maintenance, which could present problems of data recovery and control.  For instance, developers discovered that Facebook’s AI had begun writing a modified version of a coding language for efficiency, having essentially created its own code dialect, causing transparency concerns.  Losing the ability to examine and assess coding decisions presents problems for replicating processes and maintenance of a system[6].

AI integration into industry also carries a significant risk of backlash from workers.  Economists and labor scholars have been discussing the impacts of automation and AI on employment and labor in the global economy.  This discussion is not merely theoretical in nature, as evidenced by leaders of major tech companies making public remarks supporting basic income, as automation will likely replace a significant portion of the labor market in the coming decades[7].

Option #1:  Leaders in national security and critical infrastructure organizations work with internal legal teams to develop legal protections for organizations while lobbying for legislation to secure legal privileges for information stored by AI systems (perhaps resembling attorney-client privilege or spousal privileges).

Risk:  Legal teams may lack the technical knowledge to foresee some vulnerabilities related to AI.

Gain:  Option #1 proactively builds liability shields, protections, non-disclosure agreements, and other common legal tools to anticipate needs for AI-human interactions.

Option #2:  National security and critical infrastructure organizations build task forces to plan protocols and define a clear AI vision for organizations.

Risk:  In addition to common pitfalls of group work like bandwagoning and group think, this option is vulnerable to insider threats like sabotage or espionage attempts.  There is also a risk that such groups may develop plans that are too rigid or short-sighted to be adaptive in unforeseen emergencies.

Gain:  Task forces can develop strategies and contingency plans for when emergencies arise.  Such emergencies could include hacks, data breaches, sabotage by rogue insiders, technical/equipment failures, or side effects of actions taken by an AI in a system.

Option #3:  Organization leaders work with intelligence and information security professionals to try to make AI more resilient against hacker methods, including distributed denial-of-service attacks, social engineering, and crypto-mining.

Risk:  Potential to “over-secure” systems, resulting in loss of efficiency or overcomplicating maintenance processes.

Gain:  Reduced risk of hacks or other attacks from malicious actors outside of organizations.

Other Comments:  None.

Recommendation: None.


Endnotes:

[1] DHS. (2017, July 11). Critical Infrastructure Sectors. Retrieved May 28, 2018, from https://www.dhs.gov/critical-infrastructure-sectors

[2] Boughman, E. (2017, September 18). Is There an Echo in Here? What You Need to Consider About Privacy Protection. Retrieved May 19, 2018, from https://www.forbes.com/sites/forbeslegalcouncil/2017/09/18/is-there-an-echo-in-here-what-you-need-to-consider-about-privacy-protection/

[3] Price, R. (2016, March 24). Microsoft Is Deleting Its AI Chatbot’s Incredibly Racist Tweets. Retrieved May 19, 2018, from http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3

[4] Osaba, O. A., & Welser, W., IV. (2017, December 06). The Risks of AI to Security and the Future of Work. Retrieved May 19, 2018, from https://www.rand.org/pubs/perspectives/PE237.html

[5] Shaban, H. (2018, May 24). An Amazon Echo recorded a family’s conversation, then sent it to a random person in their contacts, report says. Retrieved May 28, 2018, from https://www.washingtonpost.com/news/the-switch/wp/2018/05/24/an-amazon-echo-recorded-a-familys-conversation-then-sent-it-to-a-random-person-in-their-contacts-report-says/

[6] Bradley, T. (2017, July 31). Facebook AI Creates Its Own Language in Creepy Preview Of Our Potential Future. Retrieved May 19, 2018, from https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/

[7] Kharpal, A. (2017, February 21). Tech CEOs Back Call for Basic Income as AI Job Losses Threaten Industry Backlash. Retrieved May 19, 2018, from https://www.cnbc.com/2017/02/21/technology-ceos-back-basic-income-as-ai-job-losses-threaten-industry-backlash.html

Critical Infrastructure Cyberspace Emerging Technology Lee Clark Option Papers Private Sector Resource Scarcity

Options for Next Generation Blue Force Biometrics

Sarah Soliman is a Technical Analyst at the nonprofit, nonpartisan RAND Corporation.  Sarah’s research interests lie at the intersection of national security, emerging technology, and identity.  She can be found on Twitter @BiometricsNerd.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Next Generation Biometrics for U.S. Forces.

Date Originally Written:  March 18, 2017.

Date Originally Published:  June 26, 2017.

Author and / or Article Point of View:  Sarah Soliman is a biometrics engineer who spent two years in Iraq and Afghanistan as contracted field support to Department of Defense biometrics initiatives.

Background:  When a U.S. Army specialist challenged Secretary of Defense Donald Rumsfeld in 2004, it became tech-innovation legend within the military.  The specialist asked what the secretary was doing to up-armor military vehicles against Improvised Explosive Device (IED) attacks[1].  This town hall question led to technical innovations that became the class of military vehicles known as Mine-Resistant Ambush Protected, the MRAP.

History repeated itself in a way last year when U.S. Marine Corps General Robert B. Neller was asked in a Marine Corps town hall what he was doing to “up-armor” military personnel—not against attacks from other forces, but against suicide within their ranks[2].  The technical innovation path to strengthening troop resiliency is less clear, but just as in need of an MRAP-like focus on solutions.  Here are three approaches to consider in applying “blue force” biometrics, the collection of physiological or behavioral data from U.S. military troops, that could help develop diagnostic applications to benefit individual servicemembers.

[Image: US Army Specialist Thomas Wilson addresses the Secretary of Defense on base in Kuwait in 2004. Credit: Gustavo Ferrari / AP, http://www.nbcnews.com/id/6679801/ns/world_news-mideast_n_africa/t/rumsfeld-inquisitor-not-one-bite-his-tongue]

Significance:  The September 11th terrorists struck at a weakness—the United States’ ability to identify enemy combatants.  So the U.S. military took what was once blue force biometrics—a measurement of human signatures like facial images, fingerprints and deoxyribonucleic acid (DNA) (which are all a part of an enrolling military member’s record)—and flipped their use to track combatants rather than their own personnel.  This shift led to record use of biometrics in Operation Iraqi Freedom and Operation Enduring Freedom to assist in green (partner), grey (unknown), and red (enemy) force identification.

After 9/11, the U.S. military rallied for advances in biometrics, developing mobile tactical handheld devices, creating databases of IED networks, and cutting the time it takes to analyze DNA from days to hours[3].  The U.S. military became highly equipped for a type of identification that validates a person is who they say they are, yet in some ways these red force biometric advances have plateaued alongside dwindling funding for overseas operations and troop presence.  As a biometric toolset is developed to up-armor military personnel for health concerns, it may be worth considering expanding the narrow definition of biometrics that the Department of Defense currently uses[4].

The options presented below represent research that is shifting from red force biometrics back to the need for more blue force diagnostics as it relates to traumatic brain injury, sleep and social media.

Option #1:  Traumatic Brain Injury (TBI).

The bumps and grooves of the brain can contain identification information much like the loops and whorls in a fingerprint.  Science is only on the cusp of understanding the benefits of brain mapping, particularly as it relates to injury for military members[5].

Gain:  Research into Wearables.

Getting military members to a field hospital equipped with a magnetic resonance imaging (MRI) scanner soon after an explosion is often unrealistic.  One trend has been to catalog the series of blast waves experienced—instead of measuring one individual biometric response—through a wearable “blast gauge” device.  The blast gauge program made news recently as the gauges failed to provide reliable enough data and the program was cancelled[6].  Though not field expedient, another traumatic brain injury (TBI) sensor type to watch is brain activity trackers, which CNN’s Jake Tapper experienced when he donned a MYnd Analytics electroencephalogram brain-scanning cap, drawing attention to blue force biometrics topics alongside Veterans Day[7].
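
To make the “catalog the series of blast waves” idea concrete, here is a minimal sketch: the device logs the peak overpressure of each blast event, and a servicemember is flagged for screening when a single event or the cumulative exposure crosses a threshold. The pressure values and thresholds are invented for illustration and bear no relation to actual blast gauge calibration.

```python
# Hypothetical blast gauge log: peak overpressure per event, in psi.
# Values and thresholds are invented for illustration only.
events_psi = [2.1, 0.8, 5.6, 1.3]

SINGLE_EVENT_PSI = 4.0   # assumed single-event screening trigger
CUMULATIVE_PSI = 8.0     # assumed cumulative-exposure screening trigger

worst = max(events_psi)        # strongest single blast recorded
cumulative = sum(events_psi)   # total exposure across all events

if worst >= SINGLE_EVENT_PSI or cumulative >= CUMULATIVE_PSI:
    print(f"Refer for TBI screening: peak {worst:.1f} psi, "
          f"cumulative {cumulative:.1f} psi over {len(events_psi)} events")
```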

 

[Image: Blast Gauge. Credit: DARPA, http://www.npr.org/sections/health-shots/2016/12/20/506146595/pentagon-shelves-blast-gauges-meant-to-detect-battlefield-brain-injuries?utm_medium=RSS&utm_campaign=storiesfromnpr]

Risk:  Overpromising, Underdelivering or “Having a Theranos Moment.”

Since these wearable devices aren’t currently viable solutions, another approach being considered is uncovering biometrics in blood.  TBI may cause certain proteins to spike in the blood[8]. Instead of relying on a subjective self-assessment by a soldier, a quick pin-prick blood draw could be taken.  Military members can be hesitant to admit to injury, since receiving treatment is often equated with stigma and may require having to depart from a unit.  This approach would get around that while helping the Department of Defense (DoD) gain a stronger definition of whether treatment is required.

[Image: Credit: Intelligent Optical Systems Inc, http://www.intopsys.com/downloads/BioMedical/TBI-Brochure.pdf]

Option #2:  Sleep.

Thirty-one percent of members of the U.S. military get five hours or less of sleep a night, according to RAND research[9].  This level of sleep deprivation affects cognitive, interpersonal, and motor skills, whether that means leading a convoy or a patrol, or leading a family back home.  This health concern bleeds across personal and professional lines.

Gain:  Follow the Pilots.

The military already requires flight crews to rest between missions, a policy in place to give flight crews the opportunity to be mission ready through sleep, and the same concept could be instituted across the military.  Keeping positive sleep biometrics—measurements of human signatures based on metrics like total sleep time, how often a person wakes during a sleep cycle, oxygen levels during sleep, and the consistency of sleep duration—can lower rates of daytime impairment.
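
As an illustration of the sleep metrics just described, the sketch below computes total sleep time, number of awakenings, and night-to-night consistency from wearable-style sleep intervals. The data format and values are assumptions for illustration, not output from any fielded DoD system.

```python
from datetime import datetime
from statistics import pstdev

# Hypothetical wearable output: (sleep_start, sleep_end) intervals per night.
nights = [
    [("2017-03-13 23:10", "2017-03-14 02:05"),   # woke once during the night
     ("2017-03-14 02:25", "2017-03-14 05:30")],
    [("2017-03-14 23:45", "2017-03-15 04:50")],  # slept straight through
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

totals = []
for intervals in nights:
    # Total sleep time in hours: sum of the interval durations.
    total = sum((parse(end) - parse(start)).total_seconds() / 3600
                for start, end in intervals)
    awakenings = len(intervals) - 1  # gaps between intervals = wake-ups
    totals.append(total)
    print(f"Sleep: {total:.1f} h, awakenings: {awakenings}")

# Consistency of sleep duration: lower deviation = more repeatable sleep.
print(f"Night-to-night variability: {pstdev(totals):.2f} h")
```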

[Image: The prevalence of insufficient sleep duration and poor sleep quality across the force. Credit: RAND; Clock by Dmitry Fisher/iStock; Pillow by Yobro10/iStock. http://www.rand.org/pubs/research_briefs/RB9823.html]

Risk:  More memoirs by personnel bragging about how little sleep they need to function[10].

What if a minimal level of rest became a requirement for the larger military community?  What sleep-tracking wearables could military members opt to wear to better grasp their own readiness?  What if sleep data were factored into a military command’s performance evaluation?

Option #3:  Social Media.

The traces of identity left behind through the language, images, and even emoji[11] used in social media have been studied, and they can provide clues to mental health.

Gain:  It’s easier to pull text than to pull blood.

Biometric markers include interactivity like engagement (how often posts are made), what time a message is sent (which can act as an “insomnia index”), and emotion detection through text analysis of the language used[12].  Social media ostracism can also be measured by “embeddedness” or how close-knit one’s online connections are[13].
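
A minimal sketch of how two of these markers might be computed from post timestamps and text appears below. The post data, the late-night window used for the “insomnia index,” and the emotion word list are all illustrative assumptions, not a validated clinical instrument.

```python
from datetime import datetime

# Hypothetical post data: (timestamp, text). Illustrative only.
posts = [
    ("2019-04-10 02:14", "cannot sleep again everything feels pointless"),
    ("2019-04-10 14:02", "good run with the unit this afternoon"),
    ("2019-04-11 03:40", "another long night alone"),
]

NEGATIVE_WORDS = {"pointless", "alone", "hopeless", "tired"}  # assumed lexicon

def is_late_night(ts):
    # "Insomnia index" window: posts between midnight and 5 a.m. (assumption).
    return 0 <= datetime.strptime(ts, "%Y-%m-%d %H:%M").hour < 5

late_night = sum(is_late_night(ts) for ts, _ in posts)
negative = sum(any(word in text.split() for word in NEGATIVE_WORDS)
               for _, text in posts)

print(f"Engagement: {len(posts)} posts")
print(f"Insomnia index: {late_night / len(posts):.2f}")  # share of late-night posts
print(f"Negative-emotion posts: {negative}")
```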

 

[Image: Credit: https://twitter.com/DeptofDefense/status/823515639302262784?ref_src=twsrc%5Etfw]

Risk:  Misunderstanding in social media research.

The DoD’s tweet about this research was misconstrued as a subtweet or mockery[14].  True to its text, the tweet was about research under development at the Department of Defense, in particular at the DoD Suicide Prevention Office.  Though conclusions at the scale of the DoD have yet to be reached, important research is being built in this area, including studies like one done by Microsoft Research, which demonstrated 70 percent accuracy in estimating onset of a major depressive disorder[15].  Computer programs have identified Instagram photos as a predictive marker of depression[16] and Twitter data as a quantifiable signal of suicide attempts[17].

Other Comments:  Whether by mapping the brain, breaking barriers to getting good sleep, or improving linguistic understanding of social media calls for help, how will the military look to blue force biometrics to strengthen the health of its core?  What type of intervention should be aligned once data indicators are defined?  Many troves of untapped data remain in the digital world, but data protection and privacy measures must be in place before they are mined.

Recommendations:  None.


Endnotes:

[1]  Gilmore, G. J. (2004, December 08). Rumsfeld Handles Tough Questions at Town Hall Meeting. Retrieved June 03, 2017, from http://archive.defense.gov/news/newsarticle.aspx?id=24643

[2]  Schogol, J. (2016, May 29). Hidden-battle-scars-robert-neller-mission-to-save-marines-suicide. Retrieved June 03, 2017, from http://www.marinecorpstimes.com/story/military/2016/05/29/hidden-battle-scars-robert-neller-mission-to-save-marines-suicide/84807982/

[3]  Tucker, P. (2015, May 20). Special Operators Are Using Rapid DNA Readers. Retrieved June 03, 2017, from http://www.defenseone.com/technology/2015/05/special-operators-are-using-rapid-dna-readers/113383/

[4]  The DoD’s Joint Publication 2-0 defines biometrics as “The process of recognizing an individual based on measurable anatomical, physiological, and behavioral characteristics.”

[5]  DoD Worldwide Numbers for TBI. (2017, May 22). Retrieved June 03, 2017, from http://dvbic.dcoe.mil/dod-worldwide-numbers-tbi

[6]  Hamilton, J. (2016, December 20). Pentagon Shelves Blast Gauges Meant To Detect Battlefield Brain Injuries. Retrieved June 03, 2017, from http://www.npr.org/sections/health-shots/2016/12/20/506146595/pentagon-shelves-blast-gauges-meant-to-detect-battlefield-brain-injuries?utm_medium=RSS&utm_campaign=storiesfromnpr

[7]  CNN – The Lead with Jake Tapper. (2016, November 11). Retrieved June 03, 2017, from https://vimeo.com/191229323

[8]  West Virginia University. (2014, May 29). WVU research team developing test strips to diagnose traumatic brain injury, heavy metals. Retrieved June 03, 2017, from http://wvutoday-archive.wvu.edu/n/2014/05/29/wvu-research-team-developing-test-strips-to-diagnose-traumatic-brain-injury-heavy-metals.html

[9]  Troxel, W. M., Shih, R. A., Pedersen, E. R., Geyer, L., Fisher, M. P., Griffin, B. A., . . . Steinberg, P. S. (2015, April 06). Sleep Problems and Their Impact on U.S. Servicemembers. Retrieved June 03, 2017, from http://www.rand.org/pubs/research_briefs/RB9823.html

[10]  Mullany, A. (2017, May 02). Here’s Arianna Huffington’s Recipe For A Great Night Of Sleep. Retrieved June 03, 2017, from https://www.fastcompany.com/3060801/heres-arianna-huffingtons-recipe-for-a-great-night-of-sleep

[11]  Ruiz, R. (2016, June 26). What you post on social media might help prevent suicide. Retrieved June 03, 2017, from http://mashable.com/2016/06/26/suicide-prevention-social-media.amp

[12]  Choudhury, M. D., Gamon, M., Counts, S., & Horvitz, E. (2013, July 01). Predicting Depression via Social Media. Retrieved June 03, 2017, from https://www.microsoft.com/en-us/research/publication/predicting-depression-via-social-media/

[13]  Ibid.

[14]  Brogan, J. (2017, January 23). Did the Department of Defense Just Subtweet Donald Trump? Retrieved June 03, 2017, from http://www.slate.com/blogs/future_tense/2017/01/23/did_the_department_of_defense_subtweet_donald_trump_about_mental_health.html

[15]  Choudhury, M. D., Gamon, M., Counts, S., & Horvitz, E. (2013, July 01). Predicting Depression via Social Media. Retrieved June 03, 2017, from https://www.microsoft.com/en-us/research/publication/predicting-depression-via-social-media/

[16]  Reece, A. G., & Danforth, C. M. (2016, August 13). Instagram photos reveal predictive markers of depression. Retrieved June 03, 2017, from https://arxiv.org/abs/1608.03282

[17]  Coppersmith, G., Ngo, K., Leary, R., & Wood, A. (2016, June 16). Exploratory Analysis of Social Media Prior to a Suicide Attempt. Retrieved June 03, 2017, from https://www.semanticscholar.org/paper/Exploratory-Analysis-of-Social-Media-Prior-to-a-Su-Coppersmith-Ngo/3bb21a197b29e2b25fe8befbe6ac5cec66d25413

Biometrics Emerging Technology Option Papers Psychological Factors Sarah Soliman United States