Assessing the Need for the Battlefield Scavenger

Shawn Moore is Principal of the Russell Area Technology Center. He has studied abroad as a Fulbright Scholar in Uzbekistan, Tajikistan, and Japan. He has conducted research studies in China and the Republic of Korea. Shawn is an Officer in the South Carolina State Guard and a recipient of the Association of Former Intelligence Officers’ Peter Jasin Graduate Fellowship. Shawn holds a Bachelor of Science in History and Geography from Morehead State University, a Master of Arts in Counseling, and a Master of Arts in Leadership. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Need for the Battlefield Scavenger

Date Originally Written:  January 14, 2023.

Date Originally Published:  January 30, 2023.

Author and / or Article Point of View:  The author believes that a new type of support soldier is necessary for the battlefields of today.

Summary:  The decisive impact of autonomous systems on the battlefield today, coupled with supply chain interruptions during major combat operations, will lead to the creation of a novel role: the battlefield scavenger. This scavenger will retrieve, repurpose, repair, and return autonomous systems to operational status, reducing supply chain dependence and enhancing combat effectiveness.

Text:  The war in Ukraine has demonstrated the demand for a wide range of technical capabilities across all facets of conflict. Autonomous systems, for the purposes of this article, refers to “any particular machine or system capable of performing an automated function and potentially learning from its experiences to enhance its performance[1].”

Autonomous systems in Ukraine have carried out surveillance, kinetic strikes, electronic warfare, and resupply missions, operating either independently or collaboratively. When employed in combat, autonomous systems provide operational advantages over an adversary. The Ukraine War has also shown the rapid rate at which materiel is consumed in modern war. These autonomous devices may not be costly, but the technology becomes increasingly difficult to obtain as factories and supply lines fall under attack. Further, in a Great Power Conflict, access to the raw materials needed to produce autonomous systems will be contested.

The worldwide diffusion of technology has the potential to offset some of the supply and procurement problems seen in Ukraine. Officials in Europe addressed these problems publicly with the revelation that Russian soldiers were cannibalizing components and microchips from refrigerators and washing machines for military use[2]. The Russian military’s experience shows that when autonomous systems are removed from the battlespace, the result is lives lost and valuable time forfeited at critical periods of battle. The Ukrainian military, for its part, has turned to commercially available autonomous systems and modified them for combat operations.

The reliance on autonomous systems will require a new type of combat service support soldier who will scavenge the battlespace for discarded scrap, damaged autonomous systems, and devices that could be repurposed. Inspired by the Jawas of the film “Star Wars,” this article refers to this new combat service support soldier as JAWAS, an acronym for Joint LAnd Water Air Scavenger. In “Star Wars,” Jawas[3] were passionate scavengers, combing the deserts of Tatooine for droids or scraps which they would capture and sell to the local residents, forming a codependent circle of trade. In the not-too-distant future, the side that can innovate and employ JAWAS the quickest will have an advantage over its adversary.

The JAWAS will work on land, water, air, and even in space. The JAWAS will be composed of individuals with exceptional imagination and the ability to think laterally, while having the physical stamina to scavenge the battlespace and defend their area of operations. JAWAS will be stationed close to the front line to reduce response time, operating as a self-contained company from a mobile platform that includes workshops. The JAWAS will operate on the Golden Hour, a term familiar to military medicine. The Golden Hour refers to getting wounded warfighters off the battlefield and into the care of a full-scale military hospital within an hour[4]. JAWAS will roam the environment to quickly retrieve, repurpose, repair, and return autonomous systems to an operational status.

Once a system is acquired, the JAWAS, relying on field expedient materials in theater and limited supplies, will undertake the process of designing, fabricating, programming, and assembling autonomous systems for combat on land, water, or air. JAWAS will need to be a special type of soldier, drawn from the science, technology, engineering, and mathematics fields, but also possessing exceptional imagination. They will use power tools, hand tools, and advanced diagnostic equipment to support multidomain operations. The leadership required of JAWAS junior officers and noncommissioned officers will be no less than that required of combat troops.

JAWAS support combat operations by leveraging autonomous systems to create advantages over adversaries. Relying on locally sourced materials will also limit supply and procurement requests for parts and components. This local sourcing will allow scarce transportation to be dedicated to moving war materiel into the theater. While JAWAS may not exist now, the demand signal is coming, and employing untrained soldiers in this role will result in confusion, panic, and possible defeat.


Endnotes:

[1] James Rands, “Artificial Intelligence and Autonomous Systems on the Battlefield – Proof,” posted February 28, 2019 (accessed May 2, 2020); Richard J. Sleesman and Todd C. Huntley, “Lethal Autonomous Weapon Systems: An Overview,” Army Lawyer, no. 1, Jan. 2019, p. 32+ (accessed May 2, 2020).

[2] Nardelli, A., Baschuk, B., & Champion, M. (2022, October 29). Putin Stirs Worry That Russia Is Stripping Home-Appliance Imports for Arms. Time. Retrieved January 29, 2023, from https://time.com/6226484/russia-appliance-imports-weapons/

[3] Jawa. Wookieepedia. (n.d.). Retrieved January 29, 2023, from https://starwars.fandom.com/wiki/Jawa

[4] Aker, J. (2022, June 14). Military Medicine Is Preparing for the Next Conflict. Medical Education and Training Campus. Retrieved January 29, 2023, from https://www.metc.mil/METC-News/News/News-Display/Article/3062564/military-medicine-is-preparing-for-the-next-conflict/.


Assessing Terrorism and Artificial Intelligence in 2050

William D. Harris is a U.S. Army Special Forces Officer with six deployments for operations in Iraq and Syria and experience working in Jordan, Turkey, Saudi Arabia, Qatar, Israel, and other regional states. He has commanded from the platoon to battalion level and served in assignments with 1st Special Forces Command, 5th Special Forces Group, 101st Airborne Division, Special Operations Command—Central, and 3rd Armored Cavalry Regiment.  William holds a Bachelor of Science from United States Military Academy, a Master of Arts from Georgetown University’s Security Studies Program, a Masters from the Command and General Staff College, and a Masters from the School of Advanced Military Studies.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing Terrorism and Artificial Intelligence in 2050

Date Originally Written:  December 14, 2022.

Date Originally Published:  January 9, 2023.

Author and / or Article Point of View:  The author is an active-duty military member who believes that terrorists will pose increasing threats in the future as technology enables their operations.  

Summary:  The proliferation of artificial intelligence (AI) will enable terrorists in at least three ways.  First, they will be able to overcome their current manpower limitations in the proliferation of propaganda to increase recruitment.  Second, they will be able to use AI to improve target reconnaissance.  Third, terrorists can use AI to improve their attacks, including advanced unmanned systems and biological weapons.

Text:  Recent writing about the security implications of artificial intelligence (AI) has focused on the feasibility of a state like China or others with totalitarian aspirations building a modern panopticon, combining ubiquitous surveillance with massive AI-driven data processing and pattern recognition[1].  For years, other lines of research into AI have analyzed the application of AI to fast-paced conventional warfare.  Less research has focused on how AI could help the sub-state actor: the criminal, insurgent, or terrorist.  Nevertheless, history shows that new technologies have never given their user an enduring and decisive edge.  Either the technology proliferates or combatants find countermeasures.  Consequently, understanding how AI technology could enable terrorists is a first step in preventing future attacks.

The proliferation of AI has the potential to enable terrorists similar to the way that the proliferation of man-portable weapons and encrypted communications have enabled terrorists to become more lethal[2].  Terrorists, or other sub-state entrepreneurs of violence, may be able to employ AI to solve operational problems.  This preliminary analysis will look at three ways that violent underground groups could use AI in the coming decades: recruitment, reconnaissance, and attack.

The advent of mass media allowed the spread of radical ideological tracts at a pace that led to regional and then global waves of violence.  In 1848, revolutionary movements threatened most of the states in Europe.  Half a century later, a global yet diffuse anarchist movement led to the assassination of five heads of state and the beginning of World War I[3].  Global revolutionary movements during the Cold War and then the global Islamist insurgency against the modern world further capitalized on the increasing bandwidth, range, and volume of communication[4].  The sleek magazines and videos of the Islamic State are the latest iteration of terrorists’ use of modern communications to craft and distribute a message intended to find and radicalize recruits.  If they employ advanced AI, terrorist organizations will be able to increase the production rate of quality materials in multiple languages, far beyond what their limited manpower currently allows.  Recent advances in AI, most notably OpenAI’s ChatGPT, demonstrate that AI is capable of producing quality materials.  These materials will become increasingly sophisticated and nuanced in ways that resonate with vulnerable individuals, leading to increased radicalization and recruitment[5].

Once a terrorist organization has recruited a cadre of fighters, it can begin the process of planning and executing a terrorist attack, a key phase of which is reconnaissance.  AI could be an important tool here, enabling increased collection and analysis of data to find patterns of life and security vulnerabilities.  Distributed AI would allow terrorists conducting reconnaissance to collect and process vast quantities of information rather than relying on purely physical surveillance[6].  This AI use will speed up the techniques of open source intelligence collection and analysis, enabling the organization to identify the pattern of life of the employees of a targeted facility and to find gaps and vulnerabilities in the security.  Open-source imagery and technical information could provide valuable sources for characterizing targets.  AI could also drive open architecture devices that enable terrorists to collect and access all signals in the electromagnetic spectrum and sound waves[7].  In the hands of skilled users, AI will enable the collection and analysis of information that was previously unavailable, or available only to the most sophisticated state intelligence operations.  Moreover, as the systems that run modern societies increase in complexity, that complexity will create new unanticipated failure modes, as the history of computer hacking and even the recent power grid attacks demonstrate[8].

After conducting the target reconnaissance, terrorists could employ AI-enabled systems to facilitate or execute the attack.  The clearest example would be autonomous or semi-autonomous vehicles.  These vehicles will pose increasing problems for facilities protection in the future.  However, there are other ways that terrorists could employ AI to enable their attacks.  One idea would be to use AI agents to identify how the group’s members are vulnerable to facial recognition or other forms of pattern recognition.  Forewarned, the groups could use AI to generate deception measures to mislead security forces.  Using these AI-enabled disguises, the terrorists could conduct attacks with manned and unmanned teams.  The unmanned teammates could conduct parts of the operation that are too distant, dangerous, difficult, or restricted for their human teammates to undertake.  More frighteningly, the recent successes in applying machine learning and AI to understand deoxyribonucleic acid (DNA) and proteins could be applied to make new biological and chemical weapons, increasing lethality, transmissibility, or precision[9].

Not all terrorist organizations will develop the sophistication to employ advanced AI across all phases of the organizations’ operations.  However, AI will continue and accelerate the arms race between security forces and terrorists.  Terrorists have applied most other human technologies in their effort to become more effective.  They will be able to apply AI to accelerate their propaganda and recruitment; target selection and reconnaissance; evasion of facial recognition and pattern analysis; unmanned attacks against fortified targets; manned-unmanned teamed attacks; and advanced biological and chemical attacks.  

One implication of this analysis is that the more distributed AI technology and access become, the more they will favor the terrorists.  Unlike the centralized mainframes of early science fiction visions of AI, the current trend is for AI to be distributed and broadly available.  The more these technologies proliferate, the more defenders should be concerned.

The policy implications are that governments and security forces will continue their investments in technology to remain ahead of the terrorists.  In the West, this imperative to exploit new technologies, including AI, will increasingly bring the security forces into conflict with the need to protect individual liberties and maintain strict limits on the potential for governmental abuse of power.  The balance in that debate between protecting liberty and protecting lives will have to evolve as terrorists grasp new technological powers.


Endnotes:

[1] For example, see “The AI-Surveillance Symbiosis in China: A Big Data China Event,” accessed December 16, 2022, https://www.csis.org/analysis/ai-surveillance-symbiosis-china-big-data-china-event; “China Uses AI Software to Improve Its Surveillance Capabilities | Reuters,” accessed December 16, 2022, https://www.reuters.com/world/china/china-uses-ai-software-improve-its-surveillance-capabilities-2022-04-08/.

[2] Andrew Krepinevich, “Get Ready for the Democratization of Destruction,” Foreign Policy, n.d., https://foreignpolicy.com/2011/08/15/get-ready-for-the-democratization-of-destruction/.

[3] Bruce Hoffman, Inside Terrorism, Columbia Studies in Terrorism and Irregular Warfare (New York: Columbia University Press, 2017).

[4] Ariel Victoria Lieberman, “Terrorism, the Internet, and Propaganda: A Deadly Combination,” Journal of National Security Law & Policy 9, no. 95 (April 2014): 95–124.

[5] See https://chat.openai.com/

[6] “The ABCs of AI-Enabled Intelligence Analysis,” War on the Rocks, February 14, 2020, https://warontherocks.com/2020/02/the-abcs-of-ai-enabled-intelligence-analysis/.

[7] “Extracting Audio from Visual Information,” MIT News | Massachusetts Institute of Technology, accessed December 16, 2022, https://news.mit.edu/2014/algorithm-recovers-speech-from-vibrations-0804.

[8] Miranda Willson, “Attacks on Grid Infrastructure in 4 States Raise Alarm,” E&E News, December 9, 2022, https://www.eenews.net/articles/attacks-on-grid-infrastructure-in-4-states-raise-alarm/; Dietrich Dörner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations (Reading, Mass: Perseus Books, 1996).

[9] Michael Eisenstein, “Artificial Intelligence Powers Protein-Folding Predictions,” Nature 599, no. 7886 (November 23, 2021): 706–8, https://doi.org/10.1038/d41586-021-03499-y.


Assessing The Network-State in 2050

Bryce Johnston (@am_Bryce) is a U.S. Army officer currently serving in the 173rd Airborne Brigade. He is a West Point graduate and a Fulbright Scholar. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing The Network-State in 2050

Date Originally Written:  December 12, 2022.

Date Originally Published:  December 26, 2022.   

Author and / or Article Point of View:  The author is an active-duty U.S. Army officer whose studies intersect technology and politics. His assessment combines Balaji Srinivasan’s concept of the network-state with Chamath Palihapitiya’s[1] claim that the marginal cost of energy and computation will eventually reach zero. The article is written from the point of view of an advisor to nation-states.

Summary:  Online communities have become an integral part of life in 2022. As money, computing power, and energy become cheaper, citizens may find themselves identifying more with an immersive online network than their nation. If this trend continues, the world’s balance of power may soon include powerful network-states that do not respect political boundaries and control important aspects of the globe’s information domain. 

Text:  The nation-state was the primary actor in international affairs for the last two centuries; advances in digital technology may ensure the network-state dominates the next two centuries. The network-state, as conceived by Balaji Srinivasan, is a cohesive digital community that is capable of achieving political aims and is recognized as sovereign by the international community[2]. The citizens of the network-state are not tied to a physical location. Instead, they gain their political and cultural identity through their affiliation with a global network connected through digital technology. The idea of the network-state poses an immediate challenge to the nation-state whose legitimacy comes through its ability to protect its physical territory.  By 2050, nation-states like the United States of America could compete with sovereign entities that exist within their borders. 

An accepted definition of a state is an entity that has a monopoly on violence within its territory[3]. While a network-state may have a weak claim to a monopoly on physical violence, it could monopolize an alternate form of power that is just as important. Most aspects of modern life rely on the cooperation of networks. A network-state that has a monopoly over the traffic that flows through its network could very easily erode the will of a nation-state by denying its citizens the ability to move money, communicate with family, or even drive their cars. One only has to look at China today to see this sort of power in action.

Culturally, citizens in developed countries have grown used to spending most of their time online. The average American spends about eight hours online engaged with digital media[4]. Digital communities such as QAnon and WallStreetBets have been able to coordinate their members to affect the physical world. These communities were able to instill a strong sense of identity in their members even though the members only ever interacted with each other in an online forum. Advances in generative media, virtual reality hardware, and digital currencies will only make these communities more engaging in the near future.

The network-state is not inevitable. Three conditions are necessary to create the technology needed to sustain a politically viable digital community that spans the world by 2050. First, the marginal cost of capital must approach zero. The last decade saw interest rates stay near zero. Cheap money leads to the misallocation of capital towards frivolous endeavors, but it also nudges technologists to place a higher value on innovations that have a longer time horizon[5]. Artificial intelligence, crypto, and virtual reality all need significant investments to make them viable for the market. These same technologies also make up the building blocks of the network-state.

Second, the marginal cost of computing must approach zero. The technologies mentioned above require vast amounts of computational power. To persuade millions of users to make their online community the core of their identity, online communities will need to provide a persistent level of immersion that is not feasible today. This technical challenge is best understood by looking at the billions of dollars it took to allow Mark Zuckerberg’s metaverse citizens to traverse their community on legs[6]. Moore’s Law, which states that the number of transistors on microchips will double roughly every two years, has remained largely true for the last forty years[7]. While this pattern will likely come to an end, other technologies such as NVIDIA’s specialized graphics chips and quantum computing will ensure that the cost of computing power will drop over time[8].
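The compounding behind such doubling claims is easy to make concrete. The short sketch below uses illustrative figures only (the function name and the starting transistor count are assumptions for the example, not actual chip data) to project growth under a fixed doubling period:

```python
def doubling_growth(initial: float, years: float, doubling_period_years: float = 2.0) -> float:
    """Project exponential growth under a fixed doubling period (Moore's-Law-style)."""
    return initial * 2 ** (years / doubling_period_years)

# A hypothetical chip with 100,000 transistors, compounded over 40 years
# at a two-year doubling period: 2**20, roughly a million-fold increase.
print(round(doubling_growth(100_000, 40)))  # -> 104857600000
```

At a two-year doubling period, 40 years compounds to a factor of 2^20, which is why even a slowing Moore's Law still implies steep long-run declines in the cost of computation.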

Finally, the marginal cost of energy must approach zero. Improvements in computing technology will likely make systems more energy efficient, but digital communities that encompass a majority of mankind will require a large amount of energy. The ability to transfer this energy to decentralized nodes will become important as network-states span vast swaths of the earth. Solar panels and battery stations are already becoming cheap enough for individuals to buy. As these materials become cheaper and more reliable, most of the citizens in a network-state will likely provide their own power. This decoupling from national grids and fossil fuels will not only allow these citizens to run their machines uninhibited but also make them less vulnerable to coercion by nation-states that derive their power from energy production.

The likelihood of these conditions occurring by 2050 is high. Investors like billionaire Chamath Palihapitiya are already betting on a drastic reduction in the cost of energy and computing power[9].  Assuming these three trends do allow for the creation of sovereign network-states, the balance of power on the global stage will shift. A world in which there is a unipolar moment amongst nation-states does not preclude the existence of a multipolar balance amongst network-states. Nation-states and network-states will not compete for many of the same resources, but the proliferation of new sovereign entities creates more opportunities for friction and miscalculation.

If war comes, nation-states will consider how to fight against an adversary that is not bound by territorial lines. Nation-states will have an advantage in that they control the physical means of production for commodities such as food and raw materials, but as the world becomes more connected to the internet, networks will still have a reach into this domain. The rise of the network-state makes it more important than ever for nation-states to control their physical infrastructure and learn to project power in the cognitive domain. Advanced missile systems and drones will do little to threaten the power of the network-state; instead, offensive capabilities will be limited to information campaigns and sophisticated cyber-attacks that allow the nation-state to protect its interests in a world where borders become meaningless.


Endnotes:

[1] Fridman, L. (November 15, 2022). Chamath Palihapitiya: Money, Success, Startups, Energy, Poker & Happiness (No. 338). Retrieved December 1, 2022, from https://www.youtube.com/watch?v=kFQUDCgMjRc

[2] Balaji, S. (2022, July 4). The Network-state in One Sentence. The Network-state. https://thenetworkstate.com/the-network-state-in-one-sentence

[3] Waters, T., & Waters, D. (2015). Politics As Vocation. In Weber’s Rationalism and Modern Society (pp. 129-198). Palgrave MacMillan, New York.

[4] Statista Research Department. (2022, August 16). Time spent with digital media in the U.S. 2011-2024. Statista Media. https://www.statista.com/statistics/262340/daily-time-spent-with-digital-media-according-to-us-consumsers

[5] Caggese, A., & Perez-Orive, A. (2017). Capital misallocation and secular stagnation. Finance and Economics Discussion Series, 9. https://www.federalreserve.gov/econres/feds/capital-misallocation-and-secular-stagnation.html

[6] Klee, M. (2022, October 12). After Spending Billions on the Metaverse, Mark Zuckerberg Is Left Standing on Virtual Legs. Rolling Stone.

[7] Roser, M., Ritchie, H., & Mathieu, E. (2022, March). Technological Change. Our World in Data. https://ourworldindata.org/grapher/transistors-per-microprocessor

[8] Sterling, B. (2020, March 10). Preparing for the end of Moore’s Law. Wired. https://www.wired.com/beyond-the-beyond/2020/03/preparing-end-moores-law/

[9] Fridman, L. (November 15, 2022). Chamath Palihapitiya: Money, Success, Startups, Energy, Poker & Happiness (No. 338). Retrieved December 1, 2022, from https://www.youtube.com/watch?v=kFQUDCgMjRc



Assessing the Tension Between Privacy and Innovation

Channing Lee studies International Politics at the Edmund A. Walsh School of Foreign Service at Georgetown University. She can be found on Twitter @channingclee. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Tension Between Privacy and Innovation

Date Originally Written:  April 1, 2022.

Date Originally Published:  April 11, 2022.

Author and / or Article Point of View:  The author is a student of international politics. 

Summary:  Given the importance of data to emerging technologies, future innovation may be dependent upon personal data access and a new relationship with privacy. To fully unleash the potential of technological innovation, societies that traditionally prize individual privacy may need to reevaluate their attitudes toward data collection in order to remain globally competitive.

Text:  The U.S. may be positioning itself to lag behind other nations that are more willing to collect and use personal data to drive Artificial Intelligence (AI) advancement and innovation. When the COVID-19 pandemic began, the idea of conducting contact tracing to assess virus exposure through personal devices sounded alarm bells across the United States[1]. However, that was not the first time technologies were engaged in personal data collection. Beyond the pandemic, the accumulation of personal data has already unlocked enhanced experiences with technology—empowering user devices to better accommodate personal preferences. As technology continues to advance, communities around the world will need to decide which ideals of personal privacy take precedence over innovation.

Some experts like Kai-Fu Lee argue that the collection of personal data may actually be the key that unlocks the future potential of technology, especially in the context of AI[2]. AI is already being integrated into nearly all industries, from healthcare to digital payments to driverless automobiles and more. AI works by training algorithms on existing data, but it can only succeed if such data is available. In Sweden, for example, data has enabled the creation of “Smart Grid Gotland,” which tracks electricity consumption according to wind energy supply fluctuations and reduces household energy costs[3]. Such integration of technology with urban planning, otherwise known as “smart cities,” has become a popular aspiration of governments across the globe to make their cities safer and more efficient. However, these projects also require massive amounts of data.

Indeed, data is already the driving force behind many research programs and innovations, though not without concerns. For example, AI is being used to improve screening for cervical and prostate cancer, and AI might be the human invention that eventually leads scientists to discover a cancer cure[4]. Researchers like Dr. Fei Sha from the University of Southern California are working to apply big data and algorithmic models to “generate life-saving biomedical research outcomes[5].” But if patients deny access to their healthcare histories and other information, researchers will not have the adequate data to uncover more effective methods of treatment. Similarly, AI will likely be the technology that streamlines the advancement of digital payments, detecting fraudulent transactions and approving loan applications at a quicker speed. Yet, if people resist data collection, the algorithms cannot reach their full potential. As these examples demonstrate, “big data” can unlock the next chapter of human advances, but privacy concerns stand in the way.

Different societies use different approaches to deal with and respond to questions of data and privacy. In Western communities, individuals demonstrate strong opposition to the collection of their personal information by private sector actors, believing collection to be a breach of their personal privacy privileges. The European Union’s (EU) General Data Protection Regulation  and its newly introduced Digital Services Act, Canada’s Personal Information Protection and Electronic Documents Act, and California’s Consumer Privacy Act curb the non-consensual collection of personal information by businesses, thereby empowering individuals to take ownership of their data. Recently, big tech companies such as Meta and Google have come under public scrutiny for collecting personal data, and polls reveal that Americans are increasingly distrustful of popular social media apps such as Facebook and Instagram[6]. 

Still, the American public is not as guarded as it may appear. Video-focused social media app TikTok, whose parent company ByteDance is based in China, reported more than 100 million daily U.S. users in August 2020, up 800% since January 2018[7]. Despite warnings that the Beijing-based company could potentially share personal data with the Chinese government, including threats by the Trump administration to “ban TikTok” for national security reasons, nearly a third of Americans continue to use the application on a daily basis, seemingly ignoring privacy concerns. While lawmakers have attempted to regulate the collection of data by large corporations, especially foreign companies, public opinion appears mixed.

Norms in the Eastern hemisphere tell a different story. Privacy laws exist, such as China’s Personal Information Protection Law and Japan’s upcoming Amended Act on Protection of Personal Information, but the culture surrounding them is completely distinct, particularly when it comes to government collection of personal data. At the height of the pandemic, South Korea introduced a robust contact tracing campaign that relied on large databases constructed from credit card transaction data[8]. Taiwan succeeded in contact tracing efforts by launching an electronic security monitoring system that tracks isolating individuals’ locations through their cell phones[9]. In China, almost everything can be achieved through a single app, WeChat, which allows users to post pictures, order food, message friends, hire babysitters, hail a cab, pay for groceries, and more. This technological integration, which has transformed Chinese society, works because enough personal information is stored and linked together in the application.

Some may argue that not all the data being collected by governments and even corporations has been voluntary or consensual, which is why discussions of collection require legal frameworks regarding privacy. Nevertheless, governments that emphasize the collective good over personal privacy have fostered societies where people are less wary of companies using their information and enjoy more technological progress. Despite the aforementioned privacy concerns, WeChat topped one billion users by the end of 2021, including overseas users[10].

Regardless of a nation’s approach to technological innovation, one thing must be made clear: privacy concerns are real and cannot be dismissed. In fact, personal privacy as a principle forms the foundation of liberal democratic citizenship, and infringements upon privacy threaten that societal fabric. Law enforcement, for example, is increasingly using emerging technologies such as facial recognition and surveillance tools to monitor protests and collect individual location data. These trends have the potential to compromise civil liberties, in addition to the injustices that arise from data biases[11].

Yet there is also little doubt that the direction in which global privacy laws are headed may stifle innovation, especially because developing technologies such as AI requires large quantities of data.

The U.S. will soon need to reevaluate the way it conceives of privacy as it relates to innovation. If the U.S. follows in the EU’s footsteps and tightens its grip on the act of data collection, rather than on the technology behind the data collection, it might be setting itself up for failure, or at least for falling behind. If the U.S. wants to continue leading the world in technological advancement, it may pursue policies that allow technology to flourish without discounting personal protections. The U.S. can, for example, simultaneously implement strident safeguards against government or corporate misuse of personal data and invest in the next generation of technological innovation. The U.S. has options, but these options require viewing big data as a friend, not a foe.


Endnotes:

[1] Kate Blackwood, “Study: Americans skeptical of COVID-19 contact tracing apps,” Cornell Chronicle, January 21, 2021, https://news.cornell.edu/stories/2021/01/study-americans-skeptical-covid-19-contact-tracing-apps.

[2] Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order (Boston: Mariner Books, 2018).

[3] “Data driving the next wave of Swedish super cities,” KPMG, accessed March 12, 2022, https://home.kpmg/se/sv/home/nyheter-rapporter/2020/12/data-driving-the-next-wave-of-swedish-super-cities.html.

[4] “Artificial Intelligence – Opportunities in Cancer Research,” National Cancer Institute, accessed February 15, 2022, https://www.cancer.gov/research/areas/diagnosis/artificial-intelligence.

[5] Marc Ballon, “Can artificial intelligence help to detect and cure cancer?,” USC News, November 6, 2017, https://news.usc.edu/130825/can-artificial-intelligence-help-to-detect-and-cure-cancer/.

[6] Heather Kelly and Emily Guskin, “Americans widely distrust Facebook, TikTok and Instagram with their data, poll finds,” The Washington Post, December 22, 2021, https://www.washingtonpost.com/technology/2021/12/22/tech-trust-survey/.

[7] Alex Sherman, “TikTok reveals detailed user numbers for the first time,” CNBC, August 24, 2020, https://www.cnbc.com/2020/08/24/tiktok-reveals-us-global-user-growth-numbers-for-first-time.html.

[8] Young Joon Park, Young June Choe, Ok Park, et al. “Contact Tracing during Coronavirus Disease Outbreak, South Korea, 2020,” Emerging Infectious Diseases 26, no. 10 (October 2020):2465-2468. https://wwwnc.cdc.gov/eid/article/26/10/20-1315_article.

[9] Emily Weinstein, “Technology without Authoritarian Characteristics: An Assessment of the Taiwan Model of Combating COVID-19,” Taiwan Insight, December 10, 2020, https://taiwaninsight.org/2020/11/24/technology-without-authoritarian-characteristics-an-assessment-of-the-taiwan-model-of-combating-covid-19/.

[10] “WeChat users & platform insights 2022,” China Internet Watch, March 24, 2022, https://www.chinainternetwatch.com/31608/wechat-statistics/#:~:text=Over%20330%20million%20of%20WeChat’s,Account%20has%20360%20million%20users.

[11] Aaron Holmes, “How police are using technology like drones and facial recognition to monitor protests and track people across the US,” Business Insider, June 1, 2020, https://www.businessinsider.com/how-police-use-tech-facial-recognition-ai-drones-2019-10.


Assessing United States Military Modernization Priorities

Kristofer Seibt is an active-duty United States Army Officer and a graduate student at Columbia University.  Divergent Options content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing United States Military Modernization Priorities

Date Originally Written:  December 13, 2020.

Date Originally Published:  January 25, 2021.

Author and / or Article Point of View:  The author is an active-duty U.S. Army officer.  The author is critical of the tendency to equate modernization with costly technology or equipment investments, and the related tendency to conflate operational and structural readiness.

Summary:  Modernizing the military by optimizing access to, and employment of, readily available digital capabilities such as cell phones and personal computers offers a surer prospect for a ready and modern military when called upon in future years.  Persistent ambivalence towards basic digital tools and processes across the Department of Defense presents vulnerabilities and opportunity costs for both operational and structural readiness.

Text:  The U.S. Armed Forces and the wider public have long appreciated cutting edge technology and powerful equipment as the cornerstone of a modern and ready military.  As the national security strategy and subordinate defense, military, and service strategies shift to address the still-undefined Great Power Competition, and the long wars in the Middle East ostensibly wind down, modernizing the military for future conflict is a widely discussed topic[1].  Despite already unparalleled levels of military appropriation, and an inevitable reduction in military spending at some point in the near future, a strong narrative has re-emerged that portrays new or upgraded capabilities as an unquestionable pillar of operational and structural readiness[2].  

As a function of readiness, America’s obsession with military technology ignores the more pressing need to modernize basic and often neglected components of daily military operations in garrison, on mission, and at war.  Outmoded systems, tools, and processes in military organizations and on military installations are a readiness issue that could be solved today if given the level of investment and top-level coordination traditionally afforded to more costly programs.  Investing in modernizing the military by overhauling daily operations today, at wide scale, offers a surer prospect for a ready and modern military when called upon in future years, regardless of the unknowable capability requirements of future warfare and the uncertain results of technology or capability development[3].

The elephant in the room, so to speak, is the Department of Defense’s mixed feelings towards digital tools and processes[4]. Besides obvious and widely known inefficiencies encountered in all facets of daily military life, at all levels, these mixed feelings contribute to security vulnerabilities and operational constraints on a similar scale.  Consider daily communication, often via cell phone and email[5]. Today, most Military Members are asked to conduct official business on personally procured devices that are connected by personally funded data plans on domestic telecommunications networks.  

Official business, conducted at the speed daily military operations supposedly require and out of a perception of necessity and expedience, often occurs through a mixture of unsecure text messages, unsecure messaging apps, and personal teleconferencing software ungoverned by any DoD or Military Department policy or procedure.  Military workflows on digital devices rely on inefficient methods and limited collaboration through outdated tools on semi-closed government networks requiring a wired connection and a government-issued workstation.  The compounding constraints generated by limited access to networks, phones, and computers, and the attendant inefficiencies of their supported workflows, necessitate a parallel or “shadow” system of getting things done, i.e., the use of personal electronic devices.  

While the DoD certainly issues computers and phones to select Military Members in many organizations, especially executive staffs and headquarters, government-procured devices on government-funded plans and infrastructure remain the privilege of a relative few, ostensibly due to security and cost.  Company Commanders in the U.S. Army (responsible for 100-150 Military Members), for example, are no longer authorized government cell phones in most organizations.  For those lucky enough to have a government-issued computer, obtaining permission before the COVID-19 pandemic to enable the machine’s wireless capabilities or conduct official business remotely via Virtual Private Network had become increasingly difficult. 

In contrast to peacetime and garrison environments, in combat or combat-simulation training environments Military Members are asked to ignore their personally owned or even government-provided unclassified digital tools in favor of radios or classified, internally networked computers with proprietary software.  That leaders in tactical training environments with government cell phones may sneak away from the constraints of the exercise to coordinate with less friction than that offered by their assigned tactical equipment, as the author has routinely witnessed, underscores the artificiality of the mindset erected around (and the unrealized opportunity afforded by) digital technology.

Digital communication technologies such as cell phones, computers, and internet-enabled software were once at the cutting edge, just as unmanned systems are now, and artificial intelligence will be.  Much like a period of degraded operational readiness experienced when militaries field, train, and integrate new capabilities, military organizations have generally failed to adapt their own systems, processes, or cultures to optimize the capabilities offered by modern communication technologies[6].  

Talk of modernization need not entail investment in the development of groundbreaking new technologies or equipment.  An overabundance of concern for security and a disproportionate concern for cost have likely prevented, to this point, the wide-scale distribution of government-procured devices to the lowest levels of the military.  These concerns have also likely prevented the U.S. Armed Forces from enabling widespread access to official communication on personal devices.  While prioritizing military modernization is challenging, and costly systems often come out on top, there is value in investments that enable military organizations to optimize their efficiency, effectiveness, and agility through existing or easily procured digital technologies.  

Systems, processes, and culture are intangible, but modernization evokes an image of tangible or materiel outcomes.  The assessment above can link the intangible to the tangible when mapped back onto concepts of operational and structural readiness.  For example, imagine deploying a platoon on a disaster relief mission or a brigade to a Pacific island as part of a deterrence mission related to Great Power Competition.  In this scenario, the Military Members in these deployed units have everything they need to communicate, plan, and execute their mission on their personal government-issued phones which can be used securely on a host nation cell network.  Cameras, mapping software, and communications capabilities already on these government devices are widely embedded in the daily operations of each unit allowing the units to get on the first available plane and start operating.  

The tangible benefits of a digitally adept military therefore also bridge to structural readiness, whereby the force can absorb reductions in size and become systemically, procedurally, and culturally ready to employ new capabilities that demand organizations operate flexibly and at high speeds[7].  If modernization investments today imagine a future with networked artificial intelligence, ubiquitous unmanned systems, and convergent data — ostensibly secure and enmeshed deeply enough to be leveraged effectively — that same imagination can be applied to a future where this same security and optimization is applied to a suite of government-issued, personal digital hardware and internet-enabled software.


Endnotes:

[1] For one example of analysis touching on modernization within the context of the defense budget, see Blume, S., & Parrish, M. (2020, July 9). Investing in Great-Power Competition. Center for a New American Security. https://www.cnas.org/publications/reports/investing-in-great-power-competition

[2] For definitions, their relationship, and their conflation with modernization, see Betts, R. K. (1995). Military Readiness: Concepts, Choices, Consequences (pp. 40-41, 134-136). Brookings Institution Press.

[3] Barno, D., & Bensahel, N. (2020, September 29). Falling into the Adaptation Gap. War on the Rocks. https://warontherocks.com/2020/09/falling-into-the-adaptation-gap

[4] Kroger, J. (2020, August 20). Office Life at the Pentagon Is Disconcertingly Retrograde. Wired. https://www.wired.com/story/opinion-office-life-at-the-pentagon-is-disconcertingly-retrograde

[5] Ibid.; the author briefly recounts some of the cultural impediments to efficiency at the Pentagon, specifically, and their subsequent impact on leveraging technology.

[6] See Betts, Military Readiness, for an expanded discussion of the trade-off in near-term operational readiness alluded to here.

[7] For a broader advocation for bridging structural readiness, modernization imperatives, and current forces, see Brands, H., & Montgomery, E. B. (2020). One War is Not Enough: Strategy and Force Planning for Great-Power Competition. Texas National Security Review, 3(2). https://doi.org/10.26153/tsw/8865


Options to Enhance Security in U.S. Networked Combat Systems

Jason Atwell has served in the U.S. Army for over 17 years and has worked in intelligence and cyber for most of that time. He has been a Federal employee, a consultant, and a contractor at a dozen agencies and spent time overseas in several of those roles. He is currently a senior intelligence expert for FireEye, Inc. and works with government clients at all levels on cyber security strategy and planning.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  As combat systems within DoD become more connected via networks, this increases their vulnerability to adversary action.

Date Originally Written:  November 1, 2020.

Date Originally Published:  January 11, 2021.

Author and / or Article Point of View:  The author is a reservist in the U.S. Army and a cyber security and intelligence strategist for FireEye, Inc. in his day job. This article is intended to draw attention to the need for building resiliency into future combat systems by assessing vulnerabilities in networks, hardware, and software as it is better to discover a software vulnerability such as a zero day exploit in a platform like the F-35 during peacetime instead of crisis.

Background:  The United States is rushing to field a significant number of networked autonomous and semi-autonomous systems[1][2] while neglecting to secure those systems against cyber threats. This neglect is akin to the problem the developed world is having with industrial control systems and internet-of-things devices[3]. These systems are unique: they are everywhere and connected to the internet, yet they are not secured like traditional desktop computers. These systems will not provide cognitive edge or overmatch if they fail when it matters most due to poorly secured networks, compromised hardware, and untested or vulnerable software.

Significance:  Networked devices contain massive potential to increase resiliency, effectiveness, and efficiency in the application of combat power[4]. Whether kinetic weapons systems, non-lethal information operations, or well-organized logistics and command and control, the advantages gained by applying high-speed networking and related developments in artificial intelligence and process automation will almost certainly be decisive in future armed conflict. However, reliance on these technologies to gain a competitive or cognitive edge also opens the user up to being incapacitated by the loss or degradation of the very thing they rely on for that edge[5]. As future combat systems become more dependent on networked autonomous and semi-autonomous platforms, success will only be realized via accompanying cybersecurity development and implementation. This formula for success is equally true for ground, sea, air, and space platforms and must account for hardware, software, connectivity, and supply chain considerations. The effective application of cyber threat intelligence to securing and enabling networked weapons systems and other defense technology will be just as important to winning on the new multi-domain battlefield as the effective application of other forms of intelligence has been in all previous conflicts.

Option #1:  The Department of Defense (DoD) requires cybersecurity efforts as part of procurement. The DoD has been at work applying its “Cybersecurity Maturity Model Certification” to vendors up and down the supply chain[6]. A model like this can assure a basic level of protection in hardware and software development and will ensure that controls and countermeasures are at the forefront of defense industrial base thinking.

Risk:  Option #1 has the potential to breed complacency by shifting the cybersecurity aspect too far to the early stages of the procurement process, ignoring the need for continued cyber vigilance further into the development and fielding lifecycle. This option also places all the emphasis on vendor infrastructure through certification and doesn’t address operational and strategic concerns around the resiliency of systems in the field. A compliance-only approach does not adapt to changing adversary tactics, techniques, and procedures.

Gain:  Option #1 forces vendors to take the security of their products seriously lest they lose their ability to do business with the DoD. As the model grows and matures it can be used to also elevate the collective security of the defense industrial base[7].

Option #2:  DoD takes a more proactive approach to testing systems before and during fielding. Training scenarios such as those used at the U.S. Army’s National Training Center (NTC) could be modified to include significant cyber components, or a new Cyber-NTC could be created to test the ability of maneuver units to use networked systems in a hostile cyber environment. Commanders could be provided a risk profile for their unit to enable them to understand critical vulnerabilities and systems in their formations and be able to think through risk-based mitigations.

Risk:  This option could cause significant delay in operationalizing some systems if they are found to be lacking. It could also give U.S. adversaries insight into the weaknesses of some U.S. systems. Finally, if U.S. systems are not working well, especially early on in their maturity, this option could create significant trust and confidence issues in networked systems[8].

Gain:  Red teams from friendly cyber components could use this option to hone their own skills, and maneuver units will get better at dealing with adversity in their networked systems in difficult and challenging environments. This option also allows the U.S. to begin developing methods for degrading similar adversary capabilities, and on the flip side of the risk, builds confidence in systems which function well and prepares units for dealing with threat scenarios in the field[9].

Option #3:  The DoD requires the passing of a sort of “cybersecurity sea trial” where the procured system is put through a series of real-world challenges to see how well it holds up. The optimal way to do this could be having specialized red teams assigned to program management offices that test the products.

Risk:  As with Option #2, this option could create significant delays or hurt confidence in a system. There is also the need for this option to utilize a truly neutral test to avoid it becoming a check-box exercise or a mere capabilities demonstration.

Gain:  If applied properly, this option could give the best of all options, showing how well a system performs and forcing vendors to plan for this test in advance. This also helps guard against the complacency associated with Option #1. Option #3 also means systems will show up to the field already prepared to meet their operational requirements and function in the intended scenario and environment.

Other Comments:  Because of advances in technology, almost every function in the military is headed towards a mix of autonomous, semi-autonomous, and manned systems. Everything from weapons platforms to logistics supply chains will depend on robots, robotic process automation, and artificial intelligence. Without secure, resilient networks the U.S. will not achieve overmatch in speed, efficiency, and effectiveness, nor will this technology build trust with human teammates and decision makers. The degree to which reaping the benefits of this technological advancement depends upon the U.S. applying existing and new cybersecurity frameworks effectively, while developing offensive capabilities to deny those advantages to U.S. adversaries, cannot be overstated.

Recommendation:  None.


Endnotes:

[1] Judson, Jen. (2020). US Army Prioritizes Open Architecture for Future Combat Vehicle. Retrieved from https://www.defensenews.com/digital-show-dailies/ausa/2020/10/13/us-army-prioritizes-open-architecture-for-future-combat-vehicle-amid-competition-prep

[2] Larter, David B. The US Navy’s ‘Manhattan Project’ has its leader. (2020). Retrieved from https://www.c4isrnet.com/naval/2020/10/14/the-us-navys-manhattan-project-has-its-leader

[3] Palmer, Danny. IOT security is a mess. Retrieved from https://www.zdnet.com/article/iot-security-is-a-mess-these-guidelines-could-help-fix-that

[4] Shelbourne, Mallory. (2020). Navy’s ‘Project Overmatch’ Structure Aims to Accelerate Creating Naval Battle Network. Retrieved from https://news.usni.org/2020/10/29/navys-project-overmatch-structure-aims-to-accelerate-creating-naval-battle-network

[5] Gupta, Yogesh. (2020). Future war with China will be tech-intensive. Retrieved from https://www.tribuneindia.com/news/comment/future-war-with-china-will-be-tech-intensive-161196

[6] Baksh, Mariam. (2020). DOD’s First Agreement with Accreditation Body on Contractor Cybersecurity Nears End. Retrieved from https://www.nextgov.com/cybersecurity/2020/10/dods-first-agreement-accreditation-body-contractor-cybersecurity-nears-end/169602

[7] Coker, James. (2020). CREST and CMMC Center of Excellence Partner to Validate DoD Contractor Security. Retrieved from https://www.infosecurity-magazine.com/news/crest-cmmc-validate-defense

[8] Vandepeer, Charles B. & Regens, James L. & Uttley, Matthew R.H. (2020). Surprise and Shock in Warfare: An Enduring Challenge. Retrieved from https://www.realcleardefense.com/articles/2020/10/27/surprise_and_shock_in_warfare_an_enduring_challenge_582118.html

[9] Schechter, Benjamin. (2020). Wargaming Cyber Security. Retrieved from https://warontherocks.com/2020/09/wargaming-cyber-security


Options to Improve U.S. Army Ground Combat Platform Research and Development

Mel Daniels has served in the United States military for nearly twenty years. Mel is new to writing. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group or person.


National Security Situation:  The modernization of U.S. Army ground combat platforms includes risks that are not presently mitigated.

Date Originally Written:  August, 16, 2020.

Date Originally Published:  November 23, 2020.

Author and / or Article Point of View:  The author believes that the U.S. Army’s over-reliance on immature technologies is a risk to national security. Further, the author believes that this risk can be mitigated by slowing development and reducing research and development (R&D) investments while reinvesting in proven materiel solutions until new systems prove technologically reliable and fiscally feasible to implement.

Background:  The U.S. Army is investing in programs that remain unproven and are unlikely to provide the capabilities sought. Specifically, the Army is heavily investing in its Optionally Manned Fighting Vehicle (OMFV) and remote combat vehicles[1]. These programs are predicated on optimal battlefield conditions. Firstly, they assume that enemy forces will not be able to degrade or destroy the battlefield network required to operate unmanned vehicles. Secondly, the risk that the enemy will develop weapons that specifically target transmissions from control vehicles needs to be taken seriously in threat assessments and planning[2].

Significance:  If the Army’s assumptions are incorrect and these efforts fail to procure reliable and sustainable ground combat platforms for future operations, there will not be additional resources to mitigate this failure. Moreover, if the Army procures vulnerable systems that fail to deliver the effects promised, the Army risks catastrophic defeat on the battlefield.

Option #1:  The U.S. Army could reduce and spread out its R&D investments to further invest in its legacy combat forces to offset the risks associated with funding unproven and unreliable technologies.

Risk:  The significant risks associated with Option #1 are that the technological investments needed for future capabilities will be delayed. The Army would lose its fielding plan, as the Army will not fully field the OMFV until the early 2030s, assuming there are no complications to the program of record. Additionally, Robotic Combat Vehicles (RCV) would be delayed until they can be realistically evaluated. Lastly, investments in legacy systems could undermine the case for future platforms.

Gain:  If the Army elected Option #1, it would have the time to properly and realistically test RCVs, the OMFV, and Manned-Unmanned Teaming (MUM-T) concepts. This additional testing reduces the chance of investing significant resources into programs that do not deliver as promised. It also reduces disingenuous and fraudulent claims prior to further funding requests. Simply put, the chances of ineffective systems being funded would be mitigated because proper and realistic testing by an independent entity would occur first. The Army would also gain additional capabilities for its current systems that otherwise would not be upgraded yet will remain in service for decades.

Option #2:  The Army consolidates its modernization efforts and cancels select requirements. Currently the Army funds four major ground combat programs: the Mobile Protected Firepower (MPF), the RCV program (Heavy, Medium, and Light), the OMFV, and CROWS-J/30mm. The Army could cancel the MPF program because its inability to defeat the near-peer armored threats it will likely encounter makes the program questionable[3]. This cancellation would allow the Army to invest in the RCV-Light program, armed with a 50mm cannon and Anti-Tank Guided Missiles (ATGM). The Army would retain a viable anti-fortification and direct fire support vehicle while reducing needless expenses. Further, the Army could cancel or delay the unmanned requirement of the OMFV until the technology and network are fully secured and matured, with no limitations or risks.

Risk:  A significant risk associated with Option #2 is that the Army will not get a light tank or RCV Medium or Heavy platforms, and it will not receive the “optionally manned” portion of the OMFV until later. Not obtaining these desired capabilities means that the Army would have to accept an alternative materiel solution for defeating enemy fortifications and armor in place of the MPF.

Gain:  The Army retains its desired capabilities while maximizing resources. The Robotic Combat Vehicle-Light (RCV-L), armed with a 50mm cannon and ATGMs, is less expensive and lighter than the MPF and carries more ammunition. Further, the RCV-L is better armed to defeat enemy armored threats, as the MPF’s 105mm cannon is inadequate against enemy tanks[3]. Additionally, by removing the unmanned requirement from the OMFV, the Army would realize savings and reduce reliance on unproven technologies, lowering the risk of battlefield defeat[4]. This option enables the Army to retain remote-controlled concepts by shifting the focus to the RCV-L and equipping the Infantry Brigade Combat Team community with it, rather than risking its combat strength by forcing immature technologies upon the Armored Brigade Combat Team community, the Army’s main combat formation for near-peer conflict.

Other Comments:  The Army is heavily investing in vulnerable technologies without first ensuring it has an effective network able to fully support the operational concepts it desires. Without a network hardened against enemy countermeasures, these technologies will not fully support operational requirements. Further, the costs associated with these efforts already exceed 60 billion dollars and do not afford the service increased lethality or survivability in any meaningful sense. Army R&D efforts will remain at risk if the Army refuses to allow independent agencies full access and evaluation rights prior to further funding.

Recommendation:  None.


Endnotes:

[1] Freedberg Jr, Sydney. August 6th, 2020. Breaking Defense. GAO Questions Army’s 62B Cost Estimates for Combat Vehicles. Retrieved from: https://breakingdefense.com/2020/08/gao-questions-armys-62b-cost-estimates-for-combat-vehicles

[2] Trevithick, Joseph. May 11th 2020. The War Zone: This is What Ground Forces Look like to an Electronic Warfare System and Why It’s A Big Deal. Retrieved from: https://www.thedrive.com/the-war-zone/33401/this-is-what-ground-forces-look-like-to-an-electronic-warfare-system-and-why-its-a-big-deal

[3] Central Intelligence Agency. Gorman, Paul. Major General, USA. US Intelligence and Soviet Armor. 1980. Retrieved from: https://www.cia.gov/library/readingroom/docs/DOC_0000624298.pdf

[4] Collins, Liam. Colonel, USA. July 26th 2018. Association of the United States Army: Russia Gives Lessons in Electronic Warfare. Retrieved from: https://www.ausa.org/articles/russia-gives-lessons-electronic-warfare

Budgets and Resources, Emerging Technology, Mel Daniels, Option Papers, Research and Development, U.S. Army

Assessing the U.S. and China Competition for Brazilian 5G 

Editor’s Note:  This article is part of our Below Threshold Competition: China writing contest which took place from May 1, 2020 to July 31, 2020.  More information about the contest can be found by clicking here.


Martina Victoria Abalo is an Argentinian undergrad student majoring in international affairs from The University of San Andres in Buenos Aires, Argentina. She can be found on Twitter as @Martilux. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the U.S. and China Competition for Brazilian 5G

Date Originally Written:  July 24, 2020.

Date Originally Published:  October 5, 2020.

Author and / or Article Point of View:  The author is an advanced undergrad student of International Affairs from Argentina.

Summary:  Brazilian President Jair Bolsonaro and U.S. President Donald Trump see the world similarly. At the same time, China’s investment in Brazil is significantly more than the U.S. investment. With China trying to put 5G antennas into Brazil, and the U.S. trying to stop China from doing the same worldwide, President Bolsonaro finds himself in a quandary and thus far has not decided to side with the U.S. or China.

Text:  When thinking about China and the U.S., most tend to see the big picture. Often unseen, however, are the disputes over alignment taking place in the shadows. This article will assess how the U.S. tries to counterbalance China in Brazil as China pursues the South American power's alignment on 5G.

The relationship between China and Brazil must be understood in context. Before the impeachment of Brazil's President Dilma Rousseff, China was Brazil's most important economic ally and one of its closest political ones[1]. However, when Jair Bolsonaro assumed the presidency in 2019, Brazil's foreign policy towards China turned 180 degrees[2]. Although Brazil has long been known for keeping its foreign policy autonomous from its domestic politics[3], Bolsonaro changed that and forged a close linkage with U.S. President Donald Trump.

Sharing a similar political outlook, Bolsonaro and Trump have made Brazil and the U.S. closer than they have ever been. Nevertheless, this closeness comes at a price, especially for Brazil, which appears to be playing the role of the second state in a bandwagoning-for-survival relationship[4] with the United States. This role could be seen in the first months of 2019, when Bolsonaro tried to follow the U.S. lead in the international community. However this alignment is judged, it remains to be seen what consequences Brazil will face from China because of it.

Brazil's tilt towards the U.S. did not last long because, no matter how uncomfortable Bolsonaro feels with the Chinese political model, China remains Brazil's largest economic partner. In 2018, Brazilian merchandise exports amounted to 12.79% of Gross Domestic Product (GDP)[5], and during 2019 China bought Brazilian goods worth U.S. $62.871 billion, with total trade between the two countries reaching U.S. $98.142 billion[6]. In parallel, Brazil's second-largest trade partner, the U.S., was far behind China, with two-way trade of U.S. $59.646 billion[7]. The United States cannot offer Brazil the economic benefits that China does, and this is no secret in Brazil. Given this trade disparity, the question is whether the U.S. can offer something powerful enough to dissuade Brazil from signing further agreements with China, such as the installation of 5G antennas.

Even though 5G networks are faster than 4G, there are two concerns around this new technology. The first is user privacy, because 5G makes it easier to determine a user's exact location. The second is that the owner of a 5G network, or a hacker, could spy on the internet traffic passing through that network[8].

In 2020, the United States is attempting to prevent China from signing agreements to place 5G antennas in countries worldwide. While the United Kingdom and France[9] rejected any kind of deal with China, in Brazil the official decision keeps being delayed, and as of this writing nobody knows whether Bolsonaro will align with China or the U.S.

China is trying to persuade Brazil to sign an agreement with Huawei to develop 5G technology by placing 5G antennas across Brazil[10]. This quest to convince Brazil to sign with Huawei has been going on for months. Despite the lack of a signed agreement, Huawei, which has operated in Brazil for a long time, is opening a 5G technology lab in Brasília[11].

The U.S. does not have a viable counteroffer to Brazil for the placement of Huawei 5G antennas across the country[12]. The Trump administration keeps trying to persuade its political allies not to sign with Huawei, even though the United States has not developed an alternative technology of this kind. However, some other companies are interested in deploying 5G in Brazil, such as the Mexican telecommunications company Claro[13].

President Bolsonaro, in an effort to balance U.S. and Chinese interests in Brazil, will likely have to find a middle way. Given China's influence over Brazil's economy, it is naïve to think that there would be no consequence if Brazil says no to Huawei. This possible "no" does not mean that China would break every economic tie with Brazil, but China clearly would not be pleased by the decision. A "yes" from Bolsonaro, meanwhile, might be a deal-breaker for the Trump-Bolsonaro relationship. Bolsonaro started his presidency seeking to be Trump's southern ally to ensure survival, and he remains hopeful that bandwagoning as a secondary state[14] behind the U.S. will help Brazil reshape its political and economic alliances. Finally, the relationship between the U.S. and Brazil is quite good, and the U.S. can support Bolsonaro politically and diplomatically in ways that China simply cannot.

In conclusion, Brazil is caught between a rock and a hard place, and any solution will leave some participant unsatisfied. It could prove costly for Brazil's future if the U.S. withholds its backing after a yes to China, while China might turn its back on the Brazilian administration after a no to Huawei.


Endnotes:

[1] Ferreyra, J. E. (n.d.). Acciones de política exterior de Brasil hacia organismos multilaterales durante las presidencias de Lula Da Silva [Grado, Siglo 21]. https://repositorio.uesiglo21.edu.ar/bitstream/handle/ues21/12989/FERREYRA%20Jorge%20E..pdf?sequence=1&isAllowed=y

[2] Guilherme Casarões. (2019, December 20). Making Sense of Bolsonaro’s Foreign Policy at Year One. Americas Quarterly. https://www.americasquarterly.org/article/making-sense-of-bolsonaros-foreign-policy-at-year-one

[3] Jacaranda Guillén Ayala. (2019). La política exterior del gobierno de Bolsonaro. Foreign Affairs Latinoamérica. http://revistafal.com/la-politica-exterior-del-gobierno-de-bolsonaro

[4] Matias Spektor, & Guilherme Fasolin. (2018). Bandwagoning for Survival: Political Leaders and International Alignments.

[5] Brasil—Exportaciónes de Mercancías 2019. (2019). Datos Macro. https://datosmacro.expansion.com/comercio/exportaciones/brasil

[6] World Integrated Trade Solutions. (2020, July 12). Brasil | Resumen del comercio | 2018 | WITS | Texto. World Integrated Trade Solutions. https://wits.worldbank.org/CountryProfile/es/Country/BRA/Year/LTST/Summarytext

[7] World Integrated Trade Solutions. (2020, July 12). Brasil | Resumen del comercio | 2018 | WITS | Texto. World Integrated Trade Solutions. https://wits.worldbank.org/CountryProfile/es/Country/BRA/Year/LTST/Summarytext

[8] Vandita Grover. (2019, October 16). In the Age of 5G Internet Is Data Privacy Just A Myth? | MarTech Advisor. Martech Advisor. https://www.martechadvisor.com/articles/mobile-marketing/5g-internet-and-data-privacy

[9] George Calhoun. (2020, July 24). Is The UK Ban On Huawei The “Endgame” For Free Trade? Forbes. https://www.forbes.com/sites/georgecalhoun/2020/07/24/is-the-uk-ban-on-huawei-the-endgame-for-free-trade/#6735924d46db

[10] Oliver Stuenkel. (2020, June 30). Huawei or Not? Brazil Faces a Key Geopolitical Choice. Americas Quarterly. https://americasquarterly.org/article/huawei-or-not-brazil-faces-a-key-geopolitical-choice

[11] Huawei to open 5G lab in Brasília. (2020, July 23). BNamericas.Com. https://www.bnamericas.com/en/news/huawei-to-open-5g-lab-in-brasilia

[12] Gabriela Mello. (2020, July 7). Huawei says U.S. pressure on Brazil threatens long delays in 5G rollout. Reuters. https://www.reuters.com/article/us-huawei-tech-brazil-5g-idUSKBN2482WS

[13] Forbes Staff. (2020, July 8). Claro, de Carlos Slim, iniciará la carrera del 5G en Brasil. Forbes México. https://www.forbes.com.mx/tecnologia-claro-slim-5g-brasil

[14] Matias Spektor, & Guilherme Fasolin. (2018). Bandwagoning for Survival: Political Leaders and International Alignments.

Assessment Papers, Below Established Threshold Activities (BETA), Brazil, China (People's Republic of China), Competition, Emerging Technology, United States

Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Marijn Pronk is a Master Student at the University of Glasgow, focusing on identity politics, propaganda, and technology. Currently Marijn is finishing her dissertation on the use of populist propagandistic tactics by the Far-Right online. She can be found on Twitter @marijnpronk9. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Date Originally Written:  April 1, 2020.

Date Originally Published:  May 18, 2020.

Author and / or Article Point of View:  The Author is a Master Student in Security, Intelligence, and Strategic Studies at the University of Glasgow. The Author believes that a nuanced perspective towards the influence of Artificial Intelligence (AI) on technical communication services is paramount to understanding its threat.

Summary:  AI has greatly impacted communication technology worldwide. Computational propaganda is an example of unregulated AI weaponized for malign political purposes. Botnets that distort online environments could damage the health of public debate and democracies' ability to function. However, this type of AI currently remains limited to Big Tech companies and governmental powers.

Text:  A cornerstone of the democratic political structure is media; an unbiased, uncensored, and unaltered flow of information is paramount to the health of the democratic process. In a fluctuating political environment, digital spaces and technologies offer great platforms for political action and civic engagement[1]. Currently, more people use Facebook as their main source of news than any news organization[2]. Therefore, manipulating the flow of information in the digital sphere poses a great threat not only to the democratic values the internet was founded upon, but also to the health of democracies worldwide. Imagine a world where those pillars of democracy can be artificially altered, where people can manipulate the digital information sphere, from the content of information to its reach. In such a scenario, one would be unable to distinguish real from fake, making critical perspectives obsolete. One practical embodiment of this phenomenon is computational propaganda, which describes the digital misinformation and manipulation of public opinion via the internet[3]. These practices range from the fabrication of messages and the artificial amplification of certain information to the highly influential use of botnets (networks of software applications programmed to perform certain tasks). With the emergence of AI, computational propaganda could be enhanced, and its outcomes could become qualitatively better and more difficult to spot.

Computational propaganda is defined as ‘’the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion[3].‘’ AI can enhance computational propaganda in various ways, such as increasing the amplification and reach of political disinformation through bots. Qualitatively, AI can also increase the sophistication and automation of those bots. AI already plays an intrinsic role in the data-gathering process, being used to mine individuals’ online activity and to monitor and process large volumes of online data. Datamining combines tools from AI and statistics to recognize useful patterns and handle large datasets[4]. These technologies and databases are often grounded in the digital advertising industry. With the help of AI, data collection can be more targeted and thus more efficient.
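As a rough illustration of the pattern-recognition idea described above, the sketch below groups hypothetical users by the dominant topic of their online activity using simple frequency counts. All names and data are invented for illustration; real datamining pipelines are far more sophisticated than this toy example.

```python
# Toy sketch: recognize a simple "pattern" (dominant topic) in per-user
# activity logs and segment users accordingly. Hypothetical data only.
from collections import Counter

def dominant_topic(activity_log):
    """Return the most frequent topic in a user's activity log."""
    topic, _count = Counter(activity_log).most_common(1)[0]
    return topic

def segment_users(logs_by_user):
    """Group users into segments keyed by their dominant topic."""
    segments = {}
    for user, log in logs_by_user.items():
        segments.setdefault(dominant_topic(log), []).append(user)
    return segments

logs = {
    "user_a": ["economy", "economy", "sports"],
    "user_b": ["elections", "elections", "economy"],
    "user_c": ["sports", "sports", "sports"],
}
print(segment_users(logs))
# {'economy': ['user_a'], 'elections': ['user_b'], 'sports': ['user_c']}
```

Such segments are the kind of output that, in the advertising industry, feeds targeted delivery; the text's point is that the same mechanics can be repurposed for targeted disinformation.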

Concerning the malicious use of these techniques in computational propaganda, these improvements to AI can enhance ‘’[..] the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently[4].’’ Botnets are still relatively reliant on human input for political messages, but AI can also improve bots’ ability to interact with humans online, making them seem more credible. Though the self-learning capabilities of some chatbots are relatively rudimentary, improved automation through AI-aided computational propaganda tools could be a powerful means of influencing public opinion. The self-learning aspect of AI-powered bots, together with the increasing volume of data available for training, gives rise to concern. ‘’[..] advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning, and computer vision algorithms are paving the way for the rise in AI-powered bots, that are faster, getting better at understanding human interaction and can even mimic human behaviour[5].’’ With this improved automation and data-gathering power, AI-aided computational propaganda tools could act more precisely by improving data gathering both quantitatively and qualitatively. Consequently, hyper-specialized data and the growing credibility of bots online, owing to their increasing contextual understanding, can greatly enhance the capabilities and effects of computational propaganda.

However, AI's capabilities should be kept in perspective in three areas: the data, the power of the AI, and the quality of the output. Starting with data, technical knowledge is necessary to work with the massive databases used for audience targeting[6]. This level of AI is within the capabilities of a nation-state or a big corporation, but it remains out of reach for the masses[7]. Secondly, the strength of the AI will determine its final capabilities. One must distinguish between 'narrow' and 'strong' AI to assess the possible threat to society. Narrow AI is simply rule-based: data runs through multiple levels coded with algorithmic rules for the AI to reach a decision. Strong AI means the model can learn from the data and adapt its set of pre-programmed rules itself, without human interference (this is called 'Artificial General Intelligence'). Such strong AI is still a concept of the future. Human labour still creates the content the bots distribute, simply because current AI is not powerful enough to think outside its pre-programmed box of rules and therefore cannot (yet) create its own content solely from the data fed to the model[7]. Computational propaganda thus depends on narrow AI, which requires a relatively large amount of high-quality data to yield accurate results; deviating from its programmed path or task severely reduces its effectiveness[8]. Thirdly, the propaganda produced by computational tools varies greatly in quality. The real danger lies in the quantity of information that botnets can spread. As for chatbots, which are supposed to be high quality and indistinguishable from humans, these models often fail when tried outside their training-data environments.
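The distinction between rule-based 'narrow' AI and adaptive 'strong' AI can be made concrete with a deliberately minimal sketch. The rules and phrases below are hypothetical; the point is only that a purely rule-based responder performs well inside its pre-programmed paths and fails immediately on anything outside them.

```python
# Minimal "narrow AI" sketch: a rule-based chatbot. It cannot adapt
# its own rules, so any input outside the table yields no useful answer.
# All rules and responses are hypothetical examples.
RULES = {
    "hello": "Hi there!",
    "what is 5g": "5G is the fifth generation of mobile networks.",
}

def narrow_bot(message):
    # Normalize the input, then look it up; deviating from the
    # programmed path produces only a fallback response.
    return RULES.get(message.strip().lower(), "I don't understand.")

print(narrow_bot("Hello"))         # Hi there!
print(narrow_bot("Tell me more"))  # I don't understand.
```

A 'strong' AI, by contrast, would be able to revise the rule table itself from experience, which is precisely the capability the text notes does not yet exist.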

To address this emerging threat, policy changes across the media ecosystem are underway to mitigate the effects of disinformation[9]. Secondly, researchers have recently investigated the possibility of AI assisting in combating falsehoods and bots online[10]. One proposal is to build automated and semi-automated systems on the web for fact-checking and content analysis. Eventually, such bottom-up solutions could considerably help counter the effects of computational propaganda. Thirdly, the influence that Big Tech companies have on these issues cannot be negated, and their accountability for creating, and their power to mitigate, these problems must be considered. Cooperation between states and the public, from top to bottom, will be paramount. ‘’The technologies of precision propaganda do not distinguish between commerce and politics. But democracies do[11].’’


Endnotes:

[1] Vaccari, C. (2017). Online Mobilization in Comparative Perspective: Digital Appeals and Political Engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), pp. 69-88. doi:10.1080/10584609.2016.1201558

[2] Majo-Vazquez, S., & González-Bailón, S. (2018). Digital News and the Consumption of Political Information. In G. M. Forthcoming, & W. H. Dutton, Society and the Internet. How Networks of Information and Communication are Changing Our Lives (pp. 1-12). Oxford: Oxford University Press. doi:10.2139/ssrn.3351334

[3] Woolley, S. C., & Howard, P. N. (2018). Introduction: Computational Propaganda Worldwide. In S. C. Woolley, & P. N. Howard, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (pp. 1-18). Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.003.0001

[4] Wardle, C. (2018, July 6). Information Disorder: The Essential Glossary. Retrieved December 4, 2019, from First Draft News: https://firstdraftnews.org/latest/infodisorder-definitional-toolbox

[5] Dutt, D. (2018, April 2). Reducing the impact of AI-powered bot attacks. CSO. Retrieved December 5, 2019, from https://www.csoonline.com/article/3267828/reducing-the-impact-of-ai-powered-bot-attacks.html

[6] Bolsover, G., & Howard, P. (2017). Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda. Big Data, 5(4), pp. 273–276. doi:10.1089/big.2017.29024.cpr

[7] Chessen, M. (2017). The MADCOM Future: how artificial intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… and what can be done about it. Washington DC: The Atlantic Council of the United States. Retrieved December 4, 2019

[8] Davidson, L. (2019, August 12). Narrow vs. General AI: What’s Next for Artificial Intelligence? Retrieved December 11, 2019, from Springboard: https://www.springboard.com/blog/narrow-vs-general-ai

[9] Hassan, N., Li, C., Yang, J., & Yu, C. (2019, July). Introduction to the Special Issue on Combating Digital Misinformation and Disinformation. ACM Journal of Data and Information Quality, 11(3), 1-3. Retrieved December 11, 2019

[10] Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufactoring Consensus Online. Oxford, UK: Project on Computational Propaganda. Retrieved December 5, 2019

[11] Ghosh, D., & Scott, B. (2018, January). #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. Retrieved December 11, 2019, from New America: https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit

Artificial Intelligence / Machine Learning / Human-Machine Teaming, Assessment Papers, Cyberspace, Emerging Technology, Influence Operations, Marijn Pronk

U.S. Options to Combat Chinese Technological Hegemony

Ilyar Dulat, Kayla Ibrahim, Morgan Rose, Madison Sargeant, and Tyler Wilkins are Interns at the College of Information and Cyberspace at the National Defense University.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  China’s technological rise threatens U.S. interests both on and off the battlefield.

Date Originally Written:  July 22, 2019.

Date Originally Published:  February 10, 2020.

Author and / or Article Point of View:  This article is written from the point of view of the United States Government.

Background:  Xi Jinping, the Chairman of China’s Central Military Commission, affirmed in 2012 that China is acting to redefine the international world order through revisionist policies[1]. These policies foster an environment open to authoritarianism, thus undermining Western liberal values. The Chinese Communist Party (CCP) utilizes emerging technologies to restrict individual freedoms of Chinese citizens, in and out of cyberspace. Subsequently, Chinese companies have exported this freedom-restricting technology to other countries, such as Ethiopia and Iran, for little cost. These technologies, which include Artificial Intelligence-based surveillance systems and nationalized Internet services, allow authoritarian governments to effectively suppress political dissent and discourse within their states. By essentially monopolizing the tech industry through low prices, China hopes to gain the loyalty of these states and obtain the political clout necessary to overcome the United States as the global hegemon.

Significance:  Among the technologies China is pursuing, 5G is of particular interest to the U.S.  If China becomes the leader in 5G network technologies and artificial intelligence, it will gain opportunities to disrupt the confidentiality, integrity, and availability of data. China has been able to aid regimes and fragmented democracies in repressing freedom of speech and restricting human rights using “digital tools of surveillance and control[2].” Furthermore, China’s National Security Law of 2015 requires all Chinese tech companies’ compliance with the CCP. These Chinese tech companies are legally bound to share data and information housed on Chinese technology, both in-state and abroad. They are also required to remain silent about their disclosure of private data to the CCP. As such, information about private citizens and governments around the world is provided to the Chinese government without transparency. By deploying hardware and software for countries seeking to expand their networks, the CCP could use its authority over domestic tech companies to gain access to information transferred over Chinese-built networks, posing a significant threat to the national security interests of the U.S. and its Allies and Partners. With China leading 5G, the military forces of the U.S. and its Allies and Partners would be restricted in their ability to rely on indigenous telecoms abroad, which could cripple operations critical to U.S. interests[3]. This risk becomes even greater with the threat of U.S. Allies and Partners adopting Chinese 5G infrastructure, despite the harm this move would do to information sharing with the United States.

If China continues its current trajectory, the U.S. and its advocacy for personal freedoms will grow increasingly marginal in the discussion of human rights in the digital age. In light of the increasing importance of the cyber domain, the United States cannot afford to assume that its global leadership will seamlessly transfer to, and maintain itself within, cyberspace. The United States’ position as a leader in cyber technology is under threat unless it vigilantly pursues leadership in advancing and regulating the exchange of digital information.

Option #1:  Domestic Investment.

The U.S. government could facilitate a favorable environment for the development of 5G infrastructure through domestic telecom providers. Thus far, Chinese companies Huawei and ZTE have been able to outbid major European companies for 5G contracts. American companies that are developing 5G infrastructure are not large enough to compete at this time. By investing in 5G development domestically, the U.S. and its Allies and Partners would have 5G options other than Huawei and ZTE available to them. This option provides American companies with a playing field equal to their Chinese counterparts.

Risk:  Congressional approval to fund 5G infrastructure development will prove a major obstacle. Funding a development project can quickly become a partisan issue: fiscal conservatives might argue that markets should drive development, while proponents of strong government oversight might argue that the government should spearhead 5G development. Additionally, government-subsidized projects have failed before, and there is no guarantee 5G will be different.

Gain:  By investing in domestic telecommunication companies, the United States can remain independent from Chinese infrastructure by mitigating further Chinese expansion. With the U.S. investing domestically and giving subsidies to companies such as Qualcomm and Verizon, American companies can develop their technology faster in an attempt to compete with Huawei and ZTE.

Option #2:  Foreign Subsidization.

The U.S. could support the European competitors Nokia and Ericsson, through loans and subsidies, against Huawei and ZTE. By providing loans and subsidies to these European companies, the United States would give them a means to produce 5G technology at more competitive prices and possibly outbid Huawei and ZTE.

Risk:  The American people may be hostile towards a policy that provides U.S. tax dollars to foreign entities. While the U.S. can provide stipulations that come with the funding provided, the U.S. ultimately sacrifices much of the control over the development and implementation of 5G infrastructure.

Gain:  Supporting European tech companies such as Nokia and Ericsson would help deter allied nations from investing in Chinese 5G infrastructure. This option would reinforce the U.S.’s commitment to its European allies, and serve as a reminder that the United States maintains its position as the leader of the liberal international order. Most importantly, this option makes friendlier telecommunications companies more competitive in international markets.

Other Comments:  Both options above would also include the U.S. defining regulations and enforcement mechanisms to promote the fair usage of cyberspace. This fair use would be a significant deviation from a history of loosely defined principles. In pursuit of this fair use, the United States could join the Cyber Operations Resilience Alliance, and encourage legislation within the alliance that invests in democratic states’ cyber capabilities and administers clearly defined principles of digital freedom and the cyber domain.

Recommendation:  None.


Endnotes:

[1] Economy, Elizabeth C. “China’s New Revolution.” Foreign Affairs. June 10, 2019. Accessed July 31, 2019. https://www.foreignaffairs.com/articles/china/2018-04-17/chinas-new-revolution.

[2] Chhabra, Tarun. “The China Challenge, Democracy, and U.S. Grand Strategy.” Democracy & Disorder, February 2019. https://www.brookings.edu/research/the-china-challenge-democracy-and-u-s-grand-strategy/.

[3] “The Overlooked Military Implications of the 5G Debate.” Council on Foreign Relations. Accessed August 01, 2019. https://www.cfr.org/blog/overlooked-military-implications-5g-debate.

Artificial Intelligence / Machine Learning / Human-Machine Teaming, China (People's Republic of China), Cyberspace, Emerging Technology, Ilyar Dulat, Kayla Ibrahim, Madison Sargeant, Morgan Rose, Option Papers, Tyler Wilkins, United States

Options to Bridge the U.S. Department of Defense – Silicon Valley Gap with Cyber Foreign Area Officers

Kat Cassedy is a qualitative analyst with 20 years of work in hard problem solving, alternative analysis, and red teaming.  She currently works as an independent consultant/contractor, with experience in the public, private, and academic sectors.  She can be found on Twitter @Katnip95352013, tweeting on modern #politicalwarfare, #proxywarfare, #NatSec issues, #grayzoneconflict, and a smattering of random nonsense.  Divergent Options’ content does not contain information of any official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The cultural gap between the U.S. Department of Defense and Silicon Valley is significant.  Bridging this gap likely requires more than military members learning tech speak as their primary duties allow.

Date Originally Written:  April 15, 2019. 

Date Originally Published:  April 15, 2019. 

Author and / or Article Point of View:  The author’s point of view is that the cyber-sector may be more akin to a foreign culture than a business segment, and that bridging the growing gulf between the Pentagon and Silicon Valley may require sociocultural capabilities as much or more so than technical or acquisition skills. 

Background:  As the third decade of the digital revolution nears its end, and close to a year after U.S. Cyber Command was elevated to a Unified Combatant Command, the gap between the private sector’s most advanced technology talent, intellectual property (IP), services, and products and those of the DoD is strained and widening. Although the Pentagon needs and wants Silicon Valley’s IP and capabilities, the technorati are rejecting DoD’s overtures[1] in favor of enormous new markets such as China. In the Information Age, DoD assesses that it needs Silicon Valley’s technology to maintain U.S. global battlespace dominance, much as it needed the Middle East’s fossil fuels over the last half century. And Silicon Valley’s tech giants, with market caps rivaling or exceeding the Gross Domestic Product of the globe’s most thriving economies, have such global agency and autonomy that they should arguably be viewed as geopolitical power players, not simply businesses.  In that context, perhaps it is time to consider 21st-century alternatives to the DoD habit of treating Silicon Valley and its subcomponents as conventional Defense Industrial Base vendors to be managed like routine government contractors.

Significance:  Many leaders and action officers in the DoD community are concerned that Silicon Valley’s emphasis on revenue and shareholder value is leading it to prioritize relationships with America’s near-peer competitors, most particularly but not limited to China[2], over working with the U.S. DoD and national security community. “In the policy world, 30 years of experience usually makes you powerful. In the technical world, 30 years of experience usually makes you obsolete[3].” Given the DoD’s extreme reliance on and investment in highly networked and interdependent information systems to dominate the modern global operating environment, the possibility that U.S. companies are choosing foreign adversaries as clients and partners over the U.S. government is highly concerning. If this technology shift away from U.S. national security concerns continues: 1) U.S. companies may soon be providing adversaries with advanced capabilities that run counter to U.S. national interests[4]; 2) even where these companies continue to provide products and services to the U.S., there is increased concern about counter-intelligence vulnerabilities in U.S. Government (USG) systems and platforms due to technology supply chain weaknesses[5]; and 3) key U.S. tech startups and emerging technology companies are accepting venture capital, seed, and private equity investment from investors whose ultimate beneficial owners trace back to foreign sovereign and private wealth sources of concern to the national security community[6].

Option #1:  To bridge the cultural gap between Silicon Valley and the Pentagon, the U.S. Military Departments would train, certify, and deploy "Cyber Foreign Area Officers" or CFAOs. These CFAOs would align with DoD Directive 1315.17, "Military Department Foreign Area Officer (FAO) Programs[7]," and, within the cyber and Silicon Valley context, do the same as a traditional FAO: "provide expertise in planning and executing operations, to provide liaison with foreign militaries operating in coalitions with U.S. forces, to conduct political-military activities, and to execute military-diplomatic missions."

Risk:  DoD treating multinational corporations like nation states risks further eroding the recognition of nation states as bearing ultimate authority. Additionally, there is a risk that the checks and balances within the U.S. between the public and private sectors will tip irrevocably toward the tech sector, setting it up as a rival to the USG in foreign and domestic relationships. Lastly, success in this approach may lead other business sectors and industries to push to be treated on par.

Gain:  Having DoD establish a CFAO program would put DoD-centric cyber/techno skills in a socio-cultural context, aiding Silicon Valley sense-making and narrative development/dissemination, and establishing mutual trusted agency. In effect, CFAOs would act as translators and relationship builders between Silicon Valley and DoD, with the interests of all the branches of service fully represented. Given the routine real-world and fictional depictions of Silicon Valley and DoD as coming from figuratively different worlds, using a FAO construct to break through this recognized barrier may be a case of USG policy retroactively catching up with present reality. Further, considering the national security threats that loom if the DoD loses its technological superiority, the potential gains of this option may outweigh its risks.

Option #2:  Maintain the status quo, in which DoD alternates between treating Silicon Valley as a necessary but sometimes errant supplier and seeking to emulate Silicon Valley's successes and culture within existing DoD constructs.

Risk:  Possibly the greatest risk in continuing the current DoD approach to the tech world is the loss of the advantage of technical superiority through speed of innovation, due to a mutual lack of understanding of priorities, mission drivers, objectives, and organizational design. Although a number of DoD acquisition reform initiatives are gaining some traction, conventional thinking is that DoD must acquire technology and services through a lengthy competitive bid process which, once awarded, locks both the DoD and the winner into a multi-year relationship. In Silicon Valley, speed-to-market is valued, and concepts pitched one month may be expected to be deployable within a few quarters, before the technology evolves yet again. Continual experimentation, improvisation, adaptation, and innovation are at the heart of Silicon Valley. DoD wants advanced technology, but it wants it scalable, repeatable, controllable, and inexpensive. These are not compatible cultural outlooks.

Gain:  Continuing the current course of action has the advantage of familiarity, where the rules and pathways are well-understood by DoD and where risk can be managed. Although arguably slow to evolve, DoD acquisition mechanisms are on solid legal ground regarding use of taxpayer dollars, and program managers and decision makers alike are quite comfortable in navigating the use of conventional DoD acquisition tools. This approach represents good fiscal stewardship of DoD budgets.

Other Comments:  None. 

Recommendation:  None.  


Endnotes:

[1] Malcomson, S. Why Silicon Valley Shouldn’t Work With the Pentagon. New York Times. 19APR2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/04/19/opinion/silicon-valley-military-contract.html.

[2] Hsu, J. Pentagon Warns Silicon Valley About Aiding Chinese Military. IEEE Spectrum. 28MAR2019. Retrieved 15APR2019, from https://spectrum.ieee.org/tech-talk/aerospace/military/pentagon-warns-silicon-valley-about-aiding-chinese-military.

[3] Zegart, A and Childs, K. The Growing Gulf Between Silicon Valley and Washington. The Atlantic. 13DEC2018. Retrieved 15APR2019, from https://www.theatlantic.com/ideas/archive/2018/12/growing-gulf-between-silicon-valley-and-washington/577963/.

[4] Copestake, J. Google China: Has search firm put Project Dragonfly on hold? BBC News. 18DEC2018. Retrieved 15APR2019, from https://www.bbc.com/news/technology-46604085.

[5] Mozur, P. The Week in Tech: Fears of the Supply Chain in China. New York Times. 12OCT2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/10/12/technology/the-week-in-tech-fears-of-the-supply-chain-in-china.html.

[6] Northam, J. China Makes A Big Play In Silicon Valley. National Public Radio. 07OCT2018. Retrieved 15APR2019, from https://www.npr.org/2018/10/07/654339389/china-makes-a-big-play-in-silicon-valley.

[7] Department of Defense Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs,” April 28, 2005.  Retrieved 15APR2019, from https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/131517p.pdf.

 

Cyberspace Emerging Technology Information Systems Kat Cassedy Option Papers Public-Private Partnerships and Intersections United States

An Assessment of the Role of Unmanned Ground Vehicles in Future Warfare

Robert Clark is a post-graduate researcher at the Department of War Studies at King’s College London, and is a British military veteran. His specialities include UK foreign policy in Asia Pacific and UK defence relations.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of the Role of Unmanned Ground Vehicles in Future Warfare

Date Originally Written:  February 17, 2019.

Date Originally Published:  February 25, 2019.

Summary:  The British Army's recent land trials of the Tracked Hybrid Modular Infantry System of Unmanned Ground Vehicles aim to ensure that the British Army retains its lethality in upcoming short- to medium-intensity conflicts. These trials align with announcements by both the British Army's Chief of the General Staff, General Carleton-Smith, and the Defence Secretary, Gavin Williamson, regarding the evolving character of warfare.

Text:  The United Kingdom's (UK) current vision for the future role of Unmanned Ground Vehicles (UGVs) originates from the British Army's "Strike Brigade" concept, as outlined in the Strategic Defence Security Review 2015[1]. This review proposed that British ground forces should be capable of self-deployment and self-sustainment at long distances, potentially global in scope. According to this review, by 2025 the UK should be able to deploy "a war-fighting division optimised for high intensity combat operations;" indeed, "the division will draw on two armoured infantry brigades and two new Strike Brigades to deliver a deployed division of three brigades." Both Strike Brigades should be able to operate simultaneously in different parts of the world and, by incorporating the next-generation autonomous technology currently being trialled by the British Army, will remain combat effective post-Army 2020.

The ability of land forces of this size to self-sustain at long range places an increased demand on the logistics and resupply chain of the British Army, which was overburdened in recent conflicts[2]. This burden is likely to increase due to the evolving character of warfare and of the environments in which conflicts are likely to occur, specifically densely populated urban areas. These areas are likely to become more cluttered, congested and contested than ever before. Therefore, a more agile and flexible logistics and resupply system, able to conduct resupply in a more dynamic environment and over greater distances, will likely be required to meet the challenges of warfare from the mid-2020s and beyond.

Sustaining the British Armed Forces more broadly in densely populated areas may represent something of a shift in the UK's vision for UGV technology, which was previously utilised almost exclusively for Explosive Ordnance Disposal (EOD) and Countering-Improvised Explosive Devices for both the military and the police, rather than as a true force multiplier for the logistics and resupply chains.

Looking at UGVs as a force multiplier, the Ministry of Defence's Defence Science and Technology Laboratory (DSTL) is currently leading a three-year research and development programme entitled Autonomous Last Mile Resupply System (ALMRS)[3]. The ALMRS research aims to demonstrate system solutions that reduce the logistical burden on the entire Armed Forces, provide new operational capability, and reduce operational casualties. Drawing on commercial technology as well as conceptual academic ideas – ranging from online delivery systems to unmanned vehicles – more than 140 organisations, from small and medium-sized enterprises to large military-industrial corporations, submitted entries.

The first phase of the ALMRS programme challenged industry and academia to design pioneering technology to deliver vital supplies and support to soldiers on the front line, working with research teams across the UK and internationally. This research highlights the current direction of the British vision for UGVs, i.e., support-based roles. Meanwhile, the second phase of the ALMRS programme started in July 2018 and is due to last approximately twelve months. It included 'Autonomous Warrior', the Army Warfighting Experiment 18 (AWE18), a 1 Armoured Infantry Brigade battlegroup-level live-fire exercise that took place on Salisbury Plain in November 2018. This exercise saw each of the five projects remaining in the ALMRS programme demonstrate their autonomous capabilities in combined exercises with the British Armed Forces, the end user. The results of this exercise provided DSTL with user feedback crucial to subsequent development, identifying how the Army can exploit developments in robotics and autonomous systems technology through capability integration.

Among the final five projects short-listed for the second phase of ALMRS and AWE18 was a UGV multi-purpose platform called TITAN, developed by British military technology company QinetiQ in partnership with MILREM Robotics, an Estonian military technology company. Built on MILREM's Tracked Hybrid Modular Infantry System (THeMIS), the QinetiQ-led programme impressed at AWE18.

The THeMIS platform is designed to provide support for dismounted troops by serving as a transport platform, a remote weapon station, an IED detection and disposal unit, and a surveillance and target acquisition system designed to enhance a commander's situational awareness. THeMIS is an open architecture platform, with subsequent models based around a specific purpose or operational capability.

THeMIS Transport is designed to manoeuvre equipment around the battlefield to lighten the burden of soldiers, with a maximum payload weight of 750 kilograms. This 750 kilogram load would be adequate to resupply a platoon’s worth of ammunition, water, rations and medical supplies and to sustain it at 200% operating capacity – in essence, two resupplies in one. In addition, when utilised in battery mode, THeMIS Transport is near-silent and can travel for up to ninety minutes. When operating on the front-line, THeMIS Transport proves far more effective than a quad bike and trailer, which are presently in use with the British Army to achieve the same effect. Resupply is often overseen by the Platoon Sergeant, the platoon’s Senior Non-Commissioned Officer and most experienced soldier. Relieving the Platoon Sergeant of such a burden would create an additional force multiplier during land operations.
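The "two resupplies in one" claim above can be sanity-checked with some back-of-the-envelope arithmetic. The per-soldier weights and platoon strength below are illustrative assumptions for the sketch, not published British Army planning figures:

```python
# Illustrative estimate of a platoon resupply load versus the THeMIS
# Transport's stated 750 kg maximum payload. All per-soldier figures
# and the platoon strength are assumptions for this sketch.

PLATOON_STRENGTH = 30          # soldiers (assumed)
MAX_PAYLOAD_KG = 750.0         # THeMIS Transport payload, per the text

# Assumed per-soldier daily resupply weights, in kilograms
AMMUNITION_KG = 6.0
WATER_KG = 4.5
RATIONS_KG = 1.5
MEDICAL_KG = 0.5

per_soldier_kg = AMMUNITION_KG + WATER_KG + RATIONS_KG + MEDICAL_KG
platoon_resupply_kg = PLATOON_STRENGTH * per_soldier_kg  # one full resupply

print(f"One platoon resupply: {platoon_resupply_kg:.0f} kg")
print(f"Resupplies per 750 kg payload: {MAX_PAYLOAD_KG / platoon_resupply_kg:.1f}")
```

Under these assumed figures a single platoon resupply comes to roughly 375 kg, so a full 750 kg payload does indeed correspond to about two resupplies, consistent with the 200% operating capacity described above.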

In addition, THeMIS can be fitted to act as a Remote Weapons System (RWS), with the ADDER version equipped with a .50 calibre Heavy Machine Gun outfitted with both day and night optics. Additional THeMIS models include the PROTECTOR RWS, which integrates Javelin anti-tank missile capability. Meanwhile, more conventional THeMIS models include GroundEye, an EOD UGV, and the ELIX-XL and KK-4 LE, which are surveillance platforms that allow for the incorporation of remote drone technology.

By seeking to further understand the roles that artificial intelligence and robotics currently have within the British Armed Forces, in addition to what drives these roles and what challenges them, it is possible to gauge the continued evolution of remote warfare as such technologies emerge – specifically, the UGVs and RWS' trialled extensively in 2018 by the British Army. Based upon research on these recent trials, combined with current in-theatre applications of such technology, it is assessed that such equipment will expedite the rise of remote warfare as the preferred method of war for western policy makers in future low- to medium-intensity conflicts, minimising the physical risks to military personnel and making engagement in conflict more financially viable.


Endnotes:

[1] HM Government. (2015, November). National Security Strategy and Strategic Defence and Security Review 2015. Retrieved February 17, 2019, from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/478933/52309_Cm_9161_NSS_SD_Review_web_only.pdf

[2] Erbel, M., & Kinsey, C. (2015, October 4). Think again – supplying war: Reappraising military logistics and its centrality to strategy and war. Retrieved February 17, 2019, from https://www.tandfonline.com/doi/full/10.1080/01402390.2015.1104669

[3] Defence Science and Technology Laboratory. (2017). Competition document: Autonomous last mile resupply. Retrieved February 17, 2019, from https://www.gov.uk/government/publications/accelerator-competition-autonomous-last-mile-supply/accelerator-competition-autonomous-last-mile-resupply

 


Does Rising Artificial Intelligence Pose a Threat?

Scot A. Terban is a security professional with over 13 years' experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options' content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Does Rising Artificial Intelligence Pose a Threat?

Date Originally Written:  February 3, 2019.

Date Originally Published:  February 18, 2019. 

Summary:  Artificial Intelligence, or A.I., has been a long-standing subject of science fiction that usually ends badly for the human race in some way. From the 'Terminator' films to 'Wargames,' a dangerous A.I. is a common theme. In reality, A.I. could go either way depending on the circumstances. At its present state of development and use, however, A.I. is more of a danger than a boon on the battlefield, both political and military.

Text:  Artificial intelligence (A.I.) has been a staple of science fiction over the years, but recently the technology has become a more probable reality[1]. The use of semi-intelligent computer programs and systems has made our lives a bit easier with regard to certain things, like turning on the lights in a room with an Alexa, playing some music, or answering questions. However, other uses for such technologies have already been planned, and in some cases implemented, within the military and private industry for security-oriented and offensive means.

The notion of automated or A.I. systems that can find weaknesses in networks and systems, as well as automated A.I.s with fire control over certain remotely operated vehicles, is on the near horizon. Just as Google and others have made automated self-driving cars with an A.I. component that makes decisions in emergency situations, like crash scenarios with pedestrians, the same technologies are already being discussed for warfare. In the case of automated cars with rudimentary A.I., we have already seen deaths and mishaps because the technology is not truly aware and cannot handle every permutation put in front of it[2].

Conversely, if one were to hack or program these technologies to disregard safety heuristics, a very lethal outcome is possible. Because current A.I. is not fully aware and cannot determine right from wrong, these technologies are open to abuse, and such fears already surround devices like Alexa[3]. In one recent case a baby was put in danger after a Nest device was hacked through poor passwords and the temperature in the room was set above 90 degrees. In another recent instance an Internet of Things device was hacked in much the same way and used to scare the inhabitants of the home with an alert that North Korea had launched nuclear missiles at the U.S.

Both of the previous cases were low-level attacks on semi-dumb devices — now imagine one of these devices with access to networked weapons systems that has a weakness that could be subverted[4]. In another scenario, A.I. programs like those discussed for cyber warfare could be copied or subverted and unleashed not only by nation-state actors but by a smart teen or a group of criminals for their own ends. Such programs are a thing of the near future, but for an analogy you can look at open source hacking tools and platforms like Metasploit, which have automated scripts and are now used by adversaries as well as our own forces.
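The automation principle at work here can be illustrated with a harmless sketch: a few lines of code are enough to automate the reconnaissance step that frameworks like Metasploit chain together with exploitation and post-exploitation into full scripted campaigns. This toy sequential TCP port scan (to be run only against hosts the operator controls) shows the principle, not an attack:

```python
# Minimal sketch of automated reconnaissance: sequentially attempt a
# TCP connection to each port on a target host and record which accept.
# Illustrative only; real frameworks script many such steps end to end.

import socket


def scan_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on host."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 on a successful connection
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports


if __name__ == "__main__":
    # Scan loopback only; never scan systems you do not own.
    print(scan_ports("127.0.0.1", range(1, 1025)))
```

Wrapping a loop like this in scheduling, target lists, and follow-on actions is precisely what turns a tool into the kind of automated campaign the text warns about, which is why accessible automation lowers the bar for both state and non-state actors.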

Hackers and crackers today have already begun using A.I. technologies in their attacks, and as the technology becomes more stable and accessible, there will be a move toward whole campaigns being carried out by automated systems attacking targets all over the world[5]. This automation will cause collateral issues at the nation-state level when trying to attribute the actions of such systems to whoever set them upon the victim. How will attribution work when the attacking system is self-sufficient and perhaps not under anyone's control?

Finally, the trope of a true A.I. that goes rogue is not just a trope. It is entirely possible that a truly sentient program or system might consider humans an impediment to its own existence and attempt to eradicate us from its access. This is of course a long-distant possibility, but let us leave you with one thought: in the last presidential election, and in the 2020 election cycle to come, automated and A.I. systems have been and will be deployed to game social media and perhaps election systems themselves. This technology is not just a far-flung possibility; rudimentary systems are extant and being used.

The only difference between now and tomorrow is that at the moment, people are pointing these technologies at the problems they want to solve. In the future, the A.I. may be the one choosing the problem in need of solving and this choice may not be in our favor.


Endnotes:

[1] Cummings, M. (2017, January 1). Artificial Intelligence and the Future of Warfare. Retrieved February 2, 2019, from https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

[2] Levin, S., & Wong, J. C. (2018, March 19). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. Retrieved February 2, 2019, from https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe

[3] Menn, J. (2018, August 08). New genre of artificial intelligence programs take computer hacking… Retrieved February 2, 2019, from https://www.reuters.com/article/us-cyber-conference-ai/new-genre-of-artificial-intelligence-programs-take-computer-hacking-to-another-level-idUSKBN1KT120

[4] Jowitt, T. (2018, August 08). IBM DeepLocker Turns AI Into Hacking Weapon | Silicon UK Tech News. Retrieved February 1, 2019, from https://www.silicon.co.uk/e-innovation/artificial-intelligence/ibm-deeplocker-ai-hacking-weapon-235783

[5] Dvorsky, G. (2017, September 12). Hackers Have Already Started to Weaponize Artificial Intelligence. Retrieved February 1, 2019, from https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425


Assessment of the Role of Cyber Power in Interstate Conflict

Eric Altamura is a graduate student in the Security Studies Program at Georgetown University’s School of Foreign Service. He previously served for four years on active duty as an armor officer in the United States Army.  He regularly writes for Georgetown Security Studies Review and can be found on Twitter @eric_senlu.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the Role of Cyber Power in Interstate Conflict

Date Originally Written:  May 05, 2018 / Revised for Divergent Options July 14, 2018.

Date Originally Published:  September 17, 2018.

Summary:  The targeting of computer networks and digitized information during war can prevent escalation by providing an alternative means for states to create the strategic effects necessary to accomplish limited objectives, thereby bolstering the political viability of the use of force as a lever of state power.

Text:  Prussian General and military theorist Carl von Clausewitz wrote that in reality, one uses "no greater force, and setting himself no greater military aim, than would be sufficient for the achievement of his political purpose[1]." State actors, thus far, have opted to limit cyberattacks in size and scope pursuant to specific political objectives when choosing to target information for accomplishing desired outcomes. This limiting occurs because as warfare approaches its unlimited form in cyberspace, computer network attacks increasingly affect the physical domain in areas where societies have become reliant upon IT systems for everyday functions. Many government and corporate network servers host data from industrial control systems (ICS) or supervisory control and data acquisition (SCADA) systems that control power generation, utilities, and virtually all other public services. Broader attacks on an adversary's networks consequently affect the populations supported by these systems, so that the impacts of an attack go beyond simply denying an opponent the ability to communicate through digital networks.

At some point, a threshold exists where it becomes more practical for states to utilize other means to directly target the physical assets of an adversary rather than through information systems. Unlimited cyberattacks on infrastructure would come close to replicating warfare in its total form, with the goal of fully disarming an opponent of its means to generate resistance, so states become more willing to expend resources and effort towards accomplishing their objectives. In this case, cyber power decreases in utility relative to the use of physical munitions (i.e. bullets and bombs) as the scale of warfare increases, mainly due to the lower probability of producing enduring effects in cyberspace. As such, the targeting and attacking of an opponent’s digital communication networks tends to occur in a more limited fashion because alternative levers of state power provide more reliable solutions as warfare nears its absolute form. In other words, cyberspace offers much more value to states seeking to accomplish limited political objectives, rather than for waging total war against an adversary.

To understand how actors attack computer systems and networks to accomplish limited objectives during war, one must first identify what states actually seek to accomplish in cyberspace. Just as the prominent British naval historian Julian Corbett explains that command of the sea does not entail "the conquest of water territory[4]," states do not use information technology for the purpose of conquering the computer systems and supporting infrastructure that comprise an adversary's information network. Furthermore, cyberattacks do not occur in isolation from the broader context of war, nor do they need to result in the total destruction of the enemy's capabilities to successfully accomplish political objectives. Rather, the tactical objective in any environment is to exploit the activity that takes place within it – in this case, the communication of information across a series of interconnected digital networks – in a way that provides a relative advantage in war. Once the enemy's communication of information is exploited, and an advantage achieved, states can then use force to accomplish otherwise unattainable political objectives.

Achieving such an advantage requires targeting the key functions and assets in cyberspace that enable states to accomplish political objectives. Italian General Giulio Douhet, an airpower theorist, describes command of the air as "the ability to fly against an enemy so as to injure him, while he has been deprived of the power to do likewise[6]." Whereas airpower theorists propose targeting airfields alongside destroying airplanes as ways to deny an adversary access to the air, a similar concept prevails with cyber power. To deny an opponent the ability to utilize cyberspace for its own purposes, states can either attack information directly or target the means by which the enemy communicates its information. Once an actor achieves uncontested use of cyberspace, it can subsequently control or manipulate information for its own limited purposes, particularly by preventing the escalation of war toward its total form.

More specifically, the ability to communicate information while preventing an adversary from doing so has a limiting effect on warfare for three reasons. First, access to information through networked communications systems provides a decisive advantage to military forces by allowing for "analyses and synthesis across a variety of domains" that enables rapid and informed decision-making at all echelons. The greater a decision advantage one military force has over another, the less costly military action becomes. Second, the ubiquity of networked information technologies creates an alternative way for actors to affect targets that would otherwise be politically, geographically, or normatively infeasible to target with physical munitions. Finally, actors can mask their activities in cyberspace, which makes attribution difficult. This added layer of ambiguity enables face-saving measures by opponents, who can opt to not respond to attacks overtly without necessarily appearing weak.

In essence, cyber power has become particularly useful for states as a tool for preventing conflict escalation, as an opponent’s ability to respond to attacks becomes constrained when denied access to communication networks. Societies’ dependence on information technology and resulting vulnerability to computer network attacks continues to increase, indicating that interstate violence may become much more prevalent in the near term if aggressors can use cyberattacks to decrease the likelihood of escalation by an adversary.


Endnotes:

[1] von Clausewitz, C. (1976). On War. (M. Howard, & P. Paret, Trans.) Princeton: Princeton University Press.

[2] United States Computer Emergency Readiness Team. (2018, March 15). Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors. (United States Department of Homeland Security) Retrieved May 1, 2018, from https://www.us-cert.gov/ncas/alerts/TA18-074A

[3] Fischer, E. A. (2016, August 12). Cybersecurity Issues and Challenges: In Brief. Retrieved May 1, 2018, from https://fas.org/sgp/crs/misc/R43831.pdf

[4] Corbett, J. S. (2005, February 16). Some Principles of Maritime Strategy. (S. Shell, & K. Edkins, Eds.) Retrieved May 2, 2018, from The Project Gutenberg: http://www.gutenberg.org/ebooks/15076

[5] Ibid.

[6] Douhet, G. (1942). The Command of the Air. (D. Ferrari, Trans.) New York: Coward-McCann.

[7] Singer, P. W., & Friedman, A. (2014). Cybersecurity and Cyberwar: What Everyone Needs to Know. New York: Oxford University Press.

[8] Boyd, J. R. (2010, August). The Essence of Winning and Losing. (C. Richards, & C. Spinney, Eds.) Atlanta.


Options to Manage the Risks of Integrating Artificial Intelligence into National Security and Critical Industry Organizations

Lee Clark is a cyber intelligence analyst.  He holds an MA in intelligence and international security from the University of Kentucky’s Patterson School.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  What are the potential risks of integrating artificial intelligence (AI) into national security and critical infrastructure organizations and potential options for mitigating these risks?

Date Originally Written:  May 19, 2018.

Date Originally Published:  July 2, 2018.

Author and / or Article Point of View:  The author is currently an intelligence professional focused on threats to critical infrastructure and the private sector.  This article will use the U.S. Department of Homeland Security’s definition of “critical infrastructure,” referring to 16 public and private sectors that are deemed vital to the U.S. economy and national functions.  The designated sectors include financial services, emergency response, food and agriculture, energy, government facilities, defense industry, transportation, critical manufacturing, communications, commercial facilities, chemical production, civil nuclear functions, dams, healthcare, information technology, and water/wastewater management[1].  This article will examine some broad options to mitigate some of the most prevalent non-technical risks of AI integration, including legal protections and contingency planning.

Background:  The benefits of incorporating AI into the daily functions of an organization are widely championed in both the private and public sectors.  The technology has the capability to revolutionize facets of government and private sector functions like record keeping, data management, and customer service, for better or worse.  Bringing AI into the workplace has significant risks on several fronts, including privacy/security of information, record keeping/institutional memory, and decision-making.  Additionally, the technology carries a risk of backlash over job losses as automation increases in the global economy, especially for more skilled labor.  The national security and critical industry spheres are not facing an existential threat, but these are risks that cannot be dismissed.

Significance:  Real-world examples of these concerns have been reported in open source, with clear implications for major corporations and national security organizations. In terms of record keeping and surveillance-related issues, one need only look to recent court cases in which authorities subpoenaed the records of an Amazon Alexa, an appliance that acts as a digital personal assistant via a rudimentary AI system. This subpoena situation becomes especially concerning to users given recent reports of Alexas being converted into spying tools[2]. Critical infrastructure organizations, especially defense, finance, and energy companies, exist within complex legal frameworks that involve international laws and security concerns, making legal protections of AI data all the more vital.

In the case of issues involving decision-making and information security, the dangers are no less severe.  AIs are susceptible to a variety of methods that seek to manipulate decision-making, including social engineering and, more specifically, disinformation efforts.  Perhaps the most evident case of social engineering against an AI is the instance in which Microsoft’s chatbot Tay endorsed genocidal statements after brief conversations with users on Twitter[3].  If it is possible to convince an AI to support genocide, it is not difficult to imagine convincing it to divulge state secrets or turn over financial information when key information is fed to it in a meaningful sequence[4].  In another public instance, an Amazon Echo device recently recorded a private conversation in an owner’s home and sent the conversation to another user without requesting permission from the owner[5].  A similar incident is easy to foresee in a critical infrastructure organization such as a nuclear energy plant, in which an AI might send proprietary information to an uncleared user.

AI decisions also have the capacity to surprise developers and engineers tasked with maintenance, which could present problems of data recovery and control.  For instance, developers discovered that Facebook’s AI had begun writing a modified version of a coding language for efficiency, having essentially created its own code dialect, causing transparency concerns.  Losing the ability to examine and assess coding decisions presents problems for replicating processes and maintenance of a system[6].

AI integration into industry also carries a significant risk of backlash from workers.  Economists and labor scholars have long discussed the impacts of automation and AI on employment and labor in the global economy.  This discussion is not merely theoretical, as evidenced by leaders of major tech companies publicly backing basic income on the expectation that automation will replace a significant portion of the labor market in the coming decades[7].

Option #1:  Leaders in national security and critical infrastructure organizations work with internal legal teams to develop legal protections for organizations while lobbying for legislation to secure legal privileges for information stored by AI systems (perhaps resembling attorney-client privilege or spousal privileges).

Risk:  Legal teams may lack the technical knowledge to foresee some vulnerabilities related to AI.

Gain:  Option #1 proactively builds liability shields, protections, non-disclosure agreements, and other common legal tools to anticipate needs for AI-human interactions.

Option #2:  National security and critical infrastructure organizations build task forces to plan protocols and define a clear AI vision for organizations.

Risk:  In addition to common pitfalls of group work like bandwagoning and group think, this option is vulnerable to insider threats like sabotage or espionage attempts.  There is also a risk that such groups may develop plans that are too rigid or short-sighted to be adaptive in unforeseen emergencies.

Gain:  Task forces can develop strategies and contingency plans for when emergencies arise.  Such emergencies could include hacks, data breaches, sabotage by rogue insiders, technical/equipment failures, or side effects of actions taken by an AI in a system.

Option #3:  Organization leaders work with intelligence and information security professionals to try to make AI more resilient against hacker methods, including distributed denial-of-service attacks, social engineering, and crypto-mining.

Risk:  Potential to “over-secure” systems, resulting in loss of efficiency or overcomplicating maintenance processes.

Gain:  Reduced risk of hacks or other attacks from malicious actors outside of organizations.

Other Comments:  None.

Recommendation: None.


Endnotes:

[1] DHS. (2017, July 11). Critical Infrastructure Sectors. Retrieved May 28, 2018, from https://www.dhs.gov/critical-infrastructure-sectors

[2] Boughman, E. (2017, September 18). Is There an Echo in Here? What You Need to Consider About Privacy Protection. Retrieved May 19, 2018, from https://www.forbes.com/sites/forbeslegalcouncil/2017/09/18/is-there-an-echo-in-here-what-you-need-to-consider-about-privacy-protection/

[3] Price, R. (2016, March 24). Microsoft Is Deleting Its AI Chatbot’s Incredibly Racist Tweets. Retrieved May 19, 2018, from http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3

[4] Osaba, O. A., & Welser, W., IV. (2017, December 06). The Risks of AI to Security and the Future of Work. Retrieved May 19, 2018, from https://www.rand.org/pubs/perspectives/PE237.html

[5] Shaban, H. (2018, May 24). An Amazon Echo recorded a family’s conversation, then sent it to a random person in their contacts, report says. Retrieved May 28, 2018, from https://www.washingtonpost.com/news/the-switch/wp/2018/05/24/an-amazon-echo-recorded-a-familys-conversation-then-sent-it-to-a-random-person-in-their-contacts-report-says/

[6] Bradley, T. (2017, July 31). Facebook AI Creates Its Own Language in Creepy Preview Of Our Potential Future. Retrieved May 19, 2018, from https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/

[7] Kharpal, A. (2017, February 21). Tech CEOs Back Call for Basic Income as AI Job Losses Threaten Industry Backlash. Retrieved May 19, 2018, from https://www.cnbc.com/2017/02/21/technology-ceos-back-basic-income-as-ai-job-losses-threaten-industry-backlash.html


Options for Next Generation Blue Force Biometrics

Sarah Soliman is a Technical Analyst at the nonprofit, nonpartisan RAND Corporation.  Sarah’s research interests lie at the intersection of national security, emerging technology, and identity.  She can be found on Twitter @BiometricsNerd.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Next Generation Biometrics for U.S. Forces.

Date Originally Written:  March 18, 2017.

Date Originally Published:  June 26, 2017.

Author and / or Article Point of View:  Sarah Soliman is a biometrics engineer who spent two years in Iraq and Afghanistan as contracted field support to Department of Defense biometrics initiatives.

Background:  When a U.S. Army specialist challenged Secretary of Defense Donald Rumsfeld at a town hall in 2004, the exchange became tech-innovation legend within the military.  The specialist asked what the secretary was doing to up-armor military vehicles against Improvised Explosive Device (IED) attacks[1].  That question led to technical innovations that became the class of military vehicles known as Mine-Resistant Ambush Protected, the MRAP.

History repeated itself in a way last year when U.S. Marine Corps General Robert B. Neller was asked in a Marine Corps town hall what he was doing to “up-armor” military personnel—not against attacks from other forces, but against suicide within their ranks[2].  The technical innovation path to strengthening troop resiliency is less clear, but just as in need of an MRAP-like focus on solutions.  Here are three approaches to consider in applying “blue force” biometrics, the collection of physiological or behavioral data from U.S. military troops, that could help develop diagnostic applications to benefit individual servicemembers.

US Army Specialist Thomas Wilson addresses the Secretary of Defense on base in Kuwait in 2004. Credit: Gustavo Ferrari / AP, http://www.nbcnews.com/id/6679801/ns/world_news-mideast_n_africa/t/rumsfeld-inquisitor-not-one-bite-his-tongue

Significance:  The September 11th terrorists struck at a weakness—the United States’ limited ability to identify enemy combatants.  So the U.S. military took what was once blue force biometrics—measurements of human signatures like facial images, fingerprints, and deoxyribonucleic acid (DNA), all part of an enrolling military member’s record—and flipped their use to track combatants rather than its own personnel.  This shift led to record use of biometrics in Operation Iraqi Freedom and Operation Enduring Freedom to assist in green (partner), grey (unknown), and red (enemy) force identification.

After 9/11, the U.S. military rallied for advances in biometrics, developing mobile tactical handheld devices, creating databases of IED networks, and cutting the time it takes to analyze DNA from days to hours[3].  The U.S. military became highly equipped for a type of identification that validates a person is who they say they are, yet in some ways these red force biometric advances have plateaued alongside dwindling funding for overseas operations and troop presence.  As a biometric toolset is developed to up-armor military personnel for health concerns, it may be worth considering expanding the narrow definition of biometrics that the Department of Defense currently uses[4].

The options presented below represent research that is shifting from red force biometrics back to the need for more blue force diagnostics as it relates to traumatic brain injury, sleep and social media.

Option #1:  Traumatic Brain Injury (TBI).

The bumps and grooves of the brain can contain identification information much like the loops and whorls in a fingerprint.  Science is only on the cusp of understanding the benefits of brain mapping, particularly as it relates to injury for military members[5].

Gain:  Research into Wearables.

Getting military members to a field hospital equipped with a magnetic resonance imaging (MRI) scanner soon after an explosion is often unrealistic.  One trend has been to catalog the series of blast waves experienced—instead of measuring one individual biometric response—through a wearable “blast gauge” device.  The blast gauge program made news recently when the devices failed to yield reliable enough data and the program was cancelled[6].  Though not field expedient, another traumatic brain injury (TBI) sensor type to watch is the brain activity tracker, which CNN’s Jake Tapper experienced when he donned a MYnd Analytics electroencephalogram brain scanning cap, drawing attention to blue force biometrics topics alongside Veterans Day[7].


Risk:  Overpromising, Underdelivering or “Having a Theranos Moment.”

Since these wearable devices aren’t currently viable solutions, another approach being considered is uncovering biometrics in blood.  TBI may cause certain proteins to spike in the blood[8].  Instead of relying on a subjective self-assessment by a soldier, a quick pin-prick blood draw could be taken.  Military members can be hesitant to admit to injury, since receiving treatment is often equated with stigma and may require departing from a unit.  A blood test would get around that hesitancy while helping the Department of Defense (DoD) gain a stronger definition of whether treatment is required.

Option #2:  Sleep.

Thirty-one percent of members of the U.S. military get five hours or less of sleep a night, according to RAND research[9].  This level of sleep deprivation degrades cognitive, interpersonal, and motor skills, whether that means leading a convoy, leading a patrol, or leading a family back home.  This health concern bleeds across personal and professional lines.

Gain:  Follow the Pilots.

The military already requires flight crews to rest between missions, a policy that gives them the opportunity to be mission ready through sleep, and the same concept could be instituted across the military.  Keeping positive sleep biometrics—measurements of human signatures based on metrics like total sleep time, how often a person wakes during a sleep cycle, oxygen levels during sleep, and the consistency of sleep duration—can lower rates of daytime impairment.
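As a rough illustration, the sleep biometrics named above could be summarized from wearable-tracker exports. The Python sketch below assumes a hypothetical record format (`total_sleep_hours`, `awakenings`, `min_spo2`), not any real device’s API:

```python
from statistics import pstdev

def sleep_metrics(nights):
    """Summarize sleep biometrics from wearable-tracker records.

    `nights` is a list of dicts with assumed (hypothetical) fields:
    total_sleep_hours, awakenings, and min_spo2 (lowest overnight
    blood-oxygen saturation, in percent).
    """
    hours = [n["total_sleep_hours"] for n in nights]
    return {
        # Average total sleep time per night.
        "avg_sleep_hours": sum(hours) / len(hours),
        # How often the wearer woke during sleep, on average.
        "avg_awakenings": sum(n["awakenings"] for n in nights) / len(nights),
        # Lowest oxygen level recorded during sleep.
        "worst_spo2": min(n["min_spo2"] for n in nights),
        # Night-to-night spread: a low value means a consistent sleep length.
        "sleep_length_spread": pstdev(hours),
        # Nights at or below the five-hour mark cited in the RAND research.
        "short_sleep_nights": sum(1 for h in hours if h <= 5),
    }
```

A readiness dashboard could, for example, watch `short_sleep_nights` against that five-hours-or-less threshold, though any such use would raise the same data protection questions discussed below.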

The prevalence of insufficient sleep duration and poor sleep quality across the force. Credit: RAND; Clock by Dmitry Fisher/iStock; Pillow by Yobro10/iStock. http://www.rand.org/pubs/research_briefs/RB9823.html

Risk:  More memoirs by personnel bragging how little sleep they need to function[10].

What if a minimal level of rest became a requirement for the larger military community?  What sleep-tracking wearables could military members opt to wear to better grasp their own readiness?  What if sleep data were factored into a military command’s performance evaluation?

Option #3:  Social Media.

The traces of identity left behind through the language, images, and even emoji[11] used in social media have been studied, and they can provide clues to mental health.

Gain:  It’s easier to pull text than to pull blood.

Biometric markers include interactivity like engagement (how often posts are made), what time a message is sent (which can act as an “insomnia index”), and emotion detection through text analysis of the language used[12].  Social media ostracism can also be measured by “embeddedness” or how close-knit one’s online connections are[13].
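Markers like these are straightforward to compute from timestamped posts. The following is a minimal Python sketch with hypothetical inputs; the five-word negative-affect list is a toy stand-in for the validated affect lexicons the actual studies rely on:

```python
from datetime import datetime

# Toy stand-in for a validated affect lexicon (illustration only).
NEGATIVE_WORDS = {"alone", "tired", "hopeless", "worthless", "empty"}

def engagement_rate(timestamps, window_days):
    """Engagement: how often posts are made, expressed per day."""
    return len(timestamps) / window_days

def insomnia_index(timestamps, start_hour=0, end_hour=5):
    """Fraction of posts sent in the overnight hours (midnight to 5 a.m.)."""
    if not timestamps:
        return 0.0
    late = sum(1 for t in timestamps if start_hour <= t.hour < end_hour)
    return late / len(timestamps)

def negative_affect_score(posts):
    """Share of words drawn from the negative-affect list."""
    words = [w.strip(".,!?").lower() for p in posts for w in p.split()]
    if not words:
        return 0.0
    return sum(1 for w in words if w in NEGATIVE_WORDS) / len(words)
```

The embeddedness marker is omitted here because it requires the follower graph rather than post text alone.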


Risk:  Misunderstanding in social media research.

The DoD’s tweet about this research was misconstrued as a subtweet or mockery[14].  True to its text, the tweet was about research under development at the Department of Defense, in particular at the DoD Suicide Prevention Office.  Though conclusions at the scale of the DoD have yet to be reached, important research is underway in this area, including a Microsoft Research study that demonstrated 70 percent accuracy in estimating onset of a major depressive disorder[15].  Computer programs have identified Instagram photos as a predictive marker of depression[16] and Twitter data as a quantifiable signal of suicide attempts[17].

Other Comments:  Whether by mapping the brain, breaking barriers to getting good sleep, or improving linguistic understanding of social media calls for help, how will the military look to blue force biometrics to strengthen the health of its core?  What type of intervention should be aligned once data indicators are defined?  Many troves of untapped data remain in the digital world, but data protection and privacy measures must be in place before they are mined.

Recommendations:  None.


Endnotes:

[1]  Gilmore, G. J. (2004, December 08). Rumsfeld Handles Tough Questions at Town Hall Meeting. Retrieved June 03, 2017, from http://archive.defense.gov/news/newsarticle.aspx?id=24643

[2]  Schogol, J. (2016, May 29). Hidden Battle Scars: Robert Neller’s Mission to Save Marines from Suicide. Retrieved June 03, 2017, from http://www.marinecorpstimes.com/story/military/2016/05/29/hidden-battle-scars-robert-neller-mission-to-save-marines-suicide/84807982/

[3]  Tucker, P. (2015, May 20). Special Operators Are Using Rapid DNA Readers. Retrieved June 03, 2017, from http://www.defenseone.com/technology/2015/05/special-operators-are-using-rapid-dna-readers/113383/

[4]  The DoD’s Joint Publication 2-0 defines biometrics as “The process of recognizing an individual based on measurable anatomical, physiological, and behavioral characteristics.”

[5]  DoD Worldwide Numbers for TBI. (2017, May 22). Retrieved June 03, 2017, from http://dvbic.dcoe.mil/dod-worldwide-numbers-tbi

[6]  Hamilton, J. (2016, December 20). Pentagon Shelves Blast Gauges Meant To Detect Battlefield Brain Injuries. Retrieved June 03, 2017, from http://www.npr.org/sections/health-shots/2016/12/20/506146595/pentagon-shelves-blast-gauges-meant-to-detect-battlefield-brain-injuries?utm_medium=RSS&utm_campaign=storiesfromnpr

[7]  CNN – The Lead with Jake Tapper. (2016, November 11). Retrieved June 03, 2017, from https://vimeo.com/191229323

[8]  West Virginia University. (2014, May 29). WVU research team developing test strips to diagnose traumatic brain injury, heavy metals. Retrieved June 03, 2017, from http://wvutoday-archive.wvu.edu/n/2014/05/29/wvu-research-team-developing-test-strips-to-diagnose-traumatic-brain-injury-heavy-metals.html

[9]  Troxel, W. M., Shih, R. A., Pedersen, E. R., Geyer, L., Fisher, M. P., Griffin, B. A., . . . Steinberg, P. S. (2015, April 06). Sleep Problems and Their Impact on U.S. Servicemembers. Retrieved June 03, 2017, from http://www.rand.org/pubs/research_briefs/RB9823.html

[10]  Mullany, A. (2017, May 02). Here’s Arianna Huffington’s Recipe For A Great Night Of Sleep. Retrieved June 03, 2017, from https://www.fastcompany.com/3060801/heres-arianna-huffingtons-recipe-for-a-great-night-of-sleep

[11]  Ruiz, R. (2016, June 26). What you post on social media might help prevent suicide. Retrieved June 03, 2017, from http://mashable.com/2016/06/26/suicide-prevention-social-media.amp

[12]  Choudhury, M. D., Gamon, M., Counts, S., & Horvitz, E. (2013, July 01). Predicting Depression via Social Media. Retrieved June 03, 2017, from https://www.microsoft.com/en-us/research/publication/predicting-depression-via-social-media/

[13]  Ibid.

[14]  Brogan, J. (2017, January 23). Did the Department of Defense Just Subtweet Donald Trump? Retrieved June 03, 2017, from http://www.slate.com/blogs/future_tense/2017/01/23/did_the_department_of_defense_subtweet_donald_trump_about_mental_health.html

[15]  Choudhury, M. D., Gamon, M., Counts, S., & Horvitz, E. (2013, July 01). Predicting Depression via Social Media. Retrieved June 03, 2017, from https://www.microsoft.com/en-us/research/publication/predicting-depression-via-social-media/

[16]  Reece, A. G., & Danforth, C. M. (2016, August 13). Instagram photos reveal predictive markers of depression. Retrieved June 03, 2017, from https://arxiv.org/abs/1608.03282

[17]  Coppersmith, G., Ngo, K., Leary, R., & Wood, A. (2016, June 16). Exploratory Analysis of Social Media Prior to a Suicide Attempt. Retrieved June 03, 2017, from https://www.semanticscholar.org/paper/Exploratory-Analysis-of-Social-Media-Prior-to-a-Su-Coppersmith-Ngo/3bb21a197b29e2b25fe8befbe6ac5cec66d25413
