Assessing The Network-State in 2050

Bryce Johnston (@am_Bryce) is a U.S. Army officer currently serving in the 173rd Airborne Brigade. He is a West Point graduate and a Fulbright Scholar. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing The Network-State in 2050

Date Originally Written:  December 12, 2022.

Date Originally Published:  December 26, 2022.   

Author and / or Article Point of View:  The author is an active-duty U.S. Army officer whose studies intersect technology and politics. His assessment combines Balaji Srinivasan’s concept of the network-state with Chamath Palihapitiya’s[1] claim that the marginal cost of energy and computation will eventually reach zero. The article is written from the point of view of an advisor to nation-states.

Summary:  Online communities have become an integral part of life in 2022. As money, computing power, and energy become cheaper, citizens may find themselves identifying more with an immersive online network than with their nation. If this trend continues, the world’s balance of power may soon include powerful network-states that do not respect political boundaries and control important aspects of the globe’s information domain. 

Text:  The nation-state was the primary actor in international affairs for the last two centuries; advances in digital technology may ensure the network-state dominates the next two centuries. The network-state, as conceived by Balaji Srinivasan, is a cohesive digital community that is capable of achieving political aims and is recognized as sovereign by the international community[2]. The citizens of the network-state are not tied to a physical location. Instead, they gain their political and cultural identity through their affiliation with a global network connected through digital technology. The idea of the network-state poses an immediate challenge to the nation-state, whose legitimacy comes from its ability to protect its physical territory.  By 2050, nation-states like the United States of America could compete with sovereign entities that exist within their borders. 

A widely accepted definition of a state is an entity that holds a monopoly on the legitimate use of violence within its territory[3]. While a network-state may have a weak claim to a monopoly on physical violence, it could monopolize an alternate form of power that is just as important. Most aspects of modern life rely on the cooperation of networks. A network-state with a monopoly over the traffic flowing through its networks could easily erode the will of a nation-state by denying that nation-state’s citizens the ability to move money, communicate with family, or even drive their cars. One only has to look at China today to see this sort of power in action. 

Culturally, citizens in developed countries have grown used to spending much of their time online. The average American spends about eight hours a day engaged with digital media[4]. Digital communities such as QAnon and WallStreetBets have been able to coordinate their members to affect the physical world. These communities instilled a strong sense of identity in their members even though those members only ever interacted with each other in an online forum. Advances in generative media, virtual reality hardware, and digital currencies will only make these communities more engaging in the near future. 

The network-state is not inevitable. Three conditions are necessary to create the technology needed to sustain a politically viable digital community that spans the world by 2050. First, the marginal cost of capital must approach zero. The last decade saw interest rates stay near zero. Cheap money leads to the misallocation of capital towards frivolous endeavors, but it also nudges technologists to place a higher value on innovations that have a longer time horizon[5]. Artificial intelligence, crypto, and virtual reality all need significant investments to make them viable for the market. These same technologies also make up the building blocks of the network-state.

Second, the marginal cost of computing must approach zero. The technologies mentioned above require vast amounts of computational power. To persuade millions of users to make an online community the core of their identity, that community will need to provide a persistent level of immersion that is not feasible today. This technical challenge is best understood by looking at the billions of dollars it took to allow Mark Zuckerberg’s metaverse citizens to traverse their community on legs[6]. Moore’s Law, the observation that the number of transistors on a microchip doubles roughly every two years, has remained largely true for the last forty years[7]. While this pattern will likely come to an end, other technologies such as NVIDIA’s specialized graphics chips and quantum computing will ensure that the cost of computing power continues to drop over time[8].
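The scale of the cost decline this trend implies can be sketched with back-of-envelope arithmetic. The snippet below is purely illustrative: the two-year halving cadence and the 2022 baseline price are assumptions for the sake of the projection, not data from this article.

```python
# Illustrative projection (not a forecast): if the price of a fixed unit of
# computation halves on a Moore's-Law-like cadence, how cheap is that unit
# by a target year? The halving period and baseline cost are assumptions.

def projected_cost(base_cost: float, base_year: int, target_year: int,
                   halving_period_years: float = 2.0) -> float:
    """Cost of the same computation in target_year, assuming the cost
    halves every halving_period_years."""
    halvings = (target_year - base_year) / halving_period_years
    return base_cost / (2 ** halvings)

if __name__ == "__main__":
    # Assume $1.00 buys some fixed unit of compute in 2022.
    # 2050 - 2022 = 28 years -> 14 halvings -> cost falls by a factor of 16,384.
    print(f"Relative cost in 2050: {projected_cost(1.00, 2022, 2050):.8f}")
```

Even if the halving period stretched to four years, the same arithmetic yields a roughly 128-fold decline by 2050, which is the kind of trajectory the "marginal cost approaching zero" argument rests on.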

Finally, the marginal cost of energy must approach zero. Improvements in computing technology will likely make systems more energy efficient, but digital communities that encompass a majority of mankind will still require a large amount of energy. The ability to deliver this energy to decentralized nodes will become important as network-states span vast swaths of the earth. Solar panels and battery stations are already becoming cheap enough for individuals to buy. As these materials become cheaper and more reliable, most citizens of a network-state will likely provide their own power. This decoupling from national grids and fossil fuels will not only allow these citizens to run their machines uninhibited but also make them less vulnerable to coercion by nation-states that derive their power from energy production. 

The likelihood of these conditions occurring by 2050 is high. Investors like billionaire Chamath Palihapitiya are already betting on a drastic reduction in the cost of energy and computing power[9].  Assuming these three trends do allow for the creation of sovereign network-states, the balance of power on the global stage will shift. A world in which there is a unipolar moment amongst nation-states does not preclude the existence of a multipolar balance amongst network-states. Nation-states and network-states will not compete for many of the same resources, but the proliferation of new sovereign entities creates more opportunities for friction and miscalculation.

If war comes, nation-states will have to consider how to fight an adversary that is not bound by territorial lines. Nation-states will have an advantage in that they control the physical means of production for commodities such as food and raw materials, but as the world becomes more connected to the internet, networks will still have reach into this domain. The rise of the network-state makes it more important than ever for nation-states to control their physical infrastructure and learn to project power in the cognitive domain. Advanced missile systems and drones will do little to threaten the power of the network-state; instead, information campaigns and sophisticated cyber-attacks will allow the nation-state to protect its interests in a world where borders become meaningless.


Endnotes:

[1] Fridman, L. (November 15, 2022). Chamath Palihapitiya: Money, Success, Startups, Energy, Poker & Happiness (No. 338). Retrieved December 1, 2022, from https://www.youtube.com/watch?v=kFQUDCgMjRc

[2] Srinivasan, B. (2022, July 4). The Network-State in One Sentence. The Network State. https://thenetworkstate.com/the-network-state-in-one-sentence

[3] Waters, T., & Waters, D. (2015). Politics As Vocation. In Weber’s Rationalism and Modern Society (pp. 129-198). Palgrave MacMillan, New York.

[4] Statista Research Department. (2022, August 16). Time spent with digital media in the U.S. 2011-2024. Statista Media. https://www.statista.com/statistics/262340/daily-time-spent-with-digital-media-according-to-us-consumsers

[5] Caggese, A., & Perez-Orive, A. (2017). Capital misallocation and secular stagnation. Finance and Economics Discussion Series, 9.

[6] Klee, M. (2022, October 12). After Spending Billions on the Metaverse, Mark Zuckerberg Is Left Standing on Virtual Legs. Rolling Stone.

[7] Roser, M., Ritchie, H., & Mathieu, E. (2022, March). Technological Change. Our World in Data. https://ourworldindata.org/grapher/transistors-per-microprocessor

[8] Sterling, B. (2020, March 10). Preparing for the end of Moore’s Law. Wired. https://www.wired.com/beyond-the-beyond/2020/03/preparing-end-moores-law/

[9] Fridman, L. (November 15, 2022). Chamath Palihapitiya: Money, Success, Startups, Energy, Poker & Happiness (No. 338). Retrieved December 1, 2022, from https://www.youtube.com/watch?v=kFQUDCgMjRc

 


Assessing the Tension Between Privacy and Innovation

Channing Lee studies International Politics at the Edmund A. Walsh School of Foreign Service at Georgetown University. She can be found on Twitter @channingclee. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Tension Between Privacy and Innovation

Date Originally Written:  April 1, 2022.

Date Originally Published:  April 11, 2022.

Author and / or Article Point of View:  The author is a student of international politics. 

Summary:  Given the importance of data to emerging technologies, future innovation may be dependent upon personal data access and a new relationship with privacy. To fully unleash the potential of technological innovation, societies that traditionally prize individual privacy may need to reevaluate their attitudes toward data collection in order to remain globally competitive.

Text:  The U.S. may be positioning itself to lag behind other nations that are more willing to collect and use personal data to drive Artificial Intelligence (AI) advancement and innovation. When the COVID-19 pandemic began, the idea of conducting contact tracing to assess virus exposure through personal devices sounded alarm bells across the United States[1]. However, that was not the first time technologies were engaged in personal data collection. Beyond the pandemic, the accumulation of personal data has already unlocked enhanced experiences with technology—empowering user devices to better accommodate personal preferences. As technology continues to advance, communities around the world will need to decide which ideals of personal privacy take precedence over innovation.

Some experts like Kai-Fu Lee argue that the collection of personal data may actually be the key that unlocks the future potential of technology, especially in the context of AI[2]. AI is already being integrated into nearly all industries, from healthcare to digital payments to driverless automobiles and more. AI works by training algorithms on existing data, but it can only succeed if such data is available. In Sweden, for example, data has enabled the creation of “Smart Grid Gotland,” which tracks electricity consumption according to wind energy supply fluctuations and reduces household energy costs[3]. Such integration of technology with urban planning, otherwise known as “smart cities,” has become a popular aspiration of governments across the globe to make their cities safer and more efficient. However, these projects also require massive amounts of data.

Indeed, data is already the driving force behind many research programs and innovations, though not without concerns. For example, AI is being used to improve screening for cervical and prostate cancer, and AI might be the human invention that eventually leads scientists to a cancer cure[4]. Researchers like Dr. Fei Sha of the University of Southern California are working to apply big data and algorithmic models to “generate life-saving biomedical research outcomes”[5]. But if patients deny access to their healthcare histories and other information, researchers will not have adequate data to uncover more effective methods of treatment. Similarly, AI will likely be the technology that streamlines the advancement of digital payments, detecting fraudulent transactions and approving loan applications at greater speed. Yet if people resist data collection, the algorithms cannot reach their full potential. As these examples demonstrate, “big data” can unlock the next chapter of human advances, but privacy concerns stand in the way.

Different societies use different approaches to deal with and respond to questions of data and privacy. In Western communities, individuals demonstrate strong opposition to the collection of their personal information by private sector actors, believing collection to be a breach of their personal privacy privileges. The European Union’s (EU) General Data Protection Regulation and its newly introduced Digital Services Act, Canada’s Personal Information Protection and Electronic Documents Act, and California’s Consumer Privacy Act curb the non-consensual collection of personal information by businesses, thereby empowering individuals to take ownership of their data. Recently, big tech companies such as Meta and Google have come under public scrutiny for collecting personal data, and polls reveal that Americans are increasingly distrustful of popular social media apps such as Facebook and Instagram[6]. 

Still, the American public is not as guarded as it may appear. Video-focused social media app TikTok, whose parent company ByteDance is based in China, reported more than 100 million daily U.S. users in August 2020, up 800% since January 2018[7]. Despite warnings that the Beijing-based company could potentially share personal data with the Chinese government, including threats by the Trump administration to “ban TikTok” on national security grounds, nearly a third of Americans continue to use the application daily, seemingly ignoring privacy concerns. While lawmakers have attempted to regulate the collection of data by large corporations, especially foreign companies, public opinion appears mixed.

Norms in the Eastern hemisphere tell a different story. Privacy laws exist, such as China’s Personal Information Protection Law and Japan’s upcoming Amended Act on the Protection of Personal Information, but the culture surrounding them is completely distinct, particularly when it comes to government collection of personal data. At the height of the pandemic, South Korea introduced a robust contact tracing campaign that relied on large databases constructed from credit card transaction data[8]. Taiwan succeeded in its contact tracing efforts by launching an electronic security monitoring system that tracks isolating individuals’ locations through their cell phones[9]. In China, almost everything can be done through a single app, WeChat, which allows users to post pictures, order food, message friends, hire babysitters, hail a cab, pay for groceries, and more. This technological integration, which has transformed Chinese society, works because enough personal information is stored and linked together in the application. 

Some may argue that the data being collected by governments and even corporations has not always been given voluntarily or consensually, which is why discussions of data collection require legal frameworks around privacy. Nevertheless, governments that emphasize the collective good over personal privacy have fostered societies where people possess less paranoia about companies utilizing their information and enjoy more technological progress. Despite the aforementioned privacy concerns, WeChat topped more than one billion users by the end of 2021, including overseas users[10].

Regardless of a nation’s approach to technological innovation, one thing is clear: privacy concerns are real and cannot be dismissed. In fact, personal privacy as a principle forms the foundation of liberal democratic citizenship, and infringements upon privacy threaten that societal fabric. Law enforcement, for example, is increasingly using emerging technologies such as facial recognition and other surveillance methods to monitor protests and collect individuals’ location data. These trends have the potential to compromise civil liberties, in addition to the injustices that arise from data biases[11].

Yet there is also no doubt that the direction in which global privacy laws are headed may stifle innovation, especially because developing technologies such as AI require large quantities of data. 

The U.S. will soon need to reevaluate the way it conceives of privacy as it relates to innovation. If the U.S. follows in the EU’s footsteps and tightens its grip on the act of data collection, rather than on the technology behind the data collection, it might be setting itself up for failure, or at least for falling behind. If the U.S. wants to continue leading the world in technological advancement, it may pursue policies that allow technology to flourish without discounting personal protections. The U.S. can, for example, simultaneously implement stringent safeguards against government or corporate misuse of personal data and invest in the next generation of technological innovation. The U.S. has options, but these options require viewing big data as a friend, not a foe.


Endnotes:

[1] Kate Blackwood, “Study: Americans skeptical of COVID-19 contact tracing apps,” Cornell Chronicle, January 21, 2021, https://news.cornell.edu/stories/2021/01/study-americans-skeptical-covid-19-contact-tracing-apps.

[2] Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order (Boston: Mariner Books, 2018).

[3] “Data driving the next wave of Swedish super cities,” KPMG, accessed March 12, 2022, https://home.kpmg/se/sv/home/nyheter-rapporter/2020/12/data-driving-the-next-wave-of-swedish-super-cities.html.

[4] “Artificial Intelligence – Opportunities in Cancer Research,” National Cancer Institute, accessed February 15, 2022, https://www.cancer.gov/research/areas/diagnosis/artificial-intelligence.

[5] Marc Ballon, “Can artificial intelligence help to detect and cure cancer?,” USC News, November 6, 2017, https://news.usc.edu/130825/can-artificial-intelligence-help-to-detect-and-cure-cancer/.

[6] Heather Kelly and Emily Guskin, “Americans widely distrust Facebook, TikTok and Instagram with their data, poll finds,” The Washington Post, December 22, 2021, https://www.washingtonpost.com/technology/2021/12/22/tech-trust-survey/.

[7] Alex Sherman, “TikTok reveals detailed user numbers for the first time,” CNBC, August 24, 2020, https://www.cnbc.com/2020/08/24/tiktok-reveals-us-global-user-growth-numbers-for-first-time.html.

[8] Young Joon Park, Young June Choe, Ok Park, et al. “Contact Tracing during Coronavirus Disease Outbreak, South Korea, 2020,” Emerging Infectious Diseases 26, no. 10 (October 2020):2465-2468. https://wwwnc.cdc.gov/eid/article/26/10/20-1315_article.

[9] Emily Weinstein, “Technology without Authoritarian Characteristics: An Assessment of the Taiwan Model of Combating COVID-19,” Taiwan Insight, December 10, 2020, https://taiwaninsight.org/2020/11/24/technology-without-authoritarian-characteristics-an-assessment-of-the-taiwan-model-of-combating-covid-19/.

[10] “WeChat users & platform insights 2022,” China Internet Watch, March 24, 2022, https://www.chinainternetwatch.com/31608/wechat-statistics/#:~:text=Over%20330%20million%20of%20WeChat’s,Account%20has%20360%20million%20users.

[11] Aaron Holmes, “How police are using technology like drones and facial recognition to monitor protests and track people across the US,” Business Insider, June 1, 2020, https://www.businessinsider.com/how-police-use-tech-facial-recognition-ai-drones-2019-10.


Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Zac Rogers is an academic from Adelaide, South Australia. Zac has published in journals including International Affairs, The Cyber Defense Review, Joint Force Quarterly, and Australian Quarterly, and communicates with a wider audience across various multimedia platforms regularly. Parasitoid is his first book.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Date Originally Written:  January 3, 2022.

Date Originally Published:  January 17, 2022.

Author and / or Article Point of View:  The author is an Australia-based academic whose research combines a traditional grounding in national security, intelligence, and defence with emerging fields of social cybersecurity, digital anthropology, and democratic resilience.  The author works closely with industry and government partners across multiple projects. 

Summary:  Military investment in war-gaming, table-top exercises, scenario planning, and future force design is increasing.  Some of this investment focuses on adversary activities in the “cognitive domain.” While this investment is necessary, it may fail because it anchors on data-driven machine learning and automation for both offensive and defensive purposes without a clear understanding of their appropriateness. 

Text:  In 2019 the author wrote a short piece for the U.S. Army’s MadSci website titled “In the Cognitive War, the Weapon is You![1]” The article attempted to spur self-reflection by the national security, intelligence, and defence communities in Australia, the United States, Canada, Europe, and the United Kingdom.  At the time these communities were beginning to incorporate discussion of “cognitive” security and insecurity into their near-future threat assessments and future force design discourses. The article is cited in the North Atlantic Treaty Organization (NATO) Cognitive Warfare document of 2020[2]. Yet whether in ways that demonstrate the misunderstanding directly, or as part of the wider context in which the point of that particular title is thoroughly misinterpreted, the author’s desired self-reflection has not been forthcoming. Instead, and not unexpectedly, the discourse on the cognitive aspects of contemporary conflict has consumed and regurgitated a familiar sequence of errors which, if not addressed head-on, will continue to perpetuate rather than mitigate the problem.  

What the cognitive threat is

The primary cognitive threat is us[3]. The threat is driven by a combination of factors. First is the techno-futurist hubris which exists as a permanently recycling feature of late-modern military thought.  Second is a precipitous slide into scientism which military thinkers and the organisations they populate have not avoided[4].  Third is the commercial and financial rent-seeking which overhangs military affairs as a by-product of private-sector-led R&D activities and of government dependence on and cultivation of those activities, increasingly so since the 1990s[5].  Last is adversary awareness of these dynamics and an increasing willingness and capacity to manipulate and exacerbate them via the multitude of vulnerabilities ushered in by digital hyper-connectivity[6]. In other words, before the cognitive threat is an operational and tactical menace to be addressed and countered by the joint force, it is a central feature of the deteriorating epistemic condition of the late-modern societies in which those forces operate and from which their personnel, funding, R&D pathways, doctrine and operating concepts, epistemic communities, and strategic leadership emerge. 

What the cognitive threat is not   

The cognitive threat is not what adversary military organisations and their patrons are doing in and to the information environment with regard to activities other than kinetic military operations. Terms for adversarial activities occurring outside conventional lethal/kinetic combat operations – such as the “grey-zone” and “below-the-threshold” – describe time-honoured tactics by which interlocutors seek to weaken and sow dysfunction in the social and political fabric of competitor or enemy societies.  These tactics are used to gain advantage in areas not directly involving military conflict, or in areas likely to be critical to military preparedness and mobilization in times of war[7]. A key stumbling block here is obvious: it is often difficult to know which intentions such tactics express. This is not cognitive warfare. It is merely typical of contention across and between cross-cultural communities, and of the permanent unwillingness of contending societies to accord with each other’s rules. Information warfare – particularly influence operations traversing the Internet and exploiting the dominant commercial operations found there – is part of this mix of activities, all of which belong under the normal paradigm of competition between states for strategic advantage. Active measures – influence operations designed to self-perpetuate – have found fertile new ground on the Internet but are not new to the arsenals of intelligence services and, as Thomas Rid has warned, while they proliferate, they are more unpredictable and difficult to control than they were in the pre-Internet era[8]. None of this is cognitive warfare either. Unfortunately, current and recent discourse has lapsed into the error of treating it as such[9], leading to all manner of self-inflicted confusion[10]. 

Why the distinction matters

Two trends emerge from the abovementioned confusion which represent the most immediate threat to the military enterprise[11]. Firstly, private-sector vendors and the consulting and lobbying industry they employ are busily pitching technological solutions based on machine learning and automation which were developed in commercial business settings where sensitivity to error is not high[12]. While militaries experiment with this raft of technologies – eager to be seen at the vanguard of emerging tech, to justify R&D budgets and stave off defunding, or simply out of habit – they incur an opportunity cost.  This cost takes two forms: stultified investment in the human potential which strategic thinkers have long identified as the real key to actualizing new technologies[13], and entry into path dependencies with behemoth corporate actors whose strategic goal is the cultivation of rentier relations, not excluding the ever-lucrative military sector[14]. 

Secondly, to the extent that automation and machine learning technologies enter the operational picture, cognitive debt is accrued as the military enterprise becomes increasingly dependent on fallible tech solutions[15]. Under battle conditions, the first assumption is contestation of the electromagnetic spectrum on which all digital information technologies depend for basic functionality. Automated data gathering and analysis tools suffer from heavy reliance on data availability and integrity.  When these tools are unavailable, any joint multinational force will require multiple redundancies, not only in terms of technology but, more importantly, in terms of leadership and personnel competencies. It remains unclear where the military enterprise draws the line on the likely cost-benefit ratio of experimenting with automated machine learning tools and on the contexts in which they ought to be applied[16]. Unfortunately, experimentation is never cost-free. When civilian / military boundaries are blurred to the extent they are now as a result of the digital transformation of society, such experimentation requires consideration in light of all of its implications, including for the integrity and functionality of the open democracy being defended[17]. 

The first error of misinterpreting the meaning and bounds of cognitive insecurity is compounded by a second mistake: what the military enterprise chooses to invest time, attention, and resources in tomorrow[18]. Path dependency, technological lock-in, and opportunity cost all loom large if digital information age threats are misinterpreted. This is the solipsistic nature of the cognitive threat at work – the weapon really is you! Putting one’s feet in the shoes of the adversary, nothing could be more pleasing than watching that threat self-perpetuate. As a first step, militaries could organise and invest immediately in a strategic technology assessment capacity[19] free from the biases of rent-seeking vendors and lobbyists who, by definition, will not pay the costs of mission failure and who stand to benefit from the rentier-like dependencies that emerge as the military enterprise pays the corporate sector to play in the digital age. 


Endnotes:

[1] Zac Rogers, “158. In the Cognitive War – The Weapon Is You!,” Mad Scientist Laboratory (blog), July 1, 2019, https://madsciblog.tradoc.army.mil/158-in-the-cognitive-war-the-weapon-is-you/.

[2] Francois du Cluzel, “Cognitive Warfare” (Innovation Hub, 2020), https://www.innovationhub-act.org/sites/default/files/2021-01/20210122_CW%20Final.pdf.

[3] “us” refers primarily but not exclusively to the national security, intelligence, and defence communities taking up discourse on cognitive security and its threats including Australia, the U.S., U.K., Europe, and other liberal democratic nations. 

[4] Henry Bauer, “Science in the 21st Century: Knowledge Monopolies and Research Cartels,” Journal of Scientific Exploration 18 (December 1, 2004); Matthew B. Crawford, “How Science Has Been Corrupted,” UnHerd, December 21, 2021, https://unherd.com/2021/12/how-science-has-been-corrupted-2/; William A. Wilson, “Scientific Regress,” First Things, May 2016, https://www.firstthings.com/article/2016/05/scientific-regress; Philip Mirowski, Science-Mart (Harvard University Press, 2011).


Options to Enhance Security in U.S. Networked Combat Systems

Jason Atwell has served in the U.S. Army for over 17 years and has worked in intelligence and cyber for most of that time. He has been a Federal employee, a consultant, and a contractor at a dozen agencies and spent time overseas in several of those roles. He is currently a senior intelligence expert for FireEye, Inc. and works with government clients at all levels on cyber security strategy and planning.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  As combat systems within the Department of Defense become more connected via networks, their vulnerability to adversary action increases.

Date Originally Written:  November 1, 2020.

Date Originally Published:  January 11, 2021.

Author and / or Article Point of View:  The author is a reservist in the U.S. Army and a cyber security and intelligence strategist for FireEye, Inc. in his day job. This article is intended to draw attention to the need to build resiliency into future combat systems by assessing vulnerabilities in networks, hardware, and software, since it is better to discover a software vulnerability, such as a zero-day exploit in a platform like the F-35, during peacetime than during a crisis.

Background:  The United States is rushing to field a significant number of networked autonomous and semi-autonomous systems[1][2] while neglecting to secure those systems against cyber threats. This neglect is akin to the problem the developed world is having with industrial control systems and internet-of-things devices[3]. These systems are unique: they are everywhere and connected to the internet, yet they are not secured like traditional desktop computers. These systems will not provide cognitive edge or overmatch if they fail when it matters most due to poorly secured networks, compromised hardware, or untested and vulnerable software.

Significance:  Networked devices hold massive potential to increase the resiliency, effectiveness, and efficiency of combat power[4]. Whether applied to kinetic weapons systems, non-lethal information operations, or well-organized logistics and command and control, the advantages gained from high-speed networking and related developments in artificial intelligence and process automation will almost certainly be decisive in future armed conflict. However, reliance on these technologies for a competitive or cognitive edge also exposes the user to incapacitation by the loss or degradation of the very thing they rely on for that edge[5]. As future combat systems become more dependent on networked autonomous and semi-autonomous platforms, success will only be realized through accompanying cybersecurity development and implementation. This is equally true for ground, sea, air, and space platforms and must account for hardware, software, connectivity, and the supply chain. The effective application of cyber threat intelligence to securing and enabling networked weapons systems and other defense technology will be just as important to winning on the multi-domain battlefield as the effective application of other forms of intelligence has been in all previous conflicts.

Option #1:  The Department of Defense (DoD) requires cybersecurity efforts as part of procurement. The DoD has been at work applying its “Cybersecurity Maturity Model Certification” to vendors up and down the supply chain[6]. A model like this can assure a basic level of protection in hardware and software development and will keep controls and countermeasures at the forefront of defense industrial base thinking.

Risk:  Option #1 has the potential to breed complacency by shifting the cybersecurity effort too far toward the early stages of the procurement process, ignoring the need for continued vigilance later in the development and fielding lifecycle. This option also places all the emphasis on vendor infrastructure through certification and does not address operational and strategic concerns about the resiliency of systems in the field. A compliance-only approach does not adapt to changing adversary tactics, techniques, and procedures.

Gain:  Option #1 forces vendors to take the security of their products seriously lest they lose their ability to do business with the DoD. As the model grows and matures it can also be used to elevate the collective security of the defense industrial base[7].

Option #2:  DoD takes a more proactive approach to testing systems before and during fielding. Training scenarios such as those used at the U.S. Army’s National Training Center (NTC) could be modified to include significant cyber components, or a new Cyber-NTC could be created to test the ability of maneuver units to use networked systems in a hostile cyber environment. Commanders could also be given a risk profile for their unit so they understand the critical vulnerabilities and systems in their formations and can think through risk-based mitigations.
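As a toy illustration of what such a unit risk profile might look like, the sketch below scores each networked system from two hypothetical ratings and flags the riskiest ones. All system names, ratings, weights, and the threshold are invented for illustration, not drawn from any DoD methodology.

```python
# Toy sketch of a unit "cyber risk profile": each networked system gets a
# score from hypothetical exposure and criticality ratings, and systems
# at or above a threshold are flagged for the commander's attention.

def risk_score(exposure, criticality):
    """Combine two 1-5 ratings into a 1-25 risk score (higher = riskier)."""
    return exposure * criticality

def unit_risk_profile(systems, threshold=15):
    """Return (average score across the unit, systems needing mitigation)."""
    scores = {name: risk_score(e, c) for name, (e, c) in systems.items()}
    flagged = sorted(n for n, s in scores.items() if s >= threshold)
    average = sum(scores.values()) / len(scores)
    return average, flagged

# Hypothetical inventory: (network exposure 1-5, mission criticality 1-5).
inventory = {
    "targeting_network": (5, 5),
    "logistics_tracker": (4, 3),
    "radio_gateway": (3, 5),
    "maintenance_db": (2, 2),
}

avg, critical = unit_risk_profile(inventory)
print(avg)       # mean risk across the unit's systems
print(critical)  # systems to prioritize for risk-based mitigation
```

A real profile would of course draw on vulnerability scans and threat intelligence rather than static ratings; the point is only that a simple, explainable score lets a commander reason about which systems to mitigate first.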

Risk:  This option could cause significant delay in operationalizing some systems if they are found to be lacking. It could also give U.S. adversaries insight into the weaknesses of some U.S. systems. Finally, if U.S. systems are not working well, especially early on in their maturity, this option could create significant trust and confidence issues in networked systems[8].

Gain:  Red teams from friendly cyber components could use this option to hone their own skills, and maneuver units will get better at dealing with adversity in their networked systems in difficult and challenging environments. This option also allows the U.S. to begin developing methods for degrading similar adversary capabilities, and on the flip side of the risk, builds confidence in systems which function well and prepares units for dealing with threat scenarios in the field[9].

Option #3:  The DoD requires the passing of a sort of “cybersecurity sea trial” in which the procured system is put through a series of real-world challenges to see how well it holds up. One way to do this would be to assign specialized red teams to program management offices to test the products.

Risk:  As with Option #2, this option could create significant delays or hurt confidence in a system. This option also requires a truly neutral test; otherwise it becomes a check-box exercise or a mere capabilities demonstration.

Gain:  If applied properly, this option could deliver the best of all three, showing how well a system performs and forcing vendors to plan for the test in advance. It also guards against the complacency associated with Option #1. Under Option #3, systems arrive in the field already prepared to meet their operational requirements and function in the intended scenario and environment.

Other Comments:  Because of advances in technology, almost every military function is headed toward a mix of autonomous, semi-autonomous, and manned systems. Everything from weapons platforms to logistics supply chains is going to depend on robots, robotic process automation, and artificial intelligence. Without secure, resilient networks the U.S. will not achieve overmatch in speed, efficiency, and effectiveness, nor will this technology build trust with human teammates and decision makers. The degree to which reaping the benefits of this technological advancement depends on effectively applying existing and new cybersecurity frameworks, while developing offensive capabilities to deny those advantages to U.S. adversaries, cannot be overstated.

Recommendation:  None.


Endnotes:

[1] Judson, Jen. (2020). US Army Prioritizes Open Architecture for Future Combat Vehicle. Retrieved from https://www.defensenews.com/digital-show-dailies/ausa/2020/10/13/us-army-prioritizes-open-architecture-for-future-combat-vehicle-amid-competition-prep

[2] Larter, David B. (2020). The US Navy’s ‘Manhattan Project’ has its leader. Retrieved from https://www.c4isrnet.com/naval/2020/10/14/the-us-navys-manhattan-project-has-its-leader

[3] Palmer, Danny. IoT security is a mess. Retrieved from https://www.zdnet.com/article/iot-security-is-a-mess-these-guidelines-could-help-fix-that

[4] Shelbourne, Mallory. (2020). Navy’s ‘Project Overmatch’ Structure Aims to Accelerate Creating Naval Battle Network. Retrieved from https://news.usni.org/2020/10/29/navys-project-overmatch-structure-aims-to-accelerate-creating-naval-battle-network

[5] Gupta, Yogesh. (2020). Future war with China will be tech-intensive. Retrieved from https://www.tribuneindia.com/news/comment/future-war-with-china-will-be-tech-intensive-161196

[6] Baksh, Mariam. (2020). DOD’s First Agreement with Accreditation Body on Contractor Cybersecurity Nears End. Retrieved from https://www.nextgov.com/cybersecurity/2020/10/dods-first-agreement-accreditation-body-contractor-cybersecurity-nears-end/169602

[7] Coker, James. (2020). CREST and CMMC Center of Excellence Partner to Validate DoD Contractor Security. Retrieved from https://www.infosecurity-magazine.com/news/crest-cmmc-validate-defense

[8] Vandepeer, Charles B. & Regens, James L. & Uttley, Matthew R.H. (2020). Surprise and Shock in Warfare: An Enduring Challenge. Retrieved from https://www.realcleardefense.com/articles/2020/10/27/surprise_and_shock_in_warfare_an_enduring_challenge_582118.html

[9] Schechter, Benjamin. (2020). Wargaming Cyber Security. Retrieved from https://warontherocks.com/2020/09/wargaming-cyber-security


Assessment of Civilian Next-Generation Knowledge Management Systems for Managing Civil Information

This article is published as part of the Small Wars Journal and Divergent Options Writing Contest which ran from March 1, 2019 to May 31, 2019.


Ray K. Ragan, MAd (PM), PMP is a Civil Affairs Officer in the U.S. Army Reserve and an Assistant Vice President of Project Management for a large Credit Union.  As a civilian, Ray worked in the defense and financial technology industries, bringing machine learning, intelligence systems, and speech and predictive analytics to enterprise scale.  Ray holds a Master’s degree in Administration from Northern Arizona University and a Certificate in Strategic Decision and Risk from Stanford University. He is a credentialed Project Management Professional (PMP) and holds several Agile Project Management certifications.  Ray has served tours in both small and big wars in Iraq and the Philippines, with multiple mobilizations around the world in support of U.S. national interests.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of Civilian Next-Generation Knowledge Management Systems for Managing Civil Information 

Date Originally Written:  May 25, 2019.

Date Originally Published:  August 19, 2019.

Summary:  Current Civil Information Management systems are not taking advantage of the leaps in knowledge management technology, specifically in the realm of predictive analytics, Natural Language Processing, and Machine Learning. This creates a time cost that commanders must pay in real time in their operating environment, felt particularly in small wars. This cost also diverts resources away from direct mission-enabling operations.

Text:  Currently, Civil Information Management (CIM) systems employed by the U.S. Military are not keeping pace with the revolution underway in civilian next-generation knowledge management systems (KMS)[1][2]. These KMS are possible through the convergence of modern computing, predictive analytics, Natural Language Processing (NLP), and Machine Learning (ML)[3]. This CIM limitation is unnecessary and self-imposed, as a KMS offers persistent and progressing inputs to the common operating picture. This assessment explores how civilian business harnessed this revolution and how to apply it to CIM.

Generally, CIM represents the operational variables (OV) of an operational environment (OE) and, as practiced today, resides in the domain of information rather than knowledge[4]. The DIKW pyramid framework, named for its Data, Information, Knowledge, Wisdom structure, informs the structure of learning[5]. Further, one can infer that traversing each step costs time and effort, a price commanders pay in real time during operations. Small wars demand speed and agility. Current CIM takes time to gather data, input it into a database, run queries, overlay the results on maps, and eventually infer some knowledge to inform the commander’s decision making[6].
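The DIKW traversal described above can be sketched as a small pipeline: raw field reports (data) are parsed into structured records (information), then aggregated into a finding a commander can act on (knowledge). The report strings, parsing rules, and the well/unrest example below are invented purely for illustration.

```python
# Illustrative sketch of climbing the DIKW pyramid with field reports.

raw_reports = [  # data: unstructured operator notes (hypothetical)
    "village A: well broken, 3 protests this week",
    "village B: well working, 0 protests this week",
    "village C: well broken, 2 protests this week",
]

def parse(report):
    """Information: turn one free-text report into a structured record."""
    place, rest = report.split(":")
    return {
        "village": place.strip(),
        "well_broken": "well broken" in rest,
        "protests": int(rest.split("protests")[0].split(",")[-1].strip()),
    }

records = [parse(r) for r in raw_reports]

# Knowledge: aggregate across records to test a hypothesis such as
# "broken wells correlate with unrest."
broken = [r["protests"] for r in records if r["well_broken"]]
working = [r["protests"] for r in records if not r["well_broken"]]
finding = sum(broken) / len(broken) > sum(working) / len(working)
print(finding)  # True for this toy data: unrest is higher where wells are broken
```

Each stage here is trivial, which is the point: the time cost the article describes comes from performing these steps manually in the OE, and a persistent KMS automates them.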

Using the Cynefin Framework, a decision-making aid created in 1999, commanders needlessly leave many of the OVs in the chaotic domain[7]. To move from the chaotic to the complex domain, the OVs must come from a KMS that is persistent and automatically progressing. Current CIMs do not automatically update by gathering information from public sources such as broadcast, print, and digital media digitized with NLP and speech/text analytics[8].   Instead, human operators, typically located in the OE, update these sources manually. Because of this, today’s CIMs go stale after the operators complete their mission or shift priorities: the information gathered devolves into historic data, and the OE’s fog of war reverts to chaos[9].

The single biggest advantage a quality KMS provides a commander is time for decision-making in the OE[10]. A KMS implemented even as a simple search engine, persistent and progressing across all OEs, would mean a commander need not spend operational time and effort on basic data-gathering missions. Rather, a commander can focus operational resources on direct mission-enabling operations. Enticingly, this simple search-engine KMS enables the next advancement, one that businesses around the world are busily employing: operationalizing big data.

Business systems such as search engines and other applications scour open sources like court records and organize them through a myriad of solutions. Data organized through taxonomy and algorithms allows businesses to offer their customers usable information[11]. The advent of ML permits the conversion of information into knowledge[12]. Civilian businesses use all these tools in their call centers to capture not only what customers are saying, but also the broader meta conversation: what most customers are not saying, but revealing through their behavior[13]. 

This leap in applied informatics, which civilian businesses use today, is absent from today’s CIM systems. The current model of CIM is not well adapted to tomorrow’s battlefield, which will almost certainly be a data-rich environment fed by robotics, signals, and public information[14]. Even the largest team of humans cannot keep up with the overwhelming deluge of data, let alone conduct analysis and make recommendations to the commander on how the civilian terrain will affect the OE[15].

In civilian business, empiricism is replacing the older model of eminence-based decision-making. No longer is it acceptable to take the word of the highest-paid person in the room; business decisions need evidence, especially at the multi-billion-dollar company level[16]. A KMS enables hypothesis, experimentation, and evidence. Applied to the civilian terrain, if the hypothesis is that drilling a well reduces insurgency, a global KMS will reveal the truth through metrics that cannot be influenced, addressing the kind of criticism former U.S. Secretary of State Condoleezza Rice raised[17]. 

Using text preprocessing with speech analytics and NLP, the KMS would solve the OE problem of data quality, as operators, when supplementing the KMS with OE reports, would use speech whenever possible. This overcomes the persistent garbage-in, garbage-out problem that plagues military and business systems alike. Rather than re-typing field notes into a form, the human operator would simply use an interactive spoken dialog for input where feasible[18].
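A minimal sketch of the kind of text preprocessing the paragraph describes, using only the Python standard library: normalize case, strip punctuation, tokenize, and drop stopwords so only content-bearing terms are indexed. The stopword list and sample report are illustrative, not taken from any fielded system.

```python
import re

# Toy stopword list; production NLP pipelines use much larger lists.
STOPWORDS = {"the", "a", "an", "is", "are", "was", "in", "at", "of", "and"}

def preprocess(text):
    """Return content-bearing tokens from a raw (spoken or typed) report."""
    text = text.lower()                       # normalize case
    text = re.sub(r"[^a-z0-9\s]", " ", text)  # strip punctuation
    tokens = text.split()                     # whitespace tokenization
    return [t for t in tokens if t not in STOPWORDS]

report = "The well in Village A is broken; repairs are pending."
print(preprocess(report))
# ['well', 'village', 'broken', 'repairs', 'pending']
```

A speech-to-text front end would feed its transcript into the same function, which is why spoken input need not degrade data quality relative to typed forms.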

A persistent and progressive KMS also addresses a problem with expertise. During Operation Iraqi Freedom, the U.S. State Department could not find enough experts and professionals to fill the voids in transitional governance. The problem was such that then-Secretary of Defense Robert Gates volunteered to send Department of Defense civilians in their place[19]. With a KMS, commanders and policymakers can draw on a home-based cadre of experts to assess the KMS’s data models and offer contextualized insights to commanders in the field.

As the breadth and quality of the data grow, system administrators can experiment with new algorithms and models in a relentless drive to shorten the path from OV-derived insights to operations planning. Within two years, this KMS data would be among the richest political science datasets ever compiled, inviting academia to propose and test new hypothetical models. In turn, this will help policymakers sense where new sources of instability are emerging before they reveal themselves in actions[20].

“How do you put the genie of knowledge back in the bottle?” P. W. Singer asked rhetorically in his book Wired for War about the prospect of a robotic, data-enabled OE[21]. This genie will not conveniently return to its bottle for robotics or data; instead, commanders and policymakers will have to manage the data-enabled battlefield. While it may seem a herculean task to virtually recreate OEs in a future KMS, it is a necessary one. Working through the fog of war with a candle and ceding OVs to chaos is no longer acceptable. Civilian business has already addressed this problem with next-generation knowledge management systems, which are ready for today’s OE.


Endnotes:

[1] APAN Staff (n.d.) Tools. Retrieved May 9, 2019, from https://www.apan.org/(S(12adofim0n1ranvobqiyfizu))/pages/tools-communities

[2] Williams, Gregory (2016, December 2). WFX 16 tests Civil Affairs Soldiers. Retrieved May 12, 2019, from https://www.dvidshub.net/news/189856/wfx-16-tests-civil-affairs-soldiers

[3] Szilagyi and P. Wira (2018) An intelligent system for smart buildings using machine learning and semantic technologies: A hybrid data-knowledge approach, 2018 IEEE Industrial Cyber-Physical Systems (ICPS), St. Petersburg, pp. 22-24.

[4] Chief, Civil Affairs Branch et al. (2011). Joint Civil Information Management Tactical Handbook, Tampa, FL, pp. 1-3 – 2-11.

[5] Fricke, Martin (2018, June 7). Encyclopedia of Knowledge Organization: Knowledge pyramid The DIKW hierarchy. Retrieved May 19, 2019, from http://www.isko.org/cyclo/dikw

[6] Chief, Civil Affairs Branch et al. (2011). Joint Civil Information Management Tactical Handbook, Tampa, FL, pp. 5-5, 5-11.

[7] Kopsch, Thomas and Fox, Amos (2016, August 22). Embracing Complexity: Adjusting Processes to Meet the Challenges of the Contemporary Operating Environment. Retrieved May 19, 2019, from https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2016-Online-Exclusive-Articles/Embracing-Complexity-Adjusting-Processes/

[8] APAN Staff (n.d.) Tools. Retrieved May 9, 2019, from https://www.apan.org/(S(12adofim0n1ranvobqiyfizu))/pages/tools-communities

[9] Neubarth, Michael (2013, June 28). Dirty Email Data Takes Its Toll. Retrieved May 20, 2019, from https://www.towerdata.com/blog/bid/116629/Dirty-Email-Data-Takes-Its-Toll

[10] Marczewski, Andrzey (2013, August 5). The Effect of Time on Decision Making. Retrieved May 20, 2019, from https://www.gamified.uk/2013/08/05/the-effect-of-time-on-decision-making/

[11] Murthy, Praveen et al. (2014, September). Big Data Taxonomy, Big Data Working Group, Cloud Security Alliance, pp. 9-29.

[12] Edwards, Gavin (2018, November 18). Machine Learning | An Introduction. Retrieved May 25, 2019, from https://towardsdatascience.com/machine-learning-an-introduction-23b84d51e6d0

[13] Gallino, Jeff (2019, May 14). Transforming the Call Center into a Competitive Advantage. Retrieved May 25, 2019, from https://www.martechadvisor.com/articles/customer-experience-2/transforming-the-call-center-into-a-competitive-advantage/

[14] Vergun, David (2018, August 21). Artificial intelligence likely to help shape future battlefield, says Army vice chief.  Retrieved May 25, 2019, from https://www.army.mil/article/210134/artificial_intelligence_likely_to_help_shape_future_battlefield_says_army_vice_chief

[15] Snibbe, Alana Conner (2006, Fall). Drowning in Data. Retrieved May 25, 2019, from https://ssir.org/articles/entry/drowning_in_data

[16] Frizzo-Barker, Julie et al. An empirical study of the rise of big data in business scholarship, International Journal of Information Management, Burnaby, Canada, pp. 403-413.

[17] Rice, Condoleezza (2011) No Higher Honor. New York, NY, Random House Publishing, pp. 506-515.

[18] Ganesan, Kavita (n.d.) All you need to know about text preprocessing for NLP and Machine Learning. Retrieved May 25, 2019, from https://www.kdnuggets.com/2019/04/text-preprocessing-nlp-machine-learning.html

[19] Gates, Robert (2014). Duty. New York, NY, Penguin Random House Publishing, pp. 347-348.

[20] Lasseter, Tom (2019, April 26). ‘Black sheep’: The mastermind of Sri Lanka’s Easter Sunday bombs. Retrieved May 25, 2019, from https://www.reuters.com/article/us-sri-lanka-blasts-mastermind-insight/black-sheep-the-mastermind-of-sri-lankas-easter-sunday-bombs-idUSKCN1S21S8

[21] Singer, Peter Warren (2009). Wired for War. The Penguin Press, New York, NY, pp. 11.


Assessment of the Threat of Nationalism to the State Power of Democracies in the Information Age

James P. Micciche is an Active Component U.S. Army Civil Affairs Officer with deployment and service experience in the Middle East, Africa, Afghanistan, Europe, and Indo-Pacific.  He is currently a Master’s candidate at the Fletcher School of Law and Diplomacy at Tufts University.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the Threat of Nationalism to the State Power of Democracies in the Information Age

Date Originally Written:  April 9, 2019.

Date Originally Published:  May 20, 2019.

Summary:  Historically, both scholars and political leaders have viewed nationalism as an advantageous construct that enhanced a state’s ability to act and exert power within the international system.  Contrary to historical precedent, nationalism now represents a potential threat to the ability of modern democracies to project and exercise power, due to demographic trends, globalized economies, and the information age. 

Text:  At its very core a nation is a collection of individuals who have come together through common interest, culture, or history, ceding part of their rights and power to representatives through a social compact, to pursue safety and survival against the unknown of anarchy.  A nation, therefore, is its population, and its population is one measure of its overall power relative to other nations.  Academics and policy makers alike have long viewed nationalism as a mechanism that gives states an advantage in galvanizing domestic support to achieve international objectives, increasing comparative power, and reducing the potential impediment of domestic factors within a two-level game.  Globalization, the dawn of the information age, and transitioning demographics have fundamentally reversed the effects of nationalism on state power; the concept now represents a potential threat to both the domestic stability and the relative power of modern democracies. 

Looking beyond the material facets of population such as size, demographic trends, and geographic distribution, Hans Morgenthau identifies both “National Character” and “National Morale” as key elements of a nation’s ability to exert power. Morgenthau explains the relationship between population and power thus: “Whenever deep dissensions tear a people apart, the popular support that can be mustered for a foreign policy will always be precarious and will be actually small if the success or failure of the foreign policy has a direct bearing upon the issue of domestic struggle”[1].  Historically, scholars have highlighted the role nationalism played in the creation and early expansion of the modern state system: as European peoples began uniting under common identities and cultures, states utilized nationalism to solidify domestic support, endowing them with greater autonomy and power to act within the international system[2].  As globalization and the international movement of labor have made western nations more ethnically diverse, nationalism no longer functions as the traditional instrument of state power it was in periods when relatively homogenous states were the international norm.  

Key to understanding nationalism and the reversal of its role in state power is how it not only differs from the concept of patriotism but is incompatible with it within the modern paradigm of many industrialized nations that contain ever-growing heterogeneous populations.  Walker Connor described patriotism as “an emotional attachment to one’s state or country and its political institutions” and nationalism as “an attachment to one’s people[3].”  A contemporary manifestation of this concept was the rise of Scottish nationalism during the 2014 Scottish Independence Referendum, which stood in direct opposition to the greater state of the United Kingdom (patriotism) and threatened a prospective loss of British power.  Furthermore, the term nationalism, both conceptually and operationally, requires a preceding adjective that describes a specific subset of individuals within a given population who have common cause, history, or heritage, descriptors often not restricted to nationality. Historically these commonalities have occurred along ethnic or religious lines such as white, Hindu, Arab, Jewish, or black, in which individuals within an in-group have assembled to pursue specific interests and agendas regardless of the state(s) in which they reside, with many such groups, such as the Québécois or the Scots, wishing to carve nations from existing powers.  

The idea that industrialized nations in 2019 remain relatively homogenous constructs is a long-outdated model that perpetuates the fallacy that nationalism is a productive tool for democratic states.  The average proportion of foreign-born individuals living in a given European country is 11.3% of the total population; in Germany, a major economic power and key NATO ally, it exceeds 15%[4].   Similar trends hold in the U.S., Canada, and Australia, which have long histories of immigrant populations; as of 2015, 14% of the U.S. population was foreign-born[5].    Furthermore, projections forecast that by 2045 white Americans will comprise less than 50% of the total population due to a combination of immigration, interracial marriages, and higher minority birth rates[6].  These transitions are byproducts of a modern globalized economy: while fertility rates within Organisation for Economic Co-operation and Development nations have dropped below the replacement threshold of 2.1[7], the demand for labor remains.

One of the central components of the information age is metadata. As individuals navigate the World Wide Web, build social networks, and participate in e-commerce, their personal attributes and trends transform into storable data. Data has become both a form of currency and a material asset that state actors can weaponize to conduct influence or propaganda operations against individuals or groups whose network positions amplify effects.   Such actors can easily target the myriad extra-national identities present within a given nation in attempts to mobilize one group against another, or even against the state itself, causing domestic instability and a potential loss of state power within the international system.  Russian digital information operations have recently expanded from the former Soviet space to the U.S. and European Union and regularly target vulnerable or disenfranchised populations to provoke domestic chaos and weaken governance as a means to advance Russian strategic objectives[8].

As long as western democracies continue to become more diverse, a trend that is unalterable for at least the next quarter century, nationalism will remain a tangible threat, as malign actors will continue to subvert nationalist movements to achieve their own strategic objectives.  This threat is only intensified by the accessibility of information and the ease of engaging groups and individuals in the information age.  Nationalism in various forms is on the rise throughout western democracies; it often stems from unaddressed grievances, economic misfortunes, or a perceived loss of power, and leads to the consolidation of in-groups and the targeting of out-groups.  It remains justifiable for individuals to want equal rights and provisions under the rule of law, and ensuring that systems protect both the masses from the individual (tyranny) and the individual from the masses (mob rule) is paramount for maintaining both state power and domestic stability.  It falls on citizens and policy makers alike within democracies to promote national identities that foster patriotism and the integration and assimilation of various cultures into the populace, rather than the segregation and out-grouping that create divisions rival states will exploit. 


Endnotes:

[1] Morgenthau, H., & Thompson, K. (1948). Politics Among Nations: The Struggle for Power and Peace (6th ed.). New York: McGraw-Hill.

[2] Mearsheimer, J. J. (2011). Kissing Cousins: Nationalism and Realism. Yale Workshop on International Relations, vol. 5.

[3] Connor, W. (1993). Beyond Reason: The Nature of The Ethnonational Bond. Ethnic and Racial Studies, 373 – 389.

[4] Connor, P., & Krogstad, J. (2016, June 15). Immigrant share of population jumps in some European countries. Retrieved April 7, 2019, from Pew Research Center: https://www.pewresearch.org/fact-tank/2016/06/15/immigrant-share-of-population-jumps-in-some-european-countries/

[5] Pew Research Center. (2015). Modern Immigration Wave Brings 59 Million to U.S., Driving Population Growth and Change Through 2065. Washington, DC: Pew Research Center.

[6] Frey, W. H. (2018, March 14). The US will become ‘minority white’ in 2045, Census projects. Retrieved April 8, 2019, from The Brookings Institution : https://www.brookings.edu/blog/the-avenue/2018/03/14/the-us-will-become-minority-white-in-2045-census-projects/

[7] World Bank. (2019). Fertility Rate, Total (Births per Women). Retrieved April 9, 2019, from The World Bank Group: https://data.worldbank.org/indicator/SP.DYN.TFRT.IN

[8] Klein, H. (2018, September 25). Information Warfare and Information Perspectives: Russian and U.S. Perspectives. Retrieved April 6, 2019, from Columbia SIPA Journal of International Affairs: https://jia.sipa.columbia.edu/information-warfare-and-information-perspectives-russian-and-us-perspectives

Assessment paper by James P. Micciche. Tags: Assessment Papers, Information Systems, Nationalism.

Options to Bridge the U.S. Department of Defense – Silicon Valley Gap with Cyber Foreign Area Officers

Kat Cassedy is a qualitative analyst with 20 years of work in hard problem solving, alternative analysis, and red teaming.  She currently works as an independent consultant/contractor, with experience in the public, private, and academic sectors.  She can be found on Twitter @Katnip95352013, tweeting on modern #politicalwarfare, #proxywarfare, #NatSec issues, #grayzoneconflict, and a smattering of random nonsense.  Divergent Options’ content does not contain information of any official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The cultural gap between the U.S. Department of Defense and Silicon Valley is significant.  Bridging this gap likely requires more than military members learning tech speak as their primary duties allow.

Date Originally Written:  April 15, 2019. 

Date Originally Published:  April 15, 2019. 

Author and / or Article Point of View:  The author’s point of view is that the cyber-sector may be more akin to a foreign culture than a business segment, and that bridging the growing gulf between the Pentagon and Silicon Valley may require sociocultural capabilities as much or more so than technical or acquisition skills. 

Background:  As the third decade of the digital revolution draws to a close, and nearly a year after U.S. Cyber Command was elevated to a Unified Combatant Command, the gap between the private sector's most advanced technology talent, intellectual property (IP), services, and products and those of the DoD continues to widen. Although the Pentagon needs and wants Silicon Valley's IP and capabilities, the technorati are rejecting DoD's overtures[1] in favor of enormous new markets such as those available in China. In the Information Age, DoD assesses that it needs Silicon Valley's technology to maintain U.S. global battlespace dominance, much as it needed the Middle East's fossil fuels over the last half century. And Silicon Valley's tech giants, with market capitalizations rivaling or exceeding the Gross Domestic Product of the globe's most thriving economies, have such global agency and autonomy that they should arguably be viewed as geopolitical power players, not simply businesses. In that context, perhaps it is time to consider 21st-century alternatives to the DoD habit of thinking of Silicon Valley and its subcomponents as conventional Defense Industrial Base vendors to be managed like routine government contractors.

Significance:  Many leaders and action officers in the DoD community are concerned that Silicon Valley's emphasis on revenue and shareholder value is leading it to prioritize relationships with America's near-peer competitors – most particularly, but not limited to, China[2] – over working with the U.S. DoD and national security community. "In the policy world, 30 years of experience usually makes you powerful. In the technical world, 30 years of experience usually makes you obsolete[3]." Given the DoD's extreme reliance on and investment in highly networked and interdependent information systems to dominate the modern global operating environment, the possibility that U.S. companies are choosing foreign adversaries as clients and partners over the U.S. government is highly concerning. If this shift of technology away from U.S. national security concerns continues: 1) U.S. companies may soon be providing adversaries with advanced capabilities that run counter to U.S. national interests[4]; 2) even where these companies continue to provide products and services to the U.S., there is increased concern about counter-intelligence vulnerabilities in U.S. Government (USG) systems and platforms due to technology supply chain weaknesses[5]; and 3) key U.S. tech startups and emerging technology companies are accepting venture capital, seed, and private equity investment from investors whose ultimate beneficial owners trace back to foreign sovereign and private wealth sources of concern to the national security community[6].

Option #1:  To bridge the cultural gap between Silicon Valley and the Pentagon, the U.S. Military Departments will train, certify, and deploy “Cyber Foreign Area Officers” or CFAOs.  These CFAOs would align with DoD Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs[7]” and, within the cyber and Silicon Valley context, do the same as a traditional FAO and “provide expertise in planning and executing operations, to provide liaison with foreign militaries operating in coalitions with U.S. forces, to conduct political-military activities, and to execute military-diplomatic missions.”

Risk:  DoD treating multinational corporations like nation states risks further decreasing or eroding the recognition of nation states as bearing ultimate authority.  Additionally, there is risk that the checks and balances specifically within the U.S. between the public and private sectors will tip irrevocably towards the tech sector and set the sector up as a rival for the USG in foreign and domestic relationships. Lastly, success in this approach may lead to other business sectors/industries pushing to be treated on par.

Gain:  Having DoD establish a CFAO program would put DoD-centric cyber/techno skills in a socio-cultural context, aiding Silicon Valley sense-making, narrative development and dissemination, and the establishment of mutual trusted agency. In effect, CFAOs would act as translators and relationship builders between Silicon Valley and DoD, with the interests of all the branches of service fully represented. Given the routine real-world and fictional depictions of Silicon Valley and DoD as coming from figuratively different worlds, using a FAO construct to break through this recognized barrier may be a case of USG policy retroactively catching up with present reality. Further, considering the national security threats that loom if the DoD loses its technological superiority, the potential gains of this option may outweigh its risks.

Option #2:  Maintain the status quo, where DoD alternates between first treating Silicon Valley as a necessary but sometimes errant supplier, and second seeking to emulate Silicon Valley’s successes and culture within existing DoD constructs.  

Risk:  Possibly the greatest risk of continuing the current DoD approach to the tech world is losing the advantage of technical superiority through speed of innovation, due to a mutual lack of understanding of priorities, mission drivers, objectives, and organizational design. Although a number of DoD acquisition reform initiatives are gaining some traction, conventional thinking holds that DoD must acquire technology and services through a lengthy competitive bid process which, once awarded, locks both the DoD and the winner into a multi-year relationship. In Silicon Valley, speed-to-market is valued, and concepts pitched one month may be expected to be deployable within a few quarters, before the technology evolves yet again. Continual experimentation, improvisation, adaptation, and innovation are at the heart of Silicon Valley. DoD wants advanced technology, but it wants it scalable, repeatable, controllable, and inexpensive. These are not compatible cultural outlooks.

Gain:  Continuing the current course of action has the advantage of familiarity, where the rules and pathways are well-understood by DoD and where risk can be managed. Although arguably slow to evolve, DoD acquisition mechanisms are on solid legal ground regarding use of taxpayer dollars, and program managers and decision makers alike are quite comfortable in navigating the use of conventional DoD acquisition tools. This approach represents good fiscal stewardship of DoD budgets.

Other Comments:  None. 

Recommendation:  None.  


Endnotes:

[1] Malcomson, S. Why Silicon Valley Shouldn’t Work With the Pentagon. New York Times. 19APR2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/04/19/opinion/silicon-valley-military-contract.html.

[2] Hsu, J. Pentagon Warns Silicon Valley About Aiding Chinese Military. IEEE Spectrum. 28MAR2019. Retrieved 15APR2019, from https://spectrum.ieee.org/tech-talk/aerospace/military/pentagon-warns-silicon-valley-about-aiding-chinese-military.

[3] Zegart, A and Childs, K. The Growing Gulf Between Silicon Valley and Washington. The Atlantic. 13DEC2018. Retrieved 15APR2019, from https://www.theatlantic.com/ideas/archive/2018/12/growing-gulf-between-silicon-valley-and-washington/577963/.

[4] Copestake, J. Google China: Has search firm put Project Dragonfly on hold? BBC News. 18DEC2018. Retrieved 15APR2019, from https://www.bbc.com/news/technology-46604085.

[5] Mozur, P. The Week in Tech: Fears of the Supply Chain in China. New York Times. 12OCT2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/10/12/technology/the-week-in-tech-fears-of-the-supply-chain-in-china.html.

[6] Northam, J. China Makes A Big Play In Silicon Valley. National Public Radio. 07OCT2018. Retrieved 15APR2019, from https://www.npr.org/2018/10/07/654339389/china-makes-a-big-play-in-silicon-valley.

[7] Department of Defense Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs,” April 28, 2005.  Retrieved 15APR2019, from https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/131517p.pdf.

 


Assessment of the North Korean Cyberattack on Sony Pictures

Emily Weinstein is a Research Analyst at Pointe Bello and a current M.A. candidate in Security Studies at Georgetown University.  Her research focuses on Sino-North Korean relations, foreign policy, and military modernization.  She can be found on Twitter @emily_sw1.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the North Korean Cyberattack on Sony Pictures

Date Originally Written:  July 11, 2018.

Date Originally Published:  August 20, 2018.

Summary:   The 2014 North Korean cyberattack on Sony Pictures shocked the world into realizing that a North Korean cyber threat truly existed.  Prior to 2014, what little information existed on North Korea’s cyber capabilities was largely dismissed, citing poor domestic conditions as rationale for cyber ineptitude.  However, the impressive nature of the Sony attack was instrumental in changing global understanding of Kim Jong-un and his regime’s daring nature.

Text:  On November 24, 2014 Sony employees discovered a massive cyber breach after an image of a red skull appeared on computer screens company-wide, displaying a warning that threatened to reveal the company’s secrets.  That same day, more than 7,000 employees turned on their computers to find gruesome images of the severed head of Sony’s chief executive, Michael Lynton[1].  These discoveries forced the company to shut down all computer systems, including those in international offices, until the incident was further investigated.  What was first deemed nothing more than a nuisance was later revealed as a breach of international proportions.  Since this incident, the world has noted the increasing prevalence of large-scale digital attacks and the dangers they pose to both private and public sector entities.

According to the U.S. Computer Emergency Readiness Team, the primary malware used in this case was a Server Message Block (SMB) Worm Tool, otherwise known as SVCH0ST.EXE.  An SMB worm is usually equipped with five components: a listening implant, a lightweight backdoor, a proxy tool, a destructive hard drive tool, and a destructive target cleaning tool[2].  The worm spreads throughout the infected network via a brute force authentication attack – a trial-and-error method of guessing credentials such as a user password or personal identification number.  The worm then connects to its command-and-control infrastructure, from which it copies malware – software intended to damage or disable computers and computer systems – to victim or administrator systems via the network sharing process.  Once these tasks are complete, the worm executes the malware using remotely scheduled tasks[3].
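From a defender's perspective, the brute-force authentication behavior described above leaves a recognizable signature: many failed logins from one source in a short window. The sketch below is a hypothetical illustration of that detection idea only; the threshold, window, and event format are assumptions, not details from the US-CERT alert.

```python
from collections import defaultdict

# Hypothetical sketch: flag sources generating brute-force authentication
# attempts (like the SMB worm's credential guessing) by counting failed
# logins per source host within a sliding time window.
THRESHOLD = 10        # failed attempts before a source is flagged (assumed)
WINDOW_SECONDS = 60   # sliding window length in seconds (assumed)

def flag_brute_force(events, threshold=THRESHOLD, window=WINDOW_SECONDS):
    """events: iterable of (timestamp, source_ip, success) tuples.

    Returns the set of source IPs whose failed-login count within any
    `window`-second span reached `threshold`.
    """
    failures = defaultdict(list)  # source_ip -> recent failure timestamps
    flagged = set()
    for ts, src, success in sorted(events):
        if success:
            continue
        recent = [t for t in failures[src] if ts - t <= window]
        recent.append(ts)
        failures[src] = recent
        if len(recent) >= threshold:
            flagged.add(src)
    return flagged
```

A monitoring pipeline could feed this function authentication events from Windows security logs; a source that trips the threshold would then be isolated before the worm's network-share copying stage.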

This type of malware is highly destructive.  If an organization is infected, it is likely to experience massive impacts on daily operations, including the loss of intellectual property and the disruption of critical internal systems[4].  In Sony's case, at the individual level, hackers obtained and leaked to the general public personal and somewhat embarrassing information about, or said by, Sony personnel, in addition to sensitive or controversial information from private Sony emails.  At the company level, hackers stole diverse information ranging from contracts, salary lists, and budget information to movie plans, including five entire yet-to-be-released movies.  Moreover, Sony's internal data centers had been wiped clean and 75 percent of its servers destroyed[5].

This hack was attributed to the release of Sony's movie The Interview – a comedy depicting U.S. journalists' plan to assassinate North Korean leader Kim Jong-un.  A group of hackers self-identified as the "Guardians of Peace" (GOP) initially took responsibility for the attack; however, attribution remained unsettled, as experts had a difficult time determining the connections and sponsorship of the GOP hacker group.  Then-Federal Bureau of Investigation (FBI) Director James Comey announced in December 2014 that the U.S. government believed the North Korean regime was behind the attack, noting that the Sony hackers had at times failed to use proxy servers to mask the origin of their attack, revealing Internet Protocol (IP) addresses that the FBI knew to be used exclusively by North Korea[6].
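The attribution logic the FBI described – observed connection addresses overlapping with infrastructure already linked to a suspected actor – can be reduced to a simple set intersection. The sketch below is purely illustrative; the addresses are placeholders from a documentation range, not the actual IPs cited in the investigation.

```python
# Hypothetical sketch of infrastructure-overlap attribution: compare IPs
# observed during an attack against addresses previously tied to an actor.
# These example addresses come from the reserved documentation range
# 203.0.113.0/24 and are NOT real threat indicators.
KNOWN_ACTOR_IPS = {"203.0.113.7", "203.0.113.9"}

def overlapping_infrastructure(observed_ips, known_ips=KNOWN_ACTOR_IPS):
    """Return the observed addresses that match known actor infrastructure."""
    return sorted(set(observed_ips) & known_ips)
```

In practice such overlap is one weak signal among many (malware reuse, language settings, tradecraft), which is why the assessment below also weighs the Korean-language build artifacts and the 2013 South Korea attacks.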

Aside from Director Comey’s statements, other evidence exists that suggests North Korea’s involvement.  For instance, the type of malware deployed against Sony utilized methods similar to malware that North Korean actors had previously developed and used.  Similarly, the computer-wiping software used against Sony was also used in a 2013 attack against South Korean banks and media outlets.  However, most damning of all was the discovery that the malware was built on computers set to the Korean language[7].

As for motivation, experts argue that the hack was executed by the North Korean government in an attempt to preserve the image of Kim Jong-un, as protecting the leader's image is a chief political objective of North Korea's cyber program.  Sony's The Interview infantilized Kim Jong-un and disparaged his leadership skills, portraying him as an inept, ruthless, and selfish leader, while poking fun at him by depicting him singing Katy Perry's "Firework" while shooting off missiles.  Kim Jong-un himself has declared that "Cyberwarfare, along with nuclear weapons and missiles, is an 'all-purpose sword[8],'" so it is not surprising that he would use it to protect his own reputation.

The biggest takeaway from the Sony breach is arguably the U.S. government's change in attitude towards North Korean cyber capabilities.  In the years leading up to the attack, U.S. analysts were quick to dismiss North Korea's cyber potential, citing its isolationist tactics, struggling economy, and lack of modernization as rationale for this judgement.  However, following this large-scale attack on a large and prominent U.S. company, the U.S. government has been forced to rethink how it views the Hermit Regime's cyber capabilities.  Former National Security Agency Deputy Director Chris Inglis argues that cyber is a tailor-made instrument of power for the North Korean regime, thanks to its low cost of entry, asymmetrical nature, and degree of anonymity and stealth[9].  Indeed, the North Korean cyber threat crept up on the U.S., and its intelligence apparatus must now continue working both to counter and to better understand North Korea's cyber capabilities.


Endnotes:

[1] Cieply, M. and Barnes, B. (December 30, 2014). Sony Cyberattack, First a Nuisance, Swiftly Grew Into a Firestorm. Retrieved July 7, 2018, from https://www.nytimes.com/2014/12/31/business/media/sony-attack-first-a-nuisance-swiftly-grew-into-a-firestorm-.html

[2] Lennon, M. (December 19, 2014). Hackers Used Sophisticated SMB Worm Tool to Attack Sony. Retrieved July 7, 2018, from https://www.securityweek.com/hackers-used-sophisticated-smb-worm-tool-attack-sony

[3] Doman, C. (January 19, 2015). Destructive malware—a close look at an SMB worm tool. Retrieved July 7, 2018, from http://pwc.blogs.com/cyber_security_updates/2015/01/destructive-malware.html

[4] United States Computer Emergency Readiness Team (December 19, 2014). Alert (TA14-353A) Targeted Destructive Malware. Retrieved July 7, 2018, from https://www.us-cert.gov/ncas/alerts/TA14-353A

[5] Cieply, M. and Barnes, B. (December 30, 2014). Sony Cyberattack, First a Nuisance, Swiftly Grew Into a Firestorm. Retrieved July 7, 2018, from https://www.nytimes.com/2014/12/31/business/media/sony-attack-first-a-nuisance-swiftly-grew-into-a-firestorm-.html

[6] Greenberg, A. (January 7, 2015). FBI Director: Sony’s ‘Sloppy’ North Korean Hackers Revealed Their IP Addresses. Retrieved July 7, 2018, from https://www.wired.com/2015/01/fbi-director-says-north-korean-hackers-sometimes-failed-use-proxies-sony-hack/

[7] Pagliery, J. (December 29, 2014). What caused Sony hack: What we know now. Retrieved July 8, 2018, from http://money.cnn.com/2014/12/24/technology/security/sony-hack-facts/

[8] Sanger, D., Kirkpatrick, D., and Perlroth, N. (October 15, 2017). The World Once Laughed at North Korean Cyberpower. No More. Retrieved July 8, 2018, from https://mobile.nytimes.com/2017/10/15/world/asia/north-korea-hacking-cyber-sony.html

[9] Ibid.


An Assessment of Information Warfare as a Cybersecurity Issue

Justin Sherman is a sophomore at Duke University double-majoring in Computer Science and Political Science, focused on cybersecurity, cyberwarfare, and cyber governance. Justin conducts technical security research through Duke’s Computer Science Department; he conducts technology policy research through Duke’s Sanford School of Public Policy; and he’s a Cyber Researcher at a Department of Defense-backed, industry-intelligence-academia group at North Carolina State University focused on cyber and national security – through which he works with the U.S. defense and intelligence communities on issues of cybersecurity, cyber policy, and national cyber strategy. Justin is also a regular contributor to numerous industry blogs and policy journals.

Anastasios Arampatzis is a retired Hellenic Air Force officer with over 20 years’ worth of experience in cybersecurity and IT project management. During his service in the Armed Forces, Anastasios was assigned to various key positions in national, NATO, and EU headquarters, and he’s been honored by numerous high-ranking officers for his expertise and professionalism, including a nomination as a certified NATO evaluator for information security. Anastasios currently works as an informatics instructor at AKMI Educational Institute, where his interests include exploring the human side of cybersecurity – psychology, public education, organizational training programs, and the effects of cultural, cognitive, and heuristic biases.

Paul Cobaugh is the Vice President of Narrative Strategies, a coalition of scholars and military professionals involved in the non-kinetic aspects of counter-terrorism, defeating violent extremism, irregular warfare, large-scale conflict mediation, and peace-building. Paul recently retired from a distinguished career in U.S. Special Operations Command, and his specialties include campaigns of influence and engagement with indigenous populations.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of Information Warfare as a Cybersecurity Issue

Date Originally Written:  March 2, 2018.

Date Originally Published:  June 18, 2018.

Summary:  Information warfare is not new, but the evolution of cheap, accessible, and scalable cyber technologies greatly amplifies it.  The U.S. Department of Justice's February 2018 indictment of the Internet Research Agency – one of the Russian groups behind disinformation in the 2016 American election – establishes that information warfare is not just a global problem from the national security and fact-checking perspectives, but a cybersecurity issue as well.

Text:  On February 16, 2018, U.S. Department of Justice Special Counsel Robert Mueller indicted 13 Russians for interfering in the 2016 United States presidential election[1]. Beyond the important legal and political ramifications of this event, this indictment should make one thing clear: information warfare is a cybersecurity issue.

It shouldn’t be surprising that Russia created fake social media profiles to spread disinformation on sites like Facebook.  This tactic had been demonstrated for some time, and the Russians have used it in numerous other countries as well[2].  What’s noteworthy about the investigation’s findings is that Russian hackers also stole the identities of real American citizens to spread disinformation[3].  Whether the Russian hackers compromised accounts through technical hacking, social engineering, or other means, this technique proved remarkably effective; masquerading as American citizens lent significantly greater credibility to trolls (who purposely sow discord on the Internet) and bots (automated information-spreaders) pushing Russian narratives.

Information warfare has traditionally been viewed as an issue of fact-checking or information filtering, which it certainly still is today.  Nonetheless, traditional information warfare was conducted before the advent of modern cyber technologies, which have greatly changed the ways in which information campaigns are executed.  Whereas historical campaigns took time to spread information and did so through in-person speeches or printed news articles, social media enables instantaneous, low-cost, and scalable access to the world’s populations, as does the simplicity of online blogging and information forgery (e.g., using software to manufacture false images).  Those looking to wage information warfare can do so with relative ease in today’s digital world.

The effectiveness of modern information warfare, then, is heavily dependent upon the security of these technologies and platforms – or, in many cases, the total lack thereof.  In this case, the success of the Russian hackers was propelled by the average U.S. citizen’s ignorance of basic cyber “hygiene” rules, such as strong password creation.  Had cybersecurity mechanisms kept these hackers out, Russian “agents of influence” would have gained access to far fewer legitimate social media profiles – making their overall campaign significantly less effective.

To be clear, this is not to blame the campaign’s effectiveness on specific end users; with some 160,000 Facebook accounts hacked every single day, it wouldn’t be difficult for any other country to use this same technique[4].  However, it’s important to understand the relevance of cybersecurity here.  User access control, strong passwords, mandated multi-factor authentication, fraud detection, and identity theft prevention were just some of the cybersecurity best practices that failed to combat Russian disinformation, just as much as fact-checking mechanisms or counter-narrative strategies did.
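Of the hygiene controls listed above, the simplest to illustrate is a password policy check of the kind platforms can enforce at account creation. The rules below are an assumption for the sketch, not an authoritative standard (real guidance, such as NIST's, emphasizes length and breach screening over composition rules).

```python
import re

# Minimal sketch of a password-strength gate, one of the "hygiene" controls
# mentioned above. Rules (length 12+, mixed character classes) are
# illustrative assumptions, not a recommended standard.
def password_is_strong(pw: str) -> bool:
    return (
        len(pw) >= 12                                  # minimum length
        and re.search(r"[a-z]", pw) is not None        # lowercase letter
        and re.search(r"[A-Z]", pw) is not None        # uppercase letter
        and re.search(r"\d", pw) is not None           # digit
        and re.search(r"[^A-Za-z0-9]", pw) is not None # symbol
    )
```

Controls like this raise the cost of the brute-force and credential-guessing techniques that gave the trolls their stock of legitimate American profiles.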

These technical and behavioral failures didn’t just compromise the integrity of information, a pillar of cybersecurity; they also made the campaign far more effective.  As the hackers planned to exploit the polarized election environment, access to American profiles made this far easier: by manipulating and distorting information to make it seem legitimate (i.e., opinions coming from actual Americans), these Russians undermined law enforcement operations, election processes, and more.  We are quick to ask: how much of this information was correct and how much of it wasn’t?  Who can tell whether the information originated from un-compromised, credible sources or from credible sources that had actually been hacked?

However, we should also consider another angle: what if the hackers hadn’t gained access to those American profiles in the first place?  What if they had been forced to rely almost entirely on fraudulent accounts, which are prone to detection by Facebook’s algorithms?  It is for these reasons that information warfare is so critical for cybersecurity, and why Russian information warfare campaigns of the past cannot be equated with the digital information wars of the modern era.
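The detection algorithms alluded to above are proprietary, but the general shape of such heuristics can be sketched: newly created accounts posting at abnormally high volume score as suspicious. Everything here – the features, weights, and threshold – is an invented illustration, not Facebook's actual method.

```python
# Hypothetical fraudulent-account heuristic: combine account newness and
# posting volume into a suspicion score. Weights and thresholds are
# assumptions for illustration only.
def suspicion_score(account_age_days: int, posts_per_day: float) -> float:
    newness = max(0.0, 1.0 - account_age_days / 365.0)  # 1.0 = brand new
    volume = min(1.0, posts_per_day / 100.0)            # saturates at 100/day
    return round(0.5 * newness + 0.5 * volume, 3)

def is_suspicious(account_age_days: int, posts_per_day: float,
                  threshold: float = 0.6) -> bool:
    return suspicion_score(account_age_days, posts_per_day) >= threshold
```

The point of the sketch is the asymmetry the article describes: purely fraudulent accounts trip heuristics like this, while hijacked accounts belonging to real Americans carry years of legitimate history and sail under the threshold.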

The global cybersecurity community can take an even greater, active role in addressing the account access component of disinformation.  Additionally, those working on information warfare and other narrative strategies could leverage cybersecurity for defensive operations.  Without a coordinated and integrated effort between these two sectors of the cyber and security communities, the inability to effectively combat disinformation will only continue as false information penetrates our social media feeds, news cycles, and overall public discourse.

More than ever, a demand signal is present to educate the world’s citizens on cyber risks and basic cyber “hygiene,” and to even mandate the use of multi-factor authentication, encrypted Internet connections, and other critical security features.  The security of social media and other mass-content-sharing platforms has become an information warfare issue, both within respective countries and across the planet as a whole.  When rhetoric and narrative can spread (or at least appear to spread) from within, the effectiveness of a campaign is amplified.  The cybersecurity angle of information warfare, in addition to the misinformation, disinformation, and rhetoric itself, will remain integral to effectively combating the propaganda and narrative campaigns of the modern age.


Endnotes:

[1] United States of America v. Internet Research Agency LLC, Case 1:18-cr-00032-DLF. Retrieved from https://www.justice.gov/file/1035477/download

[2] Wintour, P. (2017, September 5). West Failing to Tackle Russian Hacking and Fake News, Says Latvia. Retrieved from https://www.theguardian.com/world/2017/sep/05/west-failing-to-tackle-russian-hacking-and-fake-news-says-latvia

[3] Greenberg, A. (2018, February 16). Russian Trolls Stole Real US Identities to Hide in Plain Sight. Retrieved from https://www.wired.com/story/russian-trolls-identity-theft-mueller-indictment/

[4] Callahan, M. (2015, March 1). Big Brother 2.0: 160,000 Facebook Pages are Hacked a Day. Retrieved from https://nypost.com/2015/03/01/big-brother-2-0-160000-facebook-pages-are-hacked-a-day/


Options for United States Intelligence Community Analytical Tools

Marvin Ebrahimi has served as an intelligence analyst.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Centralization of United States Intelligence Community (USIC) Analytical Tools.

Date Originally Written:  March 6, 2017.

Date Originally Published:  May 1, 2017.

Author and / or Article Point of View:  The author has served as an all-source analyst.  The author has personal experience with the countless tool suites, programs, and platforms used by intelligence analysts within the USIC to perform their analytical function, and has observed that there is no centralized collaborative environment from which such tools can be learned, evaluated, or selected for unit-specific mission sets tailored to the end user.

Background:  Various USIC agencies and components have access to countless tool suites, some of which are unique to one agency and some available across the community.  These tools vary according to intelligence function and are available across the multiple information systems used by intelligence components within the community[1].  While a baseline of tools is available to all, there is little centralization of these tools.  This lack of centralization requires analysts to learn and retain knowledge of how to manipulate the information systems and search engines at their disposal in order to find the specific tools required for specific analytical functions.

Significance:  Digital collocation or compilation of analytical tool suites, programs, and platforms in a collaborative space would benefit all-source analysts, whose broadly focused function requires access to a diverse set of tools.  Conducting tool-specific searches – manipulating a basic search engine to navigate to a landing page or agency-specific site where tool-specific information can hopefully be found – is time-consuming and detracts from the time analysts have available to conduct and employ analytical tradecraft.  This loss of time from an analyst’s daily schedule creates an opportunity cost that has yet to be measured.
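At its core, the centralization being argued for amounts to a single index that maps analytical functions to tools, so an analyst queries one registry instead of several agency-specific search engines. A minimal sketch of such a registry follows; the tool names and function labels are invented for the example.

```python
# Illustrative sketch of a centralized analytical-tool registry: one index,
# searchable by analytical function. Entries are hypothetical examples,
# not real USIC tools.
CATALOG = [
    {"name": "GeoPlot",    "functions": {"geospatial", "imagery"}},
    {"name": "LinkWeaver", "functions": {"network-analysis", "all-source"}},
    {"name": "SigSift",    "functions": {"signals", "all-source"}},
]

def find_tools(function: str, catalog=CATALOG):
    """Return names of tools supporting the given analytical function."""
    return sorted(t["name"] for t in catalog if function in t["functions"])
```

Even this trivial structure shows the gain: discovery becomes one lookup against shared metadata rather than institutional knowledge of where each agency hides its landing pages.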

Option #1:  Centralize analytical training, visibility, and accessibility to tool suites, programs, and platforms through creation of a USIC-wide collaborative analytical environment such as the Director of National Intelligence’s Intelligence Community IT Enterprise initiative[2].  Centralization of analytical training is used here to describe the necessity to train analysts on how to manipulate various databases, search engines, and environments effectively to find the specific tool suite or program required for their function.  Option #1 does not refer to centralizing all USIC member analytical training as outlined in the Intelligence Community Directive 203, “Analytical Standards.”

Risk:  Centralization of analytical training and accessibility risks conflict with USIC members' unique identities and analytical functions.  The tools used by one agency may not work for, or be required by, another.  Because different agencies provide different functions, centralizing access to tools requires further integration of all USIC agencies into the same, or at least a compatible, digital space.

Creation of a USIC-wide entity to collate, evaluate, and manage access to the multitude of tool suites creates yet another potentially bureaucratic element in an already robust community.  Such an entity would have to be controlled by the Office of the Director of National Intelligence unless delegated to a subordinate element.

Compartmentalization required to protect agencies' sources and methods, analytical tradecraft included, may preclude the community-wide collaboration and information sharing necessary to create an effective space.

Gain:  Shared information is a shared reality.  Creation of a community-wide space to colocate analytical tool suites enables and facilitates collaboration, information exchange, and innovation across agencies.  These are positive factors in an information-driven age where rapid exchange of information is expected.

Effectiveness of end-user analysts increases with greater access to collective knowledge, skills, and abilities used by other professionals who perform similar specific analytical functions.

Community-wide efforts to improve efficiency in mission execution, the development of analytical tradecraft, and the associated tools or programs can be captured and codified for mass implementation or dissemination, thereby improving the overall capability of the USIC analytical workforce.

Option #2:  Improve analyst education and training to increase the knowledge, skills, and abilities analysts need to better manipulate existing tool suites, programs, and platforms.

Risk:  USIC components continue to provide function-specific training to their own analysts, while partner agencies do not benefit from agency-specific tools due to inaccessibility or ignorance of those tools, thereby decreasing the analytical effectiveness of the community as a whole.  Worded differently, community-wide access to the full range of available analytical tools does not reach all levels or entities.

Analysts rely on unstructured or personal means, such as informal on-the-job training and personal networks, to learn to manipulate current tools, while possibly remaining unaware of, or untrained on, other analytical tools at their disposal elsewhere in the community.  Analysts rely on ingenuity and resolve to accomplish tasks but are not as efficient or effective as they could be.

Gain:  Analytical tradecraft unique to specific community members remains protected and inaccessible to analysts who do not possess the required access, thereby preserving developed tradecraft.

Other Comments:  The author does not possess expert knowledge of collaborative spaces or the digital infrastructure required to create such an information environment; however, the author does have deployment experience as an end-user of several tool suites and programs used to perform specific analytical functions.  Most of these were proliferated across a small portion of USIC components but were not widely accessible or available beyond basic search engine functions.

Recommendation:  None.


Endnotes:

[1] Treverton, G. F. (2016). New Tools for Collaboration. Retrieved March, 2016, from https://csis-prod.s3.amazonaws.com/s3fs-public/legacy_files/files/publication/160111_Treverton_NewTools_Web.pdf

[2] Office of the Director of National Intelligence IC IT Enterprise Fact Sheet. Retrieved March, 2017, from https://www.dni.gov/files/documents/IC%20ITE%20Fact%20Sheet.pdf

Tags:  Information and Intelligence, Information Systems, Marvin Ebrahimi, Option Papers, United States