Options to Mitigate Cognitive Threats

John Chiment is a strategic threat intelligence analyst and has supported efforts across the Department of Defense and U.S. Intelligence Community. The views expressed herein are those of the author and do not reflect the official policy or position of the LinQuest Corporation, any of LinQuest’s subsidiaries or parents, or the U.S. Government.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


National Security Situation:  Cognitive attacks target the defender’s ability to accurately perceive the battlespace and react appropriately. If successful, these attacks may permit an attacker to defeat better equipped or positioned defenders. Defenders who deploy defenses poorly matched against the incoming threat – either due to mischaracterizing that threat or by rushing to respond – likely will suffer greater losses. Mitigation strategies for cognitive attacks all carry risks.

Date Originally Written:  January 31, 2022.

Date Originally Published:  March 7, 2022.

Author and / or Article Point of View:  The author is an American threat intelligence analyst with time in uniform, as a U.S. government civilian, and as a DoD contractor. 

Background:  Effectively countering an attack requires the defender to detect its existence, recognize the danger posed, decide on a course of action, and implement that action before the attack completes its engagement. An attacker can improve the odds of a successful strike by increasing the difficulty in each of these steps (via stealth, speed, deception, saturation, etc.) while defenders can improve their chances through preparation, awareness, and technical capabilities. Correct detection and characterization of a threat enables decision-makers to decide which available defense is the most appropriate. 

Significance:  A defender deploying a suboptimal or otherwise inappropriate defense benefits the attacker. Attackers who target the defender’s understanding of the incoming attack and their decision-making process may prompt defenders to select inappropriate defenses. Technological superiority – long a goal of western militaries – may be insufficient against such cognitive manipulations that target human decision-making processes rather than the capabilities the defender controls.

Option #1:  Defenders increase their number of assets collecting Intelligence, Surveillance, and Reconnaissance (ISR) data in order to more rapidly detect threats.

Risk:  Increasing ISR data collection consumes industrial and financial resources and may worsen relationships with other powers and the general public. Increasing collection may also overwhelm analytic capabilities by providing too much data [1].

Gain:  Event detection begins the defender’s process and earlier detection permits the defender to develop more options in subsequent stages. By increasing the number of ISR assets that can begin the defender’s decision-making process, the defender increases their opportunities to select an appropriate defense.

Option #2:  The defender increases the number of assets capable of analyzing information in order to more rapidly identify the threat.

Risk:  Increasing the number of assets capable of accurately processing, exploiting, and disseminating (PED) information consumes intellectual and financial resources. Threat characterization decisions can also be targeted in the same ways as defense deployment decisions [2].

Gain:  A larger network of available PED analysts may better address localized spikes in attacks, more evenly distribute stress among analysts and analytic networks within supporting agencies, and lower the risk of mischaracterizing threats, likely improving decision-makers’ chances of selecting an appropriate defense.

Option #3:  The defender automates defense deployment decisions in order to rapidly respond with a defense.

Risk:  Automated systems may possess exploitable logical flaws that can be targeted in much the same way as the defender’s existing decision-making process. Automated systems also operate at greater speeds, limiting opportunities for the defender to detect and correct inappropriate decisions [3].

Gain:  Automated systems operate at high speed and may mitigate time lost to late detection or initial mischaracterization of threats. Automating decisions also reduces the immediate cognitive load on the defender by permitting defensive software designers to explore and plan for complex potentials without the stress of an incoming attack.

Option #4:  The defender increases the number of assets authorized to make defense deployment decisions in order to more likely select an appropriate defense.

Risk:  Increasing the available pool of authorized decision-makers consumes communication bandwidth and financial resources. Larger communication networks have larger attack surfaces and increase the risk both of data leaks and of attackers maliciously influencing decisions in far-off engagements. Attacking the network segment may produce delays, resulting in defenders not deploying appropriate defenses in time [4].

Gain:  A larger network of authorized decision-makers may better address localized spikes in attacks, more evenly distribute stress among decision-making personnel, and lower the risk of rushed judgements that may prompt inappropriate defense deployments.

Option #5:  The defender trains authorized decision-makers to operate at higher cognitive loads in order to more likely select an appropriate defense.

Risk:  Attackers can likely scale up their attacks to overwhelm even extremely well-trained decision-makers, making this option a short-term solution. Increasing the cognitive load on an already limited resource pool likely will increase burnout rates, lowering the overall supply of experienced decision-makers [5].

Gain:  Improving decision-maker training can likely be achieved with minimal new investments as it focuses on better utilization of existing resources.

Option #6:  The defender prepositions improved defenses and defense response options in order to better endure attacks regardless of decision-making timelines.

Risk:  Prepositioned defenses and response options consume logistical and financial resources. Actions made prior to conflict risk being detected and planned for by adversaries, reducing their potential value. Rarely used defenses have maintenance costs that can be difficult to justify [6].

Gain:  Prepositioned defenses may mitigate attacks not detected before impact by improving the targeted asset’s overall endurance, and attackers knowledgeable of the defender’s defensive capabilities and response options may be deterred or slowed when pursuing goals that will now have to contend with the defender’s assets.

Other Comments:  Risks to the decision-making processes cannot be fully avoided. Options #3 and #6 attempt to make decisions before any cognitive attacks target decision-makers while Options #2 and #4 attempt to mitigate cognitive attack impact by spreading the load across a larger pool of assets. Options #1 and #2 may permit decision-makers to make better decisions earlier in an active attack while Option #5 attempts to improve the decision-making abilities of existing decision-makers. 

Recommendation:  None.


Endnotes:

[1] Krohley, N. (2017, October 24). The Intelligence Cycle is Broken. Here’s How To Fix It. Modern War Institute at West Point. https://mwi.usma.edu/intelligence-cycle-broken-heres-fix/

[2] Corona, I., Giacinto, G., & Roli, F. (2013, August 1). Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues. Information Sciences, 239, 201-225. https://doi.org/10.1016/j.ins.2013.03.022

[3] Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., & Song, D. (2018). Robust Physical-World Attacks on Deep Learning Visual Classification [Paper presentation]. Conference on Computer Vision and Pattern Recognition. https://arxiv.org/abs/1707.08945v5

[4] Joint Chiefs of Staff. (2016, December 21). Countering Threat Networks (JP 3-25). https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_25.pdf

[5] Larsen, R. P. (2001). Decision Making by Military Students Under Severe Stress. Military Psychology, 13(2), 89-98. https://doi.org/10.1207/S15327876MP1302_02

[6] Gerritz, C. (2018, February 1). Special Report: Defense in Depth is a Flawed Cyber Strategy. Cyber Defense Magazine. https://www.cyberdefensemagazine.com/special-report-defense-in-depth-is-a-flawed-cyber-strategy/


Options to Address Disinformation as a Cognitive Threat to the United States

Joe Palank is a Captain in the U.S. Army Reserve, where he leads a Psychological Operations Detachment. He has also previously served as an assistant to former Secretary of Homeland Security Jeh Johnson. He can be found on Twitter at @JoePalank.


National Security Situation:  Disinformation as a cognitive threat poses a risk to the U.S.

Date Originally Written:  January 17, 2022.

Date Originally Published:  February 14, 2022.

Author and / or Article Point of View:  The author is a U.S. Army Reservist specializing in psychological operations and information operations. He has also worked on political campaigns and for the U.S. Department of Homeland Security. He has studied psychology, political communications, disinformation, and has Masters degrees in Political Management and in Public Policy, focusing on national security.

Background:  Disinformation as a non-lethal weapon for both state and non-state actors is nothing new. However, the rise of the internet age and social media, paired with cultural change in the U.S., has given this once-fringe capability new salience. Russia, China, Iran, North Korea, and violent extremist organizations pose the most pervasive and significant risks to the United States through their increasingly weaponized use of disinformation[1].

Significance:  Due to the nature of disinformation, this cognitive threat poses a risk to U.S. foreign and domestic policy-making, undercuts a foundational principle of democracy, and has already caused significant disruption to the U.S. political process. Disinformation can be used tactically alongside military operations, operationally to shape the information environment within a theater of conflict, and strategically by potentially sidelining the U.S. or allies from joining international coalitions.

Option #1:  The U.S. focuses domestically. 

The U.S. could combat the threat of disinformation defensively, by looking inward, and take a two-pronged approach to prevent the effects of disinformation. First, the U.S. could adopt new laws and policies to make social media companies—the primary distributor of disinformation—more aligned with U.S. national security objectives related to disinformation. The U.S. has an asymmetric advantage in serving as the home to the largest social media companies, but thus far has treated those platforms with the same laissez faire approach other industries enjoy. In recent years, these companies have begun to fight disinformation, but they are still motivated by profits, which are in turn motivated by clicks and views, which disinformation can increase[2]. Policy options might include defining disinformation and passing a law making the deliberate spread of disinformation illegal or holding social media platforms accountable for the spread of disinformation posted by their users.

Simultaneously, the U.S. could embark on widescale media literacy training for its populace. Raising awareness of disinformation campaigns, teaching media consumers how to vet information for authenticity, and educating them on the biases within media and our own psychology is an effective line of defense against disinformation[3]. In a meta-analysis of recommendations for improving awareness of disinformation, improved media literacy training was the single most common suggestion among experts[4]. Equipping end users to distinguish real news from fake would render most disinformation campaigns ineffective.

Risk:  Legal – The United States enjoys a nearly absolute tradition of “free speech,” which may prevent the passage of laws combatting disinformation.

Political – Passing laws holding individuals criminally liable for speech, even disinformation, would be assuredly unpopular. Additionally, cracking down on social media companies, who are both politically powerful and broadly popular, would be a political hurdle for lawmakers concerned with re-election. 

Feasibility – Media literacy training would be expensive and time-consuming to implement at scale, and the same U.S. agencies that currently combat disinformation are ill-equipped to focus on domestic audiences for broad-scale educational initiatives.

Gain:  A U.S. public that is immune to disinformation would make for a healthier polity and more durable democracy, directly thwarting some of the aims of disinformation campaigns, and potentially permanently. Social media companies that are more heavily regulated would drastically reduce the dissemination of disinformation campaigns worldwide, benefiting the entire liberal economic order.

Option #2:  The U.S. focuses internationally. 

Strategically, the U.S. could choose to target foreign suppliers of disinformation. This targeting is currently being done tactically and operationally by U.S. DoD elements, the intelligence community, and the State Department. The latter agency also houses the coordinating mechanism for the country’s handling of disinformation, the Global Engagement Center, which has no actual tasking authority within the Executive Branch. A similar but more aggressive agency, such as the proposed Malign Foreign Influence Response Center (MFIRC), could bring the fight directly to purveyors of disinformation[5].

The U.S. has been slow to catch up to its rivals’ disinformation capabilities, responding to disinformation campaigns only occasionally, and with a varied mix of sanctions, offensive cyber attacks, and even kinetic strikes (only against non-state actors)[6]. National security officials benefit from institutional knowledge and “playbooks” for responding to various other threats to U.S. sovereignty or the liberal economic order. These playbooks are valuable for responding quickly, in-kind, and proportionately, while also giving both sides “off-ramps” to de-escalate. An MFIRC could develop playbooks for disinformation and the institutional memory for this emerging type of warfare. Disinformation campaigns are popular among U.S. adversaries due to the relative capabilities advantage they enjoy, as well as for their low costs, both financially and diplomatically[7]. Creating a basket of response options lends itself to the national security apparatus’s current capabilities, and poses fewer legal and political hurdles than changing U.S. laws that infringe on free speech. Moreover, an MFIRC would make the U.S. a more equal adversary in this sphere and raise the costs to conduct such operations, making them less palatable options for adversaries.

Risk:  Geopolitical – Disinformation via the internet is still a new kind of warfare; responding disproportionately carries a significant risk of escalation, possibly turning a meme into an actual war.

Effectiveness – Going after the suppliers of disinformation could be akin to a whack-a-mole game, constantly chasing the next threat without addressing the underlying domestic problems.

Gain:  Adopting this approach would likely have faster and more obvious effects. A drone strike on the headquarters of Russia’s Internet Research Agency, for example, would send a very clear message about how seriously the U.S. takes disinformation. At relatively little cost and time—more a shifting of priorities and resources—the U.S. could significantly blunt its adversaries’ advantages and make disinformation prohibitively expensive to undertake at scale.

Other Comments:  There is no reason why both options could not be pursued simultaneously, save for costs or political appetite.

Recommendation:  None.


Endnotes:

[1] Nemr, C. & Gangware, W. (2019, March). Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age. Park Advisors. Retrieved January 16, 2022 from https://2017-2021.state.gov/weapons-of-mass-distraction-foreign-state-sponsored-disinformation-in-the-digital-age/index.html 

[2] Cerini, M. (2021, December 22). Social media companies beef up promises, but still fall short on climate disinformation. Fortune.com. Retrieved January 16, 2022 from https://fortune.com/2021/12/22/climate-change-disinformation-misinformation-social-media/

[3] Kavanagh, J. & Rich, M.D. (2018) Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life. RAND Corporation. https://www.rand.org/t/RR2314

[4] Helmus, T. & Keep, M. (2021). A Compendium of Recommendations for Countering Russian and Other State-Sponsored Propaganda. Research Report. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA894-1.html

[5] Press Release. (2020, February 14). Following Passage of their Provision to Establish a Center to Combat Foreign Influence Campaigns, Klobuchar, Reed Ask Director of National Intelligence for Progress Report on Establishment of the Center. Office of Senator Amy Klobuchar. https://www.klobuchar.senate.gov/public/index.cfm/2020/2/following-passage-of-their-provision-to-establish-a-center-to-combat-foreign-influence-campaigns-klobuchar-reed-ask-director-of-national-intelligence-for-progress-report-on-establishment-of-the-center

[6] Goldman, A. & Schmitt, E. (2016, November 24). One by One, ISIS Social Media Experts Are Killed as Result of F.B.I. Program. New York Times. Retrieved January 15, 2022 from https://www.nytimes.com/2016/11/24/world/middleeast/isis-recruiters-social-media.html

[7] Stricklin, K. (2020, March 29). Why Does Russia Use Disinformation? Lawfare. Retrieved January 15, 2022 from https://www.lawfareblog.com/why-does-russia-use-disinformation


Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Zac Rogers is an academic from Adelaide, South Australia. Zac has published in journals including International Affairs, The Cyber Defense Review, Joint Force Quarterly, and Australian Quarterly, and communicates with a wider audience across various multimedia platforms regularly. Parasitoid is his first book.


Title:  Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Date Originally Written:  January 3, 2022.

Date Originally Published:  January 17, 2022.

Author and / or Article Point of View:  The author is an Australia-based academic whose research combines a traditional grounding in national security, intelligence, and defence with emerging fields of social cybersecurity, digital anthropology, and democratic resilience.  The author works closely with industry and government partners across multiple projects. 

Summary:  Military investment in war-gaming, table-top exercises, scenario planning, and future force design is increasing.  Some of this investment focuses on adversary activities in the “cognitive domain.” While this investment is necessary, it may fail because it anchors on data-driven machine learning and automation for both offensive and defensive purposes without a clear understanding of their appropriateness.

Text:  In 2019 the author wrote a short piece for the U.S. Army’s MadSci website titled “In the Cognitive War, the Weapon is You![1]” This article attempted to spur self-reflection by the national security, intelligence, and defence communities in Australia, the United States and Canada, Europe, and the United Kingdom. At the time these communities were beginning to incorporate discussion of “cognitive” security/insecurity in their near-future threat assessments and future force design discourses. The article is cited in the North Atlantic Treaty Organization (NATO) Cognitive Warfare document of 2020[2] – either in ways that demonstrate the misunderstanding directly, or as part of a wider context in which the point of that particular title is thoroughly misinterpreted – and the author’s desired self-reflection has not been forthcoming. Instead, and not unexpectedly, the discourse on the cognitive aspects of contemporary conflict has consumed and regurgitated a familiar sequence of errors which will continue to perpetuate rather than mitigate the problem if not addressed head-on.

What the cognitive threat is

The primary cognitive threat is us[3]. The threat is driven by a combination of factors: firstly, the techno-futurist hubris which exists as a permanently recycling feature of late-modern military thought; secondly, a precipitous slide into scientism which military thinkers and the organisations they populate have not avoided[4]; thirdly, the commercial and financial rent-seeking which overhangs military affairs as a by-product of private-sector-led R&D activities and government dependence on and cultivation of those activities, increasingly since the 1990s[5]; and lastly, adversary awareness of these dynamics and an increasing willingness and capacity to manipulate and exacerbate them via the multitude of vulnerabilities ushered in by digital hyper-connectivity[6]. In other words, before the cognitive threat is an operational and tactical menace to be addressed and countered by the joint force, it is a central feature of the deteriorating epistemic condition of the late-modern societies in which said forces operate and from which their personnel, funding, R&D pathways, doctrine and operating concepts, epistemic communities, and strategic leadership emerge.

What the cognitive threat is not   

The cognitive threat is not what adversary military organisations and their patrons are doing in and to the information environment with regard to activities other than kinetic military operations. Terms for adversarial activities occurring outside of conventional lethal/kinetic combat operations – such as the “grey-zone” and “below-the-threshold” – describe time-honoured tactics by which interlocutors engage in methods aimed at weakening and sowing dysfunction in the social and political fabric of competitor or enemy societies. These tactics are used to gain advantage in areas not directly including military conflict, or in areas likely to be critical to military preparedness and mobilization in times of war[7]. A key stumbling block here is obvious: it is often difficult to know which intentions such tactics express. This is not cognitive warfare. It is merely typical of contending across and between cross-cultural communities, and of the permanent unwillingness of contending societies to accord with the other’s rules. Information warfare – particularly influence operations traversing the Internet and exploiting the dominant commercial operations found there – is part of this mix of activities which belong under the normal paradigm of competition between states for strategic advantage. Active measures – influence operations designed to self-perpetuate – have found fertile new ground on the Internet but are not new to the arsenals of intelligence services and, as Thomas Rid has warned, while they proliferate, they are more unpredictable and difficult to control than they were in the pre-Internet era[8]. None of this is cognitive warfare either. Unfortunately, current and recent discourse has lapsed into the error of treating it as such[9], leading to all manner of self-inflicted confusion[10].

Why the distinction matters

Two trends emerge from the abovementioned confusion which represent the most immediate threat to the military enterprise[11]. Firstly, private-sector vendors and the consulting and lobbying industry they employ are busily pitching technological solutions based on machine learning and automation which were developed in commercial business settings where sensitivity to error is not high[12]. While militaries experiment with this raft of technologies – eager to be seen at the vanguard of emerging tech, to justify R&D budgets and stave off defunding, or simply out of habit – they incur an opportunity cost. This cost is best described as stultified investment in the human potential which strategic thinkers have long identified as the real key to actualizing new technologies[13], and as entry into path dependencies with behemoth corporate actors whose strategic goal is the cultivation of rentier relations, not excluding the ever-lucrative military sector[14].

Secondly, to the extent that automation and machine learning technologies enter the operational picture, cognitive debt is accrued as the military enterprise becomes increasingly dependent on fallible tech solutions[15]. Under battle conditions, the first assumption is the contestation of the electromagnetic spectrum on which all digital information technologies depend for basic functionality. Automated data gathering and analysis tools suffer from heavy reliance on data availability and integrity. When these tools are unavailable, any joint multinational force will require multiple redundancies, not only in terms of technology but, more importantly, in terms of leadership and personnel competencies. It remains unclear where the military enterprise draws the line in terms of the likely cost-benefit ratio when it comes to experimenting with automated machine learning tools and the contexts in which they ought to be applied[16]. Unfortunately, experimentation is never cost-free. When civilian/military boundaries are blurred to the extent they are now as a result of the digital transformation of society, such experimentation requires consideration in light of all of its implications, including for the integrity and functionality of open democracy as the entity being defended[17].

The first error of misinterpreting the meaning and bounds of cognitive insecurity is compounded by a second mistake: what the military enterprise chooses to invest time, attention, and resources into tomorrow[18]. Path dependency, technological lock-in, and opportunity cost all loom large if digital information age threats are misinterpreted. This is the solipsistic nature of the cognitive threat at work – the weapon really is you! Putting one’s feet in the shoes of the adversary, nothing could be more pleasing than seeing that threat self-perpetuate. As a first step, militaries could organise and invest immediately in a strategic technology assessment capacity[19] free from the biases of rent-seeking vendors and lobbyists who, by definition, will not only not pay the costs of mission failure, but stand to benefit from the rentier-like dependencies that emerge as the military enterprise pays the corporate sector to play in the digital age.


Endnotes:

[1] Zac Rogers, “158. In the Cognitive War – The Weapon Is You!,” Mad Scientist Laboratory (blog), July 1, 2019, https://madsciblog.tradoc.army.mil/158-in-the-cognitive-war-the-weapon-is-you/.

[2] Francois du Cluzel, “Cognitive Warfare” (Innovation Hub, 2020), https://www.innovationhub-act.org/sites/default/files/2021-01/20210122_CW%20Final.pdf.

[3] “us” refers primarily but not exclusively to the national security, intelligence, and defence communities taking up discourse on cognitive security and its threats including Australia, the U.S., U.K., Europe, and other liberal democratic nations. 

[4] Henry Bauer, “Science in the 21st Century: Knowledge Monopolies and Research Cartels,” Journal of Scientific Exploration 18 (December 1, 2004); Matthew B. Crawford, “How Science Has Been Corrupted,” UnHerd, December 21, 2021, https://unherd.com/2021/12/how-science-has-been-corrupted-2/; William A. Wilson, “Scientific Regress,” First Things, May 2016, https://www.firstthings.com/article/2016/05/scientific-regress; Philip Mirowski, Science-Mart (Harvard University Press, 2011).

[5] Dima P Adamsky, “Through the Looking Glass: The Soviet Military-Technical Revolution and the American Revolution in Military Affairs,” Journal of Strategic Studies 31, no. 2 (2008): 257–94, https://doi.org/10.1080/01402390801940443; Linda Weiss, America Inc.?: Innovation and Enterprise in the National Security State (Cornell University Press, 2014); Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths (Penguin UK, 2018).

[6] Timothy L. Thomas, “Russian Forecasts of Future War,” Military Review, June 2019, https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/MJ-19/Thomas-Russian-Forecast.pdf; Nathan Beauchamp-Mustafaga, “Cognitive Domain Operations: The PLA’s New Holistic Concept for Influence Operations,” China Brief, The Jamestown Foundation 19, no. 16 (September 2019), https://jamestown.org/program/cognitive-domain-operations-the-plas-new-holistic-concept-for-influence-operations/.

[7] See Peter Layton, “Social Mobilisation in a Contested Environment,” The Strategist, August 5, 2019, https://www.aspistrategist.org.au/social-mobilisation-in-a-contested-environment/; Peter Layton, “Mobilisation in the Information Technology Era,” The Forge (blog), https://theforge.defence.gov.au/publications/mobilisation-information-technology-era.

[8] Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare, Illustrated edition (New York: MACMILLAN USA, 2020).

[9] For example see Jake Harrington and Riley McCabe, “Detect and Understand: Modernizing Intelligence for the Gray Zone,” CSIS Briefs (Center for Strategic & International Studies, December 2021), https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/211207_Harrington_Detect_Understand.pdf?CXBQPSNhUjec_inYLB7SFAaO_8kBnKrQ; du Cluzel, “Cognitive Warfare”; Kimberly Underwood, “Cognitive Warfare Will Be Deciding Factor in Battle,” SIGNAL Magazine, August 15, 2017, https://www.afcea.org/content/cognitive-warfare-will-be-deciding-factor-battle; Nicholas D. Wright, “Cognitive Defense of the Joint Force in a Digitizing World” (Pentagon Joint Staff Strategic Multilayer Assessment Group, July 2021), https://nsiteam.com/cognitive-defense-of-the-joint-force-in-a-digitizing-world/.

[10] Zac Rogers and Jason Logue, “Truth as Fiction: The Dangers of Hubris in the Information Environment,” The Strategist, February 14, 2020, https://www.aspistrategist.org.au/truth-as-fiction-the-dangers-of-hubris-in-the-information-environment/.

[11] For more on this see Zac Rogers, “The Promise of Strategic Gain in the Information Age: What Happened?,” Cyber Defense Review 6, no. 1 (Winter 2021): 81–105.

[12] Rodney Brooks, “An Inconvenient Truth About AI,” IEEE Spectrum, September 29, 2021, https://spectrum.ieee.org/rodney-brooks-ai.

[13] Michael Horowitz and Casey Mahoney, “Artificial Intelligence and the Military: Technology Is Only Half the Battle,” War on the Rocks, December 25, 2018, https://warontherocks.com/2018/12/artificial-intelligence-and-the-military-technology-is-only-half-the-battle/.

[14] Jathan Sadowski, “The Internet of Landlords: Digital Platforms and New Mechanisms of Rentier Capitalism,” Antipode 52, no. 2 (2020): 562–80, https://doi.org/10.1111/anti.12595.

[15] For problematic example see Ben Collier and Lydia Wilson, “Governments Try to Fight Crime via Google Ads,” New Lines Magazine (blog), January 4, 2022, https://newlinesmag.com/reportage/governments-try-to-fight-crime-via-google-ads/.

[16] Zac Rogers, “Discrete, Specified, Assigned, and Bounded Problems: The Appropriate Areas for AI Contributions to National Security,” SMA Invited Perspectives (NSI Inc., December 31, 2019), https://nsiteam.com/discrete-specified-assigned-and-bounded-problems-the-appropriate-areas-for-ai-contributions-to-national-security/.

[17] Emily Bienvenue and Zac Rogers, “Strategic Army: Developing Trust in the Shifting Strategic Landscape,” Joint Force Quarterly 95 (November 2019): 4–14.

[18] Zac Rogers, “Goodhart’s Law: Why the Future of Conflict Will Not Be Data-Driven,” Grounded Curiosity (blog), February 13, 2021, https://groundedcuriosity.com/goodharts-law-why-the-future-of-conflict-will-not-be-data-driven/.

[19] For expansion see Zac Rogers and Emily Bienvenue, “Combined Information Overlay for Situational Awareness in the Digital Anthropological Terrain: Reclaiming Information for the Warfighter,” The Cyber Defense Review, no. Summer Edition (2021), https://cyberdefensereview.army.mil/Portals/6/Documents/2021_summer_cdr/06_Rogers_Bienvenue_CDR_V6N3_2021.pdf?ver=6qlw1l02DXt1A_1n5KrL4g%3d%3d.


Options to Counter Foreign Influence Operations Targeting Servicemember and Veterans

Marcus Laird has served in the United States Air Force. He presently works at Headquarters Air Force Reserve Command as a Strategic Plans and Programs Officer. He can be found on Twitter @USLairdForce.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.  Divergent Options does not contain official information nor is it affiliated with the Department of Defense or the U.S. Air Force. The following opinion is the author’s alone and is not official Air Force or Department of Defense policy. This publication was reviewed by AFRC/PA and is cleared for public release and unlimited distribution.


National Security Situation:  Foreign Actors are using Social Media to influence Servicemember and Veteran communities. 

Date Originally Written:  December 2, 2021.

Date Originally Published:  January 3, 2022.

Author and / or Article Point of View:  The author is a military member who has previously researched the impact of social media on US military internal dialogue for professional military education and graduate courses. 

Background:  During the lead-up to the 2016 election, members of the U.S. Army Reserve were specifically targeted at least ten times by advertisements on Facebook purchased by Russia’s Internet Research Agency[1]. In 2017, the Vietnam Veterans of America (VVA) also detected social media profiles which were sophisticated mimics of their official web pages. These web pages were created for several reasons, including identity theft, fraud, and disseminating disinformation favorable to Russia. Further investigation revealed a network of fake personas attempting to make inroads within online military and veteran communities for the purpose of bolstering persona credibility to spread disinformation. Because these mimics used VVA logos, VVA was able to have these web pages deplatformed after two months due to trademark infringement[2].  

Separately, military influencers, after building a substantial following, have chosen to sell their personas as a means of monetizing their social media brands. While foreign adversary networks have not yet incorporated this technique for building an audience, the purchase of a persona is essentially an opportunity to acquire a turnkey information operation platform. 

Significance:  Servicemembers and veterans are trusted voices within their communities on matters of national security. The special trust society places on these communities makes them a particularly lucrative target for an adversary seeking to influence public opinion and shape policy debates[3]. Social media is optimized for advertising, allowing specific demographics to be targeted with unprecedented precision. Unchecked, adversaries can use this capability to sow mistrust, degrade unit cohesion, and spread disinformation through advertisements, mimicking legitimate organizations, or purchasing a trusted persona. 

Option #1:  Closing Legislative Loopholes 

Currently, foreign entities are prohibited from directly contributing to campaigns. However, there is no legal prohibition on foreign entities purchasing advertising for the purpose of influencing elections. Using legislative means to close this loophole would deny adversaries the ability to abuse platforms’ microtargeting capabilities for political influence[4].

Risk:  Enforcement – As evidenced during inquiries into election interference, enforcement could prove difficult. Enforcement relies on good faith efforts by platforms to conduct internal assessments of sophisticated actors’ affiliations and intentions and report them. Additionally, government agencies have neither backend system access nor adequate resources to forensically investigate every potential instance of foreign advertising.

Gain:  Such a solution would protect society as a whole, including the military and veteran communities. Legislation would include reporting and data retention requirements for platforms, allowing for earlier detection of potential information operations. Ideally, regulation would prompt platforms to tailor their content moderation standards around political advertising to create additional barriers for foreign entities.  

Option #2:  Deplatforming on the Grounds of Trademark Infringement

Should a foreign adversary attempt to use sophisticated mimicry of official accounts to achieve a veneer of credibility, then the government may elect to request a platform remove a user or network of users on the basis of trademark infringement. This technique was successfully employed by the VVA in 2017. Military services have trademark offices, which license the use of their official logos and can serve as focal points for removing unauthorized materials[5].

Risk:  Resources – Since trademark offices are self-funded and rely on royalties for operations, they may not be adequately resourced to challenge large-scale trademark infringement by foreign actors.

Personnel – Personnel in trademark offices may not have adequate training to determine whether a U.S. person or a foreign entity is using the organization’s trademarked materials. Failure to adequately delineate between U.S. persons and foreign actors when requesting to deplatform a user potentially infringes upon civil liberties. 

Gain:  Developing agency response protocols using existing intellectual property laws ensures responses are coordinated between the government and platforms as opposed to a pickup game during an ongoing operation. Regular deplatforming can also help develop signatures for sophisticated mimicry, allowing for more rapid detection and mitigation by the platforms. 

Option #3:  Subject the Sale of Influence Networks to Review by the Committee on Foreign Investment in the United States (CFIUS) 

Inform platform owners of the intent of CFIUS to review the sale of all influence networks and credentials which specifically market to military and veteran communities. CFIUS review has been used to prevent the acquisition of applications by foreign entities. Specifically, in 2019 CFIUS retroactively reviewed the purchase of Grindr, an LGBTQ+ dating application, due to national security concerns about the potential for the Chinese firm Kunlun to pass sensitive data to the Chinese government.  Data associated with veteran and servicemember social networks could be similarly protected[6]. 

Risk:  Enforcement – Due to the large number of influencers and the unknown scope of the problem, enforcement may be difficult in real time. In the event a sale occurs, ex post facto CFIUS review would provide a remedy.  

Gain:  Such a notification should prompt platforms to craft governance policies around the sale and transfer of personas to allow for more transparency and reporting.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1] Goldsmith, K. (2020). An Investigation Into Foreign Entities Who Are Targeting Servicemembers and Veterans Online. Vietnam Veterans of America. Retrieved September 17, 2019, from https://vva.org/trollreport/, 108.

[2] Ibid, 6-7.

[3] Gallacher, J. D., Barash, V., Howard, P. N., & Kelly, J. (2018). Junk news on military affairs and national security: Social media disinformation campaigns against us military personnel and veterans. arXiv preprint arXiv:1802.03572.

[4] Wertheimer, F. (2019, May 28). Loopholes allow foreign adversaries to legally interfere in U.S. elections. Just Security. Retrieved December 10, 2021, from https://www.justsecurity.org/64324/loopholes-allow-foreign-adversaries-to-legally-interfere-in-u-s-elections/.

[5] Air Force Trademark Office. (n.d.). Retrieved December 3, 2021, from https://www.trademark.af.mil/Licensing/Applications.aspx.

[6] Kara-Pabani, K., & Sherman, J. (2021, May 11). How a Norwegian government report shows the limits of CFIUS data reviews. Lawfare. Retrieved December 10, 2021, from https://www.lawfareblog.com/how-norwegian-government-report-shows-limits-cfius-data-reviews.


Analyzing Social Media as a Means to Undermine the United States

Michael Martinez is a consultant who specializes in data analysis, project management, and community engagement. He has an M.S. in Intelligence Management from University of Maryland University College. He can be found on Twitter @MichaelMartinez. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Analyzing Social Media as a Means to Undermine the United States

Date Originally Written:  November 30, 2021.

Date Originally Published:  December 27, 2021.

Author and / or Article Point of View:  The author believes that social media is not inherently good nor bad, but a tool to enhance discussion. Unless the national security apparatus understands how to best utilize Open Source Intelligence to achieve its stated goals, i.e. engaging the public on social media and public forums, it will lag behind its adversaries in this space.

Summary:  Stopping online radicalization of all varieties is complex and involves the individual, the government, social media companies, and Internet Service Providers. Artificial intelligence reviewing information online and flagging potential threats may not be adequate. Only through public-private partnerships can an effective system be created to support anti-radicalization endeavors.

Text:  The adage, “If you’re not paying for the product, you are the product[1],” has never been more true than in the age of social media. Every user’s click and purchase are recorded by private entities such as Facebook and Twitter. These records can be utilized by other nations to gather information on the United States economy and intellectual property, as well as information on government personnel and agencies. This collation of data can be packaged together and used to inform operations that prey on U.S. personnel.  Examples include extortion through ransomware, an adversary intelligence service probing an employee for specific national information by appealing to their subject matter expertise, and online influence / radicalization.

It is crucial to accept that the United States and its citizens are more heavily reliant on social media than ever before. Social media entities such as Meta (formerly Facebook) have new and yet-to-be-released products for children (i.e., the “Instagram for Kids” product), enabling adversaries to prey upon potential targets of any age. Terrorist organizations such as Al-Qaeda utilize cartoons on outlets like YouTube and Instagram to entice vulnerable youth to carry out attacks or help radicalize potential suicide bombers[2]. 

While Facebook and YouTube are the most common across most age groups, TikTok and Snapchat have undergone a meteoric rise among youth under thirty[3]. Intelligence services and terrorist organizations have vastly improved their online recruiting techniques, including video and media, as the platforms have become just as sophisticated. Unless federal, state, and local governments strengthen their public-private partnerships to stay ahead of growth in next-generation social media platforms, this adversary behavior will continue.  The national security community has tools at its disposal to help protect Americans from being coerced into cybercrime, or from being radicalized by overseas entities such as the Islamic State to potentially carry out domestic attacks.

To counter such trends in social media radicalization, the National Institute of Justice (NIJ) worked with the National Academies to identify traits and agendas to facilitate disruption of these efforts. Identified needs include functional databases, considering links between terrorism and lesser crimes, and exploring the culture of terrorism, including its structure and goals[4]. While a solid federal infrastructure and deterrence mechanism is vital, it is also important for the social media platforms themselves to eliminate radical media that may influence at-risk individuals. 

According to the NIJ, several characteristics contribute to social media radicalization: being unemployed, being a loner, having a criminal history, a history of mental illness, and prior military experience[5]. These are only potential factors and do not apply to all who are radicalized[6]. However, they do provide a base from which to begin investigation and mitigation strategies. 

As a long-term solution, the Bipartisan Policy Center recommends enacting and teaching media literacy so that citizens can understand and spot internet radicalization[7]. Social media algorithms are not foolproof. They require the cyberspace equivalent of “see something, say something,” with users reporting suspicious activity to the platforms. The risk of these companies not acting is also significant: their main goal is monetization, and acting in this manner does not help them make more money. This inaction is where the government steps in to ensure that private enterprise is not impeding national security. 

Creating a system that works will balance the rights of the individual with the national security of the United States. It will also respect the rights of private enterprise and the pipelines that carry the information to homes, the Internet Service Providers. Until this system can be created, the radicalization of Americans will be a pitfall for the entire National Security apparatus. 


Endnotes:

[1] Oremus, W. (2018, April 27). Are You Really the Product? Retrieved on November 15, 2021, from https://slate.com/technology/2018/04/are-you-really-facebooks-product-the-history-of-a-dangerous-idea.html. 

[2] Thompson, R. (2011). Radicalization and the Use of Social Media. Journal of Strategic Security, 4(4), 167–190. http://www.jstor.org/stable/26463917 

[3] Pew Research Center. (2021, April 7). Social Media Use in 2021. Retrieved from https://www.pewresearch.org/internet/2021/04/07/social-media-use-in-2021/ 

[4] Aisha Javed Qureshi, “Understanding Domestic Radicalization and Terrorism,” August 14, 2020, nij.ojp.gov: https://nij.ojp.gov/topics/articles/understanding-domestic-radicalization-and-terrorism.

[5] The National Counterintelligence and Security Center. Intelligence Threats & Social Media Deception. Retrieved November 15, 2021, from https://www.dni.gov/index.php/ncsc-features/2780-ncsc-intelligence-threats-social-media-deception. 

[6] Schleffer, G., & Miller, B. (2021). The Political Effects of Social Media Platforms on Different Regime Types. Austin, TX. Retrieved November 29, 2021, from http://dx.doi.org/10.26153/tsw/13987. 

[7] Bipartisan Policy Center. (2012, December). Countering Online Radicalization in America. Retrieved November 29, 2021, from https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2019/03/BPC-_Online-Radicalization-Report.pdf 


Assessing Russian Use of Social Media as a Means to Influence U.S. Policy

Alex Buck is a currently serving officer in the Canadian Armed Forces. He has deployed twice to Afghanistan, once to Ukraine, and is now working towards an MA in National Security.  Alex can be found on Twitter @RCRbuck.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessing Russian Use of Social Media as a Means to Influence U.S. Policy

Date Originally Written:  August 29, 2021.

Date Originally Published:  December 13, 2021.

Author and / or Article Point of View: The author believes that without appropriate action, the United States’ political climate will continue to be exploited by Russian influence campaigns. These campaigns will have broad impacts across the Western world, and potentially generate an increased competitive advantage for Russia.

Summary:  To achieve a competitive advantage over the United States, Russia uses social media-based influence campaigns to influence American foreign policy. Political polarization makes the United States an optimal target for such campaigns. 

Text:  Russia aspires to regain the influence over the international system that it once had as the Soviet Union. To achieve this aim, Russia’s interest lies in building a stronger economy and expanding its regional influence over Eastern Europe[1]. Following the Cold War, Russia recognized that these national interests were at risk of being completely destroyed by Western influence. The Russian economy was threatened by the United States’ unipolar hegemony over the global economy[2]. A strong North Atlantic Treaty Organization (NATO) has threatened Russia’s regional influence in Eastern Europe. NATO’s collective security agreement was originally conceived to counter the Soviet threat following World War II and has continued to do so to this day. Through the late 1990s and early 2000s, NATO expanded its membership to include former Soviet states in Eastern Europe. This expansion was done in an effort to reduce Russian regional influence[1]. Russia perceives these actions as a threat to its survival as a state, and needs a method to regain competitive advantage.

Following the Cold War, Russia began to identify opportunities they could exploit to increase their competitive advantage in the international system. One of those opportunities began to develop in the early-2000s as social media emerged. During this time, social media began to impact American culture in such a significant way that it could not be ignored. Social media has two significant impacts on society. First, it causes people to create very dense clusters of social connections. Second, these clusters are populated by very similar types of people[3]. These two factors caused follow-on effects to American society in that they created a divided social structure and an extremely polarized political system. Russia viewed these as opportunities ripe for their exploitation. Russia sees U.S. social media as a cost-effective medium to exert influence on the United States. 

In the late 2000s, Russia began experimenting with its concept of exploiting the cyber domain as a means of exerting influence on other nation-states. After the successful use of cyber operations against Ukraine, Estonia, Georgia, and again Ukraine in 2004, 2007, 2008, and 2014 respectively, Russia was poised to attempt its concept against the United States and NATO[4]. In 2014, Russia slowly built a network of social media accounts that would eventually begin sowing disinformation amongst American social media users[3]. The significance of the Russian information campaign leading up to the 2016 U.S. presidential election cannot be overstated. The Russian Internet Research Agency propagated ~10.4 million tweets on Twitter, 76.5 million engagements on Facebook, and 187 million engagements on Instagram[5]. Although this may seem like a small-scale effort within the context of 200 billion tweets sent annually, the targeted nature of the tweets contributed to their effectiveness. This Russian social media campaign was estimated to expose between 110 and 130 million American social media users to misinformation aimed at skewing the results of the presidential election[3]. For perspective, the 2000 presidential election was decided by 537 votes in the state of Florida; against an audience of that size, a campaign need be only about 0.00049% effective to sway a comparable electoral margin.
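The arithmetic behind that 0.00049% figure can be checked directly. A minimal sketch, using the 537-vote Florida margin and the low end (110 million) of the exposure estimate cited above:

```python
# Back-of-the-envelope check: what fraction of the exposed audience would
# need to change their vote to match the 2000 Florida margin?
decisive_margin = 537            # votes deciding Florida in 2000
exposed_users = 110_000_000      # low-end estimate of Americans exposed

effectiveness_pct = decisive_margin / exposed_users * 100
print(f"{effectiveness_pct:.5f}%")  # → 0.00049%
```

This is an illustrative ceiling, not a claim about actual vote changes; it shows only how small an effect, in percentage terms, would suffice against an audience of that size.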

The bifurcated nature of the current American political arena has created the perfect target for Russian attacks via the cyber domain. Due to the persistently slim margins of electoral results, Russia will continue to exploit this opportunity until it achieves its national aims and gains a competitive advantage over the United States. Social media’s influence offers Russia a cost-effective and highly impactful tool that has the potential to sway American policies in its favor. Without coherent strategies to protect national networks and decrease Russian social influence, the United States, and the broader Western world, will continue to be subject to Russian influence. 


Endnotes:

[1] Arakelyan, L. A. (2017). Russian Foreign Policy in Eurasia: National Interests and Regional Integration (1st ed.). Routledge. https://doi.org/10.4324/9781315468372

[2] Blank, S. (2008). Threats to and from Russia: An Assessment. The Journal of Slavic Military Studies, 21(3), 491–526. https://doi.org/10.1080/13518040802313746

[3] Aral, S. (2020). The hype machine: How social media disrupts our elections, our economy, and our health–and how we must adapt (First edition). Currency.

[4] Geers, K. & NATO Cooperative Cyber Defence Centre of Excellence. (2015). Cyber war in perspective: Russian aggression against Ukraine. https://www.ccdcoe.org/library/publications/cyber-war-in-perspective-russian-aggression-against-ukraine/

[5] DiResta, R., Shaffer, K., Ruppel, B., Sullivan, D., & Matney, R. (2019). The Tactics & Tropes of the Internet Research Agency. US Senate Documents.


Assessing the Threat from Social Media Enabled Domestic Extremism in an Era of Stagnant Political Imagination


David Nwaeze is a freelance journalist and former political organizer based out of the U.S. Pacific Northwest, who has spent over two decades among the U.S. left.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessing the Threat from Social Media Enabled Domestic Extremism in an Era of Stagnant Political Imagination

Date Originally Written:  November 19, 2021.

Date Originally Published:  November 29, 2021.

Author and / or Article Point of View:  The author contends that despite efforts by legislators and social media platforms to reduce online mediated domestic extremism, America’s political stagnation is a chief contributor to the appeal of extremist movements to the domestic public at large.

Summary:  Social media is enabling domestic extremism. Where recruitment and incitement to action once took a great deal more effort for domestic extremists, they can now instantly attract much larger audiences through social media. While some may blame social media echo chambers for the growth of domestic extremism in recent years, equally culpable is stagnant political imagination within the U.S.

Text:  The threat of social media enabled domestic extremism in the U.S. is all too real today.  Below are some recent examples:

– An 18-year-old Army National Guardsman murders two of his housemates in Tampa, Florida[1]. A fourth is later arrested, tried, and convicted of possessing explosive material. All four are members of a Neo-Nazi organization that’s been built up in the preceding years through social media chatrooms.

– A drive-by shooting takes the life of a security guard on contract with the Federal Protective Service at the Ronald V. Dellums Federal Building in Oakland, California[2]. The following weekend, a suspect in the attack kills a Santa Cruz County Sheriff’s Sergeant and injures a deputy seeking to arrest him in relation to the attack. This suspect – along with another man – is arrested. Both suspects were part of an online subculture organized mainly over social media, oriented around preparing for or inciting a second American Civil War, or “boogaloo.”

– A mob of rioters storms the United States Capitol building in Washington, D.C.[3]. In the chaos, there were five deaths and an unknown number of injuries[4]. To date, 695 individuals have been charged with crimes associated with this event[5]. Inspired by a mix of social media-mediated conspiracy theories, the January 6th attack would go on to wake America up to the real-world threat posed by online extremism.

As we enter 2022, Americans are witnessing an uneasy calm following a violent awakening to the threat of social media enabled domestic extremism. What got us here? It is easy to look at recent history as moments in the process of evolution for online radicalization and mobilization toward violence:

– Email list-servs are used in 1999 to organize the shutdown of the World Trade Organization’s conference in Seattle[6]

– Al Qa’ida uses early social media as a propaganda and recruitment tool[7]

– The Islamic State dramatically improves this technique[7]

To understand where we are, the fundamental character of the present American mediascape needs examination. Today’s mediascape is unlike anything in human history. The internet provides an immense capability for anyone with a connection to transmit ideas and information at nearly instantaneous speeds worldwide. With this opportunity, however, comes the deep risk of information bottlenecks. According to SEO marketing strategist Brian Dean[8], “93.33% of [the] 4.8 billion global internet users and 85% of [the] 5.27 billion mobile phone users are on social media.” This means that the most popular social media platforms (Facebook, YouTube, WhatsApp, Instagram, etc.) substantially impact how internet users connect to news, information, and ideas.

Additionally, in late October 2021, Facebook whistleblower Frances Haugen leaked documents now known as The Facebook Papers[9]. In her testimony since the leak, she has identified the moral hazard faced by social media companies as their “engagement-based metrics” create “echo chambers that create social norms” which exacerbate the “normalization of hate, a normalization of dehumanizing others.” In her words, “that’s what leads to violent incidents[10].” In a threat-free environment, this would be worrying enough on its own. However, the American people face many ideological opponents – both at home and abroad – who seek to leverage this media space to violent ends. To understand U.S. vulnerability to these threats, let’s examine the underlying character of our political environment since “the end of history.”

In “The End of History and the Last Man[11],” Francis Fukuyama presents a case for liberal democracy as the apotheosis and conclusion of historical ideological struggle as viewed through the Hegelian lens. In his words, the end of the Cold War brought us to “the end-point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.” Stemming from this, Fukuyama holds that humanity, once having achieved this “end-point,” would reach a condition in which “there would be no further progress in the development of underlying principles and institutions, because all of the really big questions had been settled.” In other words, he concludes that humanity is no longer capable of achieving far-reaching social change. This political theory – and the phenomenon it sought to characterize – found its expression during the 1990s in the rise of neoliberalism, and its attendant policy shifts, in the Anglo-American political space away from the welfare state and toward finance-capital-mediated economic goals. Such subsequent ideas have come to define the limits of the American political space.

In “The Return of History and the End of Dreams[12],” Robert Kagan responded to Fukuyama by framing an international political struggle characterized by the rise of a new impulse toward autocracy, led by Russia and China. Kagan goes on to propose that a “concert of world democracies” work together to challenge this new international autocratic threat. Kagan’s solution to “the end of dreams” is to awaken to the ideological struggle at hand and rise to its challenge of identifying and affirming our values and promoting the fulfillment of democratic political dreams abroad.

In the spirit of Kagan’s response to Fukuyama, America won’t rise to meet the dual challenges of social media’s capability to enable far-reaching social change and the inevitability of ideological struggle with domestic extremists until it accepts that history has not ended.  Unless America can assertively identify and affirm its underlying national values, the convergence of information echo chambers with stagnant political imagination will continue to motivate this threat to U.S. national security.  America has seen the warning signs in the headlines. History illustrates what this may portend if not abated. America’s enemies are many. Chief among them, however, is dreamless slumber.


Endnotes:

[1] Thompson, A.C., Winston, A. and Hanrahan, J., (2018, February 23). Inside Atomwaffen As It Celebrates a Member for Allegedly Killing a Gay Jewish College Student. Retrieved November 19, 2021 from: https://www.propublica.org/article/atomwaffen-division-inside-white-hate-group

[2] Winston, A., (2020, September 25). The Boogaloo Cop Killers. Retrieved November 19, 2021 from: https://www.popularfront.co/boogaloo-cop-killers

[3] Reeves, J., Mascaro, L., and Woodward, C., (2021, January 11). Capitol assault a more sinister attack than first appeared. Retrieved November 19, 2021 from: https://apnews.com/article/us-capitol-attack-14c73ee280c256ab4ec193ac0f49ad54

[4] McEvoy, J., (2021, January 8). Woman Possibly ‘Crushed To Death’: These Are The Five People Who Died Amid Pro-Trump Riots. Retrieved November 19, 2021 from: https://www.forbes.com/sites/jemimamcevoy/2021/01/08/woman-possibly-crushed-to-death-these-are-the-five-people-who-died-amid-pro-trump-riots/

[5] Hall, M., Gould, S., Harrington, R., Shamisian, J., Haroun, A., Ardrey, T., and Snodgrass, E., (2021, November 16). 695 people have been charged in the Capitol insurrection so far. This searchable table shows them all. Retrieved November 19, 2021 from: https://www.insider.com/all-the-us-capitol-pro-trump-riot-arrests-charges-names-2021-1

[6] Arquilla, J., & Ronfeldt, D. (2001). Networks and Netwars: The Future of Terror, Crime, and Militancy. RAND Corporation.

[7] Byman, D. L. (2015, April 29). Comparing Al Qaeda and ISIS: Different goals, different targets. Retrieved November 19, 2021 from: https://www.brookings.edu/testimonies/comparing-al-qaeda-and-isis-different-goals-different-targets

[8] Dean, B. (2021, October 10). Social Network Usage & Growth Statistics: How Many People Use Social Media in 2021?. Retrieved November 19, 2021 from: https://backlinko.com/social-media-users

[9] Chappell, B. (2021, October 25). The Facebook Papers: What you need to know about the trove of insider documents. Retrieved November 19, 2021 from: https://www.npr.org/2021/10/25/1049015366/the-facebook-papers-what-you-need-to-know

[10] Sky News. (2021, October 25). Facebook groups push people to “extreme interests”, says whistleblower Frances Haugen. Retrieved November 19, 2021 from: https://news.sky.com/story/facebook-groups-push-people-to-extreme-interests-says-whistleblower-frances-haugen-12444405

[11] Fukuyama, F. (1992). The End of History and the Last Man. Free Press.

[12] Kagan, R. (2009). The Return of History and the End of Dreams. Vintage.

Cyberspace David Nwaeze United States Violent Extremism

Assessing Australia’s Cyber-Attack Attribution Issues


Jackson Calder is the Founder and CEO of JC Ltd., a futures modeling firm specialising in geopolitical risk advisory based in New Zealand, and holds a Master of Strategic Studies from Victoria University of Wellington.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing Australia’s Cyber-Attack Attribution Issues

Date Originally Written:  August 11, 2021.

Date Originally Published:  September 27, 2021.

Author and / or Article Point of View:  The author believes that without more proactive and novel thinking by decision makers, strategic competition in the grey-zone is likely to continue to outpace meaningful policy responses.

Summary:  Recent years have proven that China can prevail over Australia in the grey-zone below the threshold of war, particularly through cyber-attacks that go without attribution. Without building trust between agencies, implementing the right training and education, and properly conceptualizing cyber warfare to bolster political will, Canberra will not strengthen attribution capabilities and achieve greater strategic agility in the cyber domain.

Text:  Making an official attribution of a cyber-attack is one of the key techno-political challenges faced by governments today. Using China-Australia tensions as a case study, one can analyse how capability gaps, technical expertise, and political will all play a role in shaping attribution, and assess how one state prevails over another in the grey-zone of conflict below the threshold of war. Thus far Australia has favoured freeriding upon its more powerful allies’ attribution capability vis-à-vis China, rather than making attributions of its own[1]. Unless Canberra greatly expands its cyber security and attribution capabilities it will not accrue more agency, independence and, ultimately, strategic agility in this domain.

Over the past three years Australia has been the victim of numerous large-scale cyber campaigns carried out by China, targeting critical infrastructure, political parties, and service providers. While Australian Prime Minister Scott Morrison did state that a “sophisticated state-based actor” perpetrated these attacks, his government has thus far made no public attribution to China[2]. Senior Australian officials have confirmed to media that they believe China is behind the attacks, raising questions around the lack of attribution[3].

Australia’s situation is representative of a wider strategic environment rife with frequent and sophisticated information operations, with China being a leading perpetrator of offensive cyber-attacks. Chinese hybrid warfare is undoubtedly inspired by Soviet political warfare dating back to the early 1920s, but is perhaps grounded more in the concept of ‘unrestricted warfare’ posited by Liang and Xiangsui in 1999[4]. This concept manifested in the ‘Three Warfares’ doctrine of the early 2000s, with offensive cyber operations being used as a key strategic tool since the PLA formed their Informatization Department in 2011[5]. Though described as ‘kinder weapons’, their ability to ‘strike at the enemy’s nerve center directly’ has indeed produced kinetic effects in recent years when used to sabotage critical infrastructure[6]. Whilst it is widely accepted that China is responsible for large-scale cyber operations, proving this can be a monumental task by virtue of cyber forensics being technically intensive and time-consuming.

In 2014, Thomas Rid and Ben Buchanan captured the nuance of cyber attribution well when they stated that ‘attribution is an art: no purely technical routine, simple or complex, can formalise, calculate, quantify, or fully automate attribution[7].’ While attribution is indeed an art, technical routines exist upon which to build attribution capability, and this gap is the crux of China’s prevailing over Australia in recent years. Canberra’s ‘freeriding’ on capabilities outside of the government, together with a lack of streamlined inter-agency processes and accountability, has severely limited its effectiveness in the cyber domain[8]. Attempts to remedy this have been made over the past two decades, with a number of agencies agreeing to communicate more and share responsibility for bringing an attribution forward, but these efforts have been hamstrung by endemic underinvestment. Consequently, Australia’s response to a greatly increased threat profile in the cyber domain ‘has been slow and fragmented’; thus, ‘Australia’s play-book is not blank but it looks very different from those of pace-setter countries[9].’

Improving the speed and integrity of an attribution begins with ensuring that cyber security practitioners are not over-specialised in training and education. Though it may seem counterintuitive, evidence suggests that the most effective practitioners utilise general-purpose software tools more than others[10]. This means that organisational investment into specialised cyber security tools will not translate directly into improved capability without also establishing a training and work environment that pursues pragmatism over convoluted hyper-specialisation.

Attribution is less likely when there are low levels of trust between the government and civilian organisations involved in cyber security, as this does not foster an operational environment conducive to the maturing of inter-agency responses. Trust is particularly important in Australia’s case in the relationship between more centralised intelligence agencies like the national Computer Emergency Response Team (CERT) based out of the Australian Cyber Security Centre and the civilian-run AusCERT. In 2017, Frank Smith and Graham Ingram addressed trust poignantly in stating that ‘the CERT community appears to have lacked the authority and funding needed to institutionalise trust – and thus depersonalise or professionalise it – enough to grow at scale[11].’ Trust between organisations, as well as between practitioners and the technology available to them, underpins the development of a robust and timely cyber security capability[12]. Without robust information sharing and clear lanes of responsibility, failure will occur.

Attribution requires political will, but competition in the cyber domain remains somewhat nebulous in its strategic conceptualisation, which constrains meaningful responses. If cyber war remains undefined, how do we know whether we are in one[13]? Conceptualising the grey-zone as the periphery of power competition, instead of as the centre of power competition itself, similarly confuses response thresholds and dampens political will. In 2016, James K. Wither stated that although information operations are non-kinetic, ‘the aim of their use remains Clausewitzian, that is to compel an opponent to bend to China’s will[14].’ Wither develops this point, arguing that within a rivalry dynamic where an ideological battle is also present, revisionist states wage hybrid warfare against the West ‘where, to reverse Clausewitz, peace is essentially a continuation of war by other means[15].’ Adopting this mindset is key to building political will, thereby improving attribution prospects independent of technical capability.

Finally, it is best to acknowledge that Australia’s geopolitical environment may make attribution a less preferable course of action, even if a robust case is made. Foreign Minister Payne has stated that Australia ‘publicly attributes cyber incidents’ only ‘when it is in our interest to do so[16].’ Until attribution is tied to concrete consequences for the perpetrator, Canberra’s strategic calculus is likely to weigh potential Chinese economic and diplomatic retaliation more heavily than any potential benefits of making an official attribution. Nevertheless, rapid and robust attribution capabilities, combined with the political will to use them, would give Canberra more options to compete effectively below the threshold of war.


Endnotes:

[1] Chiacu, D., & Holland, S. (2021, July 19). U.S. and allies accuse China of global hacking spree. Retrieved from https://www.reuters.com/technology/us-allies-accuse-china-global-cyber-hacking-campaign-2021-07-19/

[2] Packham, C. (2020, June 18). Australia sees China as main suspect in state-based cyberattacks, sources say. Retrieved from https://www.reuters.com/article/us-australia-cyber-idUSKBN23P3T5

[3] Greene, A. (2021, March 17). China suspected of cyber attack on WA Parliament during state election. Retrieved from https://www.abc.net.au/news/2021-03-17/wa-parliament-targeted-cyber-attack/13253926

[4] Liang, Q., & Xiangsui, W. (1999). Unrestricted warfare. Beijing, CN: PLA Literature and Arts Publishing House Arts. https://www.c4i.org/unrestricted.pdf

[5] Raska, M. (2015). Hybrid Warfare with Chinese Characteristics. (RSIS Commentaries, No. 262). RSIS Commentaries. Singapore: Nanyang Technological University. https://hdl.handle.net/10356/82086 p.1.

[6] Liang, Q., & Xiangsui, W. (1999). Unrestricted warfare. Beijing, CN: PLA Literature and Arts Publishing House Arts. https://www.c4i.org/unrestricted.pdf p.27.

[7] Rid, T., & Buchanan, B. (2014). Attributing Cyber Attacks. Journal of Strategic Studies, 38(1-2), 4-37. doi:10.1080/01402390.2014.977382 p.27.

[8] Smith, F., & Ingram, G. (2017). Organising cyber security in Australia and beyond. Australian Journal of International Affairs, 71(6), 642-660. doi:10.1080/10357718.2017.1320972 p.10.

[9] Joiner, K. F. (2017). How Australia can catch up to U.S. cyber resilience by understanding that cyber survivability test and evaluation drives defense investment. Information Security Journal: A Global Perspective, 26(2), 74-84. doi:10.1080/19393555.2017.1293198 p.1.

[10] Mcclain, J., Silva, A., Emmanuel, G., Anderson, B., Nauer, K., Abbott, R., & Forsythe, C. (2015). Human Performance Factors in Cyber Security Forensic Analysis. Procedia Manufacturing, 3, 5301-5307. doi:10.1016/j.promfg.2015.07.621 p.5306.

[11] Smith, F., & Ingram, G. (2017). Organising cyber security in Australia and beyond. Australian Journal of International Affairs, 71(6), 642-660. doi:10.1080/10357718.2017.1320972 p.14.

[12] Robinson, M., Jones, K., & Janicke, H. (2015). Cyber warfare: Issues and challenges. Computers & Security, 49, 70-94. doi:10.1016/j.cose.2014.11.007 p.48.

[13] Ibid, p.12.

[14] Wither, J. K. (2016). Making Sense of Hybrid Warfare. Connections: The Quarterly Journal, 15(2), 73-87. doi:10.11610/connections.15.2.06 p.78.

[15] Ibid, p.79.

[16] Payne, M. (2018, December 21). Attribution of Chinese cyber-enabled commercial intellectual property theft. Retrieved from https://www.foreignminister.gov.au/minister/marise-payne/media-release/attribution-chinese-cyber-enabled-commercial-intellectual-property-theft


Assessing the Application of a Cold War Strategic Framework to Establish Norms in the Cyber Threat Environment

Jason Atwell is an officer in the U.S. Army Reserve and a Senior Manager with FireEye, Inc. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Application of a Cold War Strategic Framework to Establish Norms in the Cyber Threat Environment

Date Originally Written:  December 28, 2020.

Date Originally Published:  March 29, 2021.

Author and / or Article Point of View:  The article is written from the point of view of the United States and its Western allies as they seek to impose order on the increasingly fluid and frequently volatile cyber threat environment.

Summary:  The continued growth and maturity of cyber operations as a means of state-sponsored espionage and, more recently, as a potential weapon of war, has generated a need for an “accepted” strategic framework governing their usage. To date, this framework remains unestablished. Cold War strategic frameworks could help govern the future conduct of cyber operations between nation states and bring some semblance of order to this chaotic battlespace.

Text:  The cyber threat environment continues to evolve and expand. Threat vectors like ransomware, a type of malicious software designed to block access to a computer system until a sum of money is paid, are now daily subjects for discussion among leaders in the public and private sectors alike. It is against this backdrop that high-level initiatives like the Cyberspace Solarium Commission have sought to formulate comprehensive, whole-of-government strategies for dealing with cyber threats and developing capabilities. The U.S. Department of Commerce’s National Institute of Standards and Technology issues a steady stream of best practices for cyber risk management and hygiene. Yet, no comprehensive framework to govern cyber operations at the macro, nation-to-nation level has emerged and achieved buy-in from all the affected parties. In fact, there are not even useful norms limiting the risk in many of these cyber interactions[1]. Industry leaders as well have lamented the lack of a coherent doctrine that governs relations in cyberspace and discourages violations of doctrinal norms[2]. In some ways the Cold War norms governing armed conflict, espionage, and economic competition can be used to provide much needed stability to cyber and cyber-enabled operations. In other ways, the framing of current problems in Cold War vocabulary and rhetoric has proved unworkable at best and counterproductive at worst.

Applying the accepted framework of great power interactions that was established during the Cold War presents both opportunities and challenges when it comes to the cyber threat environment. The rules which governed espionage especially, however informal in nature, helped to ensure both sides knew the red lines for conduct and could expect a standard response to common activities. On the individual level, frameworks like the informal “Moscow Rules” governed conduct and helped avoid physical confrontations[3]. When those rules were violated and espionage came into the open, clear consequences were prescribed via precedent. Persona-non-grata expulsions, facility closures, the use of neutral territories, exchanges, and arrests were predictable and useful controls on behavior and means to avoid escalation. The application of these consequences to cyber, such as the closure of Russian facilities and the expulsion of their diplomats, has been tried[4], however to little or no apparent effect as administrations have changed their approach over time. This uneven application of norms as cyber capabilities have advanced may in fact be leading the Russians in particular to abandon the old rules altogether[5]. In other areas, Cold War methods have been specifically avoided, such as the manner in which Chinese cyber operators have been indicted for the theft of intellectual property. Lowering this confrontation from high-level diplomatic brinkmanship to the criminal courts both prevents a serious confrontation and effectively renders any consequences moot due to issues with extradition and prosecution. The dynamics between the U.S. and China have attracted much discussion framed in Cold War terminology[6]. Indeed, the competition with China has many of the same hallmarks as the previous U.S.-Soviet Union dynamic[7]. What is missing is a knowledge of where the limits to each side’s patience lie when it comes to cyber activity.

Another important component of Cold War planning and strategy was an emphasis on continuity of operations and government authority and survivability in a crisis. This continuity was pursued as part of a deterrence model where both sides sought to either convince the other that they would endure a confrontation and / or decisively destroy their opposition. Current cyber planning tends to place an emphasis on the ability to achieve overmatch without placing a similar emphasis on resilience on the friendly side. Additionally, deterrence through denial of access or geophysical control cannot ever work in cyberspace due to its inherently accessible and evolving nature[8]. Adopting a mindset and strategic framework based on ensuring the ability of command and control networks to survive and retaliate in this environment will help to impose stability in the face of potentially devastating attacks involving critical infrastructure[9]. It is difficult to have mutually assured destruction in cyberspace at this phase, because “destruction” is still nebulous and potentially impossible in cyberspace, meaning that any eventual conflict that begins in that domain may still have to turn kinetic before Cold War models begin to function.

As cyber capabilities have expanded and matured over time, there has been an apparent failure to achieve consensus on what the red lines of cyber confrontation are. Some actors appear to abide by general rules, while others make it a point of exploring new ways to raise or lower the bar on acceptable actions in cyberspace. Meanwhile, criminals and non-aligned groups are just as aggressive with their operations as many terrorist groups were during the height of the Cold War, and they are similarly frequently used or discarded by nation states depending on the situation and the need. However, nation states on the two sides were useful bulwarks against overzealous actions, as they could exert influence over the actions of groups operating from their territory or abusing their patronage. Espionage in cyberspace will not stop, nor can a framework anticipate every possible scenario that may unfold. Despite these imperfections, in the future an issue like the SolarWinds breach could lead to a series of escalatory actions à la the Cuban Missile Crisis, or the cyber threat environment could be governed by a Strategic Arms Limitation Talks-like treaty which bans cyber intrusions into global supply chains[10]. Applying aspects of the Cold War strategic framework can begin to bring order to the chaos of the cyber threat environment, while also helping highlight areas where this framework falls short and new ways of thinking are needed.


Endnotes:

[1] Bremmer, I., & Kupchan, C. (2021, January 4). Risk 6: Cyber Tipping Point. Retrieved February 12, 2021, from https://www.eurasiagroup.net/live-post/top-risks-2021-risk-6-cyber-tipping-point 

[2] Brennan, M., & Mandia, K. (2020, December 20). Transcript: Kevin MANDIA on “Face the Nation,” December 20, 2020. Retrieved February 12, 2021, from https://www.cbsnews.com/news/transcript-kevin-mandia-on-face-the-nation-december-20-2020/ 

[3] Sanger, D. (2016, December 29). Obama Strikes Back at Russia for Election Hacking. Retrieved February 13, 2021, from https://www.nytimes.com/2016/12/29/us/politics/russia-election-hacking-sanctions.html 

[4] Zegart, A. (2021, January 04). Everybody Spies in Cyberspace. The US Must Plan Accordingly. Retrieved February 13, 2021, from https://www.defenseone.com/ideas/2021/01/everybody-spies-cyberspace-us-must-plan-accordingly/171112/

[5] Devine, J., & Masters, J. (2018, March 15). Has Russia Abandoned the Rules of Spy-Craft? Retrieved February 13, 2021, from https://www.cfr.org/interview/are-cold-war-spy-craft-norms-fading 

[6] Buchanan, B., & Cunningham, F. (2020, December 18). Preparing the Cyber Battlefield: Assessing a Novel Escalation risk in A Sino-American Crisis. Retrieved February 13, 2021, from https://tnsr.org/2020/10/preparing-the-cyber-battlefield-assessing-a-novel-escalation-risk-in-a-sino-american-crisis/ 

[7] Sayers, E. (2021, February 9). Thoughts on the Unfolding U.S.-Chinese Competition: Washington’s Policy Towards Beijing Enters its Next Phase. Retrieved February 13, 2021, from https://warontherocks.com/2021/02/thoughts-on-the-unfolding-u-s-chinese-competition-washingtons-policy-towards-beijing-enters-its-next-phase/ 

[8] Borghard, E., Jensen, B., & Montgomery, M. (2021, February 05). Elevating ‘Deterrence By Denial’ in U.S. Defense Strategy. Retrieved February 13, 2021, from https://www.realcleardefense.com/articles/2021/02/05/elevating_deterrence_by_denial_in_us_defense_strategy_659300.html 

[9] Borghard, E. (2021, January 04). A Grand Strategy Based on Resilience. Retrieved February 13, 2021, from https://warontherocks.com/2021/01/a-grand-strategy-based-on-resilience/ 

[10] Lubin, A. (2020, December 23). SolarWinds as a Constitutive Moment: A New Agenda for International Law of Intelligence. Retrieved February 13, 2021, from https://www.justsecurity.org/73989/solarwinds-as-a-constitutive-moment-a-new-agenda-for-the-international-law-of-intelligence/


Options to Enhance Security in U.S. Networked Combat Systems

Jason Atwell has served in the U.S. Army for over 17 years and has worked in intelligence and cyber for most of that time. He has been a Federal employee, a consultant, and a contractor at a dozen agencies and spent time overseas in several of those roles. He is currently a senior intelligence expert for FireEye, Inc. and works with government clients at all levels on cyber security strategy and planning.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  As combat systems within DoD become more connected via networks, this increases their vulnerability to adversary action.

Date Originally Written:  November 1, 2020.

Date Originally Published:  January 11, 2021.

Author and / or Article Point of View:  The author is a reservist in the U.S. Army and a cyber security and intelligence strategist for FireEye, Inc. in his day job. This article is intended to draw attention to the need for building resiliency into future combat systems by assessing vulnerabilities in networks, hardware, and software, as it is better to discover a software vulnerability such as a zero-day exploit in a platform like the F-35 during peacetime instead of during a crisis.

Background:  The United States is rushing to field a significant number of networked autonomous and semi-autonomous systems[1][2] while neglecting to secure those systems against cyber threats. This neglect is akin to the problem the developed world is having with industrial control systems and internet-of-things devices[3]. These systems are unique: they are everywhere and they are connected to the internet, but they are not secured like traditional desktop computers. These systems won’t provide cognitive edge or overmatch if they fail when it matters most due to poorly secured networks, compromised hardware, and untested or vulnerable software.

Significance:  Networked devices contain massive potential to increase the resiliency, effectiveness, and efficiency of the application of combat power[4]. Whether kinetic weapons systems, non-lethal information operations, or well-organized logistics and command and control, the advantages gained by applying high-speed networking and related developments in artificial intelligence and process automation will almost certainly be decisive in future armed conflict. However, reliance on these technologies to gain a competitive or cognitive edge also opens the user up to being incapacitated by the loss or degradation of the very thing they rely on for that edge[5]. As future combat systems become more dependent on networked autonomous and semi-autonomous platforms, success will only be realized via accompanying cybersecurity development and implementation. This formula for success applies equally to ground, sea, air, and space platforms and must account for hardware, software, connectivity, and the supply chain. The effective application of cyber threat intelligence to securing and enabling networked weapons systems and other defense technology will be just as important to winning on the new multi-domain battlefield as the effective application of other forms of intelligence has been in all previous conflicts.

Option #1:  The Department of Defense (DoD) requires cybersecurity efforts as part of procurement. The DoD has been at work on applying their “Cybersecurity Maturity Model Certification” to vendors up and down the supply chain[6]. A model like this can assure a basic level of protection to hardware and software development and will make sure that controls and countermeasures are at the forefront of defense industrial base thinking.

Risk:  Option #1 has the potential to breed complacency by shifting the cybersecurity aspect too far to the early stages of the procurement process, ignoring the need for continued cyber vigilance further into the development and fielding lifecycle. This option also places all the emphasis on vendor infrastructure through certification and doesn’t address operational and strategic concerns around the resiliency of systems in the field. A compliance-only approach does not adapt to changing adversary tactics, techniques, and procedures.

Gain:  Option #1 forces vendors to take the security of their products seriously lest they lose their ability to do business with the DoD. As the model grows and matures it can be used to also elevate the collective security of the defense industrial base[7].

Option #2:  DoD takes a more proactive approach to testing systems before and during fielding. Training scenarios such as those used at the U.S. Army’s National Training Center (NTC) could be modified to include significant cyber components, or a new Cyber-NTC could be created to test the ability of maneuver units to use networked systems in a hostile cyber environment. Commanders could be provided a risk profile for their unit to enable them to understand critical vulnerabilities and systems in their formations and be able to think through risk-based mitigations.

Risk:  This option could cause significant delay in operationalizing some systems if they are found to be lacking. It could also give U.S. adversaries insight into the weaknesses of some U.S. systems. Finally, if U.S. systems are not working well, especially early on in their maturity, this option could create significant trust and confidence issues in networked systems[8].

Gain:  Red teams from friendly cyber components could use this option to hone their own skills, and maneuver units will get better at dealing with adversity in their networked systems in difficult and challenging environments. This option also allows the U.S. to begin developing methods for degrading similar adversary capabilities, and on the flip side of the risk, builds confidence in systems which function well and prepares units for dealing with threat scenarios in the field[9].

Option #3:  The DoD requires the passing of a sort of “cybersecurity sea trial” where the procured system is put through a series of real-world challenges to see how well it holds up. The optimal way to do this could be having specialized red teams assigned to program management offices that test the products.

Risk:  As with Option #2, this option could create significant delays or hurt confidence in a system. There is also the need for this option to utilize a truly neutral test to avoid it becoming a check-box exercise or a mere capabilities demonstration.

Gain:  If applied properly, this option could give the best of all options, showing how well a system performs and forcing vendors to plan for this test in advance. This also helps guard against the complacency associated with Option #1. Option #3 also means systems will show up to the field already prepared to meet their operational requirements and function in the intended scenario and environment.

Other Comments:  Because of advances in technology, almost every function in the military is headed towards a mix of autonomous, semi-autonomous, and manned systems. Everything from weapons platforms to logistics supply chains is going to be dependent on robots, robotic process automation, and artificial intelligence. Without secure, resilient networks the U.S. will not achieve overmatch in speed, efficiency, and effectiveness, nor will this technology build trust with human teammates and decision makers. The degree to which reaping the benefits of this technological advancement will depend upon the U.S. applying existing and new cybersecurity frameworks effectively, while developing offensive capabilities to deny those advantages to U.S. adversaries, cannot be overstated.

Recommendation:  None.


Endnotes:

[1] Judson, Jen. (2020). US Army Prioritizes Open Architecture for Future Combat Vehicle. Retrieved from https://www.defensenews.com/digital-show-dailies/ausa/2020/10/13/us-army-prioritizes-open-architecture-for-future-combat-vehicle-amid-competition-prep

[2] Larter, David B. The US Navy’s ‘Manhattan Project’ has its leader. (2020). Retrieved from https://www.c4isrnet.com/naval/2020/10/14/the-us-navys-manhattan-project-has-its-leader

[3] Palmer, Danny. IoT security is a mess. Retrieved from https://www.zdnet.com/article/iot-security-is-a-mess-these-guidelines-could-help-fix-that

[4] Shelbourne, Mallory. (2020). Navy’s ‘Project Overmatch’ Structure Aims to Accelerate Creating Naval Battle Network. Retrieved from https://news.usni.org/2020/10/29/navys-project-overmatch-structure-aims-to-accelerate-creating-naval-battle-network

[5] Gupta, Yogesh. (2020). Future war with China will be tech-intensive. Retrieved from https://www.tribuneindia.com/news/comment/future-war-with-china-will-be-tech-intensive-161196

[6] Baksh, Mariam. (2020). DOD’s First Agreement with Accreditation Body on Contractor Cybersecurity Nears End. Retrieved from https://www.nextgov.com/cybersecurity/2020/10/dods-first-agreement-accreditation-body-contractor-cybersecurity-nears-end/169602

[7] Coker, James. (2020). CREST and CMMC Center of Excellence Partner to Validate DoD Contractor Security. Retrieved from https://www.infosecurity-magazine.com/news/crest-cmmc-validate-defense

[8] Vandepeer, Charles B. & Regens, James L. & Uttley, Matthew R.H. (2020). Surprise and Shock in Warfare: An Enduring Challenge. Retrieved from https://www.realcleardefense.com/articles/2020/10/27/surprise_and_shock_in_warfare_an_enduring_challenge_582118.html

[9] Schechter, Benjamin. (2020). Wargaming Cyber Security. Retrieved from https://warontherocks.com/2020/09/wargaming-cyber-security


Assessing the Impact of the Information Domain on the Classic Security Dilemma from Realist Theory

Scott Harr is a U.S. Army Special Forces officer with deployment and service experience throughout the Middle East.  He has contributed articles on national security and foreign policy topics to military journals and professional websites focusing on strategic security issues.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Impact of the Information Domain on the Classic Security Dilemma from Realist Theory

Date Originally Written:  September 26, 2020.

Date Originally Published:  December 2, 2020.

Author and / or Article Point of View:  The author believes that realist theory of international relations will have to take into account the weaponization of information in order to continue to be viable.

Summary:  The weaponization of information as an instrument of security has re-shaped the traditional security dilemma faced by nation-states under realist theory. While yielding to the anarchic ordering principle from realist thought, the information domain also extends the classic security dilemma and layers it with new dynamics. These dynamics put liberal democracies on the defensive compared to authoritarian regimes.

Text:  According to realist theory, the Westphalian nation-state exists in a self-interested international community[1]. Because of the lack of binding international law, anarchy, as an ordering principle, characterizes the international environment as each nation-state, not knowing the intentions of those around it, is incentivized to provide for its own security and survival[2]. This self-help system differentiates insecure nations according to their capabilities to provide and project security. While this state of play within the international community holds the structure together, it also creates a classic security dilemma: the more each insecure state invests in its own security, the more such actions are interpreted as aggression by other insecure states, which initiates and perpetuates a never-ending cycle of escalating aggression amongst them[3]. Traditionally, the effects of the realist security dilemma have been observed and measured through arms races between nations or the general buildup of military capabilities. On the emerging battlefield of the 21st century, however, the Information Domain has been weaponized as both nation-states and non-state actors realize and leverage the power of information (and new ways to transmit it) to achieve security objectives. Many, like author Sean McFate, see the end of traditional warfare as these new methods captivate entities with security interests while altering and supplanting the traditional military means of waging conflict[4]. If the emergence and weaponization of information technology is changing the instruments of security, it is worth assessing how the realist security dilemma may be changing along with it.

One way to assess the Information Domain’s impact on the realist security dilemma is to examine the ordering principle that undergirds this dilemma. As mentioned above, the realist security dilemma hinges on the anarchic ordering principle of the international community that drives (compels) nations to militarily invest in security for their survival. Broadly, because no (enforceable) international law exists to uniformly regulate nation-state actions weaponizing information as a security tool, the anarchic ordering principle still exists. On closer inspection, however, while the anarchic ordering principle from realist theory remains intact, the weaponization of information creates a domain with distinctly different operating principles for nation-states existing in an anarchic international environment and using information as an instrument of security. Nation-states espousing liberal-democratic values operate on the premise that information should flow freely, (largely) uncontrolled and unregulated by government authority. For this reason, countries such as the United States do not have large-scale, monopolistic “state-run” information or media channels. Rather, information flows relatively unimpeded across social media, private news corporations, and print journalism. Countries that leverage the “freedom” operating principle for information implicitly rely on the strength and attractiveness of liberal-democratic values endorsing liberty and freedom as the centerpiece of efforts in the information domain. The power of enticing ideals, they seem to say, is the best application of power within the Information Domain and the surest means to preserve security. Nevertheless, reliance on the “freedom” operating principle puts liberal democratic countries on the defensive when it comes to the security dimensions of the information domain.

In contrast to the “freedom” operating principle employed by liberal democratic nations in the information domain, nations with authoritarian regimes utilize an operating principle of “control” for information. According to authors Irina Borogan and Andrei Soldatov, when the photocopier was first invented in Russia in the early 20th century, Russian authorities promptly seized the device and hid the technology deep within government archives to prevent its proliferation[5]. Plainly, the information-disseminating capabilities implied by the photocopier terrified the Russian authorities. Such paranoid efforts to control information have shaped the Russian approach to information technology through every new technological development, from the telephone to the computer to the internet. Since authoritarian regimes maintain tight control of information as their operating principle, they remain less concerned about adhering to liberal values and can thus assume a more offensive stance in the information domain. For this reason, the Russian use of information technology is characterized by wide-scale distributed denial-of-service attacks on opposition voices domestically and “patriot hackers” spreading disinformation internationally to achieve security objectives[6]. Plausible deniability surrounding information used in this way allows authoritarian regimes to skirt and obscure the ideological values cherished by liberal democracies under the “freedom” operating principle.

The realist security dilemma is far too durable to be abolished at the first sign of nation-states developing and employing new capabilities for security. But even as the weaponization of information has not abolished the classic realist dilemma, it has undoubtedly extended and complicated it by adding a new layer with new considerations. Whereas in the past the operating principles of nation-states addressing their security have been uniformly observed through the straightforward build-up of overtly military capabilities, the information domain, while preserving the anarchic ordering principle from realist theory, creates a new dynamic in which nation-states employ opposite operating principles in the much-more-subtle Information Domain. Such dynamics create “sub-dilemmas” for liberal democracies put on the defensive in the Information Domain. As renowned realist scholar Kenneth Waltz notes, a democratic nation may have to “consider whether it would prefer to violate its code of behavior” (i.e. compromise its liberal democratic values) or “abide by its code and risk its survival[7].” This is the crux of the matter as democracies determine how to compete in the Information Domain and all the challenges it adds to the realist security dilemma: they must find a way to leverage the strength (and attractiveness) of their values in the Information Domain while not succumbing to temptations to forsake those values and stoop to the levels of adversaries. In sum, regarding the emerging operating principles, “freedom” is the harder right to “control’s” easier wrong. To forget this maxim is to sacrifice the foundations that liberal democracies hope to build upon in the international community.


Endnotes:

[1] Waltz, Kenneth. Realism and International Politics. New York: Taylor and Francis, 2008.

[2] Ibid, Waltz, Realism.

[3] Ibid, Waltz, Realism.

[4] McFate, Sean. The New Rules of War: Victory in the Age of Durable Disorder. New York: Harper Collins Press, 2019.

[5] Soldatov, Andrei and Borogan, Irina. The Red Web: The Struggle Between Russia’s Digital Dictators and the New Online Revolutionaries. New York: Perseus Books Group, 2015.

[6] Ibid, Soldatov.

[7] Waltz, Kenneth Neal. Man, the State, and War: A Theoretical Analysis. New York: Columbia University Press, 1959.

Assessment Papers Cyberspace Influence Operations Scott Harr

U.S. Options for a Consistent Response to Cyberattacks

Thomas G. Pledger is a U.S. Army Infantry Officer currently serving at the U.S. Army National Guard Directorate in Washington, DC. Tom has deployed to multiple combat zones supporting both Conventional and Special Operations Forces. Tom holds a Master of Public Service and Administration from the Bush School of Government and Public Service at Texas A&M University, a Master of Humanities in Organizational Dynamics, Group Think, and Communication from Tiffin University, and three Graduate Certificates in Advanced International Affairs from Texas A&M University in Intelligence, Counterterrorism, and Defense Policy and Military Affairs. Tom has been a guest lecturer at the Department of State’s Foreign Service Institute. He currently serves on 1st NAEF’s External Advisory Board, providing insight on approaches for countering information operations. Tom’s current academic and professional research is focused on a holistic approach to counter-facilitation/network, stability operations, and unconventional warfare. Divergent Options’ content does not contain information of an official nature, nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The United States Government (USG) does not have a consistent response or strategy for cyberattacks against the private sector and population. Instead, it evaluates each attack on a case-by-case basis. This lack of a consistent response strategy has enabled hackers to act with greater freedom of maneuver, increasing the number and types of cyberattacks.

Date Originally Written:  April 24, 2020.

Date Originally Published:  June 29, 2020.

Author and / or Article Point of View:  The author believes that the lack of a consistent response or strategy for cyberattacks against the United States private sector and population has emboldened foreign powers’ continued actions and prevented a coordinated response.

Background:  The United States private sector and population have become the target of an almost continuous barrage of cyberattacks coming from a long list of state-sponsored actors, including Russia, China, North Korea, and Iran[1]. These actors have exploited the low financial cost of execution and the low cost they incur even upon final attribution to utilize cyberattacks as a tool to stay below the threshold of armed conflict. In the United States, these attacks have primarily avoided negative impacts on critical infrastructure, as defined by the USG. Therefore, the USG has treated such attacks as a matter for the private sector and population to manage, conducting only limited responses to such state-sponsored attacks.

Significance:  The number of known cyberattacks has increased at a near-exponential rate since the 1990s. During this same period, these attacks have become more sophisticated and coordinated, causing increased damage to real-world infrastructure, intellectual property, societal infrastructure, and digital communication platforms. This trend will continue to rise as the reliance of individuals, industry, and society on connected devices increases, along with the number of those devices.

Option #1:  The USG categorizes cyberattacks against the United States’ private sector and population as an act of cyberterrorism.

Risk:  Defining cyberattacks against the United States’ private sector and population as cyberterrorism could begin the process of turning every action conducted against the United States that falls below the threshold of armed conflict into terrorism. Patience in responding to these attacks can be difficult to maintain, as attack attribution takes time. Overzealous domestic governments, both state and federal, could use Option #1 to suppress or persecute online social movements originating in the United States.

Gain:  Defining cyberattacks against the United States’ private sector and population as cyberterrorism will utilize an established framework that provides authorities, coordination, and tools while simultaneously pressuring the USG to respond. Including the term “digital social infrastructure” will enable a response to persistent efforts by state actors to create divisions and influence the United States population. Option #1 also sends a message to foreign actors that the continued targeting of the United States private sector and population by cyberattacks will begin to have a real cost, both politically and financially. A stated definition sets a precedent that cyberattacks are not to target the United States’ private sector and population outside of declared armed conflict, a norm that has been applied to other weapon systems of war.

Option #2:  The USG maintains the current case-by-case response against cyberattacks.

Risk:  The private sector will begin to hire digital mercenaries to conduct counter-cyberattacks, subjecting these companies to possible legal actions in United States courts, as “hack the hacker” is illegal in the United States[2]. Cyberattacks conducted by the United States private sector could drag the United States unknowingly into an armed conflict, as responses could rapidly escalate or have unknown second-order effects. Without a stated definition and known response methodology, cyberattacks will continue to escalate in both type and target, and U.S. adversaries will not know which cyberattack goes too far, which could lead to armed conflict.

Gain:  Option #2 allows a flexible, case-by-case response to individual cyberattacks by the USG. Examining the target, outcome, and implications allows for a custom response to each event. This option maintains a level of separation between the private sector operating in the United States and the USG, which may allow these organizations to operate more freely in foreign countries.

Other Comments:  Although there is no single USG definition for terrorism, all definitions broadly include the use of violence to create fear in order to affect the political process. Cyberterrorism does not include the typical act of violence against a person or property. This lack of physical violence has led some administrations to define cyberattacks as “cyber vandalism[3],” even as the cyberattack targeted the First Amendment. Cyberattacks are designed to spread doubt and fear in the systems that citizens use daily, sowing fear amongst the population, and creating doubt in the ability of the government to respond.

Recommendation:  None.


Endnotes:

[1] “Significant Cyber Incidents.” Center for Strategic and International Studies, Center for Strategic and International Studies, Apr. 2020, http://www.csis.org/programs/technology-policy-program/significant-cyber-incidents.

[2] “Hacking Laws and Punishments.” Findlaw, Thomson Reuters, 2 May 2019, criminal.findlaw.com/criminal-charges/hacking-laws-and-punishments.html.

[3] Fung, Brian. “Obama Called the Sony Hack an Act of ‘Cyber Vandalism.’ He’s Right.” The Washington Post, WP Company, 22 Dec. 2014, http://www.washingtonpost.com/news/the-switch/wp/2014/12/22/obama-called-the-sony-hack-an-act-of-cyber-vandalism-hes-right/.

Cyberspace Option Papers Policy and Strategy Thomas G. Pledger United States

Assessment of the Virtual Societal Warfare Environment of 2035

Editor’s Note:  This article is part of our Civil Affairs Association and Divergent Options Writing Contest which took place from April 7, 2020 to July 7, 2020.  More information about the contest can be found by clicking here.


James Kratovil is a Civil Affairs Officer in the United States Army, currently working in the Asia-Pacific region.

Hugh Harsono is currently serving as an Officer in the United States Army. He writes regularly for multiple publications about cyberspace, economics, foreign affairs, and technology. He can be found on LinkedIn @HughHarsono.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the Virtual Societal Warfare Environment of 2035

Date Originally Written:  April 30, 2020.

Date Originally Published:  June 3, 2020.

Author and / or Article Point of View:  Both authors believe that emerging societal warfare is a risk to U.S. interests worldwide.

Summary:  The world of 2035 will see the continued fracturing of the online community into distinctive tribes, exacerbated by sophisticated disinformation campaigns designed to manipulate these siloed groups. Anonymity on the internet will erode, thus exposing individuals personally to the masses driven by this new form of Virtual Societal Warfare, and creating an entirely new set of rules for interaction in the digital human domain.

Text:  The maturation of several emerging technologies will intersect with the massive expansion of online communities and social media platforms in 2035 to create historic conditions for the conduct of Virtual Societal Warfare. Virtual Societal Warfare is defined by the RAND Corporation as a “broad range of techniques” with the aim of changing “people’s fundamental social reality[1].” This form of warfare will see governments and other organizations influencing public opinion in increasingly precise manners. Where once narratives were shaped by professional journalists, unaltered videos, and fact-checked sources, the world of 2035 will be able to convincingly alter history itself in real time. Citizens will be left to the increasingly difficult task of discerning reality from fantasy, increasing the rate at which people will pursue whatever source of news best fits their ideology.

By 2035, the maturation of artificial intelligence (AI) will transform the information landscape. With lessons learned from experiences such as Russia’s interference in the 2016 elections in the U.S.[2], AI will accelerate the proliferation of deepfakes to the point where identifying disinformation on the internet becomes substantially more challenging, thus increasing the effectiveness of disinformation campaigns. These AI systems will be able to churn out news stories and video clips showing fabricated footage in a remarkably convincing fashion.

With the global population continuing to trend upwards, an increasing number of individuals will seek information from popular social media platforms. Current figures for social media growth support this notion, with Facebook alone logging almost 2.5 billion monthly active users[3] and Tencent’s WeChat possessing an ever-growing user base that currently totals over 1.16 billion individuals[4]. An explosion in the online population will solidify the complete fracturing of traditional news sites into ones that cater to specific ideologies and preferences to maintain profits. This siloed collection of tailored realities will better allow the disinformation campaigns of the future to target key demographics with surgical precision, making such efforts increasingly effective.

Where social media, the information environment, and online disinformation were once in their infancy of understanding, in 2035 they will constitute a significant portion of future organizational warfare. States and individuals will war in the information environment over every potentially significant piece of news, establishing multiple realities of ever starker contrast, with a body politic unable to discern the difference. The environment will encompass digital participation from governments and organizations alike. Every action taken by any organizational representative, whether a public affairs officer, a Department of Defense spokesperson, or a key leader, will have to take into account engagement with online communities, with every movement carefully planned to synchronize messaging across all web-based platforms. Organizations will need to invest considerable resources into methods of understanding how these different communities interact and react to certain news.

A digital human domain will arise, one as tangible in its culture and nuances as the physical, and organizations will have to prepare their personnel to act appropriately in it. Ostracization from an online community could have rippling effects in the physical world. One could imagine a situation where running afoul of an influential group or individual could impact the social credit score of the offender more than currently realized. Witness the power of WeChat, which not only serves as a messaging app but continually evolves to encompass a multitude of everyday transactions. Everything from buying movie tickets to financial services exists on a super application home to its own ecosystem of sub-applications[5]. By 2035, such an application will constitute one’s identity, blurred and merged across the digital space into a single unified identity for social interactions. The result will be the death of online anonymity. Offend a large enough group of people, and you could see your social rating plummet, impacting everything from who will do business with you to interactions with government security forces.

Enter the new age disinformation campaign. While the internet has become less anonymous, it has not become any less wild, even within the intranets of certain countries. Communities set up in their own bubbles of reality are more readily excited by certain touchpoints, flocking to news organizations and individuals that cater to their specific dopamine rush of familiar news. A sophisticated group wanting to harass a rival organization could unleash massive botnets pushing AI-generated deep fakes to generate perceived mass negative reaction, crashing the social score of an individual and cutting them off from society.

Though grim, several trends are emerging to give digital practitioners and the average person a fighting chance. Much of the digital realm can be viewed as a never-ending arms race between adversarial actors and those looking to protect information and the privacy of individuals. Recognizing the growing problem of deepfakes, AI tools are already in development to detect different types, with a consortium of companies recently coming together to announce the “Deepfake Detection Challenge[6].” Meanwhile, the privacy industry has continued development of increasingly sophisticated forms of anonymity, with much of it freely available to a tech-savvy public. The proliferation of virtual machines, Virtual Private Networks, onion routers, blockchain[7], and encryption has prolonged a cat-and-mouse game with governments that will continue into the future.

Where social media, the information environment, and online disinformation were once in their infancy of understanding, in 2035 they will be key elements used by governments and organizations in the conduct of Virtual Societal Warfare. The merging and unmasking of social media will leave individuals critically exposed to these online wars, with casualties on both sides weighed not in lives lost, but rather everyday lives suppressed by the masses. Ultimately, it will be up to individuals, corporations, and governments working together to even the odds, even as they advance the technology they seek to counter.


Endnotes:

[1] Mazarr, M., Bauer, R., Casey, A., Heintz, S. & Matthews, L. (2019). The emerging risk of virtual societal warfare : social manipulation in a changing information environment. Santa Monica, CA: RAND.

[2] Mayer, J. (2018, September 24). How Russia Helped to Swing the Election for Trump. Retrieved April 16, 2020, from https://www.newyorker.com/magazine/2018/10/01/how-russia-helped-to-swing-the-election-for-trump

[3] Clement, J. (2020, January 30). Number of Facebook users worldwide 2008-2019. Retrieved April 18, 2020, from https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-

[4] Thomala, L. L. (2020, March 30). Number of active WeChat messenger accounts Q2 2011-Q4 2019. Retrieved April 18, 2020, from https://www.statista.com/statistics/255778/number-of-active-wechat-messenger-accounts

[5] Feng, Jianyun. (2019, September 26). What is WeChat? The super-app you can’t live without in China. Retrieved April 25, 2020 from https://signal.supchina.com/what-is-wechat-the-super-app-you-cant-live-without-in-china

[6] Thomas, Elise. (2019, November 25). In the Battle Against Deepfakes, AI is being Pitted Against AI. Retrieved April 30, 2020 from https://www.wired.co.uk/article/deepfakes-ai

[7] Ray, Shaan. (2018, May 4). How Blockchains Will Enable Privacy. Retrieved April 30, 2020 from https://towardsdatascience.com/how-blockchains-will-enable-privacy-1522a846bf65

2020 - Contest: Civil Affairs Association Writing Contest Assessment Papers Civil Affairs Association Cyberspace James Kratovil Non-Government Entities

Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Marijn Pronk is a Master’s Student at the University of Glasgow, focusing on identity politics, propaganda, and technology. Currently Marijn is finishing her dissertation on the use of populist propagandistic tactics by the Far-Right online. She can be found on Twitter @marijnpronk9. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Date Originally Written:  April 1, 2020.

Date Originally Published:  May 18, 2020.

Author and / or Article Point of View:  The Author is a Master Student in Security, Intelligence, and Strategic Studies at the University of Glasgow. The Author believes that a nuanced perspective towards the influence of Artificial Intelligence (AI) on technical communication services is paramount to understanding its threat.

Summary:  AI has greatly impacted communication technology worldwide. Computational propaganda, the unregulated use of AI weaponized for malign political purposes, is one example. Botnets that distort online environments could affect the health of the electorate and democracies’ ability to function. However, this type of AI is currently limited to Big Tech companies and governmental powers.

Text:  A cornerstone of the democratic political structure is media; an unbiased, uncensored, and unaltered flow of information is paramount to the health of the democratic process. In a fluctuating political environment, digital spaces and technologies offer great platforms for political action and civic engagement[1]. Currently, more people use Facebook as their main source of news than any news organization[2]. Therefore, manipulating the flow of information in the digital sphere poses a great threat not only to the democratic values that the internet was founded upon, but also to the health of democracies worldwide. Imagine a world where those pillars of democracy can be artificially altered, where people can manipulate the digital information sphere, from the content to the exposure range of information. In this scenario, one would be unable to distinguish real from fake, making critical perspectives obsolete. One practical embodiment of this phenomenon is computational propaganda, which describes the process of digital misinformation and manipulation of public opinion via the internet[3]. Generally, these practices range from the fabrication of messages and the artificial amplification of certain information to the highly influential use of botnets (networks of software applications programmed to do certain tasks). With the emergence of AI, computational propaganda could be enhanced, and the outcomes can become qualitatively better and more difficult to spot.

Computational propaganda is defined as “the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion[3].” AI has the power to enhance computational propaganda in various ways, such as increased amplification and reach of political disinformation through bots. Qualitatively, AI can also increase the sophistication and automation quality of bots. AI already plays an intrinsic role in the gathering process, being used in datamining of individuals’ online activity and in monitoring and processing large volumes of online data. Datamining combines tools from AI and statistics to recognize useful patterns and handle large datasets[4]. These technologies and databases are often grounded in the digital advertising industry. With the help of AI, data collection can be done in a more targeted, and thus more efficient, manner.
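The pattern-recognition step described above can be pictured with a minimal, self-contained sketch: clustering users into audience segments by engagement features, the kind of grouping an advertising pipeline might feed to a targeting system. Everything here, the features, the data, and the use of plain k-means, is an invented illustration, not a description of any real platform’s method.

```python
# Minimal k-means sketch: group users into audience segments by
# (political-share-rate, daily-engagement-hours) features.
# All data and feature choices are invented for illustration.

def kmeans(points, k, iters=20):
    """Return a cluster label for each point (plain Lloyd's algorithm)."""
    # Seed centroids with the first k points (deterministic for the demo).
    centroids = [list(p) for p in points[:k]]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Six synthetic users: three casual, three heavily engaged.
users = [(0.1, 1.0), (0.2, 1.2), (0.15, 0.9),
         (0.9, 6.0), (0.8, 5.5), (0.95, 6.2)]
segments = kmeans(users, k=2)
# The two behavioural groups separate into two audience segments.
```

A real campaign would use far richer features and models, but the principle is the same: once users are partitioned into behavioural segments, each segment can be messaged separately.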

Concerning the malicious use of these techniques in the realm of computational propaganda, these improvements of AI can enhance “[..] the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently[4].” Botnets are still relatively reliant on human input for political messages, but AI can also improve the capabilities of bots interacting with humans online, making them seem more credible. Though the self-learning capabilities of some chat bots are relatively rudimentary, improved automation through computational propaganda tools aided by AI could be a powerful means of influencing public opinion. The self-learning aspect of AI-powered bots, and the increasing volume of data that can be used for training, give rise for concern: “[..] advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning, and computer vision algorithms are paving the way for the rise in AI-powered bots, that are faster, getting better at understanding human interaction and can even mimic human behaviour[5].” With this improved automation and data gathering power, computational propaganda tools aided by AI could act more precisely, improving the data gathering process both quantitatively and qualitatively. Consequently, this hyper-specialized data and the increasing credibility of bots online, due to increasing contextual understanding, can greatly enhance the capabilities and effects of computational propaganda.

However, these AI capabilities deserve a measure of perspective in three areas: data, the power of the AI, and the quality of the output. Starting with AI and data, technical knowledge is necessary in order to work with the massive databases used for audience targeting[6]. This quality of AI is within the capabilities of a nation-state or big corporations, but still stays out of reach for the masses[7]. Secondly, the level of entrenchment and strength of the AI will determine its final capabilities. One must distinguish between ‘narrow’ and ‘strong’ AI to consider the possible threat to society. Narrow AI is simply rule-based, meaning that data runs through multiple levels coded with algorithmic rules for the AI to come to a decision. Strong AI means that the model can learn from the data and adapt its set of pre-programmed rules itself, without the interference of humans (this is called ‘Artificial General Intelligence’). Currently, such strong AI is still a concept of the future. Human labour still creates the content for the bots to distribute, simply because the AI is not powerful enough to think outside its pre-programmed box of rules, and therefore cannot (yet) create its own content solely based on the data fed to the model[7]. So, computational propaganda depends on narrow AI, which requires a relatively large amount of high-quality data to yield accurate results; deviating from its programmed path or task severely affects its effectiveness[8]. Thirdly, the output produced by computational propaganda tools varies greatly in quality. The real danger lies in the quantity of information that botnets can spread. As for chatbots, which are supposed to be high-quality and indistinguishable from humans, these models often fail when tried outside their training data environments.
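The distinction between narrow, rule-based AI and genuine content creation can be made concrete with a toy sketch: a keyword-matching “chatbot” that answers fluently inside its pre-programmed rules and collapses to a canned fallback outside them. The rules and replies below are invented purely for illustration.

```python
# Toy "narrow AI" chatbot: fixed keyword rules, no learning or adaptation.
# Rules and replies are invented for illustration.

RULES = {
    "election": "Have you seen the latest polling numbers?",
    "economy": "Everyone is talking about the new jobs report.",
    "weather": "Quite a storm coming this weekend, I hear.",
}

def narrow_bot(message: str) -> str:
    """Match a message against the fixed rules; the rules never change."""
    lowered = message.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    # Outside the pre-programmed box of rules the bot has nothing to say.
    return "Interesting. Tell me more."

# On-topic input looks convincing; off-topic input exposes the bot.
print(narrow_bot("What do you think about the election?"))
print(narrow_bot("Explain quantum error correction"))
```

This is the brittleness the paragraph describes: the bot cannot generate content or adapt its rules, so any probe outside its training environment reveals the fallback.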

To address this emerging threat, several avenues are developing. First, policy changes across the media ecosystem are underway to mitigate the effects of disinformation[9]. Secondly, researchers have recently investigated the possibility of AI assisting in combating falsehoods and bots online[10]. One proposal is to build automated and semi-automated systems on the web, purposed for fact-checking and content analysis. Eventually, these bottom-up solutions could considerably help counter the effects of computational propaganda. Thirdly, the influence that Big Tech companies have on these issues cannot be negated, and their accountability for creating these problems, as well as their potential power to mitigate them, will have to be considered. Top-down cooperation between states and the public will be paramount. “The technologies of precision propaganda do not distinguish between commerce and politics. But democracies do[11].”
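As a toy illustration of the automated content-analysis idea mentioned above, a bot-flagging heuristic might score accounts on posting rate and duplicate content. The signals, weights, and thresholds below are assumptions invented for this sketch, not a description of any deployed system.

```python
# Hypothetical bot-likelihood heuristic: combines posting rate with the
# share of exact-duplicate posts. Weights and saturation point are
# illustrative assumptions, not validated parameters.
from collections import Counter

def bot_likelihood(posts, hours_observed):
    """Return a score in [0, 1] from two simple signals: how fast the
    account posts, and how often it repeats the exact same text."""
    if not posts or hours_observed <= 0:
        return 0.0
    rate = len(posts) / hours_observed           # posts per hour
    counts = Counter(posts)
    duplicates = sum(c - 1 for c in counts.values())
    dup_ratio = duplicates / len(posts)          # share of repeated posts
    rate_signal = min(rate / 10.0, 1.0)          # saturate at 10 posts/hour
    return round(0.5 * rate_signal + 0.5 * dup_ratio, 3)

human = ["coffee time", "long day at work", "match was great"]
bot = ["VOTE NOW #election"] * 40 + ["polls are rigged"] * 10
print(bot_likelihood(human, 24))  # low score
print(bot_likelihood(bot, 2))     # high score
```

Real systems combine far richer signals (account age, network structure, content semantics), which is why the article frames these as assistive, semi-automated tools rather than complete solutions.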


Endnotes:

[1] Vaccari, C. (2017). Online Mobilization in Comparative Perspective: Digital Appeals and Political Engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), pp. 69-88. doi:10.1080/10584609.2016.1201558

[2] Majo-Vazquez, S., & González-Bailón, S. (2018). Digital News and the Consumption of Political Information. In G. M. Forthcoming, & W. H. Dutton, Society and the Internet. How Networks of Information and Communication are Changing Our Lives (pp. 1-12). Oxford: Oxford University Press. doi:10.2139/ssrn.3351334

[3] Woolley, S. C., & Howard, P. N. (2018). Introduction: Computational Propaganda Worldwide. In S. C. Woolley, & P. N. Howard, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (pp. 1-18). Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.003.0001

[4] Wardle, C. (2018, July 6). Information Disorder: The Essential Glossary. Retrieved December 4, 2019, from First Draft News: https://firstdraftnews.org/latest/infodisorder-definitional-toolbox

[5] Dutt, D. (2018, April 2). Reducing the impact of AI-powered bot attacks. CSO. Retrieved December 5, 2019, from https://www.csoonline.com/article/3267828/reducing-the-impact-of-ai-powered-bot-attacks.html

[6] Bolsover, G., & Howard, P. (2017). Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda. Big Data, 5(4), pp. 273–276. doi:10.1089/big.2017.29024.cpr

[7] Chessen, M. (2017). The MADCOM Future: how artificial intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… and what can be done about it. Washington DC: The Atlantic Council of the United States. Retrieved December 4, 2019

[8] Davidson, L. (2019, August 12). Narrow vs. General AI: What’s Next for Artificial Intelligence? Retrieved December 11, 2019, from Springboard: https://www.springboard.com/blog/narrow-vs-general-ai

[9] Hassan, N., Li, C., Yang, J., & Yu, C. (2019, July). Introduction to the Special Issue on Combating Digital Misinformation and Disinformation. ACM Journal of Data and Information Quality, 11(3), 1-3. Retrieved December 11, 2019

[10] Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufacturing Consensus Online. Oxford, UK: Project on Computational Propaganda. Retrieved December 5, 2019

[11] Ghosh, D., & Scott, B. (2018, January). #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. Retrieved December 11, 2019, from New America: https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit

Artificial Intelligence / Machine Learning / Human-Machine Teaming Assessment Papers Cyberspace Emerging Technology Influence Operations Marijn Pronk

U.S. Options to Combat Chinese Technological Hegemony

Ilyar Dulat, Kayla Ibrahim, Morgan Rose, Madison Sargeant, and Tyler Wilkins are Interns at the College of Information and Cyberspace at the National Defense University. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  China’s technological rise threatens U.S. interests both on and off the battlefield.

Date Originally Written:  July 22, 2019.

Date Originally Published:  February 10, 2020.

Author and / or Article Point of View:  This article is written from the point of view of the United States Government.

Background:  Xi Jinping, the Chairman of China’s Central Military Commission, affirmed in 2012 that China is acting to redefine the international world order through revisionist policies[1]. These policies foster an environment open to authoritarianism, thus undermining Western liberal values. The Chinese Communist Party (CCP) utilizes emerging technologies to restrict individual freedoms of Chinese citizens, in and out of cyberspace. Subsequently, Chinese companies have exported this freedom-restricting technology to other countries, such as Ethiopia and Iran, at little cost. These technologies, which include Artificial Intelligence-based surveillance systems and nationalized Internet services, allow authoritarian governments to effectively suppress political dissent and discourse within their states. By essentially monopolizing the tech industry through low prices, China hopes to gain the loyalty of these states and obtain the political clout necessary to overcome the United States as the global hegemon.

Significance:  Among the technologies China is pursuing, 5G is of particular interest to the U.S.  If China becomes the leader in 5G network technologies and artificial intelligence, it will gain opportunities to disrupt the confidentiality, integrity, and availability of data. China has been able to aid regimes and fragmented democracies in repressing freedom of speech and restricting human rights using “digital tools of surveillance and control[2].” Furthermore, China’s National Security Law of 2015 requires all Chinese tech companies to comply with the CCP. These Chinese tech companies are legally bound to share data and information housed on Chinese technology, both in-state and abroad. They are also required to remain silent about their disclosure of private data to the CCP. As such, information about private citizens and governments around the world is provided to the Chinese government without transparency. By deploying hardware and software for countries seeking to expand their networks, the CCP could use its authority over domestic tech companies to gain access to information transferred over Chinese-built networks, posing a significant threat to the national security interests of the U.S. and its Allies and Partners. With China leading 5G, the military forces of the U.S. and its Allies and Partners would be restricted in their ability to rely on indigenous telecoms abroad, which could cripple operations critical to U.S. interests[3]. This risk becomes even greater with the threat of U.S. Allies and Partners adopting Chinese 5G infrastructure, despite the harm this move would do to information sharing with the United States.

If China continues its current trajectory, the U.S. and its advocacy for personal freedoms will grow increasingly marginal in the discussion of human rights in the digital age. In light of the increasing importance of the cyber domain, the United States cannot afford to assume that its global leadership will seamlessly transfer to, and maintain itself within, cyberspace. The United States’ position as a leader in cyber technology is under threat unless it vigilantly pursues leadership in advancing and regulating the exchange of digital information.

Option #1:  Domestic Investment.

The U.S. government could facilitate a favorable environment for the development of 5G infrastructure through domestic telecom providers. Thus far, the Chinese companies Huawei and ZTE have been able to outbid major European companies for 5G contracts, and the American companies developing 5G infrastructure are not yet large enough to compete. By investing in 5G development domestically, the U.S. and its Allies and Partners would have 5G options other than Huawei and ZTE available to them. This option gives American companies a playing field level with that of their Chinese counterparts.

Risk:  Congressional approval to fund 5G infrastructure development will prove to be a major obstacle. Funding a development project can quickly become a partisan issue. Fiscal conservatives might argue that markets should drive development, while those who believe in strong government oversight might argue that the government should spearhead 5G development. Additionally, government-subsidized projects have previously failed. As such, there is no guarantee 5G will be different.

Gain:  By investing in domestic telecommunication companies, the United States can remain independent from Chinese infrastructure by mitigating further Chinese expansion. With the U.S. investing domestically and giving subsidies to companies such as Qualcomm and Verizon, American companies can develop their technology faster in an attempt to compete with Huawei and ZTE.

Option #2:  Foreign Subsidization.

The U.S. could support the European competitors Nokia and Ericsson, through loans and subsidies, against Huawei and ZTE. In doing so, the United States would give these companies a means to produce 5G technology at more competitive prices and possibly outbid Huawei and ZTE.

Risk:  The American people may be hostile towards a policy that provides U.S. tax dollars to foreign entities. While the U.S. can attach stipulations to the funding it provides, it ultimately sacrifices much of its control over the development and implementation of 5G infrastructure.

Gain:  Supporting European tech companies such as Nokia and Ericsson would help deter allied nations from investing in Chinese 5G infrastructure. This option would reinforce the U.S.’s commitment to its European allies, and serve as a reminder that the United States maintains its position as the leader of the liberal international order. Most importantly, this option makes friendlier telecommunications companies more competitive in international markets.

Other Comments:  Both options above would also include the U.S. defining regulations and enforcement mechanisms to promote the fair usage of cyberspace. This fair use would be a significant deviation from a history of loosely defined principles. In pursuit of this fair use, the United States could join the Cyber Operations Resilience Alliance, and encourage legislation within the alliance that invests in democratic states’ cyber capabilities and administers clearly defined principles of digital freedom and the cyber domain.

Recommendation:  None.


Endnotes:

[1] Economy, Elizabeth C. “China’s New Revolution.” Foreign Affairs. June 10, 2019. Accessed July 31, 2019. https://www.foreignaffairs.com/articles/china/2018-04-17/chinas-new-revolution.

[2] Chhabra, Tarun. “The China Challenge, Democracy, and U.S. Grand Strategy.” Democracy & Disorder, February 2019. https://www.brookings.edu/research/the-china-challenge-democracy-and-u-s-grand-strategy/.

[3] “The Overlooked Military Implications of the 5G Debate.” Council on Foreign Relations. Accessed August 01, 2019. https://www.cfr.org/blog/overlooked-military-implications-5g-debate.

Artificial Intelligence / Machine Learning / Human-Machine Teaming China (People's Republic of China) Cyberspace Emerging Technology Ilyar Dulat Kayla Ibrahim Madison Sargeant Morgan Rose Option Papers Tyler Wilkins United States

An Assessment of the National Security Impact of Digital Sovereignty

Kathleen Cassedy is an independent contractor and open source specialist. She spent the last three years identifying, cataloging, and analyzing modern Russian and Chinese political and economic warfare efforts; the role of foreign influence operations in gray zone problem sets; global influence of multi-national entities, non-state actors, and super-empowered individuals; and virtual sovereignty, digital agency, and decentralized finance/cryptocurrency. She tweets @Katnip95352013.

Ian Conway manages Helios Global, Inc., a risk analysis consultancy that specializes in applied research and analysis of asymmetric threats. Prior to conducting a multi-year study of political warfare operations and economic subversion, he supported DoD and homeland security programs focused on counterterrorism, counterproliferation, hard and deeply buried targets, and critical infrastructure protection.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of the National Security Impact of Digital Sovereignty

Date Originally Written:  December 6, 2019.

Date Originally Published:  January 6, 2020.

Author and / or Article Point of View:  The authors believe that traditional notions of citizenship and sovereignty are rapidly changing and that the U.S. could gain competitive advantage by embracing a tiered citizenship model, including e-residency.

Summary:  Money, people, and companies drove globalization’s disruption of centuries of power domination by nation-states, while increasing the agency and autonomy of corporations and individuals. The balance of power has shifted, and if governments do not learn how to adapt, they will be relegated to the back seat in influence and decision making for this century. One opportunity for adaptation lies in embracing, not rejecting, digital sovereignty.

Text:  In the past 25 years, the globalization of the world’s economic systems and the introduction of Internet ubiquity have had profound effects on humankind and centuries-old governance structures. Electronic commerce has transformed international supply chain dynamics and business finance. Physical borders have become less meaningful in the face of digital connectedness and supranational economic zones. The largest multinational corporations have market caps which challenge or exceed the gross domestic product of most of the countries in the world. These changes have made international transactions – and investments – executable with the click of a button, transactions that once required weeks or months of travel to finalize.

Facilitating and empowering the citizens of the world to engage in the global marketplace has created a new dynamic: the provision of safety and security is increasingly transferred to the private sector, forcing governments to outsource their most basic sovereign responsibility and reserving the most complete and effective solutions for those who can afford them. This outsourcing includes fiscal security (or social welfare), especially in free market economies where the responsibility for savings and investment rests on the individual, not the government. As safety and security – personal and fiscal – become further privatized, individuals are taking steps to wrest control of themselves – their identities, their businesses, and their freedom of movement – from the state. Individuals want to exercise self-determination and attain individual sovereignty in the globalized world. This desire leaves the nation state, particularly in western democracies, in a challenging position. How does a government encourage self-sufficiency (often because states can no longer afford the associated costs) and democracy when globalized citizens are redefining what it means to be a citizen?

The first war of the 21st century, the Global War on Terrorism, was one of individuals disenfranchised from the state developing subnational, virtual organizations to employ terror and insurgent tactics to fight the nation states’ monopoly on power. The second war – already well underway but one that governments have been slow to recognize and engage – is great power competition short of kinetic action, to remake the geopolitical balance of power into multi-polar spheres of influence. The third war this century may likely be over amassing talent and capital, which in turn drives economic power. America’s near-peer adversaries, particularly China[1], are already moving aggressively to increase their global hegemony for this century, using all means of state power available. How can America counter its near-peers? The U.S. could position itself to exert superiority in the expanding competition for wealth by proactively embracing self-determination and individual autonomy as expressed by the digital sovereignty movement.

Digital sovereignty is the ultimate expression of free market capitalism. If global citizens have freedom of movement – and of capital, access to markets, encouragement to start businesses – they will choose the market and the society with the fewest barriers to entry. Digital sovereignty gives the advantage to countries who operate on free market capitalism and self-determination. Digital sovereignty is also an unexpected counter to China’s and Russia’s authoritarian models, thus disrupting the momentum that both those competitors have gained during the great power competition. In addition to acting as a disrupter in global geopolitics, proactive acceptance and adoption of digital sovereignty could open new tax revenues and economic gains for the U.S. Further, digital sovereignty could serve as an opportunity to break down barriers between Silicon Valley (particularly its techno-libertarians) and the U.S. government, by leveraging one of the tech elite’s most progressive socio-cultural concepts.

What might digital sovereignty look like in the U.S.? One approach builds on Estonia’s forward-looking experiments with e-residency[2] for business purposes, with the U.S. extending these ideas further into a tiered citizenship structure that includes U.S.-issued identity and travel benefits. One can be a citizen and contribute to the U.S. economy with or without living there. People can incorporate their business and conduct banking in the U.S., all using secure digital transactions. Stateless (by choice or by circumstance) entrepreneurs can receive travel documents in exchange for tax revenue. This is virtual citizenship.

The U.S. government could opt to act now to throw its weight behind digital sovereignty. This is a democratic ideal for the 21st century, and the U.S. has an opportunity to shape and influence the concept. This policy approach would pay homage to the Reagan-Bush model of free movement of labor. In this model, people don’t get full citizenship straight away, but they can legally work and pay taxes in the U.S. economy, while living where they like.

The U.S. government could create two tiers of citizenship. Full conventional citizenship – with voting privileges and other constitutionally guaranteed rights – could remain the domain of natural born and naturalized citizens. A second level of citizenship for the e-citizen could avoid the provision of entitlements but allow full access to the economy: free movement across borders, the ability to work, to start a business, to open a bank account. E-citizenship could be a path to earning full citizenship if that’s what the individual wants. If not, they can remain a virtual citizen, with some but not all privileges of full citizenship. Those who do wish to pursue full legal citizenship might begin contributing to the American economy and gain some benefits of association with the U.S., but they could do so from wherever they are currently located. This approach might also encourage entrepreneurship, innovation, and hard work – the foundations of the American dream.

Both historically and at present – irrespective of what party is in office – the U.S. has always desired to attract immigrants that want the opportunity to pursue a better life for themselves and their children through hard work. Life, liberty, and the pursuit of happiness: the foundational concept of the United States. Accordingly, if the U.S. is the first great power to embrace and encourage digital sovereignty, acting in accordance with core American values, then the U.S. also shapes the future battlespace for the war for talent and capital by exerting first-mover advantage.


Endnotes:

[1] Shi, T. (2017, October 17). “Xi Plans to Turn China Into a Leading Global Power by 2050”. Retrieved December 2, 2019, from https://www.bloomberg.com/news/articles/2017-10-17/xi-to-put-his-stamp-on-chinese-history-at-congress-party-opening.

[2] Republic of Estonia. “The new digital nation: What is E-Residency?” Retrieved December 2, 2019, from https://e-resident.gov.ee/.

Assessment Papers Cyberspace Economic Factors Estonia United States

Assessing North Korea’s Cyber Evolution

Ali Crawford has an M.A. from the Patterson School of Diplomacy and International Commerce where she focused on diplomacy, intelligence, cyber policy, and cyber warfare.  She tweets at @ali_craw.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing North Korea’s Cyber Evolution

Date Originally Written:  September 17, 2019.

Date Originally Published:  November 25, 2019.

Author and / or Article Point of View:  The author believes that the international community’s focus on addressing North Korea’s nuclear capability sets the conditions whereby their cyber capabilities can evolve unchecked.

Summary:  Despite displaying a growing and capable cadre of cyber warriors, North Korea’s cyber prowess has been overshadowed by threats of nuclear proliferation. While North Korea remains extremely isolated from the global community, it has conducted increasingly sophisticated cyber attacks over a relatively short span of time, cultivating cyber acumen as threatening as its nuclear program.

Text:  As the internet quickly expanded across the globe and changed the nature of business and communication, Western nations capitalized on its capabilities. Authoritarian regimes felt threatened by the internet’s potential for damaging the regime’s power structure. In the 1990s, Kim Jong-il, father of current North Korean leader Kim Jong-un, restricted internet access, usage, and technology in his country[1]. Eventually, Kim Jong-il’s attitude shifted after recognizing the potential benefits of the internet. The North likely received assistance from China and the Soviet Union to begin training a rudimentary cyber corps during the 1980s and 1990s[2]. Cyber was and still is reserved explicitly for military or state leadership use.

The expansion of North Korea’s cyber program continued under Kim Jong-un, who today seeks to project military might through displays of a capable nuclear program. But Kim Jong-un, who holds a degree in computer science, also understood the potential of cultivating cyber power. For North Korea, cyber is not just an asymmetrical medium of warfare, but also a method of surveillance, intelligence-gathering, and circumventing sanctions[3]. Within the last decade, North Korea has demonstrated an impressive understanding and application of offensive cyber competence. Several experts and reports estimate that North Korean cyber forces number between 1,800 and upwards of 6,000 professionals[4]. Internet access is reportedly routed through China, which complicates attribution and provides a measure of defense[5]. North Korea is largely disconnected from the rest of the world and maintains a rudimentary internet infrastructure[6]. This disconnect between the state and the internet leaves a significantly smaller and less vulnerable attack surface for other nations to exploit.

Little information is available regarding the internal structure of North Korea’s cyber forces. What is thought to be known suggests an organizational hierarchy that operates with some autonomy to achieve designated mission priorities. Bureau 121, No. 91 Office, and Lab 110 report to North Korea’s Reconnaissance General Bureau (RGB)[7]. Each reportedly operates both inside and outside Pyongyang. Bureau 121’s main activities include intelligence gathering and coordinating offensive cyber operations. Lab 110 engages in technical reconnaissance, such as network infiltration and malware implantation. No. 91 Office is believed to orchestrate hacking operations. Other offices situated under Bureau 121 or the RGB likely exist and are devoted entirely to information warfare and propaganda campaigns[8].

In the spring of 2013, a wave of cyber attacks struck South Korea. A new group called Dark Seoul emerged from North Korea armed with sophisticated code and procedures. South Korean banks and broadcasting companies were among the first institutions to endure the attacks beginning in March. In May, the South Korean financial sector was paralyzed by sophisticated malware. Later in June, marking the 63rd anniversary of the beginning of the Korean War, various South Korean government websites were taken offline by Distributed Denial of Service (DDoS) attacks. Although Dark Seoul had been working discreetly since 2009, its successful attacks against major South Korean institutions prompted security researchers to more seriously consider the North Koreans as perpetrators[9]. The various attacks against financial institutions would be a prelude to the massive cyber financial heists the North would eventually manage, possibly making South Korea a testing ground for North Korea’s code and malware vehicles.

North Korea’s breach of Sony Pictures in 2014 catapulted the reclusive regime to international cyber infamy. Members of an organization calling themselves the Guardians of Peace stole nearly 40 gigabytes of sensitive data from Sony Pictures, uploaded damaging information online, and left behind a bizarre image of a red skeleton on employees’ desktop computers[10]. This was the first major occurrence of a nation-state attacking a United States corporation in retribution for something seemingly innocuous. While the Sony hack illustrated how vague and divergent nations’ rules for conducting cyber war and crime remain, the attack was more importantly North Korea’s first true display of cyber power. Sony executives felt compelled to respond and sought counsel from the U.S. government. The government was hesitant to let a private company respond to an attack led by the military apparatus of a foreign adversary. Instead, President Barack Obama publicly named North Korea as the perpetrator and vaguely hinted at a potential U.S. response, becoming the first U.S. president to do so.

Cyber crime also provides alternative financing for the regime’s agenda. In February 2016, employees at Bangladesh Bank were struggling to recover a large sum of money that had been transferred to accounts in the Philippines and Sri Lanka. The fraudulent transactions totaled $81 million USD[11]. Using Bangladesh Bank employee credentials, the attackers targeted the bank’s SWIFT account. SWIFT is an international money transfer system used by financial institutions to transfer large sums of money. After-action analysis revealed the malware had been implanted a month prior and shared similarities with the malware used to infiltrate and exploit Sony in 2014[12]. The Bangladesh Bank heist was intensively planned and researched, which lent credence to the North’s growing cyber acumen. As of 2019, North Korea has accumulated an estimated $2 billion USD exclusively from cyber crime[13]. Security assessments indicate the Sony attack, the Bangladesh Bank hack, and the WannaCry attacks are related, which lends some understanding of how North Korean cyber groups operate. In 2018, the United States filed criminal charges against a North Korean man for all three cyber crimes as part of a grander strategy for deterrence[14].

Finally, it is important to consider how North Korea’s cyber warfare tactics and strategies will evolve. North Korea has already proven to be a capable financial cyber crime actor, but how would its agencies perform in full-scale warfare? In terms of numbers, the North Korean military is one of the largest conventional forces in the world despite operating with rudimentary technology[15]. Studies suggest that while the North may confidently rely on its nuclear program to win a conventional war, it is unlikely that North Korea would be able to sustain its forces in a long-term war[16]. North Korea would need to promptly engage in asymmetric warfare to disorient enemy forces to gain a technological advantage while continuously attempting to attack enemy systems to disrupt crucial communications. The regime could conduct several cyber operations against its adversaries, deny responsibility, then use the wrongful attribution as grounds for a kinetic response. North Korea has threatened military action in the past after being hit with additional sanctions[17].

Despite North Korea’s display of a growing and expansive cyber warfare infrastructure coupled with a sophisticated history of cyber attacks, the international community remains largely concerned with the regime’s often unpredictable approach to nuclear and missile testing. With the international community focused elsewhere, North Korea’s cyber program continues to grow unchecked. It remains to be seen if someday the international community will diplomatically engage North Korea regarding their cyber program with the same intensity as their nuclear program.


Endnotes:

[1] David E. Sanger, The Perfect Weapon, Crown Publishing, 2018, p. 127

[2] The Perfect Weapon, p.127-128; and Eleanor Albert, Council on Foreign Relations: North Korea’s Military Capabilities, 25 July 2019, retrieved from https://www.cfr.org/backgrounder/north-koreas-military-capabilities

[3] David Sanger, David Kirkpatrick, Nicole Perlroth, New York Times: The World Once Laughed at North Korean Cyberpower. No More, 15 October 2017, retrieved from https://www.nytimes.com/2017/10/15/world/asia/north-korea-hacking-cyber-sony.html

[4] Ibid; and 1st Lt. Scott J. Tosi, Military Review: North Korean Cyber Support to Combat Operations, July/August 2017, retrieved from https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/MilitaryReview_20170831_TOSI_North_Korean_Cyber.pdf

[5] 1st Lt. Scott J. Tosi

[6] David Sanger, David Kirkpatrick, Nicole Perlroth

[7] 1st Lt. Scott J. Tosi; and Kong Ji Young, Lim Jong In, and Kim Kyoung Gon, NATO CCDCOE: The All-Purpose Sword: North Korea’s Cyber Operations and Strategies, 2019, retrieved from https://ccdcoe.org/uploads/2019/06/Art_08_The-All-Purpose-Sword.pdf

[8] Ibid.

[9] Symantec Security Response, Four Years of DarkSeoul Cyberattacks Against South Korea Continue on Anniversary of Korean War, 26 June 2013, retrieved from https://www.symantec.com/connect/blogs/four-years-darkseoul-cyberattacks-against-south-korea-continue-anniversary-korean-war; and Kong Ji Young, Lim Jong In, and Kim Kyoung Gon, NATO CCDCOE Publications, The All-Purpose Sword: North Korea’s Cyber Operations and Strategy, 2019, retrieved from https://ccdcoe.org/uploads/2019/06/Art_08_The-All-Purpose-Sword.pdf

[10] Kim Zetter, Wired: Sony Got Hacked Hard: What We Know and Don’t Know So Far, 3 December 2014, retrieved from https://www.wired.com/2014/12/sony-hack-what-we-know/

[11] Kim Zetter, Wired: That Insane, $81M Bangladesh Bank Heist? Here’s What We Know, 17 May 2016, retrieved from https://www.wired.com/2016/05/insane-81m-bangladesh-bank-heist-heres-know/

[12] Ibid.

[13] Michelle Nichols, Reuters: North Korea took $2 billion in cyberattacks to fund weapons program: U.N. report, 5 August 2019, retrieved from https://www.reuters.com/article/us-northkorea-cyber-un/north-korea-took-2-billion-in-cyberattacks-to-fund-weapons-program-u-n-report-idUSKCN1UV1ZX

[14] Christopher Bing and Sarah Lynch, Reuters: U.S. charges North Korean hacker in Sony, WannaCry cyberattacks, 6 September 2018, retrieved from https://www.reuters.com/article/us-cyber-northkorea-sony/u-s-charges-north-korean-hacker-in-sony-wannacry-cyberattacks-idUSKCN1LM20W

[15] Eleanor Albert, Council on Foreign Relations, What Are North Korea’s Military Capabilities?, 25 July 2019, retrieved from https://www.cfr.org/backgrounder/north-koreas-military-capabilities

[16] 1st Lt. Scott J. Tosi, Military Review: North Korean Cyber Support to Combat Operations, July/August 2017, retrieved from https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/MilitaryReview_20170831_TOSI_North_Korean_Cyber.pdf

[17] Jack Kim and Ju-min Park, Reuters: Cyber-attack on South Korea may not have come from China after all, 22 March 2013, retrieved from https://www.reuters.com/article/us-cyber-korea/cyber-attack-on-south-korea-may-not-have-come-from-china-after-all-regulator-idUSBRE92L07120130322


Assessment of Militia Forces as a Model for Recruitment and Retention in Cyber Security Forces

Franklin Holcomb is a graduate student from the U.S. at the University of Tartu, Estonia and a former research analyst on Eastern European security issues in Washington, D.C. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of Militia Forces as a Model for Recruitment and Retention in Cyber Security Forces

Date Originally Written:  September 25, 2019.

Date Originally Published:  November 18, 2019.

Author and / or Article Point of View:  The author is a graduate student from the U.S. at the University of Tartu, Estonia and a former research analyst on Eastern European security issues in Washington, D.C. He is a strong believer in the Euro-American relationship and the increasing relevance of innovation in security and governance.

Summary:  U.S. and Western Armed Forces are struggling with recruitment and retention in their cyber units, which leaves their countries vulnerable to hostile cyber actors. As society becomes increasingly digitalized in coming years, the severity of these vulnerabilities will increase. The militia model adopted by the Baltic states provides a format to attract civilian experts and decrease vulnerabilities.

Text:  The U.S. Armed Forces are facing difficulties recruiting and retaining cyber-security talent. To meet this challenge the U.S. Marine Corps announced in April 2019 that it would establish a volunteer cyber-auxiliary force (Cyber Aux) consisting of a “small cadre of highly-talented cyber experts who train, educate, advise, and mentor Marines to keep pace with constantly-evolving cyber challenges[1].” The Cyber Aux will face many of the issues that other branches, and countries, have in attracting and retaining cyber-security professionals. Cyber Aux takes important steps towards increasing the appeal of participation in the U.S. armed forces for cyber-security experts, such as relaxing grooming and fitness standards. But Cyber Aux will struggle to attract enough professionals due to factors such as its role as a mentorship organization, rather than one that conducts operations, and the wide military-civilian pay gap in the cyber-security field[2]. These factors will leave U.S. and North Atlantic Treaty Organization (NATO) military forces with suboptimal and likely understaffed cyber components, increasing their vulnerabilities on and off the battlefield.

Estonia, Latvia, and Lithuania have been on the geographic and virtual frontlines of many challenges faced by NATO. The severity of threats facing them has made security innovation a necessity rather than a goal. While not all innovations have succeeded, these countries have created a dynamic multi-layered defense ecosystem which combines the skillsets of civil society and their armed forces to multiply their defense capabilities and increase national resilience. Numerous organizations play a role in these innovations, including civilian groups as well as the militias of each state[3]. The militias, non-professional military forces who gain legitimacy and legality from state authorization, play a key role in increasing the effective strength of forces in the region. The Estonian Defense League, the Latvian National Guard, and the Lithuanian Riflemen’s Association all draw on civilian talent to form militias. These organizations are integrated, to different extents, with military structures and in a time of crisis play supporting roles that either free regular forces to conduct operations or support those operations directly.

These militias have established cyber units which are models for integrating civilian cyber-security professionals into military structures. The Baltic cyber-militias engage directly in practical cyber-security concerns, rather than being restricted to academic pursuit or mentoring like Cyber Aux. In peacetime, these organizations conduct training for servicemen and civilians with the goal of raising awareness of the risks posed by hostile cyber actors, increasing civilian-military collaboration in cyber-security, and improving cyber-security practices for critical systems and infrastructure[4]. In crisis, these units mobilize to supplement state capabilities. The Estonian Defense League and Latvian National Guard have both established cyber-defense units, and Lithuania intends to complete a framework through which its militia could play a role in supporting cyber-defense capabilities by January 2020[5]. 

The idea of a cyber-militia is not new, yet the role these organizations play in the Baltic states as a talent bridge between the armed forces and civil society provides a very useful policy framework for many Western states. Currently cyber-auxiliaries are used by many states, such as Russia and China, which rely on them to supplement offensive cyber capacities[6]. This situational, often unofficial, use of auxiliaries in cyber operations has advantages, prominently including deniability, but these should not overshadow the value of official structures that are integrated into both civil society and national cyber-defense. By creating a reserve of motivated civilian professionals that can be called on to supplement military cyber units during a time of crisis, the Baltic states are not only increasing their resilience to a major cyber incident while it is underway, but also raising the up-front cost of conducting such an attack in the first place.

As NATO and European policymakers consider the best courses available to improve their Armed Forces’ cyber capacities, the models being adopted in Estonia, Latvia, and Lithuania are likely of value. Estonia pioneered the concept in the region[7], and as the model spreads, other Western states could learn from its effectiveness. Cyber-militias, which play a supportive role in cyber operations, will strengthen the cyber forces of militaries in other NATO states that are undermined by low recruitment and retention.


Endnotes:

[1] (2019, May 13). Marine Corps Establishes Volunteer Cyber Auxiliary to Increase Cyberspace Readiness. Marines.mil. Retrieved September 25, 2019. https://www.marines.mil/News/Press-Releases/Press-Release-Display/Article/1845538/marine-corps-establishes-volunteer-cyber-auxiliary-to-increase-cyberspace-readi

[2] Moore E., Kollars N. (2019, August 21). Every Marine a Blue-Haired Quasi-Rifleperson? War on the Rocks. Retrieved on September 25, 2019. https://warontherocks.com/2019/08/every-marine-a-blue-haired-quasi-rifleperson/; Cancian M., (2019, September 05) Marine Cyber Auxiliaries Aren’t Marines, and Cyber “Warriors” aren’t Warriors. War on the Rocks. Retrieved September 25, 2019. https://warontherocks.com/2019/09/marine-cyber-auxiliaries-arent-marines-and-cyber-warriors-arent-warriors/

[3] Thompson T. (2019, January 9) Countering Russian Disinformation the Baltic nations’ way. The Conversation. Retrieved September 25, 2019. http://theconversation.com/countering-russian-disinformation-the-baltic-nations-way-109366

[4] (2019, September 24). Estonian Defense League’s Cyber Unit. Estonian Defense League. Retrieved on September 25, 2019. http://www.kaitseliit.ee/en/cyber-unit; (2013). National Armed Forces Cyber Defense Unit (CDU) Concept. Latvian Ministry of Defense. Retrieved September 25, 2019. https://www.mod.gov.lv/sites/mod/files/document/cyberzs_April_2013_EN_final.pdf; (2015, January 15). National Guard opens cyber-defense center. Public Broadcasting of Latvia. Retrieved September 25, 2019. https://eng.lsm.lv/article/society/society/national-guard-opens-cyber-defense-center.a113832/; Kaska K, Osula A., Stinnissen J. (2013) The Cyber Defence Unit of the Estonian Defense League NATO Cooperative Cyber Defense Centre of Excellence. Tallinn, Estonia. Retrieved September 25, 2019. https://ccdcoe.org/uploads/2018/10/CDU_Analysis.pdf; Pernik P. (2018, December). Preparing for Cyber Conflict: Case Studies of Cyber Command. International Centre for Defense and Security. Retrieved on September 25, 2019. https://icds.ee/wp-content/uploads/2018/12/ICDS_Report_Preparing_for_Cyber_Conflict_Piret_Pernik_December_2018-1.pdf

[5] (2019, July 03) The Government of the Republic of Lithuania: Ruling on the Approval of the Interinstitutional Action Plan for the Implementation of National Cybernet Security Strategy. Lithuanian Parliament. Retrieved September 25, 2019. https://e-seimas.lrs.lt/portal/legalAct/lt/TAD/faeb5eb4a6c811e9aab6d8dd69c6da66?jfwid=dg8d31595

[6] Applegate S. (2011, September/October) Cybermilitias and Political Hackers- Use of Irregular Forces in Cyberwarfare. IEEE Security and Privacy. Retrieved on September 25, 2019. https://www.researchgate.net/publication/220497000_Cybermilitias_and_Political_Hackers_Use_of_Irregular_Forces_in_Cyberwarfare

[7] Ruiz M. (2018, January 9) Is Estonia’s Approach to Cyber Defense Feasible in the United States? War on the Rocks. Retrieved September 25, 2019. https://warontherocks.com/2018/01/estonias-approach-cyber-defense-feasible-united-states/; Drozdiak N. (2019, February 11) One of Russia’s Neighbors Has Security Lessons for the Rest of Us. Bloomberg. Retrieved on September 25, 2019. https://www.bloomberg.com/news/articles/2019-02-11/a-russian-neighbor-has-cybersecurity-lessons-for-the-rest-of-us


An Assessment of the Current State of U.S. Cyber Civil Defense

Lee Clark is a cyber intelligence analyst currently working on cyber defense strategy in the Middle East.  He holds an MA in intelligence and international security from the University of Kentucky’s Patterson School. He can be found on Twitter at @InktNerd.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of the Current State of U.S. Cyber Civil Defense

Date Originally Written:  September 11, 2019.

Date Originally Published:  November 22, 2019.

Author and / or Article Point of View:  The author is an early-career cybersecurity analyst with experience advising private and public sector organizations on cyber threats and building cyber threat intelligence programs.

Summary:  Local civic organizations in the U.S. are experiencing a wave of costly and disruptive low-sophistication cyberattacks on a large scale, indicating widespread vulnerabilities in networks. In light of past and ongoing threats to U.S. cyber systems, especially election systems, this weak cybersecurity posture represents a serious national security concern.

Text:  The state of cyber defenses among public sector entities in the United States is less than ideal. This is especially true among smaller civic entities such as city utility companies, local government offices (including local election authorities), and court systems. A wave of cyberattacks against government systems is ongoing in cities across the U.S. In 2019, more than 40 local government organizations experienced successful ransomware attacks[1]. These widespread attacks indicate an attractive attack surface and vulnerable profile to potential cyber aggressors, which has broad implications for the security of U.S. cyber systems, including election systems.

Ransomware is a vector of cyberattack in which malicious actors compromise a victim’s computer and encrypt all available files, then offer the victim a decryption key in exchange for a ransom payment, typically in the form of a cryptocurrency such as Bitcoin. If victims refuse to pay or cannot pay, the files are left encrypted and the infected computer(s) are rendered useless. In some cases, files can be decrypted by specialists without paying the ransom. In other cases, even if victims pay, there is in reality no decryption key and files are permanently locked.
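The core mechanic described above can be sketched with a toy symmetric cipher. This is a minimal illustration only, assuming a simple XOR keystream for brevity; real ransomware uses strong ciphers (typically AES, often with the file key itself encrypted under an attacker-held RSA key), but the central point is the same: without the attacker’s key, the encrypted files are unrecoverable.

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte against the repeating key. XOR is its own inverse,
    # so the same function both "encrypts" and "decrypts".
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The attacker generates a random key the victim never sees.
key = secrets.token_bytes(32)
plaintext = b"contents of an important municipal record"

ciphertext = xor_bytes(plaintext, key)
assert ciphertext != plaintext                     # file is now unreadable

# With the key (ransom paid, or key recovered by specialists),
# restoring the original file is trivial.
assert xor_bytes(ciphertext, key) == plaintext

# A guessed key yields garbage, not the original data.
assert xor_bytes(ciphertext, secrets.token_bytes(32)) != plaintext
```

The asymmetry of effort is what makes the attack economical: encryption is cheap for the attacker, while recovery without the key is computationally infeasible for the victim.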

Ransomware is among the most common and least sophisticated forms of cyberattack in the field today. Attacks of this type have grown exponentially in recent years, and one study found that in 2019, 18% of all cyber-related insurance claims internationally were linked to ransomware incidents, second only to business email compromises[2]. In some cases, insurance companies were found encouraging clients to pay ransoms because paying was cheaper than remediation; this practice also sustains the criminal enterprise and, in turn, expands the market for cyber insurance services[3].

Ransomware attacks are relatively easy to execute on the part of attackers, and target computers can often be infected by tricking a victim into clicking on a malicious link in a phishing email disguised as a legitimate business communication. For example, in 2018, city computer networks in Allentown, Pennsylvania were offline for weeks after ransomware infected the system through an employee’s email after the employee failed to install security updates and clicked on a phishing email. The attack cost the city around USD 1 million to resolve, and ongoing security improvements are costing approximately USD 420,000 per year[4].

Local city systems make for attractive targets for cyber attackers for several reasons: 

1) Such organizations often carry cyber insurance, indicating an ability to pay and a higher likelihood of attackers being paid quickly without difficulty.

2) Local government offices have a reputation for being soft targets, often with lax and/or outdated security software and practices.

3) Infecting systems requires very little investment of resources on the attacker’s part, such as time, technical skill, focus, and labor, since phishing emails are often sufficient to gain access to targeted networks.

4) Executing successful attacks against such organizations often results in widespread media attention and tangible damages, including monetary cost to the organization, disruption to services, and public backlash, all of which enhance the attacker’s reputation in criminal communities.

Because of the ongoing prevalence of ransomware attacks, U.S. officials recently voiced public concern about the plausibility of ransomware attacks against election systems during the 2020 elections[5]. A chief concern is that if attackers have enough systems access to lock the files, the attackers very likely also have the ability to alter and/or steal files from an infected system. This concern is compounded by recent revelations by the Senate Select Committee on Intelligence that Russian-linked threat actors targeted election systems in all 50 states in 2016, most successfully in Illinois and Arizona[6]. 

It should be noted that U.S. federal agencies and private consulting firms have engaged in a large-scale effort to increase security measures of election systems since 2016 in preparation for the 2020 election, including hiring specialists and acquiring new voting machines[7]. The specifics, technical details, and effectiveness of these efforts are difficult to properly measure from open source materials, but have drawn criticism for their limited scope[8].

In the U.S., election security is among the most complex and difficult challenges facing the cybersecurity field. Elections involve countless competing and interacting stakeholders, intricate federal and local regulations, numerous technologies of varying complexity, as well as legal and ethical norms and expectations. These nuances combine to present a unique challenge to U.S. national security concerns, especially from a cyber-viewpoint. It is a matter of public record that U.S. election systems are subject to ongoing cyber threats from various actors. Some known threats operate with advanced tactics, techniques, procedures, and resources supported by technologically-sophisticated nation states. 

The recent wave of ransomware attacks on local governments compounds election security concerns because the U.S. election system relies heavily on local government organizations like county clerk and poll offices. Currently, local systems are demonstrably vulnerable to common and low-effort attacks, and will remain so without significant national-level efforts. If local defenses are not developed enough to resist a ransomware attack delivered in a phishing email, it is difficult to imagine a county clerk’s office in Ohio or Kentucky having sufficient cyber defenses to repel a sophisticated attack by a Russian or Chinese-backed advanced persistent threat group. 

After the beginning of the nuclear arms race in the second half of the 20th century, the U.S. government developed a national civil defense program by which to prepare local jurisdictions for nuclear attacks. This effort was prominent in the public mind and expensive to execute. Lessons from this national civil defense program may be of value to adequately prepare U.S. civic cyber systems to effectively resist both low and high-sophistication cyber intrusions.

Unlike nuclear civil defense, which has been criticized for achieving questionable results in terms of effective defense, cyber civil defense effectiveness could be benchmarked and measured in tangible ways. While no computer system can be entirely secure, strong indicators of an effective cybersecurity posture include up-to-date software, regular automatic security updates, periodic security audits and vulnerability scans, established standard operating procedures and best practices (including employee cyber awareness training), and a well-trained and adequately-staffed cybersecurity team in-house.
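One way to make such benchmarking tangible is a weighted checklist over the indicators listed above. The field names and weights below are hypothetical illustrations for this sketch, not an established standard; real posture frameworks (e.g., the CIS Critical Security Controls) are far more granular.

```python
# Hypothetical weights over the posture indicators named in the text;
# a real benchmark would break each into many measurable sub-controls.
POSTURE_CHECKS = {
    "software_up_to_date": 25,
    "automatic_security_updates": 15,
    "periodic_audits_and_scans": 20,
    "documented_sops_and_training": 20,
    "staffed_security_team": 20,
}

def posture_score(org: dict) -> int:
    """Sum the weights of every check the organization satisfies (0-100)."""
    return sum(w for check, w in POSTURE_CHECKS.items() if org.get(check))

# A hypothetical county clerk's office: patched software but no audits,
# procedures, or dedicated staff.
clerk_office = {
    "software_up_to_date": True,
    "automatic_security_updates": True,
    "periodic_audits_and_scans": False,
    "documented_sops_and_training": False,
    "staffed_security_team": False,
}
print(posture_score(clerk_office))  # → 40
```

Even this crude scoring shows the value of the approach: unlike nuclear civil defense, a cyber civil defense program could publish per-jurisdiction scores and track measurable improvement year over year.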


Endnotes:

[1] Fernandez, M., Sanger, D. E., & Martinez, M. T. (2019, August 22). Ransomware Attacks Are Testing Resolve of Cities Across America. Retrieved from https://www.nytimes.com/2019/08/22/us/ransomware-attacks-hacking.html

[2] Cimpanu, C. (2019, September 2). BEC overtakes ransomware and data breaches in cyber-insurance claims. Retrieved from https://www.zdnet.com/article/bec-overtakes-ransomware-and-data-breaches-in-cyber-insurance-claims/

[3] Dudley, R. (2019, August 27). The Extortion Economy: How Insurance Companies Are Fueling a Rise in Ransomware Attacks. Retrieved from https://www.propublica.org/article/the-extortion-economy-how-insurance-companies-are-fueling-a-rise-in-ransomware-attacks

[4] Fernandez, M., Sanger, D. E., & Martinez, M. T. (2019, August 22). Ransomware Attacks Are Testing Resolve of Cities Across America. Retrieved from https://www.nytimes.com/2019/08/22/us/ransomware-attacks-hacking.html

[5] Bing, C. (2019, August 27). Exclusive: U.S. officials fear ransomware attack against 2020 election. Retrieved from https://www.reuters.com/article/us-usa-cyber-election-exclusive/exclusive-us-officials-fear-ransomware-attack-against-2020-election-idUSKCN1VG222

[6] Sanger, D. E., & Edmondson, C. (2019, July 25). Russia Targeted Election Systems in All 50 States, Report Finds. Retrieved from https://www.nytimes.com/2019/07/25/us/politics/russian-hacking-elections.html

[7] Pearson, R. (2019, August 5). 3 years after Russian hackers tapped Illinois voter database, officials spending millions to safeguard 2020 election. Retrieved from https://www.chicagotribune.com/politics/ct-illinois-election-security-russian-hackers-20190805-qtoku33szjdrhknwc7pxbu6pvq-story.html 

[8] Anderson, S. R., Lostri, E., Jurecic, Q., & Taylor, M. (2019, July 28). Bipartisan Agreement on Election Security-And a Partisan Fight Anyway. Retrieved from https://www.lawfareblog.com/bipartisan-agreement-election-security-and-partisan-fight-anyway


Options to Bridge the U.S. Department of Defense – Silicon Valley Gap with Cyber Foreign Area Officers

Kat Cassedy is a qualitative analyst with 20 years of work in hard problem solving, alternative analysis, and red teaming.  She currently works as an independent consultant/contractor, with experience in the public, private, and academic sectors.  She can be found on Twitter @Katnip95352013, tweeting on modern #politicalwarfare, #proxywarfare, #NatSec issues, #grayzoneconflict, and a smattering of random nonsense.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The cultural gap between the U.S. Department of Defense and Silicon Valley is significant.  Bridging this gap likely requires more than military members learning tech speak as their primary duties allow.

Date Originally Written:  April 15, 2019. 

Date Originally Published:  April 15, 2019. 

Author and / or Article Point of View:  The author’s point of view is that the cyber-sector may be more akin to a foreign culture than a business segment, and that bridging the growing gulf between the Pentagon and Silicon Valley may require sociocultural capabilities as much or more so than technical or acquisition skills. 

Background:  As the third decade of the digital revolution nears its end, and close to a year after the U.S. Cyber Command was elevated to a Unified Combatant Command, the gap between the private sector’s most advanced technology talents, intellectual property (IP), services, and products and those of the DoD is widening. Although the Pentagon needs and wants Silicon Valley’s IP and capabilities, the technorati are rejecting DoD’s overtures[1] in favor of enormous new markets such as those available in China. In the Information Age, DoD assesses that it needs Silicon Valley’s technology much the way it needed the Middle East’s fossil fuels over the last half century, to maintain U.S. global battlespace dominance. And Silicon Valley’s techno giants, with their respective market caps rivaling or exceeding the Gross Domestic Product of the globe’s most thriving economies, have global agency and autonomy such that they should arguably be viewed as geo-political power players, not simply businesses.  In that context, perhaps it is time to consider 21st century alternatives to the DoD way of thinking of Silicon Valley and its subcomponents as conventional Defense Industrial Base vendors to be managed like routine government contractors.

Significance:  Many leaders and action officers in the DoD community are concerned that Silicon Valley’s emphasis on revenue share and shareholder value is leading it to prioritize relationships with America’s near-peer competitors – most particularly but not limited to China[2] – over working with the U.S. DoD and national security community. “In the policy world, 30 years of experience usually makes you powerful. In the technical world, 30 years of experience usually makes you obsolete[3].” Given the DoD’s extreme reliance on and investment in highly networked and interdependent information systems to dominate the modern global operating environment, the possibility that U.S. companies are choosing foreign adversaries as clients and partners over the U.S. government is highly concerning. If this technology shift away from U.S. national security concerns continues: 1) U.S. companies may soon be providing adversaries with advanced capabilities that run counter to U.S. national interests[4]; 2) even where these companies continue to provide products and services to the U.S., there is an increased concern about counter-intelligence vulnerabilities in U.S. Government (USG) systems and platforms due to technology supply chain vulnerabilities[5]; and 3) key U.S. tech startup and emerging technology companies are accepting venture capital, seed, and private equity investment from investors whose ultimate beneficial owners trace back to foreign sovereign and private wealth sources that are of concern to the national security community[6].

Option #1:  To bridge the cultural gap between Silicon Valley and the Pentagon, the U.S. Military Departments will train, certify, and deploy “Cyber Foreign Area Officers” or CFAOs.  These CFAOs would align with DoD Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs[7]” and, within the cyber and Silicon Valley context, do the same as a traditional FAO and “provide expertise in planning and executing operations, to provide liaison with foreign militaries operating in coalitions with U.S. forces, to conduct political-military activities, and to execute military-diplomatic missions.”

Risk:  DoD treating multinational corporations like nation states risks further decreasing or eroding the recognition of nation states as bearing ultimate authority.  Additionally, there is risk that the checks and balances specifically within the U.S. between the public and private sectors will tip irrevocably towards the tech sector and set the sector up as a rival for the USG in foreign and domestic relationships. Lastly, success in this approach may lead to other business sectors/industries pushing to be treated on par.

Gain:  Having DoD establish a CFAO program would serve to put DoD-centric cyber/techno skills in a socio-cultural context, to aid in Silicon Valley sense-making, narrative development/dissemination, and to establish mutual trusted agency. In effect, CFAOs would act as translators and relationship builders between Silicon Valley and DoD, with the interests of all the branches of service fully represented. Given the routine real-world and fictional depictions of Silicon Valley and DoD as hailing from figuratively different worlds, using a FAO construct to break through this recognized barrier may be a case of USG policy retroactively catching up with present reality. Further, considering the national security threats that loom from the DoD losing its technological superiority, perhaps the potential gains of this option outweigh its risks.

Option #2:  Maintain the status quo, where DoD alternates between treating Silicon Valley as a necessary but sometimes errant supplier and seeking to emulate Silicon Valley’s successes and culture within existing DoD constructs.

Risk:  Possibly the greatest risk in continuing the path of the current DoD approach to the tech world is the loss of the advantage of technical superiority through speed of innovation, due to mutual lack of understanding of priorities, mission drivers, objectives, and organizational design.  Although a number of DoD acquisition reform initiatives are gaining some traction, conventional thinking is that DoD must acquire technology and services through a lengthy competitive bid process, which once awarded, locks both the DoD and the winner into a multi-year relationship. In Silicon Valley, speed-to-market is valued, and concepts pitched one month may be expected to be deployable within a few quarters, before the technology evolves yet again. Continual experimentation, improvisation, adaptation, and innovation are at the heart of Silicon Valley. DoD wants advanced technology, but they want it scalable, repeatable, controllable, and inexpensive. These are not compatible cultural outlooks.

Gain:  Continuing the current course of action has the advantage of familiarity, where the rules and pathways are well-understood by DoD and where risk can be managed. Although arguably slow to evolve, DoD acquisition mechanisms are on solid legal ground regarding use of taxpayer dollars, and program managers and decision makers alike are quite comfortable in navigating the use of conventional DoD acquisition tools. This approach represents good fiscal stewardship of DoD budgets.

Other Comments:  None. 

Recommendation:  None.  


Endnotes:

[1] Malcomson, S. Why Silicon Valley Shouldn’t Work With the Pentagon. New York Times. 19APR2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/04/19/opinion/silicon-valley-military-contract.html.

[2] Hsu, J. Pentagon Warns Silicon Valley About Aiding Chinese Military. IEEE Spectrum. 28MAR2019. Retrieved 15APR2019, from https://spectrum.ieee.org/tech-talk/aerospace/military/pentagon-warns-silicon-valley-about-aiding-chinese-military.

[3] Zegart, A and Childs, K. The Growing Gulf Between Silicon Valley and Washington. The Atlantic. 13DEC2018. Retrieved 15APR2019, from https://www.theatlantic.com/ideas/archive/2018/12/growing-gulf-between-silicon-valley-and-washington/577963/.

[4] Copestake, J. Google China: Has search firm put Project Dragonfly on hold? BBC News. 18DEC2018. Retrieved 15APR2019, from https://www.bbc.com/news/technology-46604085.

[5] Mozur, P. The Week in Tech: Fears of the Supply Chain in China. New York Times. 12OCT2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/10/12/technology/the-week-in-tech-fears-of-the-supply-chain-in-china.html.

[6] Northam, J. China Makes A Big Play In Silicon Valley. National Public Radio. 07OCT2018. Retrieved 15APR2019, from https://www.npr.org/2018/10/07/654339389/china-makes-a-big-play-in-silicon-valley.

[7] Department of Defense Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs,” April 28, 2005.  Retrieved 15APR2019, from https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/131517p.pdf.


An Assessment of North Atlantic Treaty Organization Cyber Strategy and Cyber Challenges

Ali Crawford has an M.A. from the Patterson School of Diplomacy and International Commerce where she focused on diplomacy, intelligence, cyber policy, and cyber warfare.  She tweets at @ali_craw.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of North Atlantic Treaty Organization Cyber Strategy and Cyber Challenges

Date Originally Written:  December 5, 2018.

Date Originally Published:  January 14, 2019.

Summary:  Cyber capabilities are changing the character of warfare.  Nations procure and develop cyber capabilities aimed at committing espionage, subversion, and compromising the integrity of information.  The North Atlantic Treaty Organization has evolved to meet these modern challenges by consistently implementing new policies, creating governing structures, and providing education to member-states.

Text:  In 2002, leaders from various nations met in Prague to discuss security challenges at a North Atlantic Treaty Organization (NATO) summit.  Agenda items included enhancing capabilities to more appropriately respond to terrorism and the proliferation of weapons of mass destruction, considering the pending memberships of several Eastern European nations, and, for the first time in NATO history, a pledge to strengthen cyber defenses.  Since 2002, NATO has updated its cyber policies to more accurately reflect the challenges of a world that is almost exclusively and continuously engaged in hybrid warfare. 

As NATO is a defensive organization, its primary focus is collective defense, crisis management, and cooperative security.  Early cyber policy was devoted exclusively to better network defense, but resources were limited; strategic partnerships had not yet been developed; and structured frameworks for policy applications did not exist.  When Russian Distributed Denial-of-Service (DDoS) attacks temporarily disrupted Estonian banking and business sectors in 2007, the idea of collective defense in cyberspace gained new urgency.  Later, in 2008, another wave of vigorous and effective Russian DDoS attacks preceded a kinetic military invasion of Georgia.  This onslaught of cyber warfare, arguably the first demonstration of cyber power used in conjunction with military force, prompted NATO to revisit cyber defense planning[1].  Today, several departments are devoted to the strategic and tactical governance of cybersecurity and policy. 

NATO’s North Atlantic Council (NAC) provides high-level political oversight on all policy developments and implementation[2].  Under the NAC rests the Cyber Defence Committee which, although subordinate to the NAC, leads most cyber policy decision-making.  At the tactical level, NATO introduced Cyber Rapid Reaction Teams (CRRTs) in 2012, which are responsible for cyber defense at all NATO sites[3].  The CRRTs are the first to respond to any cyber attack.  The Cyber Defence Management Board (CDMB), formerly known as the Defence Policy and Planning Committee (Cyber Defence), maintains responsibility for coordinating cyber defense activities among NATO’s civil and military bodies[4].  The CDMB also serves as the most senior advisory board to the NAC.  Additionally, the NATO Consultation, Control, and Command Board serves as the main authority and consultative body regarding all technical aspects and implementation of cyber defense[5]. 

In 2008 at the Bucharest Summit, NATO adopted its first political body of literature concerning cyber defense policy, which primarily affirmed member nations’ shared responsibility to develop and defend their networks while adhering to international law[6].  Later, in 2010, the NAC was tasked with developing a more comprehensive cyber defense strategy, which eventually led to an updated Policy on Cyber Defense in 2011 to reflect the rapidly evolving threat of cyber attacks[7].  NATO would continue to evolve in the following years.  In 2014, NATO began establishing working partnerships with industry leaders in cybersecurity, the European Union, and the European Defense Agency[8].  When NATO defense leaders met again at the Warsaw Summit in 2016, the Alliance agreed to name cyberspace a domain of warfare in which NATO’s full spectrum of defensive capabilities applies[9]. 

Despite major policy developments and resource advancements, NATO still faces several challenges in cyberspace.  Some obstacles are unavoidable and specific to the Internet of Things, which generally refers to a network of devices, vehicles, and home appliances that contain electronics, software, actuators, and connectivity allowing them to connect, interact, and exchange data.  First, misattribution is a likely problem.  Attribution is the process of linking a group, nation, or state actor to a specific cyber attack[10].  Actors take unique precautions to remain anonymous in their efforts, which creates ambiguities and headaches for the response teams investigating a particular cyber attack’s origin.  Incorrectly designating a responsible party may cause unnecessary tension or conflict. 

Second, as with any computer system or network, cyber defenses are only as strong as their weakest link.  On average, NATO defends against 500 attempted cyber attacks each month[11].  Ultimately, the top priority is management and security of Alliance-owned security infrastructure.  However, because NATO is a collection of member states with varying cyber capabilities and resources, security is uneven.  As such, each member nation is responsible for the safety and security of its own networks.  NATO does not provide security capabilities or resources for its members, but it does prioritize education, training, wargaming, and information-sharing[12].

To the east of NATO, Russia’s aggressive and tenacious approach to gaining influence in Eastern Europe and beyond has frustrated the Alliance and its strategic partners.  As demonstrated in Estonia and Georgia, Russia’s cyber power is equally frustrating, as Russia views cyber warfare as a component of a larger information war to control the flow and perception of information and distract, degrade, or confuse opponents[13].  U.S. Army General Curtis Scaparrotti sees Russia using cyber capabilities to operate under the legal and policy thresholds that define war. 

A perplexing question is the potential invocation of NATO Article 5 after a particularly crippling cyber attack on a member nation.  Article 5 binds all Alliance members to the collective defense principle, stating that an attack on one member nation is an attack on the Alliance[14].  Article 5 has been invoked only once in NATO history, following the September 11 terror attacks in the United States[15].  The idea of proportional retaliation often arises in cyber warfare debates.  A retaliatory response from NATO is also complicated by potential misattribution.

Looking ahead, it appears that NATO is moving towards an active cyber defense approach.  Active defense is a relatively new strategy: a set of measures designed to engage, seek out, and proactively combat threats[16].  Active defense does have significant legal implications, as it transcends the boundary between legal operations and “hacking back.”  Regardless, in 2018 NATO leadership agreed upon the creation and implementation of a Cyber Command Centre that would be granted the operational authority to draw upon the cyber capabilities of its members, such as the United States and Great Britain[17].  Cyber deterrence, as opposed to strictly defense, is attractive because it has relatively low barriers to entry and would allow the Alliance to seek out and neutralize threats or even to counter Russian information warfare campaigns.  The Command Centre is scheduled to be fully operational by 2023, so NATO still has a few years to hammer out specific details concerning the thin line between cyber defense and offense. 

The future of cyber warfare is uncertain and highly unpredictable.  Some experts, such as Thomas Rid, argue that real cyber war will never happen, while others consider that a true act of cyber war will be one that results in the direct loss of human life[18].  Like other nations grappling with cyber policy decision-making, NATO leadership will need to form a consensus on the applicability of Article 5, what precisely constitutes a serious cyber attack, and whether the Alliance is willing to engage in offensive cyber operations.  Despite these open questions, the Alliance has developed a comprehensive cyber strategy devoted to maintaining the confidentiality, integrity, and accessibility of sensitive information. 


Endnotes:

[1] Smith, David J., Atlantic Council: Russian Cyber Strategy and the War Against Georgia, 17 January 2014, retrieved from http://www.atlanticcouncil.org/blogs/natosource/russian-cyber-policy-and-the-war-against-georgia; and White, Sarah P., Modern War Institute: Understanding Cyber Warfare: Lessons From the Russia-Georgia War, 20 March 2018, retrieved from https://mwi.usma.edu/understanding-cyberwarfare-lessons-russia-georgia-war/

[2] North Atlantic Treaty Organization, Cyber defence, 16 July 2018, retrieved from https://www.nato.int/cps/en/natohq/topics_78170.htm

[3] North Atlantic Treaty Organization, Cyber defence, 16 July 2018, retrieved from https://www.nato.int/cps/en/natohq/topics_78170.htm

[4] Ibid.

[5] Ibid.

[6] Ibid.

[7] North Atlantic Treaty Organization, Cyber defence, 16 July 2018, retrieved from https://www.nato.int/cps/en/natohq/topics_78170.htm

[8] Ibid.; and NATO Cooperative Cyber Defence Center for Excellence, History, last updated 3 November 2015, https://ccdcoe.org/history.html

[9] North Atlantic Treaty Organization, Cyber defence, 16 July 2018, retrieved from https://www.nato.int/cps/en/natohq/topics_78170.htm

[10] Symantec, The Cyber Security Whodunnit: Challenges in Attribution of Targeted Attacks, 3 October 2018, retrieved from https://www.symantec.com/blogs/expert-perspectives/cyber-security-whodunnit-challenges-attribution-targeted-attacks

[11] Soesanto, S., Defense One: In Cyberspace, Governments Don’t Know How to Count, 27 September 2018, retrieved from: https://www.defenseone.com/ideas/2018/09/cyberspace-governments-dont-know-how-count/151629/; and North Atlantic Treaty Organization, Cyber defence, last modified 18 February 2018, retrieved from https://www.nato.int/nato_static_fl2014/assets/pdf/pdf_2018_02/20180213_1802-factsheet-cyber-defence-en.pdf

[12] North Atlantic Treaty Organization, Cyber defence, last modified 18 February 2018, retrieved from https://www.nato.int/cps/en/natohq/topics_78170.htm

[13] U.S. Department of Defense, “NATO moves to combat Russian hybrid warfare,” 29 September 2018, retrieved from https://dod.defense.gov/News/Article/Article/1649146/nato-moves-to-combat-russian-hybrid-warfare/

[14] North Atlantic Treaty Organization, Collective defence – article 5, 12 June 2018, retrieved from https://www.nato.int/cps/en/natohq/topics_110496.htm

[15] Ibid.

[16] Davis, D., Symantec: Navigating The Risky Terrain of Active Cyber Defense, 29 May 2018, retrieved from https://www.symantec.com/blogs/expert-perspectives/navigating-risky-terrain-active-cyber-defense

[17] Emmott, R., Reuters: NATO Cyber Command to be fully operational in 2023, 16 October 2018, retrieved from https://www.reuters.com/article/us-nato-cyber/nato-cyber-command-to-be-fully-operational-in-2023-idUSKCN1MQ1Z9

[18] North Atlantic Treaty Organization, “Cyber War Will Not Take Place”: Dr Thomas Rid presents his book at NATO Headquarters,” 7 May 2013, retrieved from https://www.nato.int/cps/en/natolive/news_100906.htm

 

Ali Crawford Assessment Papers Below Established Threshold Activities (BETA) Cyberspace North Atlantic Treaty Organization Policy and Strategy

An Assessment of the 2018 U.S. Department of Defense Cyber Strategy Summary

Doctor No has worked in the Cybersecurity field for more than 15 years.  He has also served in the military.  He has a keen interest in following the latest developments in foreign policy, information security, intelligence, military, space and technology-related issues.  You can follow him on Twitter @DoctorNoFI.  The author wishes to remain anonymous due to the work he is doing.  The author also wishes to thank @LadyRed_6 for help in editing.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  An Assessment of the 2018 U.S. Department of Defense Cyber Strategy Summary

Date Originally Written:  November 11, 2018.

Date Originally Published:  December 3, 2018.

Summary:  On September 18, 2018, the U.S. Department of Defense (DoD) released a summary of its new Cyber Strategy.  While the summary indicates that the new document is more aggressive than the 2015 strategy, that is not surprising, as President Donald Trump differs significantly from President Barack Obama.  Additionally, many adversary vulnerabilities will likely be exploited under this new strategy.

Text:  The U.S. DoD released a summary of its new Cyber Strategy on September 18, 2018[1].  This 2018 strategy supersedes the 2015 version.  Before looking at what has changed between the 2015 strategy and the new one, it is important to recap what has happened during the 2015-2018 timeframe.  In 2015, President Obama met with China’s President Xi Jinping, and one of the issues discussed was China’s aggressive cyber attacks and intelligence gathering targeting the U.S. Government, and similar activities targeting the intellectual property of U.S. companies.  The meeting, and the sanctions before it, did bear some fruit, as information security company FireEye reported that cyber attacks from China against the U.S. decreased after that meeting[2].

Russia, on the other hand, has increased cyber operations against the U.S. and other nations.  During 2014 in Ukraine, Russia seized Crimea, participated in military operations in Eastern Ukraine, and also demonstrated its cyber capabilities during these conflicts.  Perhaps the most significant cyber capability demonstrated by Russia was the hacking and immobilizing of the Ukrainian power grid in December 2015[3].  This event was significant in that it attacked a critical part of another country’s essential infrastructure.

The cyber attack that received the most media coverage likely happened in 2016.  The media was shocked when Russians hacked the U.S. Democratic National Committee[4] and used that data against Presidential candidate and former Secretary of State Hillary Clinton, specifically on social media during the U.S. Presidential election[5].

The U.S. had its own internal cyber-related problems as well.  “Whistleblower” Reality Winner[6] and the criminal negligence of Nghia Hoang Pho[7] have somewhat damaged the National Security Agency’s (NSA) capabilities to conduct cyber operations.  The Nghia Hoang Pho case was probably the most damaging, as it leaked NSA’s Tailored Access Operations attacking tools to adversaries.  During this timeframe the U.S. Government also prohibited the use of Kaspersky Lab’s security products[8] in its computers due to security concerns.

Also worthy of note is that the U.S. administration has changed how it conducts diplomacy and handles military operations.  Some have said that during President Obama’s tenure his administration micromanaged military operations[9].  This changed when President Trump came to the White House, as he gave the U.S. military more freedom to conduct military operations and intelligence activities.

Taking these events into account, it is not surprising that the new DoD Cyber Strategy is more aggressive in its tone than the previous one.  Its statement to “defend forward to disrupt or halt malicious cyber activity at its source,” is perhaps the most interesting.  Monitoring adversaries is not new in U.S. actions, as the Edward Snowden leaks have demonstrated.  The strategy also names DoD’s main adversaries, mainly China and Russia, which in some fields can be viewed as near-peer adversaries.  The world witnessed a small example of what to expect as part of this new strategy when U.S. Cyber Command warned suspected Russian operatives of upcoming election meddling[10].

Much has been discussed about U.S. reliance on the Internet, but many forget that near-peer adversaries like China and Russia face similar issues.  What China and Russia perhaps fear the most is a so-called Orange Revolution[11] or Arab Spring-style[12] event that can be inspired by Internet content.  Fear of revolution leads China and Russia to control and monitor much of their populations’ access to Internet resources via the Great Firewall of China[13] and Russia’s SORM[14].  Financial and market data, also residing on the Internet, present a vulnerability to Russia and China.  Much of the energy sector in these countries also operates and monitors its equipment through Internet-connected resources.  All of these areas provide the U.S. and its allies a perfect place to conduct Computer Network Attack (CNA) and Computer Network Exploitation (CNE) operations, against both state and non-state actors, in pursuit of U.S. foreign policy goals.  It is worth noting that Britain, arguably the closest ally of the U.S., is also investing in Computer Network Operations, with emphasis on CNA and CNE capabilities against, for example, Russia’s energy sector.  How much of its cyber capabilities the U.S. is actually willing to reveal remains to be seen.

Beyond these changes to the new DoD Cyber Strategy, the rest of the document follows the same paths as the previous one.  The new strategy continues the previous themes of increasing information sharing with allies, improving cybersecurity in critical parts of the homeland, increasing DoD resources, and increasing DoD cooperation with private industry that works with critical U.S. resources.

The new DoD Cyber Strategy is sound: it provides more maneuver room for the military, and its content will likely be of value to private companies as they consider what cybersecurity measures to implement on their own systems.


Endnotes:


[1] U.S. Department of Defense. (2018). Summary of the Department of Defense Cyber Strategy. Retrieved from https://media.defense.gov/2018/Sep/18/2002041658/-1/-1/1/CYBER_STRATEGY_SUMMARY_FINAL.PDF

[2] Fireeye. (2016, June). REDLINE DRAWN: CHINA RECALCULATES ITS USE OF CYBER ESPIONAGE. Retrieved from https://www.fireeye.com/content/dam/fireeye-www/current-threats/pdfs/rpt-china-espionage.pdf

[3] Zetter, K. (2017, June 03). Inside the Cunning, Unprecedented Hack of Ukraine’s Power Grid. Retrieved from https://www.wired.com/2016/03/inside-cunning-unprecedented-hack-ukraines-power-grid/

[4] Lipton, E., Sanger, D. E., & Shane, S. (2016, December 13). The Perfect Weapon: How Russian Cyberpower Invaded the U.S. Retrieved from https://www.nytimes.com/2016/12/13/us/politics/russia-hack-election-dnc.html

[5] Office of the Director of National Intelligence. (2017, January 6). Background to “Assessing Russian Activities and Intentions in Recent US Elections”: The Analytic Process and Cyber Incident Attribution. Retrieved from https://www.dni.gov/files/documents/ICA_2017_01.pdf

[6] Philipps, D. (2018, August 23). Reality Winner, Former N.S.A. Translator, Gets More Than 5 Years in Leak of Russian Hacking Report. Retrieved from https://www.nytimes.com/2018/08/23/us/reality-winner-nsa-sentence.html

[7] Cimpanu, C. (2018, October 01). Ex-NSA employee gets 5.5 years in prison for taking home classified info. Retrieved from https://www.zdnet.com/article/ex-nsa-employee-gets-5-5-years-in-prison-for-taking-home-classified-info/

[8] Volz, D. (2017, December 12). Trump signs into law U.S. government ban on Kaspersky Lab software. Retrieved from https://www.reuters.com/article/us-usa-cyber-kaspersky/trump-signs-into-law-u-s-government-ban-on-kaspersky-lab-software-idUSKBN1E62V4

[9] Altman, G. R., & III, L. S. (2017, August 08). The Obama era is over. Here’s how the military rates his legacy. Retrieved from https://www.militarytimes.com/news/2017/01/08/the-obama-era-is-over-here-s-how-the-military-rates-his-legacy/

[10] Barnes, J. E. (2018, October 23). U.S. Begins First Cyberoperation Against Russia Aimed at Protecting Elections. Retrieved from https://www.nytimes.com/2018/10/23/us/politics/russian-hacking-usa-cyber-command.html

[11] Zasenko, O. E., & Kryzhanivsky, S. A. (2018, October 31). Ukraine. Retrieved November 1, 2018, from https://www.britannica.com/place/Ukraine/The-Orange-Revolution-and-the-Yushchenko-presidency#ref986649

[12] History Channel Editors. (2018, January 10). Arab Spring. Retrieved November 1, 2018, from https://www.history.com/topics/middle-east/arab-spring

[13] Chew, W. C. (2018, May 01). How It Works: Great Firewall of China – Wei Chun Chew – Medium. Retrieved November 1, 2018, from https://medium.com/@chewweichun/how-it-works-great-firewall-of-china-c0ef16454475

[14] Lewis, J. A. (2018, October 17). Reference Note on Russian Communications Surveillance. Retrieved November 1, 2018, from https://www.csis.org/analysis/reference-note-russian-communications-surveillance

Assessment Papers Cyberspace Doctor No Policy and Strategy

Assessment of Russia’s Cyber Relations with the U.S. and its Allies

Meghan Brandabur, Caroline Gant, Yuxiang Hou, Laura Oolup, and Natasha Williams were Research Interns at the College of Information and Cyberspace at National Defense University.  Laura Oolup is the recipient of the Andreas and Elmerice Traks Scholarship from the Estonian American Fund.  The authors were supervised in their research by Lieutenant Colonel Matthew Feehan, United States Army and Military Faculty member.  This article was edited by Jacob Sharpe, Research Assistant at the College of Information and Cyberspace.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of Russia’s Cyber Relations with the U.S. and its Allies

Date Originally Written:  August 7, 2018.

Date Originally Published:  October 1, 2018.

Summary:  Russia frequently employs offensive cyber operations to further its foreign policy and strategic goals.  Prevalent targets of Russian activity include the United States and its allies, most recently culminating in cyber-enabled information operations against Western national elections.  Notably, these information operations carry national security implications and highlight the need for proactive measures to deter further Russian offenses.

Text:  The United States and its allies are increasingly at risk from Russian offensive cyber operations (OCOs).  Based on the definition of the Joint Chiefs of Staff, OCOs are operations which aim “to project power in or through cyberspace[1].”  Russia utilizes OCOs to further its desired strategic end state: to be perceived as a great power in a polycentric world order and to wield greater influence in international affairs.  Russia uses a variety of means to achieve this end state, with cyber tools now becoming more frequently employed.

Since the 2007 cyber attacks on Estonia, Russia has used OCOs against the United States, Great Britain, France, and others[2].  These OCOs have deepened existing societal divisions, undermined the liberal democratic order, and increased distrust in political leadership in order to damage European unity and transatlantic relations.  Russian OCOs fall into two categories: those projecting power within cyberspace, which can produce kinetic effects, and those projecting power indirectly through cyberspace.  The latter, in the form of cyber-enabled information operations, have become more prevalent and damaging. 

Throughout the 2016 U.S. Presidential election, Russia conducted an extended cyber-enabled information operation targeting the U.S. political process and certain individuals whom Russia viewed as a threat[3].  Presidential candidate Hillary Clinton, known for her more hawkish views on democracy-promotion, presented a serious political impediment to Russian foreign policy[4].  Thus, Russia’s information operations attempted to thwart Hillary Clinton’s presidential aspirations. 

At the same time, the Russian operation aimed to deepen existing societal divisions along partisan lines and to widen the American public’s distrust in their democratic system of government.  These actions also sought to decrease U.S. primacy abroad by demonstrating how vulnerable the U.S. is to the activity of external actors.  The political reasoning behind Russia’s operations was to promote a favorable environment within which Russian foreign policy and strategic aims could be furthered with the least amount of American resistance.  Russia appeared to see that favorable environment in the election of Donald J. Trump to the U.S. Presidency, a perception reflected in how little Russia did to damage the Trump campaign by either OCO method.

Russia also targeted several European countries to indirectly damage the U.S. and undermine the U.S. position in world affairs.  As such, Russian OCOs conducted in the U.S. and Europe should not be viewed in isolation.  For instance, presidential elections in Ukraine in 2014 and three years later in France saw cyber-enabled information operations favoring far-right, anti-European Union candidates[5]. 

Russia has also attempted to manipulate the results of referendums throughout Europe.  On social media, pro-Brexit cyber-enabled information operations were conducted in the run-up to the vote on the country’s membership in the European Union[6].  In the Netherlands, cyber-enabled information operations sought to push the constituency to vote against the Ukraine-European Union Association Agreement, preventing Ukraine from further integrating into the West and amplifying existing fractures within the European Union[7].

These cyber-enabled information operations, however, are not a new tactic for Russia, but rather a contemporary manifestation of Soviet-era Komitet Gosudarstvennoy Bezopasnosti (K.G.B.) techniques of implementing “aktivniye meropriyatiya,” or “active measures”[8].  These measures aim to “[influence] events” and to “[undermine] a rival power with forgeries,” now through the incorporation of the cyber domain[9]. 

Russia thus demonstrates a holistic approach to information warfare which actively includes cyber, whereas the Western viewpoint distinguishes cyber warfare from information warfare[10].  However, Russia’s cyber-enabled information operations – also perceived as information-psychological operations – demonstrate how cyber is exploited in various forms to execute larger information operations[11].

Although kinetic OCOs remain a concern, the U.S. is less equipped to deal with cyber-enabled information operations[12].  Given Western perceptions that non-kinetic methods such as information operations, now conducted through cyberspace, are historically “not forces in their own right,” Russia is able to exploit these tactics against a lagging U.S. and Western understanding of such capabilities[13].  Certain U.S. political candidates have already been identified as the targets of Russian OCOs intending to interfere with the 2018 U.S. Congressional midterm elections[14].  These information operations pose a great threat to the West and the U.S., especially considering the lack of consensus on assessing and countering information operations directed at the U.S., regardless of any action taken against OCOs. 

Today, cyber-enabled information operations can be seen as not only ancillary to, but substitutable for, conventional military operations[15].  These operations pose considerable security concerns to a targeted country, as they encroach upon its sovereignty and enable Russia to interfere in its domestic affairs.  Without a fully developed strategy that addresses all types of OCOs, including offenses within cyberspace and the broader information domain, Russia will continue to pose a threat in the cyber domain. 


Endnotes:

[1] Joint Chiefs of Staff. (2018). “JP 3-12, Cyberspace Operations”, Retrieved July 7, 2018, from http://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_12.pdf?ver=2018-06-19-092120-930, p. GL-5.

[2] For instance: Brattberg, Erik & Tim Maurer. (2018). “Russian Election Interference – Europe’s Counter to Fake News and Cyber Attacks”, Carnegie Endowment for International Peace.; Burgess, Matt. (2017, November 10). “Here’s the first evidence Russia used Twitter to influence Brexit”, Retrieved July 16, 2018 from http://www.wired.co.uk/article/brexit-russia-influence-twitter-bots-internet-research-agency; Grierson, Jamie. (2017, February 12). “UK hit by 188 High-Level Cyber-Attacks in Three Months”, Retrieved July 16, 2018, from https://www.theguardian.com/world/2017/feb/12/uk-cyber-attacks-ncsc-russia-china-ciaran-martin; Tikk, Eneken, Kadri Kaska, Liis Vihul. (2010). International Cyber Incidents: Legal Considerations. Retrieved July 8, 2018, from https://ccdcoe.org/publications/books/legalconsiderations.pdf; Office of the Director of National Intelligence. (2017, January 6). “Background to ‘Assessing Russian Activities and Intentions in Recent US Elections’: The Analytic Process and Cyber Incident Attribution” Retrieved July 9, 2018, from https://www.dni.gov/files/documents/ICA_2017_01.pdf. 

[3] Office of the Director of National Intelligence. (2017, January 6). “Background to ‘Assessing Russian Activities and Intentions in Recent US Elections’: The Analytic Process and Cyber Incident Attribution” Retrieved July 9, 2018 https://www.dni.gov/files/documents/ICA_2017_01.pdf p.1.

[4] Flournoy, Michèle A. (2017).  Russia’s Campaign Against American Democracy: Toward a Strategy for Defending Against, Countering, and Ultimately Deterring Future Attacks Retrieved July 9, 2018, from http://www.jstor.org/stable/j.ctt20q22cv.17, p. 179. 

[5] Nimmo, Ben. (2017, April 20). “The French Election through Kremlin Eyes” Retrieved July 15, 2018, from https://medium.com/dfrlab/the-french-election-through-kremlin-eyes-5d85e0846c50

[6] Burgess, Matt. (2017, November 10). “Here’s the first evidence Russia used Twitter to influence Brexit” Retrieved July 16, 2018, from http://www.wired.co.uk/article/brexit-russia-influence-twitter-bots-internet-research-agency 

[7] Cerulus, Laurens. (2017, May 3). “Dutch go Old School against Russian Hacking” Retrieved August 8, 2018, from https://www.politico.eu/article/dutch-election-news-russian-hackers-netherlands/ ; Van der Noordaa, Robert. (2016, December 14). “Kremlin Disinformation and the Dutch Referendum” Retrieved August 8, 2018, from https://www.stopfake.org/en/kremlin-disinformation-and-the-dutch-referendum/

[8] Osnos, Evan, David Remnick & Joshua Yaffa. (2017, March 6). “Trump, Putin, and the New Cold War” Retrieved July 9, 2018 https://www.newyorker.com/magazine/2017/03/06/trump-putin-and-the-new-cold-war 

[9] Ibid.

[10] Connell, Michael & Sarah Vogler. (2017). “Russia’s Approach to Cyber Warfare” Retrieved July 7, 2018, from  https://www.cna.org/cna_files/pdf/DOP-2016-U-014231-1Rev.pdf ; Giles, Keir. & William Hagestad II (2013). “Divided by a Common Language: Cyber Definitions in Chinese, Russian and English”. In K. Podins, J. Stinissen, M. Maybaum (Eds.), 2013 5th International Conference on Cyber Conflict.  Retrieved July 7, 2018, from  https://ccdcoe.org/publications/2013proceedings/d3r1s1_giles.pdf, pp. 420-423; Giles, Keir. (2016). “Russia’s ‘New’ Tools for Confronting the West – Continuity and Innovation in Moscow’s Exercise of Power” Retrieved July 16, 2018, from https://www.chathamhouse.org/sites/default/files/publications/2016-03-russia-new-tools-giles.pdf, p. 62-63.

[11] Iasiello, Emilio J. (2017). “Russia’s Improved Information Operations: From Georgia to Crimea” Retrieved August 10, 2018 from https://ssi.armywarcollege.edu/pubs/parameters/issues/Summer_2017/8_Iasiello_RussiasImprovedInformationOperations.pdf p. 52. 

[12] Coats, Dan. (2018, July 18). “Transcript: Dan Coats Warns The Lights Are ‘Blinking Red’ On Russian Cyberattacks” Retrieved August 7, 2018, from https://www.npr.org/2018/07/18/630164914/transcript-dan-coats-warns-of-continuing-russian-cyberattacks?t=1533682104637

[13] Galeotti, Mark (2016). “Hybrid, ambiguous, and non-linear? How new is Russia’s ‘new way of war’?” Retrieved July 10, 2018, from Small Wars & Insurgencies, Volume 27(2), p. 291.

[14] Geller, Eric. (2018, July 19) . “Microsoft reveals first known Midterm Campaign Hacking Attempts” Retrieved August 8, 2018, from https://www.politico.com/story/2018/07/19/midterm-campaign-hacking-microsoft-733256 

[15] Inkster, Nigel. (2016). “Information Warfare and the US Presidential Election” Retrieved July 9, 2018, from Survival, Volume 58(5), p. 23-32, 28 https://doi.org/10.1080/00396338.2016.1231527

Caroline Gant Cyberspace Jacob Sharpe Laura Oolup Matthew Feehan Meghan Brandabur Natasha Williams Option Papers Psychological Factors Russia United States Yuxiang Hou

Assessment of the Role of Cyber Power in Interstate Conflict

Eric Altamura is a graduate student in the Security Studies Program at Georgetown University’s School of Foreign Service. He previously served for four years on active duty as an armor officer in the United States Army.  He regularly writes for Georgetown Security Studies Review and can be found on Twitter @eric_senlu.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the Role of Cyber Power in Interstate Conflict

Date Originally Written:  May 5, 2018 / Revised for Divergent Options July 14, 2018.

Date Originally Published:  September 17, 2018.

Summary:  The targeting of computer networks and digitized information during war can prevent escalation by providing an alternative means for states to create the strategic effects necessary to accomplish limited objectives, thereby bolstering the political viability of the use of force as a lever of state power.

Text:  Prussian General and military theorist Carl von Clausewitz wrote that, in reality, one uses “no greater force, and setting himself no greater military aim, than would be sufficient for the achievement of his political purpose.” State actors, thus far, have opted to limit cyberattacks in size and scope pursuant to specific political objectives when choosing to target information for accomplishing desired outcomes. This limiting occurs because, as warfare approaches its unlimited form in cyberspace, computer network attacks increasingly affect the physical domain in areas where societies have become reliant upon IT systems for everyday functions. Many government and corporate network servers host data from industrial control systems (ICS) or supervisory control and data acquisition (SCADA) systems that control power generation, utilities, and virtually all other public services. Broader attacks on an adversary’s networks consequently affect the populations supported by these systems, so the impacts of an attack go beyond simply denying an opponent the ability to communicate through digital networks.

At some point, a threshold exists where it becomes more practical for states to target the physical assets of an adversary directly rather than through information systems. Unlimited cyberattacks on infrastructure would come close to replicating warfare in its total form, in which the goal becomes fully disarming an opponent of its means to generate resistance and states grow more willing to expend resources and effort towards accomplishing their objectives. In this case, cyber power decreases in utility relative to the use of physical munitions (i.e., bullets and bombs) as the scale of warfare increases, mainly due to the lower probability of producing enduring effects in cyberspace. As such, the targeting and attacking of an opponent’s digital communication networks tends to occur in a more limited fashion because alternative levers of state power provide more reliable solutions as warfare nears its absolute form. In other words, cyberspace offers much more value to states seeking to accomplish limited political objectives than to those waging total war against an adversary.

To understand how actors attack computer systems and networks to accomplish limited objectives during war, one must first identify what states actually seek to accomplish in cyberspace. Just as the prominent British naval historian Julian Corbett explains that command of the sea does not entail “the conquest of water territory,” states do not use information technology for the purpose of conquering the computer systems and supporting infrastructure that comprise an adversary’s information network. Furthermore, cyberattacks do not occur in isolation from the broader context of war, nor do they need to result in the total destruction of the enemy’s capabilities to successfully accomplish political objectives. Rather, the tactical objective in any environment is to exploit the activity that takes place within it – in this case, the communication of information across a series of interconnected digital networks – in a way that provides a relative advantage in war. Once the enemy’s communication of information is exploited, and an advantage achieved, states can then use force to accomplish otherwise unattainable political objectives.

Achieving such an advantage requires targeting the key functions and assets in cyberspace that enable states to accomplish political objectives. Italian General Giulio Douhet, an airpower theorist, describes command of the air as “the ability to fly against an enemy so as to injure him, while he has been deprived of the power to do likewise.” Whereas airpower theorists propose targeting airfields alongside destroying airplanes as ways to deny an adversary access to the air, a similar concept prevails with cyber power. To deny an opponent the ability to utilize cyberspace for its own purposes, states can either attack information directly or target the means by which the enemy communicates its information. Once an actor achieves uncontested use of cyberspace, it can subsequently control or manipulate information for its own limited purposes, particularly by preventing the escalation of war toward its total form.

More specifically, the ability to communicate information while preventing an adversary from doing so has a limiting effect on warfare for three reasons. First, access to information through networked communications systems provides a decisive advantage to military forces by allowing for “analyses and synthesis across a variety of domains” that enables rapid and informed decision-making at all echelons. The greater a decision advantage one military force has over another, the less costly military action becomes. Second, the ubiquity of networked information technologies creates an alternative way for actors to affect targets that would otherwise be politically, geographically, or normatively infeasible to target with physical munitions. Finally, actors can mask their activities in cyberspace, which makes attribution difficult. This added layer of ambiguity enables face-saving measures by opponents, who can opt to not respond to attacks overtly without necessarily appearing weak.

In essence, cyber power has become particularly useful for states as a tool for preventing conflict escalation, as an opponent’s ability to respond to attacks becomes constrained when denied access to communication networks. Societies’ dependence on information technology and resulting vulnerability to computer network attacks continues to increase, indicating that interstate violence may become much more prevalent in the near term if aggressors can use cyberattacks to decrease the likelihood of escalation by an adversary.


Endnotes:

[1] von Clausewitz, C. (1976). On War. (M. Howard, & P. Paret, Trans.) Princeton: Princeton University Press.

[2] United States Computer Emergency Readiness Team. (2018, March 15). Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors. (United States Department of Homeland Security) Retrieved May 1, 2018, from https://www.us-cert.gov/ncas/alerts/TA18-074A

[3] Fischer, E. A. (2016, August 12). Cybersecurity Issues and Challenges: In Brief. Retrieved May 1, 2018, from https://fas.org/sgp/crs/misc/R43831.pdf

[4] Corbett, J. S. (2005, February 16). Some Principles of Maritime Strategy. (S. Shell, & K. Edkins, Eds.) Retrieved May 2, 2018, from The Project Gutenberg: http://www.gutenberg.org/ebooks/15076

[5] Ibid.

[6] Douhet, G. (1942). The Command of the Air. (D. Ferrari, Trans.) New York: Coward-McCann.

[7] Singer, P. W., & Friedman, A. (2014). Cybersecurity and Cyberwar: What Everyone Needs to Know. New York: Oxford University Press.

[8] Boyd, J. R. (2010, August). The Essence of Winning and Losing. (C. Richards, & C. Spinney, Eds.) Atlanta.


Assessment of the North Korean Cyberattack on Sony Pictures

Emily Weinstein is a Research Analyst at Pointe Bello and a current M.A. candidate in Security Studies at Georgetown University.  Her research focuses on Sino-North Korean relations, foreign policy, and military modernization.  She can be found on Twitter @emily_sw1.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the North Korean Cyberattack on Sony Pictures

Date Originally Written:  July 11, 2018.

Date Originally Published:  August 20, 2018.

Summary:   The 2014 North Korean cyberattack on Sony Pictures shocked the world into realizing that a North Korean cyber threat truly existed.  Prior to 2014, what little information existed on North Korea’s cyber capabilities was largely dismissed, citing poor domestic conditions as rationale for cyber ineptitude.  However, the impressive nature of the Sony attack was instrumental in changing global understanding of Kim Jong-un and his regime’s daring nature.

Text:  On November 24, 2014 Sony employees discovered a massive cyber breach after an image of a red skull appeared on computer screens company-wide, displaying a warning that threatened to reveal the company’s secrets.  That same day, more than 7,000 employees turned on their computers to find gruesome images of the severed head of Sony’s chief executive, Michael Lynton[1].  These discoveries forced the company to shut down all computer systems, including those in international offices, until the incident was further investigated.  What was first deemed nothing more than a nuisance was later revealed as a breach of international proportions.  Since this incident, the world has noted the increasing prevalence of large-scale digital attacks and the dangers they pose to both private and public sector entities.

According to the U.S. Computer Emergency Readiness Team, the primary malware used in this case was a Server Message Block (SMB) Worm Tool, otherwise known as SVCH0ST.EXE.  An SMB worm is usually equipped with five components: a listening implant, a lightweight backdoor, a proxy tool, a destructive hard drive tool, and a destructive target cleaning tool[2].  The worm spreads throughout the infected network via a brute-force authentication attack, a trial-and-error method of guessing credentials such as a user password or personal identification number.  The worm then connects to its command-and-control infrastructure, where it begins its damage, usually copying malware (software intended to damage or disable computers and computer systems) to the victim or administrator systems via the network sharing process.  Once these tasks are complete, the worm executes the malware using remotely scheduled tasks[3].
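
The brute-force authentication stage described above maps onto a standard defensive countermeasure: throttling or locking accounts after repeated failed logins inside a sliding time window.  The sketch below is purely illustrative; the window and threshold values are assumptions for demonstration, not parameters drawn from the Sony incident.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 300   # assumed sliding window for counting failures
LOCK_THRESHOLD = 10    # assumed failure count that triggers a lockout

# Per-account timestamps of recent failed login attempts.
_failures: dict = defaultdict(deque)

def record_failed_login(account: str, now: float) -> bool:
    """Record one failed login at time `now`; return True once the
    account has crossed the lockout threshold within the window."""
    attempts = _failures[account]
    attempts.append(now)
    # Discard attempts that have aged out of the sliding window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) >= LOCK_THRESHOLD
```

Brute-force attacks of the kind used by the SMB worm succeed precisely where such throttling is absent or configured too loosely.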

This type of malware is highly destructive.  If an organization is infected, it is likely to experience massive impacts on daily operations, including the loss of intellectual property and the disruption of critical internal systems[4].  In Sony’s case, on an individual level, hackers obtained and leaked to the general public personal and somewhat embarrassing information about, or said by, Sony personnel, in addition to sensitive or controversial information from private Sony emails.  On the company level, hackers stole diverse information ranging from contracts, salary lists, and budget information to movie plans, including five entire yet-to-be-released movies.  Moreover, Sony internal data centers had been wiped clean and 75 percent of the servers had been destroyed[5].

This hack was attributed to the release of Sony’s movie, The Interview—a comedy depicting U.S. journalists’ plan to assassinate North Korean leader Kim Jong-un.  A group of hackers who self-identified by the name “Guardians of Peace” (GOP) initially took responsibility for the attack; however, attribution remained unsettled, as experts had a difficult time determining the connections and sponsorship of the “GOP” hacker group.  In December 2014, former Federal Bureau of Investigation (FBI) Director James Comey announced that the U.S. government believed the North Korean regime was behind the attack, noting that the Sony hackers had failed to use proxy servers to mask the origin of their attack, revealing Internet Protocol (IP) addresses that the FBI knew to be exclusively used by North Korea[6].

Aside from Director Comey’s statements, other evidence exists that suggests North Korea’s involvement.  For instance, the type of malware deployed against Sony utilized methods similar to malware that North Korean actors had previously developed and used.  Similarly, the computer-wiping software used against Sony was also used in a 2013 attack against South Korean banks and media outlets.  However, most damning of all was the discovery that the malware was built on computers set to the Korean language[7].

As for motive, experts argue that the hack was executed by the North Korean government in an attempt to preserve the image of Kim Jong-un, as protecting their leader’s image is a chief political objective in North Korea’s cyber program.  Sony’s The Interview infantilized Kim Jong-un and disparaged his leadership skills, portraying him as an inept, ruthless, and selfish leader while poking fun at him by depicting him singing Katy Perry’s “Firework” while shooting off missiles.  Kim Jong-un himself has declared that “Cyberwarfare, along with nuclear weapons and missiles, is an ‘all-purpose sword[8],’” so it is not surprising that he would use it to protect his own reputation.

The biggest takeaway from the Sony breach is arguably the U.S. government’s change in attitude towards North Korean cyber capabilities.  In the years leading up to the attack, U.S. analysts were quick to dismiss North Korea’s cyber potential, citing its isolationist tactics, struggling economy, and lack of modernization as rationale for this judgment.  However, following this large-scale attack on a large and prominent U.S. company, the U.S. government has been forced to rethink how it views the Hermit Kingdom’s cyber capabilities.  Former National Security Agency Deputy Director Chris Inglis argues that cyber is a tailor-made instrument of power for the North Korean regime, thanks to its low cost of entry, asymmetrical nature, and degree of anonymity and stealth[9].  Indeed, the North Korean cyber threat has crept up on the U.S., and its intelligence apparatus must continue to work to both counter and better understand North Korea’s cyber capabilities.


Endnotes:

[1] Cieply, M. and Barnes, B. (December 30, 2014). Sony Cyberattack, First a Nuisance, Swiftly Grew Into a Firestorm. Retrieved July 7, 2018, from https://www.nytimes.com/2014/12/31/business/media/sony-attack-first-a-nuisance-swiftly-grew-into-a-firestorm-.html

[2] Lennon, M. (December 19, 2014). Hackers Used Sophisticated SMB Worm Tool to Attack Sony. Retrieved July 7, 2018, from https://www.securityweek.com/hackers-used-sophisticated-smb-worm-tool-attack-sony

[3] Doman, C. (January 19, 2015). Destructive malware—a close look at an SMB worm tool. Retrieved July 7, 2018, from http://pwc.blogs.com/cyber_security_updates/2015/01/destructive-malware.html

[4] United States Computer Emergency Readiness Team (December 19, 2014). Alert (TA14-353A) Targeted Destructive Malware. Retrieved July 7, 2018, from https://www.us-cert.gov/ncas/alerts/TA14-353A

[5] Cieply, M. and Barnes, B. (December 30, 2014). Sony Cyberattack, First a Nuisance, Swiftly Grew Into a Firestorm. Retrieved July 7, 2018, from https://www.nytimes.com/2014/12/31/business/media/sony-attack-first-a-nuisance-swiftly-grew-into-a-firestorm-.html

[6] Greenberg, A. (January 7, 2015). FBI Director: Sony’s ‘Sloppy’ North Korean Hackers Revealed Their IP Addresses. Retrieved July 7, 2018, from https://www.wired.com/2015/01/fbi-director-says-north-korean-hackers-sometimes-failed-use-proxies-sony-hack/

[7] Pagliery, J. (December 29, 2014). What caused Sony hack: What we know now. Retrieved July 8, 2018, from http://money.cnn.com/2014/12/24/technology/security/sony-hack-facts/

[8] Sanger, D., Kirkpatrick, D., and Perlroth, N. (October 15, 2017). The World Once Laughed at North Korean Cyberpower. No More. Retrieved July 8, 2018, from https://mobile.nytimes.com/2017/10/15/world/asia/north-korea-hacking-cyber-sony.html

[9] Ibid.


Options to Manage the Risks of Integrating Artificial Intelligence into National Security and Critical Industry Organizations

Lee Clark is a cyber intelligence analyst.  He holds an MA in intelligence and international security from the University of Kentucky’s Patterson School.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  What are the potential risks of integrating artificial intelligence (AI) into national security and critical infrastructure organizations and potential options for mitigating these risks?

Date Originally Written:  May 19, 2018.

Date Originally Published:  July 2, 2018.

Author and / or Article Point of View:  The author is currently an intelligence professional focused on threats to critical infrastructure and the private sector.  This article will use the U.S. Department of Homeland Security’s definition of “critical infrastructure,” referring to 16 public and private sectors that are deemed vital to the U.S. economy and national functions.  The designated sectors include financial services, emergency response, food and agriculture, energy, government facilities, defense industry, transportation, critical manufacturing, communications, commercial facilities, chemical production, civil nuclear functions, dams, healthcare, information technology, and water/wastewater management[1].  This article will examine some broad options to mitigate some of the most prevalent non-technical risks of AI integration, including legal protections and contingency planning.

Background:  The benefits of incorporating AI into the daily functions of an organization are widely championed in both the private and public sectors.  The technology has the capability to revolutionize facets of government and private sector functions like record keeping, data management, and customer service, for better or worse.  Bringing AI into the workplace has significant risks on several fronts, including privacy/security of information, record keeping/institutional memory, and decision-making.  Additionally, the technology carries a risk of backlash over job losses as automation increases in the global economy, especially for more skilled labor.  The national security and critical industry spheres are not facing an existential threat, but these are risks that cannot be dismissed.

Significance:  Real-world examples of these concerns have been reported in open source with clear implications for major corporations and national security organizations.  In terms of record keeping and surveillance-related issues, one need only look to recent court cases in which authorities subpoenaed the records of an Amazon Alexa, an appliance that acts as a digital personal assistant via a rudimentary AI system.  This subpoena situation becomes especially concerning to users given recent reports of Alexa devices being converted into spying tools[2].  Critical infrastructure organizations, especially defense, finance, and energy companies, exist within complex legal frameworks that involve international laws and security concerns, making legal protections of AI data all the more vital.

In the case of issues involving decision-making and information security, the dangers are no less severe.  AIs are susceptible to a variety of methods that seek to manipulate decision-making, including social engineering and, more specifically, disinformation efforts.  Perhaps the most evident case of social engineering against an AI is an instance in which Microsoft’s AI endorsed genocidal statements after a brief conversation with users on Twitter[3].  If it is possible to convince an AI to support genocide, it is not difficult to imagine the potential to convince it to divulge state secrets or turn over financial information with some key information fed in a meaningful sequence[4].  In another public instance, an Amazon Echo device recently recorded a private conversation in an owner’s home and sent the conversation to another user without requesting permission from the owner[5].  Similar instances are easy to foresee in a critical infrastructure organization such as a nuclear energy plant, in which an AI may send proprietary information to an uncleared user.

AI decisions also have the capacity to surprise developers and engineers tasked with maintenance, which could present problems of data recovery and control.  For instance, developers discovered that Facebook’s AI had begun writing a modified version of a coding language for efficiency, having essentially created its own code dialect, causing transparency concerns.  Losing the ability to examine and assess coding decisions presents problems for replicating processes and maintenance of a system[6].

AI integration into industry also carries a significant risk of backlash from workers.  Economists and labor scholars have been discussing the impacts of automation and AI on employment and labor in the global economy.  This discussion is not merely theoretical, as evidenced by leaders of major tech companies publicly supporting a basic income on the grounds that automation will likely replace a significant portion of the labor market in the coming decades[7].

Option #1:  Leaders in national security and critical infrastructure organizations work with internal legal teams to develop legal protections for organizations while lobbying for legislation to secure legal privileges for information stored by AI systems (perhaps resembling attorney-client privilege or spousal privileges).

Risk:  Legal teams may lack the technical knowledge to foresee some vulnerabilities related to AI.

Gain:  Option #1 proactively builds liability shields, protections, non-disclosure agreements, and other common legal tools to anticipate needs for AI-human interactions.

Option #2:  National security and critical infrastructure organizations build task forces to plan protocols and define a clear AI vision for organizations.

Risk:  In addition to common pitfalls of group work like bandwagoning and groupthink, this option is vulnerable to insider threats like sabotage or espionage attempts.  There is also a risk that such groups may develop plans that are too rigid or short-sighted to be adaptive in unforeseen emergencies.

Gain:  Task forces can develop strategies and contingency plans for when emergencies arise.  Such emergencies could include hacks, data breaches, sabotage by rogue insiders, technical/equipment failures, or side effects of actions taken by an AI in a system.

Option #3:  Organization leaders work with intelligence and information security professionals to try to make AI more resilient against hacker methods, including distributed denial-of-service attacks, social engineering, and crypto-mining.

Risk:  Potential to “over-secure” systems, resulting in loss of efficiency or overcomplicating maintenance processes.

Gain:  Reduced risk of hacks or other attacks from malicious actors outside of organizations.

Other Comments:  None.

Recommendation: None.


Endnotes:

[1] DHS. (2017, July 11). Critical Infrastructure Sectors. Retrieved May 28, 2018, from https://www.dhs.gov/critical-infrastructure-sectors

[2] Boughman, E. (2017, September 18). Is There an Echo in Here? What You Need to Consider About Privacy Protection. Retrieved May 19, 2018, from https://www.forbes.com/sites/forbeslegalcouncil/2017/09/18/is-there-an-echo-in-here-what-you-need-to-consider-about-privacy-protection/

[3] Price, R. (2016, March 24). Microsoft Is Deleting Its AI Chatbot’s Incredibly Racist Tweets. Retrieved May 19, 2018, from http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3

[4] Osaba, O. A., & Welser, W., IV. (2017, December 06). The Risks of AI to Security and the Future of Work. Retrieved May 19, 2018, from https://www.rand.org/pubs/perspectives/PE237.html

[5] Shaban, H. (2018, May 24). An Amazon Echo recorded a family’s conversation, then sent it to a random person in their contacts, report says. Retrieved May 28, 2018, from https://www.washingtonpost.com/news/the-switch/wp/2018/05/24/an-amazon-echo-recorded-a-familys-conversation-then-sent-it-to-a-random-person-in-their-contacts-report-says/

[6] Bradley, T. (2017, July 31). Facebook AI Creates Its Own Language in Creepy Preview Of Our Potential Future. Retrieved May 19, 2018, from https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/

[7] Kharpal, A. (2017, February 21). Tech CEOs Back Call for Basic Income as AI Job Losses Threaten Industry Backlash. Retrieved May 19, 2018, from https://www.cnbc.com/2017/02/21/technology-ceos-back-basic-income-as-ai-job-losses-threaten-industry-backlash.html


An Assessment of Information Warfare as a Cybersecurity Issue

Justin Sherman is a sophomore at Duke University double-majoring in Computer Science and Political Science, focused on cybersecurity, cyberwarfare, and cyber governance. Justin conducts technical security research through Duke’s Computer Science Department; he conducts technology policy research through Duke’s Sanford School of Public Policy; and he’s a Cyber Researcher at a Department of Defense-backed, industry-intelligence-academia group at North Carolina State University focused on cyber and national security – through which he works with the U.S. defense and intelligence communities on issues of cybersecurity, cyber policy, and national cyber strategy. Justin is also a regular contributor to numerous industry blogs and policy journals.

Anastasios Arampatzis is a retired Hellenic Air Force officer with over 20 years’ worth of experience in cybersecurity and IT project management. During his service in the Armed Forces, Anastasios was assigned to various key positions in national, NATO, and EU headquarters, and he’s been honored by numerous high-ranking officers for his expertise and professionalism, including a nomination as a certified NATO evaluator for information security. Anastasios currently works as an informatics instructor at AKMI Educational Institute, where his interests include exploring the human side of cybersecurity – psychology, public education, organizational training programs, and the effects of cultural, cognitive, and heuristic biases.

Paul Cobaugh is the Vice President of Narrative Strategies, a coalition of scholars and military professionals involved in the non-kinetic aspects of counter-terrorism, defeating violent extremism, irregular warfare, large-scale conflict mediation, and peace-building. Paul recently retired from a distinguished career in U.S. Special Operations Command, and his specialties include campaigns of influence and engagement with indigenous populations.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of Information Warfare as a Cybersecurity Issue

Date Originally Written:  March 2, 2018.

Date Originally Published:  June 18, 2018.

Summary:  Information warfare is not new, but the evolution of cheap, accessible, and scalable cyber technologies has greatly enabled it.  The U.S. Department of Justice’s February 2018 indictment of the Internet Research Agency – one of the Russian groups behind disinformation in the 2016 American election – establishes that information warfare is not just a global problem from the national security and fact-checking perspectives, but a cybersecurity issue as well.

Text:  On February 16, 2018, U.S. Department of Justice Special Counsel Robert Mueller indicted 13 Russians for interfering in the 2016 United States presidential election [1]. Beyond the important legal and political ramifications of this event, this indictment should make one thing clear: information warfare is a cybersecurity issue.

It shouldn’t be surprising that Russia created fake social media profiles to spread disinformation on sites like Facebook.  This tactic had been demonstrated for some time, and the Russians have used it in numerous other countries as well[2].  Instead, what’s noteworthy about the investigation’s findings is that Russian hackers also stole the identities of real American citizens to spread disinformation[3].  Whether the Russian hackers compromised accounts through technical hacking, social engineering, or other means, this technique proved remarkably effective; masquerading as American citizens lent significantly greater credibility to trolls (who purposely sow discord on the Internet) and bots (automated information-spreaders) that pushed Russian narratives.

Information warfare has traditionally been viewed as an issue of fact-checking or information filtering, which it certainly still is today.  Nonetheless, traditional information warfare was conducted before the advent of modern cyber technologies, which have greatly changed the ways in which information campaigns are executed.  Whereas historical campaigns took time to spread information and did so through in-person speeches or printed news articles, social media enables instantaneous, low-cost, and scalable access to the world’s populations, as does the simplicity of online blogging and information forgery (e.g., using software to manufacture false images).  Those looking to wage information warfare can do so with relative ease in today’s digital world.

The effectiveness of modern information warfare, then, is heavily dependent upon the security of these technologies and platforms – or, in many cases, the total lack thereof.  In this situation, the success of the Russian hackers was propelled by the average U.S. citizen’s ignorance of basic cyber “hygiene” rules, such as strong password creation.  If cybersecurity mechanisms hadn’t failed to keep these hackers out, Russian “agents of influence” would have gained access to far fewer legitimate social media profiles – making their overall campaign significantly less effective.

To be clear, this is not to blame the campaign’s effectiveness on specific end users; with over 100,000 Facebook accounts hacked every single day, it is not difficult to imagine any other country using this same technique[4].  However, it’s important to understand the relevance of cybersecurity here.  User access control, strong passwords, mandated multi-factor authentication, fraud detection, and identity theft prevention were just some of the cybersecurity best practices that failed to combat Russian disinformation, just as fact-checking mechanisms and counter-narrative strategies did.
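
One of the controls named above, multi-factor authentication, is commonly implemented with time-based one-time passwords (TOTP, RFC 6238): the server and the user’s device share a secret and each independently derives a short code from the current 30-second interval, so a stolen password alone is not enough to log in.  A minimal standard-library sketch follows; it is illustrative only, and a production system would use a vetted library and constant-time comparison.

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): dynamically truncated HMAC-SHA1 of a big-endian counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, timestamp=None, step: int = 30) -> str:
    """TOTP (RFC 6238): HOTP over the number of time steps since the Unix epoch."""
    if timestamp is None:
        timestamp = time.time()
    return hotp(secret, int(timestamp // step))
```

Against the published RFC 6238 SHA-1 test vectors (secret `12345678901234567890`), `totp(..., timestamp=59)` produces the six-digit truncation of the eight-digit reference code 94287082.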

These technical and behavioral failures didn’t just compromise the integrity of information, a pillar of cybersecurity; they also made the campaign far more effective.  As the hackers planned to exploit the polarized election environment, access to American profiles made this far easier: by manipulating and distorting information to make it seem legitimate (i.e., opinions coming from actual Americans), these Russians undermined law enforcement operations, election processes, and more.  We are quick to ask: how much of this information was correct, and how much of it wasn’t?  Who can tell whether the information originated from un-compromised, credible sources or from credible sources that had actually been hacked?

However, we should also consider another angle: what if the hackers hadn’t gained access to those American profiles in the first place?  What if the hackers had been forced to rely almost entirely on fraudulent accounts, which are prone to detection by Facebook’s algorithms?  It is for these reasons that information warfare is so critical for cybersecurity, and why Russian information warfare campaigns of the past cannot be equally compared to the digital information wars of the modern era.

The global cybersecurity community can take a greater, more active role in addressing the account-access component of disinformation.  Additionally, those working on information warfare and other narrative strategies could leverage cybersecurity for defensive operations.  Without a coordinated and integrated effort between these two sectors of the security community, false information will continue to penetrate our social media feeds, news cycles, and overall public discourse largely unchecked.

More than ever, a demand signal is present to educate the world’s citizens on cyber risks and basic cyber “hygiene,” and even to mandate the use of multi-factor authentication, encrypted Internet connections, and other critical security features.  The security of social media and other mass-content-sharing platforms has become an information warfare issue, both within individual countries and globally.  When rhetoric and narrative can spread (or at least appear to spread) from within, a campaign’s effectiveness is amplified.  The cybersecurity dimension of information warfare, in addition to the misinformation, disinformation, and rhetoric itself, will remain integral to combating the propaganda and narrative campaigns of the modern age.


Endnotes:

[1] United States of America v. Internet Research Agency LLC, Case 1:18-cr-00032-DLF. Retrieved from https://www.justice.gov/file/1035477/download

[2] Wintour, P. (2017, September 5). West Failing to Tackle Russian Hacking and Fake News, Says Latvia. Retrieved from https://www.theguardian.com/world/2017/sep/05/west-failing-to-tackle-russian-hacking-and-fake-news-says-latvia

[3] Greenberg, A. (2018, February 16). Russian Trolls Stole Real US Identities to Hide in Plain Sight. Retrieved from https://www.wired.com/story/russian-trolls-identity-theft-mueller-indictment/

[4] Callahan, M. (2015, March 1). Big Brother 2.0: 160,000 Facebook Pages are Hacked a Day. Retrieved from https://nypost.com/2015/03/01/big-brother-2-0-160000-facebook-pages-are-hacked-a-day/

Anastasios Arampatzis Assessment Papers Cyberspace Information and Intelligence Information Systems Justin Sherman Paul Cobaugh Political Warfare Psychological Factors

Assessment of the Threat Posed by the Turkish Cyber Army

Marita La Palm is a graduate student at American University where she focuses on terrorism, countering violent extremism, homeland security policy, and cyber domain activities.  She can be found on Twitter at maritalp.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessment of the Threat Posed by the Turkish Cyber Army

Date Originally Written:  March 25, 2018.

Date Originally Published:  April 9, 2018.

Summary:  The Turkish-sympathetic hacker group known as the Turkish Cyber Army has changed tactics from seizing and defacing websites to a Twitter phishing campaign that has come remarkably close to the President of the United States.

Text:  The Turkish Cyber Army (Ay Yildiz Tim) attempted to compromise U.S. President Donald Trump’s Twitter account in January of 2018 as part of a systematic cyber attack accompanying the Turkish invasion of Syria.  They were not successful, but they did seize control of various well-known accounts and the operation is still in progress two months later.

Although the Turkish Cyber Army claims to date back to a 2002 founding in New Zealand, it first appears in hacking annals on October 2, 2006.  Since then, the group has taken over vulnerable websites in Kenya, the European Union, and the United States[1].  As of the summer of 2017, the Turkish Cyber Army changed tactics to focus on Twitter phishing, using the compromised Twitter account of a trustworthy source to bait a target into surrendering log-in credentials[2].  The group does this by sending a direct message from a familiar account it controls, telling the desired victim to click on a link and enter their log-in information on a page that looks like Twitter but actually records their username and password.  Upon accessing the victim’s account, the hackers rapidly make pro-Turkish posts, download the message history, and send new phishing attacks through the new account, all within a few hours.  The Turkish Cyber Army claims to have downloaded the targets’ messages, apparently both for intelligence purposes and to embarrass the targets by publicly releasing them[3].  Oddly enough, the group has yet to release the private messages it acquired in spite of its threats to do so.  The group is notable both for its beginner-level sophistication when compared to state hackers such as Fancy Bear and for the way it broadcasts every hack it makes.
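The fake log-in page described above typically lives on a domain that merely imitates the real one.  The following toy heuristic (the allowlist and function name are illustrative assumptions, not Twitter’s actual detection logic) shows the basic idea of checking a link’s true host rather than its appearance:

```python
from urllib.parse import urlparse

# Illustrative allowlist; a real system would consult the public-suffix list.
LEGIT_DOMAINS = {"twitter.com", "x.com"}

def looks_like_twitter_phish(url):
    """Flag links whose host merely contains a trusted brand name
    (e.g. twitter.com.example.net) instead of actually being that host."""
    host = (urlparse(url).hostname or "").lower()
    if host in LEGIT_DOMAINS or any(host.endswith("." + d) for d in LEGIT_DOMAINS):
        return False  # genuinely on a trusted domain
    return "twitter" in host  # brand name on an untrusted host: suspicious
```

A link such as `http://twitter.com.secure-login.example.net/verify` passes a casual visual check but fails this host test, which is exactly the gap the phishers exploit.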

The first documented victim of the 2018 operation was Syed Akbaruddin, Indian Permanent Representative to the United Nations.  Before the attack on Akbaruddin, the hackers likely targeted Kurdish accounts in a similar manner[4].  Since these initial attacks, the Turkish Cyber Army moved steadily closer to accounts followed by President Trump and even managed to direct message him on Twitter[5].  In January 2018, they phished multiple well-known Western public figures such as television personality Greta van Susteren and the head of the World Economic Forum, Børge Brende.  Van Susteren and Eric Bolling, another victim, happened to be two of the mere 45 accounts followed by President Trump.  From Bolling’s and van Susteren’s accounts, the hackers were able to send messages to Trump.  Two months later, the Turkish Cyber Army continued operating on Twitter, now focusing primarily on Indian accounts.  The group took over Air India’s Twitter account on March 15, 2018.  However, the aftereffects of their Western efforts can still be seen: on March 23, 2018, Alan Murray, Chief Content Officer of Time, Inc. and President of Fortune, tweeted, “I was locked out of Twitter for a month after being hacked by the Turkish cyber army…”  Meanwhile, the Turkish Cyber Army maintains a large and loud Twitter presence with very little regulation, considering it operates as an openly criminal organization on the platform.

President Trump’s personal Twitter account was also a target for the Turkish Cyber Army.  This is not a secret account known only to a few.  President Trump’s account name is public, and his password is all that is needed to post unless he has set up two-factor authentication.  Trump uses his account to express his personal opinions, and since some of his tweets have had high shock value, a fake message intended to disrupt might go unquestioned.  It is fair to assume that multiple groups have been running password crackers against President Trump’s account without pause since the inauguration.  It is only a matter of time before a foreign intelligence service or other interested party manages to access President Trump’s direct messages, make provocative statements from his account that could threaten the financial sector or national security, and from there go on to access more sensitive information.  While the Turkish Cyber Army blasts its intrusions from the compromised accounts, more sophisticated hacking teams would be in and out without a word and might already have done so.  The most dangerous hackers would maintain that access for the day it is useful and unexpected.

While nothing immediately indicates that this group is a Turkish government organization, its members are either supporters of the current government or work for it.  Both reporter Joseph Cox and the McAfee report noted that the group’s phishing pages were written in Turkish[6].  Almost a hundred actual or bot accounts carry some identifier of the Turkish Cyber Army, none of which appear to be censored by Twitter.  Of particular interest in the group’s history are the attacks on Turkish political party Cumhuriyet Halk Partisi (CHP) deputy Eren Erdem, alleging his connections with Fethullah Gulen, and the 2006 and possible 2017 attempts to phish Kurdish activists[7].  The Turkish Cyber Army’s current operations occurred on the eve of massive Turkish political risk, as the events in Syria could have ended Turkish President Recep Tayyip Erdogan’s career had they gone poorly.  Not only did Turkey invade Syria in order to attack troops trained by its North Atlantic Treaty Organization (NATO) ally, the United States, but Turkish representatives had been banned from campaigning in parts of the European Union, and Turkish banks might face a multi-billion dollar fine thanks to the Reza Zarrab case[8].  Meanwhile, both Islamist and Kurdish insurgents appeared emboldened within the country[9].  Turkey had everything to lose, and a cyberattack, unsophisticated but aimed at high-value targets, was a possibility while the United States appeared undecided as to whom to back: its proxy force or its NATO ally.  In the end, the United States has made efforts to reconcile diplomatically with Turkey since January, and Turkey has saved face.


Endnotes:

[1]  Ayyildiz Tim. (n.d.). Retrieved January 24, 2018, from https://ayyildiz.org/; Turks ‘cyber-leger’ kaapt Nederlandse websites . (2006, October 2). Retrieved January 24, 2018, from https://www.nrc.nl/nieuws/2006/10/02/turks-cyber-leger-kaapt-nederlandse-websites-11203640-a1180482; Terry, N. (2013, August 12). Asbury park’s website taken over by hackers. McClatchy – Tribune Business News; Ministry of transport website hacked. (2014, March 5). AllAfrica.Com. 

[2] Turkish hackers target Sevan Nishanyan’s Twitter account. (2017, July 28). Armenpress News Agency.

[3] Beek, C., & Samani, R. (2018, January 24). Twitter Accounts of US Media Under Attack by Large Campaign. Retrieved January 24, 2018, from https://securingtomorrow.mcafee.com/mcafee-labs/twitter-accounts-of-us-media-under-attack-by-large-campaign/.

[4] #EfrinNotAlone. (2018, January 17). “News that people  @realDonaldTrump followers have been hacked by Turkish cyber army. TCA made an appearance a few days ago sending virus/clickey links to foreigners and my Kurdish/friends. The journalist who have had their accounts hacked in US have clicked the link.”  [Tweet]. https://twitter.com/la_Caki__/status/953572575602462720.

[5] Herreria, C. (2018, January 17). Hackers DM’d Donald Trump With Former Fox News Hosts’ Twitter Accounts. Retrieved March 25, 2018, from https://www.huffingtonpost.com/entry/eric-bolling-greta-van-susteren-twitter-hacked_us_5a5eb17de4b096ecfca88729

[6] Beek, C., & Samani, R. (2018, January 24). Twitter Accounts of US Media Under Attack by Large Campaign. Retrieved January 24, 2018, from https://securingtomorrow.mcafee.com/mcafee-labs/twitter-accounts-of-us-media-under-attack-by-large-campaign/; Joseph Cox. (2018, January 23). “Interestingly, the code of the phishing page is in… Turkish. “Hesabın var mı?”, or “Do you have an account?”.”  [Tweet]. https://twitter.com/josephfcox/status/955861462190383104.

[7] Ayyıldız Tim FETÖnün CHP bağlantısını deşifre etti. (2016, August 27). Retrieved January 24, 2018, from http://www.ensonhaber.com/ayyildiz-tim-fetonun-chp-baglantisini-desifre-etti-2016-08-28.html; Turks ‘cyber-leger’ kaapt Nederlandse websites . (2006, October 2). Retrieved January 24, 2018, from https://www.nrc.nl/nieuws/2006/10/02/turks-cyber-leger-kaapt-nederlandse-websites-11203640-a1180482.

[8] Turkey-backed FSA entered Afrin, Turkey shelling targets. (2018, January 21). BBC Monitoring Newsfile; Turkey blasts Germany, Netherlands for campaign bans. (2017, March 5). BBC Monitoring European; Zaman, A. (2017, December 07). Turkey probes US prosecutor in Zarrab trial twist. Retrieved January 24, 2018, from https://www.al-monitor.com/pulse/originals/2017/11/turkey-probes-reza-zarrab-investigators.html.

[9] Moore, J. (2017, December 28). Hundreds of ISIS fighters are hiding in Turkey, increasing fears of attacks in Europe. Retrieved January 24, 2018, from http://www.newsweek.com/hundreds-isis-fighters-are-hiding-turkey-increasing-fears-europe-attacks-759877; Mandıracı, B. (2017, July 20). Turkey’s PKK Conflict Kills almost 3,000 in Two Years. Retrieved January 24, 2018, from https://www.crisisgroup.org/europe-central-asia/western-europemediterranean/turkey/turkeys-pkk-conflict-kills-almost-3000-two-years.


An Assessment of Violent Extremist Use of Social Media Technologies

Scot A. Terban is a security professional with over 13 years experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering Information, Security Auditing, ISO27001, Threat Intelligence Analysis, Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of Violent Extremist Use of Social Media Technologies

Date Originally Written:  November 9, 2017.

Date Originally Published:  February 5, 2018.

Summary:  The leveraging of social media technologies by violent extremists like Al-Qaeda (AQ) and Daesh has created a road map for others to do the same.  Without a combined effort by social media companies and intelligence and law enforcement organizations, violent extremists and others will continue to operate nearly unchecked on social media platforms and inspire others to acts of violence.

Text:  Following the 9/11 attacks, the U.S. invaded Afghanistan and AQ, the violent extremist organization that launched these attacks, lost ground.  With the loss of ground came an increase in online activity.  In the time before the worldwide embrace of social media, jihadis like Irhabi007 (Younis Tsouli) led AQ hacking operations, breaking into vulnerable web pages and defacing them with AQ propaganda as well as establishing dead-drop sites for materials others could use.  Irhabi007 pioneered this method before being hunted down by other hackers and finally arrested in 2005[1].  Five years after Tsouli’s arrest, Al-Qaeda in the Arabian Peninsula (AQAP) established Inspire Magazine as a way to communicate with its existing followers and “inspire” new ones[2].  Unfortunately for AQAP, creating and distributing an online magazine proved to be a challenge.

Today, social media platforms such as Twitter, Facebook, VKontakte, and YouTube are the primary means for jihadi extremists to spread the call to jihad as well as sow fear among those they target.  Social media is perfect for connecting people because of the popularity of the platforms, the ease of creating accounts, and the ability to send messages to a large audience.  Daesh uses Twitter and YouTube as its primary means of messaging, not only to spread fear but also for command and control as well as recruitment.  Daesh sees the benefits of using social media, and its use has paved the way for others.  Even after Twitter and YouTube began to catch on and act against Daesh accounts, it remains easy for Daesh to create new accounts and keep the messages flowing under the same user name followed by a new digit.
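That recreate-and-increment pattern is simple enough to detect in principle.  One naive sketch (the handles below are hypothetical, and this is far cruder than any platform’s real enforcement tooling) groups handles that differ only by a trailing digit:

```python
import re
from collections import defaultdict

def cluster_recreated_handles(handles):
    """Group handles that differ only by a trailing run of digits --
    the ban-evasion pattern of re-registering as name2, name3, ..."""
    clusters = defaultdict(list)
    for handle in handles:
        stem = re.sub(r"\d+$", "", handle.lower())  # strip the numeric suffix
        clusters[stem].append(handle)
    # Keep only stems that appear with multiple variants.
    return {stem: v for stem, v in clusters.items() if len(v) > 1}
```

Given `["newsfeed_acct", "newsfeed_acct2", "newsfeed_acct3", "regular_user"]`, the function flags the first three as one cluster and ignores the singleton, illustrating why purely reactive suspensions lag behind trivially renamed accounts.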

AQ’s loss of terrain, combined with the expansion of social media, set the conditions for a shift toward inciting the “far war” over the local struggle as AQ saw it before Osama bin Laden was killed.  In fact, the call to the West had been made in Inspire magazine on many occasions.  Inspire even created a section of the magazine on “Open Source Jihad,” later adopted by Dabiq[3] (Daesh’s magazine), but the problem was actually motivating the Western faithful into action.  This problem was finally solved through social media, where recruiters and mouthpieces could talk to potential recruits in real-time and work with them to act.

Online messaging by violent extremist organizations has now reached a point of asymmetry where very little energy or money invested on the jihadis’ part can produce large returns on investment, as in the incident in Garland, Texas[4].  To AQ, Daesh, and others, it is now clear that social media could be the bedrock of the fight against the West and anywhere else, if others can be incited to act.  This incited activity takes the form of what has been called “Lone Wolf Jihad,” which has produced incidents ranging from the Garland shootings to the bike-path attack in New York City by Sayfullo Saipov, a green card holder in the U.S. from Uzbekistan[5].

With certain individuals activated to the cause by the propaganda and manuals the jihadis put out on social media, it is clear that the medium works and that, even with all the attempts by companies like Facebook and Twitter to root out and delete accounts, the messaging still reaches those who may act upon it.  The memetic virus of violent extremism has a carrier, and that carrier is social media.  Now, with social media’s leveraging by Russia in the campaign against the U.S. electoral system, we are seeing a paradigm shift toward larger and more dangerous memetic and asymmetric warfare.

Additionally, with the advent of encryption technologies on social media platforms, the net effect has been to create channels of radicalization, recruitment, and activation over live chats and messages that authorities cannot easily interdict.  This use of encrypted live chats and messages makes social media an even more potent means of asymmetric warfare.  The jihadis now have not only a way to reach out to would-be followers, but also constant contact at a distance, where before they would have had to radicalize potential recruits at a physical location.

Expanding this out further, the methodologies that the jihadis have created and used online are now studied by other like-minded groups and can be emulated.  This means that, whatever the bent, a group of like-minded individuals seeking extremist ends can simply sign up and replicate the jihadi model to the same end of activating individuals to action.  We have already started to see this at a nominal level in Russian hybrid warfare, which activates people in the U.S. such as neo-Nazis and empowers them to act.

Social media is a boon and a bane depending on its use and its moderation by the companies that create and manage the platforms.  However, with the First Amendment protecting freedom of speech in the U.S., it is hard for companies to delineate what is free speech and what is exhortation to violence.  This is the crux of the issue for companies and governments in the fight against violent extremism on platforms such as YouTube or Twitter.  Social media utilization boils down to terms of service and policing, and until now the companies have not been willing to monitor and take action.  After Russian meddling in the U.S. election, though, social media company attitudes seem to be changing.

Ultimately, the use of social media for extremist ideas and action will always be a problem.  This is not going away, and policing is key.  The challenge lies in working out the details and legal interpretations concerning the balance between what constitutes freedom of speech and what constitutes illegal activity.  The real task will be to see whether algorithms and technical means can help sort between the two.  The battle, however, will never end.  It is my assessment that remediation will require a melding of human intelligence activities and technical means to monitor and interdict those users and feeds seeking to incite violence within the medium.


Endnotes:

[1] Katz, R., & Kern, M. (2006, March 26). Terrorist 007, Exposed. Retrieved November 17, 2017, from http://www.washingtonpost.com/wp-dyn/content/article/2006/03/25/AR2006032500020.html

[2] Zelin, A. Y. (2017, August 14). Inspire Magazine. Retrieved November 17, 2017, from http://jihadology.net/category/inspire-magazine/

[3] Zelin, A. Y. (2016, July 31). Dabiq Magazine. Retrieved November 17, 2017, from http://jihadology.net/category/dabiq-magazine/

[4] Chandler, A. (2015, May 04). A Terror Attack in Texas. Retrieved November 17, 2017, from https://www.theatlantic.com/national/archive/2015/05/a-terror-attack-in-texas/392288/

[5] Kilgannon, C., & Goldstein, J. (2017, October 31). Sayfullo Saipov, the Suspect in the New York Terror Attack, and His Past. Retrieved November 17, 2017, from https://www.nytimes.com/2017/10/31/nyregion/sayfullo-saipov-manhattan-truck-attack.html

 


An Australian Perspective on Identity, Social Media, and Ideology as Drivers for Violent Extremism

Kate McNair has a Bachelor’s Degree in Criminology from Macquarie University and is currently pursuing a Master’s Degree in Security Studies and Terrorism at Charles Sturt University.  You can follow her on Twitter @kate_amc.  Divergent Options’ content does not contain information of any official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  An Australian Perspective on Identity, Social Media, and Ideology as Drivers for Violent Extremism

Date Originally Written:  December 2, 2017.

Date Originally Published:  January 8, 2018.

Summary:  Countering Violent Extremism (CVE) is a leading initiative by many western sovereigns to reduce home-grown terrorism and extremism.  Social media, ideology, and identity are just some of the issues that fuel violent extremism for various individuals and groups and are thus areas that CVE must be prepared to address.

Text:  On March 7, 2015, two brothers aged 16 and 17 were arrested on suspicion of attempting to leave Australia through Sydney Airport to fight for the Islamic State[2].  The young boys had fooled their parents and forged school letters.  They presented themselves to Australian Immigration and Border Protection shortly after purchasing tickets to an unknown Middle Eastern country with a small amount of funds, claiming to be on their way to visit family for three months.  They were then arrested after admitting that they intended to become foreign fighters for the Islamic State.  On October 2, 2015, Farhad Khalil Mohammad Jabar, 15 years old, approached Parramatta police station in Sydney’s west and shot civilian police accountant Curtis Cheng in the back[1].  It was later discovered that Jabar had been inspired and influenced by two older men, aged 18 and 22, who manipulated him into becoming a lone wolf attacker and supplied him the gun he used to kill the civilian worker.

In November 2016, Parliament passed the Counter-Terrorism Legislation Amendment Bill (No. 1) 2016, stating that “Keeping Australians safe is the first priority of the Turnbull Government, which committed to ensuring Australian law enforcement and intelligence agencies have the tools they need to fight terrorism[3].”  More recently, the Terrorism (Police Powers) Act of 2002 was extensively amended to become the Terrorism Legislation Amendment (Police Powers and Parole) Act of 2017, which gives police more powers during investigations and puts stronger restrictions and requirements on parolees integrating back into society.  Although these governing documents focus on law enforcement and the investigative side of counter-terrorism efforts, in 2014 the Tony Abbott Government implemented a nation-wide initiative called Living Safe Together[4].  Living Safe Together set aside a law enforcement-centric approach and instead focused on community-based initiatives to address the growing appeal of violent extremist ideologies to young people.

Levi West, a well-known academic in the field of terrorism in Australia, highlighted that the aforementioned individuals have lived their entire lives in a world where the war on terror has existed.  These young men were part of a Muslim minority and grew up witnessing a war that has been painted by some as the West versus Islam.  They were influenced by many voices between school, work, social events, and home[5].  This leads to the question of whether these young individuals are driven to violent extremism by ideology, or whether they are trying to find their identity and purpose in this world.

For young adults in Australia, social media is a strong driver of violent extremism.  Young adults are vulnerable and uncertain about various things in their lives.  When people feel uncertain about who they are and about the accuracy of their perceptions, beliefs, and attitudes, they seek out people who are similar to them in order to make comparisons that largely confirm the veracity and appropriateness of their own attitudes.  Social media is being weaponised by violent extremist organizations such as the Islamic State.  Social media and other peer-to-peer sharing platforms are ideal for facilitating virtual learning and virtual interactions between young adults and violent extremists.  While young adults who interact within these online forums may be less likely to engage in a lone wolf attack, the forums can reinforce prior beliefs and slowly manipulate people over time.

Is it violent extremist ideology that is inspiring young individuals to become violent extremists and participate in terrorism and political violence?  Decentralized command and control within violent extremist organizations, also referred to as leaderless resistance, is a technique for inspiring young individuals to take it upon themselves, with no direction from leadership, to commit attacks against western governments and communities[6].  In the case of the Islamic State’s use of this strategy, its ideology is already known to be extreme and violent, and its interpretation of leaderless resistance is nothing less.  Decentralization has been implemented internationally as the Islamic State continues to provide information, through sites such as Insider, on how to acquire the materiel needed to conduct attacks.  Not only does the Islamic State provide training and skill information, it encourages others to spread its ideology through lone wolf attacks and glorifies these acts as a divine right.  Combined with the vulnerability of young individuals, the strategy of decentralized command and control paired with extreme ideology has been successful thus far.  Based upon this success, CVE’s effectiveness is likely tied to it being equally focused on combating identity as a driver of violent extremism, in addition to extreme ideology, and on the strategies and initiatives that can prevent individuals from becoming violent extremists.

The leading strategies in CVE have been focused on social media, social cohesion, and identity.  Policy leaders and academics have identified that young individuals are struggling with the social constraints of labels and identity, and therefore a community-based approach is needed when countering violent extremism.  The 2015 CVE Regional Summit revealed various recommendations and findings relating to the use of social media, its effects on young, vulnerable individuals, and the realities Australia must face as a country and as a society.  With the growing threat of homegrown violent extremism and the return of foreign fighters who fought with the Islamic State, violent extremism will continue to be a problem without programs that address individual identity and social cohesion.  The Australian Federal Police (AFP) have designated Community Liaison Team members whose role is to develop partnerships with community leaders to tackle the threat of violent extremism and enhance community relations, and the AFP has also adopted strategies to improve dialogue with Muslim communities.  The AFP’s efforts, combined with the participation of young local leaders, are paramount to the success of these strategies and initiatives to counter the violent extremist narrative.


Endnotes:

[1] Nick Ralston, ‘Parramatta shooting: Curtis Cheng was on his way home when shot dead’ October 3rd 2015 http://www.smh.com.au/nsw/parramatta-shooting-curtis-cheng-was-on-his-way-home-when-shot-dead-20151003-gk0ibk.html Accessed December 1, 2017.

[2] Lanai Scarr, ‘Immigration Minister Peter Dutton said two teenage brothers arrested while trying to leave Australia to fight with ISIS were ‘saved’’ March 8th 2015 http://www.news.com.au/national/immigration-minister-peter-dutton-said-two-teenage-brothers-arrested-while-trying-to-leave-australia-to-fight-with-isis-were-saved/news-story/90b542528076cbdd02ed34aa8a78d33a Accessed December 1, 2017.

[3] Australian Government media release, Parliament passes Counter Terrorism Legislation Amendment Bill No 1 2016. https://www.attorneygeneral.gov.au/Mediareleases/Pages/2016/FourthQuarter/Parliament-passes-Counter-Terrorism-Legislation-Amendment-Bill-No1-2016.aspx Accessed December 1, 2017.

[4] Australian Government, Living Safer Together Building community resilience to violent extremism. https://www.livingsafetogether.gov.au/pages/home.aspx Accessed December 1, 2017.

[5] John W. Little, Episode 77 Australian Approaches to Counterterrorism Podcast, Covert Contact. October 2, 2017.

[6] West, L. 2016. ‘#jihad: Understanding social media as a weapon’, Security Challenges 12 (2): pp. 9-26.


Assessment of U.S. Cyber Command’s Elevation to Unified Combatant Command

Ali Crawford is a current M.A. Candidate at the Patterson School of Diplomacy and International Commerce.  She studies diplomacy and intelligence with a focus on cyber policy and cyber warfare.  She tweets at @ali_craw.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessment of U.S. Cyber Command’s Elevation to Unified Combatant Command

Date Originally Written:  September 18, 2017.

Date Originally Published:  November 13, 2017.

Summary:  U.S. President Donald Trump instructed the Department of Defense to elevate U.S. Cyber Command to the status of Unified Combatant Command (UCC).  Cyber Command as a UCC could determine the operational standards for missions and possibly streamline decision-making.  Pending Secretary of Defense James Mattis’ nomination of a commander, the head of Cyber Command will have the opportunity to alter U.S. posturing in cyberspace.

Text:  In August 2017, U.S. President Donald Trump ordered the Department of Defense to begin Cyber Command’s elevation to a UCC[1].  With the elevation of U.S. Cyber Command there will be ten combatant commands within the U.S. military structure[2].  Combatant commands have geographical[3] or functional areas[4] of responsibility and are granted authorities by law, the President, and the Secretary of Defense (SecDef) to conduct military operations.  The elevation of Cyber Command to a UCC is a significant step forward.  The character of warfare is changing: cyberspace has quickly become a new operational domain for war, with battles waged each day.  The threat landscape in the cyberspace domain is always evolving, and the U.S. will evolve to meet these new challenges.  Cyber Command’s elevation is timely and demonstrates the Department of Defense’s commitment to defending U.S. national interests across all operational domains.

Cyber Command was established in 2009 to ensure the U.S. would maintain superiority in the cyberspace operational domain.  Reaching full operational capacity in 2010, Cyber Command mainly provides assistance and other augmentative services to the military’s various cyberspace missions, such as planning; coordinating; synchronizing; and, when directed, preparing military operations in cyberspace[5].  Currently, Cyber Command is subordinate to U.S. Strategic Command but co-located with the National Security Agency (NSA).  Cyber Command’s subordinate components include Army Cyber Command, Fleet Cyber Command, Air Force Cyber Command, and Marine Forces Cyber Command, and it also maintains an operational relationship with the Coast Guard Cyber Command[6].  By 2018, Cyber Command expects to have ready 133 cyber mission force teams, consisting of 25 support teams, 27 combat mission teams, 68 cyber protection teams, and 13 national mission teams[7].

Admiral Michael Rogers of the U.S. Navy currently heads Cyber Command and also serves as Director of the NSA.  This “dual-hatting” of Admiral Rogers is of particular interest.  President Trump has directed SecDef James Mattis to recommend a nominee to head Cyber Command once it becomes a UCC.  Commanders of combatant commands must be uniformed military officers, whereas the NSA may be headed by a civilian.  It is very likely that Mattis will nominate Rogers to lead Cyber Command[8].  Beyond Cyber Command’s current missions, as a UCC its new commander would have the authority to alter U.S. tactical and strategic behavior in cyberspace.  The elevation may also streamline the time-sensitive process of conducting cyber operations by placing them under a single authority capable of making independent decisions and with direct access to SecDef Mattis.  Finally, the elevation of Cyber Command to a UCC led by a four-star military officer may point to the Department of Defense re-prioritizing U.S. posturing in cyberspace to become more offensive rather than defensive.

As one might imagine, Admiral Rogers is not thrilled with the idea of splitting his two organizations apart.  He will very likely retain dual authority for at least another year[9].  Separating Cyber Command from the NSA will also take time, pending the successful confirmation of a new commander, and Cyber Command will need to demonstrate its ability to function independently of its NSA intelligence counterpart[10].  Former SecDef Ash Carter and former Director of National Intelligence (DNI) James Clapper were not fans of Rogers’ dual-hat arrangement.  It remains to be seen what current SecDef Mattis and DNI Dan Coats think of it.

Regardless, the elevation process is worth following as it develops.  Whoever becomes commander of Cyber Command, whether a new nominee or Admiral Rogers, will have an incredible opportunity to spearhead a new era of U.S. cyberspace operations, doctrine, and policy.  A fully independent Cyber Command may be able to launch Stuxnet-style attacks aimed at North Korea or articulate more nuanced rhetoric aimed at hardening networks.  Either way, the elevation of Cyber Command to a UCC signals the growing importance of cyber-related missions and will likely encourage U.S. policymakers to adopt specific cyber policies, all the while ensuring freedom of action in cyberspace.


Endnotes:

[1] The White House, “Statement by President Donald J. Trump on the Elevation of Cyber Command,” 18 August 2017, https://www.whitehouse.gov/the-press-office/2017/08/18/statement-donald-j-trump-elevation-cyber-command

[2] Unified Command Plan. (n.d.). Retrieved October 27, 2017, from https://www.defense.gov/About/Military-Departments/Unified-Combatant-Commands/

[3] 10 U.S. Code § 164 – Commanders of combatant commands: assignment; powers and duties. (n.d.). Retrieved October 27, 2017, from https://www.law.cornell.edu/uscode/text/10/164

[4] 10 U.S. Code § 167 – Unified combatant command for special operations forces. (n.d.). Retrieved October 27, 2017, from https://www.law.cornell.edu/uscode/text/10/167

[5] U.S. Strategic Command, “U.S. Cyber Command (USCYBERCOM),” 30 September 2016, http://www.stratcom.mil/Media/Factsheets/Factsheet-View/Article/960492/us-cyber-command-uscybercom/

[6] U.S. Strategic Command, “U.S. Cyber Command (USCYBERCOM),” 30 September 2016, http://www.stratcom.mil/Media/Factsheets/Factsheet-View/Article/960492/us-cyber-command-uscybercom/

[7] Richard Sisk, Military, “Cyber Command to Become Unified Combatant Command,” 18 August 2017, http://www.military.com/daily-news/2017/08/18/cyber-command-become-unified-combatant-command.html

[8] Department of Defense, “The Department of Defense Cyber Strategy,” 2015, https://www.defense.gov/News/Special-Reports/0415_Cyber-Strategy/

[9] Thomas Gibbons-Neff and Ellen Nakashima, The Washington Post, “President Trump announces move to elevate Cyber Command,” 18 August 2017, https://www.washingtonpost.com/news/checkpoint/wp/2017/08/18/president-trump-announces-move-to-elevate-cyber-command/

[10] Ibid.

Ali Crawford Assessment Papers Cyberspace United States

Options for U.S. National Guard Defense of Cyberspace

Jeffrey Alston is a member of the United States Army National Guard and a graduate of the United States Army War College.  He can be found on Twitter @jeffreymalston.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The United States has not organized its battlespace to defend against cyberattacks.  Cyberattacks are growing in scale and scope and threaten surprise and loss of initiative at the strategic, operational, and tactical levels.  Shortfalls in the nation’s cybersecurity workforce and an unclear division of labor amongst defenders exacerbate the problem.

Date Originally Written:  July 23, 2017.

Date Originally Published:  September 4, 2017.

Author and / or Article Point of View:  This paper is written from a perspective of a U.S. Army field grade officer with maneuver battalion command experience who is a senior service college graduate.  The officer has also been a practitioner of delivery of Information Technology (IT) services and cybersecurity for his organization for over 15 years and in the IT industry for nearly 20 years.

Background:  At the height of the Cold War, the United States and the North American (NA) continent organized for defense against nuclear attack.  A series of radar early warning lines and control stations were erected across the northern reaches of the continent to warn of nuclear attack.  This system of electronic sentries was controlled and monitored through a series of air defense centers.  The actual air defense fell to a number of key air bases across the U.S., ready to intercept and defeat bombers from the Union of Soviet Socialist Republics entering NA airspace.  The system was comprehensive, arrayed in depth, and redundant[1].  Today, facing threats posed by sophisticated cyber actors who directly challenge numerous United States interests, no equivalent warning structure exists.  Only high-level, broad outlines of responsibility exist[2].  Existing national capabilities, while not trivial, are not enough to provide assurances to U.S. states, as these capabilities may require a cyber event of national significance to occur before they are committed to a state’s cyber defense needs.  Worse, national entities may notify a state only after a breach has occurred or a network is believed to be compromised.  The situation is not sustainable.

Significance:  Today, the vast Cold War NA airspace has its analog in the undefended space and gray-area networks where cyber threats propagate, unfettered by active security measures[3].  While many of the companies and firms that make up the critical infrastructure and key resource sectors have considerable cybersecurity resources and skill, just as many have next to nothing.  Many cannot afford cyber capability or, worse, are simply unaware of the threats they face.  Between all of these entities, the common terrain consists of the numerous networks, private and public, that interconnect or expose them.  With its Title 32 authorities under U.S. law, the National Guard is well positioned at the unique interface with private industry – especially critical infrastructure – and can play a key role in this gray space.

There is a unique role for National Guard cyber forces in the gray space of the internet.  The National Guard could provide a key defensive capability in two different ways.

Option #1:  The National Guard’s Defensive Cyberspace Operations-Elements (DCO-Es), not part of the Department of Defense Cyber Mission Force, fulfill an active role providing depth in their states’ networks, both public and private.  These elements, structured as full-time assets, can cooperatively negotiate the placement of sensors and honeypots in key network locations and representative sectors in their states.  Data from these sensors and honeypots, optimized to detect only high-threat or active indicators of compromise, would be aggregated in security operations centers manned primarily by the DCO-Es but with state government and Critical Infrastructure and Key Resources (CIKR) participation.  These security operations centers provide valuable intelligence, analytics, and cyber threat intelligence to all stakeholders and add depth to cybersecurity.  These units watch for only the most sophisticated threats, allowing CIKR private industry entities to concentrate their resources on internal operations.  Surveilling gray-space networks provides another layer of protection and builds a shared understanding of adversary threats, traffic, and exploitation attempts, returning initiative to CIKR and preventing surprise in cyberspace.
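The data flow Option #1 describes – sensors forwarding only high-threat indicators to a shared security operations center – can be sketched in a few lines.  This is an illustrative toy, not any fielded system: the indicator names, event format, and threshold set are hypothetical.

```python
from collections import Counter

# Hypothetical classes of high-threat / active indicators of compromise.
HIGH_THREAT = {"C2_BEACON", "KNOWN_APT_HASH", "LATERAL_MOVEMENT"}

def filter_events(sensor_feed):
    """Drop low-value noise at the sensor, so only high-threat or active
    indicators of compromise are forwarded (the 'optimized to only detect
    high-threat' behavior described in the option)."""
    return [e for e in sensor_feed if e["indicator"] in HIGH_THREAT]

def soc_aggregate(feeds):
    """Security operations center view: tally each indicator across every
    participating sensor feed, building the shared threat picture."""
    counts = Counter()
    for feed in feeds:
        counts.update(e["indicator"] for e in filter_events(feed))
    return counts

feeds = [
    [{"indicator": "C2_BEACON"}, {"indicator": "PORT_SCAN"}],       # state agency sensor
    [{"indicator": "C2_BEACON"}, {"indicator": "KNOWN_APT_HASH"}],  # utility sector honeypot
]
picture = soc_aggregate(feeds)
assert picture == Counter({"C2_BEACON": 2, "KNOWN_APT_HASH": 1})
```

Filtering at the sensor is what lets CIKR partners concentrate on internal operations: the routine port scan never reaches the shared picture, while the repeated command-and-control beacon is immediately visible across both feeds.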

Risk:  The National Guard cannot be expected to intercept every threat that is potentially targeted at a state entity.  Negative perceptions of “mini-National Security Agencies (NSAs)” within each state could raise suspicions and privacy concerns jeopardizing the potential of these assets.  Duplicate efforts by all stakeholders threaten to spoil an available capability rather than integrating it into a whole of government approach.

Gain:  Externally, this option builds the network of cyber threat intelligence and unifies efforts within the particular DCO-E’s state.  Depth is created for all stakeholders.  Internally, allowing National Guard DCO-Es to focus in the manner in this option provides specific direction, equipping options, and training for their teams.

Option #2:  The National Guard’s DCO-Es offer general support functions within their respective states for their Adjutants General, Governors, Department of Homeland Security Advisors, etc.  These elements are tasked on an as-needed basis to perform cybersecurity vulnerability assessments of critical infrastructure when requested or when directed by state leadership.  Assessments and follow-on recommendations are delivered to the supported entity to improve its cybersecurity posture.  The DCO-Es fulfill a valuable role, especially for those entities that lack a dedicated cybersecurity capability or remain unaware of the threats they face.  In this way, the DCO-Es may prevent a breach of a lesser-defended entity from becoming the entry point for larger-scale attacks or much larger chain-reaction or cascading disruptions of a particular industry.

Risk:  Given the hundreds and potentially thousands of private industry CIKR entities within any particular state, this option risks futility in that there is no guarantee the assessments are performed on the entities at the greatest risk.  These assessments are a cybersecurity improvement for the state overall, however, given the vast numbers of industry actors this option is equivalent to trying to boil the ocean.

Gain:  These efforts help fill the considerable gap that exists in the cybersecurity of CIKR entities in the state.  The value of the assessments may be multiplied by communicating their results and identified vulnerabilities at state- and national-level industry-specific associations, conferences, etc.  DCO-Es can gradually collect information on trends in these industries and use it for the benefit of all, such as by developing knowledge bases and publishing state-specific trends.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1]  Winkler, D. F. (1997). Searching the Skies: The Legacy of the United States Cold War Defense Radar Program (USA, Headquarters Air Combat Command).

[2]  Federal Government Resources. (n.d.). Retrieved July 22, 2017, from https://www.americanbar.org/content/dam/aba/marketing/Cybersecurity/2013march21_cyberroleschart.authcheckdam.pdf

[3]  Brenner, J. (2014, October 24). Nations everywhere are exploiting the lack of cybersecurity. Retrieved July 21, 2017, from https://www.washingtonpost.com/opinions/joel-brenner-nations-everywhere-are-exploiting-the-lack-of-cybersecurity
Cyberspace Jeffrey Alston Non-Full-Time Military Forces (Guard, Reserve, Territorial Forces, Militias, etc) Option Papers United States

Assessment of Cryptocurrencies and Their Potential for Criminal Use 

The Viking Cop has served in a law enforcement capacity with multiple organizations within the U.S. Executive Branch.  He can be found on Twitter @TheVikingCop.  The views reflected are his own and do not represent the opinion of any government entities.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of Cryptocurrencies and Their Potential for Criminal Use

Date Originally Written:  July 22, 2017.

Date Originally Published:  August 28, 2017.

Summary:  Cryptocurrencies are a new class of technology-driven virtual currencies that have existed since late 2009.  Due to the anonymous or near-anonymous nature of their design, they are useful to criminal organizations.  With criminal use of cryptocurrencies likely to continue, it is vital for law enforcement organizations and regulators to know the basics of how they work.

Text:  Cryptocurrencies are a group of virtual currencies that rely on a peer-to-peer system, disconnected from any central issuing authority, that allows users an anonymous or near-anonymous method of conducting transactions[1][2].

Bitcoin, Ethereum, LiteCoin, and DogeCoin are among 820 currently existing cryptocurrencies that have a combined market capitalization of over ninety billion U.S. Dollars at the time of this assessment[3][4].

The majority of cryptocurrencies run on a system design created by an unknown individual or group of individuals publishing under the name Satoshi Nakamoto[2].  This system relies on a decentralized public ledger, conceptualized by Nakamoto in a whitepaper published in October 2008, which would later become widely known as the “Blockchain.”

Simplistically, blockchain works as a system of electronic signature keys and cryptographic hash codes printed onto a publicly accessible ledger.  Once a coin in any cryptocurrency is created through a “mining” process – a computer or node solving a complex mathematical calculation known as a “proof-of-work” – the original signature and hash of that coin are added to the public ledger on the initial node and then transmitted to every other node in the network in a block.  These proof-of-work calculations confirm the hash codes of previous transactions and print them to a local copy of the public ledger.  Once the block is transmitted, all other nodes confirm that the transaction is valid and print it to their copies of the public ledger.  This distribution and cross-verification of the public ledger by multiple computers ensures the accuracy and security of each transaction in the blockchain, as the only way to falsely print to the public ledger would be to control fifty percent plus one of the nodes in the network[1][2].
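The mining loop described above can be illustrated with a short sketch.  This is a toy model, not Bitcoin’s actual block format or difficulty scheme: the block is a simple string, and the “difficulty” is just a count of leading zero hex digits in the SHA-256 hash.

```python
import hashlib

def mine(previous_hash: str, transactions: str, difficulty: int = 4):
    """Proof-of-work: search for a nonce whose block hash starts with
    `difficulty` zero hex digits.  The hash commits to the previous
    block's hash, which is what chains the ledger together."""
    target = "0" * difficulty
    nonce = 0
    while True:
        block = f"{previous_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(block).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Finding the nonce is expensive trial and error; verifying it takes a
# single hash, which is how every other node cheaply confirms the block.
nonce, digest = mine("0" * 64, "alice->bob:1.0")
assert digest.startswith("0000")
```

The asymmetry shown here – costly to produce, trivial to verify – is why falsifying the ledger would require out-computing a majority of the network.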

While the electronic signatures for each user are contained within the coin, the signature itself contains no personally identifiable information.  From a big-data perspective, this system allows one to see all the transactions a user has conducted through a given electronic signature, but it will not reveal from whom, or where, a transaction originated or terminated.
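That analyst’s-eye view of the public ledger can be sketched as follows, with hypothetical address strings standing in for signature keys:

```python
from collections import defaultdict

def transactions_by_signature(ledger):
    """Group ledger entries by the pseudonymous signature (address) that
    spent them.  The full history of each address is public, but nothing
    in the ledger maps an address to a real-world identity."""
    history = defaultdict(list)
    for sender, receiver, amount in ledger:
        history[sender].append((receiver, amount))
    return dict(history)

ledger = [("addr1", "addr2", 0.5), ("addr1", "addr3", 0.2), ("addr2", "addr3", 0.1)]
history = transactions_by_signature(ledger)

# Every transaction addr1 ever made is visible in the public record...
assert history["addr1"] == [("addr2", 0.5), ("addr3", 0.2)]
# ...but no field links addr1 to a person, IP address, or location.
```

This is why, as noted later, the large Bitcoin cases were broken through traditional investigative work that tied an address to a person, not through any weakness in the ledger itself.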

A further level of security has been developed by private groups that provide a method of virtually laundering the money, called “mixing.”  A third-party source acts as an intermediary, receiving and distributing payments and removing any direct connection between the two parties in the coin signature[5].
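A toy model of mixing, with hypothetical addresses, shows why the ledger loses the payer-to-payee link; real mixers also pool funds, add random delays, and split amounts to defeat correlation analysis.

```python
import random

MIXER = "mixer_addr"  # hypothetical intermediary address

def mix(deposits, payouts):
    """Toy mixing service: every deposit is a transaction to the mixer's
    address and every payout a transaction from it, so no single ledger
    entry links a depositor to a recipient."""
    ledger = [(payer, MIXER, amount) for payer, amount in deposits]
    outgoing = [(MIXER, payee, amount) for payee, amount in payouts]
    random.shuffle(outgoing)  # break ordering correlations between in and out
    return ledger + outgoing

ledger = mix([("A1", 1.0), ("B2", 1.0)], [("C3", 1.0), ("D4", 1.0)])

# No ledger entry connects a depositor directly to a recipient:
assert all(not (s in {"A1", "B2"} and d in {"C3", "D4"}) for s, d, _ in ledger)
```

An observer of this ledger sees only flows into and out of the mixer; whether A1 ultimately paid C3 or D4 is not recorded anywhere on chain.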

This process of separating the coins, and the signatures within them, from the actual user gives cryptocurrencies an anonymous or near-anonymous method for conducting criminal transactions online.  A level of the internet known as the Darknet, which is only accessible through special software and works off non-standard communication protocols, has seen a rise in online marketplaces.  Illicit Darknet marketplaces such as Silk Road and, more recently, AlphaBay have leveraged cryptocurrencies as a go-to means of concealing various online black market transactions involving stolen credit card information, controlled substances, and firearms[6].

The few large criminal cases involving the cryptocurrency Bitcoin – such as that of U.S. citizen Ross Ulbricht, who ran Silk Road, and Czech national Tomáš Jiříkovský, who stole ninety thousand Bitcoins ($225 million USD at current market value) – were solved by investigators through traditional methods, such as discovering an IP address left through careless online posts, and not through a vulnerability in the public ledger[7].

Even in smaller-scale cases of narcotics transactions on Darknet marketplaces, local investigators have only been able to trace cryptocurrency purchases backwards after intercepting shipments through normal detection methods and finding cryptocurrency artifacts during the course of a regular investigation.  There has been little to no success in linking cryptocurrencies back to distributors that has not involved regular investigative methods[8].

Looking at future scenarios involving cryptocurrencies, the Global Public Policy Institute sees a possible future in which terrorism devolves back to populist movements and employs decentralized hierarchies heavily influenced by online interactions.  In this possible future, cryptocurrencies could allow groups to covertly move money between supporters and single or small-group operatives, and serve as a means to buy and sell software used in cyberterrorism attacks or in support of physical terrorism attacks[9].

Cryptocurrency is currently positioned to exploit a massive vulnerability in the global financial and legal systems, and law enforcement organizations are only beginning to acquire the knowledge and tools to combat its illicit use.  In defense of law enforcement organizations and regulators, cryptocurrencies are in their infancy, with their operation, trading, and even foundational technology changing rapidly.  Until cryptocurrencies reach a stable or mature state, they will remain an unpredictable, moving target to track and hit[10].


Endnotes:

[1]  Narayanan, A., et al. (2016). Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton University Press.

[2]  Nakamoto, S. (n.d.). Bitcoin: A Peer-to-Peer Electronic Cash System. Retrieved July 10, 2017, from Bitcoin: https://bitcoin.org/bitcoin.pdf

[3]  Cryptocurrency market cap analysis. (n.d.). Retrieved from Cryptolization: https://cryptolization.com/

[4]  CryptoCurrency Market Capitalizations. (n.d.). Retrieved July 10, 2017, from CoinMarketCap: https://coinmarketcap.com/currencies/views/all/

[5]  Jacquez, T. (2016). Cryptocurrency the new money laundering problem for banking, law enforcement, and the legal system. Utica College: ProQuest Dissertations Publishing.

[6]  Over 57% Of Darknet Sites Offer Unlawful Items, Study Shows. (n.d.). Retrieved July 21, 2017, from AlphaBay Market: https://alphabaymarket.com/over-57-of-darknet-sites-offer-unlawful-items-study-shows/

[7]  Bohannon, J. (2016, March 9). Why criminals can’t hide behind Bitcoin. Retrieved July 10, 2017, from Science: http://www.sciencemag.org/news/2016/03/why-criminals-cant-hide-behind-bitcoin

[8]  Jens Anton Bjørnage, M. W. (2017, Feburary 21). Dom: Word-dokument og bitcoins fælder narkohandler. Retrieved July 21, 2017, from Berlingske: https://www.b.dk/nationalt/dom-word-dokument-og-bitcoins-faelder-narkohandler

[9]  Bhatnagar, A., Ma, Y., Manome, M., Markiewicz, S., Sun, F., Wahedi, L. A., et al. (2017, June). Volatile Years: Transnational Terrorism in 2027. Retrieved July 21, 2017, from Robert Bosch Foundation: http://www.bosch-stiftung.de/content/language1/downloads/GGF_2027_Volatile_Years_Transnational_Terrorism_in_2027.pdf

[10]  Engle, E. (2016). Is Bitcoin Rat Poison: Cryptocurrency, Crime, and Counterfeiting (CCC). Journal of High Technology Law 16.2, 340-393.

Assessment Papers Criminal Activities Cyberspace Economic Factors The Viking Cop

Options for Paying Ransoms to Advanced Persistent Threat Actors

Scot A. Terban is a security professional with over 13 years experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering Information, Security Auditing, ISO27001, Threat Intelligence Analysis, Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.  


National Security Situation:  Paying ransom for exploits being extorted by Advanced Persistent Threat Actors: Weighing the Options.

Date Originally Written:  June 1, 2017.

Date Originally Published:  June 8, 2017.

Author and / or Article Point of View:  Recent events have given rise to the notion of crowdfunding monies to pay for exploits held by a hacking group called the ShadowBrokers through the “dump of the month club” they have ostensibly started.  This article examines, from a red team point of view, the idea of meeting the actors’ extortion demands to gain access to new nation-state-level exploits and, in doing so, being able to reverse engineer them and immunize the community.

Background:  On May 30, 2017, the ShadowBrokers posted to their new blog that they were starting a monthly dump service wherein clients could pay a fee for access to exploits and other materials the ShadowBrokers had stolen from the U.S. Intelligence Community (USIC).  On May 31, 2017, a collective of hackers created a Patreon site to crowdfund monies to pay the ShadowBrokers for their wares and gather the exploits in order to reverse engineer them, in the hope of disarming them for the greater community.  The idea was roundly debated on the internet and, as of this writing, has since been pulled by the collective after raising about $3,000.00.  In the end, it was the legal counsel of one of the hackers who had the Patreon site shut down, due to potential illegalities in buying such exploits from actors like the ShadowBrokers.  Many supported the idea, with a smaller but vocal dissenting group warning that it was a bad idea.

Significance:  These events bear on many levels of national security now entangled with information security and information warfare.  The fact that the ShadowBrokers exist and have been dumping nation-state hacking tools is only one dimension of the problem.  After the ShadowBrokers dumped their last package of files, a direct international event ensued: the WannaCrypt0r malware was augmented with code from the ETERNALBLUE and DOUBLEPULSAR U.S. National Security Agency exploits and infected large numbers of hosts all over the globe with ransomware.  An additional aspect is that the code for those exploits may have been copied from the open-source sites of reverse engineers working on the exploits to secure networks via penetration-testing tools.  This was the crux of the hackers’ argument: simply put, they would pay for access to deny it to others while trying to make the exploits safe.  Would this model work for both public and private entities?  Would this actually stop the ShadowBrokers from posting the data publicly even if paid privately?

Option #1:  Private actors buy the exploits through crowd funding and reverse the exploits to make them safe (i.e. report them to vendors for patching).

Risk:  Private actors like the hacker collective who attempted this could be at risk to the following scenarios:

1) Legal issues over buying classified information could lead to arrest and incarceration.

2) Buying the exploits could further encourage ShadowBrokers’ attempts to extort the United States Intelligence Community and government in an active measures campaign.

3) Set a precedent with other actors by showing that the criminal activity will in fact produce monetary gain and thus more extortion campaigns can occur.

4) The actor could be paid and still dump the data to the internet, rendering the scheme moot.

Gain:  Private actors like the hacker collective who attempted this could have net gains from the following scenarios:

1) The actor is paid, and the data is given leaving the hacker collective to reverse engineer the exploits and immunize the community.

2) The hacker collective could garner attention for the issues and for themselves; this could perhaps gain more traction on such issues and secure more environments.

Option #2:  Private actors do not pay for the exploits and do not reward such activities like ransomware and extortion on a global scale.

Risk:  By not paying the extortionists, the data is dumped on the internet and the exploits are used in malware and other hacking attacks globally by those capable of understanding, using, or modifying them.  This has already happened: even with the exploits in the wild and known to vendors, the attacks still succeeded to great effect.  Another side effect is that all operations that had been using these exploits have been burned, but this is already a known quantity to the USIC, as it likely already knows which exploits have been stolen and/or remediated in country.

Gain:  By not paying the extortionists, the community at large does not feed the cost-benefit calculation the attackers must make in their plans for profit.  Refusing to deal with extortionists or terrorists denies them a positive incentive to carry out such attacks for monetary benefit.

Other Comments:  While it may be laudable to consider schemes such as crowdfunding and open-sourcing exploit reversal and mitigation, it is hubris to assume that an actor with bad intent will simply sell the data and be done with it.  It is also of note that the current situation on which this red team article is based involves a nation-state actor, Russia, whose military intelligence service, the Glavnoye Razvedyvatel’noye Upravleniye (GRU), and foreign intelligence service, the Sluzhba Vneshney Razvedki (SVR), are understood not to care about the money.  This situation is not about money; it is about active measures and sowing chaos in the USIC and the world.  The precepts still hold true, however: dealing with terrorists and extortionists is a bad practice that will only incentivize the behavior.  The takeaway is that one must understand the actors and the playing field to make an informed decision on such activities.

Recommendation:  None.


Endnotes:

None.

Cyberspace Extortion Option Papers Scot A. Terban

Options for Defining “Acts of War” in Cyberspace

Michael R. Tregle, Jr. is a U.S. Army judge advocate officer currently assigned as a student in the 65th Graduate Course at The Judge Advocate General’s Legal Center & School.  A former enlisted infantryman, he has served at almost every level of command, from the infantry squad to an Army Service Component Command, and overseas in Afghanistan and the Pacific Theater.  He tweets @shockandlawblog and writes at www.medium.com/@shock_and_law.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The international community lacks consensus on a binding definition of “act of war” in cyberspace.

Date Originally Written:  March 24, 2017.

Date Originally Published:  June 5, 2017.

Author and / or Article Point of View:  The author is an active duty officer in the U.S. Army.  This article is written from the point of view of the international community toward common understandings of “acts of war” in cyberspace.

Background:  The rising prominence of cyber operations in modern international relations highlights a lack of widely established and accepted rules and norms governing their use and status.  Where no common definitions of “force” or “attack” in the cyber domain can be brought to bear, the line between peace and war becomes muddled.  It is unclear which coercive cyber acts rise to a level of force sufficient to trigger international legal rules, or how coercive a cyber act must be before it can be considered an “act of war.”  The term “act of war” is antiquated and mostly irrelevant in the current international legal system.  Instead, international law speaks in terms of “armed conflicts” and “attacks,” the definitions of which govern the resort to force in international relations.  The United Nations (UN) Charter flatly prohibits the use or threat of force between states except when force is sanctioned by the UN Security Council or a state is required to act in self-defense against an “armed attack.”  While it is almost universally accepted that these rules apply in cyberspace, how this paradigm works in the cyber domain remains a subject of debate.

Significance:  Shared understanding among states on what constitutes legally prohibited force is vital to recognizing when states are at war, with whom they are at war, and whether or not their actions, in war or otherwise, are legally permissible.  As the world finds itself falling deeper into perpetual “gray” or “hybrid” conflicts, clear lines between acceptable international conduct and legally prohibited force reduce the chance of miscalculation and define the parameters of war and peace.

Option #1:  States can define cyberattacks causing physical damage, injury, or destruction to tangible objects as prohibited uses of force that constitute “acts of war.”  This definition captures effects caused by cyber operations that are analogous to the damage caused by traditional kinetic weapons like bombs and bullets.  There are only two known instances of cyberattacks that rise to this level – the Stuxnet attack on the Natanz nuclear enrichment facility in Iran, which physically destroyed centrifuges, and an attack on a German steel mill that severely damaged a blast furnace.

Risk:  Limiting cyber “acts of war” to physically destructive attacks fails to fully capture the breadth and variety of detrimental actions that can be achieved in the cyber domain.  Cyber operations that only delete or alter data, however vital that data may be to national interests, would fall short of the threshold.  Similarly, attacks that temporarily interfere with use of or access to vital systems without physically altering them would never rise to the level of illegal force.  Thus, states would not be permitted to respond with force, cyber or otherwise, to such potentially devastating attacks.  Election interference and crashing economic systems exemplify attacks that would not be considered force under the physical damage standard.

Gain:  Reliance on physical damage and analogies to kinetic weapons provides a clear, bright-line threshold that eliminates uncertainty.  It is easily understood by international players and maintains objective standards by which to judge whether an operation constitutes illegal force.

Option #2:  Expand the definition of cyber force to include effects that cause virtual damage to data, infrastructure, and systems.  The International Group of Experts responsible for the Tallinn Manual approached this option with the “functionality test,” whereby attacks that interfere with the functionality of systems can qualify as cyber force, even if they cause no physical damage or destruction.  Examples of such attacks would include the Shamoon attack on Saudi Arabia in 2012 and 2016, cyberattacks that shut down portions of the Ukrainian power grid during the ongoing conflict there, and Iranian attacks on U.S. banks in 2016.

Risk:  This option lacks the objectivity and clear standards by which to assess the cyber force threshold, which may undermine shared understanding.  Expanding the spectrum of cyber activities that may constitute force also potentially destabilizes international relations by increasing the circumstances under which force may be authorized.  Such expansion may further undermine international law by vastly expanding its scope and thereby discouraging compliance: if too many activities are considered force, states that wish to engage in them may simply ignore legal restrictions they view as overly burdensome.

Gain:  Eliminating the physical damage threshold provides more flexibility for states to defend themselves against the potentially severe consequences of cyberattacks.  Broadening the circumstances under which force may be used in response also enhances the deterrent value of cyber capabilities that may be unleashed against an adversary.  Furthermore, lowering the threshold for legally permissible cyber activities discourages coercive international acts.

Other Comments:  None.

Recommendation:  None.


Endnotes:

None.

Cyberspace Law & Legal Issues Michael R. Tregle, Jr. Option Papers

“Do You Have A Flag?” – Egyptian Political Upheaval & Cyberspace Attribution

Murad A. Al-Asqalani is an open-source intelligence analyst based in Cairo, Egypt.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


(Author’s Note — “Do You Have A Flag?” is a reference to the Eddie Izzard sketch of the same name[1].)

National Security Situation:  Response to offensive Information Operations in cyberspace against the Government of Egypt (GoE).

Date Originally Written:  May 15, 2017.

Date Originally Published:  June 1, 2017.

Author and / or Article Point of View:  This article discusses a scenario where the GoE tasks an Interagency Special Task Force (ISTF) with formulating a framework for operating in cyberspace against emergent threats to Egyptian national security.

Background:  In 2011, a popular uprising that relied mainly on the Internet and social media websites to organize protests and disseminate white, grey, and black propaganda against the Mubarak administration of the GoE culminated in former President Mubarak stepping down after three decades in power.

Three disturbing trends have since emerged.  The first is the repeated deployment of large-scale, structured campaigns of online disinformation by all political actors, foreign and domestic, competing for dominance in the political arena.  Media outlets and think tanks seem to cater primarily to their owners’ or donors’ agendas.  Egyptian politics have been reduced to massive astroturfing campaigns, scripted by creative content developers and mobilized by marketing strategists.  These campaigns create and drive talking points using meat puppets and sock puppets, mask them as organic interactions between digital grassroots activists, amplify them in the echo chambers of social media, then pass them along to mainstream media outlets, which cite them as ‘public opinion’ to pressure the GoE, thereby empowering their client special interest groups in this ‘digital political conflict’.

The second trend to emerge is the rise in Computer Network Attack (CNA) and Computer Network Exploitation (CNE) incidents.  CNA incidents mainly focus on hacking GoE websites and defacing them with political messages, whereas CNE incidents mainly focus on information gathering (data mining) and spear phishing on social media websites to identify and target Egyptian Army and Police personnel and their families, thus threatening their Personal Security (PERSEC) and overall Operations Security (OPSEC).  The best-known effort of this type is the work of the first-ever Arabic Advanced Persistent Threat (APT) group: Desert Falcons[2].

The third trend is the abundance of Jihadi indoctrination material, and the increase in propaganda efforts of Islamist terrorist organizations in cyberspace.  New technologies, applications and encryption allow for new channels to reach potential recruits, and to disseminate written, audio, and multimedia messages of violence and hate to target populations.

Significance:  The first trend represents a direct national security threat to the GoE and the interests of the Egyptian people.  Manipulation of public opinion is an Information Operations discipline known as “Influence Operations” that draws heavily on Psychological Operations (PSYOP) doctrines.  It can produce drastic economic consequences that amount to economic occupation and subsequent loss of sovereignty.  Attributing each influence campaign to the special interest group behind it can help identify which Egyptian political or economic interest is at stake.

The second trend reflects the serious developments in modus operandi of terrorist organizations, non-state actors, and even state actors controlling proxies or hacker groups, which have been witnessed and acknowledged recently by most domestic intelligence services operating across the world.  Attributing these operations will identify the cells conducting them as well as the networks that support these cells, which will save lives and resources.

The third trend is a global challenge that touches on issues of freedom of speech, freedom of belief, Internet neutrality, online privacy, as well as technology proliferation and exploitation.  Terrorists use the Internet as a force multiplier, and the best approach to solving this problem is to keep them off of it through attribution and targeting, not to ban services and products available to law-abiding Internet users.

Given these parameters, the ISTF can submit a report with the following options:

Option #1:  Maintain the status quo.

Risk:  By maintaining the status quo, bureaucracy and fragmentation will keep the GoE on the defensive.  The GoE will continue to defend against an avalanche of influence operations by making concessions to whoever launches them.  The GoE will continue to appear incompetent and will lose personnel to assassinations and improvised explosive device attacks.  The GoE will fail to prevent new recruits from joining terrorist groups, and it will not secure the proper atmosphere for investment and economic development.

This will eventually result in the full disintegration of the 1952 Nasserite state bodies, a disintegration that is central to the agendas of many regional and foreign players, and will give rise to a neo-Mamluk state, where rogue generals and kleptocrats maintain independent information operations to serve their own interests, instead of adopting a unified framework to serve the Egyptian people.

Gain:  Perhaps the only gain in this case is the avoidance of further escalation by parties invested in the digital political conflict, escalation that could give rise to more violent insurgencies, divisions within the military enterprise, or even a full-fledged civil war.

Option #2:  Form an Interagency Cyber Threat Research and Intelligence Group (ICTRIG).

Risk:  By forming an ICTRIG, the ISTF risks fueling both intra-agency and interagency feuds that may trigger divisions within the military enterprise and the Egyptian Intelligence Community.  Competing factions within both communities will aim to control ICTRIG through staffing to protect their privileges and compartmentalization.

Gain:  Option #2 will define a holistic approach to waging cyber warfare to protect the political and economic interests of the Egyptian people, protect the lives of Egyptian service members and statesmen, protect valuable resources and infrastructure, and tackle extremism.  ICTRIG will comprise an elite cadre of highly qualified commissioned officers trained in computer science, Information Operations, linguistics, political economy, counterterrorism, and domestic and international law to operate in cyberspace.  ICTRIG will develop its own playbook of mission, ethics, strategies, and tactics in accordance with a directive from the political leadership of the GoE.

Other Comments:  Option #1 can only be submitted or adopted due to a total lack of true political will to shoulder the responsibility of winning this digital political conflict; whoever submits or adopts Option #1 is directly undermining GoE institutions.  This is, in fact, the current reality of the GoE’s response to the threats outlined above: uncoordinated efforts at running several independent information operations have been noted and documented, with the Morale Affairs Department of the Military Intelligence and Reconnaissance Directorate running the largest one.

Recommendation:  None.


Endnotes:

[1]  Eddie Izzard: “Do you have a flag?”, Retrieved from: https://www.youtube.com/watch?v=_9W1zTEuKLY

[2]   Desert Falcons: The Middle East’s Preeminent APT, Kaspersky Labs Blog, Retrieved from https://blog.kaspersky.com/desert-falcon-arabic-apt/7678/


U.S. Options to Develop a Cyberspace Influence Capability

Sina Kashefipour is the founder and producer of the national security podcast The Loopcast.  He  currently works as an analyst.  The opinions expressed in this paper do not represent the position of his employer.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The battle for control and influence over the information space.

Date Originally Written:  May 18, 2017.

Date Originally Published:  May 29, 2017.

Author and / or Article Point of View:  The author believes that there is no meatspace or cyberspace; there is only the information space.  The author also believes that while the tools, data, and knowledge are available, there is no United States organization designed primarily to address the issue of information warfare.

Background:  Information warfare is being used by state and non-state adversaries.  Information warfare, broadly defined, makes use of information technology to gain an advantage over an adversary.  Information is the weapon, the target, and the medium through which this type of conflict takes place[1][2][3].  Information warfare includes tactics such as misinformation, disinformation, propaganda, psychological operations, and computer network operations[3][4][5].

Significance:  Information warfare is a force multiplier.  Control and mastery of information determines success in politics and enables driving the political narrative without having to engage in overt warfare.  Information warfare has taken on a new edge as the information space and the political space are highly interlinked and can, in some instances, be considered one[6][7][8].

Option #1:  The revival of the United States Information Agency (USIA) or the creation of a government agency with a similar function and outlook.  The USIA’s original purpose can be summed up as:

  • “To explain and advocate U.S. policies in terms that are credible and meaningful in foreign cultures”
  • “To provide information about the official policies of the United States, and about the people, values, and institutions which influence those policies”
  • “To bring the benefits of international engagement to American citizens and institutions by helping them build strong long-term relationships with their counterparts overseas”
  • “To advise the President and U.S. government policy-makers on the ways in which foreign attitudes will have a direct bearing on the effectiveness of U.S. policies[9].”

USIA’s original purpose was largely shaped by the Cold War.  The aforementioned four points are a good starting point, but any revival of the USIA would make the resulting organization one devoted to modern information warfare.  A modern USIA would not just focus on what a government agency can do but would also build ties with other governments and across the private sector, including with companies like Google, Facebook, and Twitter, whose platforms have recently been used to propagate information warfare campaigns[10][11].  Private sector companies are also essential to understanding and limiting these types of campaigns[10][12][13][14].  Furthermore, building ties and partnering with other countries facing similar issues would be part of the mission[15][16][17].

Risk:  There are two fundamental risks to reconstituting a USIA: where does such an agency fit within the national security bureaucracy, and how does modern information warfare square with the legal bounds of the First Amendment?

Defining the USIA within the national security apparatus would be difficult[18].  The purpose of the USIA would be easy to state but difficult to bureaucratically define.  Would this organization include public diplomacy, and how would that pair or compete with the Department of State’s public diplomacy mission?  If the organization includes information warfare, how does that impact Department of Defense capabilities such as the National Security Agency or United States Cyber Command?  Where does the Broadcasting Board of Governors fit in?  Lastly, modern execution of successful information warfare relies on a whole-of-government approach, or the ability to advance strategy in an interdisciplinary fashion, which is difficult given the complexity of the bureaucracy.

The second risk is how an agency engages in information warfare within the bounds of the First Amendment.  If war or conflict sees information as the weapon, the target, and the medium, what role can the government legally play?  Can a government wage information warfare without, say, engaging in outright censorship or control of information mediums like Facebook and Twitter?  The legal framework surrounding these issues is ill-defined at present[19][20].

Gain:  A fully funded, cabinet-level organization devoted to information warfare, with the ability to network across government agencies, other governments, and the private sector, could both wage information warfare and defend the United States against it.

Option #2:  Smaller, specific interagency working groups similar to the Active Measures Working Group of the late 1980s.  The original Active Measures Working Group was an interagency collaboration devoted to countering Soviet disinformation, which consequently became the “U.S. Government’s body of expertise on disinformation[21].”

In contrast to Option #1, the proposed working group would have a tightly focused mission, a limited staff, and a single problem to address.

Risk:  Political will is in tension with success: if the proposed working group does not show immediate results, it will more than likely be disbanded.  The group also risks being disbanded once the issue appears “solved.”

Gain:  A small and focused group has the potential to punch far above its weight.  As Schoen and Lamb point out “the group exposed Soviet disinformation at little cost to the United States but negated much of the effort mounted by the large Soviet bureaucracy that produced the multibillion dollar Soviet disinformation effort[22].”

Option #3:  The United States Government creates a dox-and-dump Wikileaks/Shadow Brokers-style group[23][24].  If all else fails, then attacking an adversary’s secrets and making them public could be an option.  Unlike the previous two options, this option does not necessarily represent a truthful approach, rather just truthiness[25].  In practice this means leaking or dumping data that reinforces and emphasizes a deleterious narrative concerning an adversary, making the adversary’s secrets very public and putting the adversary in a compromising position.

Risk:  Burning data publicly might compromise sources and methods, which would ultimately impede or stop investigations and prosecutions.  For instance, if an adversary has a deep and wide corruption problem, is it more effective to dox and dump accounts and shell companies or to engage in a multi-year investigatory process?  Dox and dump would have an immediate effect, but an investigation and prosecution would likely have a longer-lasting effect.

Gain:  An organization and/or network is only as stable as its secrets are secure, and being able to challenge that security effectively is a gain.

Recommendation:  None.


Endnotes:

[1]  Virag, Saso. (2017, April 23). Information and Information Warfare Primer. Retrieved from:  http://playgod.org/information-warfare-primer/

[2]  Waltzman, Rand. (2017, April 27). The Weaponization of Information: The Need of Cognitive Security. Testimony presented before the Senate Armed Services Committee, Subcommittee on Cybersecurity on April 27, 2017.

[3]  Pomerantsev, Peter and Michael Weiss. (2014). The Menace of Unreality: How the Kremlin Weaponizes Information, Culture, and Money.

[4]  Matthews, Miriam and Paul, Christopher (2016). The Russian “Firehose of Falsehood” Propaganda Model: Why It Might Work and Options to Counter It

[5]  Giles, Keir. (2016, November). Handbook of Russian Information Warfare. Fellowship Monograph Research Division NATO Defense College.

[6]  Giles, Keir and Hagestad II, William. (2013). Divided by a Common Language: Cyber Definitions in Chinese, Russian, and English. 2013 5th International Conference on Cyber Conflict

[7]  Strategy Bridge. (2017, May 8). An Extended Discussion on an Important Question: What is Information Operations? Retrieved: https://thestrategybridge.org/the-bridge/2017/5/8/an-extended-discussion-on-an-important-question-what-is-information-operations

[8] There is an interesting conceptual and academic debate to be had between what is information warfare and what is an information operation. In reality, there is no difference given that the United States’ adversaries see no practical difference between the two.

[9] State Department. (1998). USIA Overview. Retrieved from: http://dosfan.lib.uic.edu/usia/usiahome/oldoview.htm

[10]  Nuland, William, Stamos, Alex, and Weedon, Jen. (2017, April 27). Information Operations on Facebook.

[11]  Koerner, Brendan. (2016, March). Why ISIS is Winning the Social Media War. Wired

[12]  Atlantic Council. (2017). Digital Forensic Research Lab Retrieved:  https://medium.com/dfrlab

[13]  Bellingcat. (2017).  Bellingcat: The Home of Online Investigations. Retrieved: https://www.bellingcat.com/

[14]  Bergen, Mark. (2016). Google Brings Fake News Fact-Checking to Search Results. Bloomberg News. Retrieved: https://www.bloomberg.com/news/articles/2017-04-07/google-brings-fake-news-fact-checking-to-search-results

[15]  NATO Strategic Communications Centre of Excellence. (2017). Retrieved: http://stratcomcoe.org/

[16]  National Public Radio. (2017, May 10). NATO Takes Aim at Disinformation Campaigns. Retrieved: http://www.npr.org/2017/05/10/527720078/nato-takes-aim-at-disinformation-campaigns

[17]  European Union External Action. (2017). Questions and Answers about the East Stratcom Task Force. Retrieved: https://eeas.europa.eu/headquarters/headquarters-homepage/2116/-questions-and-answers-about-the-east-

[18]  Armstrong, Matthew. (2015, November 12). No, We Do Not Need to Revive The U.S. Information Agency. War on the Rocks. Retrieved:  https://warontherocks.com/2015/11/no-we-do-not-need-to-revive-the-u-s-information-agency/ 

[19]  For example, the Countering Foreign Propaganda and Disinformation Act included in the National Defense Authorization Act for fiscal year 2017 deals more with issues of funding, organization, and some strategy than with legal infrastructure.  Retrieved: https://www.congress.gov/114/crpt/hrpt840/CRPT-114hrpt840.pdf

[20]  The U.S. Information and Educational Exchange Act of 1948, also known as the Smith-Mundt Act.  The act effectively creates the basis for public diplomacy and the dissemination of government viewpoint data abroad.  The law also limits what the United States can disseminate at home.  Retrieved: http://legisworks.org/congress/80/publaw-402.pdf

[21]  Lamb, Christopher and Schoen, Fletcher (2012, June). Deception, Disinformation, and Strategic Communications: How One Interagency Group Made a Major Difference. Retrieved: http://ndupress.ndu.edu/Portals/68/Documents/stratperspective/inss/Strategic-Perspectives-11.pdf

[22]  Lamb and Schoen, page 3

[23]  RT. (2016, October 3). Wikileaks turns 10: Biggest Secrets Exposed by Whistleblowing Project. Retrieved: https://www.rt.com/news/361483-wikileaks-anniversary-dnc-assange/

[24]  The Gruqg. (2016, August 18). Shadow Broker Breakdown. Retrieved: https://medium.com/@thegrugq/shadow-broker-breakdown-b05099eb2f4a

[25]  Truthiness is defined as “the quality of seeming to be true according to one’s intuition, opinion, or perception, without regard to logic, factual evidence, or the like.” Dictionary.com. Truthiness. Retrieved:  http://www.dictionary.com/browse/truthiness.

Truthiness in this space is not just about leaking data but also how that data is presented and organized. The goal is to take data and shape it so it feels and looks true enough to emphasize the desired narrative.


Evolution of U.S. Cyber Operations and Information Warfare

Brett Wessley is an officer in the U.S. Navy, currently assigned to U.S. Pacific Command.   The contents of this paper reflect his own personal views and are not necessarily endorsed by U.S. Pacific Command, Department of the Navy or Department of Defense.  Connect with him on Twitter @Brett_Wessley.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.  


National Security Situation:  Evolving role of cyber operations and information warfare in military operational planning.

Date Originally Written:  April 19, 2017.

Date Originally Published:  May 25, 2017.

Author and / or Article Point of View:  This article is intended to present options to senior level Department of Defense planners involved with Unified Command Plan 2017.

Background:  Information Warfare (IW) has increasingly gained prominence throughout defense circles, with both allied and adversarial militaries reforming and reorganizing IW doctrine across their force structures.  Although not doctrinally defined by the U.S. Department of Defense (DoD), IW has been embraced to varying degrees by the individual branches of the U.S. armed forces[1].  For the purposes of this paper, the definition of IW is: the means of creating non-kinetic effects in the battlespace that disrupt, degrade, corrupt, or influence the ability of adversaries or potential adversaries to conduct military operations while protecting our own.

Significance:  IW has been embraced by U.S. near-peer adversaries as a means of asymmetrically attacking U.S. military superiority.  Russian Defense Minister Sergei Shoigu recently acknowledged the existence of “information warfare troops,” who conduct military exercises and real-world operations in Ukraine demonstrating the fusion of intelligence, offensive cyber operations, and information operations (IO)[2].  The People’s Republic of China has also reorganized its armed forces to operationalize IW, with the newly created People’s Liberation Army Strategic Support Force drawing from existing units to combine intelligence, cyber, electronic warfare (EW), IO, and space forces into a single command[3].

Modern militaries increasingly depend on sophisticated systems for command and control (C2), communications and intelligence.  Information-related vulnerabilities have the potential for creating non-kinetic operational effects, often as effective as kinetic fires options.  According to U.S. Army Major General Stephen Fogarty, “Russian activities in Ukraine…really are a case study for the potential for CEMA, cyber-electromagnetic activities…It’s not just cyber, it’s not just electronic warfare, it’s not just intelligence, but it’s really effective integration of all these capabilities with kinetic measures to actually create the effect that their commanders [want] to achieve[4].”  Without matching the efforts of adversaries to operationalize IW, U.S. military operations risk vulnerability to enemy IW operations.

Option #1:  United States Cyber Command (USCYBERCOM) will oversee Military Department efforts to man, train, and equip IW and IW-related forces to be used to execute military operations under Combatant Command (CCMD) authority.  Additionally, USCYBERCOM will synchronize IW planning and coordinate IW operations across the CCMDs, as well as execute some IW operations under its own authority.

Risk:  USCYBERCOM, still a relatively new sub-unified command under United States Strategic Command (USSTRATCOM), has limited experience coordinating intelligence, EW, space, and IO capabilities within coherent IW operations.  USSTRATCOM is tasked with responsibility for DoD-wide space operations, and the Geographic Combatant Commands (GCCs) are tasked with intelligence, EW, and IO operational responsibility[5][6][7].  Until USCYBERCOM gains experience supporting GCCs with full-spectrum IW operations, previously GCC-controlled IO and EW operations will operate at elevated risk relative to similar support provided by USSTRATCOM.

Gain:  USCYBERCOM overseeing Military Department efforts to man, train, and equip IW and IW-related forces will ensure that all elements of successful non-kinetic military effects are ready to be imposed on the battlefield.  Operational control of IW forces will remain with the GCC, but USCYBERCOM will organize, develop, and plan support during crisis and war.  Much as the creation of United States Special Operations Command (USSOCOM) as a unified command consolidated core special operations activities and tasked USSOCOM to organize, train, and equip special operations forces, a fully optimized USCYBERCOM would do the same for IW-related forces.

This option is a similar construct to the Theater Special Operations Commands (TSOCs) which ensure GCCs are fully supported during execution of operational plans.  Similar to TSOCs, Theater Cyber Commands could be established to integrate with GCCs and support both contingency planning and operations, replacing the current Joint Cyber Centers (JCCs) that coordinate current cyber forces controlled by USCYBERCOM and its service components[8].

Streamlined C2 and co-location of IW and IW-related forces would have a force multiplying effect when executing non-kinetic effects during peacetime, crisis and conflict.  Instead of cyber, intelligence, EW, IO, and space forces separately planning and coordinating their stove-piped capabilities, they would plan and operate as an integrated unit.

Option #2:  Task GCCs with operational responsibility over aligned cyber forces, and integrate them with current IW-related planning and operations.

Risk:  GCCs lack the institutional cyber-related knowledge and expertise that USCYBERCOM maintains, largely gained by Commander, USCYBERCOM traditionally being dual-hatted as Director of the National Security Agency (NSA).  While it is plausible that in the future USCYBERCOM could develop equivalent cyber-related tools and expertise of NSA, it is much less likely that GCC responsibility for cyber forces could sustain this relationship with NSA and other Non-Defense Federal Departments and Agencies (NDFDA) that conduct cyber operations.

Gain:  GCCs are responsible for theater operational and contingency planning and would be best suited to tailoring IW-related effects to military plans.  During all phases of military operations, the GCC would C2 IW operations, leveraging the full spectrum of IW to both prepare the operational environment and execute operations in conflict.  While the GCCs would be supported by USSTRATCOM/USCYBERCOM, in addition to the NDFDAs, formally assigning Cyber Mission Teams (CMTs) to the GCC as the Joint Force Cyber Component (JFCC) would enable the Commander to influence the manning, training, and equipping of forces relevant to the threats posed by their unique theater.

GCCs are already responsible for theater intelligence collection and IO, and removing administrative barriers to integrating cyber-related effects would improve the IW capabilities in theater.  Although CMTs currently support GCCs and their theater campaign and operational plans, targeting effects are coordinated instead of tasked[9].  Integration of the CMTs as a fully operational JFCC would more efficiently synchronize non-kinetic effects throughout the targeting cycle.

Other Comments:  The current disjointed nature of DoD IW planning and operations prevents the full impact of non-kinetic effects to be realized.  While cyber, intelligence, EW, IO, and space operations are carried out by well-trained and equipped forces, these planning efforts remain stove-piped within their respective forces.  Until these operations are fully integrated, IW will remain a strength for adversaries who have organized their forces to exploit this military asymmetry.

Recommendation:  None.


Endnotes:

[1]  Richard Mosier, “NAVY INFORMATION WARFARE — WHAT IS IT?,” Center for International Maritime Security, September 13, 2016. http://cimsec.org/navy-information-warfare/27542

[2]  Vladimir Isachenkov, “Russia military acknowledges new branch: info warfare troops,” The Associated Press, February 22, 2017. http://bigstory.ap.org/article/8b7532462dd0495d9f756c9ae7d2ff3c/russian-military-continues-massive-upgrade

[3]  John Costello, “The Strategic Support Force: China’s Information Warfare Service,” The Jamestown Foundation, February 8, 2016. https://jamestown.org/program/the-strategic-support-force-chinas-information-warfare-service/#.V6AOI5MrKRv

[4]  Keir Giles, “The Next Phase of Russian Information Warfare,” The NATO STRATCOM Center of Excellence, accessed April 20, 2017. http://www.stratcomcoe.org/next-phase-russian-information-warfare-keir-giles

[5]  U.S. Joint Chiefs of Staff, “Joint Publication 2-0: Joint Intelligence”, October 22, 2013, Chapter III: Intelligence Organizations and Responsibilities, III-7-10.

[6]  U.S. Joint Chiefs of Staff, “Joint Publication 3-13: Information Operations”, November 20, 2014, Chapter III: Authorities, Responsibilities, and Legal Considerations, III-2; Chapter IV: Integrating Information-Related Capabilities into the Joint Operations Planning Process, IV-1-5.

[7]  U.S. Joint Chiefs of Staff, “Joint Publication 3-12 (R): Cyberspace Operations”, February 5, 2013, Chapter III: Authorities, Roles, and Responsibilities, III-4-7.

[8]  Ibid.

[9]  U.S. Cyber Command News Release, “All Cyber Mission Force Teams Achieve Initial Operating Capability,” U.S. Department of Defense, October 24, 2016.  https://www.defense.gov/News/Article/Article/984663/all-cyber-mission-force-teams-achieve-initial-operating-capability/


Cyber Vulnerabilities in U.S. Law Enforcement & Public Safety Communication Networks

The Viking Cop has served in a law enforcement capacity with multiple organizations within the U.S. Executive Branch.  He can be found on Twitter @TheVikingCop.  The views reflected are his own and do not represent the opinion of any government entities.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Cyber vulnerabilities in regional-level Law Enforcement and Public Safety (LE/PS) communication networks which could be exploited by violent extremists in support of a physical attack.

Date Originally Written:  April 15, 2017.

Date Originally Published:  May 22, 2017.

Author and / or Article Point of View:  Author is a graduate of both University and Federal LE/PS training.  Author has two years of sworn and unsworn law enforcement experience.  Author has been a licensed amateur radio operator and builder for eleven years.

Background:  Currently, LE/PS agencies in the U.S. operate on communication networks built on the Association of Public-Safety Communications Officials' Project 25 (P25) standard, established in 1995[1].  European and East Asian countries operate on a similar network standard known as Terrestrial Trunked Radio.

The push at the federal level for widespread implementation of the P25 standard across all U.S. emergency services was prompted by communication failures during critical incidents such as the September 11th attacks, the Columbine massacre, and the Oklahoma City bombing[2].  Prior to P25 implementation, different LE/PS organizations had operated on different bands, frequencies, and equipment, which prevented them from communicating directly with one another.

During P25 implementation many agencies, in an effort to offset cost and take advantage of the interoperability concept, established Regional Communication Centers (RCC) such as the Consolidated Communication Bureau in Maine, the Grand Junction Regional Communications Center in Colorado, and South Sound 911 in Washington.  These RCCs have consolidated dispatching for all LE/PS activities, enabling smaller jurisdictions to work together more effectively on daily calls for service.

Significance:  During a critical incident, the rapid, clear, and secure flow of communications between responding personnel is essential.  The ability of responding LE/PS organizations to coordinate is greatly enhanced by the P25 standard: because all agencies operate on the same band, unified networks can be quickly established and the flow of information can avoid bottlenecks.

Issues arise as violent extremist groups, such as the Islamic State of Iraq and Syria (ISIS), attempt to recruit more technically minded members who can increase the group's ability to plan and conduct cyber operations, either as a direct attack or in support of a physical attack[3].  Electronic security researchers have also found security flaws in the P25 standard's method of framing transmission data that leave it vulnerable to practical attacks, including high-energy denial-of-service attacks and low-energy selective jamming attacks[4][5].

This article focuses on a style of attack known as Selective Jamming, in which an attacker uses one or more low-power, inexpensive, and portable transceivers to specifically target encrypted communications in a manner that does not affect transmissions made in the clear (unencrypted).  Such an attack would be difficult to detect because of other flaws in the P25 standard, and each jamming burst would last no more than a few hundredths of a second[4].

If a series of Selective Jamming transceivers were activated shortly before a physical attack, responding units, especially tactical units, would have only minutes to decide how to manage their communications.
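The target-selection logic behind such an attack can be illustrated with a simplified sketch.  Each P25 voice transmission carries an algorithm identifier (ALGID) in its unprotected metadata, with the value 0x80 designating clear traffic; an attacker who reads this field can choose to jam only encrypted frames.  The frame representation below is purely illustrative and not taken from the P25 specification.

```python
# Illustrative sketch of selective-jamming target selection against
# P25-style frames.  The dictionary layout is hypothetical; the ALGID
# value 0x80 marking clear (unencrypted) traffic comes from the P25
# standard.

CLEAR_ALGID = 0x80  # P25 algorithm ID for unencrypted voice traffic

def should_jam(frame):
    """Target only frames whose metadata marks them as encrypted."""
    return frame["algid"] != CLEAR_ALGID

# Two observed frames: one clear, one encrypted (AES-256 uses ALGID 0x84)
observed = [
    {"seq": 1, "algid": 0x80},
    {"seq": 2, "algid": 0x84},
]
targets = [f["seq"] for f in observed if should_jam(f)]
```

Because the jamming burst only needs to corrupt a small portion of each targeted frame, the attack can remain brief and low power, which is part of what makes it difficult to detect.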

Option #1:  Push all radio traffic into the clear to overcome a possible selective jamming attack.  This option would require all responding units to disable the encryption function on their radios or switch over to an unencrypted channel to continue to effectively communicate during the response phase.

Risk:  The purpose of encrypted communications in LE/PS is to prevent a perpetrator from listening to the tactical decisions and deployment of responders.  If a perpetrator has developed and implemented the capability to selectively jam communications, they will likely also have the ability and equipment to monitor radio traffic once it is in the clear.  This option would give the perpetrator a major advantage by revealing the response to the attack.  The hesitancy of undercover teams to operate in the clear was noted as a major safety risk in the after-action report on the 2015 San Bernardino shooting[6].

Gain:  LE/PS agencies responding to an incident would be able to continue using their regular equipment and protocols without having to deploy an alternative system.  This would give responders the greatest speed in attempting to stop the attack, at a known cost in operational security.  There would also be zero equipment costs above normal operation, as all P25-series radios are capable of operating in the clear.

Option #2:  Develop and stage a secondary communications system for responding agencies or tactical teams to implement once a selective jamming attack is suspected to be occurring.

Risk:  Significant cost and planning would be required to field a jamming-resistant secondary system that responding agencies could deploy rapidly.  This cost factor could prompt agencies to equip only tactical teams with a separate system, such as push-to-talk cellphones or radios using communications standards other than P25.  Any LE/PS unit without access to the secondary system would experience a near-total communications blackout, apart from transmissions made in the clear.

Gain:  Once a possible selective jamming attack was recognized, responding units or tactical teams would be able to maintain operational security by switching to a secure method of communications.  This would deny the perpetrator the advantage sought by disrupting and/or monitoring radio traffic.

Other Comments:  Both options would require significant additional training for LE/PS personnel to recognize the signs of a Selective Jamming attack and respond as appropriate.

Recommendation:  None.


Endnotes:

[1]  Horden, N. (2015). P25 History. Retrieved from Project 25 Technology Interest Group: http://www.project25.org/index.php/technology/p25-history

[2]  National Task Force on Interoperability. (2005). Why Can't We Talk? Washington D.C.: National Institute of Justice.

[3]  Nussbaum, B. (2015). Thinking About ISIS And Its Cyber Capabilities: Somewhere Between Blue Skies and Falling One. Retrieved from The Center for Internet and Society: http://cyberlaw.stanford.edu/blog/2015/11/thinking-about-isis-and-its-cyber-capabilities-somewhere-between-blue-skies-and-falling

[4]  Clark, S., Metzger, P., Wasserman, Z., Xu, K., & Blaze, M. (2010). Security Weaknesses in the APCO Project 25 Two-Way Radio System. University of Pennsylvania Department of Computer & Information Science.

[5]  Glass, S., Muthukkumarasamy, V., Portmann, M., & Robert, M. (2011). Insecurity in Public-Safety Communications. Brisbane: NICTA.

[6]  Braziel, R., Straub, F., Watson, G., & Hoops, R. (2016). Bringing Calm to Chaos: A Critical Incident Review of the San Bernardino Public Safety Response to the December 2, 2015, Terrorist Shooting Incident at the Inland Regional Center. Washington: Office of Community Oriented Policing Services.


U.S. Diplomacy Options for Security & Adaptability in Cyberspace

Matthew Reitman is a science and technology journalist.  He has a background in security policy and studied International Relations at Boston University.  He can be found on Twitter @MatthewReitman.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  U.S. competitors conducting national security activities in cyberspace below the threshold of war aka in the “Gray Zone.”

Date Originally Written:  April 14, 2017.

Date Originally Published:  May 18, 2017.

Author and / or Article Point of View:  This article is written from the point of view of the U.S. State Department towards cyberspace.

Background:  State actors and their non-state proxies operate aggressively in cyberspace, but within a gray zone that violates international norms without justifying a “kinetic” response.  Russian influence operations in the 2016 U.S. election were not an act of war, but escalated tensions dramatically[1].  North Korea used the Lazarus Group to circumvent sanctions by stealing $81 million from Bangladesh’s central bank[2].  Since a U.S.-People’s Republic of China (PRC) agreement in 2015 to curb corporate espionage, there have been 13 intrusions by groups based in the PRC against the U.S. private sector[3].  The State Department has helped to curb Islamic State of Iraq and Syria propaganda online via the Global Engagement Center[4].  The recent creation of another interagency entity, the Russia Information Group, suggests similar efforts could be effective elsewhere[5].

The State Department continues to work towards establishing behavior norms in cyberspace via multilateral channels, like the United Nations Group of Governmental Experts, and bilateral channels, but this remains a slow and tedious process.  Until those norms are codified, gray zone activities in cyberspace will continue.  The risk of attacks on Information Technology (IT) or critical infrastructure and less destructive acts will only grow as the rest of the world comes online, increasing the attack surface.

Significance:  The ever-growing digitally connected ecosystem presents a chimera-like set of risks and rewards for U.S. policymakers.  Protecting the free exchange of information online, let alone keeping the U.S. and its allies safe, is difficult when facing gray zone threats.  Responding with conventional tools like economic sanctions can be evaded more easily online, while “hacking back” can escalate tensions in cyberspace and further runs the risk of creating a conflict that spills offline.  Despite the challenge, diplomacy can reduce threats and deescalate tensions for the U.S. and its allies by balancing security and adaptability.  This article provides policy options for responding to and defending against a range of gray zone threats in cyberspace.

Option #1:  Establish effective compellence methods tailored to each adversary.  Option #1 seeks to combine and tailor traditional coercive diplomacy methods, like indictments, sanctions, and “naming and shaming,” with aggressive counter-messaging to combat information warfare, which can range from debunking fake news to producing misinformation that undermines the adversary’s narrative.  This bifocal approach has been shown to be a more effective form of coercion[6] than either method alone.

Risk:  Depending on the severity, the combined and tailored compellence methods could turn public opinion against the U.S.  Extreme sanctions that punish civilian populations could be viewed unfavorably.  If sanctions are evaded online, escalation could increase as more aggressive responses are considered.  “Naming and shaming” could backfire if an attack is falsely attributed.  Fake bread crumbs can be left behind in code to obfuscate the true offender and make it look as though another nation is responsible.  Depending on the severity of counter-propaganda, its content could damage U.S. credibility, especially if conducted covertly.  Additionally, U.S. actions under Option #1 could undermine efforts to establish behavior norms in cyberspace.

Gain:  Combined and tailored compellence methods can isolate an adversary financially and politically while eroding domestic support.  “Naming and shaming” sends a clear message to the adversary and the world that their actions will not be tolerated, justifying any retaliation.  Sanctions can weaken an economy and cut off outside funding for political support.  Leaking unfavorable information and counter-propaganda undermines an adversary’s credibility and also erodes domestic support.  Option #1’s severity can range depending on the scenario, from amplifying the spread of accurate news and leaked documents with social botnets to deliberately spreading misinformation.  As these options escalate, so do the risks.

Option #2:  Support U.S. Allies’ cybersecurity due diligence and capacity building.  Option #2 pursues confidence-building measures in cyberspace as a means of deterrence offline, so nations with U.S. collective defense agreements have priority.  This involves fortifying allies’ IT networks and industrial control systems for critical infrastructure by taking measures to reduce vulnerabilities and improve cybersecurity incident response teams (CSIRTs).  This option is paired with foreign aid for programs that teach media literacy, “cyber hygiene,” and computer science to civilians.

Risk:  Improving allies’ defensive posture can be viewed by some nations as threatening and could escalate tensions.  Helping allies fortify their defensive capabilities could lead to some sense of assumed responsibility if those measures failed, potentially fracturing the relationship or causing the U.S. to come to their defense.  Artificial Intelligence (AI)-enhanced defense systems are not a silver bullet and can contribute to a false sense of security.  Any effort to defend against information warfare risks going too far by infringing on freedom of speech.  Aside from diminishing public trust in the U.S., Option #2 could undermine efforts to establish behavior norms in cyberspace.

Gain:  Collectively, this strategy can strengthen U.S. Allies by contributing to their independence while bolstering their defense against a range of attacks.  Option #2 can reduce risks to U.S. networks by decreasing threats to foreign networks.  Penetration testing and threat sharing can highlight vulnerabilities in IT networks and critical infrastructure, while educating CSIRTs.  Advances in AI-enhanced cybersecurity systems can decrease response time and reduce network intrusions.  Funding computer science education trains the next generation of CSIRTs.  Cyber hygiene, or best cybersecurity practices, can make civilians less susceptible to cyber intrusions, while media literacy can counter the effects of information warfare.

Other Comments:  The U.S. Cyber Command and intelligence agencies, such as the National Security Agency and Central Intelligence Agency, are largely responsible for U.S. government operations in cyberspace.  The U.S. State Department’s range of options may be limited, but partnering with the military and intelligence communities, as well as the private sector is crucial.

Recommendation:  None.


Endnotes:

[1]  Nakashima, E. (2017, February 7) Russia’s apparent meddling in U.S. election is not an act of war, cyber expert says. Washington Post. Retrieved from: https://www.washingtonpost.com/news/checkpoint/wp/2017/02/07/russias-apparent-meddling-in-u-s-election-is-not-an-act-of-war-cyber-expert-says

[2]  Finkle, J. (2017, March 15) “North Korean hacking group behind recent attacks on banks: Symantec.” Reuters. Retrieved from: http://www.reuters.com/article/us-cyber-northkorea-symantec

[3]  FireEye. (2016, June 20). Red Line Drawn: China Recalculates Its Use Of Cyber Espionage. Retrieved from: https://www.fireeye.com/blog/threat-research/2016/06/red-line-drawn-china-espionage.html

[4]  Warrick, J. (2017, February 3). “How a U.S. team uses Facebook, guerrilla marketing to peel off potential ISIS recruits.” Washington Post. Retrieved from: https://www.washingtonpost.com/world/national-security/bait-and-flip-us-team-uses-facebook-guerrilla-marketing-to-peel-off-potential-isis-recruits/2017/02/03/431e19ba-e4e4-11e6-a547-5fb9411d332c_story.html

[5]  Mak, T. (2017, February 6). “U.S. Preps for Infowar on Russia”. The Daily Beast. Retrieved from: http://www.thedailybeast.com/articles/2017/02/06/u-s-preps-for-infowar-on-russia.html

[6]  Valeriano, B., & Jensen, B. (2017, March 16). “From Arms and Influence to Data and Manipulation: What Can Thomas Schelling Tell Us About Cyber Coercion?”. Lawfare. Retrieved from: https://www.lawfareblog.com/arms-and-influence-data-and-manipulation-what-can-thomas-schelling-tell-us-about-cyber-coercion


Options for Private Sector Hacking Back

Scot A. Terban is a security professional with over 13 years of experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  A future where Hacking Back / Offensive Cyber Operations in the Private Sphere are allowed by the U.S. Government.

Date Originally Written:  April 3, 2017.

Date Originally Published:  May 15, 2017.

Author and / or Article Point of View:  This article is written from the point of view of a future where Hacking Back / Offensive Cyber Operations as a means for corporations to react offensively as a defensive act has been legally sanctioned by the U.S. Government and the U.S. Department of Justice.  While this government sanctioning may seem encouraging to some, it could lead to national and international complications.

Background:  It is the year X and hacking back by companies in the U.S. has been given official sanction.  As such, any company that has been hacked may react offensively by hacking the adversaries’ infrastructure to steal back data and / or deny and degrade the adversaries’ ability to attack further.

Significance:  At present, Hacking Back / Offensive Cyber Operations are not sanctioned activities that the U.S. Government allows U.S. corporations to conduct.  If this were to come to pass, then U.S. corporations would have the capability to stand up offensive cyber operations divisions in their corporate structure or perhaps hire companies to carry out such actions for them, i.e., Information Warfare Mercenaries.  These forces and actions taken by corporations, if allowed, could cause larger tensions within the geopolitical landscape and force other nation states to react.

Option #1:  The U.S. Government sanctions the act of hacking back against adversaries as fair game.  U.S. corporations stand up hacking teams to work with Blue Teams (Employees in companies who attempt to thwart incidents and respond to them) to react to incidents and to attempt to hack the adversaries back to recover information, determine who the adversaries are, and to prevent their infrastructure from being operational.

Risk:  Hacking teams at U.S. corporations, while hacking back, make mistakes and attack innocent companies/entities/foreign countries whose infrastructure may have been unwittingly used as part of the original attack.

Gain:  The hacking teams of these U.S. corporations manage to hack back, steal information, and determine if it had been copied and further exfiltrated.  This also allows the U.S. corporations to try to determine who the actor is and gather evidence as well as degrade the actor’s ability to attack others.

Option #2:  The U.S. Government allows for the formation of teams/companies of information warfare specialists that are non-governmental bodies to hack back as an offering.  This offensive activity would be sanctioned and monitored by the government but work for companies under a letter-of-marque approach, with payment and / or bounties for actors stopped or for evidence brought to the judicial system and used to prosecute actors.

Risk:  Letters of marque could be misused, and attackers could go outside their mandates.  The same types of mistakes could also be made as those of the corporations that formed offensive teams internally.  Offensive actions could affect geopolitics and interfere with other governmental operations that may be taking place.  The infrastructure of innocent actors who were merely a pivot point could be hacked and abused, and other as-yet-undefined mistakes could be made.

Gain:  Such actors and operations could deter some adversaries and in fact could retrieve data that has been stolen and perhaps prevent that data from being further exploited.

Other Comments:  The idea of hacking back has clearly been in the news these last few years, and many security professionals have said it is a terrible idea.  There are certain advantages to the idea that firms can protect themselves from hacking by hacking back, but the general sense today is that many companies cannot even protect their data properly to begin with, so hacking back is a red herring that distracts from larger security concerns.

Recommendation:  None.


Endnotes:

None.


Options to Deter Cyber-Intrusions into Non-Government Computers

Elizabeth M. Bartels is a doctoral candidate at the Pardee RAND Graduate School and an assistant policy analyst at the nonprofit, nonpartisan RAND Corporation.  She has an M.S. in political science from the Massachusetts Institute of Technology and a B.A. in political science with a minor in Near Eastern languages and civilization from the University of Chicago.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Unless deterred, cyber-intrusions into non-government computer systems will continue to lead to the release of government-related information.

Date Originally Written:  March 15, 2017.

Date Originally Published:  May 11, 2017.

Author and / or Article Point of View:  Author is a PhD candidate in policy analysis, whose work focuses on wargaming and defense decision-making.

Background:  Over the years, a great deal of attention has been paid to gaining security in cyberspace to prevent unauthorized access to critical systems, such as those that control electrical grids, financial systems, and military networks.  In recent years a new category of threat has emerged: the cyber-theft and subsequent public release of large troves of private communications, personal documents, and other data.

This category of incident includes the release of government data by inside actors such as Chelsea Manning and Edward Snowden.  However, hacks of the Democratic National Committee and John Podesta, a Democratic party strategist, illustrate that the risk goes beyond the theft of government data to include information that has the potential to harm individuals or threaten the proper functioning of government.  Because the federal government depends on proxies such as contractors, non-profit organizations, and local governments to administer so many public functions, securing information that could harm the government – but is not on government-secured systems – may require a different approach.

Significance:  The growing dependence on government proxies, and the risk such dependence creates, is hardly new[1], and neither is concern over the cyber security implications of systems outside government’s immediate control[2].  However, recent attacks have called the sufficiency of current solutions into question.

Option #1:  Build Better Defenses.  The traditional approach to deterring cyber-exploitation has focused on securing networks, so that the likelihood of failure is high enough to dissuade adversaries from attempting to infiltrate systems.  These programs range from voluntary standards to improve network security[3], to contractual security standards, to counter-intelligence efforts that seek to identify potential insider threats.  These programs could be expanded to more aggressively set standards covering non-governmental systems containing information that could harm the government if released.

Risk:  Because the government does not own these systems, it must motivate proxy organizations to take actions they may not see as in their interest.  While negotiating contracts that align organizational goals with those of the government or providing incentives to organizations that improve their defenses may help, gaps are likely to remain given the limits of governmental authority over non-governmental networks and information[4].

Additionally, defensive efforts are often seen as a nuisance both inside and outside government.  For example, the military culture often prioritizes warfighting equipment over defensive or “office” functions like information technology[5], and counter-intelligence is often seen as a hindrance to intelligence gathering[6].  Other organizations are generally focused on efficiency of day-to-day functions over security[7].  These tendencies create a risk that security efforts will not be taken seriously by line operators, causing defenses to fail.

Gain:  Denying adversaries the opportunity to infiltrate U.S. systems can prevent unauthorized access to sensitive material and deter future attempted incursions.

Option #2:  Hit Back Harder.  Another traditional approach to deterrence is punishment—that is, credibly threatening to impose costs on the adversary if they commit a specific act.  The idea is that adversaries will be deterred if they believe attacks will extract a cost that outweighs any potential benefits.  Under the Obama administration, punishment for cyber attacks focused on the threat of economic sanctions[8] and, in the aftermath of attacks, promises of clandestine actions against adversaries[9].  This policy could be made stronger by a clear statement that the U.S. will take clandestine action not just when its own systems are compromised, but also when its interests are threatened by exploitation of other systems.  Recent work has advocated the use of cyber-tools which are acknowledged only to the victim as a means of punishment in this context[10], however the limited responsiveness of cyber weapons may make this an unattractive option.  Instead, diplomatic, economic, information, and military options in all domains should be considered when developing response options, as has been suggested in recent reports[11]. 
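The cost-benefit logic behind deterrence by punishment can be expressed as a toy expected-value sketch.  The numbers and function below are purely illustrative assumptions, not estimates from this article; they simply show why weak attribution (addressed in the Risk paragraph that follows) hollows out an otherwise severe threat.

```python
# Toy expected-value model of deterrence by punishment.  All values
# are hypothetical illustrations chosen for the example.

def is_deterred(benefit, punishment_cost, p_attribution, p_retaliation):
    """Adversary is deterred when expected punishment outweighs the gain."""
    expected_cost = punishment_cost * p_attribution * p_retaliation
    return expected_cost > benefit

# With confident, timely attribution the threat of punishment is credible...
deterred_strong = is_deterred(benefit=10, punishment_cost=50,
                              p_attribution=0.9, p_retaliation=0.8)

# ...but slow, uncertain attribution hollows out the same threatened cost.
deterred_weak = is_deterred(benefit=10, punishment_cost=50,
                            p_attribution=0.3, p_retaliation=0.3)
```

In this framing, raising the threatened cost matters little if the adversary believes attribution, and therefore retaliation, is unlikely.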

Risk:  Traditionally, there has been skepticism that cyber incursions can be effectively stopped through punishment, as in order to punish, the incursion must be attributed to an adversary.  Attributing cyber incidents is possible based on forensics, but the process often lacks the speed and certainty of investigations into traditional attacks.  Adversaries may assume that decision makers will not be willing to retaliate long after the initiating incident and without “firm” proof as justification.  As a result, adversaries might still be willing to attack because they feel the threat of retaliation is not credible.  Response options will also need to deal with how uncertainty may shape U.S. decision maker tolerance for collateral damage and spillover effects beyond primary targets.

Gain:  Counter-attacks can be launched regardless of who owns the system, in contrast to defensive options, which are difficult to implement on systems not controlled by the government.

Option #3:  Status Quo. While rarely discussed, another option is to maintain the status quo and not expand existing programs that seek to protect government networks.

Risk:  By failing to evolve U.S. defenses against cyber-exploitation, adversaries could gain increased advantage as they develop new ways to overcome existing approaches.

Gain:  It is difficult to demonstrate that even the current level of spending on deterring cyber attacks has meaningful impact on adversary behavior.  Limiting the expansion of untested programs would free up resources that could be devoted to examining the effectiveness of current policies, which might generate new insights about what is, and is not, effective.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1]  John J. Dilulio Jr. [2014], Bring Back the Bureaucrats: Why More Federal Workers Will Lead to Better (and Smaller!) Government, Templeton Press.

[2]  President Barack Obama [2013], Executive Order—Improving Critical Infrastructure Cybersecurity, The White House Office of the Press Secretary.

[3]  National Institute of Standards and Technology (NIST) [2017], Framework for Improving Critical Infrastructure Cybersecurity, Draft Version 1.1.

[4]  Glenn S. Gerstell, NSA General Counsel, Confronting the Cybersecurity Challenge, Keynote address at the 2017 Law, Ethics and National Security Conference at Duke Law School, February 25, 2017.

[5]  Allan Friedman and P.W. Singer, “Cult of the Cyber Offensive,” Foreign Policy, January 15, 2014.

[6]  James M. Olson, The Ten Commandments of Counterintelligence, 2007.

[7]  Norman, D. A. (2010). “When Security Gets in the Way.” Interactions, volume 16, issue 6.

[8]  President Barack Obama [2016], Executive Order—Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities.

[9]  Alex Johnson [2016], “US Will ‘Take Action’ on Russian Hacking, Obama Promises,” NBC News.

[10]  Evan Perkoski and Michael Poznansky [2016], “An Eye for an Eye: Deterring Russian Cyber Intrusions,” War on the Rocks.

[11]  Defense Science Board [2017], Task Force on Cyber Deterrence.
