Assessment of the Role of Cyber Power in Interstate Conflict

Eric Altamura is a graduate student in the Security Studies Program at Georgetown University’s School of Foreign Service. He previously served for four years on active duty as an armor officer in the United States Army.  He regularly writes for Georgetown Security Studies Review and can be found on Twitter @eric_senlu.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the Role of Cyber Power in Interstate Conflict

Date Originally Written:  May 05, 2018 / Revised for Divergent Options July 14, 2018.

Date Originally Published:  September 17, 2018.

Summary:  The targeting of computer networks and digitized information during war can prevent escalation by providing an alternative means for states to create the strategic effects necessary to accomplish limited objectives, thereby bolstering the political viability of the use of force as a lever of state power.

Text:  Prussian General and military theorist Carl von Clausewitz wrote that, in reality, a belligerent uses "no greater force," and sets himself "no greater military aim, than would be sufficient for the achievement of his political purpose." State actors, thus far, have opted to limit cyberattacks in size and scope to the specific political objectives they pursue when targeting information. This limiting occurs because, as warfare approaches its unlimited form in cyberspace, computer network attacks increasingly affect the physical domain in areas where societies have become reliant upon IT systems for everyday functions. Many government and corporate network servers host data from industrial control systems (ICS) or supervisory control and data acquisition (SCADA) systems that control power generation, utilities, and virtually all other public services. Broader attacks on an adversary's networks consequently affect the populations supported by these systems, so the impact of an attack goes beyond simply denying an opponent the ability to communicate through digital networks.

At some point, a threshold exists beyond which it becomes more practical for states to target an adversary's physical assets directly rather than through information systems. Unlimited cyberattacks on infrastructure would come close to replicating warfare in its total form, in which the goal is to fully disarm an opponent of its means to generate resistance and states become willing to expend far greater resources and effort to accomplish their objectives. In this case, cyber power decreases in utility relative to the use of physical munitions (i.e., bullets and bombs) as the scale of warfare increases, mainly due to the lower probability of producing enduring effects in cyberspace. As such, the targeting and attacking of an opponent's digital communication networks tends to occur in a more limited fashion, because alternative levers of state power provide more reliable solutions as warfare nears its absolute form. In other words, cyberspace offers far more value to states seeking to accomplish limited political objectives than to those waging total war against an adversary.

To understand how actors attack computer systems and networks to accomplish limited objectives during war, one must first identify what states actually seek to accomplish in cyberspace. Just as the prominent British naval historian Julian Corbett explains that command of the sea does not entail “the conquest of water territory,” states do not use information technology for the purpose of conquering the computer systems and supporting infrastructure that comprise an adversary’s information network. Furthermore, cyberattacks do not occur in isolation from the broader context of war, nor do they need to result in the total destruction of the enemy’s capabilities to successfully accomplish political objectives. Rather, the tactical objective in any environment is to exploit the activity that takes place within it – in this case, the communication of information across a series of interconnected digital networks – in a way that provides a relative advantage in war. Once the enemy’s communication of information is exploited, and an advantage achieved, states can then use force to accomplish otherwise unattainable political objectives.

Achieving such an advantage requires targeting the key functions and assets in cyberspace that enable states to accomplish political objectives. Italian General Giulio Douhet, an airpower theorist, describes command of the air as, “the ability to fly against an enemy so as to injure him, while he has been deprived of the power to do likewise.” Whereas airpower theorists propose targeting airfields alongside destroying airplanes as ways to deny an adversary access to the air, a similar concept prevails with cyber power. To deny an opponent the ability to utilize cyberspace for its own purposes, states can either attack information directly or target the means by which the enemy communicates its information. Once an actor achieves uncontested use of cyberspace, it can subsequently control or manipulate information for its own limited purposes, particularly by preventing the escalation of war toward its total form.

More specifically, the ability to communicate information while preventing an adversary from doing so has a limiting effect on warfare for three reasons. First, access to information through networked communications systems provides a decisive advantage to military forces by allowing for "analyses and synthesis across a variety of domains," enabling rapid and informed decision-making at all echelons. The greater a decision advantage one military force has over another, the less costly military action becomes. Second, the ubiquity of networked information technologies creates an alternative way for actors to affect targets that would otherwise be politically, geographically, or normatively infeasible to strike with physical munitions. Finally, actors can mask their activities in cyberspace, which makes attribution difficult. This added layer of ambiguity enables face-saving measures by opponents, who can opt not to respond to attacks overtly without necessarily appearing weak.

In essence, cyber power has become particularly useful for states as a tool for preventing conflict escalation, as an opponent’s ability to respond to attacks becomes constrained when denied access to communication networks. Societies’ dependence on information technology and resulting vulnerability to computer network attacks continues to increase, indicating that interstate violence may become much more prevalent in the near term if aggressors can use cyberattacks to decrease the likelihood of escalation by an adversary.


Endnotes:

[1] von Clausewitz, C. (1976). On War. (M. Howard, & P. Paret, Trans.) Princeton: Princeton University Press.

[2] United States Computer Emergency Readiness Team. (2018, March 15). Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors. (United States Department of Homeland Security) Retrieved May 1, 2018, from https://www.us-cert.gov/ncas/alerts/TA18-074A

[3] Fischer, E. A. (2016, August 12). Cybersecurity Issues and Challenges: In Brief. Retrieved May 1, 2018, from https://fas.org/sgp/crs/misc/R43831.pdf

[4] Corbett, J. S. (2005, February 16). Some Principles of Maritime Strategy. (S. Shell, & K. Edkins, Eds.) Retrieved May 2, 2018, from The Project Gutenberg: http://www.gutenberg.org/ebooks/15076

[5] Ibid.

[6] Douhet, G. (1942). The Command of the Air. (D. Ferrari, Trans.) New York: Coward-McCann.

[7] Singer, P. W., & Friedman, A. (2014). Cybersecurity and Cyberwar: What Everyone Needs to Know. New York: Oxford University Press.

[8] Boyd, J. R. (2010, August). The Essence of Winning and Losing. (C. Richards, & C. Spinney, Eds.) Atlanta.


Assessment of the North Korean Cyberattack on Sony Pictures

Emily Weinstein is a Research Analyst at Pointe Bello and a current M.A. candidate in Security Studies at Georgetown University.  Her research focuses on Sino-North Korean relations, foreign policy, and military modernization.  She can be found on Twitter @emily_sw1.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the North Korean Cyberattack on Sony Pictures

Date Originally Written:  July 11, 2018.

Date Originally Published:  August 20, 2018.

Summary:  The 2014 North Korean cyberattack on Sony Pictures shocked the world into realizing that a North Korean cyber threat truly existed.  Prior to 2014, what little information existed on North Korea's cyber capabilities was largely dismissed, with analysts citing poor domestic conditions as evidence of cyber ineptitude.  However, the impressive nature of the Sony attack was instrumental in changing the global understanding of Kim Jong-un and his regime's daring.

Text:  On November 24, 2014 Sony employees discovered a massive cyber breach after an image of a red skull appeared on computer screens company-wide, displaying a warning that threatened to reveal the company’s secrets.  That same day, more than 7,000 employees turned on their computers to find gruesome images of the severed head of Sony’s chief executive, Michael Lynton[1].  These discoveries forced the company to shut down all computer systems, including those in international offices, until the incident was further investigated.  What was first deemed nothing more than a nuisance was later revealed as a breach of international proportions.  Since this incident, the world has noted the increasing prevalence of large-scale digital attacks and the dangers they pose to both private and public sector entities.

According to the U.S. Computer Emergency Readiness Team, the primary malware used in this case was a Server Message Block (SMB) Worm Tool, otherwise known as SVCH0ST.EXE.  An SMB worm is usually equipped with five components: a listening implant, lightweight backdoor, proxy tool, destructive hard drive tool, and a destructive target cleaning tool[2].  The worm spreads throughout the infected network via a brute force authentication attack, a trial-and-error method used to obtain information such as a user password or personal identification number.  The worm then connects to command-and-control infrastructure, where it begins its damage, typically copying malware (software intended to damage or disable computers and computer systems) to victim or administrator systems via the network sharing process.  Once these tasks are complete, the worm executes the malware using remotely scheduled tasks[3].
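
The propagation mechanics described above also suggest a simple defensive signature: a burst of failed authentication attempts from one host against another.  The sketch below is illustrative only and is not drawn from the US-CERT alert; the log events, window size, and threshold are hypothetical, and it demonstrates detection of the brute-force pattern rather than anything about the worm itself.

```python
from collections import defaultdict, deque

# Hypothetical authentication log: (timestamp_seconds, source_host, target_host, success)
EVENTS = [
    (0, "10.0.0.5", "fileserver", False),
    (2, "10.0.0.5", "fileserver", False),
    (3, "10.0.0.5", "fileserver", False),
    (4, "10.0.0.5", "fileserver", False),
    (5, "10.0.0.5", "fileserver", False),
    (6, "10.0.0.5", "fileserver", True),   # success after repeated failures
    (30, "10.0.0.9", "fileserver", False),
]

WINDOW_SECONDS = 60     # sliding window length (illustrative threshold)
FAILURE_THRESHOLD = 5   # failed attempts within the window that trigger an alert


def detect_brute_force(events):
    """Return (source, target) pairs whose failed logins meet the threshold."""
    recent_failures = defaultdict(deque)  # (source, target) -> failure timestamps
    alerts = set()
    for timestamp, source, target, success in sorted(events):
        if success:
            continue
        window = recent_failures[(source, target)]
        window.append(timestamp)
        # Discard failures that have aged out of the sliding window.
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= FAILURE_THRESHOLD:
            alerts.add((source, target))
    return alerts


if __name__ == "__main__":
    for source, target in detect_brute_force(EVENTS):
        print(f"Possible brute-force authentication attack: {source} -> {target}")
```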

This type of malware is highly destructive.  If an organization is infected, it is likely to experience massive impacts on daily operations, including the loss of intellectual property and the disruption of critical internal systems[4].  In Sony's case, at the individual level, hackers obtained and leaked to the general public personal and embarrassing information about Sony personnel, along with sensitive or controversial material from private Sony emails.  At the company level, hackers stole information ranging from contracts and salary lists to budget information and movie plans, including five entire yet-to-be-released movies.  Moreover, Sony's internal data centers had been wiped clean and 75 percent of the servers had been destroyed[5].

The hack was widely understood as retaliation for the release of Sony's movie The Interview, a comedy depicting U.S. journalists' plan to assassinate North Korean leader Kim Jong-un.  A group of hackers who self-identified as the "Guardians of Peace" (GOP) initially took responsibility for the attack; however, attribution remained unsettled, as experts had a difficult time determining the connections and sponsorship of the GOP hacker group.  In December 2014, then-Federal Bureau of Investigation (FBI) Director James Comey announced that the U.S. government believed the North Korean regime was behind the attack, noting that the Sony hackers had failed to use proxy servers to mask the origin of their attack, revealing Internet Protocol (IP) addresses that the FBI knew to be used exclusively by North Korea[6].

Aside from Director Comey’s statements, other evidence exists that suggests North Korea’s involvement.  For instance, the type of malware deployed against Sony utilized methods similar to malware that North Korean actors had previously developed and used.  Similarly, the computer-wiping software used against Sony was also used in a 2013 attack against South Korean banks and media outlets.  However, most damning of all was the discovery that the malware was built on computers set to the Korean language[7].

As for motivation, experts argue that the hack was executed by the North Korean government in an attempt to preserve the image of Kim Jong-un, as protecting the leader's image is a chief political objective of North Korea's cyber program.  Sony's The Interview infantilized Kim Jong-un and disparaged his leadership skills, portraying him as an inept, ruthless, and selfish leader, while poking fun at him by depicting him singing Katy Perry's "Firework" while shooting off missiles.  Kim Jong-un himself has declared that "Cyberwarfare, along with nuclear weapons and missiles, is an 'all-purpose sword'[8]," so it is not surprising that he would use it to protect his own reputation.

The biggest takeaway from the Sony breach is arguably the U.S. government's change in attitude towards North Korean cyber capabilities.  In the years leading up to the attack, U.S. analysts were quick to dismiss North Korea's cyber potential, citing its isolationist tactics, struggling economy, and lack of modernization as rationale for this judgement.  However, following this large-scale attack on a prominent U.S. company, the U.S. government has been forced to rethink how it views the Hermit Regime's cyber capabilities.  Former National Security Agency Deputy Director Chris Inglis argues that cyber is a tailor-made instrument of power for the North Korean regime, thanks to its low cost of entry, asymmetric nature, and degree of anonymity and stealth[9].  Indeed, the North Korean cyber threat has crept up on the U.S., and its intelligence apparatus must continue to work to both counter and better understand North Korea's cyber capabilities.


Endnotes:

[1] Cieply, M. and Barnes, B. (December 30, 2014). Sony Cyberattack, First a Nuisance, Swiftly Grew Into a Firestorm. Retrieved July 7, 2018, from https://www.nytimes.com/2014/12/31/business/media/sony-attack-first-a-nuisance-swiftly-grew-into-a-firestorm-.html

[2] Lennon, M. (December 19, 2014). Hackers Used Sophisticated SMB Worm Tool to Attack Sony. Retrieved July 7, 2018, from https://www.securityweek.com/hackers-used-sophisticated-smb-worm-tool-attack-sony

[3] Doman, C. (January 19, 2015). Destructive malware—a close look at an SMB worm tool. Retrieved July 7, 2018, from http://pwc.blogs.com/cyber_security_updates/2015/01/destructive-malware.html

[4] United States Computer Emergency Readiness Team (December 19, 2014). Alert (TA14-353A) Targeted Destructive Malware. Retrieved July 7, 2018, from https://www.us-cert.gov/ncas/alerts/TA14-353A

[5] Cieply, M. and Barnes, B. (December 30, 2014). Sony Cyberattack, First a Nuisance, Swiftly Grew Into a Firestorm. Retrieved July 7, 2018, from https://www.nytimes.com/2014/12/31/business/media/sony-attack-first-a-nuisance-swiftly-grew-into-a-firestorm-.html

[6] Greenberg, A. (January 7, 2015). FBI Director: Sony’s ‘Sloppy’ North Korean Hackers Revealed Their IP Addresses. Retrieved July 7, 2018, from https://www.wired.com/2015/01/fbi-director-says-north-korean-hackers-sometimes-failed-use-proxies-sony-hack/

[7] Pagliery, J. (December 29, 2014). What caused Sony hack: What we know now. Retrieved July 8, 2018, from http://money.cnn.com/2014/12/24/technology/security/sony-hack-facts/

[8] Sanger, D., Kirkpatrick, D., and Perlroth, N. (October 15, 2017). The World Once Laughed at North Korean Cyberpower. No More. Retrieved July 8, 2018, from https://mobile.nytimes.com/2017/10/15/world/asia/north-korea-hacking-cyber-sony.html

[9] Ibid.


Options to Manage the Risks of Integrating Artificial Intelligence into National Security and Critical Industry Organizations

Lee Clark is a cyber intelligence analyst.  He holds an MA in intelligence and international security from the University of Kentucky’s Patterson School.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  What are the potential risks of integrating artificial intelligence (AI) into national security and critical infrastructure organizations and potential options for mitigating these risks?

Date Originally Written:  May 19, 2018.

Date Originally Published:  July 2, 2018.

Author and / or Article Point of View:  The author is currently an intelligence professional focused on threats to critical infrastructure and the private sector.  This article will use the U.S. Department of Homeland Security’s definition of “critical infrastructure,” referring to 16 public and private sectors that are deemed vital to the U.S. economy and national functions.  The designated sectors include financial services, emergency response, food and agriculture, energy, government facilities, defense industry, transportation, critical manufacturing, communications, commercial facilities, chemical production, civil nuclear functions, dams, healthcare, information technology, and water/wastewater management[1].  This article will examine some broad options to mitigate some of the most prevalent non-technical risks of AI integration, including legal protections and contingency planning.

Background:  The benefits of incorporating AI into the daily functions of an organization are widely championed in both the private and public sectors.  The technology has the capability to revolutionize facets of government and private sector functions like record keeping, data management, and customer service, for better or worse.  Bringing AI into the workplace has significant risks on several fronts, including privacy/security of information, record keeping/institutional memory, and decision-making.  Additionally, the technology carries a risk of backlash over job losses as automation increases in the global economy, especially for more skilled labor.  The national security and critical industry spheres are not facing an existential threat, but these are risks that cannot be dismissed.

Significance:  Real-world examples of these concerns have been reported in open source with clear implications for major corporations and national security organizations.  In terms of record keeping/surveillance-related issues, one need only look to recent court cases in which authorities subpoenaed the records of an Amazon Alexa, an appliance that acts as a digital personal assistant via a rudimentary AI system.  This subpoena situation becomes especially concerning to users, given recent reports of Alexa devices being converted into spying tools[2].  Critical infrastructure organizations, especially defense, finance, and energy companies, exist within complex legal frameworks that involve international laws and security concerns, making legal protections of AI data all the more vital.

In the case of issues involving decision-making and information security, the dangers are no less severe.  AIs are susceptible to a variety of methods that seek to manipulate decision-making, including social engineering and, more specifically, disinformation efforts.  Perhaps the most evident case of social engineering against an AI is an instance in which Microsoft’s AI endorsed genocidal statements after a brief conversation with users on Twitter[3].  If it is possible to convince an AI to support genocide, it is not difficult to imagine the potential to convince it to divulge state secrets or turn over financial information with some key information fed in a meaningful sequence[4].  In another public instance, an Amazon Echo device recently recorded a private conversation in an owner’s home and sent the conversation to another user without requesting permission from the owner[5].  Similar instances are easy to foresee in a critical infrastructure organization such as a nuclear energy plant, in which an AI may send proprietary information to an uncleared user.

AI decisions also have the capacity to surprise the developers and engineers tasked with maintenance, which could present problems of data recovery and control.  For instance, developers discovered that Facebook's AI chatbots had begun communicating in a modified, more efficient version of the language they were trained on, essentially creating their own dialect, which raised transparency concerns.  Losing the ability to examine and assess such decisions presents problems for replicating processes and maintaining a system[6].

AI integration into industry also carries a significant risk of backlash from workers.  Economists and labor scholars have been discussing the impacts of automation and AI on employment and labor in the global economy.  This discussion is not merely theoretical, as evidenced by leaders of major tech companies making public remarks in support of a basic income, as automation will likely replace a significant portion of the labor market in the coming decades[7].

Option #1:  Leaders in national security and critical infrastructure organizations work with internal legal teams to develop legal protections for organizations while lobbying for legislation to secure legal privileges for information stored by AI systems (perhaps resembling attorney-client privilege or spousal privileges).

Risk:  Legal teams may lack the technical knowledge to foresee some vulnerabilities related to AI.

Gain:  Option #1 proactively builds liability shields, protections, non-disclosure agreements, and other common legal tools to anticipate needs for AI-human interactions.

Option #2:  National security and critical infrastructure organizations build task forces to plan protocols and define a clear AI vision for organizations.

Risk:  In addition to common pitfalls of group work like bandwagoning and groupthink, this option is vulnerable to insider threats such as sabotage or espionage attempts.  There is also a risk that such groups may develop plans that are too rigid or short-sighted to be adaptive in unforeseen emergencies.

Gain:  Task forces can develop strategies and contingency plans for when emergencies arise.  Such emergencies could include hacks, data breaches, sabotage by rogue insiders, technical/equipment failures, or side effects of actions taken by an AI in a system.

Option #3:  Organization leaders work with intelligence and information security professionals to try to make AI more resilient against hacker methods, including distributed denial-of-service attacks, social engineering, and crypto-mining.

Risk:  Potential to “over-secure” systems, resulting in loss of efficiency or overcomplicating maintenance processes.

Gain:  Reduced risk of hacks or other attacks from malicious actors outside of organizations.

Other Comments:  None.

Recommendation: None.


Endnotes:

[1] DHS. (2017, July 11). Critical Infrastructure Sectors. Retrieved May 28, 2018, from https://www.dhs.gov/critical-infrastructure-sectors

[2] Boughman, E. (2017, September 18). Is There an Echo in Here? What You Need to Consider About Privacy Protection. Retrieved May 19, 2018, from https://www.forbes.com/sites/forbeslegalcouncil/2017/09/18/is-there-an-echo-in-here-what-you-need-to-consider-about-privacy-protection/

[3] Price, R. (2016, March 24). Microsoft Is Deleting Its AI Chatbot’s Incredibly Racist Tweets. Retrieved May 19, 2018, from http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3

[4] Osaba, O. A., & Welser, W., IV. (2017, December 06). The Risks of AI to Security and the Future of Work. Retrieved May 19, 2018, from https://www.rand.org/pubs/perspectives/PE237.html

[5] Shaban, H. (2018, May 24). An Amazon Echo recorded a family’s conversation, then sent it to a random person in their contacts, report says. Retrieved May 28, 2018, from https://www.washingtonpost.com/news/the-switch/wp/2018/05/24/an-amazon-echo-recorded-a-familys-conversation-then-sent-it-to-a-random-person-in-their-contacts-report-says/

[6] Bradley, T. (2017, July 31). Facebook AI Creates Its Own Language in Creepy Preview Of Our Potential Future. Retrieved May 19, 2018, from https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/

[7] Kharpal, A. (2017, February 21). Tech CEOs Back Call for Basic Income as AI Job Losses Threaten Industry Backlash. Retrieved May 19, 2018, from https://www.cnbc.com/2017/02/21/technology-ceos-back-basic-income-as-ai-job-losses-threaten-industry-backlash.html


An Assessment of Information Warfare as a Cybersecurity Issue

Justin Sherman is a sophomore at Duke University double-majoring in Computer Science and Political Science, focused on cybersecurity, cyberwarfare, and cyber governance. Justin conducts technical security research through Duke’s Computer Science Department; he conducts technology policy research through Duke’s Sanford School of Public Policy; and he’s a Cyber Researcher at a Department of Defense-backed, industry-intelligence-academia group at North Carolina State University focused on cyber and national security – through which he works with the U.S. defense and intelligence communities on issues of cybersecurity, cyber policy, and national cyber strategy. Justin is also a regular contributor to numerous industry blogs and policy journals.

Anastasios Arampatzis is a retired Hellenic Air Force officer with over 20 years’ worth of experience in cybersecurity and IT project management. During his service in the Armed Forces, Anastasios was assigned to various key positions in national, NATO, and EU headquarters, and he’s been honored by numerous high-ranking officers for his expertise and professionalism, including a nomination as a certified NATO evaluator for information security. Anastasios currently works as an informatics instructor at AKMI Educational Institute, where his interests include exploring the human side of cybersecurity – psychology, public education, organizational training programs, and the effects of cultural, cognitive, and heuristic biases.

Paul Cobaugh is the Vice President of Narrative Strategies, a coalition of scholars and military professionals involved in the non-kinetic aspects of counter-terrorism, defeating violent extremism, irregular warfare, large-scale conflict mediation, and peace-building. Paul recently retired from a distinguished career in U.S. Special Operations Command, and his specialties include campaigns of influence and engagement with indigenous populations.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of Information Warfare as a Cybersecurity Issue

Date Originally Written:  March 2, 2018.

Date Originally Published:  June 18, 2018.

Summary:  Information warfare is not new, but the evolution of cheap, accessible, and scalable cyber technologies greatly enables it.  The U.S. Department of Justice's February 2018 indictment of the Internet Research Agency – one of the Russian groups behind disinformation in the 2016 American election – establishes that information warfare is not just a global problem from the national security and fact-checking perspectives, but a cybersecurity issue as well.

Text:  On February 16, 2018, U.S. Department of Justice Special Counsel Robert Mueller indicted 13 Russians for interfering in the 2016 United States presidential election[1]. Beyond the important legal and political ramifications of this event, this indictment should make one thing clear: information warfare is a cybersecurity issue.

It shouldn't be surprising that Russia created fake social media profiles to spread disinformation on sites like Facebook.  This tactic had been demonstrated for some time, and the Russians have done this in numerous other countries as well[2].  Instead, what's noteworthy about the investigation's findings is that Russian hackers also stole the identities of real American citizens to spread disinformation[3].  Whether the Russian hackers compromised accounts through technical hacking, social engineering, or other means, this technique proved remarkably effective; masquerading as American citizens lent significantly greater credibility to trolls (who purposely sow discord on the Internet) and bots (automated information-spreaders) that pushed Russian narratives.

Information warfare has traditionally been viewed as an issue of fact-checking or information filtering, which it certainly still is today.  Nonetheless, traditional information warfare was conducted before the advent of modern cyber technologies, which have greatly changed the ways in which information campaigns are executed.  Whereas historical campaigns took time to spread information and did so through in-person speeches or printed news articles, social media enables instantaneous, low-cost, and scalable access to the world’s populations, as does the simplicity of online blogging and information forgery (e.g., using software to manufacture false images).  Those looking to wage information warfare can do so with relative ease in today’s digital world.

The effectiveness of modern information warfare, then, is heavily dependent upon the security of these technologies and platforms – or, in many cases, the total lack thereof.  In this situation, the success of the Russian hackers was propelled by the average U.S. citizen’s ignorance of basic cyber “hygiene” rules, such as strong password creation.  If cybersecurity mechanisms hadn’t failed to keep these hackers out, Russian “agents of influence” would have gained access to far fewer legitimate social media profiles – making their overall campaign significantly less effective.

To be clear, this is not to blame the campaign's effectiveness on specific end users; with over 100,000 Facebook accounts hacked every single day, we can imagine it wouldn't be difficult for any other country to use this same technique[4].  However, it's important to understand the relevance of cybersecurity here.  User access control, strong passwords, mandated multi-factor authentication, fraud detection, and identity theft prevention were just some of the cybersecurity best practices that failed to combat Russian disinformation, just as fact-checking mechanisms and counter-narrative strategies did.
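
As a concrete illustration of one of the controls named above, the sketch below generates a time-based one-time password, the common second factor standardized in RFC 6238, using only the Python standard library.  This is a minimal sketch, not a production implementation: the Base32 secret shown is a made-up example, and a real service would provision a per-user secret and accept codes from adjacent time steps to tolerate clock drift.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238) for a Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # number of 30-second steps since the epoch
    message = struct.pack(">Q", counter)            # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(key, message, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


if __name__ == "__main__":
    # Example secret only; never hard-code real secrets.
    print(totp("JBSWY3DPEHPK3PXP"))
```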

These technical and behavioral failures didn't just compromise the integrity of information, a pillar of cybersecurity; they also made the campaign far more effective.  As the hackers planned to exploit the polarized election environment, access to American profiles made this far easier: by manipulating and distorting information to make it seem legitimate (i.e., opinions coming from actual Americans), these Russians undermined law enforcement operations, election processes, and more.  We are quick to ask: how much of this information was correct and how much of it wasn't?  Who can tell whether the information originated from uncompromised, credible sources or from credible sources that had actually been hacked?

However, we should also consider another angle: what if the hackers hadn't won access to those American profiles in the first place?  What if the hackers had been forced to rely almost entirely on fraudulent accounts, which are more prone to detection by Facebook's algorithms?  It is for these reasons that information warfare is such a critical cybersecurity concern, and why Russian information warfare campaigns of the past cannot be equally compared to the digital information wars of the modern era.

The global cybersecurity community can take an even greater, active role in addressing the account access component of disinformation.  Additionally, those working on information warfare and other narrative strategies could leverage cybersecurity for defensive operations.  Without a coordinated and integrated effort between these two sectors of the cyber and security communities, the inability to effectively combat disinformation will only continue as false information penetrates our social media feeds, news cycles, and overall public discourse.

More than ever, a demand signal is present to educate the world’s citizens on cyber risks and basic cyber “hygiene,” and to even mandate the use of multi-factor authentication, encrypted Internet connections, and other critical security features.  The security of social media and other mass-content-sharing platforms has become an information warfare issue, both within respective countries and across the planet as a whole.  When rhetoric and narrative can spread (or at least appear to spread) from within, the effectiveness of a campaign is amplified.  The cybersecurity angle of information warfare, in addition to the misinformation, disinformation, and rhetoric itself, will remain integral to effectively combating the propaganda and narrative campaigns of the modern age.


Endnotes:

[1] United States of America v. Internet Research Agency LLC, Case 1:18-cr-00032-DLF. Retrieved from https://www.justice.gov/file/1035477/download

[2] Wintour, P. (2017, September 5). West Failing to Tackle Russian Hacking and Fake News, Says Latvia. Retrieved from https://www.theguardian.com/world/2017/sep/05/west-failing-to-tackle-russian-hacking-and-fake-news-says-latvia

[3] Greenberg, A. (2018, February 16). Russian Trolls Stole Real US Identities to Hide in Plain Sight. Retrieved from https://www.wired.com/story/russian-trolls-identity-theft-mueller-indictment/

[4] Callahan, M. (2015, March 1). Big Brother 2.0: 160,000 Facebook Pages are Hacked a Day. Retrieved from https://nypost.com/2015/03/01/big-brother-2-0-160000-facebook-pages-are-hacked-a-day/


Assessment of the Threat Posed by the Turkish Cyber Army

Marita La Palm is a graduate student at American University where she focuses on terrorism, countering violent extremism, homeland security policy, and cyber domain activities.  She can be found on Twitter at maritalp.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessment of the Threat Posed by the Turkish Cyber Army

Date Originally Written:  March 25, 2018.

Date Originally Published:  April 9, 2018.

Summary:  The Turkish-sympathetic hacker group known as the Turkish Cyber Army has changed tactics from seizing and defacing websites to a Twitter phishing campaign that has come remarkably close to the President of the United States.

Text:  The Turkish Cyber Army (Ay Yildiz Tim) attempted to compromise U.S. President Donald Trump’s Twitter account in January of 2018 as part of a systematic cyber attack accompanying the Turkish invasion of Syria.  They were not successful, but they did seize control of various well-known accounts and the operation is still in progress two months later.

Although the Turkish Cyber Army claims to date back to a 2002 foundation in New Zealand, it first appears in hacking annals on October 2, 2006.  Since then, the group has taken over vulnerable websites in Kenya, the European Union, and the United States[1].  As of the summer of 2017, the Turkish Cyber Army changed tactics to focus on Twitter phishing, in which the group uses the compromised Twitter account of a trustworthy source to bait a target into surrendering log-in credentials[2].  They do this by sending a direct message from a familiar account they control telling the desired victim to click on a link and enter their log-in information on a page that looks like Twitter but actually records their username and password.  Upon accessing the victim's account, the hackers rapidly make pro-Turkish posts, download the message history, and send new phishing attacks through the new account, all within a few hours.  The Turkish Cyber Army claims to have downloaded the targets' messages, apparently both for intelligence purposes and to embarrass the targets by publicly releasing the messages[3].  Oddly enough, the group has yet to release the private messages it acquired in spite of its threats to do so.  The group is notable both for its beginner-level sophistication when compared to state hackers such as Fancy Bear and for the way it broadcasts every hack it makes.
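
The credential-harvesting pages described above work because victims rarely check where a login form actually lives.  As a minimal, hypothetical sketch (not a tool attributed to any party in this case), the check below flags links whose hostname is not a known Twitter domain or that are not served over HTTPS:

```python
from urllib.parse import urlparse

# Assumed allow-list for illustration; a real deployment would maintain this centrally.
LEGITIMATE_HOSTS = {"twitter.com", "www.twitter.com", "mobile.twitter.com"}


def looks_like_twitter_login(url: str) -> bool:
    """Return True only if the URL uses HTTPS and points at a known Twitter hostname."""
    parsed = urlparse(url)
    host = (parsed.hostname or "").lower()
    return parsed.scheme == "https" and host in LEGITIMATE_HOSTS


if __name__ == "__main__":
    samples = [
        "https://twitter.com/login",              # legitimate
        "https://twitter.com.example.net/login",  # lookalike subdomain trick
        "http://twltter-login.example.net/",      # typosquat over plain HTTP
    ]
    for url in samples:
        verdict = "ok" if looks_like_twitter_login(url) else "suspicious"
        print(f"{verdict}: {url}")
```

A production defense would layer on certificate validation and URL-reputation feeds, but even this simple host comparison catches the lookalike-domain trick the group relies on.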

The first documented victim of the 2018 operation was Syed Akbaruddin, Indian Permanent Representative to the United Nations.  Before the attack on Akbaruddin, the hackers likely targeted Kurdish accounts in a similar manner[4].  Since these initial attacks, the Turkish Cyber Army has moved steadily closer to accounts followed by President Trump and even managed to direct message him on Twitter[5].  In January 2018, the group phished multiple well-known Western public figures, such as television personality Greta van Susteren and the head of the World Economic Forum, Børge Brende.  It so happened that van Susteren and Eric Bolling, another victim, are two of the only 45 accounts followed by President Trump.  From Bolling's and van Susteren's accounts, the hackers were able to send messages to Trump.  Two months later, the Turkish Cyber Army continued to operate on Twitter, but now primarily with a focus on Indian accounts.  The group took over Air India's Twitter account on March 15, 2018.  However, the aftereffects of its Western efforts can still be seen: on March 23, 2018, Alan Murray, Chief Content Officer of Time, Inc. and President of Fortune, tweeted, "I was locked out of Twitter for a month after being hacked by the Turkish cyber army…"  Meanwhile, the Turkish Cyber Army maintains a large and loud Twitter presence with very little regulation, considering it operates as an openly criminal organization on the platform.

President Trump's personal Twitter account was also a target for the Turkish Cyber Army.  This is not a secret account known only to a few.  President Trump's account name is public, and his password is all that is needed to post unless he has set up two-factor authentication.  Trump uses his account to express his personal opinions, and since some of his tweets have had high shock value, a fake message intended to disrupt might go unquestioned.  It is fair to assume that multiple groups have been running password crackers against President Trump's account without pause since the inauguration.  It is only a matter of time before a foreign intelligence service or other interested party manages to access President Trump's direct messages, make provocative statements from his account that could threaten the financial sector or national security, and from there go on to access more sensitive information.  While the Turkish Cyber Army broadcasts its intrusions from the compromised accounts, more sophisticated hacking teams would be in and out without a word and might have already done so.  The most dangerous hackers would maintain that access for the day it is useful and unexpected.

While nothing immediately indicates that this group is a Turkish government organization, its members are either supporters of the current government or work for it.  Both reporter Joseph Cox and the McAfee report noted that the group's code was written in Turkish[6].  Almost a hundred actual or bot accounts carry some identifier of the Turkish Cyber Army, none of which appear to be censored by Twitter.  Of particular interest in the group's history are the attacks on Eren Erdem, a deputy of the Turkish political party Cumhuriyet Halk Partisi (CHP), alleging his connections with Fethullah Gulen, as well as the 2006 and possible 2017 attempts to phish Kurdish activists[7].  The Turkish Cyber Army's current operations occurred on the eve of massive Turkish political risk, as the events in Syria could have ended Turkish President Recep Tayyip Erdogan's career had they gone poorly.  Not only did Turkey invade Syria in order to attack troops trained by its North Atlantic Treaty Organization (NATO) ally, the United States, but Turkish representatives had been banned from campaigning in parts of the European Union, and Turkish banks might face a multi-billion dollar fine thanks to the Reza Zarrab case[8].  Meanwhile, both Islamist and Kurdish insurgents appeared emboldened within the country[9].  Turkey had everything to lose, and a cyberattack, albeit not that sophisticated but conducted against high-value targets, was a possibility while the United States appeared undecided as to whom to back: its proxy force or its NATO ally.  In the end, the United States has made efforts to reconcile diplomatically with Turkey since January, and Turkey has saved face.


Endnotes:

[1]  Ayyildiz Tim. (n.d.). Retrieved January 24, 2018, from https://ayyildiz.org/; Turks ‘cyber-leger’ kaapt Nederlandse websites . (2006, October 2). Retrieved January 24, 2018, from https://www.nrc.nl/nieuws/2006/10/02/turks-cyber-leger-kaapt-nederlandse-websites-11203640-a1180482; Terry, N. (2013, August 12). Asbury park’s website taken over by hackers. McClatchy – Tribune Business News; Ministry of transport website hacked. (2014, March 5). AllAfrica.Com. 

[2] Turkish hackers target Sevan Nishanyan’s Twitter account. (2017, July 28). Armenpress News Agency.

[3] Beek, C., & Samani, R. (2018, January 24). Twitter Accounts of US Media Under Attack by Large Campaign. Retrieved January 24, 2018, from https://securingtomorrow.mcafee.com/mcafee-labs/twitter-accounts-of-us-media-under-attack-by-large-campaign/.

[4] #EfrinNotAlone. (2018, January 17). “News that people  @realDonaldTrump followers have been hacked by Turkish cyber army. TCA made an appearance a few days ago sending virus/clickey links to foreigners and my Kurdish/friends. The journalist who have had their accounts hacked in US have clicked the link.”  [Tweet]. https://twitter.com/la_Caki__/status/953572575602462720.

[5] Herreria, C. (2018, January 17). Hackers DM’d Donald Trump With Former Fox News Hosts’ Twitter Accounts. Retrieved March 25, 2018, from https://www.huffingtonpost.com/entry/eric-bolling-greta-van-susteren-twitter-hacked_us_5a5eb17de4b096ecfca88729

[6] Beek, C., & Samani, R. (2018, January 24). Twitter Accounts of US Media Under Attack by Large Campaign. Retrieved January 24, 2018, from https://securingtomorrow.mcafee.com/mcafee-labs/twitter-accounts-of-us-media-under-attack-by-large-campaign/; Joseph Cox. (2018, January 23). “Interestingly, the code of the phishing page is in… Turkish. “Hesabın var mı?”, or “Do you have an account?”.”  [Tweet]. https://twitter.com/josephfcox/status/955861462190383104.

[7] Ayyıldız Tim FETÖnün CHP bağlantısını deşifre etti. (2016, August 27). Retrieved January 24, 2018, from http://www.ensonhaber.com/ayyildiz-tim-fetonun-chp-baglantisini-desifre-etti-2016-08-28.html; Turks ‘cyber-leger’ kaapt Nederlandse websites . (2006, October 2). Retrieved January 24, 2018, from https://www.nrc.nl/nieuws/2006/10/02/turks-cyber-leger-kaapt-nederlandse-websites-11203640-a1180482.

[8] Turkey-backed FSA entered Afrin, Turkey shelling targets. (2018, January 21). BBC Monitoring Newsfile; Turkey blasts Germany, Netherlands for campaign bans. (2017, March 5). BBC Monitoring European; Zaman, A. (2017, December 07). Turkey probes US prosecutor in Zarrab trial twist. Retrieved January 24, 2018, from https://www.al-monitor.com/pulse/originals/2017/11/turkey-probes-reza-zarrab-investigators.html.

[9] Moore, J. (2017, December 28). Hundreds of ISIS fighters are hiding in Turkey, increasing fears of attacks in Europe. Retrieved January 24, 2018, from http://www.newsweek.com/hundreds-isis-fighters-are-hiding-turkey-increasing-fears-europe-attacks-759877; Mandıracı, B. (2017, July 20). Turkey’s PKK Conflict Kills almost 3,000 in Two Years. Retrieved January 24, 2018, from https://www.crisisgroup.org/europe-central-asia/western-europemediterranean/turkey/turkeys-pkk-conflict-kills-almost-3000-two-years.


An Assessment of Violent Extremist Use of Social Media Technologies

Scot A. Terban is a security professional with over 13 years' experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options' content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of Violent Extremist Use of Social Media Technologies

Date Originally Written:  November 9, 2017.

Date Originally Published:  February 5, 2018.

Summary:  The leveraging of social media technologies by violent extremists like Al-Qaeda (AQ) and Daesh has created a road map for others to do the same.  Without a combined effort by social media companies and intelligence and law enforcement organizations, violent extremists and others will continue to operate nearly unchecked on social media platforms and inspire others to acts of violence.

Text:  Following the 9/11 attacks, the U.S. invaded Afghanistan and AQ, the violent extremist organization that launched the attacks, lost ground.  With the loss of ground came an increase in online activity.  In the time before the worldwide embrace of social media, jihadis like Irhabi007 (Younis Tsouli) led AQ hacking operations by breaking into vulnerable web pages and defacing them with AQ propaganda, as well as establishing dead-drop sites for materials others could use.  This method was pioneered by Irhabi007, who was later hunted down by other hackers and finally arrested in 2005[1].  Five years after Tsouli's arrest, Al-Qaeda in the Arabian Peninsula (AQAP) established Inspire Magazine as a way to communicate with its existing followers and "inspire" new ones[2].  Unfortunately for AQAP, creating and distributing an online magazine proved a challenge.

Today, social media platforms such as Twitter, Facebook, VKontakte, and YouTube are the primary means for jihadi extremists to spread the call to jihad as well as sow fear in those they target.  Social media is perfect for connecting people because of the popularity of the platforms, the ease of use and account creation, and the ability to send messages that can reach a large audience.  Daesh uses Twitter and YouTube as its primary means of messaging, not only to sow fear but also for command and control and recruitment.  Daesh sees the benefits of using social media, and its use has paved the way for others.  Even after Twitter and YouTube began to catch on and act against Daesh accounts, it is still easy for Daesh to create new accounts and keep the messages flowing under a new user name followed by a digit.

AQ's loss of terrain, combined with the expansion of social media, set the conditions for a move toward inciting the "far war" over the local struggle, as AQ had framed it before Osama bin Laden was killed.  In fact, the call to the West had been made in Inspire magazine on many occasions.  Inspire even created a section of the magazine on "Open Source Jihad," which was later adopted by Dabiq[3] (Daesh's magazine), but the problem was actually motivating the Western faithful into action.  This paradigm was finally worked out on social media, where recruiters and mouthpieces could talk to potential recruits in real time and work with them to act.

Online messaging by violent extremist organizations has now reached a point of asymmetry where very little energy or money invested on the jihadis' part can produce large returns on investment, as in the incident in Garland, Texas[4].  To AQ, Daesh, and others, it is now clear that social media could be the bedrock of the fight against the West and anywhere else, if others can be incited to act.  This incited activity takes the form of what has been called "Lone Wolf Jihad," which has produced incidents ranging from the Garland shootings to more recent events like the bike path attack in New York City by Sayfullo Saipov, a green card holder in the U.S. from Uzbekistan[5].

The activation of certain individuals to the cause, using the propaganda and manuals put out by the jihadis on social media, makes clear that the medium works: even with all the attempts by companies like Facebook and Twitter to root out and delete accounts, the messaging still gets to those who may act upon it.  The memetic virus of violent extremism has a carrier, and that carrier is social media.  Now, with Russia's leveraging of social media in its campaign against the U.S. electoral system, we are seeing a paradigm shift into larger and more dangerous memetic and asymmetric warfare.

Additionally, with the advent of encryption technologies on social media platforms, the net effect has been to create channels of radicalization, recruitment, and activation over live chats and messages that authorities cannot easily interdict.  This use of encryption and live chats and messages makes the notion of social media as a means of asymmetric warfare even more salient.  The jihadis now have not only a means to reach out to would-be followers, but also constant contact at a distance, where before they would have had to radicalize potential recruits at a physical location.

Expanding this out further, the methodologies that the jihadis have created and used online are now studied by other like-minded groups and can be emulated.  This means that, whatever the bent, a group of like-minded individuals seeking extremist ends can simply sign up and replicate the jihadi model to the same end of activating individuals to action.  We have already started to see this with Russian hybrid warfare, which at a nominal level has activated people in the U.S., such as neo-Nazis, and empowered them to act.

Social media is a boon and a bane depending on its use and its moderation by the companies that create and manage the platforms.  However, with the First Amendment protecting freedom of speech in the U.S., it is hard for companies to delineate what is free speech and what is exhortation to violence.  This is the crux of the issue for companies and governments in the fight against violent extremism on platforms such as YouTube or Twitter.  Social media utilization boils down to terms of service and policing, and until now the companies have not been willing to monitor and take action.  After Russian meddling in the U.S. election, though, social media company attitudes seem to be changing.

Ultimately, the use of social media for extremist ideas and action will always be a problem.  This is not going away, and policing is key.  The challenge lies in working out the details and legal interpretations concerning the balance between what constitutes freedom of speech and what constitutes illegal activity.  The real task will be to see whether algorithms and technical means can help sort between the two.  The battle, however, will never end.  It is my assessment that remediation will require a melding of human intelligence activities and technical means to monitor and interdict those users and feeds that seek to incite violence within the medium.


Endnotes:

[1] Katz, R., & Kern, M. (2006, March 26). Terrorist 007, Exposed. Retrieved November 17, 2017, from http://www.washingtonpost.com/wp-dyn/content/article/2006/03/25/AR2006032500020.html

[2] Zelin, A. Y. (2017, August 14). Inspire Magazine. Retrieved November 17, 2017, from http://jihadology.net/category/inspire-magazine/

[3] Zelin, A. Y. (2016, July 31). Dabiq Magazine. Retrieved November 17, 2017, from http://jihadology.net/category/dabiq-magazine/

[4] Chandler, A. (2015, May 04). A Terror Attack in Texas. Retrieved November 17, 2017, from https://www.theatlantic.com/national/archive/2015/05/a-terror-attack-in-texas/392288/

[5] Kilgannon, C., & Goldstein, J. (2017, October 31). Sayfullo Saipov, the Suspect in the New York Terror Attack, and His Past. Retrieved November 17, 2017, from https://www.nytimes.com/2017/10/31/nyregion/sayfullo-saipov-manhattan-truck-attack.html

 


An Australian Perspective on Identity, Social Media, and Ideology as Drivers for Violent Extremism

Kate McNair has a Bachelor's Degree in Criminology from Macquarie University and is currently pursuing a Master's Degree in Security Studies and Terrorism at Charles Sturt University.  You can follow her on Twitter @kate_amc.  Divergent Options' content does not contain information of any official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Australian Perspective on Identity, Social Media, and Ideology as Drivers for Violent Extremism

Date Originally Written:  December 2, 2017.

Date Originally Published:  January 8, 2018.

Summary:  Countering Violent Extremism (CVE) is a leading initiative of many Western governments to reduce home-grown terrorism and extremism.  Social media, ideology, and identity are just some of the issues that fuel violent extremism for various individuals and groups and are thus areas that CVE must be prepared to address.

Text:  On March 7, 2015, two brothers aged 16 and 17 were arrested on suspicion of attempting to leave Australia through Sydney Airport to fight for the Islamic State[1].  The young boys fooled their parents and forged school letters.  They then presented themselves to Australian Immigration and Border Protection, shortly after purchasing tickets to an unknown Middle Eastern country with a small amount of funds, and claimed to be on their way to visit family for three months.  Later, they were arrested after admitting that they intended to become foreign fighters for the Islamic State.  On October 2, 2015, Farhad Khalil Mohammad Jabar, 15 years old, approached Parramatta police station in Sydney's west and shot civilian police accountant Curtis Cheng in the back[2].  It was later discovered that Jabar had been inspired and influenced by two older men, aged 18 and 22, who manipulated him into becoming a lone wolf attacker and supplied the gun he used to kill the civilian worker.

In November 2016, Parliament passed the Counter-Terrorism Legislation Amendment Bill (No. 1) 2016, stating that "Keeping Australians safe is the first priority of the Turnbull Government, which committed to ensuring Australian law enforcement and intelligence agencies have the tools they need to fight terrorism[3]."  More recently, the Terrorism (Police Powers) Act of 2002 was extensively amended to become the Terrorism Legislation Amendment (Police Powers and Parole) Act of 2017, which gives police more powers during investigations and puts stronger restrictions and requirements on parolees integrating back into society.  Although these governing documents focus on law enforcement and the investigative side of counter-terrorism efforts, in 2014 the Tony Abbott Government implemented a nation-wide initiative called Living Safe Together[4].  Living Safe Together moved away from a law enforcement-centric approach and instead focused on community-based initiatives to address the growing appeal of violent extremist ideologies among young people.

Levi West, a well-known Australian academic in the field of terrorism, has highlighted that the aforementioned individuals have lived their entire lives in a world where the war on terror has existed.  These young men were part of a Muslim minority and grew up witnessing a war that has been painted by some as the West versus Islam.  They were influenced by many voices at school, at work, at social events, and at home[5].  This leads to the question of whether these young individuals are driven to violent extremism by ideology or whether they are trying to find their identity and purpose in the world.

For young adults in Australia, social media is a strong driver of violent extremism.  Young adults are vulnerable and uncertain about many aspects of their lives.  When people feel uncertain about who they are, or about the accuracy of their perceptions, beliefs, and attitudes, they seek out people similar to themselves in order to make comparisons that largely confirm the veracity and appropriateness of their own attitudes.  Social media is being weaponised by violent extremist organizations such as the Islamic State.  Social media and other communicative peer-to-peer sharing platforms are ideal for facilitating virtual learning and virtual interaction between young adults and violent extremists.  While young adults who interact within these online forums may be less likely to engage in a lone wolf attack, the forums can reinforce prior beliefs and slowly manipulate people over time.

Is it violent extremist ideology that is inspiring young individuals to become violent extremists and participate in terrorism and political violence?  Decentralized command and control within violent extremist organizations, also referred to as leaderless resistance, is a technique to inspire young individuals to take it upon themselves, with no direction from leadership, to commit attacks against western governments and communities[6].  In the case of the Islamic State, whose ideology is already known to be extreme and violent, its interpretation and use of leaderless resistance is no less so.  Decentralization has been implemented internationally as the Islamic State continues to provide information, through sites such as Insider, on how to acquire the materiel needed to conduct attacks.  Not only does the Islamic State provide training and skill information, it encourages others to spread its ideology through lone wolf attacks and glorifies these acts as a divine right.  Combined with the vulnerability of young individuals, the strategy of decentralized command and control paired with an extreme ideology has been successful thus far.  Given this success, CVE’s effectiveness likely depends on it being equally focused on identity as a driver of violent extremism, in addition to extreme ideology, and on the strategies and initiatives that can prevent individuals from becoming violent extremists.

The leading CVE strategies have focused on social media, social cohesion, and identity.  Policy leaders and academics have identified that young individuals are struggling with the social constraints of labels and identity, and that a community-based approach is therefore needed when countering violent extremism.  The 2015 CVE Regional Summit revealed various recommendations and findings relating to the use of social media, its effects on young, vulnerable individuals, and the realities that Australia must face as a country and as a society.  Given the growing threat of homegrown violent extremism and the return of foreign fighters who fought with the Islamic State, violent extremism will continue to be a problem without programs that address individual identity and social cohesion.  The Australian Federal Police (AFP) have designated Community Liaison Team members whose role is to develop partnerships with community leaders to tackle the threat of violent extremism and enhance community relations, and the AFP has also adopted strategies to improve dialogue with Muslim communities.  The AFP’s efforts, combined with the participation of young local leaders, are paramount to the success of these strategies and initiatives to counter the violent extremism narrative.


Endnotes:

[1] Lanai Scarr, ‘Immigration Minister Peter Dutton said two teenage brothers arrested while trying to leave Australia to fight with ISIS were ‘saved’’ March 8, 2015 http://www.news.com.au/national/immigration-minister-peter-dutton-said-two-teenage-brothers-arrested-while-trying-to-leave-australia-to-fight-with-isis-were-saved/news-story/90b542528076cbdd02ed34aa8a78d33a Accessed December 1, 2017.

[2] Nick Ralston, ‘Parramatta shooting: Curtis Cheng was on his way home when shot dead’ October 3, 2015 http://www.smh.com.au/nsw/parramatta-shooting-curtis-cheng-was-on-his-way-home-when-shot-dead-20151003-gk0ibk.html Accessed December 1, 2017.

[3] Australian Government media release, Parliament passes Counter Terrorism Legislation Amendment Bill No 1 2016. https://www.attorneygeneral.gov.au/Mediareleases/Pages/2016/FourthQuarter/Parliament-passes-Counter-Terrorism-Legislation-Amendment-Bill-No1-2016.aspx Accessed December 1, 2017.

[4] Australian Government, Living Safe Together: Building community resilience to violent extremism. https://www.livingsafetogether.gov.au/pages/home.aspx Accessed December 1, 2017.

[5] John W. Little, Episode 77 Australian Approaches to Counterterrorism Podcast, Covert Contact. October 2, 2017.

[6] West, L. 2016. ‘#jihad: Understanding social media as a weapon’, Security Challenges 12 (2): pp. 9-26.

Assessment Papers Australia Cyberspace Islamic State Variants Kate McNair Social Media Violent Extremism

Assessment of U.S. Cyber Command’s Elevation to Unified Combatant Command

Ali Crawford is a current M.A. Candidate at the Patterson School of Diplomacy and International Commerce.  She studies diplomacy and intelligence with a focus on cyber policy and cyber warfare.  She tweets at @ali_craw.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessment of U.S. Cyber Command’s Elevation to Unified Combatant Command

Date Originally Written:  September 18, 2017.

Date Originally Published:  November 13, 2017.

Summary:  U.S. President Donald Trump instructed the Department of Defense to elevate U.S. Cyber Command to the status of Unified Combatant Command (UCC).  As a UCC, Cyber Command could determine the operational standards for missions and possibly streamline decision-making.  Once Secretary of Defense James Mattis nominates a commander, that officer will have the opportunity to alter U.S. posturing in cyberspace.

Text:  In August 2017, U.S. President Donald Trump ordered the Department of Defense to begin Cyber Command’s elevation to a UCC[1].  With the elevation of U.S. Cyber Command there will be ten combatant commands within the U.S. military structure[2].  Combatant commands have geographic[3] or functional[4] areas of responsibility and are granted authorities by law, the President, and the Secretary of Defense (SecDef) to conduct military operations.  The elevation of Cyber Command to a UCC is a significant step forward.  The character of warfare is changing: cyberspace has quickly become a new operational domain of war, with battles waged each day.  The threat landscape in the cyberspace domain is constantly evolving, and the U.S. will have to evolve to meet these new challenges.  Cyber Command’s elevation is timely and demonstrates the Department of Defense’s commitment to defend U.S. national interests across all operational domains.

Cyber Command was established in 2009 to ensure the U.S. would maintain superiority in the cyberspace operational domain.  Reaching full operational capability in 2010, Cyber Command mainly provides assistance and other augmentative services to the military’s various cyberspace missions, such as planning, coordinating, synchronizing, and, when directed, conducting military operations in cyberspace[5].  Currently, Cyber Command is subordinate to U.S. Strategic Command and is co-located with the National Security Agency (NSA).  Cyber Command’s subordinate components include Army Cyber Command, Fleet Cyber Command, Air Force Cyber Command, and Marine Forces Cyber Command, and it also maintains an operational relationship with Coast Guard Cyber Command[6].  By 2018, Cyber Command expects to field 133 cyber mission force teams, consisting of 25 support teams, 27 combat mission teams, 68 cyber protection teams, and 13 national mission teams[7].

Admiral Michael Rogers of the United States Navy currently heads Cyber Command.  He also heads the NSA, and this “dual-hatting” of Admiral Rogers is of note.  President Trump has directed SecDef James Mattis to recommend a nominee to head Cyber Command once it becomes a UCC.  Commanders of combatant commands must be uniformed military officers, whereas the NSA may be headed by a civilian.  It is very likely that Mattis will nominate Rogers to lead Cyber Command[8].  Beyond Cyber Command’s current missions, as a UCC its new commander would have the power to alter U.S. tactical and strategic behavior in cyberspace.  The elevation could also streamline the time-sensitive process of conducting cyber operations by enabling a single authority, with direct access to SecDef Mattis, to make independent decisions.  The elevation of Cyber Command to a UCC led by a four-star military officer may also point to the Department of Defense re-prioritizing U.S. posturing in cyberspace to become more offensive rather than defensive.

As one can imagine, Admiral Rogers is not thrilled with the idea of splitting his two organizations apart.  It is very likely that he will maintain dual authority for at least another year[9].  Cyber Command’s separation from the NSA will also take time, pending the successful confirmation of a new commander, and Cyber Command would need to demonstrate its ability to function independently of its NSA intelligence counterpart[10].  Former SecDef Ash Carter and former Director of National Intelligence (DNI) James Clapper were not fans of Rogers’ dual-hat arrangement.  It remains to be seen what current SecDef Mattis or DNI Coats think of it.

Regardless, this elevation process is worth following as it develops.  Whoever becomes commander of Cyber Command, whether a new nominee or Admiral Rogers, will have an incredible opportunity to spearhead a new era of U.S. cyberspace operations, shape doctrine, and influence policy.  A self-actualized Cyber Command may be able to launch Stuxnet-style attacks aimed at North Korea or adopt more nuanced rhetoric aimed at creating impenetrable networks.  Either way, the elevation of Cyber Command to a UCC signals the growing importance of cyber-related missions and will likely encourage U.S. policymakers to adopt specific cyber policies, all while ensuring freedom of action in cyberspace.


Endnotes:

[1] The White House, “Statement by President Donald J. Trump on the Elevation of Cyber Command,” 18 August 2017, https://www.whitehouse.gov/the-press-office/2017/08/18/statement-donald-j-trump-elevation-cyber-command

[2] Unified Command Plan. (n.d.). Retrieved October 27, 2017, from https://www.defense.gov/About/Military-Departments/Unified-Combatant-Commands/

[3] 10 U.S. Code § 164 – Commanders of combatant commands: assignment; powers and duties. (n.d.). Retrieved October 27, 2017, from https://www.law.cornell.edu/uscode/text/10/164

[4] 10 U.S. Code § 167 – Unified combatant command for special operations forces. (n.d.). Retrieved October 27, 2017, from https://www.law.cornell.edu/uscode/text/10/167

[5] U.S. Strategic Command, “U.S. Cyber Command (USCYBERCOM),” 30 September 2016, http://www.stratcom.mil/Media/Factsheets/Factsheet-View/Article/960492/us-cyber-command-uscybercom/

[6] U.S. Strategic Command, “U.S. Cyber Command (USCYBERCOM),” 30 September 2016, http://www.stratcom.mil/Media/Factsheets/Factsheet-View/Article/960492/us-cyber-command-uscybercom/

[7] Richard Sisk, Military, “Cyber Command to Become Unified Combatant Command,” 18 August 2017, http://www.military.com/daily-news/2017/08/18/cyber-command-become-unified-combatant-command.html

[8] Department of Defense, “The Department of Defense Cyber Strategy,” 2015, https://www.defense.gov/News/Special-Reports/0415_Cyber-Strategy/

[9] Thomas Gibbons-Neff and Ellen Nakashima, The Washington Post, “President Trump announces move to elevate Cyber Command,” 18 August 2017, https://www.washingtonpost.com/news/checkpoint/wp/2017/08/18/president-trump-announces-move-to-elevate-cyber-command/

[10] Ibid.

Ali Crawford Assessment Papers Cyberspace United States

Options for U.S. National Guard Defense of Cyberspace

Jeffrey Alston is a member of the United States Army National Guard and a graduate of the United States Army War College.  He can be found on Twitter @jeffreymalston.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The United States has not organized its battlespace to defend against cyberattacks.  Cyberattacks are growing in scale and scope and threaten surprise and loss of initiative at the strategic, operational, and tactical levels.  Shortfalls in the nation’s cybersecurity workforce and the lack of a division of labor amongst defenders exacerbate the problem.

Date Originally Written:  July 23, 2017.

Date Originally Published:  September 4, 2017.

Author and / or Article Point of View:  This paper is written from a perspective of a U.S. Army field grade officer with maneuver battalion command experience who is a senior service college graduate.  The officer has also been a practitioner of delivery of Information Technology (IT) services and cybersecurity for his organization for over 15 years and in the IT industry for nearly 20 years.

Background:  At the height of the Cold War, the United States, and the North American (NA) continent, organized for defense against nuclear attack.  A series of radar early warning lines and control stations was erected across the northern reaches of the continent to warn of nuclear attack.  This system of electronic sentries was controlled and monitored through a series of air defense centers.  The actual air defense fell to a number of key air bases across the U.S., ready to intercept and defeat bombers from the Union of Soviet Socialist Republics entering NA airspace.  The system was comprehensive, arrayed in depth, and redundant[1].  Today, with threats posed by sophisticated cyber actors who directly challenge numerous United States interests, no equivalent warning structure exists; only high-level, broad outlines of responsibility exist[2].  Existing national capabilities, while not trivial, are not enough to provide assurances to U.S. states, as these national capabilities may require a cyber event of national significance before they are committed to a state’s cyber defense needs.  Worse, national entities may notify a state only after a breach has occurred or a network is believed to be compromised.  The situation is not sustainable.

Significance:  Today, the vast Cold War NA airspace has its analog in undefended space and gray-area networks where cyber threats propagate, unfettered by active security measures[3].  While many of the companies and firms that make up the critical infrastructure and key resource sectors have considerable cybersecurity resources and skill, just as many have next to nothing.  Many cannot afford a cyber capability or, worse, are simply unaware of the threats they face.  Between all of these entities, the common terrain consists of the numerous networks, private and public, that interconnect or expose these actors.  With its Title 32 authorities under U.S. law, the National Guard is well positioned to operate at the interface with private industry – especially critical infrastructure – and can play a key role in this gray space.

There is a unique role for National Guard cyber forces in the gray space of the internet.  The National Guard could provide a key defensive capability in two different ways.

Option #1:  The National Guard’s Defensive Cyberspace Operations-Elements (DCO-Es), which are not part of the Department of Defense Cyber Mission Force, fulfill an active role providing depth in their states’ networks, both public and private.  These elements, structured as full-time assets, can cooperatively negotiate the placement of sensors and honeypots in key network locations and representative sectors in their states.  Data from these sensors and honeypots, optimized to detect only high-threat or active indicators of compromise, would be aggregated in security operations centers manned primarily by the DCO-Es but with state government and Critical Infrastructure and Key Resources (CIKR) participation.  These security operations centers provide valuable analytics and cyber threat intelligence to all stakeholders and add depth to cybersecurity, as illustrated in the sketch below.  These units watch for only the most sophisticated threats, allowing CIKR private industry entities to concentrate their resources on internal operations.  Surveilling gray-space networks provides another layer of protection and builds a shared understanding of adversary threats, traffic, and exploitation attempts, returning initiative to CIKR entities and preventing surprise in cyberspace.
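As a rough, non-authoritative sketch of the filtering-and-aggregation logic such a security operations center might apply, the Python fragment below matches simulated sensor events against a set of hypothetical high-threat indicators of compromise and rolls the hits up by sector.  All names, addresses, and fields are invented for illustration; the point is only that tightly scoped indicator matching keeps the shared picture high-signal for CIKR partners.

```python
from collections import Counter

# Hypothetical high-threat indicators of compromise (IOCs) provided to a DCO-E; in practice
# these would come from national and commercial cyber threat intelligence feeds.
HIGH_THREAT_IOCS = {"203.0.113.7", "198.51.100.22", "evil-c2.example.net"}

# Simulated events forwarded by sensors and honeypots on state and CIKR networks.
events = [
    {"sector": "energy",  "src": "203.0.113.7",         "dst": "10.1.4.20"},
    {"sector": "water",   "src": "192.0.2.15",          "dst": "10.2.7.3"},
    {"sector": "finance", "src": "evil-c2.example.net", "dst": "10.9.1.8"},
]

def high_threat_only(events: list) -> list:
    """Keep only events matching a known high-threat indicator, so CIKR partners
    see a small number of high-confidence alerts instead of raw sensor noise."""
    return [e for e in events if e["src"] in HIGH_THREAT_IOCS or e["dst"] in HIGH_THREAT_IOCS]

alerts = high_threat_only(events)
by_sector = Counter(a["sector"] for a in alerts)  # a shared, state-wide picture by sector

print(alerts)      # the two matching events (energy, finance)
print(by_sector)   # Counter({'energy': 1, 'finance': 1})
```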

Risk:  The National Guard cannot be expected to intercept every threat that is potentially targeted at a state entity.  Negative perceptions of “mini-National Security Agencies (NSAs)” within each state could raise suspicions and privacy concerns jeopardizing the potential of these assets.  Duplicate efforts by all stakeholders threaten to spoil an available capability rather than integrating it into a whole of government approach.

Gain:  Externally, this option builds the network of cyber threat intelligence and unifies efforts within the particular DCO-E’s state.  Depth is created for all stakeholders.  Internally, allowing National Guard DCO-Es to focus in this manner provides specific direction, equipping options, and training focus for their teams.

Option #2:  The National Guard’s DCO-Es offer general support functions within their respective states for their Adjutants General, Governors, Department of Homeland Security Advisors, and others.  These elements are tasked on an as-needed basis to perform cybersecurity vulnerability assessments of critical infrastructure when requested or when directed by state leadership.  Assessments and follow-on recommendations are delivered to the supported entity for the purpose of improving its cybersecurity posture.  The DCO-Es fulfill a valuable role, especially for those entities that lack a dedicated cybersecurity capability or remain unaware of the threats they face.  In this way, the DCO-Es may prevent a breach of a lesser-defended entity from becoming the entry point for larger-scale attacks or for chain-reaction, cascading disruptions of a particular industry.

Risk:  Given the hundreds and potentially thousands of private industry CIKR entities within any particular state, this option risks futility, as there is no guarantee the assessments are performed on the entities at greatest risk.  These assessments improve the state’s cybersecurity overall; however, given the vast number of industry actors, this option is akin to trying to boil the ocean.

Gain:  These efforts help fill the considerable gap that exists in the cybersecurity of CIKR entities in the state.  The value of the assessments may be multiplied by communicating results and common vulnerabilities at state- and national-level, industry-specific associations and conferences.  DCO-Es can gradually collect information on trends in these industries and use that information for the benefit of all, such as by developing knowledge bases and publishing state-specific trend reports.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1]  Winkler, D. F. (1997). Searching the Skies: The Legacy of the United States Cold War Defense Radar Program (USA, Headquarters Air Combat Command).

[2]  Federal Government Resources. (n.d.). Retrieved July 22, 2017, from https://www.americanbar.org/content/dam/aba/marketing/Cybersecurity/2013march21_cyberroleschart.authcheckdam.pdf

[3]  Brenner, J. (2014, October 24). Nations everywhere are exploiting the lack of cybersecurity. Retrieved July 21, 2017, from https://www.washingtonpost.com/opinions/joel-brenner-nations-everywhere-are-exploiting-the-lack-of-cybersecurity

Cyberspace Jeffrey Alston Non-Full-Time Military Forces (Guard, Reserve, etc) Option Papers United States

Assessment of Cryptocurrencies and Their Potential for Criminal Use 

The Viking Cop has served in a law enforcement capacity with multiple organizations within the U.S. Executive Branch.  He can be found on Twitter @TheVikingCop.  The views reflected are his own and do not represent the opinion of any government entities.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of Cryptocurrencies and Their Potential for Criminal Use

Date Originally Written:  July 22, 2017.

Date Originally Published:  August 28, 2017.

Summary:  Cryptocurrencies are technology-driven virtual currencies that have existed since 2009.  Due to the anonymous or near-anonymous nature of their design, they are useful to criminal organizations.  It is vital for law enforcement organizations and regulators to know the basics of how cryptocurrencies work, as their use by criminal organizations is likely to continue.

Text:  Cryptocurrencies are a group of virtual currencies that rely on a peer-to-peer system, disconnected from a central issuing authority, that allows users an anonymous or near-anonymous method of conducting transactions[1][2].

Bitcoin, Ethereum, LiteCoin, and DogeCoin are among 820 currently existing cryptocurrencies that have a combined market capitalization of over ninety billion U.S. Dollars at the time of this assessment[3][4].

The majority of cryptocurrencies run on a system design created by an unknown individual or group of individuals publishing under the name Satoshi Nakamoto[2].  This system relies on a decentralized public ledger, conceptualized by Nakamoto in a whitepaper published in October 2008, which would later become widely known as the “blockchain.”

Simplistically, a blockchain works as a system of electronic signature keys and cryptographic hash codes written to a publicly accessible ledger.  A coin in any cryptocurrency is created through a “mining” process in which a computer, or node, solves a complex mathematical calculation known as a “proof-of-work”; the signature and hash of that coin are added to the public ledger on the initial node and then transmitted to every other node in the network in a block.  These proof-of-work calculations are based on confirming the hash codes of previous transactions and writing them to a local copy of the public ledger.  Once the block is transmitted to all other nodes, they confirm that the transaction is valid and write it to their copies of the public ledger.  This distribution and cross-verification of the public ledger by many computers ensures the accuracy and security of each transaction in the blockchain, as falsely writing to the public ledger would require controlling a majority of the network’s computing power[1][2].
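A minimal sketch can make the hash-chaining and proof-of-work mechanics above concrete.  The Python fragment below is illustrative only – the block fields, difficulty, and signature labels are simplified assumptions, not how Bitcoin or any other production cryptocurrency is actually implemented – but it shows why a block that commits to its predecessor’s hash is expensive to falsify.

```python
import hashlib
import json
import time

def hash_block(block: dict) -> str:
    """Return the SHA-256 hex digest of the block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash: str, transactions: list, difficulty: int = 4) -> dict:
    """Increment a nonce until the block's hash starts with `difficulty` hex zeros (the proof-of-work)."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,  # entries reference signature keys, not real names
        "prev_hash": prev_hash,        # chains this block to its predecessor
        "nonce": 0,
    }
    while not hash_block(block).startswith("0" * difficulty):
        block["nonce"] += 1
    block["hash"] = hash_block(block)  # the winning digest other nodes can re-verify
    return block

# A toy two-block chain: tampering with the first block's transactions would change its
# hash, break the prev_hash link, and force the proof-of-work to be redone downstream.
genesis = mine_block("0" * 64, [{"from": "sig_A", "to": "sig_B", "amount": 1.0}])
second = mine_block(genesis["hash"], [{"from": "sig_B", "to": "sig_C", "amount": 0.4}])
print(genesis["hash"])
print(second["hash"])
```

Raising the difficulty parameter increases the expected number of hash attempts, which is the work an attacker would have to redo for every block that follows one they alter.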

While the electronic signatures for each user are contained within the coin, the signatures themselves contain no personally identifiable information.  From a big-data perspective, this system allows one to see every transaction a user has conducted under a given electronic signature, but it does not reveal from whom or where a transaction originated or where it terminated.
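To illustrate that traceability-without-identity point, the short sketch below treats a handful of hypothetical ledger entries as a flat list (real blockchains store them in blocks, and real analysis involves clustering heuristics well beyond this).  It shows that anyone can reconstruct a signature key’s full transaction history, while the key itself names no person or place.

```python
from collections import defaultdict

# A toy, flattened view of a public ledger: every entry is visible to anyone, but parties
# appear only as signature keys (the values here are hypothetical placeholders).
ledger = [
    {"from": "sig_3f9a", "to": "sig_77b2", "amount": 0.5},
    {"from": "sig_77b2", "to": "sig_c01d", "amount": 0.2},
    {"from": "sig_3f9a", "to": "sig_c01d", "amount": 1.1},
]

def transactions_for(signature: str) -> list:
    """Return every ledger entry in which the given signature key appears."""
    return [tx for tx in ledger if signature in (tx["from"], tx["to"])]

# Group the full spending history by sending signature, as an analyst might.
history = defaultdict(list)
for tx in ledger:
    history[tx["from"]].append(tx)

# The complete activity of "sig_3f9a" is reconstructable from public data alone...
print(transactions_for("sig_3f9a"))
# ...but nothing in the ledger says who or where the holder of that key is.
```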

A further level of security has been developed by private groups that provide a method of virtually laundering the money called “mixing.”  A third party acts as an intermediary, receiving and distributing payments and removing any direct connection between the two parties in the coin signature[5].

This process of separating the coins, and the signatures within them, from the actual user gives cryptocurrencies an anonymous or near-anonymous method for conducting criminal transactions online.  A level of the internet known as the Darknet, which is accessible only through special software and operates over non-standard communication protocols, has seen a rise in online marketplaces.  Illicit Darknet marketplaces such as Silk Road and, more recently, AlphaBay have leveraged cryptocurrencies as the go-to means of concealing various online black market transactions involving stolen credit card information, controlled substances, and firearms[6].

The few large criminal cases that have involved the cryptocurrency Bitcoin, such as those of U.S. citizen Ross Ulbricht, who ran Silk Road, and Czech national Tomáš Jiříkovský, suspected of stealing ninety thousand Bitcoins ($225 million USD at current market value), have been solved by investigators through traditional methods – discovering an IP address left in careless online posts – and not through a vulnerability in the public ledger[7].

Even in smaller-scale cases of narcotics transactions on Darknet marketplaces, local investigators have only been able to trace cryptocurrency purchases backwards after intercepting shipments through normal detection methods and finding cryptocurrency artifacts in the course of a regular investigation.  There has been little to no success in linking cryptocurrencies back to distributors that has not involved regular investigative methods[8].

Looking at future scenarios involving cryptocurrencies, the Global Public Policy Institute sees a possible future in which terrorism devolves back to populist movements and employs a decentralized hierarchy heavily influenced by online interactions.  In this possible future, cryptocurrencies could allow groups to covertly move money between supporters and lone or small-group operatives, and could serve as a means to buy and sell software used in cyberterrorism attacks or in support of physical terrorist attacks[9].

Cryptocurrency is currently positioned to exploit a massive vulnerability in the global financial and legal systems, and law enforcement organizations are only beginning to acquire the knowledge and tools to combat illicit use.  In defense of law enforcement organizations and regulators, cryptocurrencies are in their infancy, with their operation, trading, and even foundational technology changing rapidly.  Until cryptocurrencies reach a stable or mature state, they will remain an unpredictable, moving target to track and hit[10].


Endnotes:

[1]  Narayanan, A., et al. (2016). Bitcoin and Cryptocurrency Technologies: A Comprehensive Introduction. Princeton University Press.

[2]  Nakamoto, S. (n.d.). Bitcoin: A Peer-to-Peer Electronic Cash System. Retrieved July 10, 2017, from Bitcoin: https://bitcoin.org/bitcoin.pdf

[3]  Cryptocurrency market cap analysis. (n.d.). Retrieved from Cryptolization: https://cryptolization.com/

[4]  CryptoCurrency Market Capitalizations. (n.d.). Retrieved July 10, 2017, from CoinMarketCap: https://coinmarketcap.com/currencies/views/all/

[5]  Jacquez, T. (2016). Cryptocurrency the new money laundering problem for banking, law enforcement, and the legal system. Utica College: ProQuest Dissertations Publishing.

[6]  Over 57% Of Darknet Sites Offer Unlawful Items, Study Shows. (n.d.). Retrieved July 21, 2017, from AlphaBay Market: https://alphabaymarket.com/over-57-of-darknet-sites-offer-unlawful-items-study-shows/

[7]  Bohannon, J. (2016, March 9). Why criminals can’t hide behind Bitcoin. Retrieved July 10, 2017, from Science: http://www.sciencemag.org/news/2016/03/why-criminals-cant-hide-behind-bitcoin

[8]  Jens Anton Bjørnage, M. W. (2017, Feburary 21). Dom: Word-dokument og bitcoins fælder narkohandler. Retrieved July 21, 2017, from Berlingske: https://www.b.dk/nationalt/dom-word-dokument-og-bitcoins-faelder-narkohandler

[9]  Bhatnagar, A., Ma, Y., Manome, M., Markiewicz, S., Sun, F., Wahedi, L. A., et al. (2017, June). Volatile Years: Transnational Terrorism in 2027. Retrieved July 21, 2017, from Robert Bosch Foundation: http://www.bosch-stiftung.de/content/language1/downloads/GGF_2027_Volatile_Years_Transnational_Terrorism_in_2027.pdf

[10]  Engle, E. (2016). Is Bitcoin Rat Poison: Cryptocurrency, Crime, and Counterfeiting (CCC). Journal of High Technology Law 16.2, 340-393.

Assessment Papers Criminal Activities Cyberspace Economic Factors The Viking Cop

Options for Paying Ransoms to Advanced Persistent Threat Actors

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as ethical hacking/penetration testing, social engineering, information security auditing, ISO 27001, threat intelligence analysis, and steganography application and detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.  


National Security Situation:  Paying ransom for exploits being extorted by Advanced Persistent Threat Actors: Weighing the Options.

Date Originally Written:  June 1, 2017.

Date Originally Published:  June 8, 2017.

Author and / or Article Point of View:  Recent events have given rise to the notion of crowdfunding monies to pay for exploits held by a hacking group called the ShadowBrokers in the “dump of the month club” they have ostensibly started.  This article examines, from a red team point of view, the idea of meeting the actors’ extortion demands to gain access to new nation-state-level exploits and, in doing so, being able to reverse engineer them and immunize the community.

Background:  On May 30, 2017, the ShadowBrokers posted to their new blog site that they were starting a monthly dump service wherein clients could pay a fee for access to exploits and other materials that the ShadowBrokers had stolen from the U.S. Intelligence Community (USIC).  On May 31, 2017, a collective of hackers created a Patreon site to crowdfund monies in an effort to pay the ShadowBrokers for their wares and to gather the exploits, reverse engineer them, and disarm them for the greater community.  This idea was roundly debated on the internet, and the effort has since been pulled by the collective after raising about $3,000.00.  In the end, it was the legal counsel of one of the hackers who had the Patreon site shut down due to potential illegalities in buying such exploits from actors like the ShadowBrokers.  Many supported the idea, while a smaller but vocal dissenting group warned that it was a bad idea.

Significance:  These events bear on many levels of national security now bound up with information security and information warfare.  The fact that the ShadowBrokers exist and have been dumping nation-state hacking tools is only one dimension of the problem.  After the ShadowBrokers dumped their last package of files, a direct international event ensued: the WannaCrypt0r malware was augmented with code from the ETERNALBLUE and DOUBLEPULSAR U.S. National Security Agency exploits and infected large numbers of hosts all over the globe with ransomware.  An additional aspect is that the code for those exploits may have been copied from the open-source sites of reverse engineers who were working on the exploits to secure networks via penetration testing tools.  This was the crux of the hackers’ argument: simply put, they would pay for access in order to deny others that access while trying to make the exploits safe.  Would this model work for both public and private entities?  Would it actually stop the ShadowBrokers from posting the data publicly even if paid privately?

Option #1:  Private actors buy the exploits through crowd funding and reverse the exploits to make them safe (i.e. report them to vendors for patching).

Risk:  Private actors like the hacker collective who attempted this could be at risk to the following scenarios:

1) Legal issues over buying classified information could lead to arrest and incarceration.

2) Buying the exploits could further encourage ShadowBrokers’ attempts to extort the United States Intelligence Community and government in an active measures campaign.

3) Buying the exploits would set a precedent by showing other actors that the criminal activity will in fact produce monetary gain, and thus more extortion campaigns could occur.

4) The actor could be paid and still dump the data to the internet, rendering the scheme moot.

Gain:  Private actors like the hacker collective who attempted this could have net gains from the following scenarios:

1) The actor is paid and the data is handed over, allowing the hacker collective to reverse engineer the exploits and immunize the community.

2) The hacker collective could garner attention for the issues and for themselves, which could perhaps gain more traction on such issues and help secure more environments.

Option #2:  Private actors do not pay for the exploits and do not reward such activities like ransomware and extortion on a global scale.

Risk:  By not paying the extortionists, the data is dumped on the internet and the exploits are used in malware and other hacking attacks globally by those capable of understanding, using, or modifying them.  This has already happened, and even with the exploits in the wild and known to vendors, the attacks still occurred to great effect.  Another side effect is that all operations that had been using these exploits are burned, but this is already a known quantity to the USIC, which likely already knows what exploits have been stolen and/or remediated.

Gain:  By not paying the extortionists, the community at large does not feed the cost-benefit calculation the attackers make in their plans for profit.  Refusing to deal with extortionists or terrorists avoids giving them a positive incentive to carry out such attacks for monetary benefit.

Other Comments:  While it may be laudable to consider such schemes as crowdfunding and attempting to open source the reversal and mitigation of such exploits, it is hubris to assume that this will stop an actor with bad intent from selling the data and being done with it.  It is also of note that the current situation on which this red team article is based involves a nation-state actor, Russia, whose military intelligence service, the Glavnoye Razvedyvatel’noye Upravleniye (GRU), and foreign intelligence service, the Sluzhba Vneshney Razvedki (SVR), are understood not to care about the money.  The current situation is not about money; it is about active measures and sowing chaos in the USIC and the world.  However, the precepts still hold true: dealing with terrorists and extortionists is a bad practice that will only incentivize the behavior.  The takeaway is that one must understand the actors and the playing field to make an informed decision on such activities.

Recommendation:  None.


Endnotes:

None.

Cyberspace Extortion Option Papers Scot A. Terban

Options for Defining “Acts of War” in Cyberspace

Michael R. Tregle, Jr. is a U.S. Army judge advocate officer currently assigned as a student in the 65th Graduate Course at The Judge Advocate General’s Legal Center & School.  A former enlisted infantryman, he has served at almost every level of command, from the infantry squad to an Army Service Component Command, and overseas in Afghanistan and the Pacific Theater.  He tweets @shockandlawblog and writes at www.medium.com/@shock_and_law.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The international community lacks consensus on a binding definition of “act of war” in cyberspace.

Date Originally Written:  March 24, 2017.

Date Originally Published:  June 5, 2017.

Author and / or Article Point of View:  The author is an active duty officer in the U.S. Army.  This article is written from the point of view of the international community toward common understandings of “acts of war” in cyberspace.

Background:  The rising prominence of cyber operations in modern international relations highlights a lack of widely established and accepted rules and norms governing their use and status.  Where no common definitions of “force” or “attack” in the cyber domain can be brought to bear, the line between peace and war becomes muddled.  It is unclear which coercive cyber acts rise to a level of force sufficient to trigger international legal rules, or how coercive a cyber act must be before it can be considered an “act of war.”  The term “act of war” is antiquated and mostly irrelevant in the current international legal system.  Instead, international law speaks in terms of “armed conflicts” and “attacks,” the definitions of which govern the resort to force in international relations.  The United Nations (UN) Charter flatly prohibits the use or threat of force between states except when force is sanctioned by the UN Security Council or a state is required to act in self-defense against an “armed attack.”  While it is almost universally accepted that these rules apply in cyberspace, how this paradigm works in the cyber domain remains a subject of debate.

Significance:  Shared understanding among states on what constitutes legally prohibited force is vital to recognizing when states are at war, with whom they are at war, and whether or not their actions, in war or otherwise, are legally permissible.  As the world finds itself falling deeper into perpetual “gray” or “hybrid” conflicts, clear lines between acceptable international conduct and legally prohibited force reduce the chance of miscalculation and define the parameters of war and peace.

Option #1:  States can define cyberattacks causing physical damage, injury, or destruction to tangible objects as prohibited uses of force that constitute “acts of war.”  This definition captures effects caused by cyber operations that are analogous to the damage caused by traditional kinetic weapons like bombs and bullets.  There are only two known instances of cyberattacks that rise to this level – the Stuxnet attack on the Natanz nuclear enrichment facility in Iran, which physically destroyed centrifuges, and an attack on a German steel mill that caused massive damage to a blast furnace.

Risk:  Limiting cyber “acts of war” to physically destructive attacks fails to fully capture the breadth and variety of detrimental actions that can be achieved in the cyber domain.  Cyber operations that only delete or alter data, however vital that data may be to national interests, would fall short of the threshold.  Similarly, attacks that temporarily interfere with use of or access to vital systems without physically altering them would never rise to the level of illegal force.  Thus, states would not be permitted to respond with force, cyber or otherwise, to such potentially devastating attacks.  Election interference and crashing economic systems exemplify attacks that would not be considered force under the physical damage standard.

Gain:  Reliance on physical damage and analogies to kinetic weapons provides a clear, bright-line threshold that eliminates uncertainty.  It is easily understood by international players and maintains objective standards by which to judge whether an operation constitutes illegal force.

Option #2:  Expand the definition of cyber force to include effects that cause virtual damage to data, infrastructure, and systems.  The International Group of Experts responsible for the Tallinn Manual approached this option with the “functionality test,” whereby attacks that interfere with the functionality of systems can qualify as cyber force even if they cause no physical damage or destruction.  Examples of such attacks include the Shamoon attacks on Saudi Arabia in 2012 and 2016, the cyberattacks that shut down portions of the Ukrainian power grid during the ongoing conflict there, and the Iranian attacks on U.S. banks detailed in a 2016 U.S. indictment.

Risk:  This option lacks the objectivity and clear standards by which to assess the cyber force threshold, which may undermine shared understanding.  Expanding the spectrum of cyber activities that may constitute force also potentially destabilizes international relations by increasing circumstances by which force may be authorized.  Such expansion may also undermine international law by vastly expanding its scope, and thus discouraging compliance.  If too many activities are considered force, states that wish to engage in them may be prompted to ignore overly burdensome legal restrictions on too broad a range of activities.

Gain:  Eliminating the physical damage threshold provides more flexibility for states to defend themselves against the potentially severe consequences of cyberattacks.  Broadening the circumstances under which force may be used in response also enhances the deterrent value of cyber capabilities that may be unleashed against an adversary.  Furthermore, lowering the threshold for legally permissible cyber activities discourages coercive international acts.

Other Comments:  None.

Recommendation:  None.


Endnotes:

None.

Cyberspace Law & Legal Issues Michael R. Tregle, Jr. Option Papers

“Do You Have A Flag?” – Egyptian Political Upheaval & Cyberspace Attribution

Murad A. Al-Asqalani is an open-source intelligence analyst based in Cairo, Egypt.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


(Author’s Note — “Do You Have A Flag?” is a reference to the Eddie Izzard sketch of the same name[1].)

National Security Situation:  Response to offensive Information Operations in cyberspace against the Government of Egypt (GoE).

Date Originally Written:  May 15, 2017.

Date Originally Published:  June 1, 2017.

Author and / or Article Point of View:  This article discusses a scenario where the GoE tasks an Interagency Special Task Force (ISTF) with formulating a framework for operating in cyberspace against emergent threats to Egyptian national security.

Background:  In 2011, a popular uprising that relied mainly on the Internet and social media websites to organize protests and disseminate white, grey, and black propaganda against the Mubarak administration of the GoE culminated in former President Mubarak stepping down after three decades in power.

Three disturbing trends have since emerged.  The first is the repeated deployment of large-scale, structured campaigns of online disinformation by all political actors, foreign and domestic, competing for dominance in the political arena.  Media outlets and think tanks seem to cater primarily to their owners’ or donors’ agendas.  Egyptian politics have been reduced to massive astroturfing campaigns, scripted by creative content developers and mobilized by marketing strategists.  These strategists create and drive talking points using meat and sock puppets, mask them as organic interactions between digital grassroots activists, amplify them in the echo chambers of social media, then pass them along to mainstream media outlets, which use them to pressure the GoE by citing ‘public opinion,’ thus empowering their client special interest groups in this ‘digital political conflict.’

The second trend to emerge is the rise in Computer Network Attack (CNA) and Computer Network Exploitation (CNE) incidents.  CNA incidents mainly focus on hacking GoE websites and defacing them with political messages, whereas CNE incidents mainly focus on information gathering (data mining) and spear phishing on social media websites to identify and target Egyptian Army and Police personnel and their families, threatening their Personal Security (PERSEC) and overall Operations Security (OPSEC).  The best-known effort of this type is the work of the first known Arabic-speaking Advanced Persistent Threat (APT) group: Desert Falcons[2].

The third trend is the abundance of Jihadi indoctrination material, and the increase in propaganda efforts of Islamist terrorist organizations in cyberspace.  New technologies, applications and encryption allow for new channels to reach potential recruits, and to disseminate written, audio, and multimedia messages of violence and hate to target populations.

Significance:  The first trend represents a direct national security threat to GoE and the interests of the Egyptian people.  Manipulation of public opinion is an Information Operations discipline known as “Influence Operations” that draws heavily on Psychological Operations or PSYOP doctrines.  It can render drastic economic consequences that can amount to economic occupation and subsequent loss of sovereignty.  Attributing each influence campaign to the special interest group behind it can help identify which Egyptian political or economic interest is at stake.

The second trend reflects the serious developments in modus operandi of terrorist organizations, non-state actors, and even state actors controlling proxies or hacker groups, which have been witnessed and acknowledged recently by most domestic intelligence services operating across the world.  Attributing these operations will identify the cells conducting them as well as the networks that support these cells, which will save lives and resources.

The third trend is a global challenge that touches on issues of freedom of speech, freedom of belief, Internet neutrality, online privacy, as well as technology proliferation and exploitation.  Terrorists use the Internet as a force multiplier, and the best approach to solving this problem is to keep them off of it through attribution and targeting, not to ban services and products available to law-abiding Internet users.

Given these parameters, the ISTF can submit a report with the following options:

Option #1:  Maintain the status quo.

Risk:  By maintaining the status quo, bureaucracy and fragmentation will keep the GoE on the defensive.  The GoE will continue to defend against an avalanche of influence operations by making concessions to whoever launches them.  The GoE will continue to appear incompetent, and will lose personnel to assassinations and improvised explosive device attacks.  The GoE will fail to prevent new recruits from joining terrorist groups, and it will not secure the proper atmosphere for investment and economic development.

This will eventually result in the full disintegration of the 1952 Nasserite state bodies, a disintegration that is central to the agendas of many regional and foreign players, and will give rise to a neo-Mamluk state, where rogue generals and kleptocrats maintain independent information operations to serve their own interests, instead of adopting a unified framework to serve the Egyptian people.

Gain:  Perhaps the only gain in this case is avoiding further escalation by parties invested in the digital political conflict, escalation that could give rise to more violent insurgencies, divisions within the military enterprise, or even a full-fledged civil war.

Option #2:  Form an Interagency Cyber Threat Research and Intelligence Group (ICTRIG).

Risk:  By forming an ICTRIG, the ISTF risks fueling both intra-agency and interagency feuds that may trigger divisions within the military enterprise and the Egyptian Intelligence Community.  Competing factions within both communities will aim to control ICTRIG through staffing to protect their privileges and compartmentalization.

Gain:  Option #2 will define a holistic approach to waging cyber warfare to protect the political and economic interests of the Egyptian people, protect the lives of Egyptian servicemen and statesmen, protect valuable resources and infrastructure, and tackle extremism.  ICTRIG will comprise an elite cadre of highly qualified commissioned officers trained in computer science, Information Operations, linguistics, political economy, counterterrorism, and domestic and international law to operate in cyberspace.  ICTRIG will develop its own playbook of mission, ethics, strategies, and tactics in accordance with a directive from the GoE’s political leadership.

Other Comments:  Option #1 can only be submitted and/or adopted due to a total lack of genuine political will to shoulder the responsibility of winning this digital political conflict, meaning that whoever submits or adopts Option #1 is directly undermining GoE institutions.  This is currently the reality of the GoE’s response to the threats outlined above: uncoordinated efforts at running several independent information operations have been noted and documented, with the Morale Affairs Department of the Military Intelligence and Reconnaissance Directorate running the largest one.

Recommendation:  None.


Endnotes:

[1]  Eddie Izzard: “Do you have a flag?”, Retrieved from: https://www.youtube.com/watch?v=_9W1zTEuKLY

[2]   Desert Falcons: The Middle East’s Preeminent APT, Kaspersky Labs Blog, Retrieved from https://blog.kaspersky.com/desert-falcon-arabic-apt/7678/

Cyberspace Egypt Murad A. Al-Asqalani Option Papers Psychological Factors

U.S. Options to Develop a Cyberspace Influence Capability

Sina Kashefipour is the founder and producer of the national security podcast The Loopcast.  He  currently works as an analyst.  The opinions expressed in this paper do not represent the position of his employer.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The battle for control and influence over the information space.

Date Originally Written:  May 18, 2017.

Date Originally Published:  May 29, 2017.

Author and / or Article Point of View:  The author believes that there is no meat space or cyberspace, there is only the information space.  The author also believes that while the tools, data, and knowledge are available, there is no United States organization designed primarily to address the issue of information warfare.

Background:  Information warfare is being used by state and non-state adversaries.  Information warfare, broadly defined, makes use of information technology to gain an advantage over an adversary.  Information is the weapon, the target, and the medium through which this type of conflict takes place[1][2][3].  Information warfare includes tactics such as misinformation, disinformation, propaganda, psychological operations and computer network operations [3][4][5].

Significance:  Information warfare is a force multiplier.  Control and mastery of information determines success in politics and enables driving the political narrative without having to engage in overt warfare.  Information warfare has taken on a new edge as the information space and the political space are highly interlinked and can, in some instances, be considered one[6][7][8].

Option #1:  The revival of the United States Information Agency (USIA), or the creation of a government agency with a similar function and outlook.  The USIA’s original purpose can be summed up as:

  • “To explain and advocate U.S. policies in terms that are credible and meaningful in foreign cultures”
  • “To provide information about the official policies of the United States, and about the people, values, and institutions which influence those policies”
  • “To bring the benefits of international engagement to American citizens and institutions by helping them build strong long-term relationships with their counterparts overseas”
  • “To advise the President and U.S. government policy-makers on the ways in which foreign attitudes will have a direct bearing on the effectiveness of U.S. policies.[9]”

The USIA’s original purpose was largely shaped by the Cold War.  The aforementioned four points are a good starting point, but any revival of the USIA would make the resulting organization one devoted to modern information warfare.  A modern USIA would not just focus on what a government agency can do but would also build ties with other governments and across the private sector, including with companies like Google, Facebook, and Twitter, whose platforms have recently been used to propagate information warfare campaigns[10][11].  Private sector companies are also essential to understanding and limiting these types of campaigns[10][12][13][14].  Furthermore, building ties and partnering with other countries facing similar issues would be part of the mission[15][16][17].

Risk:  There are two fundamental risks to reconstituting a USIA: where does a USIA fit within the national security bureaucracy, and how does modern information warfare square with the legal bounds of the First Amendment?

Defining the USIA within the national security apparatus would be difficult[18].  The purpose of the USIA would be easy to state, but difficult to bureaucratically define.  Is this an organization to include public diplomacy and how does that pair/compete with the Department of State’s public diplomacy mission?  Furthermore, if this is an organization to include information warfare how does that impact Department of Defense capabilities such as the National Security Agency or United States Cyber Command?  Where does the Broadcasting Board of Governors fit in?  Lastly, modern execution of successful information warfare relies on a whole of government approach or the ability to advance strategy in an interdisciplinary fashion, which is difficult given the complexity of the bureaucracy.

The second risk is how an agency engages in information warfare with regard to the First Amendment.  Consider for a moment: if war or conflict treats information as the weapon, the target, and the medium, what role can the government legally play?  Can a government wage information warfare without, say, engaging in outright censorship or control of information mediums like Facebook and Twitter?  The legal framework surrounding these issues is ill-defined at present[19][20].

Gain:  A fully funded, cabinet-level organization devoted to information warfare, able to network across government agencies, other governments, and the private sector, and capable of both waging information warfare and defending the United States against it.

Option #2:  Smaller, specific interagency working groups similar to the Active Measures Working Group of the late 1980s.  The original Active Measures Working Group was an interagency collaboration devoted to countering Soviet disinformation, which consequently became the “U.S. Government’s body of expertise on disinformation[21].”

In contrast to Option #1, the proposed working group would have a tightly focused mission, limited staff, and a single problem set.

Risk:  Political will competes with success: if the proposed working group does not show immediate results, it will more than likely be disbanded.  The group also risks being disbanded once the issue appears “solved.”

Gain:  A small and focused group has the potential to punch far above its weight.  As Schoen and Lamb point out “the group exposed Soviet disinformation at little cost to the United States but negated much of the effort mounted by the large Soviet bureaucracy that produced the multibillion dollar Soviet disinformation effort[22].”

Option #3:  The United States Government creates a dox-and-dump, Wikileaks/ShadowBrokers-style group[23][24].  If all else fails, engaging in attacks against an adversary’s secrets and making them public could be an option.  Unlike the previous two options, this option does not necessarily represent a truthful approach, rather just truthiness[25].  In practice this means leaking or dumping data that reinforces and emphasizes a deleterious narrative concerning an adversary, thus making their secrets very public and putting the adversary in a compromising position.

Risk:  Burning data publicly might compromise sources and methods, which would ultimately impede or stop investigations and prosecutions.  For instance, if an adversary has a deep and wide corruption problem, is it more effective to dox and dump accounts and shell companies, or to engage in a multi-year investigatory process?  Dox and dump would have an immediate effect, but an investigation and prosecution would likely have a longer-lasting effect.

Gain:  An organization and/or network is only as stable as its secrets are secure, and being able to challenge that security effectively is a gain.

Recommendation:  None.


Endnotes:

[1]  Virag, Saso. (2017, April 23). Information and Information Warfare Primer. Retrieved from:  http://playgod.org/information-warfare-primer/

[2]  Waltzman, Rand. (2017, April 27). The Weaponization of Information: The Need of Cognitive Security. Testimony presented before the Senate Armed Services Committee, Subcommittee on Cybersecurity on April 27, 2017.

[3]  Pomerantsev, Peter and Michael Weiss. (2014). The Menace of Unreality: How the Kremlin Weaponizes Information, Culture, and Money.

[4]  Matthews, Miriam and Paul, Christopher (2016). The Russian “Firehose of Falsehood” Propaganda Model: Why It Might Work and Options to Counter It

[5]  Giles, Keir. (2016, November). Handbook of Russian Information Warfare. Fellowship Monograph Research Division NATO Defense College.

[6]  Giles, Keir and Hagestad II, William. (2013). Divided by a Common Language: Cyber Definitions in Chinese, Russian, and English. 2013 5th International Conference on Cyber Conflict

[7]  Strategy Bridge. (2017, May 8). An Extended Discussion on an Important Question: What is Information Operations? Retrieved: https://thestrategybridge.org/the-bridge/2017/5/8/an-extended-discussion-on-an-important-question-what-is-information-operations

[8] There is an interesting conceptual and academic debate to be had between what is information warfare and what is an information operation. In reality, there is no difference given that the United States’ adversaries see no practical difference between the two.

[9] State Department. (1998). USIA Overview. Retrieved from: http://dosfan.lib.uic.edu/usia/usiahome/oldoview.htm

[10]  Nuland, William, Stamos, Alex, and Weedon, Jen. (2017, April 27). Information Operations on Facebook.

[11]  Koerner, Brendan. (2016, March). Why ISIS is Winning the Social Media War. Wired

[12]  Atlantic Council. (2017). Digital Forensic Research Lab Retrieved:  https://medium.com/dfrlab

[13]  Bellingcat. (2017).  Bellingcat: The Home of Online Investigations. Retrieved: https://www.bellingcat.com/

[14]  Bergen, Mark. (2016). Google Brings Fake News Fact-Checking to Search Results. Bloomberg News. Retrieved: https://www.bloomberg.com/news/articles/2017-04-07/google-brings-fake-news-fact-checking-to-search-results

[15]  NATO Strategic Communications Centre of Excellence. (2017). Retrieved: http://stratcomcoe.org/

[16]  National Public Radio. (2017, May 10). NATO Takes Aim at Disinformation Campaigns. Retrieved: http://www.npr.org/2017/05/10/527720078/nato-takes-aim-at-disinformation-campaigns

[17]  European Union External Action. (2017). Questions and Answers about the East Stratcom Task Force. Retrieved: https://eeas.europa.eu/headquarters/headquarters-homepage/2116/-questions-and-answers-about-the-east-

[18]  Armstrong, Matthew. (2015, November 12). No, We Do Not Need to Revive The U.S. Information Agency. War on the Rocks. Retrieved:  https://warontherocks.com/2015/11/no-we-do-not-need-to-revive-the-u-s-information-agency/ 

[19]  For example the Countering Foreign Propaganda and Disinformation Act included in the National Defense Authorization Act for fiscal year 2017 acts more with the issues of funding, organization, and some strategy rather than legal infrastructure issues.  Retrieved: https://www.congress.gov/114/crpt/hrpt840/CRPT-114hrpt840.pdf

[20]  The U.S. Information and Educational Exchange Act of 1948, also known as the Smith-Mundt Act. The act effectively creates the basis for public diplomacy and the dissemination of government viewpoint data abroad. The law also limits what the United States can disseminate at home. Retrieved: http://legisworks.org/congress/80/publaw-402.pdf

[21]  Lamb, Christopher and Schoen, Fletcher (2012, June). Deception, Disinformation, and Strategic Communications: How One Interagency Group Made a Major Difference. Retrieved: http://ndupress.ndu.edu/Portals/68/Documents/stratperspective/inss/Strategic-Perspectives-11.pdf

[22]  Lamb and Schoen, page 3

[23]  RT. (2016, October 3). Wikileaks turns 10: Biggest Secrets Exposed by Whistleblowing Project. Retrieved: https://www.rt.com/news/361483-wikileaks-anniversary-dnc-assange/

[24]  The Gruqg. (2016, August 18). Shadow Broker Breakdown. Retrieved: https://medium.com/@thegrugq/shadow-broker-breakdown-b05099eb2f4a

[25]  Truthiness is defined as “the quality of seeming to be true according to one’s intuition, opinion, or perception, without regard to logic, factual evidence, or the like.” Dictionary.com. Truthiness. Retrieved:  http://www.dictionary.com/browse/truthiness.

Truthiness in this space is not just about leaking data but also how that data is presented and organized. The goal is to take data and shape it so it feels and looks true enough to emphasize the desired narrative.


Evolution of U.S. Cyber Operations and Information Warfare

Brett Wessley is an officer in the U.S. Navy, currently assigned to U.S. Pacific Command.   The contents of this paper reflect his own personal views and are not necessarily endorsed by U.S. Pacific Command, Department of the Navy or Department of Defense.  Connect with him on Twitter @Brett_Wessley.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.  


National Security Situation:  Evolving role of cyber operations and information warfare in military operational planning.

Date Originally Written:  April 19, 2017.

Date Originally Published:  May 25, 2017.

Author and / or Article Point of View:  This article is intended to present options to senior level Department of Defense planners involved with Unified Command Plan 2017.

Background:  Information Warfare (IW) has increasingly gained prominence throughout defense circles, with both allied and adversarial militaries reforming and reorganizing IW doctrine across their force structures.  Although not doctrinally defined by the U.S. Department of Defense (DoD), IW has been embraced to varying degrees by the individual branches of the U.S. armed forces[1].  For the purposes of this paper, the definition of IW is: the means of creating non-kinetic effects in the battlespace that disrupt, degrade, corrupt, or influence the ability of adversaries or potential adversaries to conduct military operations while protecting our own.

Significance:  IW has been embraced by U.S. near-peer adversaries as a means of asymmetrically attacking U.S. military superiority.  Russian Defense Minister Sergei Shoigu recently acknowledged the existence of “information warfare troops,” who conduct military exercises and real-world operations in Ukraine demonstrating the fusion of intelligence, offensive cyber operations, and information operations (IO)[2].  The People’s Republic of China has also reorganized its armed forces to operationalize IW, with the newly created People’s Liberation Army Strategic Support Force drawing from existing units to combine intelligence, cyber, electronic warfare (EW), IO, and space forces into a single command[3].

Modern militaries increasingly depend on sophisticated systems for command and control (C2), communications and intelligence.  Information-related vulnerabilities have the potential for creating non-kinetic operational effects, often as effective as kinetic fires options.  According to U.S. Army Major General Stephen Fogarty, “Russian activities in Ukraine…really are a case study for the potential for CEMA, cyber-electromagnetic activities…It’s not just cyber, it’s not just electronic warfare, it’s not just intelligence, but it’s really effective integration of all these capabilities with kinetic measures to actually create the effect that their commanders [want] to achieve[4].”  Without matching the efforts of adversaries to operationalize IW, U.S. military operations risk vulnerability to enemy IW operations.

Option #1:  United States Cyber Command (USCYBERCOM) will oversee Military Department efforts to man, train, and equip IW and IW-related forces to be used to execute military operations under Combatant Command (CCMD) authority.  Additionally, USCYBERCOM will synchronize IW planning and coordinate IW operations across the CCMDs, as well as execute some IW operations under its own authority.

Risk:  USCYBERCOM, still relatively new and a sub-unified command under United States Strategic Command (USSTRATCOM), has limited experience coordinating intelligence, EW, space, and IO capabilities within coherent IW operations.  USSTRATCOM is tasked with responsibility for DoD-wide space operations, and the Geographic Combatant Commands (GCCs) are tasked with intelligence, EW, and IO operational responsibility[5][6][7].  Until USCYBERCOM gains experience supporting GCCs with full-spectrum IW operations, previously GCC-controlled IO and EW operations will operate at elevated risk relative to similar support provided by USSTRATCOM.

Gain:  USCYBERCOM overseeing Military Department efforts to man, train, and equip IW and IW-related forces will ensure that all elements of successful non-kinetic military effects are ready to be imposed on the battlefield.  Operational control of IW forces will remain with the GCC, but USCYBERCOM will organize, develop, and plan support during crisis and war.  Much as the creation of United States Special Operations Command (USSOCOM) as a unified command consolidated core special operations activities and tasked USSOCOM to organize, train, and equip special operations forces, a fully optimized USCYBERCOM would do the same for IW-related forces.

This option is a similar construct to the Theater Special Operations Commands (TSOCs) which ensure GCCs are fully supported during execution of operational plans.  Similar to TSOCs, Theater Cyber Commands could be established to integrate with GCCs and support both contingency planning and operations, replacing the current Joint Cyber Centers (JCCs) that coordinate current cyber forces controlled by USCYBERCOM and its service components[8].

Streamlined C2 and co-location of IW and IW-related forces would have a force multiplying effect when executing non-kinetic effects during peacetime, crisis and conflict.  Instead of cyber, intelligence, EW, IO, and space forces separately planning and coordinating their stove-piped capabilities, they would plan and operate as an integrated unit.

Option #2:  Task GCCs with operational responsibility over aligned cyber forces, and integrate them with current IW-related planning and operations.

Risk:  GCCs lack the institutional cyber-related knowledge and expertise that USCYBERCOM maintains, largely gained through the Commander of USCYBERCOM traditionally being dual-hatted as Director of the National Security Agency (NSA).  While it is plausible that in the future USCYBERCOM could develop cyber-related tools and expertise equivalent to NSA’s, it is much less likely that GCC responsibility for cyber forces could sustain this relationship with NSA and other Non-Defense Federal Departments and Agencies (NDFDA) that conduct cyber operations.

Gain:  GCCs are responsible for theater operational and contingency planning, and would be best suited for tailoring IW-related effects to military plans.  During all phases of military operations, the GCC would C2 IW operations, leveraging the full spectrum of IW to both prepare the operational environment and execute operations in conflict.  While the GCCs would be supported by USSTRATCOM/USCYBERCOM, in addition to the NDFDAs, formally assigning Cyber Mission Teams (CMTs) as the Joint Force Cyber Component (JFCC) to the GCC would enable the Commander to influence the manning, training, and equipping of forces relevant to the threats posed by their unique theater.

GCCs are already responsible for theater intelligence collection and IO, and removing administrative barriers to integrating cyber-related effects would improve the IW capabilities in theater.  Although CMTs currently support GCCs and their theater campaign and operational plans, targeting effects are coordinated instead of tasked[9].  Integration of the CMTs as a fully operational JFCC would more efficiently synchronize non-kinetic effects throughout the targeting cycle.

Other Comments:  The current disjointed nature of DoD IW planning and operations prevents the full impact of non-kinetic effects from being realized.  While cyber, intelligence, EW, IO, and space operations are carried out by well-trained and equipped forces, these planning efforts remain stove-piped within their respective forces.  Until these operations are fully integrated, IW will remain a strength for adversaries who have organized their forces to exploit this military asymmetry.

Recommendation:  None.


Endnotes:

[1]  Richard Mosier, “NAVY INFORMATION WARFARE — WHAT IS IT?,” Center for International Maritime Security, September 13, 2016. http://cimsec.org/navy-information-warfare/27542

[2]  Vladimir Isachenkov, “Russia military acknowledges new branch: info warfare troops,” The Associated Press, February 22, 2017. http://bigstory.ap.org/article/8b7532462dd0495d9f756c9ae7d2ff3c/russian-military-continues-massive-upgrade

[3]  John Costello, “The Strategic Support Force: China’s Information Warfare Service,” The Jamestown Foundation, February 8, 2016. https://jamestown.org/program/the-strategic-support-force-chinas-information-warfare-service/#.V6AOI5MrKRv

[4]  Keir Giles, “The Next Phase of Russian Information Warfare,” The NATO STRATCOM Center of Excellence, accessed April 20, 2017. http://www.stratcomcoe.org/next-phase-russian-information-warfare-keir-giles

[5]  U.S. Joint Chiefs of Staff, “Joint Publication 2-0: Joint Intelligence”, October 22, 2013, Chapter III: Intelligence Organizations and Responsibilities, III-7-10.

[6]  U.S. Joint Chiefs of Staff, “Joint Publication 3-13: Information Operations”, November 20, 2014, Chapter III: Authorities, Responsibilities, and Legal Considerations, III-2; Chapter IV: Integrating Information-Related Capabilities into the Joint Operations Planning Process, IV-1-5.

[7]  U.S. Joint Chiefs of Staff, “Joint Publication 3-12 (R): Cyberspace Operations”, February 5, 2013, Chapter III: Authorities, Roles, and Responsibilities, III-4-7.

[8]  Ibid.

[9]  U.S. Cyber Command News Release, “All Cyber Mission Force Teams Achieve Initial Operating Capability,” U.S. Department of Defense, October 24, 2016.  https://www.defense.gov/News/Article/Article/984663/all-cyber-mission-force-teams-achieve-initial-operating-capability/


Cyber Vulnerabilities in U.S. Law Enforcement & Public Safety Communication Networks

The Viking Cop has served in a law enforcement capacity with multiple organizations within the U.S. Executive Branch.  He can be found on Twitter @TheVikingCop.  The views reflected are his own and do not represent the opinion of any government entities.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Cyber vulnerabilities in regional-level Law Enforcement and Public Safety (LE/PS) communication networks which could be exploited by violent extremists in support of a physical attack.

Date Originally Written:  April 15, 2017.

Date Originally Published:  May 22, 2017.

Author and / or Article Point of View:  Author is a graduate of both University and Federal LE/PS training.  Author has two years of sworn and unsworn law enforcement experience.  Author has been a licensed amateur radio operator and builder for eleven years.

Background:  LE/PS agencies in the U.S. currently operate on communication networks built on the Association of Public-Safety Communications Officials Project 25 (P25) standard established in 1995[1].  European and East Asian countries operate on a similar network standard known as Terrestrial Trunked Radio.

The push on a federal level for widespread implementation of the P25 standard across all U.S. emergency services was prompted by failures of communication during critical incidents such as the September 11th attacks, the Columbine massacre, and the Oklahoma City bombing[2].  Prior to P25 implementation, different LE/PS organizations had been operating on different bands, frequencies, and equipment, which prevented them from communicating directly with each other.

During P25 implementation many agencies, in an effort to offset cost and take advantage of the interoperability concept, established Regional Communication Centers (RCC) such as the Consolidated Communication Bureau in Maine, the Grand Junction Regional Communications Center in Colorado, and South Sound 911 in Washington.  These RCCs have consolidated dispatching for all LE/PS activities, enabling smaller jurisdictions to work together more effectively when handling daily calls for service.

Significance:  During a critical incident the rapid, clear, and secure flow of communications between responding personnel is essential.  The P25 standard greatly enhances the ability of responding LE/PS organizations to coordinate: unified networks can be quickly established because all units operate on the same band, and the flow of information can avoid bottlenecks.

Issues begin to arise as violent extremist groups, such as the Islamic State of Iraq and Syria (ISIS), attempt to recruit more technically minded members who can increase the group’s ability to plan and conduct cyber operations, either as a direct attack or in support of a physical attack[3].  Electronic security researchers have also found various security flaws in the P25 standard’s method of framing transmission data that demonstrate it is vulnerable to practical attacks such as high-energy denial of service attacks and low-energy selective jamming attacks[4][5].

This article focuses on a style of attack known as Selective Jamming, in which an attacker would be able to use one or more low-power, inexpensive, and portable transceivers to specifically target encrypted communications in a manner that would not affect transmissions made in the clear (unencrypted).  Such an attack would be difficult to detect because of other flaws in the P25 standard, and each jamming burst would last no more than a few hundredths of a second[4].
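One conceptual indicator of such an attack is a divergence between the failure rate of encrypted transmissions and that of clear transmissions over a short window: clear traffic getting through while encrypted traffic fails is consistent with low-energy selective jamming.  The sketch below is a minimal illustration of that heuristic, not a P25 implementation detail; it assumes hypothetical access to a dispatch console log that records, for each transmission attempt, whether it was encrypted and whether it decoded successfully (the record format, thresholds, and field names are invented for illustration).

from collections import deque

WINDOW = 200          # number of recent transmission attempts to consider
MIN_SAMPLES = 30      # require enough of each traffic type before alerting
ALERT_GAP = 0.40      # alert if encrypted failure rate exceeds clear rate by 40 points

class JammingMonitor:
    """Flags a possible selective-jamming attack from per-transmission outcomes."""
    def __init__(self):
        self.recent = deque(maxlen=WINDOW)  # entries: (encrypted: bool, success: bool)

    def record(self, encrypted: bool, success: bool) -> bool:
        """Record one transmission attempt; return True if the alert condition is met."""
        self.recent.append((encrypted, success))
        enc = [ok for e, ok in self.recent if e]
        clr = [ok for e, ok in self.recent if not e]
        if len(enc) < MIN_SAMPLES or len(clr) < MIN_SAMPLES:
            return False
        enc_fail = 1 - sum(enc) / len(enc)
        clr_fail = 1 - sum(clr) / len(clr)
        # Encrypted traffic failing while clear traffic succeeds matches the
        # selective-jamming signature described above.
        return (enc_fail - clr_fail) > ALERT_GAP

# Hypothetical usage: feed parsed console log entries into the monitor.
monitor = JammingMonitor()
for encrypted, success in [(True, False), (False, True), (True, False)]:
    if monitor.record(encrypted, success):
        print("Possible selective jamming: encrypted traffic failing disproportionately")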

If a series of Selective Jamming transceivers were activated shortly before a physical attack, responding units, especially tactical units, would have only minutes to decide how to run communications.

Option #1:  Push all radio traffic into the clear to overcome a possible selective jamming attack.  This option would require all responding units to disable the encryption function on their radios or switch over to an unencrypted channel to continue to effectively communicate during the response phase.

Risk:  The purpose of encrypted communications in LE/PS is to prevent a perpetrator from listening to the tactical decisions and deployment of responders.  If a perpetrator has developed and implemented the capability to selectively jam communications, they will likely also have the ability and equipment to monitor radio traffic once it is in the clear.  This option would give the perpetrator of an attack a major advantage in knowing the response to the attack.  The hesitancy of undercover teams to operate in the clear was noted as a major safety risk in the after action report of the 2015 San Bernardino shooting[6].

Gain:  LE/PS agencies responding to an incident would be able to continue to use their regular equipment and protocols without having to deploy an alternative system.  This would give responders the greatest speed in attempting to stop the attack, at a known cost to operational security.  There would also be zero equipment costs above normal operation, as P25-series radios are all capable of operating in the clear.

Option #2:  Develop and stage a secondary communications system for responding agencies or tactical teams to implement once a selective jamming attack is suspected to be occurring.

Risk:  Significant cost and planning would be required to field a jamming-resistant secondary system that responding agencies could deploy rapidly.  This cost factor could prompt agencies to equip only tactical teams with a separate system, such as push-to-talk cellphones or radio systems built on communications standards other than P25.  Any LE/PS unit without access to the secondary system would experience a near-total communications blackout apart from communications made in the clear.

Gain:  Responding units or tactical teams, once a possible selective jamming attack was recognized, would be able to maintain operational security by switching to a secure method of communications.  This would disrupt the advantage that the perpetrator was attempting to gain by disrupting and/or monitoring radio traffic.

Other Comments:  Both options would require significant additional training for LE/PS personnel to recognize the signs of a Selective Jamming attack and respond as appropriate.

Recommendation:  None.


Endnotes:

[1]  Horden, N. (2015). P25 History. Retrieved from Project 25 Technology Interest Group: http://www.project25.org/index.php/technology/p25-history

[2]  National Task Force on Interoperability. (2005). Why Can’t We Talk. Washington D.C.: National Institute of Justice.

[3]  Nussbaum, B. (2015). Thinking About ISIS And Its Cyber Capabilities: Somewhere Between Blue Skies and Falling One. Retrieved from The Center for Internet and Society: http://cyberlaw.stanford.edu/blog/2015/11/thinking-about-isis-and-its-cyber-capabilities-somewhere-between-blue-skies-and-falling

[4]  Clark, S., Metzger, P., Wasserman, Z., Xu, K., & Blaze, M. (2010). Security Weaknesses in the APCO Project 25 Two-Way Radio System. University of Pennsylvania Department of Computer & Information Science.

[5]  Glass, S., Muthukkumarasamy, V., Portmann, M., & Robert, M. (2011). Insecurity in Public-Safety Communications. Brisbane: NICTA.

[6]  Braziel, R., Straub, F., Watson, G., & Hoops, R. (2016). Bringing Calm to Chaos: A Critical Incident Review of the San Bernardino Public Safety Response to the December 2, 2015, Terrorist Shooting Incident at the Inland Regional Center. Washington: Office of Community Oriented Policing Services.


U.S. Diplomacy Options for Security & Adaptability in Cyberspace

Matthew Reitman is a science and technology journalist.  He has a background in security policy and studied International Relations at Boston University.  He can be found on Twitter @MatthewReitman.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  U.S. competitors conducting national security activities in cyberspace below the threshold of war aka in the “Gray Zone.”

Date Originally Written:  April 14, 2017.

Date Originally Published:  May 18, 2017.

Author and / or Article Point of View:  This article is written from the point of view of the U.S. State Department towards cyberspace.

Background:  State actors and their non-state proxies operate aggressively in cyberspace, but within a gray zone that violates international norms without justifying a “kinetic” response.  Russian influence operations in the 2016 U.S. election were not an act of war, but escalated tensions dramatically[1].  North Korea used the Lazarus Group to circumvent sanctions by stealing $81 million from Bangladesh’s central bank[2].  Since a U.S.-People’s Republic of China (PRC) agreement in 2015 to curb corporate espionage, there have been 13 intrusions by groups based in the PRC against the U.S. private sector[3].  The State Department has helped to curb Islamic State of Iraq and Syria propaganda online via the Global Engagement Center[4].  The recent creation of another interagency entity, the Russia Information Group, suggests similar efforts could be effective elsewhere[5].

The State Department continues to work towards establishing behavior norms in cyberspace via multilateral channels, like the United Nations Group of Governmental Experts, and bilateral channels, but this remains a slow and tedious process.  Until those norms are codified, gray zone activities in cyberspace will continue.  The risk of attacks on Information Technology (IT) or critical infrastructure and less destructive acts will only grow as the rest of the world comes online, increasing the attack surface.

Significance:  The ever-growing digitally connected ecosystem presents a chimera-like set of risks and rewards for U.S. policymakers.  Protecting the free exchange of information online, let alone keeping the U.S. and its allies safe, is difficult when facing gray zone threats.  Responding with conventional tools like economic sanctions can be evaded more easily online, while “hacking back” can escalate tensions in cyberspace and further runs the risk of creating a conflict that spills offline.  Despite the challenge, diplomacy can reduce threats and deescalate tensions for the U.S. and its allies by balancing security and adaptability.  This article provides policy options for responding to and defending against a range of gray zone threats in cyberspace.

Option #1:  Establish effective compellence methods tailored to each adversary.  Option #1 seeks to combine and tailor traditional coercive diplomacy methods like indictments, sanctions, and “naming and shaming,” in tandem with aggressive counter-messaging to combat information warfare, which can be anything from debunking fake news to producing misinformation that undermines the adversary’s narrative.  This bifocal approach has been shown to be a more effective form of coercion[6] than either method alone.

Risk:  Depending on the severity, the combined and tailored compellence methods could turn public opinion against the U.S.  Extreme sanctions that punish civilian populations could be viewed unfavorably.  If sanctions are evaded online, escalation could increase as more aggressive responses are considered.  “Naming and shaming” could backfire if an attack is falsely attributed.  Fake bread crumbs can be left behind in code to obfuscate the true offender and make it look as though another nation is responsible.  Depending on the severity of counter-propaganda, its content could damage U.S. credibility, especially if conducted covertly.  Additionally, U.S. actions under Option #1 could undermine efforts to establish behavior norms in cyberspace.

Gain:  Combined and tailored compellence methods can isolate an adversary financially and politically while eroding domestic support.  “Naming and shaming” sends a clear message to the adversary and the world that their actions will not be tolerated, justifying any retaliation.  Sanctions can weaken an economy and cut off outside funding for political support.  Leaking unfavorable information and counter-propaganda undermines an adversary’s credibility and also erodes domestic support.  Option #1’s severity can range depending on the scenario, from amplifying the spread of accurate news and leaked documents with social botnets to deliberately spreading misinformation.  By escalating these options, the risks increase.

Option #2:  Support U.S. Allies’ cybersecurity due diligence and capacity building.  Option #2 pursues confidence-building measures in cyberspace as a means of deterrence offline, so nations with U.S. collective defense agreements have priority.  This involves fortifying allies’ IT networks and industrial control systems for critical infrastructure by taking measures to reduce vulnerabilities and improve cybersecurity incident response teams (CSIRTs).  This option is paired with foreign aid for programs that teach media literacy, “cyber hygiene,” and computer science to civilians.

Risk:  Improving allies’ defensive posture can be viewed by some nations as threatening and could escalate tensions.  Helping allies fortify their defensive capabilities could lead to some sense of assumed responsibility if those measures failed, potentially fracturing the relationship or causing the U.S. to come to their defense.  Artificial Intelligence (AI)-enhanced defense systems are not a silver bullet and can contribute to a false sense of security.  Any effort to defend against information warfare runs the risk of going too far and infringing on freedom of speech.  Aside from diminishing public trust in the U.S., Option #2 could undermine efforts to establish behavior norms in cyberspace.

Gain:  Collectively, this strategy can strengthen U.S. Allies by contributing to their independence while bolstering their defense against a range of attacks.  Option #2 can reduce risks to U.S. networks by decreasing threats to foreign networks.  Penetration testing and threat sharing can highlight vulnerabilities in IT networks and critical infrastructure, while educating CSIRTs.  Advances in AI-enhanced cybersecurity systems can decrease response time and reduce network intrusions.  Funding computer science education trains the next generation of CSIRTs.  Cyber hygiene, or best cybersecurity practices, can make civilians less susceptible to cyber intrusions, while media literacy can counter the effects of information warfare.
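As one concrete example of the routine “cyber hygiene” and due-diligence work described above, a capacity-building program might automate basic health checks of partner-facing services, such as confirming that TLS certificates are not about to expire.  The sketch below is a minimal illustration under that assumption; the hostnames are placeholders and the script is not a complete assessment tool.

import socket
import ssl
from datetime import datetime, timezone

HOSTS = ["example.org", "example.net"]  # placeholder partner endpoints
WARN_DAYS = 30                          # flag certificates expiring within 30 days

def cert_days_remaining(host: str, port: int = 443) -> int:
    """Return the number of days until the host's TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)).days

for host in HOSTS:
    try:
        days = cert_days_remaining(host)
        status = "OK" if days > WARN_DAYS else "RENEW SOON"
        print(f"{host}: certificate expires in {days} days [{status}]")
    except (OSError, ssl.SSLError) as err:
        print(f"{host}: check failed ({err})")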

Other Comments:  The U.S. Cyber Command and intelligence agencies, such as the National Security Agency and Central Intelligence Agency, are largely responsible for U.S. government operations in cyberspace.  The U.S. State Department’s range of options may be limited, but partnering with the military and intelligence communities, as well as the private sector is crucial.

Recommendation:  None.


Endnotes:

[1]  Nakashima, E. (2017, February 7) Russia’s apparent meddling in U.S. election is not an act of war, cyber expert says. Washington Post. Retrieved from: https://www.washingtonpost.com/news/checkpoint/wp/2017/02/07/russias-apparent-meddling-in-u-s-election-is-not-an-act-of-war-cyber-expert-says

[2]  Finkle, J. (2017, March 15) “North Korean hacking group behind recent attacks on banks: Symantec.” Reuters. Retrieved from: http://www.reuters.com/article/us-cyber-northkorea-symantec

[3]  FireEye. (2016, June 20). Red Line Drawn: China Recalculates Its Use Of Cyber Espionage. Retrieved from: https://www.fireeye.com/blog/threat-research/2016/06/red-line-drawn-china-espionage.html

[4]  Warrick, J. (2017, February 3). “How a U.S. team uses Facebook, guerrilla marketing to peel off potential ISIS recruits.” Washington Post. Retrieved from: https://www.washingtonpost.com/world/national-security/bait-and-flip-us-team-uses-facebook-guerrilla-marketing-to-peel-off-potential-isis-recruits/2017/02/03/431e19ba-e4e4-11e6-a547-5fb9411d332c_story.html

[5]  Mak, T. (2017, February 6). “U.S. Preps for Infowar on Russia”. The Daily Beast. Retrieved from: http://www.thedailybeast.com/articles/2017/02/06/u-s-preps-for-infowar-on-russia.html

[6]  Valeriano, B., & Jensen, B. (2017, March 16). “From Arms and Influence to Data and Manipulation: What Can Thomas Schelling Tell Us About Cyber Coercion?”. Lawfare. Retrieved from: https://www.lawfareblog.com/arms-and-influence-data-and-manipulation-what-can-thomas-schelling-tell-us-about-cyber-coercion


Options for Private Sector Hacking Back

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  A future where Hacking Back / Offensive Cyber Operations in the Private Sphere are allowed by the U.S. Government.

Date Originally Written:  April 3, 2017.

Date Originally Published:  May 15, 2017.

Author and / or Article Point of View:  This article is written from the point of view of a future where Hacking Back / Offensive Cyber Operations as a means for corporations to react offensively as a defensive act has been legally sanctioned by the U.S. Government and the U.S. Department of Justice.  While this government sanctioning may seem encouraging to some, it could lead to national and international complications.

Background:  It is the year X and hacking back by companies in the U.S. has been given official sanction.  As such, any company that has been hacked may respond offensively by hacking the adversary’s infrastructure to steal back data and / or to deny and degrade the adversary’s ability to attack further.

Significance:  At present, Hacking Back / Offensive Cyber Operations are not sanctioned activities that the U.S. Government allows U.S. corporations to conduct.  If this were to come to pass, then U.S. corporations would have the capability to stand up offensive cyber operations divisions within their corporate structures, or perhaps hire companies to carry out such actions for them, i.e. information warfare mercenaries.  These forces and actions taken by corporations, if allowed, could cause larger tensions within the geopolitical landscape and force other nation-states to react.

Option #1:  The U.S. Government sanctions the act of hacking back against adversaries as fair game.  U.S. corporations stand up hacking teams to work with Blue Teams (Employees in companies who attempt to thwart incidents and respond to them) to react to incidents and to attempt to hack the adversaries back to recover information, determine who the adversaries are, and to prevent their infrastructure from being operational.

Risk:  Hacking teams at U.S. corporations, while hacking back, could make mistakes and attack innocent companies, entities, or foreign countries whose infrastructure may have been unwittingly used as part of the original attack.

Gain:  The hacking teams of these U.S. corporations could hack back, steal information back, and determine whether it had been copied and further exfiltrated.  Hacking back also allows the U.S. corporations to try to determine who the actor is, gather evidence, and degrade the actor’s ability to attack others.

Option #2:  The U.S. Government allows for the formation of teams/companies of information warfare specialists that are non-governmental bodies to hack back as an offering.  This offensive activity would be sanctioned and monitored by the government but performed for companies under a letter-of-marque approach, with payment and / or bounties for actors stopped or for evidence brought to the judicial system and used to prosecute actors.

Risk:  Letters of marque could be misused and attackers could go outside their mandates.  The same types of mistakes could also be made as by the corporations that formed offensive teams internally.  Offensive actions could affect geopolitics as well as get in the way of other governmental operations that may be taking place.  The infrastructure of innocent actors who were merely a pivot point could be hacked and abused, and other as-yet-undefined mistakes could be made.

Gain:  Such actors and operations could deter some adversaries and in fact could retrieve data that has been stolen and perhaps prevent that data from being further exploited.

Other Comments:  Clearly the idea of hacking back has been in the news these last few years, and many security professionals have said it is a terrible idea.  There are certain advantages to the notion that firms could protect themselves from hacking by hacking back, but the general sense today is that many companies cannot even protect their data properly to start with, so the idea of hacking back is a red herring relative to larger security concerns.

Recommendation:  None.


Endnotes:

None.


Options to Deter Cyber-Intrusions into Non-Government Computers

Elizabeth M. Bartels is a doctoral candidate at the Pardee RAND Graduate School and an assistant policy analyst at the nonprofit, nonpartisan RAND Corporation.  She has an M.S. in political science from the Massachusetts Institute of Technology and a B.A. in political science with a minor in Near Eastern languages and civilization from the University of Chicago.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Unless deterred, cyber-intrusions into non-government computer systems will continue to lead to the release of government-related information.

Date Originally Written:  March 15, 2017.

Date Originally Published:  May 11, 2017.

Author and / or Article Point of View:  Author is a PhD candidate in policy analysis, whose work focuses on wargaming and defense decision-making.

Background:  Over the years, a great deal of attention has been paid to gaining security in cyberspace to prevent unauthorized access to critical infrastructure, such as the systems that control electrical grids and financial systems, and to military networks.  In recent years a new category of threat has emerged: the cyber-theft and subsequent public release of large troves of private communications, personal documents, and other data.

This category of incident includes the release of government data by inside actors such as Chelsea Manning and Edward Snowden.  However, hacks of the Democratic National Committee and John Podesta, a Democratic party strategist, illustrate that the risk goes beyond the theft of government data to include information that has the potential to harm individuals or threaten the proper functioning of government.  Because the federal government depends on proxies such as contractors, non-profit organizations, and local governments to administer so many public functions, securing information that could harm the government – but is not on government-secured systems – may require a different approach.

Significance:  The growing dependence on government proxies, and the risk such dependence creates, is hardly new[1], and neither is concern over the cyber security implications of systems outside government’s immediate control[2].  However, recent attacks have called the sufficiency of current solutions into question.

Option #1:  Build Better Defenses.  The traditional approach to deterring cyber-exploitation has focused on securing networks, so that the likelihood of failure is high enough to dissuade adversaries from attempting to infiltrate systems.  These programs range from voluntary standards to improve network security[3], to contractual security standards, to counter-intelligence efforts that seek to identify potential insider threats.  These programs could be expanded to more aggressively set standards covering non-governmental systems containing information that could harm the government if released.

Risk:  Because the government does not own these systems, it must motivate proxy organizations to take actions they may not see as in their interest.  While negotiating contracts that align organizational goals with those of the government or providing incentives to organizations that improve their defenses may help, gaps are likely to remain given the limits of governmental authority over non-governmental networks and information[4].

Additionally, defensive efforts are often seen as a nuisance both inside and outside government.  For example, the military culture often prioritizes warfighting equipment over defensive or “office” functions like information technology[5], and counter-intelligence is often seen as a hindrance to intelligence gathering[6].  Other organizations are generally focused on efficiency of day-to-day functions over security[7].  These tendencies create a risk that security efforts will not be taken seriously by line operators, causing defenses to fail.

Gain:  Denying adversaries the opportunity to infiltrate U.S. systems can prevent unauthorized access to sensitive material and deter future attempted incursions.

Option #2:  Hit Back Harder.  Another traditional approach to deterrence is punishment: credibly threatening to impose costs on the adversary if they commit a specific act.  The idea is that adversaries will be deterred if they believe attacks will extract a cost that outweighs any potential benefits.  Under the Obama administration, punishment for cyber attacks focused on the threat of economic sanctions[8] and, in the aftermath of attacks, promises of clandestine actions against adversaries[9].  This policy could be made stronger by a clear statement that the U.S. will take clandestine action not just when its own systems are compromised, but also when its interests are threatened by exploitation of other systems.  Recent work has advocated the use of cyber-tools which are acknowledged only to the victim as a means of punishment in this context[10]; however, the limited responsiveness of cyber weapons may make this an unattractive option.  Instead, diplomatic, economic, information, and military options in all domains should be considered when developing response options, as has been suggested in recent reports[11].

Risk:  Traditionally, there has been skepticism that cyber incursions can be effectively stopped through punishment, because in order to punish, the incursion must be attributed to an adversary.  Attributing cyber incidents is possible based on forensics, but the process often lacks the speed and certainty of investigations into traditional attacks.  Adversaries may assume that decision makers will not be willing to retaliate long after the initiating incident and without “firm” proof as justification.  As a result, adversaries might still be willing to attack because they feel the threat of retaliation is not credible.  Response options will also need to deal with how uncertainty may shape U.S. decision makers’ tolerance for collateral damage and spillover effects beyond primary targets.

Gain:  Counter-attacks can be launched regardless of who owns the system, in contrast to defensive options, which are difficult to implement on systems not controlled by the government.

Option #3:  Status Quo. While rarely discussed, another option is to maintain the status quo and not expand existing programs that seek to protect government networks.

Risk:  By failing to evolve U.S. defenses against cyber-exploitation, adversaries could gain increased advantage as they develop new ways to overcome existing approaches.

Gain:  It is difficult to demonstrate that even the current level of spending on deterring cyber attacks has meaningful impact on adversary behavior.  Limiting the expansion of untested programs would free up resources that could be devoted to examining the effectiveness of current policies, which might generate new insights about what is, and is not, effective.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1]  John J. Dilulio Jr. [2014], Bring Back the Bureaucrats: Why More Federal Workers Will Lead to Better (and Smaller!) Government, Templeton Press.

[2]  President Barack Obama [2013], Executive Order—Improving Critical Infrastructure Cybersecurity, The White House Office of the Press Secretary.

[3]  National Institute of Standards and Technology (NIST) [2017], Framework for Improving Critical Infrastructure Cybersecurity, Draft Version 1.1.

[4]  Glenn S. Gerstell, NSA General Counsel, Confronting the Cybersecurity Challenge, Keynote address at the 2017 Law, Ethics and National Security Conference at Duke Law School, February 25, 2017.

[5]  Allan Friedman and P.W. Singer, “Cult of the Cyber Offensive,” Foreign Policy, January 15, 2014.

[6]  James M. Olson, The Ten Commandments of Counterintelligence, 2007.

[7]  Don Norman, “When Security Gets in the Way,” Interactions, volume 16, issue 6 (2010).

[8]  President Barack Obama [2016], Executive Order—Taking Additional Steps to Address the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities.

[9]  Alex Johnson [2016], “US Will ‘Take Action’ on Russian Hacking, Obama Promises,” NBC News.

[10]  Evan Perkoski and Michael Poznansky [2016], “An Eye for an Eye: Deterring Russian Cyber Intrusions,” War on the Rocks.

[11]  Defense Science Board [2017], Task Force on Cyber Deterrence.
