Options to Bridge the U.S. Department of Defense – Silicon Valley Gap with Cyber Foreign Area Officers

Kat Cassedy is a qualitative analyst with 20 years of work in hard problem solving, alternative analysis, and red teaming.  She currently works as an independent consultant/contractor, with experience in the public, private, and academic sectors.  She can be found on Twitter @Katnip95352013, tweeting on modern #politicalwarfare, #proxywarfare, #NatSec issues, #grayzoneconflict, and a smattering of random nonsense.  Divergent Options’ content does not contain information of any official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  The cultural gap between the U.S. Department of Defense and Silicon Valley is significant.  Bridging this gap likely requires more than military members learning tech speak as their primary duties allow.

Date Originally Written:  April 15, 2019. 

Date Originally Published:  April 15, 2019. 

Author and / or Article Point of View:  The author’s point of view is that the cyber sector may be more akin to a foreign culture than a business segment, and that bridging the growing gulf between the Pentagon and Silicon Valley may require sociocultural capabilities as much as, or more than, technical or acquisition skills. 

Background:  As the third decade of the digital revolution nears its end, and nearly a year after U.S. Cyber Command was elevated to a Unified Combatant Command, the gap between the private sector’s most advanced technology talent, intellectual property (IP), services, and products and those of the DoD is strained and widening. Although the Pentagon needs and wants Silicon Valley’s IP and capabilities, the technorati are rejecting DoD’s overtures[1] in favor of enormous new markets such as China. In the Information Age, DoD assesses that it needs Silicon Valley’s technology much the way it needed the Middle East’s fossil fuels over the last half century: to maintain U.S. global battlespace dominance. And Silicon Valley’s tech giants, with market caps rivaling or exceeding the Gross Domestic Product of the globe’s most thriving economies, have such global agency and autonomy that they should arguably be viewed as geopolitical power players, not simply businesses.  In that context, perhaps it is time to consider 21st century alternatives to the DoD habit of treating Silicon Valley and its subcomponents as conventional Defense Industrial Base vendors to be managed like routine government contractors. 

Significance:  Many leaders and action officers in the DoD community are concerned that Silicon Valley’s emphasis on revenue share and shareholder value is leading it to prioritize relationships with America’s near-peer competitors – most particularly, but not limited to, China[2] – over working with the U.S. DoD and national security community. “In the policy world, 30 years of experience usually makes you powerful. In the technical world, 30 years of experience usually makes you obsolete[3].” Given the DoD’s extreme reliance on and investment in highly networked and interdependent information systems to dominate the modern global operating environment, the possibility that U.S. companies are choosing foreign adversaries as clients and partners over the U.S. government is highly concerning. If this technology shift away from U.S. national security concerns continues: 1) U.S. companies may soon be providing adversaries with advanced capabilities that run counter to U.S. national interests[4]; 2) even where these companies continue to provide products and services to the U.S., there is increased concern about counter-intelligence vulnerabilities in U.S. Government (USG) systems and platforms due to technology supply chain weaknesses[5]; and 3) key U.S. tech startups and emerging technology companies are accepting venture capital, seed, and private equity investment from investors whose ultimate beneficial owners trace back to foreign sovereign and private wealth sources of concern to the national security community[6].

Option #1:  To bridge the cultural gap between Silicon Valley and the Pentagon, the U.S. Military Departments will train, certify, and deploy “Cyber Foreign Area Officers” or CFAOs.  These CFAOs would align with DoD Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs[7]” and, within the cyber and Silicon Valley context, do the same as a traditional FAO and “provide expertise in planning and executing operations, to provide liaison with foreign militaries operating in coalitions with U.S. forces, to conduct political-military activities, and to execute military-diplomatic missions.”

Risk:  DoD treating multinational corporations like nation states risks further eroding the recognition of nation states as bearing ultimate authority.  Additionally, there is a risk that the checks and balances within the U.S. between the public and private sectors will tip irrevocably toward the tech sector, setting it up as a rival to the USG in foreign and domestic relationships. Lastly, success in this approach may lead other business sectors and industries to push to be treated on par.

Gain:  Having DoD establish a CFAO program would put DoD-centric cyber and technology skills in a sociocultural context, aiding in Silicon Valley sense-making, narrative development and dissemination, and the establishment of mutual trusted agency. In effect, CFAOs would act as translators and relationship builders between Silicon Valley and DoD, with the interests of all the branches of service fully represented. Given routine real-world and fictional depictions of Silicon Valley and DoD as hailing from figuratively different worlds, using a FAO construct to break through this recognized barrier may be a case of USG policy belatedly catching up with present reality. Further, considering the national security threats that loom if the DoD loses its technological superiority, the potential gains of this option may outweigh its risks.

Option #2:  Maintain the status quo, in which DoD alternates between treating Silicon Valley as a necessary but sometimes errant supplier and seeking to emulate Silicon Valley’s successes and culture within existing DoD constructs.  

Risk:  Possibly the greatest risk in continuing the current DoD approach to the tech world is the loss of the advantage of technical superiority through speed of innovation, due to a mutual lack of understanding of priorities, mission drivers, objectives, and organizational design.  Although a number of DoD acquisition reform initiatives are gaining some traction, conventional thinking is that DoD must acquire technology and services through a lengthy competitive bid process which, once awarded, locks both the DoD and the winner into a multi-year relationship. In Silicon Valley, speed-to-market is valued, and concepts pitched one month may be expected to be deployable within a few quarters, before the technology evolves yet again. Continual experimentation, improvisation, adaptation, and innovation are at the heart of Silicon Valley. DoD wants advanced technology, but it wants it scalable, repeatable, controllable, and inexpensive. These are not compatible cultural outlooks.

Gain:  Continuing the current course of action has the advantage of familiarity, where the rules and pathways are well-understood by DoD and where risk can be managed. Although arguably slow to evolve, DoD acquisition mechanisms are on solid legal ground regarding use of taxpayer dollars, and program managers and decision makers alike are quite comfortable in navigating the use of conventional DoD acquisition tools. This approach represents good fiscal stewardship of DoD budgets.

Other Comments:  None. 

Recommendation:  None.  


Endnotes:

[1] Malcomson, S. Why Silicon Valley Shouldn’t Work With the Pentagon. New York Times. 19APR2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/04/19/opinion/silicon-valley-military-contract.html.

[2] Hsu, J. Pentagon Warns Silicon Valley About Aiding Chinese Military. IEEE Spectrum. 28MAR2019. Retrieved 15APR2019, from https://spectrum.ieee.org/tech-talk/aerospace/military/pentagon-warns-silicon-valley-about-aiding-chinese-military.

[3] Zegart, A and Childs, K. The Growing Gulf Between Silicon Valley and Washington. The Atlantic. 13DEC2018. Retrieved 15APR2019, from https://www.theatlantic.com/ideas/archive/2018/12/growing-gulf-between-silicon-valley-and-washington/577963/.

[4] Copestake, J. Google China: Has search firm put Project Dragonfly on hold? BBC News. 18DEC2018. Retrieved 15APR2019, from https://www.bbc.com/news/technology-46604085.

[5] Mozur, P. The Week in Tech: Fears of the Supply Chain in China. New York Times. 12OCT2018. Retrieved 15APR2019, from https://www.nytimes.com/2018/10/12/technology/the-week-in-tech-fears-of-the-supply-chain-in-china.html.

[6] Northam, J. China Makes A Big Play In Silicon Valley. National Public Radio. 07OCT2018. Retrieved 15APR2019, from https://www.npr.org/2018/10/07/654339389/china-makes-a-big-play-in-silicon-valley.

[7] Department of Defense Directive 1315.17, “Military Department Foreign Area Officer (FAO) Programs,” April 28, 2005.  Retrieved 15APR2019, from https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/131517p.pdf.

 


Assessment of the North Korean Cyberattack on Sony Pictures

Emily Weinstein is a Research Analyst at Pointe Bello and a current M.A. candidate in Security Studies at Georgetown University.  Her research focuses on Sino-North Korean relations, foreign policy, and military modernization.  She can be found on Twitter @emily_sw1.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the North Korean Cyberattack on Sony Pictures

Date Originally Written:  July 11, 2018.

Date Originally Published:  August 20, 2018.

Summary:   The 2014 North Korean cyberattack on Sony Pictures shocked the world into realizing that a North Korean cyber threat truly existed.  Prior to 2014, what little information existed on North Korea’s cyber capabilities was largely dismissed, citing poor domestic conditions as rationale for cyber ineptitude.  However, the impressive nature of the Sony attack was instrumental in changing global understanding of Kim Jong-un and his regime’s daring nature.

Text:  On November 24, 2014 Sony employees discovered a massive cyber breach after an image of a red skull appeared on computer screens company-wide, displaying a warning that threatened to reveal the company’s secrets.  That same day, more than 7,000 employees turned on their computers to find gruesome images of the severed head of Sony’s chief executive, Michael Lynton[1].  These discoveries forced the company to shut down all computer systems, including those in international offices, until the incident was further investigated.  What was first deemed nothing more than a nuisance was later revealed as a breach of international proportions.  Since this incident, the world has noted the increasing prevalence of large-scale digital attacks and the dangers they pose to both private and public sector entities.

According to the U.S. Computer Emergency Readiness Team, the primary malware used in this case was a Server Message Block (SMB) Worm Tool, otherwise known as SVCH0ST.EXE.  An SMB worm is usually equipped with five components: a listening implant, a lightweight backdoor, a proxy tool, a destructive hard drive tool, and a destructive target cleaning tool[2].  The worm spreads throughout the infected network via brute force authentication attacks, a trial-and-error method of guessing credentials such as a user password or personal identification number.  The worm then connects to command-and-control infrastructure, from which it copies malware (software intended to damage or disable computers and computer systems) to victim and administrator systems via the network sharing process.  Once these tasks are complete, the worm executes the malware using remotely scheduled tasks[3].
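The trial-and-error mechanics of a brute force authentication attack can be sketched with a deliberately tiny toy model. The `check_credentials` oracle and the four-digit PIN space below are hypothetical stand-ins for a single authentication attempt against a target; real attacks iterate over large password dictionaries against network authentication services:

```python
from itertools import product

# Hypothetical secret standing in for a credential on a target system.
SECRET_PIN = "7301"

def check_credentials(pin):
    """Stand-in for one authentication attempt against the target."""
    return pin == SECRET_PIN

def brute_force_pin():
    """Trial-and-error over the entire 4-digit space (10,000 guesses at
    most), the same exhaustive approach an SMB worm's brute-force
    component applies to credentials."""
    for digits in product("0123456789", repeat=4):
        guess = "".join(digits)
        if check_credentials(guess):
            return guess
    return None

print(brute_force_pin())  # prints "7301"
```

The tiny search space is what makes the toy instant; the same logic against a long, random password becomes computationally infeasible, which is why credential strength matters.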

This type of malware is highly destructive.  If an organization is infected, it is likely to experience massive impacts on daily operations, including the loss of intellectual property and the disruption of critical internal systems[4].  In Sony’s case, on an individual level, hackers obtained and leaked personal and somewhat embarrassing information about or said by Sony personnel to the general public, in addition to information from private Sony emails that was sensitive or controversial.  On the company level, hackers stole diverse information ranging from contracts, salary lists, budget information, and movie plans, including five entire yet-to-be released movies.  Moreover, Sony internal data centers had been wiped clean and 75 percent of the servers had been destroyed[5].

This hack was attributed to the release of Sony’s movie, The Interview, a comedy depicting U.S. journalists’ plan to assassinate North Korean leader Kim Jong-un.  A group of hackers self-identifying as the “Guardians of Peace” (GOP) initially took responsibility for the attack; however, attribution remained unsettled, as experts had a difficult time determining the connections and sponsorship of the GOP.  Then-Federal Bureau of Investigation (FBI) Director James Comey announced in December 2014 that the U.S. government believed the North Korean regime was behind the attack, noting that the Sony hackers had failed to use proxy servers to mask the origin of their attack, revealing Internet Protocol (IP) addresses that the FBI knew to be exclusively used by North Korea[6].

Aside from Director Comey’s statements, other evidence exists that suggests North Korea’s involvement.  For instance, the type of malware deployed against Sony utilized methods similar to malware that North Korean actors had previously developed and used.  Similarly, the computer-wiping software used against Sony was also used in a 2013 attack against South Korean banks and media outlets.  However, most damning of all was the discovery that the malware was built on computers set to the Korean language[7].

As for motivation, experts argue that the hack was executed by the North Korean government in an attempt to preserve the image of Kim Jong-un, as protecting their leader’s image is a chief political objective in North Korea’s cyber program.  Sony’s The Interview infantilized Kim Jong-un and disparaged his leadership skills, portraying him as an inept, ruthless, and selfish leader, while poking fun at him by depicting him singing Katy Perry’s “Firework” while shooting off missiles.  Kim Jong-un himself has declared that “Cyberwarfare, along with nuclear weapons and missiles, is an ‘all-purpose sword[8],’” so it is not surprising that he would use it to protect his own reputation.

The biggest takeaway from the Sony breach is arguably the U.S. government’s change in attitude towards North Korean cyber capabilities.  In the years leading up to the attack, U.S. analysts were quick to dismiss North Korea’s cyber potential, citing its isolationist tactics, struggling economy, and lack of modernization as rationale for this judgment.  However, following this large-scale attack on a large and prominent U.S. company, the U.S. government has been forced to rethink how it views the Hermit Kingdom’s cyber capabilities.  Former National Security Agency Deputy Director Chris Inglis argues that cyber is a tailor-made instrument of power for the North Korean regime, thanks to its low cost of entry, asymmetrical nature, and degree of anonymity and stealth[9].  Indeed, the North Korean cyber threat has crept up on the U.S., and its intelligence apparatus must now continue to work to both counter and better understand North Korea’s cyber capabilities.


Endnotes:

[1] Cieply, M. and Barnes, B. (December 30, 2014). Sony Cyberattack, First a Nuisance, Swiftly Grew Into a Firestorm. Retrieved July 7, 2018, from https://www.nytimes.com/2014/12/31/business/media/sony-attack-first-a-nuisance-swiftly-grew-into-a-firestorm-.html

[2] Lennon, M. (December 19, 2014). Hackers Used Sophisticated SMB Worm Tool to Attack Sony. Retrieved July 7, 2018, from https://www.securityweek.com/hackers-used-sophisticated-smb-worm-tool-attack-sony

[3] Doman, C. (January 19, 2015). Destructive malware—a close look at an SMB worm tool. Retrieved July 7, 2018, from http://pwc.blogs.com/cyber_security_updates/2015/01/destructive-malware.html

[4] United States Computer Emergency Readiness Team (December 19, 2014). Alert (TA14-353A) Targeted Destructive Malware. Retrieved July 7, 2018, from https://www.us-cert.gov/ncas/alerts/TA14-353A

[5] Cieply, M. and Barnes, B. (December 30, 2014). Sony Cyberattack, First a Nuisance, Swiftly Grew Into a Firestorm. Retrieved July 7, 2018, from https://www.nytimes.com/2014/12/31/business/media/sony-attack-first-a-nuisance-swiftly-grew-into-a-firestorm-.html

[6] Greenberg, A. (January 7, 2015). FBI Director: Sony’s ‘Sloppy’ North Korean Hackers Revealed Their IP Addresses. Retrieved July 7, 2018, from https://www.wired.com/2015/01/fbi-director-says-north-korean-hackers-sometimes-failed-use-proxies-sony-hack/

[7] Pagliery, J. (December 29, 2014). What caused Sony hack: What we know now. Retrieved July 8, 2018, from http://money.cnn.com/2014/12/24/technology/security/sony-hack-facts/

[8] Sanger, D., Kirkpatrick, D., and Perlroth, N. (October 15, 2017). The World Once Laughed at North Korean Cyberpower. No More. Retrieved July 8, 2018, from https://mobile.nytimes.com/2017/10/15/world/asia/north-korea-hacking-cyber-sony.html

[9] Ibid.


An Assessment of Information Warfare as a Cybersecurity Issue

Justin Sherman is a sophomore at Duke University double-majoring in Computer Science and Political Science, focused on cybersecurity, cyberwarfare, and cyber governance. Justin conducts technical security research through Duke’s Computer Science Department; he conducts technology policy research through Duke’s Sanford School of Public Policy; and he’s a Cyber Researcher at a Department of Defense-backed, industry-intelligence-academia group at North Carolina State University focused on cyber and national security – through which he works with the U.S. defense and intelligence communities on issues of cybersecurity, cyber policy, and national cyber strategy. Justin is also a regular contributor to numerous industry blogs and policy journals.

Anastasios Arampatzis is a retired Hellenic Air Force officer with over 20 years’ worth of experience in cybersecurity and IT project management. During his service in the Armed Forces, Anastasios was assigned to various key positions in national, NATO, and EU headquarters, and he’s been honored by numerous high-ranking officers for his expertise and professionalism, including a nomination as a certified NATO evaluator for information security. Anastasios currently works as an informatics instructor at AKMI Educational Institute, where his interests include exploring the human side of cybersecurity – psychology, public education, organizational training programs, and the effects of cultural, cognitive, and heuristic biases.

Paul Cobaugh is the Vice President of Narrative Strategies, a coalition of scholars and military professionals involved in the non-kinetic aspects of counter-terrorism, defeating violent extremism, irregular warfare, large-scale conflict mediation, and peace-building. Paul recently retired from a distinguished career in U.S. Special Operations Command, and his specialties include campaigns of influence and engagement with indigenous populations.

Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of Information Warfare as a Cybersecurity Issue

Date Originally Written:  March 2, 2018.

Date Originally Published:  June 18, 2018.

Summary:  Information warfare is not new, but the evolution of cheap, accessible, and scalable cyber technologies greatly enables it.  The U.S. Department of Justice’s February 2018 indictment of the Internet Research Agency, one of the Russian groups behind disinformation in the 2016 American election, establishes that information warfare is not just a global problem from the national security and fact-checking perspectives, but a cybersecurity issue as well.

Text:  On February 16, 2018, U.S. Department of Justice Special Counsel Robert Mueller indicted 13 Russians for interfering in the 2016 United States presidential election[1]. Beyond the important legal and political ramifications of this event, this indictment should make one thing clear: information warfare is a cybersecurity issue.

It shouldn’t be surprising that Russia created fake social media profiles to spread disinformation on sites like Facebook.  This tactic had been demonstrated for some time, and the Russians have used it in numerous other countries as well[2].  Instead, what’s noteworthy about the investigation’s findings is that Russian hackers also stole the identities of real American citizens to spread disinformation[3].  Whether the Russian hackers compromised accounts through technical hacking, social engineering, or other means, this technique proved remarkably effective; masquerading as American citizens lent significantly greater credibility to trolls (who purposely sow discord on the Internet) and bots (automated information-spreaders) that pushed Russian narratives.

Information warfare has traditionally been viewed as an issue of fact-checking or information filtering, which it certainly still is today.  Nonetheless, traditional information warfare was conducted before the advent of modern cyber technologies, which have greatly changed the ways in which information campaigns are executed.  Whereas historical campaigns took time to spread information and did so through in-person speeches or printed news articles, social media enables instantaneous, low-cost, and scalable access to the world’s populations, as does the simplicity of online blogging and information forgery (e.g., using software to manufacture false images).  Those looking to wage information warfare can do so with relative ease in today’s digital world.

The effectiveness of modern information warfare, then, is heavily dependent upon the security of these technologies and platforms – or, in many cases, the total lack thereof.  In this situation, the success of the Russian hackers was propelled by the average U.S. citizen’s ignorance of basic cyber “hygiene” rules, such as strong password creation.  Had cybersecurity mechanisms kept these hackers out, Russian “agents of influence” would have gained access to far fewer legitimate social media profiles, making their overall campaign significantly less effective.
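The “strong password creation” hygiene rule mentioned above can be made concrete with a toy policy check. The specific heuristics and the small common-password list below are illustrative assumptions, not any platform’s actual policy; production systems also check breached-password corpora and estimated entropy:

```python
import re

def password_issues(pw):
    """Toy hygiene check: returns a list of problems, empty if none found.
    Heuristics are illustrative only."""
    issues = []
    if len(pw) < 12:
        issues.append("shorter than 12 characters")
    if not re.search(r"[a-z]", pw):
        issues.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        issues.append("no uppercase letter")
    if not re.search(r"\d", pw):
        issues.append("no digit")
    if pw.lower() in {"password", "letmein", "qwerty123"}:
        issues.append("common password")
    return issues

print(password_issues("password"))
# ['shorter than 12 characters', 'no uppercase letter', 'no digit', 'common password']
print(password_issues("c0rrectHorseBatteryStaple!"))  # []
```

Even a crude check like this blocks the credentials most vulnerable to the brute-force and credential-stuffing attacks that enabled account takeovers.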

To be clear, this is not to blame the campaign’s effectiveness on specific end users; with over 100,000 Facebook accounts hacked every single day, it isn’t difficult to imagine another country using this same technique[4].  However, it’s important to understand the relevance of cybersecurity here. User access control, strong passwords, mandated multi-factor authentication, fraud detection, and identity theft prevention were just some of the cybersecurity best practices that, like fact-checking mechanisms and counter-narrative strategies, failed to stop Russian disinformation.
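One of the best practices named above, multi-factor authentication, is commonly implemented with time-based one-time passwords (TOTP, standardized in RFC 6238). A minimal standard-library sketch, checked against the RFC’s published test vector (the shared secret below is the RFC’s sample value, not a real credential):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, timestamp=None, step=30, digits=6):
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    if timestamp is None:
        timestamp = int(time.time())
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", timestamp // step)   # moving factor
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T=59s
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, timestamp=59, digits=8))  # prints "94287082"
```

Because the code changes every 30 seconds and derives from a secret the attacker never sees, a stolen password alone no longer suffices to hijack an account.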

These technical and behavioral failures didn’t just compromise the integrity of information, a pillar of cybersecurity; they also made the campaign far more effective.  As the hackers planned to exploit the polarized election environment, access to American profiles made this far easier: by manipulating and distorting information to make it seem legitimate (i.e., opinions coming from actual Americans), these Russians undermined law enforcement operations, election processes, and more.  We are quick to ask: how much of this information was correct and how much of it wasn’t?  Who can tell whether the information originated from un-compromised, credible sources or from credible sources that had been hacked?

However, we should also consider another angle: what if the hackers hadn’t gained access to those American profiles in the first place?  What if the hackers had been forced to rely almost entirely on fraudulent accounts, which are prone to detection by Facebook’s algorithms?  It is for these reasons that information warfare is so critical for cybersecurity, and why Russian information warfare campaigns of the past cannot be equally compared to the digital information wars of the modern era.

The global cybersecurity community can take an even greater, active role in addressing the account access component of disinformation.  Additionally, those working on information warfare and other narrative strategies could leverage cybersecurity for defensive operations.  Without a coordinated and integrated effort between these two sectors of the cyber and security communities, the inability to effectively combat disinformation will only continue as false information penetrates our social media feeds, news cycles, and overall public discourse.

More than ever, a demand signal is present to educate the world’s citizens on cyber risks and basic cyber “hygiene,” and to even mandate the use of multi-factor authentication, encrypted Internet connections, and other critical security features.  The security of social media and other mass-content-sharing platforms has become an information warfare issue, both within respective countries and across the planet as a whole.  When rhetoric and narrative can spread (or at least appear to spread) from within, the effectiveness of a campaign is amplified.  The cybersecurity angle of information warfare, in addition to the misinformation, disinformation, and rhetoric itself, will remain integral to effectively combating the propaganda and narrative campaigns of the modern age.


Endnotes:

[1] United States of America v. Internet Research Agency LLC, Case 1:18-cr-00032-DLF. Retrieved from https://www.justice.gov/file/1035477/download

[2] Wintour, P. (2017, September 5). West Failing to Tackle Russian Hacking and Fake News, Says Latvia. Retrieved from https://www.theguardian.com/world/2017/sep/05/west-failing-to-tackle-russian-hacking-and-fake-news-says-latvia

[3] Greenberg, A. (2018, February 16). Russian Trolls Stole Real US Identities to Hide in Plain Sight. Retrieved from https://www.wired.com/story/russian-trolls-identity-theft-mueller-indictment/

[4] Callahan, M. (2015, March 1). Big Brother 2.0: 160,000 Facebook Pages are Hacked a Day. Retrieved from https://nypost.com/2015/03/01/big-brother-2-0-160000-facebook-pages-are-hacked-a-day/


Options for United States Intelligence Community Analytical Tools

Marvin Ebrahimi has served as an intelligence analyst.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Centralization of United States Intelligence Community (USIC) Analytical Tools.

Date Originally Written:  March 6, 2017.

Date Originally Published:  May 1, 2017.

Author and / or Article Point of View:  The author has served as an all-source analyst.  The author has personal experience with the countless tool suites, programs, and platforms used by intelligence analysts within the USIC to perform their analytical function, and with the fact that there is no centralized collaborative environment from which such tools can be learned, evaluated, or selected for unit-specific mission sets tailored to the end-user.

Background:  Various USIC agencies and components have access to countless tool suites, some of which are unique to one agency and some of which are available across the community.  These tools vary according to intelligence function and are available across the multiple information systems used by intelligence components within the community[1].  While a baseline of tools is available to all, there is little centralization of these tools.  This lack of centralization requires analysts to learn and retain knowledge of how to manipulate the information systems and search engines at their disposal in order to find the specific tools required for specific analytical functions.

Significance:  Digital colocation or compilation of analytical tool suites, programs, and platforms in a collaborative space would benefit all-source analysts, whose broadly focused function requires access to a diverse set of tools.  The knowledge and ability required to conduct tool-specific searches, i.e., manipulating a basic search engine to navigate to a landing page or agency-specific site where tool-specific information can hopefully be found, is time-consuming and detracts from the time analysts have to conduct and employ analytical tradecraft.  This loss of time from an analyst’s daily schedule creates an opportunity cost that has yet to be realized.

Option #1:  Centralize analytical training, visibility, and accessibility to tool suites, programs, and platforms through creation of a USIC-wide collaborative analytical environment such as the Director of National Intelligence’s Intelligence Community IT Enterprise initiative[2].  Centralization of analytical training is used here to describe the necessity to train analysts on how to manipulate various databases, search engines, and environments effectively to find the specific tool suite or program required for their function.  Option #1 does not refer to centralizing all USIC member analytical training as outlined in the Intelligence Community Directive 203, “Analytical Standards.”

Risk:  Centralization of analytical training and accessibility leads to conflict primarily within respective USIC members’ unique identities and analytical functions.  The tools used by one agency may not work or be required by another.  Various agencies provide different functions, and centralizing access to tools requires further integration of all USIC agencies in the same or at least a compatible digital space.

Creation of a USIC-wide entity to collate, evaluate, and manage access to the multitude of tool suites, etc. creates yet another potentially bureaucratic element in an already robust community.  Such an entity would have to be controlled by the Office of the Director of National Intelligence unless delegated to a subordinate element.

Compartmentalization required to protect agencies’ sources and methods, analytical tradecraft included, may preclude community-wide collaborative efforts and information sharing necessary to create an effective space.

Gain:  Shared information is a shared reality.  Creation of a community-wide space to colocate analytical tool suites enables and facilitates collaboration, information exchange, and innovation across agencies.  These are positive factors in an information-driven age where rapid exchange of information is expected.

Effectiveness of end-user analysts increases with greater access to collective knowledge, skills, and abilities used by other professionals who perform similar specific analytical functions.

Community-wide efforts to improve efficiency in mission execution, development of analytical tradecraft, and the associated tools or programs can be captured and codified for mass implementation or dissemination thereby improving the overall capability of the USIC analytical workforce.

Option #2:  Improve analyst education and training that increases knowledge, skills, and abilities to better manipulate current and existing tool suites, programs, and platforms.

Risk:  USIC components continue to provide function-specific training to their own analysts, while other agency partners do not benefit from agency-specific tools (due to inaccessibility or ignorance of the tools), thereby decreasing the analytical effectiveness of the community as a whole.  Worded differently, community-wide access to the full range of available analytical tools does not reach all levels or entities.

Analysts rely on unstructured or personal means, such as informal on-the-job training and personal networks, to learn to manipulate current tools, while possibly remaining unaware of or untrained on other analytical tools at their disposal elsewhere in the community.  Analysts rely on ingenuity and resolve to accomplish tasks but are not as efficient or effective as they could be.

Gain:  Analytical tradecraft unique to specific community members remains protected and inaccessible to analysts that do not possess required access, thereby preserving developed tradecraft.

Other Comments:  The author does not possess expert knowledge of collaborative spaces or the digital infrastructure required to create such an information environment; however, the author does have deployment experience as an end-user of several tool suites and programs utilized to perform specific analytical functions.  The majority of these were somewhat proliferated across a small portion of USIC components but were not widely accessible or available past basic search engine functions.

Recommendation:  None.


Endnotes:

[1] Treverton, G. F. (2016). New Tools for Collaboration. Retrieved March, 2016, from https://csis-prod.s3.amazonaws.com/s3fs-public/legacy_files/files/publication/160111_Treverton_NewTools_Web.pdf

[2] Office of the Director of National Intelligence IC IT Enterprise Fact Sheet. Retrieved March, 2017, from https://www.dni.gov/files/documents/IC%20ITE%20Fact%20Sheet.pdf
