Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Zac Rogers is an academic from Adelaide, South Australia. He has published in journals including International Affairs, The Cyber Defense Review, Joint Force Quarterly, and Australian Quarterly, and regularly communicates with a wider audience across various multimedia platforms. Parasitoid is his first book.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Date Originally Written:  January 3, 2022.

Date Originally Published:  January 17, 2022.

Author and / or Article Point of View:  The author is an Australia-based academic whose research combines a traditional grounding in national security, intelligence, and defence with emerging fields of social cybersecurity, digital anthropology, and democratic resilience.  The author works closely with industry and government partners across multiple projects. 

Summary:  Military investment in war-gaming, table-top exercises, scenario planning, and future force design is increasing.  Some of this investment focuses on adversary activities in the “cognitive domain.”  While this investment is necessary, it may fail if it anchors to data-driven machine learning and automation for both offensive and defensive purposes without a clear understanding of their appropriateness. 

Text:  In 2019 the author wrote a short piece for the U.S. Army’s Mad Scientist website titled “In the Cognitive War, the Weapon is You![1]” The article attempted to spur self-reflection by the national security, intelligence, and defence communities in Australia, the United States, Canada, Europe, and the United Kingdom.  At the time these communities were beginning to incorporate discussion of “cognitive” security and insecurity into their near-future threat assessments and future force design discourses. The article is cited in the North Atlantic Treaty Organization (NATO) Cognitive Warfare document of 2020[2], but either in ways that demonstrate the misunderstanding directly, or as part of a wider context in which the point of that particular title is thoroughly misinterpreted; the self-reflection the author sought has not been forthcoming. Instead, and not unexpectedly, the discourse on the cognitive aspects of contemporary conflict has consumed and regurgitated a familiar sequence of errors which, if not addressed head-on, will perpetuate rather than mitigate the problem.  

What the cognitive threat is

The primary cognitive threat is us[3]. The threat is driven by a combination of factors: first, the techno-futurist hubris that exists as a permanently recycling feature of late-modern military thought; second, a precipitous slide into scientism which military thinkers and the organisations they populate have not avoided[4]; third, the commercial and financial rent-seeking which overhangs military affairs as a by-product of private-sector-led R&D activities and of government dependence on and cultivation of those activities, increasingly since the 1990s[5]; and lastly, adversary awareness of these dynamics and an increasing willingness and capacity to manipulate and exacerbate them via the multitude of vulnerabilities ushered in by digital hyper-connectivity[6]. In other words, before the cognitive threat is an operational and tactical menace to be addressed and countered by the joint force, it is a central feature of the deteriorating epistemic condition of the late-modern societies in which those forces operate and from which their personnel, funding, R&D pathways, doctrine and operating concepts, epistemic communities, and strategic leadership emerge. 

What the cognitive threat is not   

The cognitive threat is not what adversary military organisations and their patrons are doing in and to the information environment with regard to activities other than kinetic military operations. Terms for adversarial activities occurring outside of conventional lethal/kinetic combat operations – such as the “grey-zone” and “below-the-threshold” – describe time-honoured tactics by which interlocutors engage in methods aimed at weakening and sowing dysfunction in the social and political fabric of competitor or enemy societies.  These tactics are used to gain advantage in areas not directly including military conflict, or in areas likely to be critical to military preparedness and mobilization in times of war[7]. A key stumbling block here is obvious: it is often difficult to know which intentions such tactics express. This is not cognitive warfare. It is merely typical of contending across and between cross-cultural communities, and of the permanent unwillingness of contending societies to accord with each other’s rules. Information warfare – particularly influence operations traversing the Internet and exploiting the dominant commercial operations found there – is part of this mix of activities which belong under the normal paradigm of competition between states for strategic advantage. Active measures – influence operations designed to self-perpetuate – have found fertile new ground on the Internet but are not new to the arsenals of intelligence services and, as Thomas Rid has warned, while they proliferate, they are more unpredictable and difficult to control than they were in the pre-Internet era[8]. None of this is cognitive warfare either. Unfortunately, current and recent discourse has lapsed into the error of treating it as such[9], leading to all manner of self-inflicted confusion[10]. 

Why the distinction matters

Two trends emerge from the abovementioned confusion which represent the most immediate threat to the military enterprise[11]. First, private-sector vendors and the consulting and lobbying industry they employ are busily pitching technological solutions based on machine learning and automation which were developed in commercial business settings where sensitivity to error is not high[12]. While militaries experiment with this raft of technologies, whether eager to be seen at the vanguard of emerging tech, to justify R&D budgets and stave off defunding, or simply out of habit, they incur an opportunity cost.  This cost takes two forms: stultified investment in the human potential which strategic thinkers have long identified as the real key to actualizing new technologies[13], and entry into path dependencies with behemoth corporate actors whose strategic goal is the cultivation of rentier relations, not excluding the ever-lucrative military sector[14]. 

Second, to the extent that automation and machine learning technologies enter the operational picture, cognitive debt accrues as the military enterprise becomes increasingly dependent on fallible tech solutions[15]. Under battle conditions, the first assumption is contestation of the electromagnetic spectrum, on which all digital information technologies depend for basic functionality. Automated data gathering and analysis tools rely heavily on data availability and integrity.  When these tools are unavailable, any joint multinational force will require multiple redundancies, not only in technology but, more importantly, in leadership and personnel competencies. It remains unclear where the military enterprise draws the line on the likely cost-benefit ratio of experimenting with automated machine learning tools and on the contexts in which they ought to be applied[16]. Unfortunately, experimentation is never cost-free. With civilian / military boundaries blurred to the extent they now are by the digital transformation of society, such experimentation requires consideration in light of all of its implications, including for the integrity and functionality of open democracy as the entity being defended[17]. 

The first error, misinterpreting the meaning and bounds of cognitive insecurity, is compounded by a second: what the military enterprise chooses to invest time, attention, and resources into tomorrow[18]. Path dependency, technological lock-in, and opportunity cost all loom large if digital information age threats are misinterpreted. This is the solipsistic nature of the cognitive threat at work – the weapon really is you! Putting oneself in the shoes of the adversary, nothing could be more pleasing than seeing that threat self-perpetuate. As a first step, militaries could organise and invest immediately in a strategic technology assessment capacity[19] free from the biases of rent-seeking vendors and lobbyists who, by definition, not only will not pay the costs of mission failure, but stand to benefit from the rentier-like dependencies that emerge as the military enterprise pays the corporate sector to play in the digital age. 


Endnotes:

[1] Zac Rogers, “158. In the Cognitive War – The Weapon Is You!,” Mad Scientist Laboratory (blog), July 1, 2019, https://madsciblog.tradoc.army.mil/158-in-the-cognitive-war-the-weapon-is-you/.

[2] Francois du Cluzel, “Cognitive Warfare” (Innovation Hub, 2020), https://www.innovationhub-act.org/sites/default/files/2021-01/20210122_CW%20Final.pdf.

[3] “us” refers primarily but not exclusively to the national security, intelligence, and defence communities taking up discourse on cognitive security and its threats including Australia, the U.S., U.K., Europe, and other liberal democratic nations. 

[4] Henry Bauer, “Science in the 21st Century: Knowledge Monopolies and Research Cartels,” Journal of Scientific Exploration 18 (December 1, 2004); Matthew B. Crawford, “How Science Has Been Corrupted,” UnHerd, December 21, 2021, https://unherd.com/2021/12/how-science-has-been-corrupted-2/; William A. Wilson, “Scientific Regress,” First Things, May 2016, https://www.firstthings.com/article/2016/05/scientific-regress; Philip Mirowski, Science-Mart (Harvard University Press, 2011).

[5] Dima P Adamsky, “Through the Looking Glass: The Soviet Military-Technical Revolution and the American Revolution in Military Affairs,” Journal of Strategic Studies 31, no. 2 (2008): 257–94, https://doi.org/10.1080/01402390801940443; Linda Weiss, America Inc.?: Innovation and Enterprise in the National Security State (Cornell University Press, 2014); Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths (Penguin UK, 2018).

[6] Timothy L. Thomas, “Russian Forecasts of Future War,” Military Review, June 2019, https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/MJ-19/Thomas-Russian-Forecast.pdf; Nathan Beauchamp-Mustafaga, “Cognitive Domain Operations: The PLA’s New Holistic Concept for Influence Operations,” China Brief, The Jamestown Foundation 19, no. 16 (September 2019), https://jamestown.org/program/cognitive-domain-operations-the-plas-new-holistic-concept-for-influence-operations/.

[7] See Peter Layton, “Social Mobilisation in a Contested Environment,” The Strategist, August 5, 2019, https://www.aspistrategist.org.au/social-mobilisation-in-a-contested-environment/; Peter Layton, “Mobilisation in the Information Technology Era,” The Forge (blog), N/A, https://theforge.defence.gov.au/publications/mobilisation-information-technology-era.

[8] Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare, Illustrated edition (New York: MACMILLAN USA, 2020).

[9] For example see Jake Harrington and Riley McCabe, “Detect and Understand: Modernizing Intelligence for the Gray Zone,” CSIS Briefs (Center for Strategic & International Studies, December 2021), https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/211207_Harrington_Detect_Understand.pdf?CXBQPSNhUjec_inYLB7SFAaO_8kBnKrQ; du Cluzel, “Cognitive Warfare”; Kimberly Underwood, “Cognitive Warfare Will Be Deciding Factor in Battle,” SIGNAL Magazine, August 15, 2017, https://www.afcea.org/content/cognitive-warfare-will-be-deciding-factor-battle; Nicholas D. Wright, “Cognitive Defense of the Joint Force in a Digitizing World” (Pentagon Joint Staff Strategic Multilayer Assessment Group, July 2021), https://nsiteam.com/cognitive-defense-of-the-joint-force-in-a-digitizing-world/.

[10] Zac Rogers and Jason Logue, “Truth as Fiction: The Dangers of Hubris in the Information Environment,” The Strategist, February 14, 2020, https://www.aspistrategist.org.au/truth-as-fiction-the-dangers-of-hubris-in-the-information-environment/.

[11] For more on this see Zac Rogers, “The Promise of Strategic Gain in the Information Age: What Happened?,” Cyber Defense Review 6, no. 1 (Winter 2021): 81–105.

[12] Rodney Brooks, “An Inconvenient Truth About AI,” IEEE Spectrum, September 29, 2021, https://spectrum.ieee.org/rodney-brooks-ai.

[13] Michael Horowitz and Casey Mahoney, “Artificial Intelligence and the Military: Technology Is Only Half the Battle,” War on the Rocks, December 25, 2018, https://warontherocks.com/2018/12/artificial-intelligence-and-the-military-technology-is-only-half-the-battle/.

[14] Jathan Sadowski, “The Internet of Landlords: Digital Platforms and New Mechanisms of Rentier Capitalism,” Antipode 52, no. 2 (2020): 562–80, https://doi.org/10.1111/anti.12595.

[15] For problematic example see Ben Collier and Lydia Wilson, “Governments Try to Fight Crime via Google Ads,” New Lines Magazine (blog), January 4, 2022, https://newlinesmag.com/reportage/governments-try-to-fight-crime-via-google-ads/.

[16] Zac Rogers, “Discrete, Specified, Assigned, and Bounded Problems: The Appropriate Areas for AI Contributions to National Security,” SMA Invited Perspectives (NSI Inc., December 31, 2019), https://nsiteam.com/discrete-specified-assigned-and-bounded-problems-the-appropriate-areas-for-ai-contributions-to-national-security/.

[17] Emily Bienvenue and Zac Rogers, “Strategic Army: Developing Trust in the Shifting Strategic Landscape,” Joint Force Quarterly 95 (November 2019): 4–14.

[18] Zac Rogers, “Goodhart’s Law: Why the Future of Conflict Will Not Be Data-Driven,” Grounded Curiosity (blog), February 13, 2021, https://groundedcuriosity.com/goodharts-law-why-the-future-of-conflict-will-not-be-data-driven/.

[19] For expansion see Zac Rogers and Emily Bienvenue, “Combined Information Overlay for Situational Awareness in the Digital Anthropological Terrain: Reclaiming Information for the Warfighter,” The Cyber Defense Review, no. Summer Edition (2021), https://cyberdefensereview.army.mil/Portals/6/Documents/2021_summer_cdr/06_Rogers_Bienvenue_CDR_V6N3_2021.pdf?ver=6qlw1l02DXt1A_1n5KrL4g%3d%3d.


Options to Counter Foreign Influence Operations Targeting Servicemembers and Veterans

Marcus Laird has served in the United States Air Force. He presently works at Headquarters Air Force Reserve Command as a Strategic Plans and Programs Officer. He can be found on Twitter @USLairdForce.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.  Divergent Options is not affiliated with the Department of Defense or the U.S. Air Force. The following opinion is the author’s alone and is not official Air Force or Department of Defense policy. This publication was reviewed by AFRC/PA and is cleared for public release and unlimited distribution.


National Security Situation:  Foreign Actors are using Social Media to influence Servicemember and Veteran communities. 

Date Originally Written:  December 2, 2021.

Date Originally Published:  January 3, 2022.

Author and / or Article Point of View:  The author is a military member who has previously researched the impact of social media on US military internal dialogue for professional military education and graduate courses. 

Background:  During the lead-up to the 2016 election, members of the U.S. Army Reserve were specifically targeted at least ten times by advertisements on Facebook purchased by Russia’s Internet Research Agency[1]. In 2017, the Vietnam Veterans of America (VVA) also detected social media profiles which were sophisticated mimics of its official web pages. These pages were created for several reasons, including identity theft, fraud, and disseminating disinformation favorable to Russia. Further investigation revealed a network of fake personas attempting to make inroads within online military and veteran communities in order to bolster persona credibility and spread disinformation. Because these mimics used VVA logos, VVA was able to have the pages deplatformed after two months on grounds of trademark infringement[2].  

Alternatively, some military influencers, after building a substantial following, have chosen to sell their personas as a means of monetizing their social media brands. While foreign adversary networks have not incorporated this technique for building an audience, buying a persona is essentially an opportunity to acquire a turnkey information operation platform. 

Significance:  Servicemembers and veterans are trusted voices within their communities on matters of national security. The special trust society places on these communities makes them a particularly lucrative target for an adversary seeking to influence public opinion and shape policy debates[3]. Social media is optimized for advertising, allowing specific demographics to be targeted with unprecedented precision. Unchecked, adversaries can use this capability to sow mistrust, degrade unit cohesion, and spread disinformation through advertisements, mimicking legitimate organizations, or purchasing a trusted persona. 

Option #1:  Closing Legislative Loopholes 

Currently, foreign entities are prohibited from directly contributing to campaigns. However, there is no legal prohibition on foreign entities purchasing advertising for the purpose of influencing elections. Using legislative means to close this loophole would deny adversaries the ability to abuse platforms’ microtargeting capabilities for political influence[4].

Risk:  Enforcement – As evidenced during inquiries into election interference, enforcement could prove difficult. Enforcement relies on good faith efforts by platforms to conduct internal assessments of sophisticated actors’ affiliations and intentions and report them. Additionally, government agencies have neither backend system access nor adequate resources to forensically investigate every potential instance of foreign advertising.

Gain:  Such a solution would protect society as a whole, to include the military and veteran communities. Legislation would include reporting and data retention requirements for platforms, allowing for earlier detection of potential information operations. Ideally, regulation would prompt platforms to tailor their content moderation standards around political advertising to create additional barriers for foreign entities.  

Option #2:  Deplatforming on the Grounds of Trademark Infringement

Should a foreign adversary attempt to use sophisticated mimicry of official accounts to achieve a veneer of credibility, then the government may elect to request a platform remove a user or network of users on the basis of trademark infringement. This technique was successfully employed by the VVA in 2017. Military services have trademark offices, which license the use of their official logos and can serve as focal points for removing unauthorized materials[5].

Risk:  Resources – since trademark offices are self-funded and rely on royalties for operations, they may not be adequately resourced to challenge large-scale trademark infringement by foreign actors.

Personnel – personnel in trademark offices may not have adequate training to determine whether or not a U.S. person or a foreign entity is using the organization’s trademarked materials. Failure to adequately delineate between U.S. persons and foreign actors when requesting to deplatform a user potentially infringes upon civil liberties. 

Gain:  Developing agency response protocols using existing intellectual property laws ensures responses are coordinated between the government and platforms as opposed to a pickup game during an ongoing operation. Regular deplatforming can also help develop signatures for sophisticated mimicry, allowing for more rapid detection and mitigation by the platforms. 

Option #3:  Subject the Sale of Influence Networks to Review by the Committee on Foreign Investment in the United States (CFIUS) 

Inform platform owners of the intent of CFIUS to review the sale of all influence networks and credentials which specifically market to military and veteran communities. CFIUS review has been used to prevent the acquisition of applications by foreign entities. Specifically, in 2019 CFIUS retroactively reviewed the purchase of Grindr, an LGBTQ+ dating application, due to national security concerns about the potential for the Chinese firm Kunlun to pass sensitive data to the Chinese government.  Data associated with veteran and servicemember social networks could be similarly protected[6]. 

Risk:  Enforcement – Due to the large number of influencers and the lack of knowledge of the scope of the problem, enforcement may be difficult in real time. In the event a sale happens, then ex post facto CFIUS review would provide a remedy.  

Gain:  Such a notification should prompt platforms to craft governance policies around the sale and transfer of personas to allow for more transparency and reporting.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1] Goldsmith, K. (2020). An Investigation Into Foreign Entities Who Are Targeting Servicemembers and Veterans Online. Vietnam Veterans of America. Retrieved September 17, 2019, from https://vva.org/trollreport/, 108.

[2] Ibid, 6-7.

[3] Gallacher, J. D., Barash, V., Howard, P. N., & Kelly, J. (2018). Junk news on military affairs and national security: Social media disinformation campaigns against us military personnel and veterans. arXiv preprint arXiv:1802.03572.

[4] Wertheimer, F. (2019, May 28). Loopholes allow foreign adversaries to legally interfere in U.S. elections. Just Security. Retrieved December 10, 2021, from https://www.justsecurity.org/64324/loopholes-allow-foreign-adversaries-to-legally-interfere-in-u-s-elections/.

[5] Air Force Trademark Office. (n.d.). Retrieved December 3, 2021, from https://www.trademark.af.mil/Licensing/Applications.aspx.

[6] Kara-Pabani, K., & Sherman, J. (2021, May 11). How a Norwegian government report shows the limits of Cfius Data Reviews. Lawfare. Retrieved December 10, 2021, from https://www.lawfareblog.com/how-norwegian-government-report-shows-limits-cfius-data-reviews.


Analyzing Social Media as a Means to Undermine the United States

Michael Martinez is a consultant who specializes in data analysis, project management, and community engagement. He has a M.S. in Intelligence Management from University of Maryland University College. He can be found on Twitter @MichaelMartinez. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Analyzing Social Media as a Means to Undermine the United States

Date Originally Written:  November 30, 2021.

Date Originally Published:  December 27, 2021.

Author and / or Article Point of View:  The author believes that social media is not inherently good nor bad, but a tool to enhance discussion. Unless the national security apparatus understands how to best utilize Open Source Intelligence to achieve its stated goals, i.e. engaging the public on social media and public forums, it will lag behind its adversaries in this space.

Summary:  Stopping online radicalization of all varieties is complex and involves the individual, the government, social media companies, and Internet Service Providers. Artificial intelligence reviewing information online and flagging potential threats may not be adequate. Only through public-private partnerships can an effective system be created to support anti-radicalization endeavors.

Text:  The adage, “If you’re not paying for the product, you are the product[1],” has never been truer than in the age of social media. Every user’s click and purchase are recorded by private entities such as Facebook and Twitter. These records can be utilized by other nations to gather information on the United States economy and intellectual property, as well as information on government personnel and agencies. This data can be packaged together and used to inform operations that prey on U.S. personnel.  Examples include extortion through ransomware, an adversary intelligence service probing an employee for specific national information by appealing to their subject matter expertise, and online influence / radicalization.

It is crucial to accept that the United States and its citizens are more heavily reliant on social media than ever before. Social media entities such as Meta (formerly Facebook) have new and yet-to-be-released products for children (i.e., the “Instagram for Kids” product), enabling adversaries to prey upon potential targets of any age. Terrorist organizations such as Al-Qaeda utilize cartoons on outlets like YouTube and Instagram to entice vulnerable youth to carry out attacks or to help radicalize potential suicide bombers[2]. 

While Facebook and YouTube are the most common among most age groups, TikTok and Snapchat have undergone a meteoric rise among youth under thirty[3]. Intelligence services and terrorist organizations have vastly improved their online recruiting techniques, including video and media, as the platforms themselves have become just as sophisticated. Unless federal, state, and local governments strengthen their public-private partnerships to stay ahead of growth in next-generation social media platforms, this adversary behavior will continue.  The national security community has tools at its disposal to help protect Americans from being coerced into cybercrime, or from being radicalized by overseas entities such as the Islamic State to potentially carry out domestic attacks.

To counter such trends in social media radicalization, the National Institute of Justice (NIJ) worked with the National Academies to identify traits and agendas whose understanding could facilitate disruption of these efforts. Identified needs include functional databases, consideration of links between terrorism and lesser crimes, and exploration of the culture of terrorism, including its structure and goals[4]. While a solid federal infrastructure and deterrence mechanism is vital, it is also important for the social media platforms themselves to eliminate radical media that may influence at-risk individuals. 

According to the NIJ, several characteristics contribute to social media radicalization: being unemployed, being a loner, having a criminal history, having a history of mental illness, and having prior military experience[5]. These are only potential factors and do not apply to all who are radicalized[6]. However, they do provide a base from which to begin investigation and mitigation strategies. 

As a long-term solution, the Bipartisan Policy Center recommends enacting and teaching media literacy so that users can understand and spot internet radicalization[7]. Social media algorithms are not foolproof. They require the cyberspace equivalent of “see something, say something”: users reporting suspicious activity to the platforms. The risk of these companies not acting is also significant, as their main goal is monetization, and acting in this manner does not help them make more money. This inaction is where the government steps in to ensure that private enterprise is not impeding national security. 

Creating a system that works will require balancing the rights of the individual with the national security of the United States, while also respecting the rights of private enterprise and of the pipelines that carry information to homes, the Internet Service Providers. Until such a system is created, the radicalization of Americans will remain a pitfall for the entire national security apparatus. 


Endnotes:

[1] Oremus, W. (2018, April 27). Are You Really the Product? Retrieved on November 15, 2021, from https://slate.com/technology/2018/04/are-you-really-facebooks-product-the-history-of-a-dangerous-idea.html. 

[2] Thompson, R. (2011). Radicalization and the Use of Social Media. Journal of Strategic Security, 4(4), 167–190. http://www.jstor.org/stable/26463917 

[3] Pew Research Center. (2021, April 7). Social Media Use in 2021. Retrieved from https://www.pewresearch.org/internet/2021/04/07/social-media-use-in-2021/ 

[4] Aisha Javed Qureshi, “Understanding Domestic Radicalization and Terrorism,” August 14, 2020, nij.ojp.gov: https://nij.ojp.gov/topics/articles/understanding-domestic-radicalization-and-terrorism.

[5] The National Counterintelligence and Security Center. Intelligence Threats & Social Media Deception. Retrieved November 15, 2021, from https://www.dni.gov/index.php/ncsc-features/2780-ncsc-intelligence-threats-social-media-deception. 

[6] Schleffer, G., & Miller, B. (2021). The Political Effects of Social Media Platforms on Different Regime Types. Austin, TX. Retrieved November 29, 2021, from http://dx.doi.org/10.26153/tsw/13987. 

[7] Bipartisan Policy Center. (2012, December). Countering Online Radicalization in America. Retrieved November 29, 2021, from https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2019/03/BPC-_Online-Radicalization-Report.pdf 


Assessing Russian Use of Social Media as a Means to Influence U.S. Policy

Alex Buck is a currently serving officer in the Canadian Armed Forces. He has deployed twice to Afghanistan, once to Ukraine, and is now working towards an MA in National Security.  Alex can be found on Twitter @RCRbuck.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessing Russian Use of Social Media as a Means to Influence U.S. Policy

Date Originally Written:  August 29, 2021.

Date Originally Published:  December 13, 2021.

Author and / or Article Point of View: The author believes that without appropriate action, the United States’ political climate will continue to be exploited by Russian influence campaigns. These campaigns will have broad impacts across the Western world, and potentially generate an increased competitive advantage for Russia.

Summary:  To achieve a competitive advantage over the United States, Russia uses social media-based influence campaigns to influence American foreign policy. Political polarization makes the United States an optimal target for such campaigns. 

Text:  Russia aspires to regain the influence over the international system that it once had as the Soviet Union. To achieve this aim, Russia’s interest lies in building a stronger economy and expanding its regional influence over Eastern Europe[1]. Following the Cold War, Russia recognized that these national interests were at risk of being completely destroyed by Western influence. The Russian economy was threatened by the United States’ unipolar hegemony over the global economy[2]. A strong North Atlantic Treaty Organization (NATO) has threatened Russia’s regional influence in Eastern Europe. NATO’s collective security agreement was originally conceived to counter the Soviet threat following World War II and continues to do so to this day. Through the late 1990s and early 2000s, NATO expanded its membership to include former Soviet states in Eastern Europe, in an effort to reduce Russian regional influence[1]. Russia perceives these actions as a threat to its survival as a state, and needs a method to regain competitive advantage.

Following the Cold War, Russia began to identify opportunities it could exploit to increase its competitive advantage in the international system. One of those opportunities emerged in the early 2000s with the rise of social media. During this time, social media began to impact American culture so significantly that it could not be ignored. Social media has two significant effects on society. First, it causes people to form very dense clusters of social connections. Second, these clusters are populated by very similar types of people[3]. Together, these effects produced a divided social structure and an extremely polarized political system in the United States. Russia viewed these conditions as ripe for exploitation, and sees U.S. social media as a cost-effective medium through which to exert influence on the United States.

In the late 2000s, Russia began experimenting with its concept of exploiting the cyber domain as a means of exerting influence on other nation-states. After the successful use of cyber operations against Ukraine, Estonia, Georgia, and again Ukraine in 2004, 2007, 2008, and 2014 respectively, Russia was poised to attempt the concept against the United States and NATO[4]. In 2014, Russia slowly built a network of social media accounts that would eventually begin sowing disinformation amongst American social media users[3]. The significance of the Russian information campaign leading up to the 2016 U.S. presidential election cannot be overstated. The Russian Internet Research Agency propagated roughly 10.4 million tweets on Twitter, 76.5 million engagements on Facebook, and 187 million engagements on Instagram[5]. Although this may seem like a small-scale effort in the context of the roughly 200 billion tweets sent annually, the targeted nature of the tweets contributed to their effectiveness. This Russian social media campaign was estimated to expose between 110 and 130 million American social media users to misinformation aimed at skewing the results of the presidential election[3]. For perspective, the 2000 presidential election was decided by 537 votes in the state of Florida. To swing an election that close, a Russian information campaign would only need to change the votes of roughly 0.00049% of those exposed.
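The scale claim above can be sanity-checked with quick arithmetic. The exposure figure and vote margin are taken from the text; the resulting percentage is a back-of-the-envelope illustration of required persuasion rates, not an estimate of actual campaign effectiveness:

```python
# Back-of-the-envelope check of the figures cited above.
exposed_users = 110_000_000   # low end of the estimated exposure range
florida_margin = 537          # margin of victory in Florida, 2000

# Fraction of exposed users who would need to change their vote
# to match a 537-vote swing.
required_rate = florida_margin / exposed_users
print(f"{required_rate:.7%}")  # prints 0.0004882%
```

Rounded, this matches the ~0.00049% figure: an almost immeasurably small persuasion rate would suffice in an election decided by such a margin.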

The bifurcated nature of the current American political arena has created the perfect target for Russian attacks via the cyber domain. Due to the persistently slim margins of electoral results, Russia will continue to exploit this opportunity until it achieves its national aims and gains a competitive advantage over the United States. Social media influence offers Russia a cost-effective and highly impactful tool that has the potential to sway American policies in its favor. Without coherent strategies to protect national networks and decrease Russian social influence, the United States, and the broader Western world, will continue to be subject to Russian influence campaigns.


Endnotes:

[1] Arakelyan, L. A. (2017). Russian Foreign Policy in Eurasia: National Interests and Regional Integration (1st ed.). Routledge. https://doi.org/10.4324/9781315468372

[2] Blank, S. (2008). Threats to and from Russia: An Assessment. The Journal of Slavic Military Studies, 21(3), 491–526. https://doi.org/10.1080/13518040802313746

[3] Aral, S. (2020). The hype machine: How social media disrupts our elections, our economy, and our health–and how we must adapt (First edition). Currency.

[4] Geers, K. & NATO Cooperative Cyber Defence Centre of Excellence. (2015). Cyber war in perspective: Russian aggression against Ukraine. https://www.ccdcoe.org/library/publications/cyber-war-in-perspective-russian-aggression-against-ukraine/

[5] DiResta, R., Shaffer, K., Ruppel, B., Sullivan, D., & Matney, R. (2019). The Tactics & Tropes of the Internet Research Agency. US Senate Documents.


Assessing a Situation where the Mission is a Headline

Samir Srivastava is serving in the Indian Armed Forces. The views expressed and suggestions made in the article are solely of the author in his personal capacity and do not have any official endorsement.  Divergent Options' content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing a Situation where the Mission is a Headline

Date Originally Written:  July 5, 2021.

Date Originally Published:  July 26, 2021.

Author and / or Article Point of View:  The author is serving with the Indian Armed Forces.   The article is written from the point of view of India in its prevailing environment.

Summary:  While headlines in news media describe the outcome of military operations, in this information age, the world could now be heading towards a situation where military operations are the outcome of a desired headline.  In such situations, goals can be achieved by taking into account assured success, the target audience, connectivity in a retaliatory context, verifiability, and deniability.

Text:  When nations fight each other, there will be news media headlines. Through various mediums and platforms, headlines will travel to everyone – the belligerents, their allies and supporters, and also neutral parties. Conflict will be presented as a series of headlines culminating in one headline that describes the final outcome. Thus, when operations happen, headlines also happen. Yet to be considered is the case where an operation is planned and executed to make a headline happen.

In nation versus nation conflict, the days of large scale wars are certainly not over, but as trends suggest these will be more the exception than the rule. The future war will in all likelihood be fought without a formal war declaration and will be quite localised. The world has seen wars where each side endeavours to prevail upon the adversary's bodies and materiel, but greater emphasis is already being laid on prevailing upon the enemy's mind. In that case, a decision will be required regarding which objective is being pursued – attrition, territory, or just a headline.

Today, a military operation is more often than not planned at the strategic level and executed at the tactical level. This model is likely to become the norm, because if a strategic outcome is achievable through a standalone tactical action, there is no reason to let the fight grow bigger and more costly in blood and treasure. The Balakot airstrike[1] by the Indian Air Force is a case in point. Over two years have passed since that strike, but there is nothing to show a change in the attitude of Pakistan, which continues to harbour terrorists on its soil who may well be plotting the next strike on India. What has endured, however, is the headlines of February 26-28, 2019, which carried different messages for different people, including one for Pakistan.

Unlike propaganda, where a story is made out of nothing, if the mission is to make a headline, then that particular operation will have taken place on the ground. In this context, Headline Selection and Target Selection are two sides of the same coin, but the former is the driving force. Beyond this, success is enhanced by taking into account the probability of success, the target audience, connectivity in a retaliatory context, verifiability, and deniability.

Without assured success, the outcome will be a mismatch between the desired headline and the selected target. Taking an example from the movies: in the 1997 film "Tomorrow Never Dies[2]," the entire plot focuses on the protagonist, Agent 007, spoiling the antagonist Carver's scheme of creating headlines to be beamed by his media network. Once a shot is fired or ordnance is dropped, there will be a headline, and it is best to make sure it is the desired one.

Regarding the target audience, it is not necessary that an event gains the interest of the masses. The recipient population may be receptive, non-receptive or simply indifferent.  A headline focused on  the largest receptive group who can further propagate it has the best chance of success. 

If the operation is carried out in a retaliatory context, it is best to connect the enemy action and the friendly reaction. For example, while cyber-attacks or economic sanctions may be an apt response to an armed attack, the likelihood of achieving the desired headline is enhanced if there is something connecting the two: action and reaction.

The headline will have much more impact if the event and its effects can be easily verified, preferably by neutral agencies and individuals. A perfect headline is one that an under-resourced freelance journalist can easily report. To that end, targets in inaccessible locations, or at places that do not strike a chord with the intended audience, will be of little use. No amount of satellite photos can match one reporter on the ground.

The headline cannot lend itself to any possibility of denial, because even a feeble denial can lead to credibility being questioned. It therefore goes without saying that the choice of target and mode of attack should leave no room for denial. During U.S. Operation NEPTUNE SPEAR[3], the raid on Osama bin Laden's compound in Abbottabad, Pakistan, the first sliver of publicly available information was a tweet by someone nearby. That tweet may well have closed any avenue for denial by Pakistan or Al Qaeda.

A well thought out headline can be the starting point when planning an operation or even a campaign. This vision of a headline, however, needs different thinking tempered with a lot of imagination and creativity. Pre-planned headlines, an understanding of the expertise of journalists, and having platforms at the ready can all be of value.

Every field commander, division and above, should have some pre-planned headlines that their organization can create if given the opportunity. These headlines include both national headlines flowing from the higher commander's intent and local headlines that are more focused on the immediate engagement area.

There is benefit to be gained from the expertise of journalists – both Indian and Foreign. Their practical experience will be invaluable when deciding on the correct headline and pinpointing a target audience. Journalists are already seen in war zones and media rooms as reporters, and getting them into the operations room as planners is worthy of consideration.

An array of reporters, platforms, and mediums can be kept ready to carry the desired headline far and wide. Freelance journalists in foreign countries, coupled with the internet, will be a potent combination. In addition, the military's public information organization cannot succeed in this new reality without restructuring.

Every battle in military history has the name of some commander attached to it: Hannibal crossing the Alps, U.S. General George S. Patton's exploits during the Battle of the Bulge, and Indian Colonel Desmond Hayde in the Battle of Dograi. The day is not far when some field commander will etch his or her name in history fighting the Battle of the Headline or, more aptly, the Battle for the Headline.


Endnotes:

[1] BBC. (2019, February 26). Balakot: Indian air strikes target militants in Pakistan. BBC News. https://www.bbc.com/news/world-asia-47366718.

[2] IMDb.com. (1997, December 19). Tomorrow Never Dies. IMDb. https://www.imdb.com/title/tt0120347.

[3] Olson, P. (2011, August 11). Man Inadvertently Live Tweets Osama Bin Laden Raid. Forbes. https://www.forbes.com/sites/parmyolson/2011/05/02/man-inadvertently-live-tweets-osama-bin-laden-raid.


Assessing the Impact of the Information Domain on the Classic Security Dilemma from Realist Theory

Scott Harr is a U.S. Army Special Forces officer with deployment and service experience throughout the Middle East.  He has contributed articles on national security and foreign policy topics to military journals and professional websites focusing on strategic security issues.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Impact of the Information Domain on the Classic Security Dilemma from Realist Theory

Date Originally Written:  September 26, 2020.

Date Originally Published:  December 2, 2020.

Author and / or Article Point of View:  The author believes that realist theory of international relations will have to take into account the weaponization of information in order to continue to be viable.

Summary:  The weaponization of information as an instrument of security has re-shaped the traditional security dilemma faced by nation-states under realist theory. While yielding to the anarchic ordering principle from realist thought, the information domain also extends the classic security dilemma and layers it with new dynamics. These dynamics put liberal democracies on the defensive compared to authoritarian regimes.

Text:  According to realist theory, the Westphalian nation-state exists in a self-interested international community[1]. Because of the lack of binding international law, anarchy, as an ordering principle, characterizes the international environment as each nation-state, not knowing the intentions of those around it, is incentivized to provide for its own security and survival[2]. This self-help system differentiates insecure nations according to their capabilities to provide and project security. While this state-of-play within the international community holds the structure together, it also creates a classic security dilemma: the more each insecure state invests in its own security, the more such actions are interpreted as aggression by other insecure states, which initiates and perpetuates a never-ending cycle of escalating aggression amongst them[3]. Traditionally, the effects of the realist security dilemma have been observed and measured through arms races between nations or the general buildup of military capabilities. In the emerging battlefield of the 21st century, however, states have weaponized the Information Domain as both nation-states and non-state actors realize and leverage the power of information (and new ways to transmit it) to achieve security objectives. Many, like author Sean McFate, see the end of traditional warfare as these new methods captivate entities with security interests while altering and supplanting the traditional military means to wage conflict[4]. If the emergence and weaponization of information technology is changing the instruments of security, it is worth assessing how the realist security dilemma may be changing along with it.

One way to assess the Information Domain’s impact on the realist security dilemma is to examine the ordering principle that undergirds this dilemma. As mentioned above, the realist security dilemma hinges on the anarchic ordering principle of the international community that drives (compels) nations to militarily invest in security for their survival. Broadly, because no (enforceable) international law exists to uniformly regulate nation-state actions weaponizing information as a security tool, the anarchic ordering principle still exists. However, on closer inspection, while the anarchic ordering principle from realist theory remains intact, the weaponization of information creates a domain with distinctly different operating principles for nation-states existing in an anarchic international environment and using information as an instrument of security. Nation-states espousing liberal-democratic values operate on the premise that information should flow freely and (largely) uncontrolled or regulated by government authority. For this reason, countries such as the United States do not have large-scale and monopolistic “state-run” information or media channels. Rather, information is, relatively, free to flow unimpeded on social media, private news corporations, and print journalism. Countries that leverage the “freedom” operating principle for information implicitly rely on the strength and attractiveness of liberal-democratic values endorsing liberty and freedom as the centerpiece for efforts in the information domain. The power of enticing ideals, they seem to say, is the best application of power within the Information Domain and surest means to preserve security. Nevertheless, reliance on the “freedom” operating principle puts liberal democratic countries on the defensive when it comes to the security dimensions of the information domain.

In contrast to the "freedom" operating principle employed by liberal democratic nations in the information domain, nations with authoritarian regimes utilize an operating principle of "control" for information. According to authors Irina Borogan and Andrei Soldatov, when the photocopier was first invented in Russia in the early 20th century, Russian authorities promptly seized the device and hid the technology deep within government archives to prevent its proliferation[5]. Plainly, the information-disseminating capabilities implied by the photocopier terrified the Russian authorities. Such paranoid efforts to control information have shaped the Russian approach to information technology through every new technological development from the telephone, computer, and internet. Since authoritarian regimes maintain tight control of information as their operating principle, they remain less concerned about adhering to liberal values and can thus assume a more offensive stance in the information domain. For this reason, the Russian use of information technology is characterized by wide-scale distributed denial-of-service attacks on opposition voices domestically and "patriot hackers" spreading disinformation internationally to achieve security objectives[6]. Plausible deniability surrounding information used in this way allows authoritarian regimes to skirt and obscure the ideological values cherished by liberal democracies under the "freedom" ordering principle.

The realist security dilemma is far too durable to be abolished at the first sign of nation-states developing and employing new capabilities for security. But even as the weaponization of information has not abolished the classic realist dilemma, it has undoubtedly extended and complicated it by adding a new layer with new considerations. Whereas in the past the operating principles of nation-states addressing their security have been uniformly observed through the straightforward build-up of overtly military capabilities, the information domain, while preserving the anarchic ordering principle from realist theory, creates a new dynamic where nation-states employ opposite operating principles in the much-more-subtle Information Domain. Such dynamics create "sub-dilemmas" for liberal democracies put on the defensive in the Information Domain. As renowned realist scholar Kenneth Waltz notes, a democratic nation may have to "consider whether it would prefer to violate its code of behavior" (i.e. compromise its liberal democratic values) or "abide by its code and risk its survival[7]." This is the crux of the matter as democracies determine how to compete in the Information Domain and all the challenges it adds to the realist security dilemma: they must find a way to leverage the strength (and attractiveness) of their values in the Information Domain while not succumbing to temptations to forsake those values and stoop to the levels of adversaries. In sum, regarding the emerging operating principles, "freedom" is the harder right to "control's" easier wrong. To forget this maxim is to sacrifice the foundations that liberal democracies hope to build upon in the international community.


Endnotes:

[1] Waltz, Kenneth. Realism and International Politics. New York: Taylor and Francis, 2008.

[2] Ibid, Waltz, Realism.

[3] Ibid, Waltz, Realism.

[4] McFate, Sean. The New Rules of War: Victory in the Age of Durable Disorder. New York: Harper Collins Press, 2019.

[5] Soldatov, Andrei and Borogan, Irina. The Red Web: The Struggle Between Russia’s Digital Dictators and the New Online Revolutionaries. New York: Perseus Books Group, 2015.

[6] Ibid, Soldatov.

[7] Waltz, Kenneth Neal. Man, the State, and War: A Theoretical Analysis. New York: Columbia University Press, 1959.


Assessment of Opportunities to Engage with the Chinese Film Market

Editor’s Note:  This article is part of our Below Threshold Competition: China writing contest which took place from May 1, 2020 to July 31, 2020.  More information about the contest can be found by clicking here.


Irk is a freelance writer. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of Opportunities to Engage with the Chinese Film Market

Date Originally Written:  July 29, 2020.

Date Originally Published:  November 11, 2020.

Author and / or Article Point of View:  The author believes that the film industry remains a relatively underexploited channel that can be used to shape the soft power dynamic in the U.S.-China relationship.

Summary:  While China’s film industry has grown in recent years, the market for Chinese films remains primarily domestic. Access to China’s film market remains heavily restricted, allowing the Chinese Communist Party to craft a film industry that can reinforce its values at home and abroad. However, there are opportunities for the United States to liberalize the Chinese film market which could contribute to long-term social and political change.

Text:  The highest-grossing Chinese film is 2017's Wolf Warrior 2, netting nearly $900 million globally. For the Chinese Communist Party (CCP), the only problem is that a mere 2% of this gross came from outside the country, a troubling pattern replicated across many of China's most financially successful films[1]. Last year, PricewaterhouseCoopers predicted that the Chinese film market would surpass that of the United States (U.S.) in 2020, growing to a total value of $15.5 billion by 2023[2]. Despite tremendous growth by every metric – new cinema screens, films released, ticket revenue – the Chinese film industry has failed to market itself to the outside world[3].

This failure is not for lack of trying: film is a key aspect of China's project to accumulate soft power in Africa[4], and may leave a significant footprint on the emergent film markets in many countries. The Chinese film offensive abroad has been paired with heavy-handed protectionism at home, fulfilling a desire to develop the domestic film industry and guard against the influence introduced by foreign films. In 1994 China instituted an annual quota on foreign films, which has slowly crept upwards and has sometimes been broken to meet growing demand[5]. Even so, the number of foreign films entering the Chinese market each year floats between only 30 and 40. From the perspective of the CCP, there may be good reasons to be so conservative. In the U.S., research has indicated that some films may nudge audiences in ideological directions[6] or change their opinion of the government[7]. As might be expected, Chinese censorship targets concepts like "sex, violence, and rebellious individualism"[8]. While it remains difficult to draw any definite conclusions from this research, the threat is sufficient for the CCP to carefully monitor what sorts of messaging (and how much) it makes widely available for consumption. In India, economic liberalization was reflected in the values expressed by the most popular domestic films[9] – if messaging in film can be reflected in political attitudes, and political attitudes can be reflected in messaging in film, there is the possibility of a slow but consistent feedback loop creating serious social change. That is, unless the government clamps down on this relationship.

China’s “national film strategy” has gone largely un-countered by the U.S., in spite of its potential relevance to political change within the country. In 2018, Hollywood’s attempt to push quota liberalization was largely sidelined[10] and earlier this year the Independent Film & Television Alliance stated that little progress had been made since the start of the China-U.S. trade war[11]. Despite all this, 2018 revealed that quota liberalization was something China was willing to negotiate. This is an opportunity which could be exploited in order to begin seriously engaging with China’s approach to film.

In a reappraisal of common criticisms levied against Chinese engagement in the late 1990s and early 2000s, Alastair Iain Johnston of Harvard University notes that Chinese citizens with more connections to the outside world (facilitated by opening and reform) have developed “more liberal worldviews and are less nationalistic on average than older or less internationalized members of Chinese societies”[12]. The primary market for foreign films in China is this group of “internationalized” urban citizens, both those with higher disposable income in Guangdong, Zhejiang, Shanghai, Jiangsu, Beijing, and Tianjin[13] and those in non-coastal “Anchor Cities” which are integrated into transport networks and often boast international airports[14]. These demographics are both likely to be more amenable to the messaging in foreign films and capable of consuming them in large amounts.

During future trade negotiations, the U.S. could aggressively pursue the offered concession regarding film quotas, raising the cap as high as possible. In exchange, the United States Trade Representative could offer to revoke tariffs imposed since the trade war. As an example, the "phase one" trade deal was able to secure commitments from China solely by promising not to impose further tariffs and cutting a previous tariffs package by 50%[15]. The commitments asked of China in this agreement are far more financially intensive than film market liberalization, but it is difficult to put a price tag on the ideological component of film. Even so, the party has demonstrated willingness to put the quota on the table, and this is an offer that could be explored as part of a strategy to effect change within China.

In addition to focusing on quota liberalization in trade negotiations, state and city governments in the U.S. could engage in local diplomacy to establish cultural exchange through film. In 2017, China initiated a China-Africa film festival[16], and a similar model could be pursued by local government in the U.S. The low appeal of Chinese films outside of China (compared to the high appeal of American films within China) means that the exchange would likely be a “net gain” for the U.S. in terms of cultural impression. Chinese localities with citizens more open to foreign film would have another avenue of engagement, while Chinese producers who wanted to take advantage of the opportunity to present in exclusive U.S. markets may have to adjust the overtones in their films, possibly shedding some nationalist messaging. Federal or local government could provide incentives for theaters to show films banned in China for failing to meet these messaging standards. Films like A Touch of Sin that have enjoyed critical acclaim within the U.S. could reach a wider audience and create an alternate current of Chinese film in opposition to CCP preference.

Disrupting the development of China’s film industry may provide an opportunity to initiate a process of long-term attitudinal change in a wealthy and open segment of the Chinese population. At the same time, increasing the market share of foreign films and creating countervailing notions of “the Chinese film” could make China’s soft power accumulation more difficult. Hollywood is intent on marketing to China; instead of forcing them to collaborate with Chinese censors, it may serve American strategic objectives to allow competition to consume the Chinese market. If Chinese film producers adapt in response, they will have to shed certain limitations. Either way, slow-moving change will have taken root.


Endnotes:

[1] Magnan-Park, A. (2019, May 29). The global failure of cinematic soft power ‘with Chinese characteristics’. Retrieved July 29, 2020, from https://theasiadialogue.com/2019/05/27/the-global-failure-of-cinematic-soft-power-with-chinese-characteristics

[2] PricewaterhouseCoopers. (2019, June 17). Strong revenue growth continues in China’s cinema market. Retrieved July 29, 2020, from https://www.pwccn.com/en/press-room/press-releases/pr-170619.html

[3] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020 from
https://chinapower.csis.org/chinese-films

[4] Wu, Y. (2020, June 24). How media and film can help China grow its soft power in Africa. Retrieved July 30, 2020, from https://theconversation.com/how-media-and-film-can-help-china-grow-its-soft-power-in-africa-97401

[5] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020 from
https://chinapower.csis.org/chinese-films

[6] Glas, J. M., & Taylor, J. B. (2017). The Silver Screen and Authoritarianism: How Popular Films Activate Latent Personality Dispositions and Affect American Political Attitudes. American Politics Research, 46(2), 246-275. doi:10.1177/1532673x17744172

[7] Pautz, M. C. (2014). Argo and Zero Dark Thirty: Film, Government, and Audiences. PS: Political Science & Politics, 48(01), 120-128. doi:10.1017/s1049096514001656

[8] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020 from
https://chinapower.csis.org/chinese-films

[9] Adhia, N. (2013). The role of ideological change in India’s economic liberalization. The Journal of Socio-Economics, 44, 103-111. doi:10.1016/j.socec.2013.02.015

[10] Li, P., & Martina, M. (2018, May 20). Hollywood’s China dreams get tangled in trade talks. Retrieved July 29, 2020, from https://www.reuters.com/article/us-usa-trade-china-movies/hollywoods-china-dreams-get-tangled-in-trade-talks-idUSKCN1IK0W0

[11] Frater, P. (2020, February 15). IFTA Says U.S. Should Punish China for Cheating on Film Trade Deal. Retrieved July 30, 2020, from https://variety.com/2020/film/asia/ifta-china-film-trade-deal-1203505171

[12] Johnston, A. I. (2019). The Failures of the ‘Failure of Engagement’ with China. The Washington Quarterly, 42(2), 99-114. doi:10.1080/0163660x.2019.1626688

[13] Figure 2.4 Urban per capita disposable income, by province, 2017. (n.d.). Retrieved July 30, 2020, from https://www.unicef.cn/en/figure-24-urban-capita-disposable-income-province-2017

[14] Liu, S., & Parilla, J. (2019, August 08). Meet the five urban Chinas. Retrieved July 30, 2020, from https://www.brookings.edu/blog/the-avenue/2018/06/19/meet-the-five-urban-chinas

[15] Lawder, D., Shalal, A., & Mason, J. (2019, December 14). What’s in the U.S.-China ‘phase one’ trade deal. Retrieved July 30, 2020, from https://www.reuters.com/article/us-usa-trade-china-details-factbox/whats-in-the-u-s-china-phase-one-trade-deal-idUSKBN1YH2IL

[16] Fei, X. (2017, June 19). China Africa International Film Festival to open in October. Retrieved July 30, 2020, from http://chinaplus.cri.cn/news/showbiz/14/20170619/6644.html


Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Marijn Pronk is a Master's student at the University of Glasgow, focusing on identity politics, propaganda, and technology. Currently Marijn is finishing her dissertation on the use of populist propagandistic tactics by the Far-Right online. She can be found on Twitter @marijnpronk9. Divergent Options' content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Date Originally Written:  April 1, 2020.

Date Originally Published:  May 18, 2020.

Author and / or Article Point of View:  The author is a Master’s student in Security, Intelligence, and Strategic Studies at the University of Glasgow. The author believes that a nuanced perspective on the influence of Artificial Intelligence (AI) on communication technologies is paramount to understanding its threat.

Summary:  AI has greatly impacted communication technology worldwide. Computational propaganda, the unregulated use of AI weaponized for malign political purposes, is a case in point. The distortion of online environments through botnets could erode voters’ ability to judge information and, with it, democracies’ ability to function. However, this type of AI is currently limited to Big Tech companies and governmental powers.

Text:  A cornerstone of the democratic political structure is media: an unbiased, uncensored, and unaltered flow of information is paramount to the health of the democratic process. In a fluctuating political environment, digital spaces and technologies offer great platforms for political action and civic engagement[1]. Currently, more people use Facebook as their main source of news than any news organization’s own channels[2]. Manipulating the flow of information in the digital sphere therefore threatens not only the democratic values the internet was founded upon, but also the health of democracies worldwide. Imagine a world where those pillars of democracy can be artificially altered, where people can manipulate the digital information sphere, from the content of information to its reach. In such a scenario, one would be unable to distinguish real from fake, rendering critical perspectives obsolete. One practical embodiment of this phenomenon is computational propaganda: the digital misinformation and manipulation of public opinion via the internet[3]. These practices range from the fabrication of messages and the artificial amplification of certain information to the highly influential use of botnets (networks of software applications programmed to perform certain tasks). With the emergence of AI, computational propaganda could be enhanced, and its outcomes could become qualitatively better and more difficult to spot.
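The amplification mechanism described above can be illustrated with a toy simulation (all numbers and messages are invented for the example): even a small botnet, reposting one message at high volume, can make that message dominate a naive frequency-based ‘trending’ ranking.

```python
import random
from collections import Counter

# Toy illustration only: a small botnet artificially amplifies one
# message so it dominates the feed, even though organic users post a
# diverse set of messages.
random.seed(42)

ORGANIC_MESSAGES = ["policy A", "policy B", "policy C", "policy D"]
BOT_MESSAGE = "policy A"  # the message the botnet pushes

def simulate_feed(n_organic_users, n_bots, posts_per_bot):
    """Return a Counter of message frequencies in the simulated feed."""
    feed = [random.choice(ORGANIC_MESSAGES) for _ in range(n_organic_users)]
    feed += [BOT_MESSAGE] * (n_bots * posts_per_bot)
    return Counter(feed)

organic_only = simulate_feed(1000, 0, 0)
with_botnet = simulate_feed(1000, 50, 20)  # 50 bots, 20 reposts each

# Share of the feed occupied by the amplified message:
share_before = organic_only[BOT_MESSAGE] / sum(organic_only.values())
share_after = with_botnet[BOT_MESSAGE] / sum(with_botnet.values())
print(f"organic share: {share_before:.0%}, with botnet: {share_after:.0%}")
```

With four equally likely organic messages, the amplified message moves from roughly a quarter of the feed to a clear majority once the botnet posts are added, without a single additional human being persuaded.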

Computational propaganda is defined as ‘’the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion[3].‘’ AI can enhance computational propaganda in various ways: quantitatively, by increasing the amplification and reach of political disinformation through bots, and qualitatively, by increasing the sophistication and automation of those bots. AI already plays an intrinsic role in the data-gathering process, being used to mine individuals’ online activity and to monitor and process large volumes of online data. Datamining combines tools from AI and statistics to recognize useful patterns and handle large datasets[4]. These technologies and databases are often grounded in the digital advertising industry. With the help of AI, data collection can be made more targeted and thus more efficient.
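As a rough illustration of the pattern recognition the text attributes to datamining, the sketch below clusters invented user-engagement vectors into audience segments with a plain k-means loop. It is a minimal stand-in under stated assumptions, not any platform’s actual system.

```python
import math

# Invented features per user: (hours online per day,
#                              political posts shared per week)
USERS = [(1.0, 0.5), (1.2, 0.7), (0.8, 0.3),   # low-engagement users
         (5.5, 9.0), (6.0, 8.5), (5.8, 9.5)]   # high-engagement users

def kmeans(points, k=2, iters=20):
    """Plain k-means: returns final centroids and cluster assignments."""
    # Deterministic initialization: first and last points as seeds.
    centroids = [points[0], points[-1]]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        assign = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members)
                                     for x in zip(*members))
    return centroids, assign

centroids, assign = kmeans(USERS)
print(assign)  # the two engagement groups land in separate segments
```

Once users are segmented this way, each segment can be targeted with different content, which is the efficiency gain the paragraph above describes.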

Concerning the malicious use of these techniques in the realm of computational propaganda, these improvements in AI can enhance ‘’[..] the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently[4].’’ Botnets still rely considerably on human input for their political messages, but AI can also improve bots’ capability to interact with humans online, making them seem more credible. Though the self-learning capabilities of some chatbots are relatively rudimentary, improved automation through AI-aided computational propaganda tools could be a powerful means of influencing public opinion. The self-learning aspect of AI-powered bots, together with the increasing volume of data available for training, gives rise to concern. ‘’[..] advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning, and computer vision algorithms are paving the way for the rise in AI-powered bots, that are faster, getting better at understanding human interaction and can even mimic human behaviour[5].’’ With this improved automation and data-gathering power, AI-aided computational propaganda tools could act more precisely, improving the data-gathering process both quantitatively and qualitatively. Consequently, this hyper-specialized data, combined with the growing credibility of bots online due to better contextual understanding, can greatly enhance the capabilities and effects of computational propaganda.

However, these AI capabilities should be put in perspective in three areas: the data, the power of the AI, and the quality of the output. Starting with AI and data: technical knowledge is necessary to work with the massive databases used for audience targeting[6]. AI of this quality is within the capabilities of a nation-state or a big corporation, but remains out of reach for the masses[7]. Secondly, the level of entrenchment and strength of AI will determine its final capabilities. One must distinguish between ‘narrow’ and ‘strong’ AI to assess the possible threat to society. Narrow AI is simply rule-based: data runs through multiple levels coded with algorithmic rules in order for the AI to reach a decision. Strong AI means that the model can learn from the data and adapt its pre-programmed set of rules itself, without human interference (this is called ‘Artificial General Intelligence’). Currently, such strong AI is still a concept of the future. Human labour still creates the content for the bots to distribute, simply because AI is not yet powerful enough to think outside its pre-programmed box of rules and therefore cannot (yet) create its own content solely based on the data fed to the model[7]. Computational propaganda is thus dependent on narrow AI, which requires a relatively large amount of high-quality data to yield accurate results; deviating from its programmed path or task severely reduces its effectiveness[8]. Thirdly, the propaganda produced by computational propaganda tools varies greatly in quality, and the real danger lies in the quantity of information that botnets can spread. As for chatbots, which are supposed to be high quality and indistinguishable from humans, these models often fail when tested outside their training-data environments.
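The brittleness of narrow, rule-based systems described above can be made concrete with a small sketch: a hard-coded keyword classifier handles inputs that match its rules but misses a paraphrase with the same meaning. All keywords and messages here are invented for the example.

```python
# Toy "narrow AI": a fixed rule set with no learning. It flags a
# message if and only if the message contains a hard-coded keyword.
PROPAGANDA_KEYWORDS = {"traitor", "enemy", "destroy", "invasion"}

def rule_based_flag(message):
    """Flag a message iff it contains one of the hard-coded keywords."""
    words = set(message.lower().split())
    return bool(words & PROPAGANDA_KEYWORDS)

in_domain = "they are the enemy and will destroy us"      # matches the rules
out_of_domain = "those people are foes who will ruin us"  # same meaning, no keywords

print(rule_based_flag(in_domain))      # True
print(rule_based_flag(out_of_domain))  # False: the rules cannot generalize
```

The second message carries the same hostile meaning, but because no pre-programmed rule covers its wording, the system misses it entirely, which is exactly the in-distribution dependence the paragraph describes.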

To address this emerging threat, policy changes across the media ecosystem are underway to mitigate the effects of disinformation[9]. Secondly, researchers have recently investigated the possibility of AI assisting in combating falsehoods and bots online[10]. One proposal is to build automated and semi-automated systems on the web for fact-checking and content analysis. Eventually, such bottom-up solutions could considerably help counter the effects of computational propaganda. Thirdly, the influence that Big Tech companies have on these issues cannot be negated; they will have to be held accountable both for their role in creating these problems and for their power to mitigate them. Co-operation between states and the public, from top to bottom, will be paramount. ‘’The technologies of precision propaganda do not distinguish between commerce and politics. But democracies do[11].’’
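One of the semi-automated countermeasures mentioned above can be sketched as a simple coordination check: flagging accounts that post near-identical messages, a common botnet signature. The account names, posts, and threshold below are invented for the example; real systems use far richer signals.

```python
from collections import defaultdict

# Invented sample feed: three accounts repeat one normalized message,
# two accounts post organically.
POSTS = [
    ("acct_001", "Vote no on the referendum, it is a scam!"),
    ("acct_002", "Vote no on the referendum, it is a scam!"),
    ("acct_003", "vote NO on the referendum, it is a scam!"),
    ("human_a",  "Interesting debate about the referendum tonight."),
    ("human_b",  "I have not decided how to vote yet."),
]

def coordinated_accounts(posts, threshold=3):
    """Flag accounts whose normalized message was posted by >= threshold accounts."""
    by_text = defaultdict(set)
    for account, text in posts:
        by_text[text.lower().strip()].add(account)
    flagged = set()
    for accounts in by_text.values():
        if len(accounts) >= threshold:
            flagged |= accounts
    return flagged

print(sorted(coordinated_accounts(POSTS)))
# ['acct_001', 'acct_002', 'acct_003']
```

Heuristics like this catch the quantity-driven side of computational propaganda cheaply, while leaving harder judgments about content to human fact-checkers, matching the semi-automated division of labour proposed in the literature cited above.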


Endnotes:

[1] Vaccari, C. (2017). Online Mobilization in Comparative Perspective: Digital Appeals and Political Engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), pp. 69-88. doi:10.1080/10584609.2016.1201558

[2] Majo-Vazquez, S., & González-Bailón, S. (2018). Digital News and the Consumption of Political Information. In G. M. Forthcoming, & W. H. Dutton, Society and the Internet. How Networks of Information and Communication are Changing Our Lives (pp. 1-12). Oxford: Oxford University Press. doi:10.2139/ssrn.3351334

[3] Woolley, S. C., & Howard, P. N. (2018). Introduction: Computational Propaganda Worldwide. In S. C. Woolley, & P. N. Howard, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (pp. 1-18). Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.003.0001

[4] Wardle, C. (2018, July 6). Information Disorder: The Essential Glossary. Retrieved December 4, 2019, from First Draft News: https://firstdraftnews.org/latest/infodisorder-definitional-toolbox

[5] Dutt, D. (2018, April 2). Reducing the impact of AI-powered bot attacks. CSO. Retrieved December 5, 2019, from https://www.csoonline.com/article/3267828/reducing-the-impact-of-ai-powered-bot-attacks.html

[6] Bolsover, G., & Howard, P. (2017). Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda. Big Data, 5(4), pp. 273–276. doi:10.1089/big.2017.29024.cpr

[7] Chessen, M. (2017). The MADCOM Future: how artificial intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… and what can be done about it. Washington DC: The Atlantic Council of the United States. Retrieved December 4, 2019

[8] Davidson, L. (2019, August 12). Narrow vs. General AI: What’s Next for Artificial Intelligence? Retrieved December 11, 2019, from Springboard: https://www.springboard.com/blog/narrow-vs-general-ai

[9] Hassan, N., Li, C., Yang, J., & Yu, C. (2019, July). Introduction to the Special Issue on Combating Digital Misinformation and Disinformation. ACM Journal of Data and Information Quality, 11(3), 1-3. Retrieved December 11, 2019

[10] Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufacturing Consensus Online. Oxford, UK: Project on Computational Propaganda. Retrieved December 5, 2019

[11] Ghosh, D., & Scott, B. (2018, January). #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. Retrieved December 11, 2019, from New America: https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit
