Assessing the Impact of the Information Domain on the Classic Security Dilemma from Realist Theory

Scott Harr is a U.S. Army Special Forces officer with deployment and service experience throughout the Middle East.  He has contributed articles on national security and foreign policy topics to military journals and professional websites focusing on strategic security issues.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Impact of the Information Domain on the Classic Security Dilemma from Realist Theory

Date Originally Written:  September 26, 2020.

Date Originally Published:  December 2, 2020.

Author and / or Article Point of View:  The author believes that realist theory of international relations will have to take into account the weaponization of information in order to continue to be viable.

Summary:  The weaponization of information as an instrument of security has re-shaped the traditional security dilemma faced by nation-states under realist theory. While still subject to the anarchic ordering principle of realist thought, the Information Domain extends the classic security dilemma and layers it with new dynamics. These dynamics put liberal democracies on the defensive relative to authoritarian regimes.

Text:  According to realist theory, the Westphalian nation-state exists in a self-interested international community[1]. Because no binding international law exists, anarchy, as an ordering principle, characterizes the international environment: each nation-state, not knowing the intentions of those around it, is incentivized to provide for its own security and survival[2]. This self-help system differentiates insecure nations according to their capabilities to provide and project security. While this state-of-play within the international community holds the structure together, it also creates a classic security dilemma: the more each insecure state invests in its own security, the more such actions are interpreted as aggression by other insecure states, which initiates and perpetuates a never-ending cycle of escalating aggression among them[3]. Traditionally, the effects of the realist security dilemma have been observed and measured through arms races between nations or the general buildup of military capabilities. On the emerging battlefield of the 21st century, however, the Information Domain has been weaponized as both nation-states and non-state actors realize and leverage the power of information (and new ways to transmit it) to achieve security objectives. Many, like author Sean McFate, see the end of traditional warfare as these new methods captivate entities with security interests while altering and supplanting the traditional military means of waging conflict[4]. If the emergence and weaponization of information technology is changing the instruments of security, it is worth assessing how the realist security dilemma may be changing along with it.

One way to assess the Information Domain’s impact on the realist security dilemma is to examine the ordering principle that undergirds this dilemma. As mentioned above, the realist security dilemma hinges on the anarchic ordering principle of the international community that compels nations to invest militarily in security for their survival. Broadly, because no enforceable international law exists to uniformly regulate nation-state actions weaponizing information as a security tool, the anarchic ordering principle still holds. On closer inspection, however, while the anarchic ordering principle from realist theory remains intact, the weaponization of information creates a domain with distinctly different operating principles for nation-states existing in an anarchic international environment and using information as an instrument of security. Nation-states espousing liberal-democratic values operate on the premise that information should flow freely, largely uncontrolled and unregulated by government authority. For this reason, countries such as the United States do not have large-scale, monopolistic “state-run” information or media channels. Rather, information flows relatively unimpeded across social media, private news corporations, and print journalism. Countries that leverage the “freedom” operating principle implicitly rely on the strength and attractiveness of liberal-democratic values endorsing liberty and freedom as the centerpiece of their efforts in the Information Domain. The power of enticing ideals, they seem to say, is the best application of power within the Information Domain and the surest means to preserve security. Nevertheless, reliance on the “freedom” operating principle puts liberal democratic countries on the defensive when it comes to the security dimensions of the Information Domain.

In contrast to the “freedom” operating principle employed by liberal democratic nations in the Information Domain, nations with authoritarian regimes utilize an operating principle of “control” for information. According to authors Irina Borogan and Andrei Soldatov, when the photocopier first appeared in the Soviet Union in the mid-20th century, Russian authorities promptly seized the device and hid the technology deep within government archives to prevent its proliferation[5]. Plainly, the information-disseminating capabilities implied by the photocopier terrified the Russian authorities. Such paranoid efforts to control information have shaped the Russian approach to information technology through every new technological development, from the telephone to the computer and the internet. Since authoritarian regimes maintain tight control of information as their operating principle, they remain less concerned about adhering to liberal values and can thus assume a more offensive stance in the Information Domain. For this reason, the Russian use of information technology is characterized by wide-scale distributed denial-of-service attacks on opposition voices domestically and “patriot hackers” spreading disinformation internationally to achieve security objectives[6]. The plausible deniability surrounding information used in this way allows authoritarian regimes to skirt and obscure the ideological values cherished by liberal democracies under the “freedom” operating principle.

The realist security dilemma is far too durable to be abolished at the first sign of nation-states developing and employing new capabilities for security. But even if the weaponization of information has not abolished the classic realist dilemma, it has undoubtedly extended and complicated it by adding a new layer with new considerations. Whereas in the past the operating principles of nation-states addressing their security have been uniformly observed through the straightforward build-up of overtly military capabilities, the Information Domain, while preserving the anarchic ordering principle from realist theory, creates a new dynamic in which nation-states employ opposite operating principles in a much more subtle arena. Such dynamics create “sub-dilemmas” for liberal democracies put on the defensive in the Information Domain. As renowned realist scholar Kenneth Waltz notes, a democratic nation may have to “consider whether it would prefer to violate its code of behavior” (i.e. compromise its liberal democratic values) or “abide by its code and risk its survival[7].” This is the crux of the matter as democracies determine how to compete in the Information Domain and all the challenges it adds to the realist security dilemma: they must find a way to leverage the strength (and attractiveness) of their values in the Information Domain while not succumbing to the temptation to forsake those values and stoop to the level of adversaries. In sum, regarding the emerging operating principles, “freedom” is the harder right to “control’s” easier wrong. To forget this maxim is to sacrifice the foundations that liberal democracies hope to build upon in the international community.


Endnotes:

[1] Waltz, Kenneth. Realism and International Politics. New York: Taylor and Francis, 2008.

[2] Ibid, Waltz, Realism.

[3] Ibid, Waltz, Realism.

[4] McFate, Sean. The New Rules of War: Victory in the Age of Durable Disorder. New York: Harper Collins Press, 2019.

[5] Soldatov, Andrei and Borogan, Irina. The Red Web: The Struggle Between Russia’s Digital Dictators and the New Online Revolutionaries. New York: Perseus Books Group, 2015.

[6] Ibid, Soldatov.

[7] Waltz, Kenneth Neal. Man, the State, and War: A Theoretical Analysis. New York: Columbia University Press, 1959.


Assessment of Opportunities to Engage with the Chinese Film Market

Editor’s Note:  This article is part of our Below Threshold Competition: China writing contest, which took place from May 1, 2020 to July 31, 2020.


Irk is a freelance writer. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of Opportunities to Engage with the Chinese Film Market

Date Originally Written:  July 29, 2020.

Date Originally Published:  November 11, 2020.

Author and / or Article Point of View:  The author believes that the film industry remains a relatively underexploited channel that can be used to shape the soft power dynamic in the U.S.-China relationship.

Summary:  While China’s film industry has grown in recent years, the market for Chinese films remains primarily domestic. Access to China’s film market remains heavily restricted, allowing the Chinese Communist Party to craft a film industry that reinforces its values at home and abroad. However, the United States has opportunities to push for liberalization of the Chinese film market, which could contribute to long-term social and political change.

Text:  The highest-grossing Chinese film is 2017’s Wolf Warrior 2, which netted nearly $900 million globally. The problem, for the Chinese Communist Party (CCP), is that a mere 2% of this gross came from outside the country, a troubling pattern replicated across many of China’s most financially successful films[1]. Last year, PricewaterhouseCoopers predicted that the Chinese film market would surpass the United States’ (U.S.) in 2020, growing to a total value of $15.5 billion by 2023[2]. Despite tremendous growth by every metric – new cinema screens, films released, ticket revenue – the Chinese film industry has failed to market itself to the outside world[3].

This failure is not for lack of trying: film is a key aspect of China’s project to accumulate soft power in Africa[4], and may leave a significant footprint on the emergent film markets of many countries. The Chinese film offensive abroad has been paired with heavy-handed protectionism at home, fulfilling a desire to develop the domestic film industry and guard against the influence introduced by foreign films. In 1994 China instituted an annual quota on foreign films which has slowly crept upwards, sometimes being broken to meet growing demand[5]. Even so, the number of foreign films entering the Chinese market each year hovers between only 30 and 40. From the perspective of the CCP, there may be good reasons to be so conservative. In the U.S., research has indicated that some films may nudge audiences in ideological directions[6] or change their opinion of the government[7]. As might be expected, Chinese censorship targets concepts like “sex, violence, and rebellious individualism”[8]. While it remains difficult to draw any definite conclusions from this research, the threat is sufficient for the CCP to carefully monitor what sorts of messaging (and how much) it makes widely available for consumption. In India, economic liberalization was reflected in the values expressed by the most popular domestic films[9]; if messaging in film can be reflected in political attitudes, and political attitudes can be reflected in messaging in film, there is the possibility of a slow but consistent feedback loop creating serious social change. That is, unless the government clamps down on this relationship.

China’s “national film strategy” has gone largely uncountered by the U.S., in spite of its potential relevance to political change within the country. In 2018, Hollywood’s attempt to push quota liberalization was largely sidelined[10], and earlier this year the Independent Film & Television Alliance stated that little progress had been made since the start of the China-U.S. trade war[11]. Despite all this, 2018 revealed that quota liberalization is something China is willing to negotiate. This is an opportunity that could be exploited to begin seriously engaging with China’s approach to film.

In a reappraisal of common criticisms levied against Chinese engagement in the late 1990s and early 2000s, Alastair Iain Johnston of Harvard University notes that Chinese citizens with more connections to the outside world (facilitated by opening and reform) have developed “more liberal worldviews and are less nationalistic on average than older or less internationalized members of Chinese societies”[12]. The primary market for foreign films in China is this group of “internationalized” urban citizens, both those with higher disposable income in Guangdong, Zhejiang, Shanghai, Jiangsu, Beijing, and Tianjin[13] and those in non-coastal “Anchor Cities” which are integrated into transport networks and often boast international airports[14]. These demographics are both likely to be more amenable to the messaging in foreign films and capable of consuming them in large amounts.

During future trade negotiations, the U.S. could aggressively pursue the offered concession regarding film quotas, raising the cap as high as possible. In exchange, the United States Trade Representative could offer to revoke tariffs imposed since the start of the trade war. As an example, the “phase one” trade deal was able to secure commitments from China solely by promising not to impose further tariffs and by cutting a previous tariff package by 50%[15]. The commitments asked of China in that agreement are far more financially intensive than film market liberalization, but it is difficult to put a price tag on the ideological component of film. Even so, the CCP has demonstrated willingness to put the quota on the table, and this is an offer that could be explored as part of a strategy to effect change within China.

In addition to focusing on quota liberalization in trade negotiations, state and city governments in the U.S. could engage in local diplomacy to establish cultural exchange through film. In 2017, China initiated a China-Africa film festival[16], and a similar model could be pursued by local government in the U.S. The low appeal of Chinese films outside of China (compared to the high appeal of American films within China) means that the exchange would likely be a “net gain” for the U.S. in terms of cultural impression. Chinese localities with citizens more open to foreign film would have another avenue of engagement, while Chinese producers who wanted to take advantage of the opportunity to present in exclusive U.S. markets may have to adjust the overtones in their films, possibly shedding some nationalist messaging. Federal or local government could provide incentives for theaters to show films banned in China for failing to meet these messaging standards. Films like A Touch of Sin that have enjoyed critical acclaim within the U.S. could reach a wider audience and create an alternate current of Chinese film in opposition to CCP preference.

Disrupting the development of China’s film industry may provide an opportunity to initiate a process of long-term attitudinal change in a wealthy and open segment of the Chinese population. At the same time, increasing the market share of foreign films and creating countervailing notions of “the Chinese film” could make China’s soft power accumulation more difficult. Hollywood is intent on marketing to China; instead of forcing its studios to collaborate with Chinese censors, it may serve American strategic objectives to allow that competition to consume the Chinese market. If Chinese film producers adapt in response, they will have to shed certain limitations. Either way, slow-moving change will have taken root.


Endnotes:

[1] Magnan-Park, A. (2019, May 29). The global failure of cinematic soft power ‘with Chinese characteristics’. Retrieved July 29, 2020, from https://theasiadialogue.com/2019/05/27/the-global-failure-of-cinematic-soft-power-with-chinese-characteristics

[2] PricewaterhouseCoopers. (2019, June 17). Strong revenue growth continues in China’s cinema market. Retrieved July 29, 2020, from https://www.pwccn.com/en/press-room/press-releases/pr-170619.html

[3] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020, from https://chinapower.csis.org/chinese-films

[4] Wu, Y. (2020, June 24). How media and film can help China grow its soft power in Africa. Retrieved July 30, 2020, from https://theconversation.com/how-media-and-film-can-help-china-grow-its-soft-power-in-africa-97401

[5] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020, from https://chinapower.csis.org/chinese-films

[6] Glas, J. M., & Taylor, J. B. (2017). The Silver Screen and Authoritarianism: How Popular Films Activate Latent Personality Dispositions and Affect American Political Attitudes. American Politics Research, 46(2), 246-275. doi:10.1177/1532673x17744172

[7] Pautz, M. C. (2014). Argo and Zero Dark Thirty: Film, Government, and Audiences. PS: Political Science & Politics, 48(01), 120-128. doi:10.1017/s1049096514001656

[8] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020, from https://chinapower.csis.org/chinese-films

[9] Adhia, N. (2013). The role of ideological change in India’s economic liberalization. The Journal of Socio-Economics, 44, 103-111. doi:10.1016/j.socec.2013.02.015

[10] Li, P., & Martina, M. (2018, May 20). Hollywood’s China dreams get tangled in trade talks. Retrieved July 29, 2020, from https://www.reuters.com/article/us-usa-trade-china-movies/hollywoods-china-dreams-get-tangled-in-trade-talks-idUSKCN1IK0W0

[11] Frater, P. (2020, February 15). IFTA Says U.S. Should Punish China for Cheating on Film Trade Deal. Retrieved July 30, 2020, from https://variety.com/2020/film/asia/ifta-china-film-trade-deal-1203505171

[12] Johnston, A. I. (2019). The Failures of the ‘Failure of Engagement’ with China. The Washington Quarterly, 42(2), 99-114. doi:10.1080/0163660x.2019.1626688

[13] Figure 2.4 Urban per capita disposable income, by province, 2017. (n.d.). Retrieved July 30, 2020, from https://www.unicef.cn/en/figure-24-urban-capita-disposable-income-province-2017

[14] Liu, S., & Parilla, J. (2019, August 08). Meet the five urban Chinas. Retrieved July 30, 2020, from https://www.brookings.edu/blog/the-avenue/2018/06/19/meet-the-five-urban-chinas

[15] Lawder, D., Shalal, A., & Mason, J. (2019, December 14). What’s in the U.S.-China ‘phase one’ trade deal. Retrieved July 30, 2020, from https://www.reuters.com/article/us-usa-trade-china-details-factbox/whats-in-the-u-s-china-phase-one-trade-deal-idUSKBN1YH2IL

[16] Fei, X. (2017, June 19). China Africa International Film Festival to open in October. Retrieved July 30, 2020, from http://chinaplus.cri.cn/news/showbiz/14/20170619/6644.html


Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Marijn Pronk is a Master Student at the University of Glasgow, focusing on identity politics, propaganda, and technology. Currently Marijn is finishing her dissertation on the use of populist propaganda tactics by the Far-Right online. She can be found on Twitter @marijnpronk9. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Date Originally Written:  April 1, 2020.

Date Originally Published:  May 18, 2020.

Author and / or Article Point of View:  The Author is a Master Student in Security, Intelligence, and Strategic Studies at the University of Glasgow. The Author believes that a nuanced perspective towards the influence of Artificial Intelligence (AI) on technical communication services is paramount to understanding its threat.

Summary:  AI has greatly impacted communication technology worldwide. Computational propaganda is an example of the unregulated use of AI weaponized for malign political purposes. Botnets that distort online environments could affect voters’ behavior and democracies’ ability to function. However, this type of AI is currently limited to Big Tech companies and governmental powers.

Text:  A cornerstone of the democratic political structure is media; an unbiased, uncensored, and unaltered flow of information is paramount to the health of the democratic process. In a fluctuating political environment, digital spaces and technologies offer great platforms for political action and civic engagement[1]. Currently, more people use Facebook as their main source of news than any single news organization[2]. Therefore, manipulating the flow of information in the digital sphere poses a threat not only to the democratic values that the internet was founded upon, but also to the health of democracies worldwide. Imagine a world in which those pillars of democracy can be artificially altered, in which people can manipulate the digital information sphere, from the content of information to its exposure range. In this scenario, one would be unable to distinguish real from fake, making critical perspectives obsolete. One practical embodiment of this phenomenon is computational propaganda, which describes the process of digital misinformation and manipulation of public opinion via the internet[3]. Generally, these practices range from the fabrication of messages and the artificial amplification of certain information to the highly influential use of botnets (networks of software applications programmed to perform certain tasks). With the emergence of AI, computational propaganda could be enhanced, and the outcomes can become qualitatively better and more difficult to spot.

Computational propaganda is defined as “the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion[3].” AI has the power to enhance computational propaganda in various ways, such as increasing the amplification and reach of political disinformation through bots. Qualitatively, however, AI can also increase the sophistication and the automation quality of bots. AI already plays an intrinsic role in the gathering process, being used in datamining of individuals’ online activity and in monitoring and processing large volumes of online data. Datamining combines tools from AI and statistics to recognize useful patterns and handle large datasets[4]. These technologies and databases are often grounded in the digital advertising industry. With the help of AI, data collection can be made more targeted and thus more efficient.
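To make the mechanics concrete, the sketch below is a deliberately simplified, hypothetical illustration (not drawn from any cited source) of the kind of pattern recognition that advertising-derived datamining performs at vastly larger scale: it counts topic interactions per user in an invented engagement log and groups users into audience segments by their dominant interest, segments that could then be served tailored messaging.

```python
from collections import Counter

# Hypothetical engagement log: (user, topic the user interacted with).
# Real campaigns mine millions of such records from ad-tech databases.
engagements = [
    ("user_a", "immigration"), ("user_a", "immigration"), ("user_a", "economy"),
    ("user_b", "elections"),   ("user_b", "elections"),   ("user_b", "health"),
    ("user_c", "economy"),     ("user_c", "economy"),     ("user_c", "immigration"),
]

def dominant_interest(log):
    """Return each user's most frequently engaged topic."""
    per_user = {}
    for user, topic in log:
        per_user.setdefault(user, Counter())[topic] += 1
    return {user: counts.most_common(1)[0][0] for user, counts in per_user.items()}

def segment_audience(log):
    """Group users into audience segments by dominant interest."""
    segments = {}
    for user, topic in dominant_interest(log).items():
        segments.setdefault(topic, []).append(user)
    return segments

if __name__ == "__main__":
    # Each segment could then be targeted with tailored (dis)information.
    print(segment_audience(engagements))
    # {'immigration': ['user_a'], 'elections': ['user_b'], 'economy': ['user_c']}
```

The example is trivial by design; the point is that once behavioral data is collected, even simple statistics suffice to partition an audience for micro-targeted messaging.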

Concerning the malicious use of these techniques in the realm of computational propaganda, these improvements in AI can enhance “[…] the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently[4].” Botnets still rely largely on human input for their political messages, but AI can also improve the capabilities of bots interacting with humans online, making them seem more credible. Though the self-learning capabilities of some chat bots are relatively rudimentary, improved automation through computational propaganda tools aided by AI could be a powerful means to influence public opinion. The self-learning aspect of AI-powered bots and the increasing volume of data available for training give rise to concern: “[…] advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning, and computer vision algorithms are paving the way for the rise in AI-powered bots, that are faster, getting better at understanding human interaction and can even mimic human behaviour[5].” With this improved automation and data-gathering power, computational propaganda tools aided by AI could act more precisely by affecting the data-gathering process quantitatively and qualitatively. Consequently, this hyper-specialized data and the increasing credibility of bots online, owing to their growing contextual understanding, can greatly enhance the capabilities and effects of computational propaganda.

However, AI capabilities should be put in perspective in three areas: the data, the power of the AI, and the quality of the output. Starting with AI and data, technical knowledge is necessary in order to work with the massive databases used for audience targeting[6]. This quality of AI is within the capabilities of a nation-state or big corporations, but remains out of reach for the masses[7]. Secondly, the level of entrenchment and strength of AI will determine its final capabilities. One must distinguish between ‘narrow’ and ‘strong’ AI to assess the possible threat to society. Narrow AI is simply rule-based, meaning that data runs through multiple levels coded with algorithmic rules in order for the AI to come to a decision. Strong AI means that the model can learn from the data and adapt its set of pre-programmed rules itself, without human interference (this is called ‘Artificial General Intelligence’). Currently, such strong AI is still a concept of the future. Human labour still creates the content for the bots to distribute, simply because the AI is not powerful enough to think outside its pre-programmed box of rules and therefore cannot (yet) create its own content solely based on the data fed to the model[7]. So, computational propaganda depends on narrow AI, which requires a relatively large amount of high-quality data to yield accurate results; deviating from its programmed path or task severely degrades its effectiveness[8]. Thirdly, the propaganda produced by computational propaganda tools varies greatly in quality. The real danger lies in the quantity of information that botnets can spread. As for chatbots, which are supposed to be high quality and indistinguishable from humans, these models often fail when tried outside their training-data environments.
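As a hedged illustration of the ‘narrow,’ rule-based bots described above (a minimal hypothetical sketch, not any specific deployed system), the snippet below shows a keyword-driven reply bot: it performs convincingly on inputs its rules anticipate and falls back to an empty stock phrase as soon as a message strays outside its pre-programmed path, which is exactly the brittleness that distinguishes narrow AI from the strong AI discussed above.

```python
# A hypothetical "narrow AI" propaganda bot: a fixed rule table maps
# keywords to canned talking points. Nothing here is learned from data.
RULES = {
    "election": "The election results cannot be trusted.",
    "economy": "Only the Party's plan can fix the economy.",
    "protest": "The protesters are paid foreign agents.",
}

def reply(message: str) -> str:
    """Return a canned talking point if a rule matches, else fall back."""
    text = message.lower()
    for keyword, talking_point in RULES.items():
        if keyword in text:
            return talking_point
    # Outside its pre-programmed box of rules the bot has nothing to say.
    return "Interesting. Tell me more!"

if __name__ == "__main__":
    print(reply("Did you see the election coverage?"))        # on-script
    print(reply("What do you think about the new stadium?"))  # off-script fallback
```

Human operators still have to write the talking points; the automation only governs when and how often they are repeated.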

To address this emerging threat, first, policy changes across the media ecosystem are underway to mitigate the effects of disinformation[9]. Second, researchers have recently investigated the possibility of AI assisting in combating falsehoods and bots online[10]. One proposal is to build automated and semi-automated systems on the web, purposed for fact-checking and content analysis. Eventually, these bottom-up solutions will considerably help counter the effects of computational propaganda. Third, the influence that Big Tech companies have on these issues cannot be ignored, and their accountability for creating these problems, as well as their considerable power to mitigate them, will have to be addressed. Top-to-bottom co-operation between states and the public will be paramount. “The technologies of precision propaganda do not distinguish between commerce and politics. But democracies do[11].”
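One building block of the semi-automated content-analysis systems proposed above can be sketched with a simple posting-tempo heuristic. The example below is a toy, not a production bot-detection or fact-checking system; the thresholds and the account activity records are invented for illustration, and real detectors combine many more signals.

```python
from datetime import datetime, timedelta

# Thresholds below are invented for illustration only.
MAX_POSTS_PER_HOUR = 60          # few humans sustain a post per minute for long
MIN_SECONDS_BETWEEN_POSTS = 2    # near-instant gaps suggest automation

def looks_automated(timestamps):
    """Flag an account whose posting tempo exceeds plausible human behavior."""
    if len(timestamps) < 2:
        return False
    timestamps = sorted(timestamps)
    span = (timestamps[-1] - timestamps[0]).total_seconds() or 1
    rate_per_hour = len(timestamps) / (span / 3600)
    min_gap = min(
        (b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])
    )
    return rate_per_hour > MAX_POSTS_PER_HOUR or min_gap < MIN_SECONDS_BETWEEN_POSTS

if __name__ == "__main__":
    start = datetime(2020, 1, 1, 12, 0, 0)
    burst = [start + timedelta(seconds=i) for i in range(120)]      # one post per second
    casual = [start + timedelta(minutes=30 * i) for i in range(5)]  # every 30 minutes
    print(looks_automated(burst))   # True: inhuman tempo
    print(looks_automated(casual))  # False: plausible human activity
```

Heuristics like this are cheap to evade in isolation, which is why the research cited above pairs them with content analysis and human review rather than relying on any single signal.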


Endnotes:

[1] Vaccari, C. (2017). Online Mobilization in Comparative Perspective: Digital Appeals and Political Engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), pp. 69-88. doi:10.1080/10584609.2016.1201558

[2] Majo-Vazquez, S., & González-Bailón, S. (2018). Digital News and the Consumption of Political Information. In G. M. Forthcoming, & W. H. Dutton, Society and the Internet. How Networks of Information and Communication are Changing Our Lives (pp. 1-12). Oxford: Oxford University Press. doi:10.2139/ssrn.3351334

[3] Woolley, S. C., & Howard, P. N. (2018). Introduction: Computational Propaganda Worldwide. In S. C. Woolley, & P. N. Howard, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (pp. 1-18). Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.003.0001

[4] Wardle, C. (2018, July 6). Information Disorder: The Essential Glossary. Retrieved December 4, 2019, from First Draft News: https://firstdraftnews.org/latest/infodisorder-definitional-toolbox

[5] Dutt, D. (2018, April 2). Reducing the impact of AI-powered bot attacks. CSO. Retrieved December 5, 2019, from https://www.csoonline.com/article/3267828/reducing-the-impact-of-ai-powered-bot-attacks.html

[6] Bolsover, G., & Howard, P. (2017). Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda. Big Data, 5(4), pp. 273–276. doi:10.1089/big.2017.29024.cpr

[7] Chessen, M. (2017). The MADCOM Future: how artificial intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… and what can be done about it. Washington DC: The Atlantic Council of the United States. Retrieved December 4, 2019

[8] Davidson, L. (2019, August 12). Narrow vs. General AI: What’s Next for Artificial Intelligence? Retrieved December 11, 2019, from Springboard: https://www.springboard.com/blog/narrow-vs-general-ai

[9] Hassan, N., Li, C., Yang, J., & Yu, C. (2019, July). Introduction to the Special Issue on Combating Digital Misinformation and Disinformation. ACM Journal of Data and Information Quality, 11(3), 1-3. Retrieved December 11, 2019

[10] Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufacturing Consensus Online. Oxford, UK: Project on Computational Propaganda. Retrieved December 5, 2019

[11] Ghosh, D., & Scott, B. (2018, January). #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. Retrieved December 11, 2019, from New America: https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit
