Assessment of the People’s Republic of China’s United Front Work Department, its Impact on Taiwan’s National Security, and Strategies to Combat Foreign Interference

Editor’s Note:  This article is part of our 2023 Writing Contest called The Taiwan Offensive, which took place from March 1, 2023 to July 31, 2023.  More information about the contest can be found by clicking here.

Heath Sloane is a research analyst based in London, UK, and a Master’s graduate of Peking University’s Yenching Academy. He has worked for the Middle East Media Research Institute, where his research included Chinese strategic affairs. His research on Chinese and China-Middle East / North Africa affairs has been translated and published in several leading international affairs publications. He can be found on Twitter at @HeathSloane.


Title:  Assessment of the People’s Republic of China’s United Front Work Department, its Impact on Taiwan’s National Security, and Strategies to Combat Foreign Interference

Date Originally Written:  July 10, 2023.

Date Originally Published:  August 28, 2023.

Author and / or Article Point of View:  The author believes that the People’s Republic of China (PRC) United Front Work Department’s (UFWD) interference in Taiwan constitutes a political warfare offensive. 

Summary:  The PRC’s UFWD threatens democracies, particularly Taiwan, by exploiting the openness inherent to democratic societies. The UFWD combines military and non-military tactics in its offensive against Taiwan’s institutions. Taiwan’s countermeasures include legislation, education, and a state-civil society collaboration. Unless democracies remain vigilant in their defense against foreign interference, the UFWD will continue to be effective.

Text:  The intricate tableau of global politics is marked by the fluctuating interplay of national interests, aspirations, and stratagems. One of the most prominent actors on this stage, the PRC, guided by General Secretary Xi Jinping, boasts an expansive and complex political infrastructure. Among its numerous political entities, the UFWD — an integral component of the Chinese Communist Party (CCP) — emerges as an entity of particular concern due to its amalgamation of both military and non-military tactics[1]. The UFWD’s mode of operation poses a severe challenge to democratic nations across the globe, particularly those upholding the principles of freedom of speech, freedom of the press, and the cultivation of a dynamic civil society. 

The strength of the UFWD lies in its ability to exploit the inherent characteristics of democratic systems. Unlike the PRC’s command economy and authoritarian political structure, democratic nations embrace a liberal ethos that allows substantial latitude in civil society. This democratic openness becomes a significant point of exploitation for the UFWD[2]. Consequently, comprehending the inner workings, methodologies, and objectives of the UFWD is a critical requirement for the democratic world in crafting an effective and proportionate response.

Taiwan, due to its unique historical bonds and political interplay with the PRC, finds itself at the epicentre of the UFWD’s operations. This positioning transforms Taiwan into an invaluable case study in unravelling the dynamics of foreign interference and devising counter-interference measures. Accordingly, this extensive analysis endeavours to explore Taiwan’s responses to the UFWD’s activities, extrapolate the broader geopolitical implications, and offer viable countermeasures for the global democratic community.

Under Xi Jinping’s leadership, the UFWD has metamorphosed from a predominantly domestic entity into an apparatus deeply embedded in the PRC’s foreign policy machinery[3]. This transformation is epitomised by the growth in the number of pro-CCP organisations operating in democratic nations worldwide, coupled with the escalating use of disinformation campaigns during critical political junctures. Such activities underscore the expanded global reach of the UFWD and highlight its potential to disrupt the democratic processes of various nations.

Yet, Taiwan has refused to be a mere spectator in the face of the UFWD’s interference. Taiwan’s Political Warfare Bureau, an institution harking back to Taiwan’s more authoritarian past, has effectively countered the UFWD’s aggressive manoeuvres[4]. Over the years, this bureau has undergone considerable reforms to better align with Taiwan’s democratic norms, values, and institutions. This transformation has strengthened its capabilities to protect Taiwan’s democratic institutions from the covert activities of the UFWD.

Education serves as the cornerstone of Taiwan’s defence against the UFWD. The educational initiatives, geared towards the dual objectives of demystifying the ideology and tactics that drive the UFWD’s operations, and proliferating awareness about these operations among the military and civilian populations, empower Taiwanese society with the knowledge and tools to recognise and resist UFWD interference. Given the multifarious nature of the UFWD’s operations — which include political donations, espionage, and the establishment of pro-CCP cultural and social organisations[5] — gaining an in-depth understanding of its diverse strategies is crucial for effecting a robust and sustained counteraction.

In conjunction with education, Taiwan’s Political Warfare Bureau has orchestrated a nationwide coordination of counter-interference initiatives. This broad-based network extends across the country’s civil society and national defence infrastructure, fostering an unprecedented level of collaboration between a wide array of national institutions. Regular briefings on UFWD activities, rigorous training programs, and the promotion of cross-institutional collaborations form the lynchpin of this response mechanism.

In the face of the UFWD’s interference, inaction or complacency could lead to dire consequences for Taiwan and democratic societies worldwide. The UFWD’s sophisticated tactics, flexibility, and adaptability make it a formidable adversary. In the absence of proactivity, the road may be paved for deeper and more disruptive infiltration into the political, social, and cultural landscapes of democracies. As such, the development of vigilant, comprehensive, and proactive countermeasures is of paramount importance[6].

Reflecting on Taiwan’s experiences and strategic responses, there is more that democratic nations could do to enhance democratic resilience against the UFWD. Democratic nations could delineate a clear legal definition of ‘foreign interference’ and incorporate this definition into the structural frameworks of relevant state institutions, providing a solid legal foundation for counter-interference initiatives. Additionally, the concept of foreign interference could be integrated into national educational curricula, giving citizens the knowledge needed to identify and resist such activities. Finally, systematic training on identifying and countering foreign interference could be made mandatory for all military personnel and staff within relevant state institutions.

Further, democracies could consider the establishment of a publicly accessible monitoring centre, working in conjunction with national defence bodies, civil society organisations, and other institutions to identify, monitor, and publicise instances of foreign interference. The transparent and fact-based disclosure of individuals and organisations linked to the UFWD would enable citizens and institutions within democracies to be responsive to malign elements in their midst.

The PRC’s UFWD poses a significant challenge to Taiwan’s national security and, more broadly, to democratic societies worldwide. Nonetheless, Taiwan’s experience in grappling with this entity offers a wealth of insights into devising effective counter-interference strategies. As the global geopolitical landscape continues to evolve, the UFWD’s reach continues to extend, necessitating democracies to remain vigilant, adaptable, and proactive in safeguarding their national security and democratic processes from foreign interference. The task ahead is daunting, but the stakes are high, and the preservation of democratic values and structures necessitates that no effort be spared.


Endnotes:

[1] Brady, A. M. (2017). Magic weapons: China’s political influence activities under Xi Jinping. Wilson Center, from https://www.wilsoncenter.org/article/magic-weapons-chinas-political-influence-activities-under-xi-jinping

[2] Gill, B., & Schreer, B. (2018). Countering China’s “United Front”. The Washington Quarterly, 41(2), 155-170, from https://doi.org/10.1080/0163660X.2018.1485323 

[3] Suzuki, T. (2019). China’s United Front Work in the Xi Jinping era–institutional developments and activities. Journal of Contemporary East Asia Studies, 8(1), 83-98, from https://doi.org/10.1080/24761028.2019.1627714 

[4] Blanchette, J., Livingston, S., Glaser, B., & Kennedy, S. (2021). Protecting democracy in an age of disinformation: lessons from Taiwan, from https://www.csis.org/analysis/protecting-democracy-age-disinformation-lessons-taiwan

[5] Joske, A. (2022). Spies and Lies. Hardie Grant Publishing.

[6] Gershaneck, K. K. (2019). Under Attack: Recommendations for Victory in the PRC’s Political War to Destroy the ROC. 復興崗學報, (114), 1-40, from https://www.fhk.ndu.edu.tw/uploads/1562309764098tuX1wh0h.pdf 


Options to Mitigate Cognitive Threats

John Chiment is a strategic threat intelligence analyst and has supported efforts across the Department of Defense and U.S. Intelligence Community. The views expressed herein are those of the author and do not reflect the official policy or position of the LinQuest Corporation, any of LinQuest’s subsidiaries or parents, or the U.S. Government.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


National Security Situation:  Cognitive attacks target the defender’s ability to accurately perceive the battlespace and react appropriately. If successful, these attacks may permit an attacker to defeat better equipped or positioned defenders. Defenders who deploy defenses poorly matched against the incoming threat – either due to mischaracterizing that threat or by rushing to respond – likely will suffer greater losses. Mitigation strategies for cognitive attacks all carry risks.

Date Originally Written:  January 31, 2022.

Date Originally Published:   March 7, 2022.

Author and / or Article Point of View:  The author is an American threat intelligence analyst with time in uniform, as a U.S. government civilian, and as a DoD contractor. 

Background:  Effectively countering an attack requires the defender to detect its existence, recognize the danger posed, decide on a course of action, and implement that action before the attack completes its engagement. An attacker can improve the odds of a successful strike by increasing the difficulty in each of these steps (via stealth, speed, deception, saturation, etc.) while defenders can improve their chances through preparation, awareness, and technical capabilities. Correct detection and characterization of a threat enables decision-makers to decide which available defense is the most appropriate. 

Significance:  A defender deploying a suboptimal or otherwise inappropriate defense benefits the attacker. Attackers who target the defender’s understanding of the incoming attack and their decision-making process may prompt defenders to select inappropriate defenses. Technological superiority – long a goal of western militaries – may be insufficient against such cognitive manipulations that target human decision-making processes rather than the capabilities the defender controls.

Option #1:  Defenders increase their number of assets collecting Intelligence, Surveillance, and Reconnaissance (ISR) data in order to more rapidly detect threats.

Risk:  Increasing ISR data collection consumes industrial and financial resources and may worsen relationships with other powers and the general public. Increasing collection may also overwhelm analytic capabilities by providing too much data [1].

Gain:  Event detection begins the defender’s process and earlier detection permits the defender to develop more options in subsequent stages. By increasing the number of ISR assets that can begin the defender’s decision-making process, the defender increases their opportunities to select an appropriate defense.

Option #2:  The defender increases the number of assets capable of analyzing information in order to more rapidly identify the threat.

Risk:  Increasing the number of assets capable of accurately processing, exploiting, and disseminating (PED) information consumes intellectual and financial resources. Threat characterization decisions can also be targeted in the same ways as defense deployment decisions [2].

Gain:  A larger network of available PED analysts may better address localized spikes in attacks, more evenly distribute stress among analysts and analytic networks within supporting agencies, and lower the risk of mischaracterizing threats, likely improving decision-makers’ chances of selecting an appropriate defense.

Option #3:  The defender automates defense deployment decisions in order to rapidly respond with a defense.

Risk:  Automated systems may possess exploitable logical flaws that can be targeted in much the same way as defender’s existing decision-making process. Automated systems operate at greater speeds, limiting opportunities for the defender to detect and correct inappropriate decisions [3].

Gain:  Automated systems operate at high speed and may mitigate time lost to late detection or initial mischaracterization of threats. Automating decisions also reduces the immediate cognitive load on the defender by permitting defensive software designers to explore and plan for complex potentials without the stress of an incoming attack.

Option #4:  The defender increases the number of assets authorized to make defense deployment decisions in order to more likely select an appropriate defense.

Risk:  Increasing the available pool of authorized decision-makers consumes communication bandwidth and financial resources. Larger communication networks have larger attack surfaces and increase the risk of both data leaks and attackers maliciously influencing decisions into far-off engagements. Attacking the network segment may produce delays resulting in defenders not deploying appropriate defenses in time [4].

Gain:  A larger network of authorized decision-makers may better address localized spikes in attacks, more evenly distribute stress among decision-making personnel, and lower the risk of rushed judgements that may prompt inappropriate defense deployments.

Option #5:  The defender trains authorized decision-makers to operate at higher cognitive loads in order to more likely select an appropriate defense.

Risk:  Attackers can likely increase attacks enough to overwhelm even extremely well-trained decision-makers; as such, this option is a short-term solution. Increasing the cognitive load on an already limited resource pool will likely increase burnout rates, lowering the overall supply of experienced decision-makers [5].

Gain:  Improving decision-maker training can likely be achieved with minimal new investment, as it focuses on better utilization of existing resources.

Option #6:  The defender prepositions improved defenses and defense response options in order to better endure attacks regardless of decision-making timelines.

Risk:  Prepositioned defenses and response options consume logistical and financial resources. Actions made prior to conflict risk being detected and planned for by adversaries, reducing their potential value. Rarely used defenses have maintenance costs that can be difficult to justify [6].

Gain:  Prepositioned defenses may mitigate attacks not detected before impact by improving the targeted asset’s overall endurance, and attackers knowledgeable of the defender’s defensive capabilities and response options may be deterred or slowed when pursuing goals that will now have to contend with the defender’s assets.

Other Comments:  Risks to the decision-making processes cannot be fully avoided. Options #3 and #6 attempt to make decisions before any cognitive attacks target decision-makers while Options #2 and #4 attempt to mitigate cognitive attack impact by spreading the load across a larger pool of assets. Options #1 and #2 may permit decision-makers to make better decisions earlier in an active attack while Option #5 attempts to improve the decision-making abilities of existing decision-makers. 

Recommendation:  None.


Endnotes:

[1] Krohley, N. (2017, 24 October). The Intelligence Cycle is Broken. Here’s How To Fix It. Modern War Institute at West Point. https://mwi.usma.edu/intelligence-cycle-broken-heres-fix/

[2] Corona, I., Giacinto, G., & Roli, F. (2013, 1 August). Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues. Information Sciences, 239, 201-225. https://doi.org/10.1016/j.ins.2013.03.022

[3] Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A. Xiao, C., Prakash, A., Kohno, T., & Song, D. (2018). Robust Physical-World Attacks on Deep Learning Visual Classification [Paper Presentation]. Conference on Computer Vision and Pattern Recognition. https://arxiv.org/abs/1707.08945v5

[4] Joint Chiefs of Staff. (2016, 21 December). Countering Threat Networks (JP 3-25). https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp3_25.pdf

[5] Larsen, R. P. (2001). Decision Making by Military Students Under Severe Stress. Military Psychology, 13(2), 89-98. https://doi.org/10.1207/S15327876MP1302_02

[6] Gerritz, C. (2018, 1 February). Special Report: Defense in Depth is a Flawed Cyber Strategy. Cyber Defense Magazine. https://www.cyberdefensemagazine.com/special-report-defense-in-depth-is-a-flawed-cyber-strategy/


Assessing China as a Case Study in Cognitive Threats

John Guerrero is currently serving in the Indo-Pacific region. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing China as a Case Study in Cognitive Threats

Date Originally Written:  February 1, 2022.

Date Originally Published:  February 28, 2022.

Author and / or Article Point of View:  The author is currently serving in the Indo-Pacific region.  The author believes that China is more mature than the U.S. in projecting force in the cognitive space. This increased maturity is largely due to China’s insistence on operating outside of the rules-based system. 

Summary:  China has largely been effective in pursuing its national interests through cognitive threats. In this cognitive space, China influences public opinion through propaganda, disinformation campaigns, censorship, and controlling critical nodes of information flow.  China’s understanding of U.S. politics, and its economic strength, will enable it to continue threatening U.S. national security. 

Text:  China is pursuing its national interests through its effective employment of cognitive threats: efforts undertaken to manipulate an adversary’s perceptions to achieve a national security objective. Cognitive threats generally include psychological warfare, which targets the enemy’s decision-making calculus, causing him to doubt himself and make big blunders. Psychological warfare also includes strategic deception, diplomatic pressure, rumor, false narratives, and harassment[1].  Chinese actions illustrate the use of all of the above.  

The cognitive threat area illustrates the disparity between U.S. defensive efforts and China’s offensive actions below the threshold of war. The United States remains wedded to the state-versus-state construct that has kept strategists occupied since 1945. This construct is antiquated, and the commitment to it hamstrings the U.S. from pursuing more effective options.

China’s efforts in the cognitive space exceed those of any other state. China understands the importance of influencing its competitors’ thinking, and it pursues favorable global public opinion along four lines of effort: propaganda, disinformation campaigns, censorship, and control of critical nodes of information flow[2].

Globalization complicates problems in the cognitive space: it creates opportunities, but it also creates multiple areas a nefarious actor can exploit. Corporations, as an example, are multi-national and have influence across borders. There are clear incentives for corporations to transact with the Chinese market, and Chinese exposure for a corporation oftentimes translates into an uptick in revenue. However, there are consequences. Corporations are “expected to bend, and even violate, their interests and values when the Party demands[3].”

China’s reach into the United States is vast. One area of significant importance is the American pension plan. American pensioners are “underwriting their own demise” by contributing to retirement accounts that may be tied to China[4]. Planning a financially stable future is noble, but it is not without unforeseen consequences. There are 248 corporations of Chinese origin listed on American stock exchanges[5]. The Chinese government enjoys significant influence over these corporations through its program called “Military-Civil Fusion[6].” Many index funds available to Americans include Chinese corporations. China’s economic strength facilitates censorship of dissenters and of any information aimed at painting the government in an unfavorable light. In another example of China’s expansive reach, Chinese state media recently placed an advertisement on digital billboards in Times Square attempting to sway onlookers in China’s favor[7].

The Chinese Communist Party (CCP) understands that while global opinion is important, it cannot ignore domestic public opinion. Despite reports on its treatment of ethnic minorities[8], the Party continues to push the idea that it enjoys “political stability, ethnic unity and social stability[9].” Chinese domestic news agencies critical of the Party and Party leadership are few and far between. There is a clear incentive to march to the Party’s tune: corporate survival.  

China’s efforts in cognitive threats, and the U.S. response, will progress along the four lines of effort discussed. China’s economic strength enables its strategic efforts to sway global public opinion in its favor. Few state or non-state actors can do this at such scale, but that does not preclude them from partaking; these smaller states and non-state actors will serve as proxies for China. 

China understands U.S. domestic politics. This understanding is critical to its pursuit of national interests. The current state of U.S. domestic politics is divisive and presents opportunities for China. For example, China has exploited U.S. media coverage of brutal treatment of Americans by law enforcement officers. China widens the division between Americans over these tragic events and accuses the United States of hypocrisy[10]. 

China is attempting to control, curate, and censor information for Americans and the world. Hollywood is the latest node of influence. “China has leveraged its market to exert growing influence over exported U.S. films, censoring content that could cast China in a negative light and demanding the addition of scenes that glorify the country[11].” Movies are not the full extent of Hollywood’s reach and influence; celebrities active on social media could be advancing China’s interests and influence unknowingly. The targets of these cognitive threats are not China’s peer-government adversaries but ordinary citizens. In a democratic government, the ordinary citizen is the center of gravity, a fact the Chinese know very well. 

The cognitive threat arena is dynamic and evolves at a staggering pace. Technological advancements, while beneficial, present opportunities for exploitation. The PRC continues to expand its footprint in this space. This expansion is dangerous, as it has far-reaching effects on the United States and its ability to pursue its national interests. Strategists should keep a watchful eye on how this pervasive and omnipresent threat progresses. These threats will continue to influence future conflicts.  


Endnotes:

[1] Sean McFate, The New Rules of War: Victory in the Age of Durable Disorder, First edition (New York, NY: William Morrow, an Imprint of HarperCollins Publishers, 2019).

[2] Sarah Cook, “China’s Global Media Footprint,” National Endowment for Democracy, February 2021, 24.

[3] Luke A. Patey, How China Loses: The Pushback against Chinese Global Ambitions (New York: Oxford University Press, 2021).

[4] Joe Rogan, “General H.R. McMaster,” accessed January 27, 2022, https://open.spotify.com/episode/2zVnXIoC5w9ZkkQAmWOIbJ?si=p1PD8RaZR7y1fA0S8FcyFw.

[5] “Chinese Companies Listed on Major U.S. Stock Exchanges” (Washington, DC: U.S.-China Economic and Security Review Commission, May 13, 2021), https://www.uscc.gov/research/chinese-companies-listed-major-us-stock-exchanges.

[6] U.S. Department of State, “Military-Civil Fusion and the People’s Republic of China” (Department of State, 2017–2021), https://2017-2021.state.gov/military-civil-fusion/index.html.

[7] Eva Fu, “Chinese State Media Uses Times Square Screen to Play Xinjiang Propaganda,” The Epoch Times, January 7, 2022, https://www.theepochtimes.com/chinese-state-media-uses-times-square-screen-to-play-xinjiang-propaganda_4200206.html.

[8] Adrian Zenz, “Uighurs in Xinjiang Targeted by Potentially Genocidal Sterilization Plans, Chinese Documents Show,” News Media, Foreign Policy, July 1, 2020, https://foreignpolicy.com/2020/07/01/china-documents-uighur-genocidal-sterilization-xinjiang/.

[9] PRC Government, “China’s National Defense in the New Era” (The State Council Information Office of the People’s Republic of China, July 2019), https://armywarcollege.blackboard.com/bbcswebdav/pid-25289-dt-announcement-rid-963503_1/courses/19DE910001D2/China-White%20Paper%20on%20National%20Defense%20in%20a%20New%20Era.pdf.

[10] Paul D. Shinkman, “China Leverages George Floyd Protests Against Trump, U.S. | World Report | US News,” June 9, 2020, https://www.usnews.com/news/world-report/articles/2020-06-09/china-leverages-george-floyd-protests-against-trump-us.

[11] Gabriela Sierra, “China’s Starring Role in Hollywood,” accessed January 28, 2022, https://www.cfr.org/podcasts/chinas-starring-role-hollywood.


Options to Address Disinformation as a Cognitive Threat to the United States

Joe Palank is a Captain in the U.S. Army Reserve, where he leads a Psychological Operations Detachment. He has also previously served as an assistant to former Secretary of Homeland Security Jeh Johnson. He can be found on Twitter at @JoePalank. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization or any group.


National Security Situation:  Disinformation as a cognitive threat poses a risk to the U.S.

Date Originally Written:  January 17, 2022.

Date Originally Published:  February 14, 2022.

Author and / or Article Point of View:  The author is a U.S. Army Reservist specializing in psychological operations and information operations. He has also worked on political campaigns and for the U.S. Department of Homeland Security. He has studied psychology, political communications, disinformation, and has Masters degrees in Political Management and in Public Policy, focusing on national security.

Background:  Disinformation as a non-lethal weapon for both state and non-state actors is nothing new.  However, the rise of the internet age and social media, paired with cultural change in the U.S., has given this once-fringe capability new salience. Russia, China, Iran, North Korea, and violent extremist organizations pose the most pervasive and significant risks to the United States through their increasingly weaponized use of disinformation[1]. 

Significance:  Due to the nature of disinformation, this cognitive threat poses a risk to U.S. foreign and domestic policy-making, undercuts a foundational principle of democracy, and has already caused significant disruption to the U.S. political process. Disinformation can be used tactically alongside military operations, operationally to shape the information environment within a theater of conflict, and strategically by potentially sidelining the U.S. or allies from joining international coalitions.

Option #1:  The U.S. focuses domestically. 

The U.S. could combat the threat of disinformation defensively, by looking inward, and take a two-pronged approach to prevent the effects of disinformation. First, the U.S. could adopt new laws and policies to make social media companies—the primary distributor of disinformation—more aligned with U.S. national security objectives related to disinformation. The U.S. has an asymmetric advantage in serving as the home to the largest social media companies, but thus far has treated those platforms with the same laissez faire approach other industries enjoy. In recent years, these companies have begun to fight disinformation, but they are still motivated by profits, which are in turn motivated by clicks and views, which disinformation can increase[2]. Policy options might include defining disinformation and passing a law making the deliberate spread of disinformation illegal or holding social media platforms accountable for the spread of disinformation posted by their users.

Simultaneously, the U.S. could embark on wide-scale media literacy training for its populace. Raising awareness of disinformation campaigns, teaching media consumers how to vet information for authenticity, and educating them on the biases within media and our own psychology are effective lines of defense against disinformation[3]. In a meta-analysis of recommendations for improving awareness of disinformation, improved media literacy training was the single most common suggestion among experts[4]. Equipping end users to distinguish real news from fake would render most disinformation campaigns ineffective.

Risk:  Legal – The United States enjoys a nearly pure tradition of “free speech,” which may prevent the passage of laws combatting disinformation.

Political – Passing laws holding individuals criminally liable for speech, even disinformation, would be assuredly unpopular. Additionally, cracking down on social media companies, who are both politically powerful and broadly popular, would be a political hurdle for lawmakers concerned with re-election. 

Feasibility –  Media literacy training would be expensive and time-consuming to implement at scale, and the same U.S. agencies that currently combat disinformation are ill-equipped to focus on domestic audiences for broad-scale educational initiatives.

Gain:  A U.S. public that is immune to disinformation would make for a healthier polity and more durable democracy, directly thwarting some of the aims of disinformation campaigns, and potentially permanently. Social media companies that are more heavily regulated would drastically reduce the dissemination of disinformation campaigns worldwide, benefiting the entire liberal economic order.

Option #2:  The U.S. focuses internationally. 

Strategically, the U.S. could choose to target foreign suppliers of disinformation. This targeting is currently being done tactically and operationally by U.S. DoD elements, the intelligence community, and the State Department. The latter agency also houses the coordinating mechanism for the country's handling of disinformation, the Global Engagement Center, which has no actual tasking authority within the Executive Branch. A similar but more aggressive agency, such as the proposed Malign Foreign Influence Response Center (MFIRC), could bring the fight to purveyors of disinformation[5].

The U.S. has been slow to catch up to its rivals' disinformation capabilities, responding to disinformation campaigns only occasionally and with a varied mix of sanctions, offensive cyber attacks, and even kinetic strikes (the latter only against non-state actors)[6]. National security officials benefit from institutional knowledge and "playbooks" for responding to various other threats to U.S. sovereignty or the liberal economic order. These playbooks are valuable for responding quickly, in kind, and proportionately, while also giving both sides "off-ramps" to de-escalate. An MFIRC could develop playbooks for disinformation and the institutional memory for this emerging type of warfare. Disinformation campaigns are popular among U.S. adversaries due to the relative capabilities advantage they enjoy, as well as their low costs, both financial and diplomatic[7]. Creating a basket of response options plays to the national security apparatus's current capabilities and poses fewer legal and political hurdles than changing U.S. laws in ways that infringe on free speech. Moreover, an MFIRC would make the U.S. a more equal adversary in this sphere and raise the costs of conducting such operations, making them less palatable options for adversaries.

Risk:  Geopolitical – Disinformation via the internet is still a new kind of warfare; responding disproportionately carries a significant risk of escalation, possibly turning a meme into an actual war.

Effectiveness – Going after the suppliers of disinformation could be akin to a whack-a-mole game, constantly chasing the next threat without addressing the underlying domestic problems.

Gain:  Adopting this approach would likely have faster and more obvious effects. A drone strike on Russia's Internet Research Agency headquarters, for example, would send a very clear message about how seriously the U.S. takes disinformation. At relatively little cost in money and time, more a shifting of priorities and resources, the U.S. could significantly blunt its adversaries' advantages and make disinformation prohibitively expensive to undertake at scale.

Other Comments:  There is no reason why both options could not be pursued simultaneously, save for costs or political appetite.

Recommendation:  None.


Endnotes:

[1] Nemr, C. & Gangware, W. (2019, March). Weapons of Mass Distraction: Foreign State-Sponsored Disinformation in the Digital Age. Park Advisors. Retrieved January 16, 2022 from https://2017-2021.state.gov/weapons-of-mass-distraction-foreign-state-sponsored-disinformation-in-the-digital-age/index.html 

[2] Cerini, M. (2021, December 22). Social media companies beef up promises, but still fall short on climate disinformation. Fortune.com. Retrieved January 16, 2022 from https://fortune.com/2021/12/22/climate-change-disinformation-misinformation-social-media/

[3] Kavanagh, J. & Rich, M.D. (2018) Truth Decay: An Initial Exploration of the Diminishing Role of Facts and Analysis in American Public Life. RAND Corporation. https://www.rand.org/t/RR2314

[4] Helmus, T. & Keep, M. (2021). A Compendium of Recommendations for Countering Russian and Other State-Sponsored Propaganda. Research Report. RAND Corporation. https://www.rand.org/pubs/research_reports/RRA894-1.html

[5] Press Release. (2020, February 14). Following Passage of their Provision to Establish a Center to Combat Foreign Influence Campaigns, Klobuchar, Reed Ask Director of National Intelligence for Progress Report on Establishment of the Center. Office of Senator Amy Klobuchar. https://www.klobuchar.senate.gov/public/index.cfm/2020/2/following-passage-of-their-provision-to-establish-a-center-to-combat-foreign-influence-campaigns-klobuchar-reed-ask-director-of-national-intelligence-for-progress-report-on-establishment-of-the-center

[6] Goldman, A. & Schmitt, E. (2016, November 24). One by One, ISIS Social Media Experts Are Killed as Result of F.B.I. Program. New York Times. Retrieved January 15, 2022 from https://www.nytimes.com/2016/11/24/world/middleeast/isis-recruiters-social-media.html

[7] Stricklin, K. (2020, March 29). Why Does Russia Use Disinformation? Lawfare. Retrieved January 15, 2022 from https://www.lawfareblog.com/why-does-russia-use-disinformation


Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Zac Rogers is an academic from Adelaide, South Australia. Zac has published in journals including International Affairs, The Cyber Defense Review, Joint Force Quarterly, and Australian Quarterly, and communicates with a wider audience across various multimedia platforms regularly. Parasitoid is his first book.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Cognitive Threat Posed by Technology Discourses Intended to Address Adversary Grey Zone Activities

Date Originally Written:  January 3, 2022.

Date Originally Published:  January 17, 2022.

Author and / or Article Point of View:  The author is an Australia-based academic whose research combines a traditional grounding in national security, intelligence, and defence with emerging fields of social cybersecurity, digital anthropology, and democratic resilience.  The author works closely with industry and government partners across multiple projects. 

Summary:  Military investment in war-gaming, table-top exercises, scenario planning, and future force design is increasing.  Some of this investment focuses on adversary activities in the "cognitive domain." While this investment is necessary, it may fail because it anchors on data-driven machine learning and automation for both offensive and defensive purposes without a clear understanding of their appropriateness.

Text:  In 2019 the author wrote a short piece for the U.S. Army's MadSci website titled "In the Cognitive War, the Weapon is You![1]" The article attempted to spur self-reflection by the national security, intelligence, and defence communities in Australia, the United States and Canada, Europe, and the United Kingdom.  At the time these communities were beginning to incorporate discussion of "cognitive" security/insecurity in their near-future threat assessments and future force design discourses. The article is cited in the North Atlantic Treaty Organization (NATO) Cognitive Warfare document of 2020[2], but whether in ways that demonstrate the misunderstanding directly or as part of the wider context in which the point of that title is thoroughly misinterpreted, the author's desired self-reflection has not been forthcoming. Instead, and not unexpectedly, the discourse on the cognitive aspects of contemporary conflict has consumed and regurgitated a familiar sequence of errors which will perpetuate rather than mitigate the problem if not addressed head-on.

What the cognitive threat is

The primary cognitive threat is us[3]. The threat is driven by a combination of factors. The first is techno-futurist hubris, which exists as a permanently recycling feature of late-modern military thought.  The second is a precipitous slide into scientism which military thinkers and the organisations they populate have not avoided[4].  Further contributing to the threat is the commercial and financial rent-seeking which overhangs military affairs as a by-product of private-sector-led R&D activities and government dependence on and cultivation of those activities, increasingly since the 1990s[5].  Last is adversary awareness of these dynamics and an increasing willingness and capacity to manipulate and exacerbate them via the multitude of vulnerabilities ushered in by digital hyper-connectivity[6]. In other words, before the cognitive threat is an operational and tactical menace to be addressed and countered by the joint force, it is a central feature of the deteriorating epistemic condition of the late-modern societies in which said forces operate and from which their personnel, funding, R&D pathways, doctrine and operating concepts, epistemic communities, and strategic leadership emerge.

What the cognitive threat is not   

The cognitive threat is not what adversary military organisations and their patrons are doing in and to the information environment with regard to activities other than kinetic military operations. Terms for adversarial activities occurring outside of conventional lethal/kinetic combat operations – such as the "grey-zone" and "below-the-threshold" – describe time-honoured tactics by which interlocutors seek to weaken and sow dysfunction in the social and political fabric of competitor or enemy societies.  These tactics are used to gain advantage in areas not directly involving military conflict, or in areas likely to be critical to military preparedness and mobilization in times of war[7]. A key stumbling block here is obvious: it is often difficult to know which intentions such tactics express. This is not cognitive warfare. It is merely typical of contention across and between cross-cultural communities, and of the permanent unwillingness of contending societies to accord with the other's rules. Information warfare – particularly influence operations traversing the Internet and exploiting the dominant commercial operations found there – is part of this mix of activities which belong under the normal paradigm of competition between states for strategic advantage. Active measures – influence operations designed to self-perpetuate – have found fertile new ground on the Internet but are not new to the arsenals of intelligence services and, as Thomas Rid has warned, while they proliferate, they are more unpredictable and difficult to control than they were in the pre-Internet era[8]. None of this is cognitive warfare either. Unfortunately, current and recent discourse has lapsed into the error of treating it as such[9], leading to all manner of self-inflicted confusion[10].

Why the distinction matters

Two trends emerge from the abovementioned confusion which represent the most immediate threat to the military enterprise[11]. Firstly, private-sector vendors and the consulting and lobbying industry they employ are busily pitching technological solutions based on machine learning and automation which were developed in commercial business settings where sensitivity to error is not high[12]. As militaries experiment with this raft of technologies, whether eager to be seen at the vanguard of emerging tech, keen to justify R&D budgets and stave off defunding, or simply out of habit, they incur opportunity costs.  These costs are best described as the stultification of investment in the human potential which strategic thinkers have long identified as the real key to actualizing new technologies[13], and as entry into path dependencies with behemoth corporate actors whose strategic goal is the cultivation of rentier relations, not excluding the ever-lucrative military sector[14].

Secondly, to the extent that automation and machine learning technologies enter the operational picture, cognitive debt is accrued as the military enterprise becomes increasingly dependent on fallible tech solutions[15]. Under battle conditions, the first assumption is contestation of the electromagnetic spectrum, on which all digital information technologies depend for basic functionality. Automated data gathering and analysis tools rely heavily on data availability and integrity.  When these tools are unavailable, any joint multinational force will require multiple redundancies, not only in terms of technology but, more importantly, in terms of leadership and personnel competencies. It remains unclear where the military enterprise draws the line on the likely cost-benefit ratio when experimenting with automated machine learning tools and the contexts in which they ought to be applied[16]. Unfortunately, experimentation is never cost-free. When civilian / military boundaries are blurred to the extent they are now as a result of the digital transformation of society, such experimentation requires consideration in light of all of its implications, including for the integrity and functionality of the open democracy being defended[17].

The first error of misinterpreting the meaning and bounds of cognitive insecurity is compounded by a second mistake: what the military enterprise chooses to invest time, attention, and resources in tomorrow[18]. Path dependency, technological lock-in, and opportunity cost all loom large if digital information age threats are misinterpreted. This is the solipsistic nature of the cognitive threat at work – the weapon really is you! Putting oneself in the shoes of the adversary, nothing could be more pleasing than seeing that threat self-perpetuate. As a first step, militaries could organise and invest immediately in a strategic technology assessment capacity[19] free from the biases of rent-seeking vendors and lobbyists who, by definition, will not pay the costs of mission failure, and who stand to benefit from the rentier-like dependencies that emerge as the military enterprise pays the corporate sector to play in the digital age.


Endnotes:

[1] Zac Rogers, “158. In the Cognitive War – The Weapon Is You!,” Mad Scientist Laboratory (blog), July 1, 2019, https://madsciblog.tradoc.army.mil/158-in-the-cognitive-war-the-weapon-is-you/.

[2] Francois du Cluzel, “Cognitive Warfare” (Innovation Hub, 2020), https://www.innovationhub-act.org/sites/default/files/2021-01/20210122_CW%20Final.pdf.

[3] “us” refers primarily but not exclusively to the national security, intelligence, and defence communities taking up discourse on cognitive security and its threats including Australia, the U.S., U.K., Europe, and other liberal democratic nations. 

[4] Henry Bauer, “Science in the 21st Century: Knowledge Monopolies and Research Cartels,” Journal of Scientific Exploration 18 (December 1, 2004); Matthew B. Crawford, “How Science Has Been Corrupted,” UnHerd, December 21, 2021, https://unherd.com/2021/12/how-science-has-been-corrupted-2/; William A. Wilson, “Scientific Regress,” First Things, May 2016, https://www.firstthings.com/article/2016/05/scientific-regress; Philip Mirowski, Science-Mart (Harvard University Press, 2011).

[5] Dima P Adamsky, “Through the Looking Glass: The Soviet Military-Technical Revolution and the American Revolution in Military Affairs,” Journal of Strategic Studies 31, no. 2 (2008): 257–94, https://doi.org/10.1080/01402390801940443; Linda Weiss, America Inc.?: Innovation and Enterprise in the National Security State (Cornell University Press, 2014); Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths (Penguin UK, 2018).

[6] Timothy L. Thomas, “Russian Forecasts of Future War,” Military Review, June 2019, https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/MJ-19/Thomas-Russian-Forecast.pdf; Nathan Beauchamp-Mustafaga, “Cognitive Domain Operations: The PLA’s New Holistic Concept for Influence Operations,” China Brief, The Jamestown Foundation 19, no. 16 (September 2019), https://jamestown.org/program/cognitive-domain-operations-the-plas-new-holistic-concept-for-influence-operations/.

[7] See Peter Layton, “Social Mobilisation in a Contested Environment,” The Strategist, August 5, 2019, https://www.aspistrategist.org.au/social-mobilisation-in-a-contested-environment/; Peter Layton, “Mobilisation in the Information Technology Era,” The Forge (blog), N/A, https://theforge.defence.gov.au/publications/mobilisation-information-technology-era.

[8] Thomas Rid, Active Measures: The Secret History of Disinformation and Political Warfare, Illustrated edition (New York: MACMILLAN USA, 2020).

[9] For example see Jake Harrington and Riley McCabe, “Detect and Understand: Modernizing Intelligence for the Gray Zone,” CSIS Briefs (Center for Strategic & International Studies, December 2021), https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/211207_Harrington_Detect_Understand.pdf?CXBQPSNhUjec_inYLB7SFAaO_8kBnKrQ; du Cluzel, “Cognitive Warfare”; Kimberly Underwood, “Cognitive Warfare Will Be Deciding Factor in Battle,” SIGNAL Magazine, August 15, 2017, https://www.afcea.org/content/cognitive-warfare-will-be-deciding-factor-battle; Nicholas D. Wright, “Cognitive Defense of the Joint Force in a Digitizing World” (Pentagon Joint Staff Strategic Multilayer Assessment Group, July 2021), https://nsiteam.com/cognitive-defense-of-the-joint-force-in-a-digitizing-world/.

[10] Zac Rogers and Jason Logue, “Truth as Fiction: The Dangers of Hubris in the Information Environment,” The Strategist, February 14, 2020, https://www.aspistrategist.org.au/truth-as-fiction-the-dangers-of-hubris-in-the-information-environment/.

[11] For more on this see Zac Rogers, “The Promise of Strategic Gain in the Information Age: What Happened?,” Cyber Defense Review 6, no. 1 (Winter 2021): 81–105.

[12] Rodney Brooks, “An Inconvenient Truth About AI,” IEEE Spectrum, September 29, 2021, https://spectrum.ieee.org/rodney-brooks-ai.

[13] Michael Horowitz and Casey Mahoney, “Artificial Intelligence and the Military: Technology Is Only Half the Battle,” War on the Rocks, December 25, 2018, https://warontherocks.com/2018/12/artificial-intelligence-and-the-military-technology-is-only-half-the-battle/.

[14] Jathan Sadowski, “The Internet of Landlords: Digital Platforms and New Mechanisms of Rentier Capitalism,” Antipode 52, no. 2 (2020): 562–80, https://doi.org/10.1111/anti.12595.

[15] For problematic example see Ben Collier and Lydia Wilson, “Governments Try to Fight Crime via Google Ads,” New Lines Magazine (blog), January 4, 2022, https://newlinesmag.com/reportage/governments-try-to-fight-crime-via-google-ads/.

[16] Zac Rogers, “Discrete, Specified, Assigned, and Bounded Problems: The Appropriate Areas for AI Contributions to National Security,” SMA Invited Perspectives (NSI Inc., December 31, 2019), https://nsiteam.com/discrete-specified-assigned-and-bounded-problems-the-appropriate-areas-for-ai-contributions-to-national-security/.

[17] Emily Bienvenue and Zac Rogers, “Strategic Army: Developing Trust in the Shifting Strategic Landscape,” Joint Force Quarterly 95 (November 2019): 4–14.

[18] Zac Rogers, “Goodhart’s Law: Why the Future of Conflict Will Not Be Data-Driven,” Grounded Curiosity (blog), February 13, 2021, https://groundedcuriosity.com/goodharts-law-why-the-future-of-conflict-will-not-be-data-driven/.

[19] For expansion see Zac Rogers and Emily Bienvenue, “Combined Information Overlay for Situational Awareness in the Digital Anthropological Terrain: Reclaiming Information for the Warfighter,” The Cyber Defense Review, no. Summer Edition (2021), https://cyberdefensereview.army.mil/Portals/6/Documents/2021_summer_cdr/06_Rogers_Bienvenue_CDR_V6N3_2021.pdf?ver=6qlw1l02DXt1A_1n5KrL4g%3d%3d.


Options to Counter Foreign Influence Operations Targeting Servicemember and Veterans

Marcus Laird has served in the United States Air Force. He presently works at Headquarters Air Force Reserve Command as a Strategic Plans and Programs Officer. He can be found on Twitter @USLairdForce.  Divergent Options' content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.  Divergent Options is not affiliated with the Department of Defense or the U.S. Air Force. The following opinion is of the author only, and is not official Air Force or Department of Defense policy. This publication was reviewed by AFRC/PA and is cleared for public release and unlimited distribution.


National Security Situation:  Foreign Actors are using Social Media to influence Servicemember and Veteran communities. 

Date Originally Written:  December 2, 2021.

Date Originally Published:  January 3, 2022.

Author and / or Article Point of View:  The author is a military member who has previously researched the impact of social media on US military internal dialogue for professional military education and graduate courses. 

Background:  During the lead-up to the 2016 election, members of the U.S. Army Reserve were specifically targeted by advertisements on Facebook purchased by Russia's Internet Research Agency at least ten times[1]. In 2017, the Vietnam Veterans of America (VVA) also detected social media profiles which were sophisticated mimics of their official web pages. These web pages were created for several reasons, including identity theft, fraud, and disseminating disinformation favorable to Russia. Further investigation revealed a network of fake personas attempting to make inroads within online military and veteran communities for the purpose of bolstering persona credibility to spread disinformation. Because these mimics used VVA logos, VVA was able to have these web pages deplatformed after two months due to trademark infringement[2].

Separately, military influencers, after building a substantial following, have chosen to sell their personas as a means of monetizing their social media brands. While foreign adversary networks have not yet incorporated this technique for building an audience, the purchase of a persona is essentially an opportunity to acquire a turnkey information operation platform.

Significance:  Servicemembers and veterans are trusted voices within their communities on matters of national security. The special trust society places on these communities makes them a particularly lucrative target for an adversary seeking to influence public opinion and shape policy debates[3]. Social media is optimized for advertising, allowing specific demographics to be targeted with unprecedented precision. Unchecked, adversaries can use this capability to sow mistrust, degrade unit cohesion, and spread disinformation through advertisements, mimicking legitimate organizations, or purchasing a trusted persona. 
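The precision of this targeting is easy to illustrate: audience selection on an ad platform amounts to filtering a user base by the intersection of attributes. The sketch below is a toy model; the attribute names, user records, and function are invented for illustration and are not taken from any real platform's API.

```python
# Toy model of ad microtargeting: an "audience" is just the set of users
# matching every selected attribute filter. All data here is invented.

users = [
    {"id": 1, "age": 34, "interests": {"veterans", "firearms"}, "region": "VA"},
    {"id": 2, "age": 52, "interests": {"gardening"}, "region": "VA"},
    {"id": 3, "age": 29, "interests": {"veterans", "fitness"}, "region": "TX"},
]

def target_audience(users, min_age, max_age, required_interests, regions):
    # A user is in the audience only if every filter matches.
    return [u["id"] for u in users
            if min_age <= u["age"] <= max_age
            and required_interests <= u["interests"]  # subset test
            and u["region"] in regions]

# e.g. "veterans aged 25-40 in Virginia" narrows three users to one
print(target_audience(users, 25, 40, {"veterans"}, {"VA"}))  # → [1]
```

Each additional filter shrinks the audience, which is precisely what makes it possible to aim an advertisement at a narrow demographic such as reservists in a single state.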

Option #1:  Closing Legislative Loopholes 

Currently, foreign entities are prohibited from directly contributing to campaigns. However, there is no legal prohibition on foreign entities purchasing advertising for the purpose of influencing elections. Using legislative means to close this loophole would deny adversaries the ability to abuse platforms' microtargeting capabilities for political influence[4].

Risk:  Enforcement – As evidenced during inquiries into election interference, enforcement could prove difficult. Enforcement relies on good faith efforts by platforms to conduct internal assessments of sophisticated actors’ affiliations and intentions and report them. Additionally, government agencies have neither backend system access nor adequate resources to forensically investigate every potential instance of foreign advertising.

Gain:  Such a solution would protect society as a whole, to include the military and veteran communities. Legislation would include reporting and data retention requirements for platforms, allowing for earlier detection of potential information operations. Ideally, regulation would prompt platforms to tailor their content moderation standards around political advertising to create additional barriers for foreign entities.  

Option #2:  Deplatforming on the Grounds of Trademark Infringement

Should a foreign adversary attempt to use sophisticated mimicry of official accounts to achieve a veneer of credibility, then the government may elect to request a platform remove a user or network of users on the basis of trademark infringement. This technique was successfully employed by the VVA in 2017. Military services have trademark offices, which license the use of their official logos and can serve as focal points for removing unauthorized materials[5].

Risk:  Resources – since trademark offices are self-funded and rely on royalties for operations, they may not be adequately resourced to challenge large-scale trademark infringement by foreign actors.

Personnel – personnel in trademark offices may not have adequate training to determine whether or not a U.S. person or a foreign entity is using the organization’s trademarked materials. Failure to adequately delineate between U.S. persons and foreign actors when requesting to deplatform a user potentially infringes upon civil liberties. 

Gain:  Developing agency response protocols using existing intellectual property laws ensures responses are coordinated between the government and platforms as opposed to a pickup game during an ongoing operation. Regular deplatforming can also help develop signatures for sophisticated mimicry, allowing for more rapid detection and mitigation by the platforms. 

Option #3:  Subject the Sale of Influence Networks to Review by the Committee on Foreign Investment in the United States (CFIUS) 

Inform platform owners of the intent of CFIUS to review the sale of all influence networks and credentials which specifically market to military and veteran communities. CFIUS review has been used to prevent the acquisition of applications by foreign entities. Specifically, in 2019 CFIUS retroactively reviewed the purchase of Grindr, an LGBTQ+ dating application, due to national security concerns about the potential for the Chinese firm Kunlun to pass sensitive data to the Chinese government.  Data associated with veteran and servicemember social networks could be similarly protected[6]. 

Risk:  Enforcement – Due to the large number of influencers and the lack of knowledge of the scope of the problem, enforcement may be difficult in real time. In the event a sale happens, then ex post facto CFIUS review would provide a remedy.  

Gain:  Such a notification should prompt platforms to craft governance policies around the sale and transfer of personas to allow for more transparency and reporting.

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1] Goldsmith, K. (2020). An Investigation Into Foreign Entities Who Are Targeting Servicemembers and Veterans Online. Vietnam Veterans of America. Retrieved September 17, 2019, from https://vva.org/trollreport/, 108.

[2] Ibid, 6-7.

[3] Gallacher, J. D., Barash, V., Howard, P. N., & Kelly, J. (2018). Junk news on military affairs and national security: Social media disinformation campaigns against us military personnel and veterans. arXiv preprint arXiv:1802.03572.

[4] Wertheimer, F. (2019, May 28). Loopholes allow foreign adversaries to legally interfere in U.S. elections. Just Security. Retrieved December 10, 2021, from https://www.justsecurity.org/64324/loopholes-allow-foreign-adversaries-to-legally-interfere-in-u-s-elections/.

[5] Air Force Trademark Office. (n.d.). Retrieved December 3, 2021, from https://www.trademark.af.mil/Licensing/Applications.aspx.

[6] Kara-Pabani, K., & Sherman, J. (2021, May 11). How a Norwegian government report shows the limits of Cfius Data Reviews. Lawfare. Retrieved December 10, 2021, from https://www.lawfareblog.com/how-norwegian-government-report-shows-limits-cfius-data-reviews.


Analyzing Social Media as a Means to Undermine the United States

Michael Martinez is a consultant who specializes in data analysis, project management, and community engagement. He has an M.S. in Intelligence Management from University of Maryland University College. He can be found on Twitter @MichaelMartinez. Divergent Options' content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Analyzing Social Media as a Means to Undermine the United States

Date Originally Written:  November 30, 2021.

Date Originally Published:  December 27, 2021.

Author and / or Article Point of View:  The author believes that social media is not inherently good nor bad, but a tool to enhance discussion. Unless the national security apparatus understands how to best utilize Open Source Intelligence to achieve its stated goals, i.e. engaging the public on social media and public forums, it will lag behind its adversaries in this space.

Summary:  Stopping online radicalization of all varieties is complex and involves the individual, the government, social media companies, and Internet Service Providers. Artificial intelligence reviewing information online and flagging potential threats may not be adequate. Only through public-private partnerships can an effective system be created to support anti-radicalization endeavors.

Text:  The adage, "If you're not paying for the product, you are the product[1]," has never been more true than in the age of social media. Every user's click and purchase is recorded by private entities such as Facebook and Twitter. These records can be utilized by other nations to gather information on the United States economy and intellectual property, as well as information on government personnel and agencies. This data can be collated, packaged together, and used to inform operations that prey on U.S. personnel.  Examples include extortion through ransomware, an adversary intelligence service probing an employee for specific national security information by appealing to their subject matter expertise, and online influence / radicalization.

It is crucial to accept that the United States and its citizens are more heavily reliant on social media than ever before. Social media entities such as Meta (formerly Facebook) have new and yet-to-be-released products for children (i.e., the "Instagram for Kids" product), enabling adversaries to prey upon users of any age as potential targets. Terrorist organizations such as Al-Qaeda utilize cartoons on outlets like YouTube and Instagram to entice vulnerable youth to carry out attacks or to help radicalize potential suicide bombers[2].

While Facebook and YouTube are the most common among most age groups, TikTok and Snapchat have undergone a meteoric rise among youth under thirty[3]. Intelligence services and terrorist organizations have vastly improved their online recruiting techniques, including video and other media, as the platforms have become just as sophisticated. Unless federal, state, and local governments strengthen their public-private partnerships to stay ahead of growth in next-generation social media platforms, this adversary behavior will continue.  The national security community has tools at its disposal to help protect Americans from being coerced into cybercrime, or from being radicalized by overseas entities such as the Islamic State to potentially carry out domestic attacks.

To counter such trends in social media radicalization, the National Institute of Justice (NIJ) worked with the National Academies to identify traits and agendas whose recognition could facilitate disruption of these efforts. Identified needs include functional databases, consideration of links between terrorism and lesser crimes, and exploration of the culture of terrorism, including its structure and goals[4]. While a solid federal infrastructure and deterrence mechanism is vital, it is also important for the social media platforms themselves to eliminate radical media that may influence at-risk individuals.

According to the NIJ, several characteristics contribute to social media radicalization: unemployment, social isolation, a criminal history, a history of mental illness, and prior military experience[5]. These are only potential factors and do not apply to all who are radicalized[6]. However, they do provide a base from which to begin investigation and mitigation strategies. 

As a long-term solution, the Bipartisan Policy Center recommends enacting and teaching media literacy so users can understand and spot internet radicalization[7]. Social media algorithms are not foolproof. They need to be supplemented by the cyberspace equivalent of “see something, say something,” with users reporting any suspicious activity to the platforms. The risk that these companies will not act on their own is real: their main goal is monetization, and content moderation does not help them make more money. That inaction is where the government steps in, to ensure that private enterprise does not impede national security. 

A system that works will balance the rights of the individual with the national security of the United States. It will also respect the rights of private enterprise and of the pipelines that carry information into homes, the Internet Service Providers. Until such a system is created, the radicalization of Americans will remain a pitfall for the entire national security apparatus. 


Endnotes:

[1] Oremus, W. (2018, April 27). Are You Really the Product? Retrieved on November 15, 2021, from https://slate.com/technology/2018/04/are-you-really-facebooks-product-the-history-of-a-dangerous-idea.html. 

[2] Thompson, R. (2011). Radicalization and the Use of Social Media. Journal of Strategic Security, 4(4), 167–190. http://www.jstor.org/stable/26463917 

[3] Pew Research Center. (2021, April 7). Social Media Use in 2021. Retrieved from https://www.pewresearch.org/internet/2021/04/07/social-media-use-in-2021/ 

[4] Qureshi, A. J. (2020, August 14). Understanding Domestic Radicalization and Terrorism. Retrieved from https://nij.ojp.gov/topics/articles/understanding-domestic-radicalization-and-terrorism.

[5] The National Counterintelligence and Security Center. Intelligence Threats & Social Media Deception. Retrieved November 15, 2021, from https://www.dni.gov/index.php/ncsc-features/2780-ncsc-intelligence-threats-social-media-deception. 

[6] Schleffer, G., & Miller, B. (2021). The Political Effects of Social Media Platforms on Different Regime Types. Austin, TX. Retrieved November 29, 2021, from http://dx.doi.org/10.26153/tsw/13987. 

[7] Bipartisan Policy Center. (2012, December). Countering Online Radicalization in America. Retrieved November 29, 2021, from https://bipartisanpolicy.org/download/?file=/wp-content/uploads/2019/03/BPC-_Online-Radicalization-Report.pdf 

Assessment Papers Cyberspace Influence Operations Michael Martinez Social Media United States

Assessing Russian Use of Social Media as a Means to Influence U.S. Policy

Alex Buck is a currently serving officer in the Canadian Armed Forces. He has deployed twice to Afghanistan, once to Ukraine, and is now working towards an MA in National Security.  Alex can be found on Twitter @RCRbuck.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group. 


Title:  Assessing Russian Use of Social Media as a Means to Influence U.S. Policy

Date Originally Written:  August 29, 2021.

Date Originally Published:  December 13, 2021.

Author and / or Article Point of View: The author believes that without appropriate action, the United States’ political climate will continue to be exploited by Russian influence campaigns. These campaigns will have broad impacts across the Western world, and potentially generate an increased competitive advantage for Russia.

Summary:  To achieve a competitive advantage over the United States, Russia uses social media-based influence campaigns to influence American foreign policy. Political polarization makes the United States an optimal target for such campaigns. 

Text:  Russia aspires to regain the influence over the international system that it once had as the Soviet Union. To achieve this aim, Russia’s interest lies in building a stronger economy and expanding its regional influence over Eastern Europe[1]. Following the Cold War, Russia recognized that these national interests were at risk of being completely destroyed by Western influence. The Russian economy was threatened by the United States’ unipolar hegemony over the global economy[2]. A strong North Atlantic Treaty Organization (NATO) has threatened Russia’s regional influence in Eastern Europe. NATO’s collective security agreement was originally conceived to counter the Soviet threat following World War II and continues to do so to this day. Through the late 1990s and early 2000s, NATO expanded its membership to include former Soviet states in Eastern Europe, in an effort to reduce Russian regional influence[1]. Russia perceives these actions as a threat to its survival as a state, and needs a method to regain competitive advantage.

Following the Cold War, Russia began to identify opportunities it could exploit to increase its competitive advantage in the international system. One of those opportunities began to develop in the early 2000s as social media emerged. During this time, social media began to impact American culture in such a significant way that it could not be ignored. Social media has two significant effects on society. First, it causes people to create very dense clusters of social connections. Second, these clusters are populated by very similar types of people[3]. These two factors produced follow-on effects in American society: a divided social structure and an extremely polarized political system. Russia viewed these as opportunities ripe for exploitation, and sees U.S. social media as a cost-effective medium through which to exert influence on the United States. 

In the late 2000s, Russia began experimenting with its concept of exploiting the cyber domain as a means of exerting influence on other nation-states. After the successful use of cyber operations against Ukraine, Estonia, Georgia, and again Ukraine in 2004, 2007, 2008, and 2014 respectively, Russia was poised to attempt to use this concept against the United States and NATO[4]. In 2014, Russia slowly built a network of social media accounts that would eventually begin sowing disinformation amongst American social media users[3]. The significance of the Russian information campaign leading up to the 2016 U.S. presidential election cannot be overstated. The Russian Internet Research Agency propagated roughly 10.4 million tweets on Twitter, 76.5 million engagements on Facebook, and 187 million engagements on Instagram[5]. Although within the context of 200 billion tweets sent annually this may seem like a small-scale effort, the targeted nature of the tweets contributed to their effectiveness. This Russian social media campaign was estimated to have exposed between 110 and 130 million American social media users to misinformation aimed at skewing the results of the presidential election[3]. For perspective, the 2000 presidential election was decided by 537 votes in the state of Florida. To swing an election that close, a Russian information campaign would need to sway only about 0.00049% of the users it reached.
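The back-of-envelope arithmetic behind that figure can be sketched as follows. The 537-vote margin and the 110-130 million exposure estimates come from the sources cited in the text; the "required sway rate" framing is purely illustrative:

```python
# Back-of-envelope estimate: what fraction of exposed social media users
# would an influence campaign need to sway to overturn a 537-vote margin?

florida_margin_2000 = 537      # deciding vote margin, 2000 U.S. election
exposed_low = 110_000_000      # low estimate of exposed U.S. users
exposed_high = 130_000_000     # high estimate of exposed U.S. users

# Required effectiveness, expressed as a percentage of exposed users.
# The smaller exposure figure yields the larger (worst-case) requirement.
sway_rate_upper = florida_margin_2000 / exposed_low * 100
sway_rate_lower = florida_margin_2000 / exposed_high * 100

print(f"Required sway rate: {sway_rate_lower:.5f}% to "
      f"{sway_rate_upper:.5f}% of exposed users")
# Roughly 0.00041% to 0.00049% -- the figure cited in the text
```

Even at the low end of the exposure estimate, the required effectiveness stays below five ten-thousandths of one percent, which is the point the article is making about the asymmetry of such campaigns.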

The bifurcated nature of the current American political arena has created the perfect target for Russian attacks via the cyber domain. Due to the persistently slim margins of electoral results, Russia will continue to exploit this opportunity until it achieves its national aims and gains a competitive advantage over the United States. Social media influence offers Russia a cost-effective and highly impactful tool that has the potential to sway American policies in its favor. Without coherent strategies to protect national networks and decrease Russian social influence, the United States, and the broader Western world, will continue to be subject to Russian influence. 


Endnotes:

[1] Arakelyan, L. A. (2017). Russian Foreign Policy in Eurasia: National Interests and Regional Integration (1st ed.). Routledge. https://doi.org/10.4324/9781315468372

[2] Blank, S. (2008). Threats to and from Russia: An Assessment. The Journal of Slavic Military Studies, 21(3), 491–526. https://doi.org/10.1080/13518040802313746

[3] Aral, S. (2020). The hype machine: How social media disrupts our elections, our economy, and our health–and how we must adapt (First edition). Currency.

[4] Geers, K. & NATO Cooperative Cyber Defence Centre of Excellence. (2015). Cyber war in perspective: Russian aggression against Ukraine. https://www.ccdcoe.org/library/publications/cyber-war-in-perspective-russian-aggression-against-ukraine/

[5] DiResta, R., Shaffer, K., Ruppel, B., Sullivan, D., & Matney, R. (2019). The Tactics & Tropes of the Internet Research Agency. US Senate Documents.

Alex Buck Assessment Papers Cyberspace Influence Operations Russia Social Media United States

Assessing a Situation where the Mission is a Headline

Samir Srivastava is serving in the Indian Armed Forces. The views expressed and suggestions made in the article are solely of the author in his personal capacity and do not have any official endorsement.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing a Situation where the Mission is a Headline

Date Originally Written:  July 5, 2021.

Date Originally Published:  July 26, 2021.

Author and / or Article Point of View:  The author is serving with the Indian Armed Forces.   The article is written from the point of view of India in its prevailing environment.

Summary:  While headlines in news media describe the outcome of military operations, in this information age the world could now be heading towards a situation where military operations are the outcome of a desired headline. In such situations, goals can be achieved by taking into account assured success, the target audience, connectivity in a retaliatory context, verifiability, and deniability.

Text:  When nations fight each other, there will be news media headlines. Through various mediums and platforms, headlines will travel to everyone – the belligerents, their allies and supporters, and neutral parties alike. Conflict will be presented as a series of headlines culminating in one headline that describes the final outcome. Thus, when operations happen, headlines also happen. What remains to be considered is the case where an operation is planned and executed to make a headline happen.

In nation-versus-nation conflict, the days of large-scale wars are certainly not over, but trends suggest these will be the exception rather than the rule. The future war will, in all likelihood, be fought below the threshold of a formal war declaration and be quite localised. The world has seen wars where each side endeavours to prevail upon the adversary’s bodies and materiel, but greater emphasis is already being laid on prevailing upon the enemy’s mind. In that case, a decision will be required regarding which objective is being pursued – attrition, territory, or just a headline.

Today, a military operation is more often than not planned at the strategic level and executed at the tactical level. This model is likely to become the norm, because if a strategic outcome is achievable through a standalone tactical action, there is no reason to let the fight get bigger and more costly in blood and treasure. The Balakot airstrike[1] by the Indian Air Force is a case in point. More than two years have passed since that strike, but there is nothing to show a change in the attitude of Pakistan, which continues to harbour terrorists on its soil who may well be plotting the next strike on India. What has endured, however, is the headlines of February 26-28, 2019, which carried different messages for different people, including one for Pakistan.

Unlike propaganda, where a story is made out of nothing, if the mission is to make a headline then that particular operation will actually have taken place on the ground. In this context, Headline Selection and Target Selection are two sides of the same coin, with the former as the driving force. Beyond this, success is enhanced by taking into account the probability of success, the target audience, connectivity in a retaliatory context, verifiability, and deniability.  

Without assured success, the outcome will be a mismatch between the desired headline and the target selected. Taking an example from the movies: in the 1997 film “Tomorrow Never Dies[2],” the entire plot focuses on the protagonist, Agent 007, spoiling the antagonist Carver’s scheme of creating headlines to be beamed by his media network. Once a shot is fired or ordnance dropped, there will be a headline, and it is best to make sure it is the desired one.

Regarding the target audience, it is not a given that an event will gain the interest of the masses. The recipient population may be receptive, non-receptive, or simply indifferent. A headline aimed at the largest receptive group that can further propagate it has the best chance of success. 

If the operation is carried out in a retaliatory context, it is best to connect the enemy action and the friendly reaction. For example, while cyber-attacks or economic sanctions may be an apt response to an armed attack, the likelihood of achieving the desired headline is enhanced if there is something connecting the two: action and reaction.

The headline will have much more impact if the event and its effects can be easily verified, preferably by neutral agencies and individuals. A perfect headline would be one that an under-resourced freelance journalist can easily report. To that end, targets in inaccessible locations, or at places that do not strike a chord with the intended audience, will be of little use. No amount of satellite photos can match one reporter on the ground.   

The headline cannot lend itself to any possibility of denial, because even a feeble denial can lead to credibility being questioned. The choice of target and mode of attack therefore must be made accordingly. During U.S. Operation NEPTUNE SPEAR[3], the raid on Osama bin Laden’s compound in Abbottabad, Pakistan, the first sliver of publicly available information was a tweet by someone nearby. This tweet may well have closed any avenue for denial by Pakistan or Al-Qaeda.

A well-thought-out headline can be the start point when planning an operation or even a campaign. This vision of a headline, however, needs different thinking, tempered with a lot of imagination and creativity. Pre-planned headlines, an understanding of the expertise of journalists, and having platforms at the ready can all be of value.      

Every field commander at division level and above could maintain pre-planned headlines that their organization can create if given the opportunity. These headlines include both national headlines flowing out of the higher commander’s intent, and local headlines that are more focused on the immediate engagement area.

There is benefit to be gained from the expertise of journalists, both Indian and foreign. Their practical experience will be invaluable when deciding on the correct headline and pinpointing a target audience. Journalists are already seen in war zones and media rooms as reporters; getting them into the operations room as planners is worthy of consideration.

An array of reporters, platforms, and mediums can be kept ready to carry the desired headline far and wide. Freelance journalists in foreign countries, coupled with the internet, will be a potent combination. In addition, the military’s public information organization cannot succeed in this new reality without restructuring.

Every battle in military history has the name of some commander attached to it: Hannibal crossing the Alps, U.S. General George S. Patton’s exploits during the Battle of the Bulge, and Indian Colonel Desmond Hayde in the Battle of Dograi. The day is not far off when some field commander will etch his or her name in history fighting the Battle of the Headline or, more aptly, the Battle for the Headline.      


Endnotes:

[1] BBC. (2019, February 26). Balakot: Indian air strikes target militants in Pakistan. BBC News. https://www.bbc.com/news/world-asia-47366718.

[2] IMDb.com. (1997, December 19). Tomorrow Never Dies. IMDb. https://www.imdb.com/title/tt0120347.

[3] Olson, P. (2011, August 11). Man Inadvertently Live Tweets Osama Bin Laden Raid. Forbes. https://www.forbes.com/sites/parmyolson/2011/05/02/man-inadvertently-live-tweets-osama-bin-laden-raid.

Assessment Papers India Influence Operations Information and Intelligence Samir Srivastava Social Media

Assessing the Impact of the Information Domain on the Classic Security Dilemma from Realist Theory

Scott Harr is a U.S. Army Special Forces officer with deployment and service experience throughout the Middle East.  He has contributed articles on national security and foreign policy topics to military journals and professional websites focusing on strategic security issues.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Impact of the Information Domain on the Classic Security Dilemma from Realist Theory

Date Originally Written:  September 26, 2020.

Date Originally Published:  December 2, 2020.

Author and / or Article Point of View:  The author believes that realist theory of international relations will have to take into account the weaponization of information in order to continue to be viable.

Summary:  The weaponization of information as an instrument of security has re-shaped the traditional security dilemma faced by nation-states under realist theory. While yielding to the anarchic ordering principle from realist thought, the information domain also extends the classic security dilemma and layers it with new dynamics. These dynamics put liberal democracies on the defensive compared to authoritarian regimes.

Text:  According to realist theory, the Westphalian nation-state exists in a self-interested international community[1]. Because of the lack of binding international law, anarchy, as an ordering principle, characterizes the international environment as each nation-state, not knowing the intentions of those around it, is incentivized to provide for its own security and survival[2]. This self-help system differentiates insecure nations according to their capabilities to provide and project security. While this state-of-play within the international community holds the structure together, it also creates a classic security dilemma: the more each insecure state invests in its own security, the more such actions are interpreted as aggression by other insecure states which initiates and perpetuates a never-ending cycle of escalating aggression amongst them[3]. Traditionally, the effects of the realist security dilemma have been observed and measured through arms-races between nations or the general buildup of military capabilities. In the emerging battlefield of the 21st century, however, states have weaponized the Information Domain as both nation-states and non-state actors realize and leverage the power of information (and new ways to transmit it) to achieve security objectives. Many, like author Sean McFate, see the end of traditional warfare as these new methods captivate entities with security interests while altering and supplanting the traditional military means to wage conflict[4]. If the emergence and weaponization of information technology is changing the instruments of security, it is worth assessing how the realist security dilemma may be changing along with it.

One way to assess the Information Domain’s impact on the realist security dilemma is to examine the ordering principle that undergirds this dilemma. As mentioned above, the realist security dilemma hinges on the anarchic ordering principle of the international community that drives (compels) nations to militarily invest in security for their survival. Broadly, because no (enforceable) international law exists to uniformly regulate nation-state actions weaponizing information as a security tool, the anarchic ordering principle still exists. However, on closer inspection, while the anarchic ordering principle from realist theory remains intact, the weaponization of information creates a domain with distinctly different operating principles for nation-states existing in an anarchic international environment and using information as an instrument of security. Nation-states espousing liberal-democratic values operate on the premise that information should flow freely and (largely) uncontrolled or regulated by government authority. For this reason, countries such as the United States do not have large-scale and monopolistic “state-run” information or media channels. Rather, information is, relatively, free to flow unimpeded on social media, private news corporations, and print journalism. Countries that leverage the “freedom” operating principle for information implicitly rely on the strength and attractiveness of liberal-democratic values endorsing liberty and freedom as the centerpiece for efforts in the information domain. The power of enticing ideals, they seem to say, is the best application of power within the Information Domain and surest means to preserve security. Nevertheless, reliance on the “freedom” operating principle puts liberal democratic countries on the defensive when it comes to the security dimensions of the information domain.

In contrast to the “freedom” operating principle employed by liberal democratic nations in the information domain, nations with authoritarian regimes utilize an operating principle of “control” for information. According to authors Irina Borogan and Andrei Soldatov, when the photocopier was first invented in Russia in the early 20th century, Russian authorities promptly seized the device and hid the technology deep within government archives to prevent its proliferation[5]. Plainly, the information-disseminating capabilities implied by the photocopier terrified the Russian authorities. Such paranoid efforts to control information have shaped the Russian approach to information technology through every new technological development, from the telephone to the computer to the internet. Since authoritarian regimes maintain tight control of information as their operating principle, they remain less concerned about adhering to liberal values and can thus assume a more offensive stance in the information domain. For this reason, the Russian use of information technology is characterized by wide-scale distributed denial of service attacks on opposition voices domestically and “patriot hackers” spreading disinformation internationally to achieve security objectives[6]. Plausible deniability surrounding information used in this way allows authoritarian regimes to skirt and obscure the ideological values cherished by liberal democracies under the “freedom” ordering principle.

The realist security dilemma is far too durable to be abolished at the first sign of nation-states developing and employing new capabilities for security. But even as the weaponization of information has not abolished the classic realist dilemma, it has undoubtedly extended and complicated it by adding a new layer with new considerations. Whereas in the past the operating principles of nation-states addressing their security has been uniformly observed through the straight-forward build-up of overtly military capabilities, the information domain, while preserving the anarchic ordering principle from realist theory, creates a new dynamic where nation-states employ opposite operating principles in the much-more-subtle Information Domain. Such dynamics create “sub-dilemmas” for liberal democracies put on the defensive in the Information Domain. As renowned realist scholar Kenneth Waltz notes, a democratic nation may have to “consider whether it would prefer to violate its code of behavior” (i.e. compromise its liberal democratic values) or “abide by its code and risk its survival[7].” This is the crux of the matter as democracies determine how to compete in the Information Domain and all the challenges it poses (adds) to the realist security dilemma: they must find a way to leverage the strength (and attractiveness) of their values in the Information Domain while not succumbing to temptations to forsake those values and stoop to the levels of adversaries. In sum, regarding the emerging operating principles, “freedom” is the harder right to “control’s” easier wrong. To forget this maxim is to sacrifice the foundations that liberal democracies hope to build upon in the international community.


Endnotes:

[1] Waltz, Kenneth. Realism and International Politics. New York: Taylor and Francis, 2008.

[2] Ibid, Waltz, Realism.

[3] Ibid, Waltz, Realism.

[4] McFate, Sean. The New Rules of War: Victory in the Age of Durable Disorder. New York: Harper Collins Press, 2019.

[5] Soldatov, Andrei and Borogan, Irina. The Red Web: The Struggle Between Russia’s Digital Dictators and the New Online Revolutionaries. New York: Perseus Books Group, 2015.

[6] Ibid, Soldatov.

[7] Waltz, Kenneth Neal. Man, the State, and War: A Theoretical Analysis. New York: Columbia University Press, 1959.

Assessment Papers Cyberspace Influence Operations Scott Harr

Assessment of Opportunities to Engage with the Chinese Film Market

Editor’s Note:  This article is part of our Below Threshold Competition: China writing contest which took place from May 1, 2020 to July 31, 2020.  More information about the contest can be found by clicking here.


Irk is a freelance writer. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of Opportunities to Engage with the Chinese Film Market

Date Originally Written:  July 29, 2020.

Date Originally Published:  November 11, 2020.

Author and / or Article Point of View:  The author believes that the film industry remains a relatively underexploited channel that can be used to shape the soft power dynamic in the U.S.-China relationship.

Summary:  While China’s film industry has grown in recent years, the market for Chinese films remains primarily domestic. Access to China’s film market remains heavily restricted, allowing the Chinese Communist Party to craft a film industry that can reinforce its values at home and abroad. However, there are opportunities for the United States to liberalize the Chinese film market which could contribute to long-term social and political change.

Text:  The highest-grossing Chinese film is 2017’s Wolf Warrior 2, netting nearly $900 million globally. The only problem, from the perspective of the Chinese Communist Party (CCP), is that a mere 2% of this gross came from outside the country – a troubling pattern replicated across many of China’s most financially successful films[1]. Last year, PricewaterhouseCoopers predicted that the Chinese film market would surpass that of the United States (U.S.) in 2020, growing to a total value of $15.5 billion by 2023[2]. Despite tremendous growth by every metric – new cinema screens, films released, ticket revenue – the Chinese film industry has failed to market itself to the outside world[3].

This failure is not for lack of trying: film is a key aspect of China’s project to accumulate soft power in Africa[4], and may leave a significant footprint on the emergent film markets in many countries. The Chinese film offensive abroad has been paired with heavy-handed protectionism at home, fulfilling a desire to develop the domestic film industry and guard against the influence introduced by foreign films. In 1994 China instituted an annual quota on foreign films which has slowly crept upwards, sometimes being broken to meet growing demand[5]. But even so, the number of foreign films entering the Chinese market each year floats between only 30-40. From the perspective of the CCP, there may be good reasons to be so conservative. In the U.S., research has indicated that some films may nudge audiences in ideological directions[6] or change their opinion of the government[7]. As might be expected, Chinese censorship targets concepts like “sex, violence, and rebellious individualism”[8]. While it remains difficult to draw any definite conclusions from this research, the threat is sufficient for the CCP to carefully monitor what sorts of messaging (and how much) it makes widely available for consumption. In India, economic liberalization was reflected in the values expressed by the most popular domestic films[9] – if messaging in film can be reflected in political attitudes, and political attitudes can be reflected in messaging in film, there is the possibility of a slow but consistent feedback loop creating serious social change. That is, unless the government clamps down on this relationship.

China’s “national film strategy” has gone largely uncountered by the U.S., in spite of its potential relevance to political change within the country. In 2018, Hollywood’s attempt to push quota liberalization was largely sidelined[10], and earlier this year the Independent Film & Television Alliance stated that little progress had been made since the start of the China-U.S. trade war[11]. Despite all this, 2018 revealed that quota liberalization was something China was willing to negotiate. This is an opportunity that could be exploited to begin seriously engaging with China’s approach to film.

In a reappraisal of common criticisms levied against Chinese engagement in the late 1990s and early 2000s, Alastair Iain Johnston of Harvard University notes that Chinese citizens with more connections to the outside world (facilitated by opening and reform) have developed “more liberal worldviews and are less nationalistic on average than older or less internationalized members of Chinese societies”[12]. The primary market for foreign films in China is this group of “internationalized” urban citizens, both those with higher disposable income in Guangdong, Zhejiang, Shanghai, Jiangsu, Beijing, and Tianjin[13] and those in non-coastal “Anchor Cities” which are integrated into transport networks and often boast international airports[14]. These demographics are both likely to be more amenable to the messaging in foreign films and capable of consuming them in large amounts.

During future trade negotiations, the U.S. could aggressively pursue the concession China has offered regarding film quotas, raising the cap as high as possible. In exchange, the United States Trade Representative could offer to revoke tariffs imposed since the start of the trade war. As an example, the “phase one” trade deal was able to secure commitments from China solely by promising not to impose further tariffs and cutting a previous tariff package by 50%[15]. The commitments asked of China in that agreement are far more financially intensive than film market liberalization, but it is difficult to put a price tag on the ideological component of film. Even so, the party has demonstrated willingness to put the quota on the table, and this offer could be explored as part of a strategy to effect change within China.

In addition to focusing on quota liberalization in trade negotiations, state and city governments in the U.S. could engage in local diplomacy to establish cultural exchange through film. In 2017, China initiated a China-Africa film festival[16], and a similar model could be pursued by local government in the U.S. The low appeal of Chinese films outside of China (compared to the high appeal of American films within China) means that the exchange would likely be a “net gain” for the U.S. in terms of cultural impression. Chinese localities with citizens more open to foreign film would have another avenue of engagement, while Chinese producers who wanted to take advantage of the opportunity to present in exclusive U.S. markets may have to adjust the overtones in their films, possibly shedding some nationalist messaging. Federal or local government could provide incentives for theaters to show films banned in China for failing to meet these messaging standards. Films like A Touch of Sin that have enjoyed critical acclaim within the U.S. could reach a wider audience and create an alternate current of Chinese film in opposition to CCP preference.

Disrupting the development of China’s film industry may provide an opportunity to initiate a process of long-term attitudinal change in a wealthy and open segment of the Chinese population. At the same time, increasing the market share of foreign films and creating countervailing notions of “the Chinese film” could make China’s soft power accumulation more difficult. Hollywood is intent on marketing to China; instead of forcing studios to collaborate with Chinese censors, it may serve American strategic objectives to allow competition to consume the Chinese market. If Chinese film producers adapt in response, they will have to shed certain limitations. Either way, slow-moving change will have taken root.


Endnotes:

[1] Magnan-Park, A. (2019, May 29). The global failure of cinematic soft power ‘with Chinese characteristics’. Retrieved July 29, 2020, from https://theasiadialogue.com/2019/05/27/the-global-failure-of-cinematic-soft-power-with-chinese-characteristics

[2] PricewaterhouseCoopers. (2019, June 17). Strong revenue growth continues in China’s cinema market. Retrieved July 29, 2020, from https://www.pwccn.com/en/press-room/press-releases/pr-170619.html

[3] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020, from https://chinapower.csis.org/chinese-films

[4] Wu, Y. (2020, June 24). How media and film can help China grow its soft power in Africa. Retrieved July 30, 2020, from https://theconversation.com/how-media-and-film-can-help-china-grow-its-soft-power-in-africa-97401

[5] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020, from https://chinapower.csis.org/chinese-films

[6] Glas, J. M., & Taylor, J. B. (2017). The Silver Screen and Authoritarianism: How Popular Films Activate Latent Personality Dispositions and Affect American Political Attitudes. American Politics Research, 46(2), 246-275. doi:10.1177/1532673x17744172

[7] Pautz, M. C. (2014). Argo and Zero Dark Thirty: Film, Government, and Audiences. PS: Political Science & Politics, 48(01), 120-128. doi:10.1017/s1049096514001656

[8] Do Chinese films hold global appeal? (2020, March 13). Retrieved July 29, 2020, from https://chinapower.csis.org/chinese-films

[9] Adhia, N. (2013). The role of ideological change in India’s economic liberalization. The Journal of Socio-Economics, 44, 103-111. doi:10.1016/j.socec.2013.02.015

[10] Li, P., & Martina, M. (2018, May 20). Hollywood’s China dreams get tangled in trade talks. Retrieved July 29, 2020, from https://www.reuters.com/article/us-usa-trade-china-movies/hollywoods-china-dreams-get-tangled-in-trade-talks-idUSKCN1IK0W0

[11] Frater, P. (2020, February 15). IFTA Says U.S. Should Punish China for Cheating on Film Trade Deal. Retrieved July 30, 2020, from https://variety.com/2020/film/asia/ifta-china-film-trade-deal-1203505171

[12] Johnston, A. I. (2019). The Failures of the ‘Failure of Engagement’ with China. The Washington Quarterly, 42(2), 99-114. doi:10.1080/0163660x.2019.1626688

[13] Figure 2.4 Urban per capita disposable income, by province, 2017. (n.d.). Retrieved July 30, 2020, from https://www.unicef.cn/en/figure-24-urban-capita-disposable-income-province-2017

[14] Liu, S., & Parilla, J. (2019, August 08). Meet the five urban Chinas. Retrieved July 30, 2020, from https://www.brookings.edu/blog/the-avenue/2018/06/19/meet-the-five-urban-chinas

[15] Lawder, D., Shalal, A., & Mason, J. (2019, December 14). What’s in the U.S.-China ‘phase one’ trade deal. Retrieved July 30, 2020, from https://www.reuters.com/article/us-usa-trade-china-details-factbox/whats-in-the-u-s-china-phase-one-trade-deal-idUSKBN1YH2IL

[16] Fei, X. (2017, June 19). China Africa International Film Festival to open in October. Retrieved July 30, 2020, from http://chinaplus.cri.cn/news/showbiz/14/20170619/6644.html


Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Marijn Pronk is a Master’s student at the University of Glasgow, focusing on identity politics, propaganda, and technology. She is currently finishing her dissertation on the use of populist propaganda tactics by the far-right online. She can be found on Twitter @marijnpronk9. Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessing the Threat posed by Artificial Intelligence and Computational Propaganda

Date Originally Written:  April 1, 2020.

Date Originally Published:  May 18, 2020.

Author and / or Article Point of View:  The author is a Master’s student in Security, Intelligence, and Strategic Studies at the University of Glasgow. The author believes that a nuanced perspective on the influence of Artificial Intelligence (AI) on communication technologies is paramount to understanding its threat.

Summary:  AI has greatly impacted communication technology worldwide. Computational propaganda, the weaponization of AI and online networks for malign political purposes, is one example of its unregulated use. Botnets that distort online environments could damage electoral health and democracies’ ability to function. However, this type of AI currently remains limited to Big Tech companies and governmental powers.

Text:  A cornerstone of the democratic political structure is media; an unbiased, uncensored, and unaltered flow of information is paramount to the health of the democratic process. In a fluctuating political environment, digital spaces and technologies offer powerful platforms for political action and civic engagement[1]. More people now use Facebook as their main source of news than any single news organization[2]. Manipulating the flow of information in the digital sphere therefore threatens not only the democratic values the internet was founded upon, but also the health of democracies worldwide. Imagine a world where those pillars of democracy can be artificially altered, where actors can manipulate the digital information sphere, from the content of information to its reach. In such a scenario, one would be unable to distinguish real from fake, rendering critical perspectives obsolete. One practical embodiment of this phenomenon is computational propaganda, the digital misinformation and manipulation of public opinion via the internet[3]. These practices range from the fabrication of messages and the artificial amplification of certain information to the highly influential use of botnets (networks of software applications programmed to perform certain tasks). With the emergence of AI, computational propaganda could be enhanced: its outcomes can become qualitatively better and more difficult to spot.

Computational propaganda is defined as “the assemblage of social media platforms, autonomous agents, algorithms, and big data tasked with manipulating public opinion[3].” AI has the power to enhance computational propaganda in various ways, such as increasing the amplification and reach of political disinformation through bots. Qualitatively, AI can also increase the sophistication and automation of the bots themselves. AI already plays an intrinsic role in the gathering process, being used to mine individuals’ online activity and to monitor and process large volumes of online data. Datamining combines tools from AI and statistics to recognize useful patterns in large datasets[4]. These technologies and databases are often grounded in the digital advertising industry. With the help of AI, data collection can be done in a more targeted and thus more efficient way.
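As a toy illustration of the pattern-recognition step that datamining performs, the sketch below builds a crude per-user interest profile from keyword counts. This is the kind of signal ad-tech systems (and, in misuse, propaganda targeting) derive at vastly larger scale; the keyword list and posts here are entirely hypothetical.

```python
from collections import Counter

# Hypothetical topic keywords; a real advertising pipeline would use far
# richer features (embeddings, browsing history, social graph signals).
KEYWORDS = {"election", "economy", "climate", "immigration"}

def interest_profile(posts):
    """Count topic keywords across a user's posts to infer interests."""
    counts = Counter()
    for post in posts:
        for word in post.lower().split():
            if word in KEYWORDS:
                counts[word] += 1
    return counts

posts = ["The economy is struggling", "Economy news again", "Climate report out"]
profile = interest_profile(posts)
print(profile.most_common(1)[0][0])  # the user's dominant inferred interest
```

A targeting system would then serve this user content matched to that dominant interest, which is why the same machinery that makes advertising efficient also makes personalized disinformation efficient.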

Concerning the malicious use of these techniques in the realm of computational propaganda, these improvements in AI can enhance “[..] the processes that enable the creation of more persuasive manipulations of visual imagery, and enabling disinformation campaigns that can be targeted and personalized much more efficiently[4].” Botnets are still largely reliant on human input for their political messages, but AI can also improve bots’ capability to interact with humans online, making them seem more credible. Though the self-learning capabilities of current chatbots are relatively rudimentary, improved automation through AI-aided computational propaganda tools could be a powerful means of influencing public opinion. The self-learning aspect of AI-powered bots, and the increasing volume of data available for training them, give rise to concern: “[..] advances in deep and machine learning, natural language understanding, big data processing, reinforcement learning, and computer vision algorithms are paving the way for the rise in AI-powered bots, that are faster, getting better at understanding human interaction and can even mimic human behaviour[5].” With this improved automation and data-gathering power, computational propaganda tools aided by AI could act more precisely by improving the data-gathering process both quantitatively and qualitatively. Hyper-specialized data, combined with bots’ growing contextual understanding and credibility online, can greatly enhance the capabilities and effects of computational propaganda.

However, AI’s capabilities should be put in perspective in three areas: data, the power of the AI, and the quality of the output. Starting with data, technical knowledge is necessary to work with the massive databases used for audience targeting[6]. This level of AI is within the capabilities of a nation-state or a big corporation, but it remains out of reach for the masses[7]. Secondly, the level of entrenchment and strength of the AI determines its final capabilities. One must distinguish between ‘narrow’ and ‘strong’ AI to assess the possible threat to society. Narrow AI is rule-based: data runs through multiple levels coded with algorithmic rules, and the AI applies those rules to reach a decision. Strong AI means the model can learn from the data and adapt its set of pre-programmed rules itself, without human interference (this is called ‘Artificial General Intelligence’). Currently, such strong AI is still a concept of the future. Human labour still creates the content for the bots to distribute, simply because the AI is not powerful enough to think outside its pre-programmed box of rules and therefore cannot (yet) create its own content based solely on the data fed to the model[7]. So, computational propaganda depends on narrow AI, which requires a relatively large amount of high-quality data to yield accurate results; deviating from its programmed path or task severely reduces its effectiveness[8]. Thirdly, the output produced by computational propaganda tools varies greatly in quality. The real danger lies in the quantity of information that botnets can spread. As for chatbots, which are supposed to be high quality and indistinguishable from humans, these models often fail when tried outside their training-data environments.
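The narrow-AI limitation described above can be made concrete with a minimal sketch: a chatbot that only maps pre-programmed triggers to canned, human-authored replies. The trigger table is entirely hypothetical; the point is that outside its rules the bot produces nothing, mirroring the text's claim that current bots still depend on human-written content.

```python
# Hypothetical trigger -> reply rules, authored by a human operator.
# A "narrow" rule-based bot can only ever emit what is in this table.
RULES = {
    "hello": "Hi there!",
    "election": "Have you seen the latest polls?",
}

def narrow_bot(message):
    """Return a canned reply if any known trigger appears, else None."""
    for trigger, reply in RULES.items():
        if trigger in message.lower():
            return reply
    return None  # outside the pre-programmed rules, the bot simply fails

print(narrow_bot("Hello everyone"))          # matches a rule
print(narrow_bot("What about the budget?"))  # no rule applies
```

Deviating from the programmed path, as when the second message falls outside the rule set, is exactly where such systems lose effectiveness; a strong AI, by contrast, would be able to generate a novel reply, which no deployed system can yet do.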

To address this emerging threat, policy changes are happening across the media ecosystem to mitigate the effects of disinformation[9]. Secondly, researchers have recently investigated the possibility of AI assisting in combating falsehoods and bots online[10]. One proposal is to build automated and semi-automated systems on the web for fact-checking and content analysis. Eventually, such bottom-up solutions could considerably help counter the effects of computational propaganda. Thirdly, the influence that Big Tech companies have on these issues cannot be ignored: their accountability for the creation of these problems, and their power to mitigate them, must be considered. Top-to-bottom co-operation between states and the public will be paramount. “The technologies of precision propaganda do not distinguish between commerce and politics. But democracies do[11].”
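To illustrate one signal such a semi-automated detection system might use, the sketch below flags accounts whose inter-post intervals are machine-regular, since scheduled bots often post at near-constant intervals while human activity is bursty. The threshold and the timestamp data are illustrative assumptions, not drawn from any real detection system.

```python
import statistics

def looks_automated(timestamps, max_stdev=2.0):
    """Flag an account whose posting gaps are suspiciously regular.

    timestamps: posting times in seconds, sorted ascending.
    max_stdev: hypothetical threshold on the spread of inter-post gaps.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    return statistics.pstdev(gaps) < max_stdev

bot_like = [0, 60, 120, 180, 240]     # posts exactly every 60 seconds
human_like = [0, 45, 400, 410, 2000]  # irregular bursts of activity
print(looks_automated(bot_like), looks_automated(human_like))
```

A production system would combine many such features (content similarity, account age, network position) and keep a human in the loop, which is why the literature describes these tools as semi-automated rather than fully automatic.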


Endnotes:

[1] Vaccari, C. (2017). Online Mobilization in Comparative Perspective: Digital Appeals and Political Engagement in Germany, Italy, and the United Kingdom. Political Communication, 34(1), pp. 69-88. doi:10.1080/10584609.2016.1201558

[2] Majo-Vazquez, S., & González-Bailón, S. (2018). Digital News and the Consumption of Political Information. In G. M. Forthcoming, & W. H. Dutton, Society and the Internet. How Networks of Information and Communication are Changing Our Lives (pp. 1-12). Oxford: Oxford University Press. doi:10.2139/ssrn.3351334

[3] Woolley, S. C., & Howard, P. N. (2018). Introduction: Computational Propaganda Worldwide. In S. C. Woolley, & P. N. Howard, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media (pp. 1-18). Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.003.0001

[4] Wardle, C. (2018, July 6). Information Disorder: The Essential Glossary. Retrieved December 4, 2019, from First Draft News: https://firstdraftnews.org/latest/infodisorder-definitional-toolbox

[5] Dutt, D. (2018, April 2). Reducing the impact of AI-powered bot attacks. CSO. Retrieved December 5, 2019, from https://www.csoonline.com/article/3267828/reducing-the-impact-of-ai-powered-bot-attacks.html

[6] Bolsover, G., & Howard, P. (2017). Computational Propaganda and Political Big Data: Moving Toward a More Critical Research Agenda. Big Data, 5(4), pp. 273–276. doi:10.1089/big.2017.29024.cpr

[7] Chessen, M. (2017). The MADCOM Future: how artificial intelligence will enhance computational propaganda, reprogram human culture, and threaten democracy… and what can be done about it. Washington DC: The Atlantic Council of the United States. Retrieved December 4, 2019

[8] Davidson, L. (2019, August 12). Narrow vs. General AI: What’s Next for Artificial Intelligence? Retrieved December 11, 2019, from Springboard: https://www.springboard.com/blog/narrow-vs-general-ai

[9] Hassan, N., Li, C., Yang, J., & Yu, C. (2019, July). Introduction to the Special Issue on Combating Digital Misinformation and Disinformation. ACM Journal of Data and Information Quality, 11(3), 1-3. Retrieved December 11, 2019

[10] Woolley, S., & Guilbeault, D. (2017). Computational Propaganda in the United States of America: Manufactoring Consensus Online. Oxford, UK: Project on Computational Propaganda. Retrieved December 5, 2019

[11] Ghosh, D., & Scott, B. (2018, January). #DigitalDeceit: The Technologies Behind Precision Propaganda on the Internet. Retrieved December 11, 2019, from New America: https://www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit
