Assessment of the Role of Cyber Power in Interstate Conflict

Eric Altamura is a graduate student in the Security Studies Program at Georgetown University’s School of Foreign Service. He previously served for four years on active duty as an armor officer in the United States Army.  He regularly writes for Georgetown Security Studies Review and can be found on Twitter @eric_senlu.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Assessment of the Role of Cyber Power in Interstate Conflict

Date Originally Written:  May 05, 2018 / Revised for Divergent Options July 14, 2018.

Date Originally Published:  September 17, 2018.

Summary:  The targeting of computer networks and digitized information during war can prevent escalation by providing an alternative means for states to create the strategic effects necessary to accomplish limited objectives, thereby bolstering the political viability of the use of force as a lever of state power.

Text:  Prussian General and military theorist Carl von Clausewitz wrote that, in reality, a belligerent uses “no greater force, and set[s] himself no greater military aim, than would be sufficient for the achievement of his political purpose.”  State actors have thus far opted to limit cyberattacks in size and scope, tailoring them to specific political objectives when choosing to target information to accomplish desired outcomes.  This limiting occurs because, as warfare approaches its unlimited form in cyberspace, computer network attacks increasingly affect the physical domain in areas where societies have become reliant upon IT systems for everyday functions.  Many government and corporate network servers host data from industrial control systems (ICS) or supervisory control and data acquisition (SCADA) systems that control power generation, utilities, and virtually all other public services.  Broader attacks on an adversary’s networks consequently affect the populations supported by these systems, so the impacts of an attack go beyond simply denying an opponent the ability to communicate through digital networks.

At some point, a threshold exists beyond which it becomes more practical for states to target an adversary’s physical assets directly rather than through information systems.  Unlimited cyberattacks on infrastructure would come close to replicating warfare in its total form, in which the goal is to fully disarm an opponent of its means to generate resistance; at that scale, states become willing to expend far greater resources and effort towards accomplishing their objectives.  In this case, cyber power decreases in utility relative to the use of physical munitions (i.e., bullets and bombs) as the scale of warfare increases, mainly due to the lower probability of producing enduring effects in cyberspace.  As such, the targeting and attacking of an opponent’s digital communication networks tends to occur in a more limited fashion, because alternative levers of state power provide more reliable solutions as warfare nears its absolute form.  In other words, cyberspace offers far more value to states seeking to accomplish limited political objectives than to those waging total war against an adversary.

To understand how actors attack computer systems and networks to accomplish limited objectives during war, one must first identify what states actually seek to accomplish in cyberspace. Just as the prominent British naval historian Julian Corbett explains that command of the sea does not entail “the conquest of water territory,” states do not use information technology for the purpose of conquering the computer systems and supporting infrastructure that comprise an adversary’s information network. Furthermore, cyberattacks do not occur in isolation from the broader context of war, nor do they need to result in the total destruction of the enemy’s capabilities to successfully accomplish political objectives. Rather, the tactical objective in any environment is to exploit the activity that takes place within it – in this case, the communication of information across a series of interconnected digital networks – in a way that provides a relative advantage in war. Once the enemy’s communication of information is exploited, and an advantage achieved, states can then use force to accomplish otherwise unattainable political objectives.

Achieving such an advantage requires targeting the key functions and assets in cyberspace that enable states to accomplish political objectives. Italian General Giulio Douhet, an airpower theorist, describes command of the air as “the ability to fly against an enemy so as to injure him, while he has been deprived of the power to do likewise.” Whereas airpower theorists propose targeting airfields, alongside destroying airplanes, as ways to deny an adversary access to the air, a similar concept prevails with cyber power. To deny an opponent the ability to utilize cyberspace for its own purposes, states can either attack information directly or target the means by which the enemy communicates its information. Once an actor achieves uncontested use of cyberspace, it can control or manipulate information for its own limited purposes, particularly to prevent the escalation of war toward its total form.

More specifically, the ability to communicate information while preventing an adversary from doing so has a limiting effect on warfare for three reasons. First, access to information through networked communications systems provides a decisive advantage to military forces by allowing for “analyses and synthesis across a variety of domains” that enables rapid and informed decision-making at all echelons. The greater the decision advantage one military force has over another, the less costly military action becomes. Second, the ubiquity of networked information technologies creates an alternative way for actors to affect targets that would otherwise be politically, geographically, or normatively infeasible to strike with physical munitions. Finally, actors can mask their activities in cyberspace, which makes attribution difficult. This added layer of ambiguity enables face-saving measures by opponents, who can opt not to respond to attacks overtly without necessarily appearing weak.

In essence, cyber power has become particularly useful to states as a tool for preventing conflict escalation, since an opponent denied access to communication networks becomes constrained in its ability to respond to attacks. Societies’ dependence on information technology, and their resulting vulnerability to computer network attacks, continue to increase, indicating that interstate violence may become more prevalent in the near term if aggressors can use cyberattacks to decrease the likelihood of escalation by an adversary.


Endnotes:

[1] von Clausewitz, C. (1976). On War. (M. Howard, & P. Paret, Trans.) Princeton: Princeton University Press.

[2] United States Computer Emergency Readiness Team. (2018, March 15). Russian Government Cyber Activity Targeting Energy and Other Critical Infrastructure Sectors. (United States Department of Homeland Security) Retrieved May 1, 2018, from https://www.us-cert.gov/ncas/alerts/TA18-074A

[3] Fischer, E. A. (2016, August 12). Cybersecurity Issues and Challenges: In Brief. Retrieved May 1, 2018, from https://fas.org/sgp/crs/misc/R43831.pdf

[4] Corbett, J. S. (2005, February 16). Some Principles of Maritime Strategy. (S. Shell, & K. Edkins, Eds.) Retrieved May 2, 2018, from The Project Gutenberg: http://www.gutenberg.org/ebooks/15076

[5] Ibid.

[6] Douhet, G. (1942). The Command of the Air. (D. Ferrari, Trans.) New York: Coward-McCann.

[7] Singer, P. W., & Friedman, A. (2014). Cybersecurity and Cyberwar: What Everyone Needs to Know. New York: Oxford University Press.

[8] Boyd, J. R. (2010, August). The Essence of Winning and Losing. (C. Richards, & C. Spinney, Eds.) Atlanta.


Options to Manage the Risks of Integrating Artificial Intelligence into National Security and Critical Industry Organizations

Lee Clark is a cyber intelligence analyst.  He holds an MA in intelligence and international security from the University of Kentucky’s Patterson School.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  What are the potential risks of integrating artificial intelligence (AI) into national security and critical infrastructure organizations and potential options for mitigating these risks?

Date Originally Written:  May 19, 2018.

Date Originally Published:  July 2, 2018.

Author and / or Article Point of View:  The author is currently an intelligence professional focused on threats to critical infrastructure and the private sector.  This article will use the U.S. Department of Homeland Security’s definition of “critical infrastructure,” referring to 16 public and private sectors that are deemed vital to the U.S. economy and national functions.  The designated sectors include financial services, emergency response, food and agriculture, energy, government facilities, defense industry, transportation, critical manufacturing, communications, commercial facilities, chemical production, civil nuclear functions, dams, healthcare, information technology, and water/wastewater management[1].  This article will examine broad options, including legal protections and contingency planning, for mitigating some of the most prevalent non-technical risks of AI integration.

Background:  The benefits of incorporating AI into the daily functions of an organization are widely championed in both the private and public sectors.  The technology can revolutionize facets of government and private sector functions like record keeping, data management, and customer service, for better or worse.  But bringing AI into the workplace carries significant risks on several fronts, including privacy/security of information, record keeping/institutional memory, and decision-making.  Additionally, the technology carries a risk of backlash over job losses as automation increases in the global economy, especially as it reaches more skilled labor.  The national security and critical industry spheres are not facing an existential threat, but these risks cannot be dismissed.

Significance:  Real-world examples of these concerns have been reported in open source, with clear implications for major corporations and national security organizations.  In terms of record keeping/surveillance issues, one need only look to recent court cases in which authorities subpoenaed the records of an Amazon Alexa, an appliance that acts as a digital personal assistant via a rudimentary AI system.  Such subpoenas become especially concerning to users given recent reports of Alexa devices being converted into spying tools[2].  Critical infrastructure organizations, especially defense, finance, and energy companies, exist within complex legal frameworks that involve international laws and security concerns, making legal protections of AI data all the more vital.

In the case of issues involving decision-making and information security, the dangers are no less severe.  AIs are susceptible to a variety of methods that seek to manipulate decision-making, including social engineering and, more specifically, disinformation efforts.  Perhaps the most evident case of social engineering against an AI is the instance in which Microsoft’s chatbot endorsed genocidal statements after a brief conversation with users on Twitter[3].  If it is possible to convince an AI to support genocide, it is not difficult to imagine convincing it to divulge state secrets or turn over financial information once key information is fed to it in a meaningful sequence[4].  In another public instance, an Amazon Echo device recently recorded a private conversation in an owner’s home and sent it to another user without requesting the owner’s permission[5].  Similar instances are easy to foresee in a critical infrastructure organization such as a nuclear energy plant, where an AI might send proprietary information to an uncleared user.

AI decisions also have the capacity to surprise the developers and engineers tasked with maintenance, which could present problems of data recovery and control.  For instance, developers discovered that Facebook’s AI had begun writing a modified version of a coding language for efficiency, essentially creating its own code dialect and raising transparency concerns.  Losing the ability to examine and assess coding decisions presents problems for replicating processes and maintaining a system[6].

AI integration into industry also carries a significant risk of backlash from workers.  Economists and labor scholars have been discussing the impacts of automation and AI on employment and labor in the global economy.  This discussion is not merely theoretical, as evidenced by leaders of major tech companies publicly supporting basic income on the grounds that automation will likely replace a significant portion of the labor market in the coming decades[7].

Option #1:  Leaders in national security and critical infrastructure organizations work with internal legal teams to develop legal protections for organizations while lobbying for legislation to secure legal privileges for information stored by AI systems (perhaps resembling attorney-client privilege or spousal privileges).

Risk:  Legal teams may lack the technical knowledge to foresee some vulnerabilities related to AI.

Gain:  Option #1 proactively builds liability shields, protections, non-disclosure agreements, and other common legal tools to anticipate needs for AI-human interactions.

Option #2:  National security and critical infrastructure organizations build task forces to plan protocols and define a clear AI vision for organizations.

Risk:  In addition to common pitfalls of group work like bandwagoning and groupthink, this option is vulnerable to insider threats like sabotage or espionage.  There is also a risk that such groups may develop plans that are too rigid or short-sighted to be adaptive in unforeseen emergencies.

Gain:  Task forces can develop strategies and contingency plans for when emergencies arise.  Such emergencies could include hacks, data breaches, sabotage by rogue insiders, technical/equipment failures, or side effects of actions taken by an AI in a system.

Option #3:  Organization leaders work with intelligence and information security professionals to try to make AI more resilient against hacker methods, including distributed denial-of-service attacks, social engineering, and crypto-mining.

Risk:  Potential to “over-secure” systems, resulting in loss of efficiency or overcomplicating maintenance processes.

Gain:  Reduced risk of hacks or other attacks from malicious actors outside of organizations.
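
To make Option #3 concrete, consider rate limiting, a standard defense against the denial-of-service flooding named above.  The Python sketch below is a minimal, hypothetical illustration (the source prescribes no implementation): a token bucket that throttles how quickly any single client can query an AI system.  The class name, rates, and client identifier are assumptions for illustration.

import time
from collections import defaultdict

class TokenBucket:
    """Per-client token bucket: refills at `rate` tokens per second, up to `capacity`."""

    def __init__(self, rate: float = 1.0, capacity: float = 5.0):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: capacity)   # each new client starts with a full bucket
        self.last_seen = defaultdict(time.monotonic)  # first access stamps the current time

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen[client_id]
        self.last_seen[client_id] = now
        # Refill tokens accrued since the last request, capped at capacity.
        self.tokens[client_id] = min(self.capacity, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False  # client is flooding; reject, queue, or alert

# Gate each incoming AI query on the caller's identity.
limiter = TokenBucket(rate=2.0, capacity=10.0)
if limiter.allow("analyst-42"):
    print("query forwarded to the AI system")
else:
    print("query rejected: rate limit exceeded")

Tuning the refill rate is where Option #3’s risk surfaces: set it too low and legitimate users are locked out, the loss of efficiency from “over-securing” noted above.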

Other Comments:  None.

Recommendation: None.


Endnotes:

[1] DHS. (2017, July 11). Critical Infrastructure Sectors. Retrieved May 28, 2018, from https://www.dhs.gov/critical-infrastructure-sectors

[2] Boughman, E. (2017, September 18). Is There an Echo in Here? What You Need to Consider About Privacy Protection. Retrieved May 19, 2018, from https://www.forbes.com/sites/forbeslegalcouncil/2017/09/18/is-there-an-echo-in-here-what-you-need-to-consider-about-privacy-protection/

[3] Price, R. (2016, March 24). Microsoft Is Deleting Its AI Chatbot’s Incredibly Racist Tweets. Retrieved May 19, 2018, from http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3

[4] Osaba, O. A., & Welser, W., IV. (2017, December 06). The Risks of AI to Security and the Future of Work. Retrieved May 19, 2018, from https://www.rand.org/pubs/perspectives/PE237.html

[5] Shaban, H. (2018, May 24). An Amazon Echo recorded a family’s conversation, then sent it to a random person in their contacts, report says. Retrieved May 28, 2018, from https://www.washingtonpost.com/news/the-switch/wp/2018/05/24/an-amazon-echo-recorded-a-familys-conversation-then-sent-it-to-a-random-person-in-their-contacts-report-says/

[6] Bradley, T. (2017, July 31). Facebook AI Creates Its Own Language in Creepy Preview Of Our Potential Future. Retrieved May 19, 2018, from https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/

[7] Kharpal, A. (2017, February 21). Tech CEOs Back Call for Basic Income as AI Job Losses Threaten Industry Backlash. Retrieved May 19, 2018, from https://www.cnbc.com/2017/02/21/technology-ceos-back-basic-income-as-ai-job-losses-threaten-industry-backlash.html


Options for Next Generation Blue Force Biometrics

Sarah Soliman is a Technical Analyst at the nonprofit, nonpartisan RAND Corporation.  Sarah’s research interests lie at the intersection of national security, emerging technology, and identity.  She can be found on Twitter @BiometricsNerd.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Next Generation Biometrics for U.S. Forces.

Date Originally Written:  March 18, 2017.

Date Originally Published:  June 26, 2017.

Author and / or Article Point of View:  Sarah Soliman is a biometrics engineer who spent two years in Iraq and Afghanistan as contracted field support to Department of Defense biometrics initiatives.

Background:  When a U.S. Army specialist challenged Secretary of Defense Donald Rumsfeld in 2004, it became tech-innovation legend within the military.  The specialist asked what the secretary was doing to up-armor military vehicles against Improvised Explosive Device (IED) attacks[1].  This town hall question led to technical innovations that became the class of military vehicles known as Mine-Resistant Ambush Protected, the MRAP.

History repeated itself in a way last year when U.S. Marine Corps General Robert B. Neller was asked in a Marine Corps town hall what he was doing to “up-armor” military personnel—not against attacks from other forces, but against suicide within their ranks[2].  The technical innovation path to strengthening troop resiliency is less clear, but just as in need of an MRAP-like focus on solutions.  Here are three approaches to consider in applying “blue force” biometrics, the collection of physiological or behavioral data from U.S. military troops, that could help develop diagnostic applications to benefit individual servicemembers.

[Image 1: US Army Specialist Thomas Wilson addresses the Secretary of Defense on base in Kuwait in 2004.  Credit: Gustavo Ferrari / AP, http://www.nbcnews.com/id/6679801/ns/world_news-mideast_n_africa/t/rumsfeld-inquisitor-not-one-bite-his-tongue]

Significance:  The September 11th terrorists struck at a weakness—the United States’ ability to identify enemy combatants.  So the U.S. military took what was once blue force biometrics—measurements of human signatures like facial images, fingerprints, and deoxyribonucleic acid (DNA), all part of an enrolling military member’s record—and flipped their use to track combatants rather than its own personnel.  This shift led to record use of biometrics in Operation Iraqi Freedom and Operation Enduring Freedom to assist in green (partner), grey (unknown), and red (enemy) force identification.

After 9/11, the U.S. military rallied for advances in biometrics, developing mobile tactical handheld devices, creating databases of IED networks, and cutting the time it takes to analyze DNA from days to hours[3].  The U.S. military became highly equipped for a type of identification that validates a person is who they say they are, yet in some ways these red force biometric advances have plateaued alongside dwindling funding for overseas operations and troop presence.  As a biometric toolset is developed to up-armor military personnel for health concerns, it may be worth considering expanding the narrow definition of biometrics that the Department of Defense currently uses[4].

The options presented below represent research that is shifting from red force biometrics back to the need for more blue force diagnostics as they relate to traumatic brain injury, sleep, and social media.

Option #1:  Traumatic Brain Injury (TBI).

The bumps and grooves of the brain can contain identification information much like the loops and whorls in a fingerprint.  Science is only on the cusp of understanding the benefits of brain mapping, particularly as it relates to injury for military members[5].

Gain:  Research into Wearables.

Getting military members to a field hospital equipped with a magnetic resonance imaging (MRI) scanner soon after an explosion is often unrealistic.  One trend has been to catalog the series of blast waves experienced—instead of measuring one individual biometric response—through a wearable “blast gauge” device.  The blast gauge program made news recently when the gauges failed to yield reliable enough data and the program was cancelled[6].  Though not field expedient, another TBI sensor type to watch is the brain activity tracker, which CNN’s Jake Tapper experienced when he donned a MYnd Analytics electroencephalogram brain-scanning cap, drawing attention to blue force biometrics topics alongside Veterans Day[7].

 

[Image 2: Blast Gauge.  Credit: DARPA, http://www.npr.org/sections/health-shots/2016/12/20/506146595/pentagon-shelves-blast-gauges-meant-to-detect-battlefield-brain-injuries?utm_medium=RSS&utm_campaign=storiesfromnpr]

Risk:  Overpromising, Underdelivering, or “Having a Theranos Moment.”

Since these wearable devices aren’t currently viable solutions, another approach being considered is uncovering biometrics in blood.  TBI may cause certain proteins to spike in the blood[8].  Instead of relying on a soldier’s subjective self-assessment, a quick pin-prick blood draw could be taken.  Military members can be hesitant to admit injury, since receiving treatment is often equated with stigma and may require departing from a unit.  This approach would get around that reluctance while helping the Department of Defense (DoD) reach a clearer determination of whether treatment is required.
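
As a purely notional sketch of how such a test could feed a treatment decision, the Python snippet below flags a blood sample whose protein levels exceed a cutoff.  GFAP and UCH-L1 are commonly studied TBI blood biomarkers, but the cutoff values, record format, and function name here are placeholders, not clinical guidance.

# Hypothetical cutoffs in picograms per milliliter; real clinical
# thresholds would come from validated assays, not this sketch.
CUTOFFS_PG_PER_ML = {"GFAP": 22.0, "UCH-L1": 327.0}

def refer_for_evaluation(levels_pg_per_ml):
    """Flag a sample when any measured biomarker exceeds its cutoff."""
    return any(levels_pg_per_ml.get(protein, 0.0) > cutoff
               for protein, cutoff in CUTOFFS_PG_PER_ML.items())

# Example: an elevated GFAP reading triggers a referral.
if refer_for_evaluation({"GFAP": 35.5, "UCH-L1": 210.0}):
    print("Elevated TBI biomarker detected: refer for clinical evaluation.")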

[Image 3: Credit: Intelligent Optical Systems Inc, http://www.intopsys.com/downloads/BioMedical/TBI-Brochure.pdf]

Option #2:  Sleep.

Thirty-one percent of members of the U.S. military get five hours or less of sleep a night, according to RAND research[9].  This level of sleep deprivation degrades cognitive, interpersonal, and motor skills, whether a servicemember is leading a convoy, leading a patrol, or back home leading a family.  This health concern bleeds across personal and professional lines.

Gain:  Follow the Pilots.

The military already requires flight crews to rest between missions, a policy that gives them the opportunity to be mission ready through sleep, and the same concept could be instituted across the military.  Maintaining positive sleep biometrics—measurements based on metrics like total sleep time, how often a person wakes during a sleep cycle, oxygen levels during sleep, and the consistency of sleep duration—can lower rates of daytime impairment.
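
To illustrate, the markers named above are straightforward to compute once a wearable reports per-night records.  The Python sketch below assumes a hypothetical record format (hours slept, number of awakenings, and lowest overnight oxygen saturation); it is a notional example, not a fielded specification.

from statistics import mean, pstdev

def sleep_summary(nights):
    """Summarize nightly wearable records, each a dict with keys
    'hours', 'awakenings', and 'min_spo2' (lowest oxygen saturation, %)."""
    hours = [night["hours"] for night in nights]
    return {
        "avg_total_sleep_h": round(mean(hours), 2),
        "avg_awakenings": round(mean(night["awakenings"] for night in nights), 1),
        "lowest_spo2_pct": min(night["min_spo2"] for night in nights),
        # Low night-to-night standard deviation means consistent sleep length.
        "duration_consistency_sd_h": round(pstdev(hours), 2),
        # RAND's survey flagged servicemembers sleeping five hours or less.
        "short_sleep_nights": sum(1 for h in hours if h <= 5.0),
    }

week = [
    {"hours": 4.5, "awakenings": 3, "min_spo2": 93.0},
    {"hours": 6.8, "awakenings": 1, "min_spo2": 95.5},
    {"hours": 5.0, "awakenings": 2, "min_spo2": 94.2},
]
print(sleep_summary(week))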

[Image 4: The prevalence of insufficient sleep duration and poor sleep quality across the force.  Credit: RAND; clock by Dmitry Fisher/iStock, pillow by Yobro10/iStock.  http://www.rand.org/pubs/research_briefs/RB9823.html]

Risk:  More memoirs by personnel bragging about how little sleep they need to function[10].

What if a minimal level of rest became a requirement for the larger military community?  What sleep-tracking wearables could military members opt to wear to better grasp their own readiness?  What if sleep data were factored into a military command’s performance evaluation?

Option #3:  Social Media.

The traces of identity left behind through the language, images, and even emoji[11] used in social media have been studied, and they can provide clues to mental health.

Gain:  It’s easier to pull text than to pull blood.

Biometric markers include interactivity such as engagement (how often posts are made), the time a message is sent (which can act as an “insomnia index”), and emotion detected through text analysis of the language used[12].  Social media ostracism can also be measured by “embeddedness,” or how close-knit one’s online connections are[13].
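
As a notional illustration, two of these markers can be computed from nothing more than post timestamps.  The Python sketch below derives posting frequency (engagement) and the share of posts made overnight (an “insomnia index”); the 9 p.m.-to-6 a.m. window mirrors the convention in the study cited at [12], while the function and data are hypothetical.

from datetime import datetime

def activity_markers(timestamps, days_observed):
    """Compute posting engagement and the overnight share of posts."""
    # Overnight window: 9 p.m. through 5:59 a.m.
    overnight = sum(1 for t in timestamps if t.hour >= 21 or t.hour < 6)
    return {
        "posts_per_day": len(timestamps) / days_observed,       # engagement
        "insomnia_index": overnight / max(len(timestamps), 1),  # overnight share
    }

posts = [
    datetime(2017, 3, 1, 23, 40),  # overnight
    datetime(2017, 3, 2, 2, 15),   # overnight
    datetime(2017, 3, 2, 14, 5),   # daytime
]
print(activity_markers(posts, days_observed=2))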

 

[Image 5: Credit: https://twitter.com/DeptofDefense/status/823515639302262784?ref_src=twsrc%5Etfw]

Risk:  Misunderstanding in social media research.

The DoD’s tweet about this research was misconstrued as a subtweet, or mockery[14].  True to its text, the tweet was about research under development at the Department of Defense, in particular within the DoD Suicide Prevention Office.  Though conclusions at the scale of the DoD have yet to be reached, important research is being built in this area, including a Microsoft Research study that demonstrated 70-percent accuracy in estimating the onset of a major depressive disorder[15].  Computer programs have identified Instagram photos as a predictive marker of depression[16] and Twitter data as a quantifiable signal of suicide attempts[17].

Other Comments:  Whether by mapping the brain, breaking down barriers to good sleep, or improving linguistic understanding of social media calls for help, how will the military look to blue force biometrics to strengthen the health of its core?  What type of intervention should be aligned once data indicators are defined?  Many troves of untapped data remain in the digital world, but data protection and privacy measures must be in place before they are mined.

Recommendations:  None.


Endnotes:

[1]  Gilmore, G. J. (2004, December 08). Rumsfeld Handles Tough Questions at Town Hall Meeting. Retrieved June 03, 2017, from http://archive.defense.gov/news/newsarticle.aspx?id=24643

[2]  Schogol, J. (2016, May 29). Hidden battle scars: Robert Neller’s mission to save Marines from suicide. Retrieved June 03, 2017, from http://www.marinecorpstimes.com/story/military/2016/05/29/hidden-battle-scars-robert-neller-mission-to-save-marines-suicide/84807982/

[3]  Tucker, P. (2015, May 20). Special Operators Are Using Rapid DNA Readers. Retrieved June 03, 2017, from http://www.defenseone.com/technology/2015/05/special-operators-are-using-rapid-dna-readers/113383/

[4]  The DoD’s Joint Publication 2-0 defines biometrics as “The process of recognizing an individual based on measurable anatomical, physiological, and behavioral characteristics.”

[5]  DoD Worldwide Numbers for TBI. (2017, May 22). Retrieved June 03, 2017, from http://dvbic.dcoe.mil/dod-worldwide-numbers-tbi

[6]  Hamilton, J. (2016, December 20). Pentagon Shelves Blast Gauges Meant To Detect Battlefield Brain Injuries. Retrieved June 03, 2017, from http://www.npr.org/sections/health-shots/2016/12/20/506146595/pentagon-shelves-blast-gauges-meant-to-detect-battlefield-brain-injuries?utm_medium=RSS&utm_campaign=storiesfromnpr

[7]  CNN – The Lead with Jake Tapper. (2016, November 11). Retrieved June 03, 2017, from https://vimeo.com/191229323

[8]  West Virginia University. (2014, May 29). WVU research team developing test strips to diagnose traumatic brain injury, heavy metals. Retrieved June 03, 2017, from http://wvutoday-archive.wvu.edu/n/2014/05/29/wvu-research-team-developing-test-strips-to-diagnose-traumatic-brain-injury-heavy-metals.html

[9]  Troxel, W. M., Shih, R. A., Pedersen, E. R., Geyer, L., Fisher, M. P., Griffin, B. A., . . . Steinberg, P. S. (2015, April 06). Sleep Problems and Their Impact on U.S. Servicemembers. Retrieved June 03, 2017, from http://www.rand.org/pubs/research_briefs/RB9823.html

[10]  Mullany, A. (2017, May 02). Here’s Arianna Huffington’s Recipe For A Great Night Of Sleep. Retrieved June 03, 2017, from https://www.fastcompany.com/3060801/heres-arianna-huffingtons-recipe-for-a-great-night-of-sleep

[11]  Ruiz, R. (2016, June 26). What you post on social media might help prevent suicide. Retrieved June 03, 2017, from http://mashable.com/2016/06/26/suicide-prevention-social-media.amp

[12]  Choudhury, M. D., Gamon, M., Counts, S., & Horvitz, E. (2013, July 01). Predicting Depression via Social Media. Retrieved June 03, 2017, from https://www.microsoft.com/en-us/research/publication/predicting-depression-via-social-media/

[13]  Ibid.

[14]  Brogan, J. (2017, January 23). Did the Department of Defense Just Subtweet Donald Trump? Retrieved June 03, 2017, from http://www.slate.com/blogs/future_tense/2017/01/23/did_the_department_of_defense_subtweet_donald_trump_about_mental_health.html

[15]  Choudhury, M. D., Gamon, M., Counts, S., & Horvitz, E. (2013, July 01). Predicting Depression via Social Media. Retrieved June 03, 2017, from https://www.microsoft.com/en-us/research/publication/predicting-depression-via-social-media/

[16]  Reece, A. G., & Danforth, C. M. (2016, August 13). Instagram photos reveal predictive markers of depression. Retrieved June 03, 2017, from https://arxiv.org/abs/1608.03282

[17]  Coppersmith, G., Ngo, K., Leary, R., & Wood, A. (2016, June 16). Exploratory Analysis of Social Media Prior to a Suicide Attempt. Retrieved June 03, 2017, from https://www.semanticscholar.org/paper/Exploratory-Analysis-of-Social-Media-Prior-to-a-Su-Coppersmith-Ngo/3bb21a197b29e2b25fe8befbe6ac5cec66d25413
