Options to Manage the Risks of Integrating Artificial Intelligence into National Security and Critical Industry Organizations

Lee Clark is a cyber intelligence analyst.  He holds an MA in intelligence and international security from the University of Kentucky’s Patterson School.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  What are the potential risks of integrating artificial intelligence (AI) into national security and critical infrastructure organizations and potential options for mitigating these risks?

Date Originally Written:  May 19, 2018.

Date Originally Published:  July 2, 2018.

Author and / or Article Point of View:  The author is currently an intelligence professional focused on threats to critical infrastructure and the private sector.  This article will use the U.S. Department of Homeland Security’s definition of “critical infrastructure,” referring to 16 public and private sectors that are deemed vital to the U.S. economy and national functions.  The designated sectors include financial services, emergency response, food and agriculture, energy, government facilities, defense industry, transportation, critical manufacturing, communications, commercial facilities, chemical production, civil nuclear functions, dams, healthcare, information technology, and water/wastewater management[1].  This article will examine broad options for mitigating some of the most prevalent non-technical risks of AI integration, including legal protections and contingency planning.

Background:  The benefits of incorporating AI into the daily functions of an organization are widely championed in both the private and public sectors.  The technology has the capability to revolutionize, for better or worse, facets of government and private sector functions like record keeping, data management, and customer service.  Bringing AI into the workplace entails significant risks on several fronts, including privacy/security of information, record keeping/institutional memory, and decision-making.  Additionally, the technology carries a risk of backlash over job losses as automation increases in the global economy, especially among more skilled labor.  The national security and critical industry spheres are not facing an existential threat, but these risks cannot be dismissed.

Significance:  Real-world examples of these concerns have been reported in open source with clear implications for major corporations and national security organizations.  In terms of record keeping/surveillance issues, one need only look to recent court cases in which authorities subpoenaed the records of an Amazon Alexa, an appliance that acts as a digital personal assistant via a rudimentary AI system.  This subpoena situation becomes especially concerning to users given recent reports of Alexa devices being converted into spying tools[2].  Critical infrastructure organizations, especially defense, finance, and energy companies, exist within complex legal frameworks that involve international laws and security concerns, making legal protections of AI data all the more vital.

In the case of issues involving decision-making and information security, the dangers are no less severe.  AIs are susceptible to a variety of methods that seek to manipulate decision-making, including social engineering and, more specifically, disinformation efforts.  Perhaps the most evident case of social engineering against an AI is the instance in which Microsoft’s Tay chatbot endorsed genocidal statements after a brief conversation with users on Twitter[3].  If it is possible to convince an AI to support genocide, it is not difficult to imagine convincing it to divulge state secrets or turn over financial information once key information is fed to it in a meaningful sequence[4].  In another public instance, an Amazon Echo device recently recorded a private conversation in an owner’s home and sent it to another user without requesting the owner’s permission[5].  Similar incidents are easy to foresee in a critical infrastructure organization such as a nuclear energy plant, in which an AI might send proprietary information to an uncleared user.
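
The Tay case turns on a simple mechanism: a bot that folds raw user input back into its learning process can be steered by whoever supplies that input.  The sketch below is hypothetical (the blocklist, function names, and filtering logic are illustrative, not Microsoft’s actual design), but it shows in miniature how gating user messages before ingestion reduces this exposure.

```python
# Hypothetical sketch: a chatbot that learns directly from raw user input can
# be steered by coordinated users (the Tay failure mode), while a moderation
# gate screens messages before they ever enter the training corpus.

BLOCKLIST = {"genocide", "exampleslur"}  # placeholder terms; real filters are far richer

def is_safe_for_training(message: str) -> bool:
    """Accept a message for learning only if it contains no blocked terms."""
    return set(message.lower().split()).isdisjoint(BLOCKLIST)

training_corpus = []

def ingest_user_message(message: str) -> None:
    # A naive bot appends every message; this gated version screens first.
    if is_safe_for_training(message):
        training_corpus.append(message)
    # Rejected messages could be logged or routed to human review instead.

ingest_user_message("hello there")         # accepted
ingest_user_message("i endorse genocide")  # rejected before it can shape the model
print(training_corpus)                     # ['hello there']
```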

AI decisions also have the capacity to surprise the developers and engineers tasked with maintenance, which can present problems for data recovery and control.  For instance, developers discovered that Facebook’s AI had begun writing a modified version of a coding language for efficiency, essentially creating its own dialect and raising transparency concerns.  Losing the ability to examine and assess coding decisions makes it difficult to replicate processes and maintain a system[6].
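
One hedge against this kind of opacity is to record every AI decision with enough context to replay it later.  The sketch below assumes a simple wrapper pattern; the function names, record fields, and log format are illustrative, not drawn from any particular vendor’s system.

```python
# Hypothetical sketch: wrap each AI decision so that its inputs, output, and
# model version are appended to a replayable audit log.

import json
import time

def audited_decision(model_version: str, decide, inputs: dict,
                     log_path: str = "decisions.log"):
    """Run a decision function and append a replayable record of the call."""
    output = decide(inputs)
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(log_path, "a") as f:  # one JSON record per line
        f.write(json.dumps(record) + "\n")
    return output

# Usage with a stand-in decision function (an anomaly threshold check):
flagged = audited_decision("v1.0", lambda d: d["reading"] > 100,
                           {"sensor": "pump-3", "reading": 142})
```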

AI integration into industry also carries a significant risk of backlash from workers.  Economists and labor scholars have been discussing the impacts of automation and AI on employment and labor in the global economy.  This discussion is not merely theoretical, as evidenced by leaders of major tech companies publicly supporting a basic income in anticipation of automation replacing a significant portion of the labor market in the coming decades[7].

Option #1:  Leaders in national security and critical infrastructure organizations work with internal legal teams to develop legal protections for their organizations while lobbying for legislation to secure legal privileges for information stored by AI systems (perhaps resembling attorney-client or spousal privilege).

Risk:  Legal teams may lack the technical knowledge to foresee some vulnerabilities related to AI.

Gain:  Option #1 proactively builds liability shields, protections, non-disclosure agreements, and other common legal tools to anticipate the needs of AI-human interactions.

Option #2:  National security and critical infrastructure organizations build task forces to plan protocols and define a clear AI vision for organizations.

Risk:  In addition to common pitfalls of group work like bandwagoning and groupthink, this option is vulnerable to insider threats such as sabotage or espionage.  There is also a risk that such groups develop plans too rigid or short-sighted to adapt to unforeseen emergencies.

Gain:  Task forces can develop strategies and contingency plans for when emergencies arise.  Such emergencies could include hacks, data breaches, sabotage by rogue insiders, technical/equipment failures, or side effects of actions taken by an AI in a system.

Option #3:  Organization leaders work with intelligence and information security professionals to make AI systems more resilient against hacker methods, including distributed denial-of-service attacks, social engineering, and crypto-mining; a simple example of one such hardening measure is sketched after this option.

Risk:  Potential to “over-secure” systems, resulting in lost efficiency or overcomplicated maintenance processes.

Gain:  Reduced risk of hacks or other attacks from malicious actors outside of organizations.
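
As a concrete illustration of Option #3, the sketch below shows one common hardening measure: per-client rate limiting on an AI-facing endpoint to blunt denial-of-service-style abuse.  The thresholds and identifiers are illustrative assumptions, and setting the quota too low is precisely the “over-securing” risk noted above.

```python
# Hypothetical sketch: a sliding-window rate limiter that rejects requests
# from any client exceeding its per-minute quota on an AI-facing endpoint.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30  # tune per service; too low "over-secures"

_request_log = defaultdict(deque)  # client_id -> timestamps of recent requests

def allow_request(client_id: str) -> bool:
    """Permit a request only if the client is under quota for the window."""
    now = time.time()
    window = _request_log[client_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                      # discard requests outside the window
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False                          # over quota: reject or queue
    window.append(now)
    return True

print(allow_request("analyst-7"))  # True until the quota is exhausted
```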

Other Comments:  None.

Recommendation:  None.


Endnotes:

[1] DHS. (2017, July 11). Critical Infrastructure Sectors. Retrieved May 28, 2018, from https://www.dhs.gov/critical-infrastructure-sectors

[2] Boughman, E. (2017, September 18). Is There an Echo in Here? What You Need to Consider About Privacy Protection. Retrieved May 19, 2018, from https://www.forbes.com/sites/forbeslegalcouncil/2017/09/18/is-there-an-echo-in-here-what-you-need-to-consider-about-privacy-protection/

[3] Price, R. (2016, March 24). Microsoft Is Deleting Its AI Chatbot’s Incredibly Racist Tweets. Retrieved May 19, 2018, from http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3

[4] Osaba, O. A., & Welser, W., IV. (2017, December 06). The Risks of AI to Security and the Future of Work. Retrieved May 19, 2018, from https://www.rand.org/pubs/perspectives/PE237.html

[5] Shaban, H. (2018, May 24). An Amazon Echo recorded a family’s conversation, then sent it to a random person in their contacts, report says. Retrieved May 28, 2018, from https://www.washingtonpost.com/news/the-switch/wp/2018/05/24/an-amazon-echo-recorded-a-familys-conversation-then-sent-it-to-a-random-person-in-their-contacts-report-says/

[6] Bradley, T. (2017, July 31). Facebook AI Creates Its Own Language in Creepy Preview Of Our Potential Future. Retrieved May 19, 2018, from https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/

[7] Kharpal, A. (2017, February 21). Tech CEOs Back Call for Basic Income as AI Job Losses Threaten Industry Backlash. Retrieved May 19, 2018, from https://www.cnbc.com/2017/02/21/technology-ceos-back-basic-income-as-ai-job-losses-threaten-industry-backlash.html
