Does Rising Artificial Intelligence Pose a Threat?

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  Does Rising Artificial Intelligence Pose a Threat?

Date Originally Written:  February 3, 2019.

Date Originally Published:  February 18, 2019. 

Summary:  Artificial Intelligence, or A.I., has long been a staple of science fiction that usually ends badly for the human race in some way; from the ‘Terminator’ films to ‘Wargames,’ a dangerous A.I. is a common theme. In reality, A.I. could go either way depending on the circumstances. In its present state and uses, however, A.I. is more of a danger than a boon on the battlefield, whether political or military.

Text:  Artificial intelligence (A.I.) has been a staple of science fiction over the years, but recently the technology has become a more probable reality[1]. The use of semi-intelligent computer programs and systems has made our lives a bit easier in certain respects, such as asking an Alexa to turn on the lights in a room, play some music, or answer questions. However, other uses for such technologies have already been planned, and in some cases implemented, within the military and private industry for security-oriented and offensive purposes.

Automated or A.I. systems that can find weaknesses in networks and systems, as well as automated A.I.s with fire control over certain remotely operated vehicles, are on the near horizon. Just as Google and others have built self-driving cars whose A.I. component makes decisions in emergency situations, such as crash scenarios involving pedestrians, the same technologies are already being discussed for warfare. In the case of automated cars with rudimentary A.I., we have already seen deaths and mishaps because the technology is not truly aware and cannot handle every permutation put in front of it[2].

Conversely, if one were to hack or program these technologies to disregard safety heuristics, a very lethal outcome is possible. An A.I. that is not fully aware and cannot determine right from wrong creates the potential for abuse of these technologies, and such fears already surround devices like Alexa[3]. In one recent case a baby was put in danger after a Nest device was hacked through poor passwords and the temperature in the room was set above 90 degrees. In another recent instance, an Internet of Things device was hacked in much the same way and used to frighten a home's inhabitants with a false alert that North Korea had launched nuclear missiles at the U.S.

Both of the previous cases were low-level attacks on semi-dumb devices; now imagine one of these devices with access to networked weapons systems and perhaps a weakness that could be subverted[4]. In another scenario, the kinds of A.I. programs discussed for cyber warfare could be copied or subverted and unleashed not only by nation-state actors but also by a smart teenager or a group of criminals for their own ends. Such programs are a thing of the near future, but for an analogy, consider open source hacking platforms like Metasploit, whose automated scripts are now used by adversaries as well as our own forces; a hypothetical sketch of such a script follows below.
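To make that analogy concrete, the following is a minimal, hypothetical sketch of the sort of unattended automation the paragraph refers to, not a description of any specific adversary capability. It assumes a host with the Metasploit Framework installed, uses Python only to write out and replay a resource (.rc) file, and points the framework's MS17-010 scanner module at a documentation-only address range.

import subprocess
import tempfile

# Console commands replayed exactly as if a human had typed them.
RC_COMMANDS = """\
use auxiliary/scanner/smb/smb_ms17_010
set RHOSTS 192.0.2.0/24
set THREADS 16
run
exit
"""

# Write the resource file to a temporary location.
with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as rc_file:
    rc_file.write(RC_COMMANDS)
    rc_path = rc_file.name

# msfconsole -q suppresses the banner; -r replays the scripted commands
# with no operator at the keyboard.
subprocess.run(["msfconsole", "-q", "-r", rc_path], check=False)

The point is not the particular module but that the entire workflow runs unattended; chaining many such scripts together is what turns a tool into a campaign.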

Hackers and crackers have already begun using A.I. technologies in their attacks, and as the technology becomes more stable and accessible, there will be a move toward whole campaigns carried out by automated systems attacking targets all over the world[5]. This automation will create collateral problems at the nation-state level when trying to attribute such systems' actions to whoever set them upon the victim. How will attribution work when the attacking system is self-sufficient and perhaps not under anyone's control?

Finally, the trope of a true A.I. that goes rogue is not just a trope. It is entirely possible that a truly sentient program or system might consider humans an impediment to its own existence and attempt to eradicate us from whatever it can access. This is, of course, a distant possibility, but consider one thought: in the last presidential election, and in the 2020 election cycle to come, automated and A.I. systems have been and will be deployed to game social media and perhaps election systems themselves. This technology is not a far-flung possibility; rudimentary systems exist and are already in use.

The only difference between now and tomorrow is that, at the moment, people are pointing these technologies at the problems they want to solve. In the future, the A.I. may be the one choosing the problem in need of solving, and that choice may not be in our favor.


Endnotes:

[1] Cummings, M. (2017, January 1). Artificial Intelligence and the Future of Warfare. Retrieved February 2, 2019, from https://www.chathamhouse.org/sites/default/files/publications/research/2017-01-26-artificial-intelligence-future-warfare-cummings-final.pdf

[2] Levin, S., & Wong, J. C. (2018, March 19). Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian. Retrieved February 2, 2019, from https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe

[3] Menn, J. (2018, August 08). New genre of artificial intelligence programs take computer hacking… Retrieved February 2, 2019, from https://www.reuters.com/article/us-cyber-conference-ai/new-genre-of-artificial-intelligence-programs-take-computer-hacking-to-another-level-idUSKBN1KT120

[4] Jowitt, T. (2018, August 08). IBM DeepLocker Turns AI Into Hacking Weapon | Silicon UK Tech News. Retrieved February 1, 2019, from https://www.silicon.co.uk/e-innovation/artificial-intelligence/ibm-deeplocker-ai-hacking-weapon-235783

[5] Dvorsky, G. (2017, September 12). Hackers Have Already Started to Weaponize Artificial Intelligence. Retrieved February 1, 2019, from https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425

Tags: Artificial Intelligence / Machine Learning / Human-Machine Teaming, Assessment Papers, Emerging Technology, Scot A. Terban

An Assessment of Violent Extremist Use of Social Media Technologies

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


Title:  An Assessment of Violent Extremist Use of Social Media Technologies

Date Originally Written:  November 9, 2017.

Date Originally Published:  February 5, 2018.

Summary:  The leveraging of social media technologies by violent extremists like Al-Qaeda (AQ) and Daesh has created a road map for others to do the same.  Without a combined effort by social media companies and intelligence and law enforcement organizations, violent extremists and others will continue to operate nearly unchecked on social media platforms and inspire others to acts of violence.

Text:  Following the 9/11 attacks, the U.S. invaded Afghanistan and AQ, the violent extremist organization that launched the attacks, lost ground.  With the loss of ground came an increase in online activity.  In the time before the worldwide embrace of social media, jihadis like Irhabi007 (Younis Tsouli) led AQ hacking operations by breaking into vulnerable web pages and defacing them with AQ propaganda, as well as establishing dead drop sites for materials others could use.  Irhabi007, who pioneered this method, was later hunted down by other hackers and finally arrested in 2005[1].  Five years after Tsouli's arrest, Al-Qaeda in the Arabian Peninsula (AQAP) established Inspire Magazine as a way to communicate with its existing followers and “inspire” new ones[2].  Unfortunately for AQAP, creating and distributing an online magazine proved a challenge.

Today, social media platforms such as Twitter, Facebook, VKontakte, and YouTube are the primary means for jihadi extremists to spread the call to jihad as well as to sow fear among those they target.  Social media is perfect for connecting people because of the popularity of the platforms, the ease of use and of account creation, and the ability to send messages that can reach a large audience.  Daesh uses Twitter and YouTube as its primary means of messaging, not only to spread fear but also for command and control as well as recruitment.  Daesh sees the benefits of using social media, and its use has paved the way for others.  Even after Twitter and YouTube began to catch on and act against Daesh accounts, it remains easy for Daesh to create new accounts and keep the messages flowing under a new user name followed by a digit.

AQ’s loss of terrain, combined with the expansion of social media, set the conditions for a move toward inciting the “far war” over the local struggle, as AQ had framed it before Osama bin Laden was killed.  In fact, the call to the West had been made in Inspire magazine on many occasions.  Inspire even created a section of the magazine on “Open Source Jihad,” which was later adopted by Dabiq[3] (Daesh’s magazine), but the problem remained actually motivating the Western faithful into action.  This paradigm was finally worked out on social media, where recruiters and mouthpieces could talk to potential recruits in real time and work with them to act.

Online messaging by violent extremist organizations has now reached a point of asymmetry where very little energy or money invested on the jihadis' part can produce large returns on investment, as in the incident in Garland, Texas[4].  To AQ, Daesh, and others, it is now clear that social media could be the bedrock of the fight against the West and anywhere else, if others can be incited to act.  This incited activity takes the form of what has been called “Lone Wolf Jihad,” which has produced incidents ranging from the Garland shootings to more recent events like the bike-path attack in New York City by Sayfullo Saipov, a green card holder in the U.S. from Uzbekistan[5].

With certain individuals activated to the cause by the propaganda and manuals the jihadis put out on social media, it is clear that the medium works, and that even with all the attempts by companies like Facebook and Twitter to root out and delete accounts, the messaging still reaches those who may act upon it.  The memetic virus of violent extremism has a carrier, and that carrier is social media.  Now, with Russia's leveraging of social media in the campaign against the U.S. electoral system, we are seeing a paradigm shift toward larger and more dangerous memetic and asymmetric warfare.

Additionally, with the arrival of encryption technologies on social media platforms, the net effect has been to create channels of radicalization, recruitment, and activation over live chats and messages that authorities cannot easily interdict.  This use of encryption and live chats and messages makes the notion of social media as a means of asymmetric warfare even more pressing.  The jihadis now have not only a means to reach out to would-be followers, but also constant contact at a distance, where before they would have had to radicalize potential recruits at a physical location.

Expanding this out further, the methodologies the jihadis have created and used online are now studied by other like-minded groups and can be emulated.  This means that, whatever their bent, a group of like-minded individuals seeking extremist ends can simply sign up and replicate the jihadi model to the same end of activating individuals to action.  We have already started to see this at a nominal level with Russian hybrid warfare, which activates people in the U.S. such as neo-Nazis and empowers them to act.

Social media is a boon and a bane depending on its use and its moderation by the companies that create and manage the platforms.  However, with the First Amendment protecting freedom of speech in the U.S., it is hard for companies to delineate what is free speech and what is exhortation to violence.  This is the crux of the issue for companies and governments in the fight against violent extremism on platforms such as YouTube or Twitter.  Social media utilization boils down to terms of service and policing, and until now the companies have not been willing to monitor and take action.  After Russian meddling in the U.S. election, though, social media companies' attitudes seem to be changing.

Ultimately, the use of social media for extremist ideas and action will always be a problem.  This is not going away, and policing is key.  The challenge lies in working out the details and legal interpretations concerning the balance between what constitutes freedom of speech and what constitutes illegal activity.  The real task will be to see whether algorithms and technical means can help sort between the two; a toy illustration of the difficulty follows below.  The battle, however, will never end.  It is my assessment that remediation will have to meld human intelligence activities and technical means to monitor and interdict those users and feeds seeking to incite violence within the medium.
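As a deliberately crude, hypothetical sketch (my own illustration, not any platform's actual moderation logic), consider a naive keyword filter: it flags a call to violence and an ordinary piece of political commentary alike, which is precisely the gap that better algorithms and human judgment would have to close.

import re

# A toy watchlist; real systems are far more sophisticated, but the
# false-positive problem illustrated here remains.
INCITEMENT_TERMS = re.compile(r"\b(attack|bomb|kill)\b", re.IGNORECASE)

def flag_post(text: str) -> bool:
    """Return True if the post contains any watchlisted term."""
    return bool(INCITEMENT_TERMS.search(text))

posts = [
    "Brothers, attack them wherever you find them",   # incitement
    "Pundits attack the senator's voting record",     # protected speech, still flagged
]

for post in posts:
    print(flag_post(post), "-", post)

Both posts come back flagged, which is exactly the delineation problem described above.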


Endnotes:

[1] Katz, R., & Kern, M. (2006, March 26). Terrorist 007, Exposed. Retrieved November 17, 2017, from http://www.washingtonpost.com/wp-dyn/content/article/2006/03/25/AR2006032500020.html

[2] Zelin, A. Y. (2017, August 14). Inspire Magazine. Retrieved November 17, 2017, from http://jihadology.net/category/inspire-magazine/

[3] Zelin, A. Y. (2016, July 31). Dabiq Magazine. Retrieved November 17, 2017, from http://jihadology.net/category/dabiq-magazine/

[4] Chandler, A. (2015, May 04). A Terror Attack in Texas. Retrieved November 17, 2017, from https://www.theatlantic.com/national/archive/2015/05/a-terror-attack-in-texas/392288/

[5] Kilgannon, C., & Goldstein, J. (2017, October 31). Sayfullo Saipov, the Suspect in the New York Terror Attack, and His Past. Retrieved November 17, 2017, from https://www.nytimes.com/2017/10/31/nyregion/sayfullo-saipov-manhattan-truck-attack.html

 

Tags: Al-Qaeda, Assessment Papers, Cyberspace, Islamic State Variants, Scot A. Terban, Violent Extremism

Options for Paying Ransoms to Advanced Persistent Threat Actors

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  Paying ransoms for exploits held by extorting Advanced Persistent Threat actors: weighing the options.

Date Originally Written:  June 1, 2017.

Date Originally Published:  June 8, 2017.

Author and / or Article Point of View:  Recent events have given rise to the notion of crowdfunding monies to pay for exploits held by a hacking group called the ShadowBrokers through the new “dump of the month club” they have ostensibly started.  This article examines, from a red team point of view, the idea of meeting the actors' extortion demands to gain access to new nation-state-level exploits and, in doing so, being able to reverse engineer them and immunize the community.

Background:  On May 30, 2017, the ShadowBrokers posted to their new blog site that they were starting a monthly dump service wherein clients could pay a fee for access to exploits and other materials that the ShadowBrokers had stolen from the U.S. Intelligence Community (USIC).  On May 31, 2017, a collective of hackers created a Patreon site to crowdfund monies in an effort to pay the ShadowBrokers for their wares and gather the exploits to reverse engineer them in the hopes of disarming them for the greater community.  This idea was roundly debated on the internet and, as of this writing, has since been pulled by the collective after raising about $3,000.00 in funds.  In the end it was the legal counsel of one of the hackers who had the Patreon site shut down due to potential illegalities in buying such exploits from actors like the ShadowBrokers.  Many supported the idea, with a smaller but vocal dissenting group warning that it was a bad idea.

Significance:  These events bear on many levels of national security now bound up with information security and information warfare.  The fact that the ShadowBrokers exist and have been dumping nation-state hacking tools is only one dimension of the problem.  After the ShadowBrokers dumped their last package of files, a direct international incident ensued: the WannaCrypt0r malware was augmented with code from the ETERNALBLUE and DOUBLEPULSAR U.S. National Security Agency exploits and infected large numbers of hosts all over the globe with ransomware.  An additional aspect is that the code for those exploits may have been copied from the open source sites of reverse engineers who were working on the exploits to secure networks via penetration testing tools.  This was the crux of the hackers' argument: simply put, they would pay for access to deny it to others while trying to make the exploits safe.  Would this model work for both public and private entities?  Would this actually stop the ShadowBrokers from posting the data publicly even if paid privately?

Option #1:  Private actors buy the exploits through crowdfunding and reverse engineer the exploits to make them safe (i.e., report them to vendors for patching).

Risk:  Private actors like the hacker collective who attempted this could be at risk to the following scenarios:

1) Legal issues over buying classified information could lead to arrest and incarceration.

2) Buying the exploits could further encourage ShadowBrokers’ attempts to extort the United States Intelligence Community and government in an active measures campaign.

3) Buying the exploits could set a precedent with other actors by showing that criminal activity will in fact produce monetary gain, and thus more extortion campaigns could occur.

4) The actor could be paid and still dump the data to the internet, rendering the scheme moot.

Gain:  Private actors like the hacker collective who attempted this could have net gains from the following scenarios:

1) The actor is paid and the data is handed over, leaving the hacker collective to reverse engineer the exploits and immunize the community.

2) The hacker collective could draw attention to the issues and to themselves, which could perhaps gain more traction on such issues and secure more environments.

Option #2:  Private actors do not pay for the exploits and thus do not reward activities like ransomware and extortion on a global scale.

Risk:  If the extortionists are not paid, the data is dumped on the internet and the exploits are used in malware and other hacking attacks globally by those capable of understanding them and using or modifying them.  This has already happened: even with the exploits in the wild and known to vendors, the attacks still succeeded to great effect.  Another side effect is that all operations that had been using these exploits are burned, but this is already a known quantity to the USIC, which likely knows which exploits have been stolen and/or remediated in country.

Gain:  By not paying the extortionists, the community at large does not feed the cost-benefit calculation the attackers must make in their plans for profit.  Refusing to deal with extortionists or terrorists denies them a positive incentive to carry out such attacks for monetary benefit.

Other Comments:  While it may be laudable to consider schemes such as crowdfunding and attempting to open source such exploit reversal and mitigation, it is hubris to assume that an actor with bad intent will simply sell the data and be done with it.  It is also worth noting that the situation on which this red team article is based involves a nation-state actor, Russia, whose military intelligence service, the Glavnoye Razvedyvatel’noye Upravleniye (GRU), and foreign intelligence service, the Sluzhba Vneshney Razvedki (SVR), are understood not to care about the money.  The current situation is not about money; it is about active measures and sowing chaos in the USIC and the world.  However, the precepts still hold true: dealing with terrorists and extortionists is a bad practice that will only incentivize the behavior.  The takeaway is that one must understand the actors and the playing field to make an informed decision on such activities.

Recommendation:  None.


Endnotes:

None.

Tags: Cyberspace, Extortion, Option Papers, Scot A. Terban

Options for Private Sector Hacking Back

Scot A. Terban is a security professional with over 13 years’ experience specializing in areas such as Ethical Hacking/Pen Testing, Social Engineering, Information Security Auditing, ISO27001, Threat Intelligence Analysis, and Steganography Application and Detection.  He tweets at @krypt3ia and his website is https://krypt3ia.wordpress.com.  Divergent Options’ content does not contain information of an official nature nor does the content represent the official position of any government, any organization, or any group.


National Security Situation:  A future where Hacking Back / Offensive Cyber Operations in the Private Sphere are allowed by the U.S. Government.

Date Originally Written:  April 3, 2017.

Date Originally Published:  May 15, 2017.

Author and / or Article Point of View:  This article is written from the point of view of a future in which Hacking Back / Offensive Cyber Operations, as a means for corporations to respond offensively in their own defense, have been legally sanctioned by the U.S. Government and the U.S. Department of Justice.  While this government sanctioning may seem encouraging to some, it could lead to national and international complications.

Background:  It is the year X and hacking back by companies in the U.S. has been given official sanction.  As such, any company that has been hacked may react offensively by hacking the adversaries' infrastructure to steal back data and / or to deny and degrade the adversaries' ability to attack further.

Significance:  At present, Hacking Back / Offensive Cyber Operations are not activities the U.S. Government sanctions U.S. corporations to conduct.  If such sanctioning were to come to pass, U.S. corporations would have the ability to stand up offensive cyber operations divisions within their corporate structures or perhaps hire companies to carry out such actions for them, i.e., information warfare mercenaries.  These forces and actions, if allowed, could raise tensions within the geopolitical landscape and force other nation-states to react.

Option #1:  The U.S. Government sanctions the act of hacking back against adversaries as fair game.  U.S. corporations stand up hacking teams to work with Blue Teams (employees who attempt to thwart and respond to incidents) to react to incidents and attempt to hack the adversaries back to recover information, determine who the adversaries are, and render the adversaries' infrastructure inoperable.

Risk:  Hacking teams at U.S. corporations, while hacking back, could make mistakes and attack innocent companies, entities, or foreign countries whose infrastructure may have been unwittingly used as part of the original attack.

Gain:  The hacking teams of these U.S. corporations manage to hack back, recover stolen information, and determine whether it had been copied and further exfiltrated.  This also allows the U.S. corporations to try to determine who the actor is, gather evidence, and degrade the actor's ability to attack others.

Option #2:  The U.S. Government allows for the formation of teams or companies of information warfare specialists, non-governmental bodies that offer hacking back as a service.  This offensive activity would be sanctioned and monitored by the government but conducted for companies under a letter of marque approach, with payment and / or bounties for actors stopped or for evidence brought to the judicial system and used to prosecute actors.

Risk:  Letters of marque could be misused and attackers could go outside their mandates.  The same types of mistakes could be made as by corporations that form offensive teams internally.  Offensive actions could affect geopolitics as well as interfere with other governmental operations that may be taking place.  The infrastructure of innocent actors who were merely a pivot point could be hacked and abused, and other not-yet-defined mistakes could be made.

Gain:  Such actors and operations could deter some adversaries and in fact could retrieve data that has been stolen and perhaps prevent that data from being further exploited.

Other Comments:  Clearly, the idea of hacking back has been in the news these last few years, and many security professionals have called it a terrible idea.  There are certain advantages to the idea that firms can protect themselves from hacking by hacking back, but the general sense today is that many companies cannot even protect their data properly to start with, so hacking back is a red herring that distracts from larger security concerns.

Recommendation:  None.


Endnotes:

None.

Tags: Cyberspace, Offensive Operations, Option Papers, Private Sector, Scot A. Terban, United States