
14 December 2019

The cybersecurity battle of the future – AI vs. AI

By Nadav Maman 

Artificial intelligence and machine learning continue to gain a foothold in our everyday lives. Whether for complex tasks like computer vision and natural language processing, or something as basic as an online chatbot, their popularity shows no signs of slowing. Companies have also started to explore deep learning, an advanced subset of machine learning that applies “deep neural networks” and takes its inspiration from how the human brain works. Unlike traditional machine learning, deep learning can train directly on raw data, requiring little to no human intervention.
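To make that distinction concrete, here is a minimal sketch, in PyTorch, of the kind of model that classifies a file as benign or malicious straight from its raw bytes, with no hand-engineered features. It is loosely in the spirit of published byte-level architectures such as MalConv; the class name, layer sizes and parameters are illustrative, not any vendor’s actual model.

```python
import torch
import torch.nn as nn

class RawByteClassifier(nn.Module):
    """Toy model that labels a file benign/malicious from raw bytes.

    No hand-crafted features: the network learns byte patterns itself,
    which is the practical difference between deep learning and
    classical, feature-engineered machine learning.
    """
    def __init__(self, max_len=4096, embed_dim=8):
        super().__init__()
        self.embed = nn.Embedding(257, embed_dim, padding_idx=256)  # 256 byte values + padding
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=16, stride=8)
        self.fc = nn.Linear(64, 2)  # two classes: benign, malicious

    def forward(self, x):                  # x: (batch, max_len) ints in [0, 256]
        e = self.embed(x).transpose(1, 2)  # (batch, embed_dim, max_len)
        h = torch.relu(self.conv(e))       # local byte-pattern detectors
        h = torch.max(h, dim=2).values     # global max pooling over the whole file
        return self.fc(h)                  # class scores (logits)
```

The input is just the file’s bytes, padded or truncated to a fixed length; nothing about file headers, API calls or other hand-picked features is baked in, which is the point of the approach.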

Recent research from analyst firm Gartner noted that the number of companies implementing AI technology has increased by around 270 per cent over the past four years. The return on investment is unmistakable, which is why so many industries have started to implement the technology. However, even with that progress, and given the nature of AI, the same once-helpful technology could fall into the wrong hands and be used to inflict damage on a company or its end users.


This ongoing battle pitting AI for good against AI for malicious purposes may not be playing out in front of our eyes yet, but it’s not far off. Thankfully, implementing malicious AI at any scale is still somewhat cost prohibitive and requires tools and skills not readily available on the market. But knowing that it could become reality one day means that companies should start preparing early for what lies ahead.

Here’s a look at what that battle could entail, and what companies can do now to weather the storm.

AI-powered malware

When malware uses AI algorithms as an integral part of its business logic, it learns from its situation and gets smarter at evading detection. Unlike typical malware, which is a single static program, AI-based malware can shift and change its behavior quickly, adjusting its evasion techniques when it senses something is wrong or detects a threat to its own systems. It’s a capability that most companies simply aren’t prepared for yet.

One example of situational awareness in AI-based malware came from Black Hat 2018. Created by IBM Security, DeepLocker is a proof-of-concept malware that keeps its ransomware payload encrypted and autonomously decides which computer to attack based on a facial recognition algorithm. And as the researchers noted, it’s “designed to be stealthy.”

The highly targeted malware hides itself inside benign applications, evading detection by most antivirus scanning programs until it has identified its target victim. Once the target is identified through several indicators, including facial recognition, audio, location or system-level features, the AI algorithm unlocks the malware and launches the attack. According to the researchers, IBM created it to demonstrate how they could combine “open-source AI tools with straightforward evasion techniques to build a targeted, evasive and highly effective malware.”
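The unlocking trick is easier to see in miniature. Below is a purely conceptual sketch of the idea, not IBM’s code: the payload stays encrypted, and the decryption key is derived from attributes of the intended target, so analysts who never encounter the target cannot recover it. The `target_id` here is a hypothetical stand-in for whatever stable identifier the recognition model produces.

```python
import base64
import hashlib
from cryptography.fernet import Fernet, InvalidToken  # pip install cryptography

def key_from_target(target_id: bytes) -> bytes:
    """Derive a symmetric key from attributes of the recognised target."""
    return base64.urlsafe_b64encode(hashlib.sha256(target_id).digest())

def try_unlock(encrypted_payload: bytes, observed_id: bytes) -> bytes | None:
    """Decryption succeeds only when the observed target matches the intended one."""
    try:
        return Fernet(key_from_target(observed_id)).decrypt(encrypted_payload)
    except InvalidToken:
        return None  # wrong target: the payload remains an opaque blob
```

Because the key never appears in the sample itself, even a reverse engineer holding the malware sees only an opaque encrypted blob until the right face, voice or location comes along.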

The amplified efficiency of AI means that once a system is trained and deployed, malicious AI can attack a far greater number of devices and networks more quickly and cheaply than a malevolent human actor.

And while the researchers also noted that they haven’t seen anything like DeepLocker in the wild yet, the technology they used to create it is readily available, as are the malware techniques they employed. Only time will tell whether something like it will emerge – that is, if it hasn’t already.

Companies can guard against malware like this by fighting fire with fire, using cybersecurity solutions that are based on deep learning, the most advanced form of AI. It’s not enough to just put up a firewall or a basic anti-virus system; companies need to implement systems that can detect AI-based malware and take the necessary steps to prevent harm, and then go one step further to achieve longer-term detection and pre-emptively stop continued damage. That will be a necessary task in a future that includes AI-based malware.

Adversarial AI

Another harmful scenario arises when malicious AI-based algorithms are used to hinder the functionality of benign AI algorithms, turning the same techniques used in traditional machine learning against the models they built.

Rather than providing any helpful functionality, this malware breaches the useful algorithm and manipulates it, either taking over its functionality or bending it to malicious purposes.

One example comes from several researchers studying adversarial machine learning. They investigated how self-driving cars process street signs, and whether the technology could be manipulated. While most self-driving cars have the ability to “read” street signs and act accordingly, the researchers were able to trick the technology into misreading a sign, in this case interpreting a stop sign as a speed limit sign. The change was simple enough that the technology on board the vehicle couldn’t detect it as harmful. Taking a step back to look at the implications, it means the technology available today in self-driving cars could be exploited into causing collisions, with possibly deadly outcomes.
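The digital version of this trick is well documented in the adversarial machine learning literature. Below is a minimal sketch of the fast gradient sign method (FGSM), one of the simplest ways to craft such an input, written here against a hypothetical PyTorch image classifier; none of it comes from the research described above.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Craft an adversarial image with the fast gradient sign method.

    Every pixel is nudged by +/- epsilon in the direction that increases
    the classifier's loss, which is often enough to flip its prediction
    while leaving the image looking unchanged to a human.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # stay within valid pixel range
```

The stickers placed on physical stop signs in that research are a real-world cousin of the same idea: a small, targeted change that a human driver shrugs off but the model misreads.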

Adversarial learning can also be applied to subvert and confuse computer vision algorithms, natural language processing (NLP) systems and malware classifiers, tricking the technology into thinking an input is something else. A related tactic injects malicious data into benign data streams with the intent to overwhelm or block legitimate data. A familiar analogue is a Distributed Denial of Service (DDoS) attack, in which a server is deliberately overwhelmed with data and internet traffic, disrupting normal traffic or service to that server and effectively bringing it down.

To block the harmful effects of this technology, companies need a system that understands when an algorithm is benign and working properly, versus one that has been tampered with. That protects not only systems and the overall functionality of the tech, but potentially lives, as seen in the stop sign example. This is where advanced AI becomes necessary, supplying the analysis capabilities to understand and identify when something is amiss.

Server-side AI

This type of attack is seen when malware runs on the victim’s endpoint, but AI-based algorithms are used on the server side to facilitate the attack. A command and control server – which an attacker uses to send information to, and receive information from, systems compromised by malware – can control any number of functions.

For example, consider malware that steals data and information, then uploads it to a command and control server. Once the upload is complete, an additional algorithm sifts out the relevant details – e.g. credit card numbers, passwords and the like – and passes them on to the attacker at the other end. Through the use of AI, the malware can be executed en masse, without requiring any human intervention, and disseminated on a large scale to encompass thousands of victims.
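That sifting step needs surprisingly little intelligence. Here is a minimal sketch of a card-number detector of the kind such a server-side algorithm might run over stolen text (the same logic data-loss-prevention tools run defensively over outbound traffic). The regular expression and the Luhn checksum are standard; everything else is illustrative.

```python
import re

# Loose shape of a payment card number: 13-19 digits, optional separators.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,18}\d\b")

def luhn_valid(candidate: str) -> bool:
    """Check the Luhn checksum that all major card numbers satisfy."""
    digits = [int(c) for c in candidate if c.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return substrings that look like card numbers and pass the Luhn check."""
    return [m.group() for m in CANDIDATE.finditer(text) if luhn_valid(m.group())]
```

Running `find_card_numbers` over a dump of stolen documents would surface valid-looking card numbers while discarding random digit strings, which is exactly the kind of triage the attacker’s server-side algorithm automates at scale.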

One recent example Deep Instinct researchers uncovered was ServHelper. A new variant of the ServHelper malware uses an Excel 4.0 macro dropper – a legacy mechanism still supported by Microsoft Office – and an executable payload signed with a valid digital signature. ServHelper can receive several types of commands from its command and control server, including downloading a file, entering sleep mode, and even a “self-kill” command that removes the malware from the infected machine. It is a classic example of hacker groups using increasingly sophisticated methods, such as valid signing certificates, to propagate malware and launch cyberattacks.

As with the other scenarios, it’s not enough to just put up a firewall and hope for the best. Companies need to think holistically and protect all of an organisation’s endpoints and devices, from Windows machines through servers to other platforms such as Mac, Android and iOS. An AI-based solution can help by constantly learning what is and isn’t malicious, enabling its human counterparts to act once the harmful malware has been identified and, ideally, stopped from spreading and doing further damage.

The future of AI vs. AI

Companies are just beginning to grasp that AI and machine learning can not only improve customer-facing technology but also help build stronger defences against a future of AI-enabled attacks. While malware that uses AI might still be a few years away, companies can prepare themselves now.

Using these technologies to spot trends and patterns in behavior today puts companies in a better position against a future that employs AI against them. One way to ensure a technological advantage over any potential AI-based threat is a deep learning-based approach, which fights malicious AI with friendly AI.

Unlike other forms of anti-virus, which remain stagnant once implemented, deep learning is highly scalable. This is especially important because AI-based malware can grow and change constantly. Deep learning can scale to hundreds of millions of training samples, which means that as the training dataset gets larger, the neural network continuously improves its ability to detect anomalies, no matter what the future brings. It’s truly fighting AI with AI.
