Max Heinemeyer
It has now been over three decades since the Morris Worm infected an estimated 10% of the 60,000 computers online in 1988. The personal malware project of a Harvard graduate named Robert Tappan Morris, it is now widely considered the world’s first major cyber-attack.
Fast forward to today, and cyber attacks stand alongside natural disasters and climate change in the World Economic Forum’s annual list of global society’s gravest threats. As businesses, schools, hospitals, and pretty much every other thread in the fabric of society have embraced the internet, cyber crime has transformed from an academic research project into a global marketplace of professional hacking services. On the geopolitical stage, governments have turned to hyper-advanced cyber attack tools as a means of causing physical damage and disruption to their adversaries’ critical infrastructure.
Over the years, hackers have consistently reinforced the old adage: ‘where there’s a will, there’s a way’. Defenders have fed new rules into their firewalls or written new detection signatures based on attacks they have already seen, and hackers have constantly reoriented their methodologies to evade them, leaving organisations playing catch-up and scrambling for a plan B in the face of an attack. A paradigm shift came in 2017, when the destructive ransomware ‘worms’ WannaCry and NotPetya caught the security world unawares, bypassing traditional tools like firewalls to cripple thousands of organisations across 150 countries, including a number of NHS agencies.
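To see why that cat-and-mouse game favours the attacker, consider how brittle a classic signature check is. The sketch below is illustrative only (the hash is a made-up placeholder, not a real indicator): a file is flagged only if its exact fingerprint has been catalogued before, so changing a single byte produces a ‘novel’ variant that passes clean.

```python
import hashlib

# Illustrative only: classic signature-based detection boils down to a
# membership test against indicators harvested from past attacks.
# The hash below is a made-up placeholder, not a real malware indicator.
KNOWN_BAD_SHA256 = {
    "0f1e2d3c4b5a69788796a5b4c3d2e1f00f1e2d3c4b5a69788796a5b4c3d2e1f0",
}

def is_known_malware(file_bytes: bytes) -> bool:
    """Flag a file only if its exact fingerprint was catalogued earlier."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

# Flipping a single byte yields a completely different hash, so any
# repacked or mutated variant slips past the check unflagged.
print(is_known_malware(b"original payload"))   # False: never catalogued
print(is_known_malware(b"original payload!"))  # False: 'novel' == invisible
```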
A critical response to the onset of increasingly sophisticated and novel attacks has been AI-powered defences, driven by the philosophy that information about yesterday’s attacks cannot predict tomorrow’s threats. In recent years, thousands of organisations have embraced AI to understand what is ‘normal’ for their digital environment and to identify behaviour that is anomalous and potentially threatening. Many have even entrusted machine learning algorithms to autonomously interrupt fast-moving attacks. This active, defensive use of AI has fundamentally changed the role of security teams, freeing up humans to focus on higher-level tasks.
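As a rough sketch of that ‘learn normal, flag deviations’ idea, the toy example below fits an off-the-shelf unsupervised model (scikit-learn’s IsolationForest) to invented per-device traffic features. Production systems model far richer behaviour across users and devices, but the principle is the same.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Invented features per device-hour: [megabytes out, distinct destination IPs].
# 500 samples of ordinary behaviour stand in for the learned 'pattern of life'.
normal_traffic = rng.normal(loc=[50.0, 12.0], scale=[10.0, 3.0], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A device suddenly pushing far more data to far more destinations:
suspect = np.array([[480.0, 95.0]])
print(model.predict(suspect))  # [-1] => anomalous relative to the baseline
```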
But if attackers can find a way to scale up their attacks, they will. Adversaries ultimately think like enterprises: how can I make my hackers more efficient? How can I attack even more targets? How can I achieve more results with fewer resources?
In the threat landscape’s next evolution, hackers are taking advantage of machine learning themselves, deploying malicious algorithms that can adapt, learn, and continuously improve in order to evade detection – signalling the next paradigm shift in cyber security: AI-powered attacks.
We can expect Offensive AI to be used throughout the attack life cycle – whether that is natural language processing to understand written language and craft contextualised spear-phishing emails at scale, or image classification to speed up the exfiltration of sensitive documents once an environment is compromised and the attackers are on the hunt for material they can profit from.
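To make the second half of that concrete, the toy triage below ranks captured files by a crude keyword score. A real attacker would swap in trained text or image classifiers, but the economics are the same: the hunt for profitable material runs at machine speed rather than reading pace. Every term, weight, and filename here is invented for illustration.

```python
# Invented weights standing in for a trained classifier's output.
HIGH_VALUE_TERMS = {"password": 5, "iban": 5, "confidential": 4, "invoice": 3}

def triage_score(text: str) -> int:
    """Crude stand-in for ML document triage: score by high-value keywords."""
    words = set(text.lower().split())
    return sum(weight for term, weight in HIGH_VALUE_TERMS.items() if term in words)

stolen = {
    "meeting_notes.txt": "agenda for the weekly project sync",
    "finance_dump.txt": "confidential invoice batch with iban details",
}
ranked = sorted(stolen, key=lambda name: triage_score(stolen[name]), reverse=True)
print(ranked)  # ['finance_dump.txt', 'meeting_notes.txt']
```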
In what has already proven to be an era of hyper-change in cyber attacks, a recent study by Forrester found that 88% of security professionals expect AI-driven attacks to become mainstream, and close to half of them see this happening in the next year – it is only a matter of time. Open-source AI research projects, tools which could be leveraged to supercharge every phase of the attack lifecycle, already exist today, and they will inevitably join the list of paid-for hacker services available for purchase on the dark web.
In fact, there are already offensive AI prototypes that can autonomously determine an organisation’s most high-profile targets based on their social media exposure – all in a matter of seconds. The AI then crafts contextualised phishing emails, selects a fitting sender to spoof, and fires the emails off, tricking victims into clicking on a malicious link or opening an attachment that grants further access into the target organisation. These prototypes have been tested against defensive AI, mimicking what we expect to see in the real world soon: AI combatting AI in what is essentially a war of algorithms.
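As a purely illustrative sketch of that first step – every field name and weight below is invented, not taken from any real prototype – target selection can be as simple as ranking employees by a public-exposure score:

```python
# Invented exposure heuristic: nothing here reflects a real tool's logic.
def exposure_score(profile: dict) -> float:
    """Rank how visible (and thus how easily profiled) a person is online."""
    return (profile.get("followers", 0) * 0.001
            + profile.get("public_posts", 0) * 0.05
            + (10.0 if profile.get("lists_employer") else 0.0))

employees = [
    {"name": "press_officer", "followers": 12000, "public_posts": 300, "lists_employer": True},
    {"name": "sysadmin", "followers": 150, "public_posts": 4, "lists_employer": False},
]
top_target = max(employees, key=exposure_score)
print(top_target["name"])  # 'press_officer': most exposed, most attractive
```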
Armed with this research and data, defensive AI sees more. Powered by unsupervised machine learning, it builds a complex understanding of every user and device across the network it is protecting, and uses this evolving understanding to detect the subtle deviations that might be the hallmarks of an emerging attack. With this ‘bird’s-eye’ view of the digital business, cyber AI can spot offensive AI as soon as it starts to manipulate data.
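One way to picture those per-entity deviations – a minimal sketch, assuming a single invented metric per device, where real systems track hundreds – is to compare each observation against that entity’s own history rather than against a global blocklist:

```python
import statistics

def deviates(history: list[float], observed: float, threshold: float = 3.0) -> bool:
    """True when `observed` sits more than `threshold` standard deviations
    away from this one entity's own historical mean."""
    mean = statistics.fmean(history)
    spread = statistics.stdev(history) or 1e-9  # guard against a flat history
    return abs(observed - mean) / spread > threshold

# Invented daily outbound megabytes for a single workstation.
baseline = [42.0, 39.5, 44.1, 40.8, 43.3, 41.7]
print(deviates(baseline, 41.0))   # False: within its usual range
print(deviates(baseline, 410.0))  # True: possible staging or exfiltration
```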
When an AI attacker makes any kind of noise, defensive AI will make intelligent micro-decisions to block the activity – speed may be offensive AI’s greatest asset, but it is one that defensive AI also brings to the arms race.
When this major leap in attacker innovation inevitably occurs, investigation, response and remediation must be conducted with the speed and intuition of a machine. The reality is that traditional security controls already struggle to detect attacks that have never been seen in the wild – be it malware without known signatures, new command-and-control domains or individualised spear-phishing emails. Traditional tools stand no chance against future attacks as these techniques become the norm and easier than ever to deploy. Only AI can fight AI.
This is yet another new battlefield in the ongoing war for control over digital infrastructures, but fortunately, it’s one that the AI defenders have long been preparing for.