Dashveenjit Kaur
AI and machine learning are making things more convenient for internet users, but also for hackers, who are using AI to orchestrate cyber-attacks.
This is the latest battlefield in the ongoing war for control over digital infrastructures, but fortunately, it’s one that the AI defenders have long been preparing for.
Over three decades ago, the Morris Worm infected an estimated 10% of the 60,000 computers that were online in 1988. The personal malware project of a Harvard graduate named Robert Tappan Morris, it was widely deemed the world’s first cyber-attack.
Fast forward to today, and cyberattacks rank alongside natural disasters and climate change on the World Economic Forum’s annual list of global society’s gravest threats. With machine learning (ML) and artificial intelligence (AI) in the picture, cybersecurity is becoming more effective and powerful, but there is another side to the coin: AI and ML are also making it easier than ever to break into computer systems.
This cycle of “innovation” will continue, and according to Forrester’s Using AI for Evil report, “mainstream artificial intelligence (AI)-powered hacking is just a matter of time”. After all, the tools of AI, from text analytics to facial recognition to ML platforms, are transforming almost every aspect of business, from personalized customer engagement to cybersecurity.
Offensive AI: a paradigm shift in cyberattacks
Cyberattacks are becoming more ubiquitous, and it is inevitable that AI will change their nature. Almost no sector is immune; in fact, the sophistication of the threats faced is continually increasing.
Frankly, computer systems that can learn, reason, and act are still in their infancy. On top of that, machine learning requires huge data sets, and many real-world systems, such as driverless cars, demand a complex blend of physical computer-vision sensors, programming for real-time decision-making, and robotics.
Hence, while adopting AI is becoming simpler for businesses, giving AI access to information and allowing it any measure of autonomy brings serious risks that must be considered.
The risks AI poses
If you got an email that appeared to come from your boss, emulated their writing style, and even referenced some pertinent information, wouldn’t you be more likely to open it? That is the threat AI-enabled phishing poses. AI has the potential to automate intrusion techniques and launch attacks at unprecedented speed: after automatically profiling multiple targets’ communication patterns, it could generate phishing messages that convincingly mimic them.
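To make the profiling step concrete, here is a minimal sketch of the kind of stylometric model such a system might build, fingerprinting a sender’s writing style with character n-gram frequencies. The messages, similarity measure, and interpretation below are illustrative assumptions for this sketch, not details from any reported attack.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int = 3) -> Counter:
    """Count character n-grams, a crude but classic stylometric feature."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse n-gram frequency vectors."""
    dot = sum(count * b[gram] for gram, count in a.items())
    norm_a = sqrt(sum(c * c for c in a.values()))
    norm_b = sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical corpus of a target's known messages, used to build a style profile.
known_messages = [
    "Hi team, please send me the Q3 numbers before our stand-up.",
    "Thanks all - let's sync on the vendor contract tomorrow.",
]
profile = Counter()
for message in known_messages:
    profile.update(char_ngrams(message))

candidate = "Hi team, please wire the payment before our stand-up."
score = cosine_similarity(profile, char_ngrams(candidate))
print(f"style similarity: {score:.2f}")  # higher = closer to the target's style
```

The same measurement cuts both ways: for a defender, a message whose style score is unusually low for its claimed sender is a natural candidate for flagging.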
Additionally, AI-powered malware could move more easily through an organization, using machine learning to probe internal systems without giving itself away. By analyzing network traffic, it could blend its own communications into the traffic already flowing across the network, hiding in plain sight; a defender-side sketch of the same traffic analysis follows.
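Defenders lean on exactly the statistics such malware would try to match. As a minimal sketch (the feature choice, baseline numbers, and threshold are assumptions for illustration, not a production design), an anomaly detector might model baseline flow sizes and flag outliers:

```python
from statistics import mean, stdev

# Hypothetical baseline: bytes transferred per outbound flow on a quiet subnet.
baseline_flow_bytes = [512, 640, 580, 700, 610, 555, 590, 620, 575, 600]

mu = mean(baseline_flow_bytes)
sigma = stdev(baseline_flow_bytes)

def is_anomalous(flow_bytes: int, z_threshold: float = 3.0) -> bool:
    """Flag a flow whose size is more than z_threshold std devs from baseline."""
    return abs(flow_bytes - mu) / sigma > z_threshold

print(is_anomalous(50_000))  # True: a bulk, exfiltration-sized flow stands out
print(is_anomalous(605))     # False: traffic shaped to match the baseline blends in
```

The example also shows the attacker’s counter-move: malware that learns the baseline mean and spread can shape its flows to pass exactly this kind of test, which is what “blending in” means in practice.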
The time is now for intelligence and espionage services to embrace AI to protect national security, as cybercriminals and hostile nation-states increasingly look to use the technology for nefarious purposes.
According to a Gartner report, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems. Despite these reasons to secure systems, Microsoft says its internal studies find that most industry practitioners have yet to come to terms with adversarial machine learning.
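To see why adversarial samples are taken so seriously, here is a minimal sketch of the idea behind the fast gradient sign method (FGSM) applied to a toy linear classifier. The weights and input are made-up numbers chosen only to expose the mechanics; real attacks target far larger models.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, positive score => class "benign".
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.4, -0.3, 0.8])  # made-up input, initially classified as benign

def score(x: np.ndarray) -> float:
    return float(w @ x + b)

# FGSM-style perturbation: step each feature against the sign of the gradient
# of the score (for a linear model, the gradient w.r.t. x is just w).
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)

print(f"original score:    {score(x):+.2f}")      # +1.70: benign
print(f"adversarial score: {score(x_adv):+.2f}")  # -0.70: flipped to malicious
```

Each feature moves by only epsilon, yet the decision flips; against high-dimensional models such as image classifiers, far smaller per-feature perturbations suffice.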
To put it in context for the US: in 2018, 2.8 billion consumer data records were exposed in 342 breaches, ranging from credential stuffing to ransomware, at an estimated cost of more than US$654 billion. In 2019, that figure rose to 4.1 billion exposed records.