By SHOSHANNA SOLOMON
Unbeknownst to a company CEO who was interviewed on TV last year, a hacking group that had been trailing the executive taped the interview and taught a computer to imitate the CEO’s voice, so that it could then issue credible instructions for a wire transfer of funds to a third party.
This “voice phishing” hack brought to light the growing ability of artificial intelligence-based technologies to perpetrate cyber-attacks and cyber-crime.
Using new AI-based software, hackers have imitated the voices of a number of senior company officials around the world and used them to issue instructions for transactions such as money transfers. The software can learn to imitate a voice after just 20 minutes of listening to it, and can then speak in that voice, saying whatever the hacker types into the software.
Some of these attempts were foiled; in other cases, the hackers succeeded in getting their hands on the money.
“This is a ramp-up” of hacker capabilities, Israel’s Cyber Directorate said in a memo sent out to companies, organizations and individuals in July, warning of the threat but specifying that no such incidents had yet occurred in Israel.
Leading officials at a cybersecurity conference in Tel Aviv last month warned that hackers are increasingly using AI tools to open new attack surfaces and create new kinds of threats.
“We find more and more challenges in the emerging technologies, mainly in artificial intelligence,” said Yigal Unna, the head of Israel’s National Cyber Directorate, at the Cybertech 2020 conference last month. This is the new “playground” of hackers, he said, and is “the most frightening.”
Artificial intelligence is a field that gives computers the ability to think and learn; although the concept has been around since the 1950s, it is only now enjoying a resurgence, made possible by the higher computational power of modern chips. The artificial intelligence market is expected to grow almost 37% annually and reach $191 billion by 2025, according to research firm MarketsandMarkets.
National Cyber Directorate head Yigal Unna at the Cybertech 2020 conference in Tel Aviv, January 29, 2020 (Cybertech)
Artificial intelligence and machine learning are used today for a wide range of applications, from facial recognition to detection of diseases in medical images to global competitions in games such as chess and Go.
And as our world becomes more and more digitalized — with everything from home appliances to hospital equipment being connected to the internet — the opportunity for hackers to disrupt our lives becomes ever greater.
Whereas human hackers once spent considerable time poring over lines of code in search of a weak point they could penetrate, today AI tools can find vulnerabilities far faster, warned Yaniv Balmas, head of cyber research at Israel’s largest cybersecurity firm, Check Point Software Technologies.
“AI is basically machine learning,” Balmas said in an interview with The Times of Israel. “It is a way for a machine to be able to process large amounts of data” that humans can then use to make “smart decisions.”
Yaniv Balmas, head of cyber research at Israel’s largest cybersecurity firm Check Point Software Technologies Ltd. (Courtesy)
The technology is generally used to replace “very, very, very intense manual labor,” he said. So, when used offensively by hackers, it “opens new doors,” as they can now do in an hour or a day what used to take “days, weeks and months, years to do.”
For example, when targeting an app, hackers aim to find a vulnerability through which they can take full control of a phone it is installed on. “Before AI came to be,” said Balmas, hackers had to examine the app’s code line by line.
With AI, a technique called fuzzing has been developed, the “art of finding vulnerabilities” in an automated way. “You replicate the human work, but you automate it,” Balmas said, which produces results much more quickly.
In one study, Check Point researchers used fuzzing to hunt for vulnerabilities in a popular app, Adobe Reader. Human researchers could probably have found one or two vulnerabilities in the 50 days set for the task; the fuzzing technology managed to find more than 50 vulnerabilities in that time, said Balmas.
“That’s a huge amount,” he said, and “it shows really well the power of AI.”
The vulnerabilities were reported to the app’s makers and have since been fixed, Balmas said.
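Check Point has not published the tooling behind that study, but the core loop of a fuzzer is simple to sketch. The toy Python example below hammers a stand-in parser with randomly mutated inputs and collects the ones that crash it; `parse_record` and its planted bug are hypothetical, and real fuzzers such as AFL layer coverage feedback and far smarter mutation strategies on top of this basic loop.

```python
import random

def parse_record(data: bytes) -> None:
    """Stand-in for the code under test, e.g. a file-format parser.
    The condition below simulates a bug hiding in a rare input shape."""
    if len(data) > 3 and data[0] == 0x25 and data[3] == 0xFF:
        raise ValueError("parser bug: unhandled input")  # simulated crash

def mutate(seed: bytes) -> bytes:
    """Flip a few random bytes of a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 100_000) -> list[bytes]:
    """Feed mutated inputs to the target, collecting every crashing case."""
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            parse_record(candidate)
        except Exception:
            crashes.append(candidate)  # each crash marks a potential vulnerability
    return crashes

if __name__ == "__main__":
    found = fuzz(b"%PDF-1.7 hello world")
    print(f"{len(found)} crashing inputs found")
```

Even this naive loop turns up the planted bug within seconds; what Balmas describes is the same idea scaled up, with machine learning guiding which mutations are worth trying.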
Spear-phishing on the rise
Artificial intelligence tools are also already being used to create extremely sophisticated phishing campaigns, said Hudi Zack, chief executive director of the Technology Unit at the Israel National Cyber Directorate, which is in charge of the nation’s civilian cybersecurity.
Traditional phishing campaigns use emails or messages to get people to click on a link that infects their device with a virus, or to trick them into performing certain actions.
Today, users can generally identify these campaigns easily and avoid responding to them, because the phishing emails come from unfamiliar people or addresses and their content is generic or irrelevant to the recipient.
Now, however, sophisticated AI systems can create “very sophisticated spear-phishing campaigns” against “high-value” targets such as company CEOs or high-ranking officials. These emails address the victim directly, sometimes ostensibly coming from someone they know personally, and often carry very relevant content, such as a CV for a position the target is looking to fill.
“To do that, the attacker needs to spend a lot of time and effort learning the target’s social network, understanding the relevant business environment and looking for potential ‘hooks’ that will make the victim believe this is a relevant email” — approaching them for real business reasons that will increase the attack’s chance of success, said Zack.
Hudi Zack, chief executive director, Technology Unit, Israel National Cyber Directorate (Cybertech)
A sophisticated AI system would enable an attacker to “perform most of these actions for any target in a matter of seconds,” and thus spear-phishing campaigns could be aimed at “thousands or even millions of targets,” Zack said.
These tools are mainly in the hands of well-funded state hackers, Zack said, declining to mention which ones, but he foresaw them spreading in time to less sophisticated groups.
Perhaps the greatest AI-based threat lurking ahead, though, is the ability to interfere with the integrity of products embedded with AI technologies that support important processes in fields such as finance, energy and transportation.
AI systems such as automated cars, trains or planes, for example, “can make better and quicker decisions and improve the quality of life for all of us,” Zack said. On the other hand, the fact that machines now act independently, “with only a limited ability for humans to oversee and, if needed, overrule their decisions, makes them susceptible to manipulation and deception.”
Most artificial intelligence systems use machine learning mechanisms that rely on information these machines are fed.
“A sophisticated attacker” could hijack these machine learning mechanisms to “tilt the computer decisions to achieve the desired malicious impact,” Zack said.
Hackers could also “poison” the data fed into the machine during its training phase to alter its behavior or create a bias in the output.
Thus, an AI-based system that approves loans could be fraudulently taught to approve them even if the customer’s credit status isn’t good; an AI-based security system that uses facial recognition could be prevented from identifying a known terrorist; an electricity distribution system could be instructed to create an unbalanced current distribution, causing large-scale power outages.
All these potential changes “presumably serve the adversary’s goals,” said Zack.
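None of those systems is described in technical detail, but the mechanics of one such attack, training-data poisoning by label flipping, are easy to sketch. The hypothetical Python example below (using scikit-learn; all names and numbers are synthetic) trains a toy loan-approval model twice, once on clean data and once on data in which an attacker has relabeled bad-credit applicants as approved; the poisoned model learns to approve loans it should reject.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic loan applicants: one feature (a normalized credit score).
# Clean labeling rule: approve only applicants with a good score.
scores = rng.uniform(0, 1, size=(2000, 1))
clean_labels = (scores[:, 0] > 0.6).astype(int)

clean_model = LogisticRegression().fit(scores, clean_labels)

# Poisoning: the attacker corrupts the training set, flipping the
# labels of bad-credit applicants to "approved".
poisoned_labels = clean_labels.copy()
poisoned_labels[scores[:, 0] < 0.4] = 1

poisoned_model = LogisticRegression().fit(scores, poisoned_labels)

# An applicant with clearly bad credit (score 0.2):
applicant = [[0.2]]
print("clean model approves:   ", bool(clean_model.predict(applicant)[0]))    # False
print("poisoned model approves:", bool(poisoned_model.predict(applicant)[0]))  # True
```

Because the model’s code is untouched and only its training data is skewed, an audit of the software itself would find nothing wrong, which is what makes this class of attack so hard to detect.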
This kind of sophisticated attack is “more academic, and theoretical at the moment,” said Check Point’s Balmas. “But I think that this is really something that we should pay attention to, because this technology is advancing really, really very fast. If we fall asleep on the watch, we might find ourselves in a sticky situation.”
The Cyber Directorate’s Zack agreed that this kind of attack has not yet been seen on the ground. “We are not there yet,” he said. “But there is certainly a concern about it, and it could happen in the coming years, when AI systems become more widespread in our everyday use.”
To prepare for this scenario, the Cyber Directorate is now issuing guidelines to firms and entities that are developing AI-based solutions, to make sure they incorporate cybersecurity protections within their products.
“The guidelines will set out criteria” to ensure the resilience of the AI products the firms are using or developing, “especially if the system affects the public at large,” Zack said.
Companies are not yet aware of the risk, and alongside that ignorance there are economic considerations. “Everyone wants to be first to market,” and security is not always a high enough priority when a product is created, he said.
In truth, the AI threat is still nascent, he said, but it will be very difficult to upgrade systems once they have already been developed and deployed.
Defense systems globally are already incorporating AI capabilities to battle AI-based attackers, as are companies like Check Point.
“We use AI tools to find vulnerabilities and we use AI tools to understand how malware and other attacks actually operate,” said Balmas.
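Balmas did not describe Check Point’s internal tooling, but one common building block of such AI-driven defense is unsupervised anomaly detection over traffic or behavior logs. The sketch below, with entirely made-up connection features, uses scikit-learn’s IsolationForest to flag outlier sessions for an analyst to review.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy connection log: [bytes sent, session duration (s), distinct ports touched]
normal = np.column_stack([
    rng.normal(5_000, 1_000, 500),   # typical transfer sizes
    rng.normal(30, 10, 500),         # typical session lengths
    rng.integers(1, 4, 500),         # a few ports per session
])

# A handful of suspicious sessions: huge transfers, long-lived, port scanning
attacks = np.array([
    [500_000, 600, 90],
    [300_000, 450, 75],
    [450_000, 800, 120],
])

traffic = np.vstack([normal, attacks])

# Train an unsupervised detector; contamination is the expected outlier share
detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)

flags = detector.predict(traffic)  # -1 = anomaly, 1 = normal
print("sessions flagged for review:", np.where(flags == -1)[0])
```

The same pattern, learning what “normal” looks like and flagging deviations, underpins many commercial detection products, though production systems work over far richer features than this toy log.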
“To enable an AI-driven defense system to perform this battle against an AI-based attacker will require a totally new set of capabilities from the defense system,” said the Cyber Directorate’s Zack. And increasingly sophisticated attacks will cause the ensuing cyber-battles to move from “human-to-human mind games to machine-to-machine battles.”