Casey Tonkin
Artificial intelligence (AI) systems like ChatGPT could be devastating for the cyber security landscape, with experts warning “it’s only a matter of time” before AI-assisted attacks unleash cyber warfare on a previously unimaginable scale.
Late last year, OpenAI let the public start testing ChatGPT, a natural language processing model that can seamlessly converse on any number of topics, debug code, and write news articles about itself.
ChatGPT immediately created a stir around the world, raising questions about academic integrity and what the knowledge economy will look like in an AI-powered future.
For the world of cyber security, in which a small team of underground hackers can send shivers down the collective spines of executives at multibillion-dollar companies, the public availability of fast, accurate, and highly intelligent AI systems like ChatGPT threatens to make attacks even easier to pull off.
Louay Ghashash, Director and CISO of security company SpartanSec and Chair of the Australian Computer Society (ACS) Cyber Security Committee, has been experimenting with ChatGPT and has found some alarming results.
“Just last week we uploaded malicious code to VirusTotal which the vast majority of vendors detected as malicious,” he told Information Age.
“Then we pasted that code into ChatGPT and asked it to make changes – to optimise the code and do things like add extra loops.
“ChatGPT didn’t change the code’s ultimate function, yet to my disbelief, more than two-thirds of vendors didn’t see the code as malicious when we uploaded it back to VirusTotal.”
Making dynamic changes to malware to bypass antivirus detection is a scary prospect, especially when tools like ChatGPT put that capability within reach of less-skilled cyber criminals.
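The benign half of that experiment – checking how many engines flag a given sample – can be scripted against VirusTotal’s public API. Below is a minimal Python sketch assuming the v3 REST endpoint; the API key and file hash are placeholders, and the response field names follow VirusTotal’s documented v3 schema.

```python
# Minimal sketch: fetch an existing VirusTotal report for a file hash
# and tally how many engines flagged it as malicious.
# Assumes VirusTotal's public v3 REST API and a valid API key (placeholders below).
import requests

VT_API_KEY = "YOUR_API_KEY"           # placeholder – supply your own key
SAMPLE_SHA256 = "<sha256-of-sample>"  # placeholder – hash of the file in question

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SAMPLE_SHA256}",
    headers={"x-apikey": VT_API_KEY},
    timeout=30,
)
resp.raise_for_status()

# last_analysis_stats holds per-verdict engine counts (malicious, undetected, ...)
stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print(f"{stats['malicious']} of {sum(stats.values())} engines flagged this sample")
```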
In a recent blog post, cyber security firm Check Point analysed hacking forums and found discussion of ways to use ChatGPT for nefarious ends.
One example showed someone who claimed to have limited coding experience spinning up an encryption script that Check Point said “can easily be modified to encrypt someone’s machine completely without any user interaction”.
In the same way that AI-assisted code-writing tools can help everyday developers do their jobs more efficiently, low-code AI tools could lower the barrier to entry for would-be cyber criminals who want to eke out a living extorting businesses or writing and selling ransomware.
Check Point noted that the hacking forum users displayed limited development skills, but warned “it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for bad”.
Ghashash agrees that the advent of ChatGPT points toward a dangerous future for cyber security, offering his own doomsday scenario for when a similar AI system is given the ability to scan the internet.
“Today you can’t get ChatGPT to give you a list of exploitable websites, but it’s only a matter of time before something similar is connected to the internet, or to a search engine like Google – then you will have a nuclear weapon of cyber warfare,” he told Information Age.
“Imagine being able to say ‘give me all the ecommerce websites in Australia vulnerable to an SQL injection’ and it printing out a list of IP addresses.”
He said it’s up to the companies developing and releasing this technology to make sure they have the right safeguards in place.
“We need to see some limitations to make sure it is asking if you own this code or this site,” Ghashash said.
“They have put all this intelligence into answering the question, but not in validating the legitimacy of who’s asking it.”
For its part, OpenAI – creator of ChatGPT – is aware of potential misuse and said it is “eager to collect user feedback” to improve the AI.