Christian Espinosa
Generative AI presents many opportunities for businesses to improve operations and reduce costs, and it has great potential to deliver value to organizations. However, it also has a dark side in the cybersecurity landscape: hackers can readily manipulate it to spread malware.
How can companies avoid being the target of a generative AI attack? There's no simple answer. Here's what you should know about how generative AI can pose a substantial threat.
What Is Generative AI?
Generative AI is a broad label, with ChatGPT being the most well-known example. It uses natural language processing (NLP) and natural language generation (NLG) along with the power of large language models (LLMs). At its core is language, but it can do more than produce text. It can also generate images, videos, music and more. Because of how advanced the AI is, it can apply to many different areas of business.
Businesses are taking note. According to a survey conducted by Gartner, Inc., 70% of organizations are actively exploring generative AI. Additionally, 45% of companies are investing more in AI to enhance the customer experience, grow revenue, reduce costs and improve business continuity. Organizations are aware of the technology's risks but believe its value outweighs them.
How Hackers Are Using Generative AI
Developers and programmers have been testing out generative AI. It acts as a coding assistant to accelerate software development, iteration and testing. A study on GitHub Copilot found that professionals who used it completed tasks 55.8% faster than those who didn't.
These productivity gains align with what companies expect to realize by adopting it. Unfortunately, it also presents new risks. For example, a developer may input proprietary code into ChatGPT to help identify bugs. Depending on the service's data policies, that code may be retained and used to train future models, putting it outside the company's control and potentially within reach of cybercriminals.
Attackers are also using this channel to push malicious packages into development environments. Because generative AI sometimes "hallucinates" code libraries that don't actually exist, an attacker can publish a real package under one of those invented names, seed it with malware, and wait for developers to install it.
Hackers can poison the well in other ways, too. A developer might prompt generative AI to solve a specific coding issue and receive multiple recommendations, some of which point to compromised or malicious components. Coders pick these up, and the malware then lives in the internal ecosystem of networks and applications.
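As a concrete illustration of the hallucinated-package risk, here is a minimal sketch of a vetting step a team could bolt onto its workflow before installing anything an AI assistant suggests. It assumes a Python/PyPI setup and uses PyPI's public JSON API; the 30-day "too new" threshold and the script name are illustrative choices, not a vetted policy.

# Sketch: vet AI-suggested dependencies before installing them.
# Assumes a Python/PyPI workflow; the 30-day threshold is illustrative.
import sys
from datetime import datetime, timedelta, timezone

import requests  # pip install requests

PYPI_URL = "https://pypi.org/pypi/{name}/json"

def vet_package(name: str, min_age_days: int = 30) -> bool:
    """Return True only if the package exists on PyPI and isn't brand new."""
    resp = requests.get(PYPI_URL.format(name=name), timeout=10)
    if resp.status_code != 200:
        print(f"[BLOCK] '{name}' not found on PyPI -- possibly an AI "
              f"hallucination an attacker could register.")
        return False

    data = resp.json()
    # Use the earliest upload across all releases as a rough package age.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values() for f in files
    ]
    if not uploads:
        print(f"[BLOCK] '{name}' has no uploaded files.")
        return False
    age = datetime.now(timezone.utc) - min(uploads)
    if age < timedelta(days=min_age_days):
        print(f"[WARN] '{name}' is only {age.days} days old -- review first.")
        return False
    print(f"[OK] '{name}' exists and is {age.days} days old.")
    return True

if __name__ == "__main__":
    # Example: python vet_deps.py requests some-suggested-pkg
    results = [vet_package(pkg) for pkg in sys.argv[1:]]
    sys.exit(0 if all(results) else 1)

The idea is simple: a package name an AI tool suggests that doesn't exist in the registry, or that appeared only days ago, deserves human scrutiny before it gets anywhere near a build.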
Hackers can also ask generative AI to write malware outright. With careful prompting, ChatGPT can produce hard-to-detect malicious code, and combining it with other machine learning techniques makes it more potent still.
It's no surprise that technology can have a dark side when it falls into the wrong hands. Balancing the risk and reward is critical for any company. Cyber leaders must be aware of the possibilities and build new defenses to keep these threats at bay.
Protecting Your Organization From Generative AI-Spread Malware
In developing your strategies to protect against these threats, there are several mechanisms you can put in place.
• Use ChatGPT to be more proactive and enhance observability. This approach is "fight fire with fire" and would involve cyber professionals training an AI-powered system to act as a shield from these attacks (see the sketch after this list).
• Strengthen security and privacy measures. What more can you be doing to lock down your network and the devices on it? It's critical to keep asking these questions. Some techniques to do this include multifactor authentication (MFA), biometrics, password managers and zero-trust architecture.
• Don't depend on monitoring and detection technology alone. In the case of generative AI-written malware, you can't expect cybersecurity tools to be your only line of defense. The latest and greatest are still behind when it comes to finding this code. You still need humans in the loop to review and analyze any anomalies or patterns. Upskill your cyber team with training on generative AI to add this layer of checks and balances.
• Remain adaptable and agile. Having a cyber environment that's adaptable and agile is a best practice for any company. That can be a challenge for many reasons, most of which come down to your people. If they aren't flexible, your strategy can't be flexible, either. This relatively new threat represents uncertainty and the unknown for them, which could make them defensive. To avoid this, emphasize the importance of shifting to address the current threat landscape by being communicative, collaborative and transparent.
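Returning to the first point above, here is one way the "fight fire with fire" idea could look in practice: a minimal sketch that asks an LLM to pre-screen code snippets (for example, ones copied from a generative AI assistant) before they enter the codebase. The model name, review prompt and verdict format are assumptions for illustration, and in keeping with the "humans in the loop" point, a flagged snippet should be routed to an analyst rather than auto-rejected.

# Sketch: an AI-assisted "shield" that pre-screens code snippets before they
# enter the codebase. Model name, prompt and verdict format are assumptions;
# flagged snippets go to a human analyst, not a definitive verdict.
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY set

client = OpenAI()

REVIEW_PROMPT = (
    "You are a security reviewer. Examine the following code snippet for "
    "malicious behavior: obfuscated payloads, unexpected network calls, "
    "credential access or suspicious dependencies. Respond with 'FLAG' or "
    "'PASS' on the first line, followed by a one-sentence reason."
)

def screen_snippet(code: str, model: str = "gpt-4o-mini") -> tuple[bool, str]:
    """Return (flagged, reason) for a code snippet."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": code},
        ],
        temperature=0,
    )
    verdict = resp.choices[0].message.content.strip()
    return ("FLAG" in verdict.splitlines()[0].upper(), verdict)

if __name__ == "__main__":
    flagged, reason = screen_snippet(
        "import base64, os\n"
        "exec(base64.b64decode(os.environ.get('PAYLOAD', '')))"
    )
    print("Needs human review!" if flagged else "Passed pre-screen.", reason)

A screen like this won't catch everything, which is exactly why it belongs in front of, not in place of, your trained cyber team.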
The threats from generative AI will only become more complex. Your best defense involves using it to combat attacks, enhancing security practices and leaning on human intelligence. What happens next is still largely unknown, but you can still empower your technical folks with the hard and soft skills needed to win this round of the cyber war.