Russell Haworth
It’s fair to say that recent months have sparked a growing interest in AI in the general media. Hollywood’s robot apocalypse may be fiction, but advances in computing power, intelligent unsupervised algorithms, and applications like chatbots are fuelling genuine fears about job displacement. Not without justification: telecoms giant BT plans to cut 55,000 jobs by the end of the decade, with up to a fifth of the cuts falling on customer services as staff are replaced by technologies including artificial intelligence.
Amidst the hype, there is a darker side to AI that we need to address, and rapidly. Even leaders of the field, including Sam Altman, CEO of AI research laboratory OpenAI, and Demis Hassabis, CEO of Google DeepMind, are sounding the warning bell.
They’re talking not just about AI, but about AI 2.0: “Artificial General Intelligence”, or AGI, an AI system capable of tackling any task a human can. And it’s coming fast.
AGI systems pose an existential risk to humanity unless governments collaborate now to establish guardrails for responsible development over the coming decade. According to the Center for AI Safety, “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Virtual bunker, anyone?
Businesses are scrambling to avoid disruption from Generative AI tools like ChatGPT, which present both opportunities and threats. But in the rush to implement AI, are we addressing the risks posed by one of its most prolific groups of adopters: cybercriminals?
In the last few weeks, we’ve seen yet another large-scale cyber-attack. British Airways, Boots, and the BBC fell victim to hackers who exploited a vulnerability in MOVEit, a third-party file transfer tool used by payroll provider Zellis. The personal and financial data of over 100,000 employees was compromised, and the attackers demanded that victims enter negotiations.
The UK’s National Cyber Security Centre (NCSC) predicts a significant increase in commercial cyber tools and services over the next five years, which it says “will have a profound impact on the threat landscape, as more state and non-state actors obtain capabilities and intelligence not previously available to them.” The sophistication of these commercial products, it adds, rivals tools developed by nation-states.
Unrestrained by ethics or law, cybercriminals are racing to use AI to find innovative new hacks. AI-powered cybersecurity threats are therefore a growing concern for organizations and individuals alike: they can evade traditional security measures and cause significant damage. They include:
1. Advanced Persistent Threats (APTs): sophisticated, sustained cyberattacks in which an intruder enters a network undetected and remains there for a long time, stealing sensitive data. They frequently use AI to avoid detection and to target specific organizations or individuals.
2. AI-powered malware: malware that uses machine learning to adapt its behaviour to the environment it encounters and to tailor its attack to a specific victim’s systems.
3. Phishing: using natural language processing and machine learning, attackers craft convincing phishing emails and messages designed to trick individuals into revealing sensitive information. The same techniques can be turned to defence, as the sketch after this list shows.
4. Deepfake attacks: these use AI-generated synthetic media, such as fake images, videos, or audio recordings that are difficult to distinguish from the real thing. They can be used to impersonate figures of authority within a company, such as a CEO or a network administrator, or to spread false information for malicious purposes.
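To make the phishing point concrete from the defender’s side, here is a minimal, hypothetical sketch of how machine learning can score incoming mail. It uses scikit-learn’s TfidfVectorizer and LogisticRegression; the training messages, labels, and flagging threshold are all invented for illustration, and a real filter would train on a large labelled corpus with far richer features.

```python
# Minimal, illustrative phishing-detection sketch using scikit-learn.
# The training messages and labels below are invented examples; a real
# system would train on a large labelled corpus with far richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your bank details to avoid service interruption",
    "Team meeting moved to 3pm, agenda attached",
    "Lunch on Friday to celebrate the product launch?",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns raw text into word-frequency features; logistic
# regression then learns which words signal phishing.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

incoming = "Please verify your password now to keep your account active"
score = model.predict_proba([incoming])[0][1]  # probability of phishing
print(f"Phishing probability: {score:.2f}")
if score > 0.5:  # threshold chosen arbitrarily for this sketch
    print("Flag for review")
```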
An old northern English phrase says, “Where there’s muck there’s brass”; in other words, where there’s a mess, there’s money to be made. And there is a very large industry of cyber security professionals making money by defending individuals, companies, and governments from these growing threats. AI-powered threat hunters play a crucial role, connecting the dots across multiple sources of information to surface hidden patterns. The global cyber security defence market is expected to exceed $33 billion by 2028, growing at over 7% per year. So the Generative AI arms race in cyber security has begun.
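To give a flavour of how an AI-assisted threat hunter “connects the dots”, the sketch below flags anomalous login events with scikit-learn’s IsolationForest, which isolates outliers without needing labelled examples of an attack. The features (hour of login, megabytes transferred, failed attempts) and the data are invented for illustration; production platforms correlate far more signals across far more sources.

```python
# Illustrative anomaly-detection sketch for threat hunting.
# Each row is a login event: [hour_of_day, bytes_transferred_mb,
# failed_attempts]. The data and features are invented for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Mostly normal office-hours activity...
rng = np.random.default_rng(42)
normal = np.column_stack([
    rng.integers(8, 18, 200),          # logins during working hours
    rng.normal(20, 5, 200).clip(0),    # modest data transfers (MB)
    rng.integers(0, 2, 200),           # occasional failed attempts
])

# ...plus a few events that look like data exfiltration at 3am.
suspicious = np.array([
    [3, 900.0, 6],
    [2, 750.0, 8],
])
events = np.vstack([normal, suspicious])

# IsolationForest isolates outliers without labelled attack data,
# which suits threat hunting, where the attack pattern is unknown.
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(events)   # -1 = anomaly, 1 = normal

for event in events[flags == -1]:
    print(f"Anomalous event: hour={int(event[0])}, "
          f"mb={event[1]:.0f}, failed={int(event[2])}")
```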
As AI becomes more integrated into society, it's important for lawmakers, judges, and other decision-makers to understand the technology and its implications. Building strong alliances between technical experts and policymakers will be crucial in navigating the future of AI in threat hunting and beyond. Here’s to the good guys!