John Palmer
TL;DR Breakdown
DARPA launches AI Cyber Challenge (AIxCC) to advance AI-based security tools, offering $20M in prizes.
AI's impact on security remains uncertain, but experts warn against ignoring its potential.
Multimodal AI, evolving from generative AI, will soon interpret chat, video, and body language.
Artificial intelligence (AI), particularly large language models (LLMs) like GPT, has been the focus of discussions at the recent Black Hat and Def Con conferences in Las Vegas. However, experts are divided on how AI will impact the security posture of companies, from protecting internal data to developing applications.
Maria Markstedter, an expert in Arm reverse engineering and exploitation, paraphrased OpenAI CEO Sam Altman’s statement that “AI will most likely lead to the end of the world, but in the meantime, there’ll be great companies.” During her keynote speech at Black Hat, Markstedter cited the industry motto “Move fast, break shit,” noting that products often lack security functions at launch and that companies have to be forced to invest in security.
Generative AI and multimodal AI
Generative AI, which currently focuses on text, is evolving into multimodal AI, capable of handling chat, live video responses, sentiment analysis, and even body language interpretation, according to Markstedter. She stressed the importance of maintaining anonymity in these systems and rethinking data security.
Impact on security
The impact of AI on security is still uncertain. Markstedter pointed out that AI is already affecting data security, as evidenced by employees copying data into black-box AI chatbots. However, the broader implications for security remain unclear. LLMs can generate malicious code, but they can’t execute it, according to two former OpenAI employees who discussed AI’s potential use in security.
Despite the uncertainty, there was consensus that banning AI is not a viable long-term solution. Businesses are eager to adopt AI, and securing it will eventually require embracing LLMs and other AI technologies. Those who don’t will fall behind, security experts warned.
Embracing AI in security
Markstedter argued that integrating autonomous agents is risky, but it’s essential to accept their reality and develop solutions to make them safer. “This is our chance to reinvent ourselves [and] our security posture,” she said, calling for the community to come together and foster research.
The DARPA AI Cyber Challenge
The Defense Advanced Research Projects Agency (DARPA) has challenged Black Hat and Def Con attendees to help create a next-generation AI-based responsive security system. Perri Adams, a program manager in DARPA’s Information Innovation Office, announced the two-year AI Cyber Challenge (AIxCC) during the Black Hat keynote.
AIxCC aims to develop a new generation of cybersecurity tools and offers a total of $20 million in prizes to the teams that create the best systems. The competition has two tracks: the Funded Track and the Open Track. Funded Track competitors will be selected from proposals submitted to a Small Business Innovation Research solicitation, with up to seven small businesses receiving funding of up to $1 million. Open Track competitors can register via the competition website without DARPA funding.
At Def Con 2024, the top five teams will be determined; each will receive an additional $2 million in funding and proceed to a second round of experimentation. The 2025 Def Con winners will receive $4 million for first place, $3 million for second place, and $1.5 million for third place.
Collaboration with leading AI companies
AIxCC is partnering with leading AI companies, including Anthropic, Google, Microsoft, and OpenAI. The Open Source Security Foundation (OpenSSF), a project of the Linux Foundation, will serve as a challenge advisor to guide teams in creating AI systems capable of addressing vital cybersecurity issues, such as the security of the nation’s critical infrastructure and software supply chains.