
16 November 2024

Cybersecurity Risks of AI-Generated Code

Jessica Ji, Jenny Jun, Maggie Wu and Rebecca Gelles

Introduction

Advancements in artificial intelligence have resulted in a leap in the ability of AI systems to generate functional computer code. While improvements in large language models (LLMs) have driven much of the recent interest and investment in AI, code generation has been a viable use case for AI systems for several years. Both specialized AI coding models, such as code infilling models that function like "autocomplete for code," and "general-purpose" LLM-based foundation models are being used to generate code today. A growing number of applications and software development tools have incorporated these models, packaging them as products that are easily accessible to a broad audience.
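To make the distinction concrete, the sketch below contrasts the two prompting styles mentioned above. It is illustrative only: the fill-in-the-middle marker names and the generate() helper are hypothetical placeholders standing in for a real model call, not the interface of any particular model or library.

```python
# Illustrative sketch of the two kinds of AI code generation described above.
# The marker names and generate() are hypothetical placeholders, not the API
# of any specific model or tool.

def generate(prompt: str) -> str:
    """Placeholder for a call to an AI code-generation model."""
    return "<model output would appear here>"

# 1. Code infilling ("autocomplete for code"): the model sees the code before
#    and after the cursor and fills in the missing middle.
prefix = "def average(numbers):\n    total = sum(numbers)\n"
suffix = "\n    return result\n"
fim_prompt = f"<PREFIX>{prefix}<SUFFIX>{suffix}<MIDDLE>"
print(generate(fim_prompt))  # e.g., "    result = total / len(numbers)"

# 2. General-purpose LLM: the model receives a natural-language instruction
#    and produces an entire function from scratch.
chat_prompt = "Write a Python function that returns the average of a list of numbers."
print(generate(chat_prompt))
```

In practice, infilling models are typically embedded in editors and triggered as a developer types, while general-purpose models are usually accessed through chat or assistant interfaces.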

These models and associated tools are being rapidly adopted by the software developer community and individual users. According to GitHub's June 2023 survey, 92% of surveyed U.S.-based developers reported using AI coding tools both in and outside of work.1 Another industry survey from November 2023 similarly reported a high usage rate, with 96% of surveyed developers using AI coding tools and more than half of respondents using them most of the time.2 If this trend continues, LLM-generated code will become an integral part of the software supply chain.

