James Reddick
Threat actors linked to the governments of Russia, China and Iran used OpenAI’s tools for influence operations, the company said Thursday.
In its first report on the abuse of its models, OpenAI said that over the last three months it had disrupted five campaigns carrying out influence operations.
The groups used the company’s tools to generate a range of content — mostly text, with some images — including articles and social media posts, as well as to debug code and analyze social media activity. Several groups also used the service to manufacture phony engagement, replying to their own artificial content with fake comments.
“All of these operations used AI to some degree, but none used it exclusively,” the company said. “Instead, AI-generated material was just one of many types of content they posted, alongside more traditional formats, such as manually written texts, or memes copied from across the internet.”