Vittoria Elliott
Recently, former president and convicted felon Donald Trump posted a series of photos that appeared to show fans of pop star Taylor Swift supporting his bid for the US presidency. The pictures looked AI-generated, and WIRED confirmed they probably were by running them through the nonprofit True Media's detection tool, which found "substantial evidence of manipulation."
Things aren't always that easy. The use of generative AI, including for political purposes, has become increasingly common, and WIRED has been tracking its use in elections around the world. But in much of the world outside the US and parts of Europe, detecting AI-generated content is difficult because of biases in how detection systems are trained, leaving journalists and researchers with few resources to address the deluge of disinformation headed their way.
Detecting media generated or manipulated using AI is still a burgeoning field, a response to the sudden explosion of generative AI companies. (AI startups pulled in over $21 billion in investment in 2023 alone.) “There's a lot more easily accessible tools and tech available that actually allows someone to create synthetic media than the ones that are available to actually detect it,” says Sabhanaz Rashid Diya, founder of the Tech Global Institute, a think tank focused on tech policy in the Global South.