Raul Dancel
Artificial intelligence (AI) is exponentially magnifying the fear, anger and hate that social media has already weaponised, journalist and Nobel laureate Maria Ressa has warned.
“If the first gen(eration of) AI was (about) fear, anger and hate – weaponising those – this one now leads to weaponising intimacy,” Ms Ressa, who won the Nobel Peace Prize in 2021 with Russian journalist Dmitry Muratov for standing up to authoritarian regimes, told The Straits Times on Saturday.
Ms Ressa, who founded the Philippine online news site Rappler, was in Singapore this weekend for the New.Now.Next Media Conference organised by the Asia chapter of the Asian American Journalists Association.
It was hosted at Google’s Singapore office from Thursday to Saturday.
She said the first iteration of AI – seen in machine-learning programmes – was meant to get users addicted to scrolling through social media, so that companies such as Facebook and Twitter could make more money from targeted advertisements and harvested data.
But what these programmes learnt was that lies “spread six times faster than really boring facts”, she said, adding that the algorithms that power social media platforms keep churning out lies.
“What that does to you is that… it pumps you with toxic sludge – fear, anger, hate – and when you tell a lie a million times, it becomes a fact,” Ms Ressa told ST.
This, she said, has helped populist and autocratic leaders rise to power.
Ms Ressa and Rappler had been in the crosshairs of a strongman, Mr Rodrigo Duterte, who was elected president of the Philippines in 2016. He was aided by a massive social media campaign that pushed his populist platform, anchored by anti-crime rhetoric.
She is currently facing civil and criminal cases lodged by the Justice Ministry and regulators under Mr Duterte that she sees as retaliation by the former president for Rappler’s critical coverage of his brutal war on the narcotics trade.
His anti-drug crusade left more than 20,000 suspects dead in police raids or at the hands of unidentified vigilantes.
Ms Ressa added that the impact goes beyond politics, citing a report issued on Tuesday by United States Surgeon-General Vivek Murthy, which pointed to growing evidence that social media use may seriously harm children.
Dr Murthy said that while social media can help children and adolescents find a community to connect with, it also contains “extreme, inappropriate, and harmful content” that can “normalise” self-harm and suicide.
‘Extinction-level event’
Ms Ressa said the new generation of AI – chatbots such as ChatGPT, created by Microsoft-funded OpenAI, and Google’s Bard – would spread lies even faster, more broadly and more intimately if “released into the wild” without guardrails.
“It’s like open-sourcing the Manhattan Project,” she said, referring to research that led to the development of the atomic bomb.
Wrongly used, she warned, AI would allow “bad actors” to stoke more online hate and violence that could spill over to the real world, prettify the resumes of despots, and serve up even more “micro-targeted”, invasive ads.
She said even those responsible for coding these chatbots warn that there is a “10 per cent or greater chance that this leads to an extinction-level event, not hitting another species, but humanity”.
“It’s like releasing something as dangerous as nuclear fission into the hands of people with no guardrails,” she told ST.
Ms Ressa said OpenAI’s own chief executive Sam Altman has told US lawmakers how dangerous AI can be.
“But no one asked him, ‘If it’s so dangerous, why are you releasing it?’” she said.
Bad actors
Microsoft’s chief economist Michael Schwarz has warned of the risk of “bad actors” using AI to cause “real damage”.
“I’m quite confident that, yes, AI will be used by bad actors, and, yes, it will cause real damage,” he said at an event hosted by the World Economic Forum on May 3.
“We have to put safeguards” in place to stop hucksters and tyrants from profiting off AI through money-making scams and vote rigging, he said.
Ms Ressa said AI, as it is shaping up, has to be reined in, along with the rest of the technology sector, which she described as the “least regulated industry in the world”.
“The problem with a godlike tech is that it is being used for profit – and that’s what we need to stop. This is where governments need to come in and protect their citizens,” she said.