By CIARA NUGENT
On the evening of June 8, a 29-year-old sound engineer and a 30-year-old businessman were on their way to a picnic spot in India’s northeastern Assam state when they stopped at a village to ask for directions. The villagers had been told, in a video circulating on the messaging app WhatsApp, that child kidnappers were roaming the country. Believing these strangers were the ones they’d been warned about, the villagers formed a large mob and, before the men could convince them otherwise, beat them to death.
The video they had seen was in fact an instructional safety video made in Pakistan, but it had been shared alongside text warning of kidnappers in the local area, stoking fear and anger in the community. After the killings, police arrested 16 people, and demonstrations took place in the victims’ hometown of Guwahati, the largest city in Assam. Similar incidents involving false news stories circulated on WhatsApp have reportedly led to the deaths of two dozen people in India since April. Now, both the app and the government are scrambling to prevent more mob lynchings.
“This is really happening at the frontier of technology,” says Nandagopal Rajan, new media editor at The Indian Express newspaper. “India has had a 4G revolution in the past 18 months and we’ve seen around 200 million new people start using the internet, mostly on phones. That means WhatsApp has suddenly ended up in the hands of a lot of first time internet users.”
India is WhatsApp’s largest market, with more than 200 million users at last count, in February 2017. The number today is likely much higher. While the app was designed as a messaging platform for one-on-one exchanges or small groups, in India it has taken on a life of its own. “Many Indians use WhatsApp not as a messaging platform,” Rajan says, “but as a consumption platform—people join lots of groups so they receive an endless flurry of video messages, memes and random jokes and stories, directly to their inbox.”
When those messages and stories contain false information, WhatsApp’s content-sharing function can have a more sinister effect. End-to-end encryption, which effectively locks messages so that only the sender and receiver hold the keys to open them, makes it impossible for the company to see inside chats and moderate content on its platform the way Twitter or Facebook can. Groups are closed and capped at 256 members, making it all but impossible for journalists or police to get in and trace where a rumor originated.
A 2018 report from the Reuters Institute at the University of Oxford shows WhatsApp is becoming an increasingly dominant news-sharing platform across Latin America and Asia. That means the problem is only growing. India’s technology ministry said in a statement on July 19 that the company would face legal action if it remained a “mute spectator” to the violent consequences of false stories circulating. “When rumors and fake news get propagated,” it said, “the medium used for such propagation cannot evade responsibility.”
Since late June, WhatsApp has responded with a series of unprecedented interventions to curb sharing on the app. It added a setting that allows only admins to send messages in a group. It reduced the number of chats a message can be forwarded to from 100 to 5 in India (and to 20 in the rest of the world) and removed the “quick forward” button that appears next to messages containing photos, video or audio. It also introduced a “suspicious link” label, which appears alongside links where WhatsApp detects an obvious problem, such as a strange combination of characters. Earlier in July, the company took out newspaper ads in nine states warning people to “question information that upsets you.”
“WhatsApp is making an effort to understand [how people use its platform in India] but we’re in uncharted territory,” Rajan says. “Nobody has ever fathomed what happens when this kind of a platform, which is in encrypted code, starts working at scale—the kind of scale you have in India.”
While the forwarding limit has been reduced, Rajan points out that content can still spread very fast between groups. “Often, all the members of a group are admins, and they’ll all be admins of ten more groups too. All the groups are interconnected, so it’s really easy for content to go viral.” WhatsApp did not respond to TIME’s request for further comment on the situation in India.
“WhatsApp’s new measures are a band-aid, not a solution,” says Bal Krishn Birla, a tech entrepreneur from Bangalore. Two years ago, Birla and software engineer Mohammed Shammas launched the volunteer service Check4Spam, in the hope of combating their country’s burgeoning fake news epidemic. Now with a team of 15 volunteers, the pair painstakingly fact-check false stories of all kinds, using lunch breaks, evenings and weekends to balance the task with their day jobs.
They receive between 100 and 250 messages a day from people asking whether or not a story is true. “We can’t respond to all of them, but we try to get to most,” Birla says. The team will take a typical news story, such as a report of a young child being kidnapped, research the facts online and use tools like reverse image search to check whether photos or videos included in the story come from the region or time they claim to. “It’s pretty simple,” he says. “We’re not reinventing the wheel.” They then upload the fact-check to their website and encourage the sender to share it too, reversing the flow of fake news.
Check4Spam isn’t the only grassroots organization taking notice of the risks posed by news on WhatsApp. In Mexico, where 35% of people use WhatsApp as a primary news source according to the Reuters Institute, a group of journalists made the app the focus of their fact-checking site, Verificado 2018. In the run-up to the country’s largest-ever elections in July, they responded to thousands of requests from people wanting to check political stories.
Alongside volunteer groups like these, Birla believes the lasting solution lies in media literacy education, something Check4Spam is keen to get involved in. On May 30, Facebook announced the company would team up with India’s National Commission for Women and the Cyber Peace Foundation, an NGO, to run digital literacy courses at universities in several major Indian cities. They will teach 60,000 women who are new to the internet to recognize false information online, helping to avoid both fake news and scams. But similar programs could take years to roll out across a country of 1.3 billion. “These killings are very likely to continue, unfortunately,” Birla says. “To me it seems like it’s just the beginning.”
Birla questions whether social media companies are truly committed to stopping the spread of fake news. “It amounts to getting people to share less, and that directly conflicts with their business model,” he says. He believes Facebook, which owns WhatsApp, already has the technology to deal with much of the false information circulating on its platforms. “They understand how to flag content that seems inappropriate or like it violates copyright—they just haven’t done it yet for things that may seem suspicious.” While the difference between truth and falsehood may be harder for technology to spot than an unlicensed music clip, tools like reverse image search, which can reveal whether an image comes from where it claims to, are currently under-used by social media companies, he says.
“When people start leaving their platform, then I think they’ll figure out the solution,” says Birla, pointing to a trend of young people abandoning Facebook for Instagram and Snapchat, where the lack of a sharing function means there’s less unwanted spam content. Government pressure can only go so far, Birla says. “The will to fight fake news has to come from consumers. It has to be a people’s movement.”