Nov 21, 2016
Fake News, Hate Speech and Social Media Abuse: What’s the Solution?
Jennifer Golbeck of the University of Maryland and Northeastern University's Andrea Matwyshyn discuss social media sites' crackdown on 'fake news.'
Google, Facebook and Twitter last week vowed to fight fake news, hate speech and abuse in their own ways amid the backlash over how such content may have influenced voting in the U.S. presidential election. Those actions could have come sooner, and many troubling issues persist, according to experts.
Google has said it would prevent websites carrying fake news from accessing its AdSense advertising platform, which lets such sites share in advertising revenue. Facebook said it would not integrate or display ads in apps or sites that have illegal, misleading or deceptive content, including “fake” news — news that is deliberately factually incorrect. Twitter said in a statement that it would remove the accounts of people posting offensive content, building on earlier measures that let users “mute” such content and report abuse, and on its existing policy on hate speech and other offensive content.
Following the money is the right strategy to prevent such abuses, according to Jennifer Golbeck, director of the Social Intelligence Lab and professor of information studies at the University of Maryland, and author of the book Social Media Investigation: A Hands-on Approach. The new measures by these companies have “really taken away the main source of income for those sites, which is what drives them to exist in the first place,” she said. She noted that many of the individuals and organizations posting fake news, including during the recent elections, are not in the U.S. and don’t care about the ideologies behind the content. “They just care about making money, and they figured out ways to create clickbait that will get it.”
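Mechanically, "following the money" amounts to a gate in front of the ad-serving pipeline: publishers on a flag list simply stop getting monetized. Here is a minimal sketch in Python of what such a check might look like; the flag list, domain names and function name are illustrative assumptions, not Google's or Facebook's actual systems.

```python
# Hedged sketch of ad-platform demonetization: before serving ads,
# check the requesting publisher against a flag list. The list and
# domains are hypothetical placeholders.

FLAGGED_PUBLISHERS = {"fakenews.example", "hoaxwire.example"}

def ad_request_allowed(publisher_domain):
    """Return False for publishers cut off from advertising revenue."""
    return publisher_domain not in FLAGGED_PUBLISHERS

for domain in ("reputable.example", "fakenews.example"):
    verdict = "serve ads" if ad_request_allowed(domain) else "demonetized"
    print(domain, "->", verdict)
```

Trivial as the check is, it targets exactly the incentive Golbeck describes: with no ad revenue, a site that exists only to make money has no reason to keep publishing.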
Underpinning the moves by the social media companies is a law (Section 230 of the Communications Decency Act) that gives them “a modicum of legal protection for the content that exists on their platforms, as long as they don’t veer off too much into editorial functions,” said Andrea Matwyshyn, law professor at Northeastern University and affiliate scholar at the Center for Internet and Society at Stanford Law School. “They’re walking a bit of a legal line to create the right kind of environment from their corporate perspective but also to not run afoul of the extent to which Section 230 of the CDA gives them a buffer of legal protection.”
Golbeck and Matwyshyn discussed the broader aspects of the fight against fake news and hate speech on the Knowledge@Wharton show on Wharton Business Radio on SiriusXM channel 111.
Facebook CEO Mark Zuckerberg, who had earlier resisted charges that his company unwittingly allowed fake news to proliferate, is no longer in denial. He put out a post on November 18 describing his company’s multipronged plan to prevent fake news from spreading on its site. Even so, he acknowledged limits: “We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” he wrote.
“The fact that Facebook and Google waited until the election was over — and the flow of advertising revenue from fake news sites subsided — before taking action is pretty damning.”–Kevin Werbach
How Far Could They Go?
The important issue is whether Facebook and other major social media platforms take responsibility for their role in shaping the informational environment, according to Kevin Werbach, Wharton professor of legal studies and business ethics. “They don’t want to think of themselves as media companies, but they are playing the same role that media companies traditionally did in influencing public opinion. With that influence comes responsibility.”
Should the government throw its weight behind the social networks to build a stronger front? Regulation and laws could help, but the challenge lies in implementing them, said Wharton marketing professor Jonah Berger. “One person’s pornography is another person’s art,” he said. “With religious beliefs, one person’s truth is another’s falsehood. That is where this gets messy.”
There is only so much social media networks can do to prevent fake news, said Wharton marketing professor Pinar Yildirim. “Serious newspapers — gatekeepers of information — usually do a much better job in fact-checking before publishing and distributing news,” she said. “Since the barriers to distributing information are so much lower nowadays, it is hard for platforms like Twitter and Facebook to be able to filter information, which they aggregate at such a large scale.” Moreover, they do not want to offend users by blocking their content, she added.
The Lure of Business
Empowering users to decide what they want to see and what they want to avoid also makes good business sense. “After the polarized 2016 elections, it has become clear that giving more control to users on what kind of information they are exposed to will make them more likely to continue to use these platforms,” said Yildirim. “In the absence of these tools, users are likely to unfriend their connections or engage with the platforms less in order to avoid harassment or unpleasant content.”
Twitter’s moves protect not just its users, but also its brand image, which had earlier taken a beating for allowing hate speech and abusive trolls. Golbeck noted that Twitter last week suspended the accounts of some members of white supremacist and neo-Nazi groups.
Matwyshyn is less enthused by those moves. She noted that Twitter has a contract with its users and it dictates the terms of engagement, which now includes banning hate speech from its platform. “The steps it is taking now are merely a run-of-the-mill contract enforcement situation,” she said.
More significantly perhaps, Twitter’s business outlook could also get a boost as it continues to look for a buyer. Twitter has seen its platform become “in a lot of ways a cesspool of terrible things from anonymous accounts that has made it sometimes a legitimately dangerous place for a lot of users,” according to Golbeck.
“Just for their company image, in addition to the fact that it is really affecting their business and the perception of the value of their business, there couldn’t be a better time for them to start aggressively taking these measures,” Golbeck said. In recent months, several suitors — including Google and Disney — are said to have considered buying Twitter, but some were apparently put off by the hate speech and abuse that the site allowed.
“How do you know something is hate speech or fake news? Once you start restricting some of these things, it’s a slippery slope and you open yourself to legal action.”–Jonah Berger
In addition to its recent moves to combat abuse, Twitter has also been experimenting with another method — “the idea of speaker identity being a marker of credibility,” said Matwyshyn. In such a system, the platform doesn’t filter out ideas, but gives credibility or trust ratings to speakers, she added. That idea could be extrapolated in some ways to other platforms, she noted.
According to Golbeck, the trustworthiness that Matwyshyn referred to could be built into Google’s search results, too. “The way Google traditionally does that is that every time somebody links to your page, it’s a vote that you are trustworthy,” she said, adding that this is no longer necessarily true in the present circumstances.
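Golbeck’s description matches the intuition behind link-analysis algorithms such as PageRank: each inbound link is a vote, and each page splits its own score among the pages it links to. Below is a minimal power-iteration sketch in Python; the link graph, damping factor and domain names are illustrative assumptions, not Google’s actual ranking system.

```python
# Minimal power-iteration sketch of link-based trust scoring, in the
# spirit of PageRank. The graph, damping factor and iteration count
# are illustrative assumptions, not Google's real algorithm.

def trust_scores(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    scores = {page: 1.0 / n for page in pages}
    for _ in range(iterations):
        new = {page: (1 - damping) / n for page in pages}
        for page, targets in links.items():
            if targets:
                share = damping * scores[page] / len(targets)
                for target in targets:  # each outlink passes on a share
                    new[target] += share
        scores = new
    return scores

# Hypothetical graph: no reputable page links to the fake-news site,
# so it ends up with a low score no matter how much it links out.
graph = {
    "reputable.example": ["archive.example"],
    "archive.example":   ["reputable.example"],
    "fakenews.example":  ["reputable.example", "archive.example"],
}
print(trust_scores(graph))
```

Golbeck’s caveat is visible even in this toy model: the votes only mean something if the linking pages are themselves trustworthy, and link farms can manufacture exactly those votes, which is why links-as-votes "is not necessarily true anymore."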
Why Not Earlier?
Did the social media companies wait too long to act on fake news? Werbach noted that Facebook and other advertising-based sites have “a business model that rewards activity rather than quality or accuracy.” That could present a conflict of interest in how far they would want to go in battling fake news or offensive speech. “The fact that Facebook and Google waited until the election was over — and the flow of advertising revenue from fake news sites subsided — before taking action is pretty damning,” he said.
Others want to give the companies the benefit of the doubt. “I think Twitter’s leadership has a speech-protective stance, even when the speech is unpleasant or problematic — their default position historically has been encouraging free exchange of content,” said Matwyshyn. “This [latest move] is a shift in its own corporate thinking with respect to the balance of unfettered information exchange versus creating a more curated and respectful environment on its platform.”
“As an outsider it’s easy to say they should have acted sooner,” said Berger. “Personally, I wish they had, particularly for hate speech or fake news.” However, he added that since social media networks want to support free speech, it is tough to draw the line on that aspect. “How do you know something is hate speech or fake news?” he asked. “Once you start restricting some of these things, it’s a slippery slope and you open yourself to legal action.”
The Treadmill of Fighting Abuse
“This is constantly a battle that search engines face,” said Golbeck. “They find a way to rank what’s good, and people who want to get in bad stuff … find a way to game the system. Google’s task now is to flag fake news content that shows up high on search results and then downgrade it.”
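One way to read Golbeck’s point about flagging and downgrading is as a re-ranking pass applied after normal relevance scoring. A hedged sketch follows, where the flag list, penalty factor and result format are assumptions for illustration rather than anything Google has disclosed.

```python
# Illustrative re-ranking pass: results from flagged domains keep only
# a fraction of their relevance score before the final sort. The flag
# list and penalty value are hypothetical.

FLAGGED_DOMAINS = {"fakenews.example", "clickfarm.example"}
PENALTY = 0.1  # flagged results retain 10% of their score

def downrank(results):
    """results is a list of (domain, relevance_score) pairs."""
    adjusted = [
        (domain, score * PENALTY if domain in FLAGGED_DOMAINS else score)
        for domain, score in results
    ]
    return sorted(adjusted, key=lambda pair: pair[1], reverse=True)

hits = [("fakenews.example", 0.95), ("reputable.example", 0.60)]
print(downrank(hits))  # the reputable result now ranks first
```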
Yildirim pointed to another downside. She said she is worried about “what these new tools will do to online segregation of individuals” and how they could help create “echo chambers.”
“The new tools … are likely to create stronger echo chambers and may result in online segregation between individuals of opposing political opinions.”–Pinar Yildirim
For example, an individual with extreme right-wing leanings may block out content from supporters of the Democratic Party, and get exposed to potentially biased views, Yildirim said. “He will continuously read about how bad Obamacare is, how high taxes are, that manufacturing jobs can be brought back to the U.S., etc.,” she explained. “He will come to believe that everyone else is thinking like he does, and he may question some of the false beliefs less often.”
Yildirim said media companies had taken some measures to filter offensive content before the current controversy, but those were not sufficient. Those measures typically involve an initial algorithm that searches for and automatically deletes certain comments, followed by manual checks from staff dedicated to moderating conversations, she explained.
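That two-stage pipeline, automated deletion first and human review after, can be sketched in a few lines. The blocklist, threshold and scoring heuristic below are placeholders standing in for a real abuse classifier, not any platform’s actual system.

```python
# Two-stage moderation sketch: a crude automated filter deletes clear
# violations and routes borderline comments to human moderators.
# BLOCKED_TERMS and REVIEW_THRESHOLD are illustrative placeholders.

BLOCKED_TERMS = {"blockedterm1", "blockedterm2"}
REVIEW_THRESHOLD = 0.5  # assumed cutoff for automatic deletion

def risk_score(comment):
    """Toy heuristic standing in for a trained abuse classifier."""
    words = comment.lower().split()
    return sum(1 for word in words if word in BLOCKED_TERMS) / max(len(words), 1)

def moderate(comments):
    deleted, review_queue, published = [], [], []
    for comment in comments:
        score = risk_score(comment)
        if score >= REVIEW_THRESHOLD:
            deleted.append(comment)       # stage 1: automatic deletion
        elif score > 0:
            review_queue.append(comment)  # stage 2: humans decide
        else:
            published.append(comment)
    return deleted, review_queue, published
```

The split Yildirim describes is a deliberate design choice: machines handle volume and the obvious cases, while expensive human judgment is reserved for the ambiguous middle.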
The Reach of Fake News
The damage fake news can create is of course enormous. A recent New York Times story detailed one such post. Eric Tucker, a co-founder of a marketing company in Austin, Texas, tweeted that paid protesters were being bused to demonstrations against Donald Trump. He had not verified the claim, yet his post was shared at least 16,000 times on Twitter and more than 350,000 times on Facebook. “[It] fueled a nationwide conspiracy theory — one that Mr. Trump joined in promoting,” the Times wrote.
“The steps [Twitter] is taking now are merely a run-of-the-mill contract enforcement situation.”–Andrea Matwyshyn
“Nobody fact-checks anything anymore — I mean, that’s how Trump got elected,” Paul Horner, who claims to be responsible for many fake news postings, said in an interview with the Washington Post. “I think Trump is in the White House because of me.”
Those behind fake news might also be compromising users’ browsers and devices in order to hook them up to a botnet or for some other nefarious purpose, Matwyshyn cautioned. “Part of the battle here is also to protect users from invisible security harm, in addition to the content question.” She recalled the recent case of hackers conscripting IoT-enabled household devices like webcams into a botnet that took down Twitter and a few other popular sites for the better part of a day.
Miles to Go
Addressing fake news and other forms of gaming social media platforms will be a multi-faceted process, said Werbach, adding that no silver bullet is in sight. “Doing more will likely require a combination of human oversight, algorithms designed to weed out falsehood or abuse, and tools to further empower users,” he added.
Matwyshyn said the Federal Trade Commission, where she has served as a senior policy advisor, has not thus far looked into such content filtering. First there will be an evolution in the private sector, she noted, before the government gets more involved to thwart security threats. “An overly aggressive government response is premature at this point.”
In any event, social media networks do not have the option of doing nothing about fake news. “At some point, if it doesn’t recognize the damaging effects of fake news, Facebook will see a backlash from both users and regulators,” warned Werbach. “The only reason these companies aren’t legally liable for the false and malicious content their users upload is that Congress included a provision in its 1996 attempt to ban indecent material online to protect ‘good Samaritan’ companies and incentivize the growth of online platforms.”
As the social media platforms try to cut down on harassment and fake news, they will be “criticized intensively by conservatives,” Werbach predicted. Indeed, he noted that several independent assessments have found that fake news was much more prevalent and influential on the pro-Trump side. “I hope the social media platforms have the courage to stand firm,” he said. “They need to be open about what they are doing, and willing to evolve their techniques based on results.”