Ongoing complaints about misinformation and hate speech on the internet are forcing social-media companies to confront whether they need to take more responsibility for the content on their sites.

Twitter Inc. on Tuesday said it would let users block notifications of tweets that include specific words, among other moves, in an effort to combat harassment on the short-messaging service.

On Monday, Facebook Inc. said it would bar websites that post fabricated or misleading news articles from using its ad-selling tools. But it is unclear how Facebook will identify those sites, and their articles might still appear in the far more heavily trafficked news feed, a source of news for 44% of Americans, according to the Pew Research Center.

Both the Twitter and Facebook moves may fail to address many users' concerns. They show technology companies that have grown into powerful media voices struggling to steer between two extremes: serving as havens for misinformation and acting as censors of free speech.

Concerns about false news stories on Facebook intensified during the recent presidential election campaign after erroneous claims were shared widely on the network, such as reports that Pope Francis had endorsed Donald Trump and that the Clinton Foundation bought $137 million in illegal arms.

Some critics say the social-media sites should do more to promote accuracy and civil discourse. But the companies are wary of prescribing what their users should read or how they should act.

Facebook Chief Executive Mark Zuckerberg in a Facebook post on Saturday played down the impact of fake news, while also saying that his company is developing tools to curb it, including one that would allow users to flag news that they believe is fake.

But Syracuse University communications professor Jennifer Grygiel, who studies social media, said relying on users is inadequate. Instead, she said Facebook should hire more workers to review widely shared articles and remove those that are false.

"What he needs to do is hire more humans instead of pushing (the responsibility) onto the end user," Ms. Grygiel said. "Know how much the community is trained in identifying fake news? Zilch."

In his Saturday post, Mr. Zuckerberg said Facebook won't try to separate fact from fiction, because defining the truth is complicated. "We must be extremely cautious about becoming arbiters of truth ourselves," he wrote.

Karen North, director of the social-media program at the University of Southern California, agreed.

"Do you really want Facebook and Twitter deciding what you can talk about?" she asked. "It's a slippery slope and these companies already have massive control over what we see and what we don't."

Facebook has strained to appear objective, particularly after reports in May that politically motivated workers had suppressed conservative news stories in its "trending topics" feature.

Executives have been uneasy about taking steps that suggest Facebook is restricting free speech, current and former employees say. That has stirred dissent within the company, with some employees urging Facebook to do more to weed out misinformation, according to two people familiar with the matter. They said the topic was discussed during an all-hands meeting Thursday with the 32-year-old CEO.

Google parent Alphabet Inc. had largely avoided the controversy around internet propaganda, because it doesn't operate a thriving social network and because its search engine rewards websites that are linked to by established sites.

Still, the company was pulled into the debate on Sunday when a post from a little-known right-wing blog erroneously stating that Mr. Trump defeated Hillary Clinton in the popular vote appeared atop the Google search results for several election-related queries. Mrs. Clinton is leading by almost 700,000 votes in the Journal's tabulation.

"In this case we clearly didn't get it right, but we are continually working to improve our algorithms," a Google spokeswoman said in an email.

On Monday, shortly before Facebook's similar announcement, Google said it would ban fake-news websites from using its ad-selling system, likely hurting those sites' revenue. Google's AdSense program, which helps website operators sell ads on their sites, is the most popular way to monetize websites and has helped fund many propaganda sites. Google pulled AdSense from several sites on Monday.

Twitter, meanwhile, has long grappled with complaints that some users repeatedly post abusive and harassing messages. The moves announced Tuesday include a feature that lets users block notifications of tweets that include specific words or phrases. Users will still see such tweets on Twitter's website and app.

When flagging problem tweets, users will be able to note that the messages include hate speech or "targeted harassment." Users can also now block specific conversations between other users that include them.

Del Harvey, Twitter's vice president of trust and safety, acknowledged the steps are "not going to solve the problem of abuse on Twitter."

Ms. North, the USC professor, said the burden to report abuse falls largely on the victim. "While they keep making these small steps…there's still no major consequences for abusing anyone on Twitter," she said.

Write to Deepa Seetharaman at Deepa.Seetharaman@wsj.com, Jack Nicas at jack.nicas@wsj.com and Nathan Olivarez-Giles at Nathan.Olivarez-giles@wsj.com

(END) Dow Jones Newswires

November 15, 2016 16:05 ET (21:05 GMT)

Copyright (c) 2016 Dow Jones & Company, Inc.