Twitter May Fail to Fight Election Misinformation, Voting Rights Experts Say


Twitter on Thursday set out a plan to combat the spread of election misinformation that revives previous strategies, but civil and voting rights experts said it would fall short of what is needed to prepare for the upcoming US midterm elections.

The social media company said it will apply its civic integrity policy, introduced in 2018, to the November 8 midterms, when numerous US Senate and House of Representatives seats will be up for election. The policy relies on labelling or removing posts with misleading content, focusing on messages intended to discourage voting or claims meant to undermine public confidence in an election.

In a statement, Twitter said it has taken numerous steps in recent months to “elevate reliable resources” about primaries and voting processes. Applying a label to a tweet also means the content is not recommended or distributed to more users.

The San Francisco-based company is currently in a legal battle with billionaire Elon Musk over his attempt to walk away from his $44 billion (roughly Rs. 3.5 lakh crore) deal to acquire Twitter.

Musk has called himself a “free speech absolutist,” and has said Twitter posts should only be removed if there is illegal content, a view supported by many in the tech industry.

But civil rights and online misinformation experts have long accused social media and tech platforms of not doing enough to prevent the spread of false content, including the idea that President Joe Biden did not win the 2020 election.

They warn that misinformation could be an even greater challenge this year, as candidates who question the 2020 election are running for office, and divisive rhetoric is spreading following an FBI search of former President Donald Trump’s Florida home earlier this week.

“We’re seeing the same patterns playing out,” said Evan Feeney, deputy senior campaign director at Color of Change, which advocates for the rights of Black Americans.

In a blog post announcing the plan, Twitter said a test of redesigned labels led to a decline in users retweeting, liking, and replying to misleading content.

Researchers say Twitter and other platforms have a spotty record in consistently labelling such content.

In a paper published last month, Stanford University researchers examined a sample of posts on Twitter and Meta’s Facebook that together contained 78 misleading claims about the 2020 election. They found that the two platforms consistently applied labels to only about 70 percent of those claims.

Twitter’s efforts to fight misinformation during the midterms will include information prompts to debunk falsehoods before they spread widely online.

More emphasis should be placed on removing false and misleading posts, said Yosef Getachew, media and democracy program director at nonpartisan group Common Cause.

“Pointing them to other sources isn’t enough,” he said.

Experts also questioned Twitter’s practice of leaving up some tweets from world leaders in the name of public interest.

“Twitter has a responsibility and ability to stop misinformation at the source,” Feeney said, adding that world leaders and politicians should face a higher standard for what they tweet.

Twitter leads the industry in releasing data on how its efforts to intervene against misinformation are working, said Evelyn Douek, an assistant professor at Stanford Law School who studies online speech regulation.

Yet more than a year after soliciting public input on what the company should do when a world leader violates its rules, Twitter has not provided an update, she said.

© Thomson Reuters 2022

