The White House has launched an online form soliciting stories from people who believe social media platforms have censored them for political reasons. The questionnaire, which asks for each responder’s name, contact information, citizenship status, and information about the alleged censorship, does not specify what the administration plans to do with the information but signals the government’s increasing interest in social media moderation practices.
The official White House Twitter account posted a link to the form Wednesday. “The Trump Administration is fighting for free speech online,” the tweet said. “No matter your views, if you suspect political bias has caused you to be censored or silenced online, we want to hear about it!”
Earlier this month, President Donald Trump criticized Facebook for designating controversial personalities like Milo Yiannopoulos and Alex Jones as dangerous and banning them from the platform.
Users seem to agree social media networks need more accountability, but the idea of the government regulating online forums doesn’t sit well. As private companies, Facebook, Twitter, and others are not bound by the First Amendment to the U.S. Constitution, which restricts only government action, but being removed from a social media platform significantly limits an individual’s ability to exercise his or her free speech rights.
Up until now, the default solution has been for social media companies to regulate their own content in a limited fashion. U.S. law shields companies from liability for what users post on their platforms, leaving moderators free to decide for themselves what content stays and what goes.
Since the rules were constructed piecemeal over time on various platforms, they are often inconsistently applied, according to the Electronic Frontier Foundation (EFF). Judgments about how offensive is too offensive are necessarily subjective, and individuals often make difficult calls in real time, resulting in biased enforcement of platform rules. Ryan Radia, senior policy counsel at the Lincoln Network, told me there is a “troubling lack of transparency” about how companies make moderation decisions, and political opinions are “affecting that decision-making process in ways we don’t fully understand.”
Last week, Facebook blocked a post by Trump 2020 advisory board member Jenna Ellis Rives, and when she appealed, the social media giant said the post violated community standards, calling it “hate speech.” Rives’ post included a tweet by conservative blogger Matt Walsh that pointed out inconsistencies between feminism and pro-transgender or pro-abortion arguments. The same week, Twitter temporarily suspended the account of University of Toronto psychologist Ray Blanchard, a well-known researcher of transgenderism, for a thread about his clinical approach to gender dysphoria. Twitter ultimately said it “made an error,” but the suspension revealed how broadly the company is willing to define hate speech.
While some say moderation often goes too far, others believe it doesn’t go far enough. The argument for increased government involvement in content moderation on social media platforms reached an especially emotional pitch after the shooter at two Christchurch, New Zealand, mosques livestreamed the attack on Facebook. The same week the White House unveiled the censorship questionnaire, French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern held a meeting in Paris to launch the “Christchurch Call,” a push to remove violent extremist content from social media. The Trump administration declined to sign the pledge. Leaders from multiple countries attended the meeting, and the United Kingdom has already unveiled a plan to fine social media executives for failing to block objectionable violent content.
Radia cautioned against the U.S. government getting too involved in social media companies’ internal policies.
“The challenge is ensuring a vibrant marketplace of ideas without denying companies their own First Amendment right not to carry information that they find objectionable,” he said. “The government, including the White House, needs to be very careful about avoiding pressuring these companies and using the bully pulpit of the White House to essentially force them to act through the threat of adverse action if they don’t comply.”
The EFF warned against putting too much stock in the ability of companies, the government, or any other body, to police themselves: “We shouldn’t look to Silicon Valley, or anyone else, to be international speech police for practical as much as political reasons. Content moderation is extremely difficult to get right, and at the scale at which some companies are operating, it may be impossible.”
Instead, Radia said he thinks the current solution, while imperfect, may be the best option going forward: allowing the free market to protect consumers. If companies have advertised their platforms as places that allow the free exchange of ideas, then they have an obligation to live up to that promise. Individuals need to make informed decisions about where they spend an increasingly valued commodity—their attention. If users find that a given company’s moderation rules aren’t working, they can leave the platform and find one they think does a better job.