January 1, 2022, not only ushered in the new year but also marked the 39th anniversary of the creation of the internet. And with Myspace and Facebook launching in the early 2000s, it’ll soon be 20 years since social media became a part of everyday life for the majority of Americans.
But the internet’s ever-present ability to connect people also carries countless voices shouting competing facts and stories. And that, perhaps, is one of the biggest consequences of opening this virtual Pandora’s box: the rise of fake news and misinformation.
The fact-checking movement has sought to resolve this epistemological crisis, but its recent efforts have been far from perfect, and some of its methods, especially on social media platforms, seem counterproductive.
Fake news and the necessity of fact-checkers
Fake news is not a new phenomenon, but it became particularly pervasive during the 2016 Clinton-Trump presidential campaign. For example, a headline that circulated amid the Syrian refugee crisis of 2015 read, “Donald Trump Introduces ‘Nazi-Like’ Plan Requiring All Muslims & Refugees To Wear Badges.”
Despite gaining traction on the internet, the story was fake. Paul Horner, one of the most prolific creators of fake news at the time, had conceived the idea and published it online. For Horner, writing phony stories was a lucrative business. In fact, around 2016, he was making $10,000 a month just by publishing fake news.
Horner, and other fake news creators like him, posed a serious threat to society’s relationship with reality. Obama even spoke of such dangers during a 2016 press conference.
Speaking to how sophisticated fake news had become, he warned, “if everything seems to be the same and no distinctions are made, then we won’t know what to protect. We won’t know what to fight for.”
“And we can lose so much of what we’ve gained in terms of the kind of democratic freedoms,” Obama continued, “and market-based economies and prosperity that we’ve come to take for granted.”
By 2019, U.S. citizens began to recognize the dangers of fake news too. In fact, 50% of Americans stated that made-up news and misinformation was a “very big problem in the country,” according to Pew Research.
In response to this ubiquitous issue, fact-checking entities—websites like politifact.com, factcheck.org, and snopes.com—became increasingly popular. The initial fact-checking concept seemed foolproof, too: create teams of researchers dedicated to consulting experts in various fields to find the truth of a matter.
Social media censorship
An extension of the fact-checking movement has been social media censorship. While the merits of censoring factually incorrect information on these platforms are debatable, censoring posts that are factually accurate, or at least plausible, is concerning.
Most notably, the narrative surrounding COVID-19’s “Lab Leak Conspiracy Theory” forced fact-checkers to make an about-face last year, after having already censored social media posts suggesting the virus could have been made in a lab.
But the Lab Leak debacle is not fact-checkers’ only misstep. As third-party fact-checkers seek to help social media platforms censor misinformation and disinformation, they have given users substantial reasons to question their validity.
Facebook admits its fact-checks are “opinions”
Near the end of 2021, veteran TV journalist John Stossel sued Facebook for alleged defamation. This came after Climate Feedback, one of Facebook’s third-party fact-checkers, flagged two of Stossel’s videos as misinformation.
Stossel’s video titled “Government Fueled Fires” was labeled as misleading on the platform. Facebook’s caption on the video read, “Missing Context. Independent fact-checkers say this information could mislead people.”
Following the “See Why” link takes users to Climate Feedback’s “Verdict Page,” where the fact-checkers explain that wildfires in the U.S. are “influenced by a variety of factors, including weather conditions, climate change, past fire suppression practices, and an increase in the number of people living near wildlands.”
While the statement scientifically checks out, Stossel’s video never made a claim to the contrary. As noted in Stossel’s lawsuit, “the scientific conclusions of Climate Feedback’s ‘Key Take Away’ and Stossel’s Fire Video are substantively identical: they both assert that climate change and land management practices are each causes of forest fires” (emphasis added).
Facebook’s response to Stossel’s suit largely shifted the blame to third-party fact-checkers. But an interesting portion of its argument states, “Stossel’s claims focus on the fact-check articles written by Climate Feedback, not the labels affixed through the Facebook platform. The labels themselves are neither false nor defamatory; to the contrary, they constitute protected opinion” (emphasis added).
Facebook’s admission that its misinformation labels constitute opinion seems to run counter to fact-checking’s fundamental purpose.
Stossel censored because of his tone
Even more concerning, Stossel’s “Are We Doomed?” video about climate change was flagged as “Partly False” because of what it implied. This came to light when Stossel reached out to Patrick Brown, an assistant professor at San Jose State University and one of the scientists who fact-checked the piece.
After claiming that Stossel “downplayed” the reality of climate change, which Stossel vehemently denies doing, Brown said, “It’s a tonal thing, I guess.”
Twitter’s fact-checking flounders
Twitter has also made attempts to filter misinformation on its platform over the years. And as is the case for most fact-checking efforts on social media, Twitter has a heightened focus on COVID-19-related misinformation.
In its COVID-19 Misleading Information Policy, Twitter states that in order for a tweet to violate its policy—and consequently be censored—it must “advance a claim of fact, expressed in definitive terms; be demonstrably false or misleading, based on widely available, authoritative sources; and be likely to impact public safety or cause serious harm.”
In a related instance, Dr. Li-Meng Yan, a former researcher at the University of Hong Kong’s School of Public Health, was suspended from Twitter in September 2020 for suggesting that the Chinese Government covered up evidence that the virus had leaked from a lab.
However, it is unclear how Dr. Yan’s tweet violated the platform’s policy (which was last updated in December 2021) as the policy makes no mention of the virus’s origins, potential creation, the inclusion of a lab, or the Chinese Government. Additionally, Twitter declined to comment on the matter when contacted by the New York Post.
Yet just five days later, the House Foreign Affairs Committee, in its first investigative report on the virus’s origins, echoed Dr. Yan’s claim. “It is beyond doubt that the CCP [Chinese Communist Party] actively engaged in a cover-up designed to obfuscate data, hide relevant public health information, and suppress doctors and journalists who attempted to warn the world,” the report states.
Twitter’s new collaboration doesn’t scream “objectivity”
Twitter has since revised its fact-checking methods. On August 2, 2021, Twitter announced it would begin working closely with the Associated Press and Reuters for fact-checking purposes. According to Poynter, this marked “the first time Twitter has sought the counsel of professional fact-checkers in a bid to improve its information ecosystem.”
While the move demonstrated Twitter’s dedication to the facts, it may not win resounding approval from its users: James C. Smith, chairman of the Thomson Reuters Foundation, also sits on Pfizer’s board of directors. Smith even served as President, CEO, and a director of the Thomson Reuters Corporation until 2020.
Despite there being nothing illegal about it, the optics of Smith’s dual positions don’t signal “objectivity” to Twitter users—especially when it comes to fact-checking tweets about vaccines.
Instances like these indicate that fact-checkers are straying further and further from their central duty—opting to police tone and subvert plausible hypotheses instead of merely pursuing the facts. And if facts are susceptible to subjective interpretations of tone and opinion, how can social media users be expected to read the minds of these third-party fact-checkers?
The unfortunate fallout from these failures is the looming public distrust of social media fact-checkers as the Russian invasion of Ukraine rapidly unfolds. There have already been several instances of misattributed videos and misdated photos circulating across platforms since the conflict began.
If even the professional fact-checkers can’t be trusted to pursue and preserve the facts, how can Americans know “what to protect” and “what to fight for?”