During the first year of the pandemic, Facebook pages associated with the World Doctors Alliance -- an anti-vaccine group whose members regularly post false information about COVID-19 -- ballooned in popularity, according to ISD Global, despite consistent breaches of Facebook's own COVID-19 and vaccine policies. The group's primary page was removed from the platform in July 2021.
\"It's not about new rules, it's about enforcing the ones they already have,\" said Gallagher. \"Facebook's policies on COVID-19 and vaccine misinformation go into a lot of detail on what content is prohibited and removed, but this report shows they are failing to implement these at a very basic level,\" she added.
Facebook also said, \"Since the pandemic began, our goal has been to promote reliable information about COVID-19, take more aggressive action against misinformation, and encourage people to get vaccinated. So far, we've connected over 2 billion people to authoritative information from health experts, removed 24 million pieces of COVID misinformation, and labeled more than 195 million pieces of COVID content rated by our fact-checking partners.\"
Nick Clegg, Facebook's vice president of global affairs and communications, said he could not quantify the volume of misinformation around the virus, but that the company would remove coronavirus-related misinformation that has the potential to cause physical harm.
In its announcement of the expanded policy, Facebook stated: "The full list of claims is available here, and we already prohibit these claims in ads. These new policies will help us continue to take aggressive action against misinformation about COVID-19 and vaccines."
The company also said: "To make it easier for people to find accurate information about COVID-19, we recently added a new section to our COVID-19 Information Center called Get the Facts. It includes fact-checked articles from our partners that debunk misinformation about the coronavirus. The fact-checked articles are selected by our news curation team and updated every week. This is now available in the US, and we will soon add it to Facebook News in the US as well."
Facebook said it will try to limit the spread of false and misleading information about the coronavirus on its Facebook and Instagram platforms after the World Health Organization declared the virus a public-health emergency.
In a shift from its policy of allowing false or misleading political statements to remain on its site, Facebook said it will remove content that "could cause harm to people." For example, the company said it will focus on removing misinformation that discourages people from seeking treatment or promotes false cures or prevention methods, such as drinking bleach to cure the virus.
When people search for information about the virus on Facebook or click on a related hashtag on Instagram, Facebook said it will show an "educational pop-up with credible information." The company is also providing free advertising for organizations to run "coronavirus education campaigns" on both Facebook and Instagram.
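Facebook has not published the trigger logic behind these pop-ups. Purely as an illustration, the Python sketch below maps virus-related queries to an educational pop-up payload; the term list, the payload fields, and the WHO link are assumptions, not the platform's actual implementation.

```python
# Hypothetical sketch of triggering an educational pop-up for virus-related
# searches. The term list and payload are illustrative assumptions.
COVID_TERMS = {"coronavirus", "covid", "covid-19", "covid19"}

def interstitial_for(query: str) -> dict | None:
    """Return an educational pop-up payload for COVID-related queries."""
    tokens = {token.strip("#").lower() for token in query.split()}
    if tokens & COVID_TERMS:
        return {
            "title": "Looking for COVID-19 information?",
            "link": "https://www.who.int",  # point users to an authoritative source
        }
    return None

print(interstitial_for("#coronavirus symptoms"))  # pop-up payload
print(interstitial_for("weekend hiking trails"))  # None
```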
The action came about a month after the White House singled out Facebook, saying that about a dozen people were responsible for creating 65% of the vaccine misinformation on social media platforms -- all of whom remained active on the platform.
"In the middle of a pandemic, being honest and transparent about the work that needs to be done to protect public health is absolutely vital, but Facebook still refuses to be straightforward about how much misinformation is circulating -- and being actively promoted -- on their platform," a White House spokesperson told CNN Business on Wednesday.
Facebook said it has more than 65 criteria for false claims about COVID-19 and vaccines that would prompt it to remove posts from its platforms. The company has recently added to this list false claims that COVID-19 vaccines cause Alzheimer's and that being around vaccinated people can cause secondary side effects.
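Facebook has not disclosed how posts are matched against these criteria. Purely as an illustration, the minimal sketch below screens post text against a small, hypothetical subset of prohibited-claim patterns; the `Post` structure, the regular expressions, and the removal decision are all assumptions rather than the company's method.

```python
# Minimal sketch of screening posts against a list of prohibited claims.
# The patterns below are a hypothetical subset; Facebook's real list
# reportedly contains more than 65 criteria.
import re
from dataclasses import dataclass

PROHIBITED_CLAIM_PATTERNS = [
    re.compile(r"vaccines?\s+cause\s+alzheimer", re.IGNORECASE),
    re.compile(r"side\s+effects.*around\s+vaccinated\s+people", re.IGNORECASE),
    re.compile(r"drink(ing)?\s+bleach.*cure", re.IGNORECASE),
]

@dataclass
class Post:
    post_id: str
    text: str

def should_remove(post: Post) -> bool:
    """Return True if the post matches any prohibited-claim pattern."""
    return any(p.search(post.text) for p in PROHIBITED_CLAIM_PATTERNS)

posts = [
    Post("1", "Study proves COVID vaccines cause Alzheimer's!"),
    Post("2", "Book your booster appointment at the local clinic."),
]
for post in posts:
    print(post.post_id, "remove" if should_remove(post) else "keep")
```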
One study found that during the coronavirus pandemic, social media platforms failed to act on 95 percent of coronavirus-related disinformation reported to them. In addition, vaccine hesitancy remains a major threat to public health. A recent poll found that two-thirds of unvaccinated people believe common myths about the coronavirus vaccine.
Following a live-streamed interview on 6 April with conspiracy theorist David Icke, in which he linked 5G technology to the pandemic, YouTube announced that it would ban conspiracy theory videos falsely linking coronavirus symptoms to 5G networks. We traced the diffusion and moderation enforcement of one such video. It was indeed removed from YouTube, but we observed that content moderation was inconsistent across platforms and functions, from the mobile to the desktop version, as well as when content was shared from one platform to another.
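For context on how such cross-surface checks can be automated, the sketch below probes whether a video URL is still reachable from different surfaces. The endpoints are placeholders, not the ones used in our analysis, and real checks must handle soft removals (an HTTP 200 response carrying an "unavailable" notice), as this sketch crudely does.

```python
# Rough sketch of probing whether a moderated video remains reachable across
# platform surfaces. The URLs are placeholders, not the study's endpoints.
import urllib.error
import urllib.request

SURFACES = {
    "desktop": "https://www.youtube.com/watch?v=VIDEO_ID",
    "mobile": "https://m.youtube.com/watch?v=VIDEO_ID",
    "cross_post": "https://example.com/embed/VIDEO_ID",
}

def probe(url: str) -> str:
    """Fetch the URL and report a coarse availability status."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read(4096).decode("utf-8", errors="replace")
            # Removed videos often return HTTP 200 with an "unavailable"
            # notice, so the status code alone is not conclusive.
            return "removed-notice" if "unavailable" in body.lower() else "reachable"
    except urllib.error.HTTPError as err:
        return f"http-{err.code}"
    except urllib.error.URLError:
        return "network-error"

for surface, url in SURFACES.items():
    print(surface, probe(url))
```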
On 24 April, Twitter announced that it would remove tweets or trends about #COVID19 that include a call to action with the potential to cause harm. For example, the trends #InjectDisinfectant and #InjectingDisinfectant were subsequently blocked.
Google and Facebook are attempting to stay ahead of coronavirus misinformation and to provide users with helpful, accurate resources. Today, Facebook announced plans to flag and remove false information, and Google has created an SOS Alert for coronavirus searches.
Descriptions of the responses of many SMPs lacked clarity, and their implementation lacked transparency. Many SMPs did not clearly articulate the types of content prohibited. This might be partly attributable to the challenges inherent in moderating scientific content, which is dynamic and often equivocal (Baker et al., 2020). In the COVID-19 context, this has been reflected in evolving scientific positions regarding the nature of COVID-19 transmission (WHO, 2020) and lack of consensus on boosters (Krause et al., 2021), to give just a few examples. Challenges notwithstanding, clearly outlining prohibited content is important. It can help researchers assess whether the types of content SMPs prohibit correspond with the types of content associated with real-world harm. Such evidence could guide SMPs on prioritizing and updating the types of content they prohibit. For instance, Facebook initially did not prohibit personal anecdotes or first-person accounts, but evidence showed that this type of content might be contributing to vaccine hesitancy (Dwoskin, 2021). Facebook has since updated its policy on content related to COVID-19 vaccines and now reduces the distribution of alarmist or sensationalist content about vaccines. Additionally, few platforms have a specific COVID-19 misinformation policy. During an evolving emergency such as a pandemic, such a policy could make it possible for platforms to communicate their stance on various types of content, changes to their responses, and the consequences of violating policies in a transparent and accessible manner.
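Facebook's "reduced distribution" response mentioned above is not publicly specified. One common way to implement demotion rather than removal is a multiplicative penalty in feed ranking, as in the purely hypothetical sketch below; the scoring model and demotion factor are assumptions for illustration only.

```python
# Illustrative sketch of "reduced distribution": demote flagged content in
# feed ranking instead of removing it. Scoring model and factor are assumed.
def adjusted_rank_score(base_score: float,
                        flagged_sensationalist: bool,
                        demotion_factor: float = 0.2) -> float:
    """Apply a multiplicative ranking penalty to flagged content."""
    return base_score * demotion_factor if flagged_sensationalist else base_score

print(adjusted_rank_score(1.0, flagged_sensationalist=True))   # 0.2
print(adjusted_rank_score(1.0, flagged_sensationalist=False))  # 1.0
```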
Because of end-to-end encryption and the manner in which information is shared on messaging platforms, these platforms adopted different strategies for dealing with misinformation than traditional social media platforms. While Messenger and WhatsApp did not explicitly state that they prohibited COVID-19 misinformation, they implemented some actions to limit its spread. These included attaching forwarding labels to messages that did not originate with the sender and introducing forwarding limits to reduce the spread of viral messages. Additionally, both platforms collaborated with the WHO to provide users with accurate and timely information about COVID-19 via free messaging. WhatsApp also collaborated with external fact-checking organizations and used advanced machine learning approaches to identify and ban accounts engaged in mass messaging. It was unclear whether Facebook account bans also applied to Messenger.
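As a rough illustration of the forwarding labels and limits described above, the sketch below applies thresholds that mirror publicly reported WhatsApp behavior (a "Forwarded many times" label and a one-chat limit for highly forwarded messages); the data model and exact numbers are illustrative assumptions rather than WhatsApp's implementation.

```python
# Minimal sketch of forwarding labels and forwarding limits. Thresholds
# mirror publicly reported WhatsApp behavior; the data model is assumed.
from dataclasses import dataclass

HIGHLY_FORWARDED_THRESHOLD = 5   # forward hops before the strong label applies
HIGHLY_FORWARDED_CHAT_LIMIT = 1  # chats a highly forwarded message may reach

@dataclass
class Message:
    text: str
    forward_count: int = 0

    @property
    def label(self) -> str | None:
        """Label shown to recipients based on how far the message traveled."""
        if self.forward_count >= HIGHLY_FORWARDED_THRESHOLD:
            return "Forwarded many times"
        if self.forward_count > 0:
            return "Forwarded"
        return None

def forward(message: Message, target_chats: list[str]) -> list[str]:
    """Forward a message, enforcing the chat limit for highly forwarded content."""
    if message.forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        target_chats = target_chats[:HIGHLY_FORWARDED_CHAT_LIMIT]
    message.forward_count += 1
    return target_chats

msg = Message("Viral claim", forward_count=5)
print(msg.label)                      # Forwarded many times
print(forward(msg, ["a", "b", "c"]))  # ['a'] -- limited to one chat
```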