Tech companies are taking unprecedented steps to deal with the flood of coronavirus-related misinformation sweeping across social media, but potentially dangerous myths and conspiracy theories are still making their way onto most major platforms. There’s a simple step that platforms could take to make it easier for users to flag this content for removal — but three months into the outbreak, they’re still not doing it.
Facebook, the world’s largest social network, doesn’t have an option for its 2.4 billion users to report harmful content related to COVID-19. Nor does Twitter, which is used as a news source by an estimated 71 per cent of people on the platform.
Both have special reporting options for election-related misinformation, but when asked why they hadn’t done the same thing for dangerous misinformation about COVID-19, neither company gave an explanation to National Observer.
In the face of our current misinformation crisis, experts say broad approaches that incorporate proactive measures as well as responsive and corrective actions are necessary to even make a dent. Adding an option to flag potentially harmful coronavirus content may not solve the problem on its own, but at a time when people are exploiting a crisis for cash and pushing misinformation that could carry deadly consequences, even the smallest steps to remove harmful medical advice and false claims about cures could save lives. And social media users want to help. With the addition of a coronavirus-specific reporting option, platforms could leverage the collective knowledge and observations of millions of users.
On Twitter, coronavirus-related tweets are posted every 45 milliseconds on average, according to the company. An estimated 15 million tweets about the virus were sent within the first four weeks of the outbreak, and the conversation has rapidly gained momentum since then.
On March 11 — the same day the World Health Organization designated the coronavirus outbreak as a pandemic — there were more than 19 million mentions of coronavirus across social media and news media platforms, outpacing every other major world event or issue by huge margins, according to the social media analytics firm Sprinklr. For comparison, climate change had fewer than 200,000 mentions during the same 24-hour period, and mentions of U.S. President Donald Trump numbered around 4 million.
What are the platforms doing to address coronavirus misinformation?
Since the start of the coronavirus outbreak in December, Facebook and Twitter — along with other major social media platforms such as YouTube and Pinterest — have taken much more proactive approaches to addressing misinformation than we’ve seen them take for any other issue, including foreign election interference.
The actions they’re taking fall into three main categories: promoting accurate information from reputable sources, removing harmful information, and stopping misinformation from making it online in the first place.
Facebook, for example, is “removing false claims and conspiracy theories that have been flagged by leading global health organizations,” a company spokesperson told National Observer. They’re also blocking exploitative ads that promise a cure for the virus, and they’ve placed a temporary ban on ads for medical face masks.
In addition to removing harmful content, Facebook has also modified its algorithm to promote authoritative information from trusted sources such as the World Health Organization (WHO). In Canada, Facebook users will see a pop-up in their News Feed directing them to the Public Health Agency of Canada (PHAC) website, and anyone who searches for coronavirus will see the same.
Three months into the worst pandemic in modern history, Twitter and Facebook users still don’t have an option to flag harmful #coronavirus misinformation for removal.
Twitter is taking similar steps. Users who search for common coronavirus-related hashtags will see posts from national and international health organizations pinned to the top of their search results, and Twitter says they’ve been “consistently monitoring the conversation […] to make sure keywords — including common misspellings — also generate the search prompt.” Opportunistic targeting of ads is prohibited, as are promoted posts that advertise medical face masks.
“In addition, we’re halting any auto-suggest results that are likely to direct individuals to non-credible content on Twitter,” the company said. “[O]ur goal is to elevate and amplify authoritative health information as far as possible.”
Both Twitter and Facebook are doing a commendable job of making accurate information about coronavirus more accessible. The platforms are also allowing agencies such as the WHO to disseminate public health information through promoted posts (at no cost) in order to maximize the reach of awareness campaigns, emergency alerts and updates. They’re also working with health organizations and nonprofits, including fact-checking agencies, to share potentially valuable data and debunk common myths and misconceptions.
But when it comes to identifying and removing potentially dangerous misinformation, there's still a lot of work to be done. Simply providing accurate information is necessary, but it is often not sufficient to counter the effects of the mis- and disinformation people see on social media, especially on health-related issues such as vaccines. The sheer volume of false, misleading and conspiratorial content can overwhelm users and make it difficult to distinguish authoritative sources from the rest of the crowd.
Why isn’t there a way to flag misinformation about coronavirus?
In the face of widespread criticism over social media’s role in amplifying disinformation during the 2016 election, Twitter and Facebook started rolling out new policies and features aimed at addressing harmful election-related content.
In Oct. 2018, Facebook introduced a special reporting option “so that people can let us know if they see voting information that may be incorrect” and/or may encourage voter suppression. Before then, users could still report election-related misinformation, but there was no way to automatically flag it as such.
Twitter rolled out a similar feature in 2019 that allows users to flag and report tweets that contain false or misleading information about elections, just like they can do for tweets that include harassment, abuse, spam, or threats of self-harm. The feature was first introduced in India last April during that country’s elections, and in the European Union ahead of its May 2019 elections, then finally in the U.S. in January 2020. Twitter’s terms of service already prohibited tweets that may mislead voters about an election, as well as tweets that intend to suppress turnout or intimidate someone from voting.
“This reporting flow has been an important aspect of our efforts since early 2019 to protect the health of the conversation for elections around the globe,” Carlos Monje Jr., Twitter’s director of public policy and philanthropy, said when the tool was introduced in the U.S.
Yet Twitter hasn’t offered a special option for reporting coronavirus misinformation to “protect the health of the conversation” amid a global pandemic, nor has Facebook.
While a reporting option wouldn’t suddenly fix the problem, it would give users an easy way to flag potentially dangerous coronavirus misinformation so it doesn’t get lost in the estimated 1 million user violation reports Facebook receives every day.
Asked how Facebook would respond to a user who questioned why they had created a special reporting option for election-related misinformation, but not coronavirus misinformation, a company spokesperson told National Observer:
“Everyone has a role to play in reporting misinformation and we encourage people to report content they think may be misleading. Right now we encourage people to use the reporting features that are currently available and we’re working as quickly as possible to offer helpful resources across our platforms.”
But the reporting features that are currently available don’t always work. I informally tested this out by reporting several misleading and false Facebook posts about coronavirus from an anti-vaccine page. Several days after I reported them, the posts — including one that suggests vitamin C and supplements are a cure — are still up.
Another post, which suggests inhaling steamed water to create a “hostile environment” for coronavirus, got flagged as false by fact-checkers, but was not removed from the website.
Facebook has long faced criticism for inconsistently enforcing its own rules. As TechCrunch put it last year: “Inconsistency of policy enforcement is Facebook’s DNA.”
The story is similar for Twitter. When asked why the company hasn’t created a tool for reporting coronavirus misinformation similar to the one they made for election-related misinformation, a spokesperson referred National Observer to a recent blog post on the platform, outlining what the company is doing to reduce coronavirus-related misinformation.
“At present, we’re not seeing significant coordinated platform manipulation efforts around these issues,” Twitter reported.
The platform has not reported numbers or analytics that could help quantify the scope of the coronavirus misinformation problem, nor has it publicly shared data reflecting the impact of its own response. National Observer asked if Twitter could provide any of these statistics, but the spokesperson did not respond to the question.
Twitter said it would continue working to strengthen its own “proactive abilities” to protect users from malicious behaviors, adding: “As always, we also welcome constructive and open information sharing from governments and academics to further our work in these areas — we’re in this together.”
National Observer reached out to the company with follow-up questions, but had not received a response as of press time.
Social media researcher Geoff Golberg, founder of the data analytics company Social Forensics, said he’s not surprised by the response from Twitter. When developing new rules and policies, social media companies often prioritize things that sound and look good, but aren’t necessarily the most impactful in practice, he said.
“Here’s the reality: we need effective solutions, not simply bells and whistles,” Golberg told National Observer.
For the same reason, he cautioned that although adding a specific reporting option for coronavirus misinformation would make it easier for users to flag harmful content, it wouldn’t guarantee that Twitter actually acts on those reports. “[W]hat really matters is what happens after that reporting option [is] selected,” he added.
There’s good reason to be concerned about that, too. Twitter is notoriously inconsistent in its enforcement of its own rules — a long-standing problem that human rights group Amnesty International said “creates a level of mistrust and lack of confidence in the company’s reporting process.”
Take, for instance, a series of tweets sent by Trump last week that suggested taking a drug combination that has not been approved by the Food and Drug Administration, and which doctors in China and the U.S. are warning could be deadly or could lead to a deadly shortage if people hoard the medication.
A short time after Trump posted the tweets, Nigerian health officials reported two cases of poisoning from the drug Trump recommended, and on Monday, one person in Arizona reportedly died and another was hospitalized after trying to fend off the virus by ingesting chloroquine phosphate — an additive often used to clean the inside of aquariums. The two individuals are believed to have confused the chemicals with chloroquine, the drug Trump promoted on Twitter.
Trump’s tweets have not been removed, even though they contradict Twitter’s latest policies around coronavirus misinformation. They also do not carry the warning label that Twitter said it would apply when world leaders post potentially harmful or dangerous messages. The warning label was touted as a mechanism to flag harmful content in cases where banning an account is not a viable option because it may go against the public interest.
Ultimately, Golberg said, “Twitter’s inability/unwillingness to enforce their own rules results in an information environment where nothing can be trusted” — at a time when trust in authoritative information could, quite literally, mean the difference between life and death.
If social media companies don’t want to add new features to make it easier to report and remove coronavirus misinformation, “enforcing their current rules — and in a consistent, non-selective fashion — would be a good place to start,” Golberg added.
Social media both a blessing and a curse during coronavirus pandemic – KitchenerToday.com
We are facing an unprecedented crisis of public understanding. Western digital corporations and social media platforms (Facebook, Twitter, YouTube, Instagram, Snapchat and Reddit) and their Chinese equivalents (WeChat, Weibo, Tencent and Toutiao) are at the heart of this crisis. These platforms act as facilitators and multipliers of COVID-19-related misinformation.
Tedros Adhanom Ghebreyesus, the director-general of the World Health Organization (WHO), noted that urgent measures must now be taken to address the “coronavirus infodemic.”
This infodemic compromises outbreak response and increases public confusion about who and what information sources to trust; generates fear and panic due to unverified rumours and exaggerated claims; and promotes xenophobic and racist forms of digital vigilantism and scapegoating.
Governments, public health authorities and digital corporations need to not only promote digital literacy, but also combat the ways in which social media may be spawning an irreversible post-truth age that persists even after the COVID-19 pandemic dissipates.
Misinformation during outbreaks
Misinformation has been pervasive in other recent large-scale outbreaks. In the 2018 elections in the Democratic Republic of Congo, suspicions were raised when the ruling government cancelled national elections in Ebola-affected areas, eliminating opposition votes.
Rumours are a second form of misinformation. One popular conspiracy theory held that the virus was developed as a means to wage a biological war against China. In China, a rumour spread that bioweapons research in a Wuhan laboratory resulted in the genetic engineering of COVID-19 that was then released. Such rumours may have even jeopardized the working relationship between Western scientists and their Chinese counterparts searching for a COVID-19 vaccine.
Untrue, exaggerated and dubious medical claims and hoaxes are other common forms of misinformation. Various unproven natural and traditional remedies were proffered as cures to both Ebola and COVID-19, such as drinks that contained mint and spices like saffron and turmeric that spread in Iran through Twitter.
Influencing outbreak outcomes
During times of emergency and disaster, urgent questions arise and require immediate response. The problem is that officials often cannot provide the accurate information that's required quickly enough.
A post-truth society is one in which subjective opinions and unverified claims rival valid scientific and biomedical facts in their public influence. The need for evidence to support reasoned arguments becomes downplayed, while at the same time, the social norm concerning how and why people should be held accountable for what they say is weakened.
Scientists and other experts ultimately lose social legitimacy and authority in the eyes of the public because what they bring to the table is no longer valued.
When complex emergencies arise, public officials are cautious about making premature pronouncements, instead carefully crafting statements to ensure accuracy and avoid the pitfalls of misinterpretation and exaggeration. Somewhat paradoxically, this careful approach may also contribute to the formation of an information vacuum that rumours and falsehoods are all too ready to fill.
In the digital age, the time needed to analyze, assess and communicate information cannot compete with the instantaneous spreading of misinformation on social media platforms.
The impact of social media misinformation may be even more pronounced because of confirmation bias, the tendency to accept statements that reinforce our established views and to downplay statements that counter these views.
Misinformation & xenophobia
Racist content spread through social media may reinforce already pre-existing biases and prejudices. Xenophobic reactions that emerged during the 2003 SARS outbreaks in Toronto, amongst other cities, are being repeated during the current COVID-19 pandemic.
What’s different now is how easily social media can fuel this behaviour. A particularly poignant illustration is a viral WeChat rumour that a particular Chinese restaurant in Canada employed someone with COVID-19 and that health officials had closed the restaurant. The restaurant lost 80 per cent of its revenue.
Social media also facilitates a form of prejudiced collective organizing that, similar to crowdsourcing, rapidly enlists a large number of people, yet does so on the basis of questionable claims and beliefs. An online petition compiled by 8,000 people north of Toronto demanded that the school board ban students whose family members had recently travelled to China from attending school.
The information vacuum
During the early stages of the 2003 SARS outbreak in China, people shared information about the outbreak through simple text messaging. Despite efforts by the government to not share information about the outbreak with the WHO, information about “atypical pneumonia” circulated widely.
With COVID-19, the Chinese state’s censorship of and control over online content created an information vacuum. Despite this, citizens have used social media to express veiled criticism of government mismanagement and lack of government accountability.
During the early stages of the outbreak, before the Chinese government was releasing any information, ophthalmologist Li Wenliang — a whistleblower for COVID-19 — posted messages on the spread of a SARS-like illness. As screenshots of his posts went viral, he was disciplined by local police for promoting “untrue speech.” Li died of complications from the virus on Feb. 7, 2020.
News of his death dominated Chinese social media, with a flurry of messages expressing grief as well as anger directed at the government. “Dr. Li Wenliang passed away” became the top search record on Weibo. State censors intervened to remove posts on Li’s death, but public outrage led to increased demands for free speech and greater information transparency from the government.
By contrast, as the outbreak has intensified, social media has taken on new and increased importance amid the large-scale implementation of social distancing, quarantine measures and lockdowns of entire cities. The platforms have become a way for homebound people to survive isolation, seek help, co-ordinate donations, entertain one another and socialize.
The frequency of disease outbreaks like the one we’re currently witnessing will increase, given the ways in which connections between human beings and nature continue to intensify.
Pandemics will require co-ordinated global response strategies. Digital corporations and social media platforms can and must be at the heart of these strategies, since their responses and willingness to collaborate with governments and public health officials will determine whether social media is viewed as a beneficial or pathological vector of pandemic response.
At present, it’s imperative to develop policies and mechanisms that address the digital creation and spread of misinformation about disease outbreaks. To do this will require that biomedical knowledge about pandemics be supplemented by expertise about their social, political and cultural underpinnings.
Without that understanding, efforts to contain COVID-19 will be hindered by “spreading unnecessary panic and confusion, and driving division, when solidarity and collaboration are key to saving lives and ending the health crisis.”
S. Harris Ali, Professor, Sociology, York University, Canada and Fuyuki Kurasawa, York Research Chair in Global Digital Citizenship, Associate Professor, Department of Sociology, York University, Canada
Dr. Bonnie Henry fan clubs pop up on social media – Burnaby Now
You may not have heard of her before this pandemic, but you definitely know her now.
Dr. Bonnie Henry is the provincial health officer who has been leading the fight against the COVID-19 pandemic in B.C. She has been delivering the sombre statistics during daily updates on the spread of the virus in the province, as well as providing instructions and answers on how to slow the spread of the coronavirus.
She has accumulated a cult following of fans who are very happy to voice their appreciation for Henry’s work.
Dr. Bonnie Henry Fan Clubs have now popped up on Facebook and Twitter, where users post and share how much they appreciate Dr. Henry and her work. The fan club Twitter account mostly posts updates when Henry is about to speak, or retweets posts from the public on their various thoughts about the health officer.
The Facebook group seems to be a little more on the wild side, pondering everything from Dr. Henry’s age to her relationship status. But nevertheless, every post in the group is steadfastly behind the provincial health officer, and everything she has done to keep the public informed.
Social media a blessing and a curse during time of crisis: B.C. communication expert – Campbell River Mirror
In times of crisis, people around the world are in a hurry to find accurate information, but it's not always there.
Before today's technology, people around the world would flock to a trusted news source to get the latest information. Now even legacy news sources, the mass media institutions that dominated before the Information Age, are using social media to reach their readers.
A B.C. expert in communications is warning the public to check their sources and ensure what they’re reading is accurate, to help reduce the spread of misinformation.
“It’s a blessing and a curse,” said professor and director of Simon Fraser University’s School of Communication, Peter Chow-White, in an interview March 18.
“It’s a (curse) because on the one hand there’s a lot of information out there, it’s hard to know – you have to sort of sift through a lot of it to figure out what’s right, what’s wrong, what’s real and what’s not.
“The blessing of social media is that the information gets delivered very quickly to our home, so we can react much faster than we normally would around these sorts of things.”
Additionally, he says that in order to navigate a crisis, the public needs to practise information and media literacy.
“It’s huge on the individual these days,” explained Chow-White.
“This is sort of a place where legacy news comes back into play and becomes more important than ever.”
With thousands of news sources and websites reporting on the pandemic, and some reporting on a crisis for the first time, the professor says the accuracy of information reaching people’s news feeds can be lost.
“It’s just not their traditional domain,” he said.
Social contagion, he explained, operates very similarly to viral contagion; there is a network effect, and social media amplifies this.
“It amplifies that (misinformation) and creates fear and panic in people’s minds without giving them the opportunity and the information to understand the context; how to mitigate that fear itself.
“In moments of crisis, fear is very real and palpable.”
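Chow-White's analogy between social and viral contagion can be made concrete with a toy simulation (a hypothetical illustration, not from the article or his research): on the same sharing network, a modest increase in each user's probability of re-sharing a claim is the difference between a rumour that fizzles and one that saturates the network. All names and parameters here are invented for the sketch.

```python
import random

def simulate_spread(n_users, avg_followers, share_prob, seed=0):
    """Toy contagion model: a claim starts with a few users, and each
    newly exposed user passes it to each follower with probability
    share_prob. Returns the total number of users reached."""
    rng = random.Random(seed)
    # Crude social graph: each user has a random set of followers.
    followers = {u: rng.sample(range(n_users), avg_followers)
                 for u in range(n_users)}
    exposed = set(range(5))        # five users post the claim
    frontier = list(exposed)
    while frontier:
        next_frontier = []
        for user in frontier:
            for f in followers[user]:
                if f not in exposed and rng.random() < share_prob:
                    exposed.add(f)
                    next_frontier.append(f)
        frontier = next_frontier
    return len(exposed)

# With 10 followers each, a 5% re-share chance means each exposed user
# infects about 0.5 others on average and the rumour dies out; at 30%
# the average is about 3 and most of the network ends up exposed.
quiet = simulate_spread(n_users=2000, avg_followers=10, share_prob=0.05)
viral = simulate_spread(n_users=2000, avg_followers=10, share_prob=0.30)
```

The amplification Chow-White describes is this threshold effect: the platform's network structure turns a small per-user change in sharing behaviour into an enormous change in total reach.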
Earlier this month, Black Press Media reported that an Interior Health medical officer condemned an article published by an Okanagan media outlet. The article included a “projected death” calculation that upwards of 5,800 people in the Okanagan could die from the COVID-19 pandemic.
The media outlet has since issued a public apology.
Chow-White says since the start of the COVID-19 pandemic in Canada, some information hasn’t been properly communicated.
“It would have been good to have messaging around – you don’t need a ton of toilet paper, and you don’t need it for two years. That’s a good case of how information gets delivered improperly and the narrative takes over instead of the science.”
However he added, there are many benefits to society tackling a crisis during the Information Age, thanks in part to social media.
“Social media becomes critical in communication. People need to be able to go to Twitter and have the algorithms push the information that is most important and that is the most trustworthy,” said Chow-White.
“Even though people are managing their own feeds, Twitter and Facebook have a social responsibility in these moments as well.”
Since COVID-19 was declared a pandemic, social media companies have had to act quickly both to respond to misinformation and to ensure access to facts.
For Facebook, that includes banning ads that capitalize on fears, putting more funds into fact-checking resources to comb out the false claims about treatments, and removing all non-official COVID-19 accounts from Facebook and Instagram.
Twitter has pledged to relaunch its profile verification program to help identify authoritative voices in its attempt to ensure facts are being seen by users first and foremost.
Even Snapchat, which is used mainly by younger demographics, has added a dedicated section on its app for COVID-19 news.
Not a B.C. conversation, but a global one
Chow-White added that the current COVID-19 situation isn't a B.C. conversation; it's a global conversation that plays out at multiple levels at once: local, national and international.
Over the last month, several events have reinforced Chow-White's view that, in the context of global information, leadership's use of the internet to deliver information has been uneven.
Referencing the topic of flattening the curve, moving from a mitigation strategy to a containment strategy, he says this wasn't handled particularly well in Canada, and especially in B.C.
“An example of that is – the Ministry of Education on Friday (March 13) announced that there’s no reason to close schools – and it’s good to keep them open… completely contradicting what the rest of the world is doing.
“Ninety-six hours (later), they reverse into a 180.”
B.C. has been hosting afternoon news briefings Monday to Friday and at noon on Saturday – streamed by all TV stations and broadcast live on the government's social media channels. These briefings include a daily case count, any provincial orders delivered by B.C.'s top doctor, Dr. Bonnie Henry, and a question-and-answer portion for reporters.
Such provincial orders have included a ban on large gatherings – initially for events with more than 250 attendees but since lowered to 50 guests – shutting down bars and nightclubs, and banning dine-in guests at restaurants.
“We are dealing today with things happening 10 to 14 days ago. The things we do now are going to help us 10 days, 14 days from now,” Henry said.
But the ban on gatherings has shown just how difficult it is to get messaging quickly to thousands of provincial citizens. Days after Henry announced the order, posts on social media still showed people hosting weddings and other events.
Henry has spent much of her daily briefings reminding the public that while the ban applies to gatherings of more than 50, a gathering of 45, 20 or even 10 people doesn't make anyone less at risk of contracting the virus.
In fact, she has since urged people to stay indoors, to go outside only with the people they live with and in groups of no more than one or two – and most importantly, to stay six feet apart.
The province announced this week that under the current state of emergency, bylaw officers are now empowered to enforce government restrictions.
On Friday, March 27, Henry expressed what she called ‘cautious optimism’ that the various contact restrictions had nearly halved the potential transmission.
That report prompted Prime Minister Justin Trudeau to remind the public that while “an excellent sign,” the news offered even more of a reason for people to continue listening to the advice of health officials.
“If we are seeing a reduction in the spikes, then that shows it is working but that means we need to continue what we are doing,” he said.
Unevenness has since evened out, says expert
Canada, Chow-White explained, is among the third wave of areas hit, following Asia and Italy. The U.S. currently has the most cases in the world, while China has started to see a drastic reprieve.
Iran, he said, has been one of the hardest hit areas.
Last week, satellite pictures surfaced online of football-field-sized mass graves in Iran.
“If that was a first-world country, then we’d be a lot more panicked. But we tend to ignore these sorts of things in the global north, unfortunately. Not everybody mind you, but a lot of people,” he said.
“If there was some sort of connection between that and us, a little more force through the last week, we wouldn’t have people walking around outside right now, casually wondering why they can’t go out for St. Patty’s Day.
“I’m not trying to make light of it, I’m just trying to illustrate a lag and an unevenness.”
Thankfully, he said, that unevenness had since evened out. He says people are getting it, and they’re staying home.