

How social media platforms and personalities are countering false information about COVID-19 vaccines – CBC.ca


Torontonians Mitchell Moffit and Greg Brown identified a need for science-based information on social media long before the pandemic hit. For the past year, though, their role as influencers has given them a front-row seat to the misunderstandings, rumours and conspiracy theories spreading online about the coronavirus.

“A lot of people are at least searching, looking for answers,” Moffit said.

Moffit and Brown’s YouTube channel, AsapSCIENCE, has amassed nearly 10 million subscribers by “making science make sense” through their self-made videos. Their brand has been built on explaining common experiences, such as how to fall asleep quickly or what happens when someone stops brushing their teeth. These days, it means talking a lot about COVID-19 and, increasingly, the vaccines.

“We’re in a pandemic, it’s a very scary situation,” Moffit said. “And so for us, it’s just a moment to say, hey, let’s calm down. Let’s just understand what’s happening.”

Social media provides the pair with a powerful podium from which to share their interest in science, but they’re also exposed to the flip-side of the technology. Brown said they do tend to get “a lot of positive reaction” to their videos about the COVID-19 vaccine, “but also, that’s where we really get to understand where the misinformation plot lines are coming from.”

The science-based videos that Brown, left, and Moffit create for their popular YouTube platform are increasingly focused on explaining issues related to the coronavirus. (Jared Thomas/CBC)

Moffit added that helping people find the right answers easily and “sussing out” the misinformation “is really a responsibility of these big media corporations, like YouTube, like Facebook.”

Indeed, as governments around the world have grappled with deploying the first COVID-19 vaccines, social media companies have been working behind the scenes to deal with misinformation about them on their platforms.

This past week, Twitter and TikTok both announced they would be expanding their policies around misinformation to include the COVID-19 vaccine as a key focus. Google and Facebook have also turned their attention to the issue.

“I think we have a very important responsibility to ensure that we strike the appropriate balance between freedom of expression and keeping our communities safe when it comes to the vaccine,” said Kevin Chan, the global director of public policy at Facebook Canada.

Kevin Chan, the global director of public policy at Facebook Canada, says the social media platform has processes in place to remove conspiracy theories or harmful misinformation about the coronavirus vaccines.

According to Chan, from the beginning of the pandemic through to October, Facebook removed “12 million pieces of individual misinformation content that is harmful in nature” regarding COVID-19. These include false claims about cures or treatments, and assertions the virus does not exist.

Now, it’s looking to filter misinformation around the vaccine in a similar way.

“If it’s about its safety, its efficacy, the ingredients, we will remove those things from Facebook,” Chan said. “If there are conspiracies about the origins of this vaccine, we will remove that as well.”

Posts from individuals who have what Chan refers to as “legitimate questions” about the vaccine won’t be targeted.

“We do want to preserve the space for people to be able to express individual thoughts [and] anecdotes about their experiences,” he said.

The spread of misinformation about COVID-19 vaccines on social media platforms like Twitter and Facebook has spurred social media companies to bring in policies for filtering posts that are misleading or foster unfounded theories. (Arun Sankar/Getty Images)

It’s a balance other social media companies are also trying to maintain.

According to Michele Austin, head of public policy at Twitter Canada, #covid-19 and #coronavirus were the top two hashtags in Canada in 2020.

When it comes to the vaccines, she said, “People want to talk about these issues. And I think it’s really important for us to provide a safe place on the service for these conversations to happen.”

Twitter announced in a blog post on Wednesday that the company would begin to remove “the most harmful misleading information” relating to the COVID-19 vaccines, and “label Tweets that contain potentially misleading information about the vaccines.”

However, in cases where someone may post something based on a misunderstanding or as “part of their journey to discover whether or not they should get vaccinated,” Austin said, “I think it would not be great for us to take down that tweet.”

Michele Austin, head of public policy at Twitter Canada, describes how the company filters out misleading information about COVID-19 vaccines on her company’s social media platform.

Montreal-based researcher Aengus Bridgman with the Media Ecosystem Observatory said such virtual conversations are no surprise as Canadians try to understand the new vaccines. But he said people who spend more time on social media and consume more of their news on those platforms “tend to be somewhat more” vaccine hesitant.

Misinformation circulating online, he said, “can generate misperceptions about the vaccine and can also lead to people then continuing to propagate it.”

According to Bridgman, “if something is widely shared, is highly up-voted or has a lot of likes,” more people are likely to start believing it.

In an effort to help prioritize trusted sources, YouTube, Twitter and Facebook now offer “prompts” on their platforms, directing users to seek out information about the coronavirus from government sites, such as the Public Health Agency of Canada.

According to a spokesperson with Health Canada, more than 11 million visits to the Government of Canada’s website have come from social media prompts since the start of the pandemic.

But with so much of the conversation happening online these days, Bridgman questions whether social media companies’ own methods to curb misinformation stretch far enough.

“Especially during a pandemic, when face-to-face contact is limited, what we have is companies deciding what is and what is not fair speech in online spaces,” he said. “Well, OK, but where is the democratic oversight of that?”

Understanding the scope and scale of people’s misperceptions online “is uniquely important right now,” Bridgman said, at a time when “these misperceptions have real life or death consequences.”

Bridgman doesn’t discount, however, what he refers to as the “enormous opportunity” for “well-crafted” COVID-19 information to reach huge audiences on social media.

Through their own videos and posts, Moffit said he and Brown are keen to connect with those people who are “reasonably just unsure” about COVID-19 vaccines.

“We all want to protect ourselves. We all want to protect our families, our kids. And so, I think to simply educate people is really powerful,” he said.


WATCH | From The National, how social media companies are countering misinformation about COVID-19:

A look at how the big platforms balance legitimate questions with misleading information.



Social media companies should face new legal duty to 'act responsibly,' expert panel finds – WellandTribune.ca


Social media companies can’t be trusted to moderate themselves, so it falls to the government to enforce new restrictions to protect Canadians from harmful content online, according to a report currently under review by the federal heritage minister.

The Canadian Commission on Democratic Expression, an expert panel of seven members, including former chief justice Beverley McLachlin, said it had become difficult to ignore the fact that too many real-world manifestations of online interactions are turning violent, destructive or hateful, despite social media’s parallel role in empowering positive social movements.

The panellists were particularly struck by the role they saw social media play last fall in “sowing distrust” in the aftermath of the U.S. presidential election, culminating in the lethal invasion of the U.S. Capitol. And they found, with the Quebec mosque shooting, the Toronto van attack and the armed invasion of Rideau Hall, that “Canada is not immune.”

“We recognize the charter, we recognize the ability of people to express themselves freely,” said Jean La Rose, former chief executive officer of the Aboriginal Peoples Television Network (APTN) and one of the seven commissioners, in an interview.

“But there must be limits at one point. There has to be limits as to where free speech becomes a racist discourse, or a hurtful discourse, or a hateful discourse.”

These limits would come in the form of a new law passed by Parliament, the commission recommended, that would force social media platforms like Twitter and Facebook, search engines like Google and its video-sharing site YouTube and others to adhere to a new “duty to act responsibly.”

The panel purposefully did not spell out what responsible behaviour should look like. Instead, it said this determination should be left to the government — as well as a new regulator that would oversee a code of conduct for the industry and a new “social media council” that would bring together the platforms with civil society and other groups.

La Rose said his experience in the journalism world demonstrated how there needed to be reasonable limits on what people can freely express so they are not permitted to call for the killings of Muslims, for example, or encourage violence against an individual by posting their home address or other personal details online.

“Having worked in media, having worked at APTN, for example, we have been at the receiving end of racist threats, of severe injury to our people, our reporters and others because of the view we present of the situation of the Indigenous community in Canada,” he said.

“Literally, we’ve had some reporters run off the road when they were covering a story because people were trying to block the telling of that story. So as a news entity, we have seen how far sometimes misinformation, hate and hurtful comments can go.”

Canadian Heritage Minister Steven Guilbeault has himself recently indicated that legislation to address “online hate” will be introduced “very soon.”

The minister has pointed to the popularity of such a move: a recent survey by the Canadian Race Relations Foundation (CRRF), for example, found that fully four-fifths of Canadians are on board with forcing social media companies to rapidly take down hateful content.

“Canadians are now asking their government to hold social media companies accountable for the content that appears on their platforms,” Guilbeault said after the CRRF survey was published.

“This is exactly what we intend to do, by introducing new regulations that will require online platforms to remove illegal and hateful content before they cause more harm and damage.”

Guilbeault has met with the commission to discuss their recommendations and is currently reviewing their report, press secretary Camille Gagné-Raynauld confirmed.

Representatives from Facebook Canada and Twitter Canada were among several people who provided witness testimony and participated in commission deliberations, the report said. Twitter declined comment to Canada’s National Observer.

“We haven’t reviewed the full report yet, so we can’t comment on the specific recommendations,” said Kevin Chan, global director and head of public policy for Facebook Canada. “We have community standards that govern what is and isn’t allowed on our platform, and in most cases those standards go well beyond what’s required by law.”


Chan also said Facebook agreed regulators should make “clear rules for the internet” so private companies aren’t left to make decisions themselves.

Google spokesperson Lauren Skelly said the company shares Canadians’ concerns about harmful content online and said YouTube takes its responsibility to remove content that violates its policies “extremely seriously.” She said the company has significantly ramped up daily removals of hate speech and removed millions of videos last quarter for violations.

“Any regulation needs to reflect the inherent complexity of the issue and the scale at which online platforms operate,” said Skelly. “We look forward to continuing our work with the government and local partners on addressing the spread of online hate to ensure a safer and open internet that works for all Canadians.”

The nine-month study by the commission, an initiative led by the Public Policy Forum, found that toxic content, from disinformation campaigns and conspiracy theories to hate speech and the targeting of people for harm, was being “amplified” by the actions of social media companies.

The study rejected the notion that social media platforms are “neutral disseminators of information,” finding instead that they curate content to serve their own commercial interests.

“The business model of some of the major social media companies involves keeping people engaged with their platforms as much as possible. And it turns out that keeping people engaged means feeding them sensational content because that’s what keeps people clicking,” said Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University and another commissioner.

“The incentives for social media companies are not aligned with the public interest. These are private companies whose obligation is to make money for their shareholders.”

The commission also proposed a tribunal to deal with dispute resolutions quickly, as well as a “transparency regime” that would require social media companies to make certain information available to the regulator, including the “algorithmic architecture used to identify problematic content.”

Jaffer wrote a “concurring statement” in the report, where he confessed it was difficult to endorse the commission’s proposed “duty to act responsibly” without going further to define how that duty will work in reality. He said defining it will require “difficult tradeoffs” between free speech, privacy and other issues.

Carl Meyer / Local Journalism Initiative / Canada’s National Observer



Report calls for powerful new federal body to regulate social media – The Globe and Mail


An iPhone displays the Facebook app in New Orleans, Aug. 11, 2019. (Jenny Kane/The Associated Press)

A federally funded panel is recommending the creation of a powerful new government regulator to oversee social media companies such as Facebook and Google and to require them to have strong content-moderation practices and to comply with a new legal duty to act responsibly.

The report by the Canadian Commission on Democratic Expression, an initiative of the Public Policy Forum (PPF), to be released on Wednesday, also calls for the creation of a federal “e-tribunal” to hear complaints about specific social media posts.

The federal Liberal government plans to introduce legislation early this year to regulate social media companies, with a focus on online hate and harassment. The report’s recommendations are aimed at influencing that legislation.


“It’s become pretty clear over the last few years that the major platform companies’ business models are causing democratic harms,” Jameel Jaffer, executive director of Columbia University’s Knight First Amendment Institute, said in an interview. Mr. Jaffer is one of seven commissioners who worked on the report. He grew up in Canada and his career has focused on civil liberties and freedom of speech in Canada and the United States. Mr. Jaffer said the companies’ algorithms, which automatically determine which social media posts get priority, can highlight “sensational and extreme” views.

“I think it’s also become evident that self regulation isn’t sufficient here because the companies’ incentives aren’t aligned with the public’s,” he said, adding that a new regulatory framework that better aligns the companies’ incentives with the public interest is needed. “What that framework should look like is a really difficult question, because inevitably, it will require us to make difficult trade-offs between multiple important values.”

The challenge is underscored by the fact that Mr. Jaffer attached a statement to the report that said he could not fully endorse the panel’s call for a duty-to-act-responsibly law and the proposed e-tribunal process.

While the commissioners say the era of self-regulation by internet giants must end, the report cautions against the kind of “reactive takedown laws” adopted by European Union nations such as Germany, which require companies to remove objectionable content in as little as 24 hours or face heavy fines. The report suggests a new Canadian regulator have quick takedown power, however, for matters involving a credible and imminent threat to safety.

The report recommends the regulator focus on ensuring social media companies have strong and transparent policies for moderating content. It says companies should be required to disclose details such as how algorithms are used to detect problematic content, the number and location of human content moderators and their guidelines for Canada. Other proposed transparency measures include a requirement that bots – computer-generated social media accounts that can appear to be run by a human – be registered and labelled.

“Citizens should know when they are engaging with an agent, bot or other form of [artificial intelligence] impersonating a human,” the report states.

The report says that, to be effective, the regulator must have the power to impose penalties such as massive fines and possible jail time for executives.


The commission’s work was led by Public Policy Forum president and chief executive officer Edward Greenspon, a former editor-in-chief of The Globe and Mail.

The study also relied on a Citizens’ Assembly on Democratic Expression, a gathering of 42 randomly selected Canadians who reviewed the topic of social media regulation and issued recommendations.

The PPF said its work was funded in part by a $625,000 contribution from Canadian Heritage through its Digital Citizens Initiative.

The commissioners’ report says the focus should be on regulating how social media companies enforce their own content rules and how they deal with complaints about content that is already illegal, such as hate speech. It argues against banning additional types of speech through the Criminal Code.

“We have clearly emerged in the regulatory camp, but with a bias toward regulating the system rather than the content. Given the nature and rapid evolution of the medium, an attempt to tick off an exhaustive list of harms, deal with them individually and move on would be fanciful, partial and temporary,” the report states.

The proposed e-tribunal does open the door to government regulation of specific posts. The report said it could be modelled on the B.C. Civil Resolution Tribunal, an online body that resolves issues such as small claims and motor vehicle matters.


Mr. Jaffer said the panel did not define the precise role of the e-tribunal, and left many questions unanswered. In his statement, he said he is not convinced a tribunal is preferable to requiring large platforms, at their own expense, to have an efficient and transparent review and appeals process for specific posts.

He wrote that before he could endorse an e-tribunal, he would want to know more about its mandate, and what relationship it would have to the processes some platforms already use. Mr. Jaffer also cited lack of detail as the reason he could not support the call for a legislated duty to act responsibly.



