Facebook isn’t the only social media platform that needs to be held to account, according to an SFU expert in disinformation.
But it’s certainly one of them, said Ahmed Al-Rawi, an assistant professor of news, social media and public communications at the Burnaby Mountain school.
He was responding to a former Facebook executive who blew the whistle on the company’s alleged role in helping Jan. 6 U.S. Capitol rioters coordinate their attack, though Al-Rawi said she revealed nothing that wasn’t already well-known.
“I’m not surprised, we’ve heard such allegations before and this is something we already know about social media companies.
“She said what everyone already knows. I mean, it’s just a confirmation from an insider what academics, policy-makers and journalists have been talking about for more than two years.”
Frances Haugen, a former manager at Facebook who has also worked at Google and Pinterest, has accused Facebook of hiding internal data showing the company’s algorithm was boosting messages that sowed division, hate and misinformation.
Facebook had reversed a policy meant to limit such communication right after Joe Biden won the U.S. presidential election last November, instead of keeping the safeguards in place until after he was sworn in on Jan. 6.
Facebook, which has 2.8 billion users — 60 per cent of all internet-connected people on Earth — repeatedly chooses profits over public safety, Haugen said.
To be fair, Al-Rawi said, although the public discourse at the moment is almost exclusively about Facebook, many other platforms, such as Google’s YouTube, are escaping blame they deserve.
“And there are a lot of problems on YouTube and elsewhere, so I see a lot of blame (levelled) at Facebook — and I’m not saying Facebook is innocent, far from it — but other social media sites should also be held accountable.
“Just to give you an example, if you go to Instagram and search the hashtag QAnon, it’s actually blocked. If you go to Twitter, it’s allowed.
“That means Twitter is just like, ‘Yeah, go ahead, say anything you like.’”
Chris Tenove, a UBC post-doctoral research fellow in political science who has written about harmful speech and disinformation, agreed Facebook is not alone in deserving blame and closer scrutiny.
The Jan. 6 insurrectionists used other social-media platforms, too, and some of those do a poorer job than Facebook of monitoring problematic content, Tenove said.
“But they did use Facebook, as well, to help find each other and act.”
Regarding Facebook’s policy of limiting some types of political communication and misinformation in the period following the U.S. election, Tenove said that on the one hand it showed the company trying to get ahead of an issue.
“On the other hand it was an example of the kind of ad hoc approach that keeps getting applied to these types of issues.”
Neither expert can see the clock being rewound; social media is here to stay.
For one, social media has been invaluable for some during the pandemic, allowing people to stay at least virtually in touch when they can’t meet physically.
The federal and provincial governments use social media to promote getting vaccinated, to warn about wildfires and tsunamis; small businesses use them to promote themselves; those who can’t afford data plans use them to stay in touch with others; police and citizens use them to help find and prosecute criminal behaviour such as, say, an assault on public transit.
“We need them,” Al-Rawi said. “I would call them necessary evils.”
So what can governments, as regulators and policy setters, do?
Al-Rawi took a deep breath.
“They’re not doing enough,” he said. “Governments should definitely put more pressure on social media sites; they should hold them more accountable. Monetary, financial penalties would be very useful.
“But again, it’s so easy for me to say (Facebook, Google and YouTube) are all bad, but to be fair there are so many bad actors out there, so many of them. I mean, Twitter has been escaping blame, though I’m seeing a lot of disinformation on Twitter, far more than on Facebook and Instagram.”
While governments might want to break up Facebook and other social media platforms for reasons of unfair competition or possible monopolistic behaviour, Tenove added, he wasn’t sure that would cure the issue of problem content.
“We need to be able to think about alternatives to Facebook that allow us to communicate individually and as groups around political issues, and hopefully in more productive ways,” he said.
“We also need to think about how to ensure Facebook is incentivized to do better through carrots and sticks.”
Facebook’s vice-president of policy and public affairs, Nick Clegg, told employees in a memo last week that “what evidence there is simply does not support the idea that Facebook, or social media more generally, is the primary cause of polarization.”
Bearing down on hugely popular social media platforms and their impact on children, the leaders of a Senate panel have called executives from YouTube, TikTok and Snapchat to face questions on what their companies are doing to ensure young users’ safety.
The Senate Commerce subcommittee on consumer protection is fresh off a highly charged hearing with a former Facebook data scientist, who laid out internal company research showing that the company’s Instagram photo-sharing service appears to seriously harm some teens.
The panel is widening its focus to examine other tech platforms, with millions or billions of users, that also compete for young people’s attention and loyalty.
The three executives — Michael Beckerman, a TikTok vice president and head of public policy for the Americas; Leslie Miller, vice president for government affairs and public policy of YouTube’s owner Google; and Jennifer Stout, vice president for global public policy of Snapchat parent Snap Inc. — are due to appear at a subcommittee hearing Tuesday.
The three platforms are woven into the fabric of young people’s lives, often influencing their dress, dance moves and diet, potentially to the point of obsession. Peer pressure to get on the apps is strong. Social media can offer entertainment and education, but platforms have been misused to harm children and promote bullying, vandalism in schools, eating disorders and manipulative marketing, lawmakers say.
“We need to understand the impact of popular platforms like Snapchat, TikTok and YouTube on children and what companies can do better to keep them safe,” Sen. Richard Blumenthal, D-Conn., the subcommittee’s chairman, said in a statement.
The panel wants to learn how algorithms and product designs can magnify harm to children, foster addiction and intrusions of privacy, Blumenthal says. The aim is to develop legislation to protect young people and give parents tools to protect their children.
The video platform TikTok, wildly popular with teens and younger children, is owned by the Chinese company ByteDance. In only five years since launching, it has gained an estimated 1 billion monthly users.
TikTok denies allegations, most notably from conservative Republican lawmakers, that it operates at the behest of the Chinese government and provides it with users’ personal data. The company says it stores all TikTok U.S. data in the United States. The company also rejects criticisms of promoting harmful content to children.
TikTok says it has tools in place, such as screen time management, to help young people and parents moderate how long children spend on the app and what they see. The company says it focuses on age-appropriate experiences, noting that some features, such as direct messaging, are not available to younger users.
Early this year, after federal regulators ordered TikTok to disclose how its practices affect children and teenagers, the platform tightened its privacy practices for the under-18 crowd.
A separate House committee has investigated video service YouTube Kids this year. Lawmakers said the YouTube offshoot feeds children inappropriate material in “a wasteland of vapid, consumerist content” so it can serve ads to them. The app, with both video hosting and original shows, is available in about 70 countries.
A panel of the House Oversight and Reform Committee told YouTube CEO Susan Wojcicki that the service doesn’t do enough to protect children from potentially harmful material. Instead it relies on artificial intelligence and self-policing by content creators to decide which videos make it onto the platform, the panel’s chairman said in a letter to Wojcicki.
Parent company Google agreed to pay $170 million in 2019 settlements with the Federal Trade Commission and New York state over allegations that YouTube collected personal data on children without their parents’ consent.
Despite changes made after the settlements, the lawmaker’s letter said, YouTube Kids still shows ads to children.
YouTube says it has worked to provide children and families with protections and parental controls like time limits, to limit viewing to age-appropriate content. It emphasizes that the 2019 settlements involved the primary YouTube platform, not the kids’ version.
“We took action on more than 7 million accounts in the first three quarters of 2021 when we learned they may belong to a user under the age of 13 — 3 million of those in the third quarter alone — as we have ramped up our automated removal efforts,” Miller, the Google vice president, says in written testimony prepared for the hearing.
Snap Inc.’s Snapchat service allows people to send photos, videos and messages that are meant to quickly disappear, an enticement to its young users seeking to avoid snooping parents and teachers. Hence its “Ghostface Chillah” faceless (and word-less) white logo.
Only 10 years old, Snapchat says an eye-popping 90% of 13- to 24-year-olds in the U.S. use the service. It reported 306 million daily users in the July-September quarter.
The company agreed in 2014 to settle the FTC’s allegations that it deceived users about how effectively the shared material vanished and that it collected users’ contacts without telling them or asking permission. The messages, known as “snaps,” could be saved by using third-party apps or other ways, the regulators said.
Snapchat wasn’t fined but agreed to establish a privacy program to be monitored by an outside expert for the next 20 years — similar to oversight imposed on Facebook, Google and Myspace in privacy settlements in recent years.
© 2021 The Canadian Press
Justin Trudeau’s new heritage minister has a chance to reset the Liberal government’s controversial plans to regulate social media and internet giants. The question is whether he will take it.
Trudeau tapped veteran MP and cabinet minister Pablo Rodriguez to lead the heritage portfolio Tuesday, part of a wider reset of his cabinet after September’s general election.
The heritage file has presented surprising political risks for Trudeau’s ministers. Mélanie Joly ran into trouble over a deal with Netflix that saw the streaming giant promise a $500-million investment in Canadian content, but did not subject the company to Canadian sales taxes.
More recently, rookie minister Steven Guilbeault introduced Bill C-10, a poorly received attempt to modernize broadcasting rules to reflect the new internet-driven landscape.
The legislation was meant to bring internet content under broadcasting rules, in recognition that Canadians consume media differently in the internet age than when the Broadcasting Act was last reformed in 1991.
But it became a political lightning rod after the Liberals removed protections for user-generated content, which critics argued would subject Canadians’ social media accounts to CRTC regulations. And it wasn’t just opposition parties that were critical of the bill; it was widely panned by civil society organizations and academics.
Rodriguez, who served as Trudeau’s House leader in the last Parliament, is a longtime Quebec MP and seen as a steady hand in the Liberals’ front bench. He also remains Trudeau’s Quebec lieutenant in cabinet.
That responsibility could play a role in C-10’s fate. While the legislation was widely criticized, it was politically popular in Quebec – where C-10’s stated purpose of making Canadian content more “discoverable” on streaming platforms was well received.
© 2021 Global News, a division of Corus Entertainment Inc.
Facebook and YouTube have removed from their platforms a video by Brazilian President Jair Bolsonaro in which the far-right leader made a false claim that COVID-19 vaccines were linked with developing AIDS.
Both Facebook and Alphabet Inc’s YouTube said the video, which was recorded on Thursday, violated their policies.
“Our policies don’t allow claims that COVID-19 vaccines kill or seriously harm people,” a Facebook spokesperson said in a statement on Monday.
YouTube confirmed that it had taken the same step later in the day.
“We removed a video from Jair Bolsonaro’s channel for violating our medical disinformation policy regarding COVID-19 for alleging that vaccines don’t reduce the risk of contracting the disease and that they cause other infectious diseases,” YouTube said in a statement.
According to the Joint United Nations Programme on HIV and AIDS (UNAIDS), COVID-19 vaccines approved by health regulators are safe for most people, including those living with HIV, the virus that causes acquired immunodeficiency syndrome, known as AIDS.
Bolsonaro’s office did not respond immediately to a request for comment outside normal hours.
In July, YouTube removed videos from Bolsonaro’s official channel in which he recommended using hydroxychloroquine and ivermectin against COVID-19, despite scientific evidence that these drugs are not effective in treating the disease.
Since then, Bolsonaro has avoided naming both drugs on his live broadcasts, saying the videos could be removed and advocating “early treatment” in general for COVID-19.
Bolsonaro, who tested positive for the coronavirus in July last year, had credited his taking hydroxychloroquine, an anti-malarial drug, for his mild symptoms. While Bolsonaro himself last January said that he wouldn’t take any COVID-19 vaccine, he did vow to quickly inoculate all Brazilians.
In addition to removing the video, YouTube has suspended Bolsonaro for seven days, national newspapers O Estado de S. Paulo and O Globo reported, citing a source familiar with the matter.
YouTube did not respond to a separate Reuters request for comment regarding the suspension on Monday night.
(Reporting by Pedro Fonseca in Rio de Janeiro; Additional reporting by Gram Slattery in Rio de Janeiro and Anthony Boadle in Brasilia; Writing by Gabriel Araujo; Editing by Leslie Adler)