
GOP pushes bills to allow social media ‘censorship’ lawsuits – 570 News


Republican state lawmakers are pushing for social media giants to face costly lawsuits for policing content on their websites, taking aim at a federal law that prevents internet companies from being sued for removing posts.

GOP politicians in roughly two dozen states have introduced bills that would allow for civil lawsuits against platforms for what they call the “censorship” of posts. Many protest the deletion of political and religious statements, according to the National Conference of State Legislatures. Democrats, who also have called for greater scrutiny of big tech, are sponsoring the same measures in at least two states.

The federal liability shield has long been a target of former President Donald Trump and other Republicans, whose complaints about Silicon Valley stifling conservative viewpoints were amplified when the companies cracked down on misleading posts about the 2020 election.

Twitter and Facebook, which are often criticized for opaque policing policies, took the additional step of silencing Trump on their platforms after the Jan. 6 insurrection at the U.S. Capitol. Twitter has banned him, while a semi-independent panel is reviewing Facebook’s indefinite suspension of his account and considering whether to reinstate access.

Experts argue the legislative proposals are doomed to fail while the federal law, Section 230 of the Communications Decency Act, is in place. They said state lawmakers are wading into unconstitutional territory by trying to interfere with the editorial policies of private companies.

Len Niehoff, a professor at the University of Michigan Law School, described the idea as a “constitutional non-starter.”

“If an online platform wants to have a policy that it will delete certain kinds of tweets, delete certain kinds of users, forbid certain kinds of content, that is in the exercise of their right as an information distributor,” he said. “And the idea that you would create a cause of action that would allow people to sue when that happens is deeply problematic under the First Amendment.”

The bills vary slightly, but many allow for civil lawsuits if a social media user is censored over posts having to do with politics or religion, with some proposals allowing for damages of $75,000 for each blocked post. They would apply to companies with millions of users and carve out exemptions for posts that call for violence, incite criminal acts or involve other similar conduct.

The sponsor of Oklahoma’s version, Republican state Sen. Rob Standridge, said social media posts are being unjustly censored and that people should have a way to challenge the platforms’ actions given their powerful place in American discourse. His bill passed committee in late February on a 5-3 vote, with Democrats opposed.

“This just gives citizens recourse,” he said, adding that the companies “can’t abuse that immunity” given to them through federal law.

Part of a broad, 1996 federal law on telecoms, Section 230 generally exempts internet companies from being sued over what users post on their sites. The statute, which was meant to promote growth of the internet, exempts websites from being sued for removing content deemed to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” as long as the companies are acting in “good faith.”

As the power of social media has grown, so has the prospect of government regulation. Several congressional hearings have been held on content moderation, sometimes with Silicon Valley CEOs called to testify. Republicans, and some Democrats, have argued that the companies should lose their liability shield or that Section 230 should be updated to make the companies meet certain criteria before receiving the legal protection.

Twitter and Facebook also have been hounded over what critics have described as sluggish, after-the-fact account suspensions or post takedowns, with liberals complaining they have given too much latitude to conservatives and hate groups.

Trump railed against Section 230 throughout his term in office, well before Twitter and Facebook blocked his access to their platforms after the assault on the Capitol. Last May, he signed a largely symbolic executive order that directed the executive branch to ask independent rule-making agencies whether new regulations could be placed on the companies.

“All of these tech monopolies are going to abuse their power and interfere in our elections, and it has to be stopped,” he told supporters at the Capitol hours before the riot.

Antigone Davis, global head of safety for Facebook, said these kinds of proposals would make it harder for the site to remove posts involving hate speech, sexualized photos of minors and other harmful content.

“We will continue advocating for updated rules for the internet, including reforms to federal law that protect free expression while allowing platforms like ours to remove content that threatens the safety and security of people across the United States,” she said.

In a statement, Twitter said: “We enforce the Twitter rules judiciously and impartially for everyone on our service – regardless of ideology or political affiliation – and our policies help us to protect the diversity and health of the public conversation.”

Researchers have not found widespread evidence that social media companies are biased against conservative news, posts or materials.

In a February report, New York University’s Stern Center for Business and Human Rights called the accusations political disinformation spread by Republicans. The report recommended that social media sites give clear reasoning when they take action against material on their platforms.

“Greater transparency — such as that which Twitter and Facebook offered when they took action against President Trump in January — would help to defuse claims of political bias, while clarifying the boundaries of acceptable user conduct,” the report read.

While the federal law is in place, the state proposals mostly amount to political posturing, said Darrell West, vice president of governance studies at the Brookings Institution, a public policy group.

“This is red meat for the base. It’s a way to show conservatives they don’t like being pushed around,” he said. “They’ve seen Trump get kicked off Facebook and Twitter, and so this is a way to tell Republican voters this is unfair and Republicans are fighting for them.”

___

Izaguirre reported from Lindenhurst, New York

___

Associated Press coverage of voting rights receives support in part from Carnegie Corporation of New York. The AP is solely responsible for this content.

Anthony Izaguirre, The Associated Press


How vaccine misinformation spreads on social media – Varsity


Misinformation about vaccines is widely recognized as a motivator for vaccine hesitancy and anti-vax conspiracy theories. Both attitudes could hamper COVID-19 vaccine rollouts across the country, and the government is very aware of the risk: Ottawa plans to invest $64 million in education campaigns to fight vaccine hesitancy and misinformation.

Misinformation can range from unwarranted suspicions about what vaccines are made of to claims that taking vaccines can cause infertility. Social media platforms are a major source of this misinformation — and the companies know it.

On March 1, Twitter introduced a new labelling policy to alert users about misinformation and a strike system that would lock users out of the app if they repeatedly violate the company’s COVID-19 policy. Facebook and Instagram already announced a blanket ban on vaccine misinformation last month. 

Vaccine misinformation on social media predates the pandemic. In 2016, information about an illegal vaccine distribution network that administered unrefrigerated or expired vaccines in China’s Shandong province spread on social media, which led to a 43.7 per cent decrease in the willingness of parents to vaccinate their children. Most of the people surveyed had learned about the story exclusively through social media. 

How social media platforms shape beliefs and attitudes

To understand the roots of the vaccine misinformation problem, one has to understand how social media algorithms recommend content to users in the first place.  

Social media allows anyone to share information. This is its primary strength, but it can also be a weakness when that information is unchecked, unverified, or unedited. Social media feeds can become catalysts for misinformation and a lack of trust in public officials. They have the power to change the minds of individuals on many different subjects, primarily through repeated suggestions of the same ideas.

Algorithms on Facebook and Twitter push accounts that users interact with the most to the top of their feeds. As posts or tweets become more popular, they are amplified and spread to more users. When these posts confirm existing biases those users may have, misinformation may spread. For example, users who are on the fence about vaccine safety might interact with a few posts questioning the efficacy of vaccines, and then encounter even more similar posts because of the algorithm.
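The amplification loop described above can be sketched in a few lines of code. This is a deliberately simplified illustration — not any platform's actual ranking system — in which posts from authors the user has engaged with most are pushed to the top of the feed, so past engagement drives what gets re-shown:

```python
# Toy feed-ranking sketch (illustrative only, not a real platform's
# algorithm): score each post by how often the user has previously
# engaged with its author, then show the highest-scoring posts first.
from collections import Counter

def rank_feed(posts, interactions):
    """posts: list of (post_id, author) pairs.
    interactions: list of authors the user previously engaged with.
    Returns post_ids ordered from most- to least-engaged author."""
    engagement = Counter(interactions)
    ranked = sorted(posts, key=lambda p: engagement[p[1]], reverse=True)
    return [post_id for post_id, _author in ranked]

# Hypothetical example: two past interactions with one account are
# enough to push its next post to the top of the feed.
posts = [("p1", "news_org"), ("p2", "antivax_acct"), ("p3", "friend")]
interactions = ["antivax_acct", "antivax_acct", "friend"]
print(rank_feed(posts, interactions))  # ['p2', 'p3', 'p1']
```

The feedback loop arises because each extra interaction raises that author's score, which raises the chance of another interaction — the "repeated suggestions of the same ideas" the paragraph above describes.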

Misinformation researchers Claire Wardle and Eric Singerman wrote in the British Medical Journal that while Facebook, Twitter, and Google have “stated that they will take more action against false and misleading information,” it’s the personal stories and anecdotes on their platforms — which they are not controlling — that are potentially detrimental to users’ collective understanding of vaccine safety, necessity, and efficacy.

The duo also highlights the complexity of the situation: some denounce content removal as censorship and a violation of freedom of speech, yet there remains an argument for platforms removing posts that spread misinformation entirely.

Closer to home, Deena Abul-Fottouh, an assistant professor in the Faculty of Information, researches the impacts social media networks have on their users. A recent paper she co-wrote with researchers from U of T and Ryerson University analyzes how YouTube handles vaccine misinformation. 

The YouTube algorithm is built on homophily — the observation that “like-minded individuals… tend to act in a similar way” — in that it pushes content a user already finds interesting onto other users judged to have similar tastes. According to the study, this creates a filter bubble, “which occurs when a recommender system makes assumptions of user preferences based on prior collected information about that user, making it less likely that the user would be exposed to diverse perspectives.”
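A minimal sketch of a homophily-based recommender makes the filter bubble concrete. This is an illustrative toy, not YouTube's actual system: it measures similarity between users by the Jaccard overlap of the items they liked, then recommends what the most similar user has watched. A user whose history is all one kind of content is, by construction, served more of the same:

```python
# Toy homophily-based recommender (illustrative sketch, not YouTube's
# real system): find the most similar other user by Jaccard overlap of
# liked-item sets, and recommend their items the target hasn't seen.

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target_likes, other_users):
    """target_likes: set of items the user liked.
    other_users: dict mapping user name -> set of liked items.
    Returns the most similar user's unseen items, sorted."""
    best = max(other_users.values(),
               key=lambda likes: jaccard(target_likes, likes))
    return sorted(best - target_likes)

# Hypothetical example: a user who only watches anti-vaccine clips is
# matched to a like-minded user, so the recommendation is more of the
# same — the "filter bubble" the study describes.
me = {"antivax_clip_1", "antivax_clip_2"}
others = {
    "similar_user": {"antivax_clip_1", "antivax_clip_2", "antivax_clip_3"},
    "other_user": {"provax_explainer", "science_doc"},
}
print(recommend(me, others))  # ['antivax_clip_3']
```

Because similarity is computed only from prior behaviour, the system never has a reason to surface the dissimilar user's content — which is exactly the "less likely that the user would be exposed to diverse perspectives" effect quoted above.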

How are social media companies responding to misinformation? 

Facebook and Twitter began to take steps to prevent the spread of health misinformation in 2018. These were small measures, such as the addition of educational pop-ups and the suppression of false claims that were deemed threatening. Meanwhile, Pinterest changed its settings so that the search term “vaccines” would only yield information from reliable sources such as the World Health Organization.

However, social media companies are still under increased pressure from governments, the public, and health authorities to alter their policies regarding public health. Following new guidelines, Facebook has been removing posts that include any false information regarding the vaccines, as well as adding labels to posts that need clarification. 

Wardle and Singerman describe these measures as positive but still insufficient, since they tackle individual instances of misinformation rather than the larger psychological effects of suspicion and fear that misinformation generates. The researchers sum up: “What’s required is more innovative, agile responses that go beyond the simple questions of whether to simply remove, demote, or label.”

YouTube has also made changes to its policies and is now more likely to recommend pro-vaccine videos. But Abul-Fottouh and her colleagues wrote that the “filter bubble” effect is still prevalent and that those who engage with anti-vaccine content will be on the receiving end of more anti-vaccine content.


Halifax police, school investigate attack on student in social media video – Global News


A video showing several teenagers attacking another student at the Halifax Common surfaced on social media on Friday. Halifax police and the regional education centre say they are investigating the “very disturbing” incident.

The incident, involving students from Citadel High School, occurred Thursday afternoon after class.

The video circulating on social media shows three teenagers walking behind a student. Once the student is on the ground, a second student is seen stomping on his head. Another individual filming could be heard egging on the fight.

“The school has spoken with everyone involved and the aggressors and their families know there are consequences for those actions, even though it happened off school property,” says Doug Hadley, spokesperson for the Halifax Regional Centre for Education.


The school has also contacted police, who have confirmed they are investigating the incident.

“Halifax Regional Police would like to confirm that we have received reports of an incident captured in a video involving a physical altercation between some youths,” HRP said in a news release Friday night.

“We can confirm that we have an ongoing investigation into this matter. Due to the age of the parties involved, we are unable to provide specific details.

“We would like to assure the public that we take the matter seriously and are taking all necessary steps in this ongoing matter.”

Early Saturday afternoon, Citadel High School principal Joe Morrison released a statement saying the incident is being taken seriously.

“We were made aware of the incident on Friday and spent considerable time addressing the situation.

“Late Friday afternoon, we became aware that a video of the incident was circulating on social media,” Morrison wrote.

“We also learned there is a narrative circulating suggesting the person on the ground has special needs. This is not the case and distracts from what actually happened.”


Click to play video 'Surveillance video shows RCMP officers after shooting at Nova Scotia fire hall'



1:52
Surveillance video shows RCMP officers after shooting at Nova Scotia fire hall


Surveillance video shows RCMP officers after shooting at Nova Scotia fire hall

As for the student attacked, authorities say he is doing fine and attended school on Friday, but that doesn’t change the seriousness of the incident.


“It’s very disturbing for anyone who’s seen it, so it’s a matter of great concern for everyone who is involved,” Hadley says.

He is also pleading for everyone to stop sharing the video of the attack.

“It’s sharing a video of someone being attacked, and they’re being filmed without their consent, and so every time we share that it can lead to further victimization,” Hadley says.

“It can be harmful on many different levels,” he says, adding that there is also an issue of reputation. “This might be reflective of the larger community when in fact we know that not to be true, and that Citadel High and its students have many things to be proud of.

“But there’s also a feeling that there’s a risk, that others might think that’s acceptable behavior when it’s clearly not.”

Police say the investigation into the incident is ongoing and the school has stated that there will be consequences for the students involved.

© 2021 Global News, a division of Corus Entertainment Inc.
