Bill Barr bashed in right-wing media after election fraud comments: 'He is either a liar or a fool or both' – CNN

Since he was confirmed as attorney general, William Barr has been somewhat of a hero in the right-wing media universe. He has assailed the Russia probe. He has talked a big game about cracking down on Antifa. He has sharply criticized the news media. On and on it goes.
But his celebrity status took a hit on Tuesday when he undercut President Trump’s brazenly false contention that there was massive voter fraud in the 2020 election. Speaking to the Associated Press, Barr said that, “to date, we have not seen fraud on a scale that could have effected a different outcome in the election.”
The statement from Barr, which merely recited a simple fact, not only cut against what Trump has been saying, but also what Trump’s propagandists and allies in right-wing media have been feeding their audiences. For weeks, these media personalities have strung their audiences along, suggesting that damning proof of fraud was just around the corner. Which is why the comment from Barr stung so badly.
The comment effectively forced these right-wing stars to pick between acknowledging the reality Barr laid out or continuing Trump’s fantasy. Trump’s most devoted propagandists chose the latter. And so they started to throw Barr under the bus, just as they’ve done with every other conservative who has dared to contradict the president. (Think about how former conservative stars such as Jeff Sessions, Justin Amash, Paul Ryan, and others were treated when they didn’t blindly oblige Trump’s demands.)

“A liar or a fool or both”

Fox Business host Lou Dobbs, whose conspiratorial program is a favorite of the president, attacked Barr in brutal terms on his show. “For the attorney general of the United States to make that statement — he is either a liar or a fool or both,” Dobbs said. Dobbs then went further, suggesting Barr was “perhaps compromised.” He characterized Barr as having “appeared to join in with the radical Dems and the deep-state and the resistance.”
Dobbs wasn’t the only one. Newsmax host Greg Kelly, who has risen to fame in right-wing media circles in the last few weeks for suggesting Trump could emerge as the winner of the election, went after Barr on his show. “Some of us are wondering if he is a warrior with the Constitution or if he’s just a bureaucrat,” Kelly said. Kelly added that he “can’t believe” if Barr “looked for voter fraud he wouldn’t find any.” And Mark Levin said he “regret[ted] to say” that Barr’s comments were “misleading.”
The far-right blogs were even harsher. The Gateway Pundit, a fringe website which Trump has repeatedly promoted, published a post that said Barr had revealed himself as “totally deaf, dumb and blind.” The post went on to say that Barr’s “masquerade as someone opposed to the criminality of the Deep State” had been “exposed as a venal lie” and that he was a “fraud.” It concluded, “You either fix the damn corrupt system or we will abandon you…Our days of tolerating betrayal are over.”

Some hold fire

While Barr faced strong criticism from some notable names in right-wing media, others refrained from attacking him on Tuesday night. Notably, heavyweights Tucker Carlson and Sean Hannity didn’t skewer the AG. It will be interesting to see over the next 24 hours whether this anti-Barr narrative takes greater hold in the Trump-friendly media, or whether it dissipates.

Social media companies should face new legal duty to 'act responsibly,' expert panel finds – The Tri-City News

Social media companies can’t be trusted to moderate themselves, so it falls to the government to enforce new restrictions to protect Canadians from harmful content online, according to a report currently under review by the federal heritage minister.

The Canadian Commission on Democratic Expression, an expert panel of seven members, including former chief justice Beverley McLachlin, said it had become difficult to ignore the fact that too many real-world manifestations of online interactions are turning violent, destructive or hateful, despite social media’s parallel role in empowering positive social movements.

The panellists were particularly struck by the role they saw social media play last fall in “sowing distrust” in the aftermath of the U.S. presidential election, culminating in the lethal invasion of the U.S. Capitol. And they found, with the Quebec mosque shooting, the Toronto van attack and the armed invasion of Rideau Hall, that “Canada is not immune.”

“We recognize the charter, we recognize the ability of people to express themselves freely,” said Jean La Rose, former chief executive officer of the Aboriginal Peoples Television Network (APTN) and one of the seven commissioners, in an interview.

“But there must be limits at one point. There has to be limits as to where free speech becomes a racist discourse, or a hurtful discourse, or a hateful discourse.”

‘We have been at the receiving end of racist threats’

These limits would come in the form of a new law passed by Parliament, the commission recommended, that would force social media platforms like Twitter and Facebook, search engines like Google and its video-sharing site YouTube and others to adhere to a new “duty to act responsibly.”

The panel purposefully did not spell out what responsible behaviour should look like. Instead, it said this determination should be left to the government — as well as a new regulator that would oversee a code of conduct for the industry and a new “social media council” that would bring together the platforms with civil society and other groups.

La Rose said his experience in the journalism world demonstrated how there needed to be reasonable limits on what people can freely express so they are not permitted to call for the killings of Muslims, for example, or encourage violence against an individual by posting their home address or other personal details online.

“Having worked in media, having worked at APTN, for example, we have been at the receiving end of racist threats, of severe injury to our people, our reporters and others because of the view we present of the situation of the Indigenous community in Canada,” he said.

“Literally, we’ve had some reporters run off the road when they were covering a story because people were trying to block the telling of that story. So as a news entity, we have seen how far sometimes misinformation, hate and hurtful comments can go.”

Rules must reflect issue’s ‘inherent complexity’: Google

Canadian Heritage Minister Steven Guilbeault has himself recently indicated that legislation to address “online hate” will be introduced “very soon.”

The minister has pointed to the popularity of such a move: a recent survey by the Canadian Race Relations Foundation (CRRF), for example, found that fully four-fifths of Canadians are on board with forcing social media companies to rapidly take down hateful content.

“Canadians are now asking their government to hold social media companies accountable for the content that appears on their platforms,” Guilbeault said after the CRRF survey was published.

“This is exactly what we intend to do, by introducing new regulations that will require online platforms to remove illegal and hateful content before they cause more harm and damage.”

Guilbeault has met with the commission to discuss their recommendations and is currently reviewing their report, press secretary Camille Gagné-Raynauld confirmed.

Representatives from Facebook Canada and Twitter Canada were among several people who provided witness testimony and participated in commission deliberations, the report said. Twitter declined comment to Canada’s National Observer.

“We haven’t reviewed the full report yet, so we can’t comment on the specific recommendations,” said Kevin Chan, global director and head of public policy for Facebook Canada. “We have community standards that govern what is and isn’t allowed on our platform, and in most cases those standards go well beyond what’s required by law.”

Chan also said Facebook agreed regulators should make “clear rules for the internet” so private companies aren’t left to make decisions themselves.

Google spokesperson Lauren Skelly said the company shares Canadians’ concerns about harmful content online and said YouTube takes its responsibility to remove content that violates its policies “extremely seriously.” She said the company has significantly ramped up daily removals of hate speech and removed millions of videos last quarter for violations.

“Any regulation needs to reflect the inherent complexity of the issue and the scale at which online platforms operate,” said Skelly. “We look forward to continuing our work with the government and local partners on addressing the spread of online hate to ensure a safer and open internet that works for all Canadians.”

Incentives ‘not aligned with the public interest’: Jaffer

The nine-month study by the commission, an initiative led by the Public Policy Forum, found that toxic content, everything from disinformation campaigns and conspiracy theories to hate speech and the targeting of people for harm, was being “amplified” by the actions of social media companies.

The study rejected the notion that social media platforms are “neutral disseminators of information,” finding instead that they curate content to serve their own commercial interests.

“The business model of some of the major social media companies involves keeping people engaged with their platforms as much as possible. And it turns out that keeping people engaged means feeding them sensational content because that’s what keeps people clicking,” said Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University and another commissioner.

“The incentives for social media companies are not aligned with the public interest. These are private companies whose obligation is to make money for their shareholders.”

The commission also proposed a tribunal to deal with dispute resolutions quickly, as well as a “transparency regime” that would require social media companies to make certain information available to the regulator, including the “algorithmic architecture used to identify problematic content.”

Jaffer wrote a “concurring statement” in the report, where he confessed it was difficult to endorse the commission’s proposed “duty to act responsibly” without going further to define how that duty will work in reality. He said defining it will require “difficult tradeoffs” between free speech, privacy and other issues.

Carl Meyer / Local Journalism Initiative / Canada’s National Observer



What the Capitol Riot Data Download Shows about Social Media Vulnerabilities – Scientific American

Pro-Trump protesters at the Capitol used their phones to record and post photos and videos on social media. Credit: Lev Radin/Getty Images

During the January 6 assault on the Capitol Building in Washington, D.C., rioters posted photographs and videos of their rampage on social media. The platforms they used ranged from mainstream sites such as Facebook to niche ones such as Parler—a social networking service popular with right-wing groups. Once they realized this documentation could get them in trouble, many started deleting their posts. But Internet sleuths had already begun downloading the potentially incriminating material. One researcher, who publicly identifies herself only by the Twitter handle @donk_enby, led an effort that she claims downloaded and archived more than 99 percent of all data posted to Parler before Amazon Web Services stopped hosting the platform. Scientific American repeatedly e-mailed Parler’s media team for comment but had not received a response at the time of publication.

Amateur and federal investigators can extract a lot of information from this massive trove, including the locations and identities of Parler users. Although many of those studying the Parler data are law enforcement officials looking into the Capitol insurrection, the situation provides a vivid example of the way social media posts—whether extreme or innocuous—can inadvertently reveal much more information than intended. And vulnerabilities that are legitimately used by investigators can be just as easily exploited by bad actors.

To learn more about this issue, Scientific American spoke with Rachel Tobac, an ethical hacker and CEO of SocialProof Security, an organization that helps companies spot potential vulnerabilities to cyberattacks. “The people that most people are talking about when they think of a hacker, those are criminals,” she says. “In the hacker community, we’re trying to help people understand that hackers are helpers. We’re the people who are trying to keep you safe.” To that end, Tobac also explained how even tame posts on mainstream social media sites could reveal more personal information than many users expect—and how they can protect themselves.

[An edited transcript of the interview follows.]

How was it possible to download so much data from Parler?

Folks were able to download and archive the majority of Parler’s content … through automated site scraping. [Parler] ordered their posts by number in the URL itself, so anyone with any programming knowledge could just download all of the public content. This is a fundamental security vulnerability. We call this an insecure direct object reference, or IDOR: the Parler posts were listed one after another, so if you just add “1” to the [number in the] URL, you could then scrape the next post, and so on. This specific type of vulnerability would not be found in mainstream social media sites such as Facebook or Twitter. For instance, Twitter randomizes the URLs of posts and requires authentication to even work with those randomized URLs. This [IDOR vulnerability]—coupled with a lack of authentication required to look at each post and a lack of rate limiting (rate limiting caps how many requests you can make to pull data)—means that even a simple program could allow a person to scrape every post, every photo, every video, all the metadata on the Web site.
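The sequential-ID flaw Tobac describes can be sketched in a few lines. The code below is illustrative only: the domain, URL pattern and function name are hypothetical, not Parler’s actual endpoints, and no network requests are made.

```python
# Illustrative sketch of IDOR-style enumeration: when post IDs are sequential
# integers and no authentication is required, the entire public address space
# can be generated mechanically. The domain and URL pattern are hypothetical.
def enumerate_post_urls(base="https://example.com/posts/", start=1, count=5):
    """Return `count` candidate post URLs beginning at ID `start`."""
    return [f"{base}{i}" for i in range(start, start + count)]

urls = enumerate_post_urls()
# A real scraper would then fetch each URL in turn; absent rate limiting,
# nothing slows that loop down. Randomized URLs plus required authentication
# (the approach Tobac attributes to Twitter) defeat this enumeration.
```

The point of the sketch is that no crawling or link discovery is needed: predictable identifiers alone make every public post reachable.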

What makes the archived data so revealing?

The images and videos still contained GPS metadata when they went online, which means that anyone can now map the detailed GPS locations of all the users who posted. This is because our smartphone logs the GPS coordinates and other data, such as the lens and the timing of the photo and video. We call this EXIF data—we can turn this off on our phones, but many people just don’t know to turn that off. And so they leave it embedded within the files that they upload, such as a video or a photo, and they unknowingly disclose information about their location. Folks on the Internet, law enforcement, the FBI can use this information to determine where those specific users live, work, spend time—or where they were when they posted that content.
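To make the GPS point concrete: EXIF stores latitude and longitude as degrees, minutes and seconds plus a hemisphere reference, which converts to the decimal coordinates a mapping tool plots. A minimal sketch of that conversion follows; the sample values are hypothetical, chosen near the U.S. Capitol.

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style GPS (degrees, minutes, seconds, hemisphere ref)
    to signed decimal degrees. South and West are negative."""
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical EXIF GPS values embedded in a photo taken near the Capitol:
lat = dms_to_decimal(38, 53, 23.0, "N")   # ~38.8897
lon = dms_to_decimal(77, 0, 32.0, "W")    # ~-77.0089
```

Once a batch of posts has coordinates like these, plotting where each user lives, works or was standing is a one-line mapping call, which is exactly the exposure Tobac describes.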

Can investigators extract similar information from posts on more mainstream platforms?

This EXIF data is scrubbed on places such as Facebook and Twitter, but we still have a lot of people who don’t realize how much they’re compromising their location and information about themselves when they’re posting. Even if Parler did scrub the EXIF data, we saw on a lot of posts during this event that people were geolocation tagging their Instagram Stories to the Capitol Building that day or broadcasting their actions on Facebook Live publicly and tagging where they were located. I think it’s a general lack of understanding, or maybe not realizing just how much data they’re leaking. And I think plenty of folks also didn’t realize that maybe they wouldn’t want to geolocation tag during that event.

Under more normal circumstances, is there a problem with geolocation tagging?

Many people think, “Well, I’m not doing anything wrong, so why would I care if I post a photo?” But let’s just take a really innocuous example, such as going on vacation. [If] you geolocation tag the hotel, what could I do as an attacker? Well, the obvious thing is: you’re not home. But I feel like most people get that. What they don’t probably get is that I can social engineer: I can gain access to information about you through human systems at that hotel. I could call up your hotel pretending to be you and gain information about your travel plans. I could steal your hotel points. I could change your room. I could do all this nefarious stuff. We can do so much and really manipulate because our service providers don’t authenticate the way that I would recommend that they authenticate over the phone. Can you imagine if you could log into your Gmail account, your calendar or something like that by just using your current address, your last name and your phone number? But that’s how it works with a lot of these different companies. They don’t use the same authentication protocols that they would use, say, on a Web site.

How can people protect themselves?

I don’t think it would be fair to tell people that they couldn’t post. I post on Twitter multiple times a day! Instead of saying, “You can’t do this,” I would recommend being what I call “politely paranoid” about what we post online. For instance, we can post about the vacation, but we don’t want location- or service-provider-identifying markers within the post. So how about you post a picture of the sunset and the margarita but don’t geolocation tag the hotel? These very small changes can help folks protect their privacy and safety in the long run while still getting everything that they want out of social media. If you really want a geolocation tag, you can save the city that you’re in rather than the hotel: [then] I can’t call up the city and try and get access to your hotel points or change your plans.

Should social media sites just prevent geolocation tagging? What responsibilities do platforms have to protect their users?

I think it’s really important that all platforms, including social media platforms, follow best practices regarding security and privacy to keep their users safe. It’s also a best practice to scrub metadata for your users before they post their photos or videos so they don’t unknowingly compromise themselves. All of that is the platform’s responsibility; we have to hold them to that [and] make sure that they do those things. After that, I would say individuals get to choose how much risk they would like to take. I work hard to ensure nonsecurity folks understand risks: things such as geolocation tagging, [mentioning] service providers [and] taking pictures of their license, credit cards, gift cards, passports, airplane tickets—now we’re seeing COVID-19 vaccination cards with sensitive data on them. I don’t think it’s the social media company’s responsibility, for instance, to dictate what somebody can or cannot post when it comes to their travel photos. I think that’s up to the user to decide how they would like to use that platform. And I think it’s up to us as [information security] professionals to clearly communicate what those risks are so people can make an informed decision.

