

Twitter's slap-down of Liberal video reveals growing role of social media giants in election



They were two words, launched into the middle of a Canadian election, that exploded online.

A week into the election campaign, Deputy Prime Minister Chrystia Freeland tweeted a video of Conservative Leader Erin O’Toole responding to a question about his views on private, for-profit health care in Canada.

Twitter suddenly slapped a tag on the posts, first in French and then in English, saying they were “manipulated media,” apparently because part of O’Toole’s answer upholding the principle of universal access had been edited out.


An online furor erupted, spawning criticism and conspiracy theories. The commotion eventually died down, but not before the English video was viewed nearly 232,000 times – far more than it likely would have been if Twitter had not tagged it.

“If Freeland had posted this doctored video and sent it out into the Twittersphere, a small number of people would have seen it and the conversation would have moved on,” said Aengus Bridgman, director of the Canadian Election Misinformation Project, which is monitoring what is happening online during the election.

“The fact that Twitter flagged it as manipulated media meant that, suddenly, the issue and the tweet got an enormous amount of attention and sort of has driven the news cycle.”

The incident shines a light on the role of American social media giants in Canada’s election – a role that threatens to be far more active than in past campaigns.

Aengus Bridgman, director of the Canadian Election Misinformation Project, says social media companies are trying to navigate a difficult path between controlling misinformation and allowing free speech. (Louis-Marie Philidor/CBC)

Companies such as Facebook and Twitter have come under fire in recent years for not doing enough to stop their platforms from being used to spread misinformation or to manipulate elections and public opinion.

Concern about the role social media companies could play in political campaigns came to the fore in 2018, when it was revealed that British consulting firm Cambridge Analytica used the data of millions of Facebook users to help former U.S. president Donald Trump’s successful 2016 election campaign.

Now, faced with the prospect of governments in various countries moving to regulate what happens on their platforms, some of the larger players have become more proactive, removing, labelling or limiting the visibility of some posts in the name of fighting misinformation or election tampering.

While some social media companies are taking steps to combat misinformation, on other platforms, such as Telegram, it is spreading rapidly. Telegram, owned by Russian billionaire Pavel Durov, has also been used in recent days by those opposed to COVID-19 vaccines, lockdowns and mask mandates to organize loud, angry protests at Prime Minister Justin Trudeau’s campaign stops.

However, such interventions also raise the question of what role corporations based in other countries should play in the middle of a Canadian election when it comes to limiting free speech by removing posts or reducing the number of people who see them.

As part of its updated election integrity policy, Facebook is taking several steps, including beefing up its fact-checking, applying warning labels to posts with false information and blocking fake accounts. Its monitors are also on the lookout for attempts by foreign state actors to influence the course of the election campaign.

Facebook will also be continuing a pilot project it introduced in Canada in February to reduce the amount of political content in the feeds of Canadian users, although it won’t reduce the number of paid political ads that they see.

Twitter began acting on posts by politicians even before the election call. In July, it suspended MP Derek Sloan from its platform for 12 hours after he posted a link to a Reuters article about the U.K. deciding against a mass vaccination program for teenagers and urged Canada to do the same. Twitter has also slapped labels on tweets by Ontario MPP Randy Hillier, who has opposed COVID vaccines and lockdowns, and on a manipulated video of NDP Leader Jagmeet Singh posted during the election by a regular Twitter user. It has since been taken down by the user.

Bridgman said Twitter began increasing its enforcement actions several months ago.


“This is part of an initiative of Twitter that started last year during COVID-19, when they really ramped up their labelling of media content on the platform,” Bridgman explained.

“So they did it initially because there was so much misinformation about COVID-19 circulating, and they were getting a lot of flak for that. So they put this in place. Then it became applied to political content, sort of famously through the American election with Donald Trump in particular. And now it’s being applied to the Canadian election.”

Bridgman said Twitter has a small army of algorithmically assisted human fact-checkers who manually label problematic tweets.

Bridgman said social media companies find themselves trying to steer a difficult course.

“It’s hard for social media companies to win the PR role here,” Bridgman said.  “They’re in a tight place, because they want to clean up misinformation on their platforms, but they also don’t want to be playing kingmaker. That’s not in their interest and it’s not a good look.”

University of Ottawa professor Michael Geist said the incident highlights the challenges that come with trying to moderate content online.

“The government wants the platforms to be more aggressive in moderating content, including creating liability and incentives for failure to take down content within 24 hours. But this case highlights that many of these cases are very difficult.”


New Democratic Party MP Charlie Angus, who has been part of Canadian and international committees that have studied the role of social media companies in society, said the fact that someone with Freeland’s status was given an edited video to tweet out is “very concerning.”

“I think what Twitter did is a real shot across the bow that is going to shake up how the campaigns are being run,” Angus said, adding the Liberal government is supposed to be fighting disinformation.

“The fact that Twitter was willing to call out someone of the stature of Chrystia Freeland for posting disinformation, I think that’s a very healthy sign.”

It’s also coming at a good time, he said.

“Things are going to heat up a lot, so Twitter stepping in at this point in the campaign, I think, is going to make everyone think they’re going to have to be a little bit more careful.”

Liberal MP Nathaniel Erskine-Smith, who was also a key figure in Canadian and international committee hearings into social media companies, said Twitter should have been more transparent about how it makes decisions — not just pointing to a multi-pronged policy the way it did with the video tweeted by Freeland.


“I assume that it’s the isolated editing that they are drawing from there. But again, yes, it removed the reference ‘universal access.’ But given the nature of the comments in relation to the Saskatchewan MRI policy, I don’t think it inaccurately characterized the concern around private pay in a universal system.”

Erskine-Smith also questioned the way Twitter applied its policy when it labelled the video tweeted by Freeland.


Neither Conservative MP Bob Zimmer, who served with Angus and Erskine-Smith on the committees that studied the impact of social media companies, nor the Conservative Party responded to several requests from CBC News for an interview.

When it comes to how the actions of social media companies risk affecting the election, opinions vary.

“I think there is no question that social media companies impact the election now, with their policies around moderation and misinformation,” said Bridgman. “I think that that ship has sailed, and it’s not a question of whether they will or not. It’s a question of how much and how they will do it.”

Erskine-Smith, however, is convinced that traditional campaign elements like door-knocking, policies and the debates will have more impact than what happens on social media.

“I don’t think it will have a great impact in the end, in so far as I don’t think the decisions that the private platforms make will have a great impact in the end.”

Daniel Bernhard, executive director of Friends of Canadian Broadcasting, was sharply critical of Facebook’s track record when it comes to cracking down on misuse of its platform.

“Canada is foolish to depend for the health of our democracy on the good will and the competency of a company like Facebook that has proven over and over and over again that it is both incapable and unwilling to act in an ethical and democratic way.”

Bernhard also wants social media companies to have to divulge the algorithms they are using to govern how their platforms operate.

“These algorithms make hugely consequential editorial choices that have major consequences for politics and democracy. And so their operation but also their transparency, should be a matter of regulation — not of good will and voluntary compliance.”

Erskine-Smith would like to see new rules requiring more transparency from social media companies, pointing out that while broadcasters answer to the Canadian Broadcast Standards Council, there is no equivalent watchdog for social media companies.

“When we see the power and influence that private platforms do wield in our public discourse, bringing a level of transparency to the way decisions are made by those platforms is incredibly important … not only in relation to specific, discrete policy decisions, like Twitter’s decision to apply its own standards, but how the algorithms themselves are promoting or downgrading certain content,” he said.

“As algorithms replace editors, and increasingly so, we do need greater algorithmic transparency.”



Who is Lachlan Murdoch, heir apparent of Rupert Murdoch’s media empire?



For Lachlan Murdoch, this moment has been a long time coming. Assuming, of course, that his moment has actually arrived.

On Thursday, his father Rupert Murdoch announced that in November he’ll step down as the head of his two media companies: News Corp. and Fox Corp. Lachlan will become the chair of News Corp. while remaining chief executive and chair at Fox Corp., the parent of Fox News Channel.

The changes make Rupert’s eldest son the undisputed leader of the media empire his father built over decades. There’s no real sign that his siblings and former rivals James and Elisabeth contested him for the top job; James in particular has distanced himself from the company and his father’s politics for several years. But Rupert, now 92, has long had a penchant for building up his oldest children only to later undermine them — and sometimes to set them against one another — often flipping the table without notice.

Given Rupert Murdoch’s advanced age, this might be his last power move. But there’s a reason the HBO drama “Succession” was often interpreted as a thinly disguised and dark satire of his family business. In Murdoch World, as in the fictional world of the Roy family, seemingly sure things can go sideways in an instant, particularly when unexpected opportunities arise.


Lachlan Murdoch has lived that firsthand. Born in London, he grew up in New York City and attended Princeton, where he focused not on business, but philosophy. His bachelor’s thesis, titled “A Study of Freedom and Morality in Kant’s Practical Philosophy,” addressed those weighty topics alongside passages of Hindu scripture. The thesis closed on a line from the Bhagavad Gita referencing “the infinite spirit” and “the pure calm of infinity,” according to a 2019 article in The Intercept.

Béatrice Longuenesse, Lachlan’s thesis advisor at Princeton, confirmed the accuracy of that report via email.

After graduation, though, Lachlan plunged headlong into his father’s business, moving to Australia to work for the Murdoch newspapers that were once the core of News Corp.’s business. Many assumed he was being groomed for higher things at News Corp., and they were not wrong. Within just a few years, Lachlan was deputy CEO of the News Corp. holding company for its Australian properties; shortly thereafter, he took an executive position at News Corp. itself and was soon running the company’s television stations and print publishing operations.

Lachlan’s ascent came to an abrupt halt in 2005, when he resigned from News Corp. with no public explanation. According to Paddy Manning, an Australian journalist who last year published a biography of Lachlan Murdoch, the core problem involved two relatively minor issues on which Lachlan disagreed with Roger Ailes, who then ran Fox News.

“The real point was that Lachlan felt Rupert had backed his executives over his son,” Manning said in an interview. “So Lachlan felt, ‘If I’m not going to be supported, then what’s the point?’” Manning did not have direct access to Lachlan for his book “The Successor,” but said he spoke in depth with the people closest to his subject.

Lachlan returned to Australia, where he has often described feeling most at home, and founded an investment group that purchased a string of local radio stations among other properties.

While he was away, News Corp. entered choppy waters. The U.K. phone-hacking scandal, in which tabloid journalists at the News of the World and other Murdoch-owned publications had found a way to listen to voicemails of the British royal family, journalistic competitors and even a missing schoolgirl, had seriously damaged the company. The fracas led to resignations of several News Corp. officials, criminal charges against some, and the closure of News of the World as its finances went south.

Manning said that the damage the scandal inflicted on News Corp. — and on both Lachlan Murdoch’s father and his brother James, chief executive of News’ British newspaper group at the time — helped pull Lachlan back to the company.

“He was watching the family tear itself apart over the phone-hacking scandal,” Manning said. Lachlan was “instrumental in trying to circle the wagons and turn the guns outwards, and stop Rupert from sacking James.”

While it took more convincing, Lachlan eventually returned to the company in 2014 as co-chairman of News Corp. alongside James.

Not long afterward, Ailes was forced out of his job at Fox News following numerous credible allegations of sexual harassment.

Lachlan Murdoch has drawn criticism from media watchdogs for what many called Fox News’ increasingly conspiratorial and misinformation-promoting broadcasts. The network hit a nadir following the 2020 election when voting machine company Dominion Voting Systems sued Fox News for $1.6 billion, alleging that Fox knowingly promoted false conspiracy theories about the security of its voting machines.

Fox settled that suit for $787.5 million in March of this year. A similar lawsuit filed by Smartmatic, another voting-machine maker, may go to trial in 2025, Fox has suggested.

In certain respects, though, Lachlan Murdoch’s behavior suggests some ambivalence about his role at News Corp. In 2021 he moved back to Sydney and has been mixing commuting and remote work from Australia ever since. “I think there’s a legitimate question about whether you can continue to do that and for how long” while running companies based in the U.S., Manning said.




Ukraine war: US to give Kyiv long-range ATACMS missiles



US President Joe Biden plans to give Ukraine advanced long-range missiles to help Kyiv with its ongoing counter-offensive, US media report.

They quote US officials familiar with the issue as saying Ukraine will get some ATACMS missiles with a range of up to 190 miles (300km).

This would enable Kyiv to hit Russian targets deep behind the front line.

At least two Ukrainian missiles hit the headquarters of Russia’s Black Sea fleet in annexed Crimea on Friday.


A Ukrainian military source told the BBC that the attack in the port of Sevastopol used Storm Shadow missiles, which are supplied by Britain and France.

Such missiles have a range of just over 150 miles.

An ATACMS missile being fired (file photo: Reuters). Kyiv has for months been pushing for ATACMS to boost its hard-going counter-offensive.

NBC News and the Wall Street Journal quote unnamed US officials as saying Mr Biden told his Ukrainian counterpart Volodymyr Zelensky that Kyiv would get “a small number” of ATACMS (Army Tactical Missile System) missiles. The two leaders met at the White House on Thursday.

The WSJ adds that the weapons will be sent in the coming weeks.

Meanwhile, the Washington Post cited several people familiar with the discussions as saying Ukraine would get ATACMS armed with cluster bomblets rather than single warheads.

Neither the US nor Ukraine has officially confirmed the reports.

After the Biden-Zelensky talks Washington announced a new tranche of $325m (£265m) in military aid – including artillery and ammunition – for Ukraine. America’s Abrams tanks will be delivered to Kyiv next week.

However, both presidents have been evasive on the ATACMS issue.

“I believe that most of what we were discussing with President Biden yesterday… we will be able to reach an agreement,” Mr Zelensky said on Friday during a visit to Canada.

“Yes, [this is] a matter of time. Not everything depends on Ukraine,” he added.

Kyiv has for months been pushing for ATACMS to boost its tough and bloody counter-offensive in the south.

It says key Russian supply lines, command positions and other logistical hubs deep behind the front line would then be within striking distance, forcing Moscow to move them further away and thus making it harder to resupply troops and weaponry.

Russian positions in the occupied Ukrainian regions in the south – including Crimea – would be particularly vulnerable, Ukraine says.

President Vladimir Putin launched a full-scale Russian invasion of Ukraine in February 2022, and the Biden administration was initially hesitant to provide Ukraine with modern weaponry.

But its stance has since shifted dramatically, with Kyiv getting high-precision Himars long-range rocket systems and Patriot air defence missiles.

President Biden has been hesitant on ATACMS amid fears that such missiles could bring a direct clash with nuclear-armed Russia closer.




The Supreme Court showdown over social media “censorship” and free speech online



About a year ago, an especially right-wing panel of the far-right United States Court of Appeals for the Fifth Circuit held that Texas’s state government may effectively seize control of content moderation on social media websites such as Twitter, YouTube, and Facebook.

The Fifth Circuit’s opinion in NetChoice v. Paxton upheld an unconstitutional law that requires social media companies to publish content produced by their users that they do not wish to publish, but that the government of Texas insists that they must publish. That potentially includes content by Nazis, Ku Klux Klansmen, and other individuals calling for the outright extermination of minority groups.

Meanwhile, earlier this month the same Fifth Circuit handed down a decision that effectively prohibits the Biden administration from asking social media companies to pull down or otherwise moderate content. According to the Justice Department, the federal government often asks these platforms to remove content that seeks to recruit terrorists, that was produced by America’s foreign adversaries, or that spreads disinformation that could harm public health.

Again, the Fifth Circuit’s more recent decision, which is known as Murthy v. Missouri, would devastate a Democratic administration’s ability to ask media companies to voluntarily remove content. Meanwhile, the NetChoice decision holds that Texas’s Republican government may compel those same companies to adopt a government-mandated editorial policy.


These two decisions obviously cannot be reconciled, unless you believe that the First Amendment applies differently to Democrats and Republicans. And the Supreme Court has already signaled, albeit in a 5-4 decision, that a majority of the justices believe that the Fifth Circuit has gone off the rails. Soon after the Fifth Circuit first signaled that it would uphold Texas’s law, the Supreme Court stepped in with a brief order temporarily putting the law on ice.

Yet, while the Fifth Circuit’s approach to social media has been partisan and hackish, these cases raise genuinely difficult policy questions. Social media companies control powerful platforms that potentially allow virtually anyone to communicate their views to millions of people at a time. These same companies also have the power to exclude anyone they want from these platforms either for good reasons (because someone is a recruiter for the terrorist group ISIS, for example), or for arbitrary or malicious reasons (such as if the company’s CEO disagrees with an individual’s ordinary political views).

Worse, once a social media platform develops a broad user base, it is often difficult for other companies to build competing social networks. After Twitter, now known as X, implemented a number of unpopular new policies that favored trolls and hate speech, for example, at least eight other platforms tried to muscle into this space with Twitter-like apps of their own. Thus far, however, these new platforms have struggled to consolidate the kind of user base that can rival Twitter’s. And the one that most likely presents the greatest threat to Twitter, Threads, is owned by social media giant Meta.

It is entirely reasonable, in other words, for consumers to be uncomfortable with so few corporations wielding so much authority over public discourse. What is less clear is what role the government legitimately can play in dealing with this concentration of power.

What the First Amendment actually says about the government’s relationship with media companies

Before we dive into the details of the NetChoice and Murthy decisions, it’s helpful to understand a few basics about First Amendment doctrine, and just how much pressure the government may place on a private media company before that pressure crosses the line into illegal coercion.

First, the First Amendment protects against both government actions that censor speech and government actions that attempt to compel someone to speak against their will. As the Supreme Court explained in Rumsfeld v. Forum for Academic and Institutional Rights (2006), “freedom of speech prohibits the government from telling people what they must say.”

Second, the First Amendment also protects speech by corporations. This principle became controversial after the Supreme Court’s decision in Citizens United v. FEC (2010) held that corporations may spend unlimited sums of money to influence elections, but it also long predates Citizens United. Indeed, a world without First Amendment protections for corporations is incompatible with freedom of the press. Vox Media, the New York Times, the Washington Post, and numerous other media companies are all corporations. That doesn’t mean that the government can tell them what to print.

Third, the First Amendment specifically protects the right of traditional media companies to decide what content they carry and what content they reject. Thus, in Miami Herald v. Tornillo (1974), the Supreme Court held that a news outlet’s “choice of material to go into a newspaper” is subject only to the paper’s “editorial control and judgment,” and that “it has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press.”

Fourth, this regime applies equally to internet-based media. The Supreme Court’s decision in Reno v. ACLU (1997) acknowledged that the internet is distinct from other mediums because it “can hardly be considered a ‘scarce’ expressive commodity” — that is, unlike a newspaper, there is no physical limit on how much content can be published on a website. But Reno concluded that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.”

Taken together, these four principles establish that neither Texas nor any other governmental body may require a media company, social or otherwise, to publish content that the company does not want to print. If Twitter announces tomorrow that it will delete all tweets written by someone named “Jake,” for example, the government may not pass a law requiring Twitter to publish tweets by Jake Tapper. Similarly, if a social media company announces that it will only publish content by Democrats, and not by Republicans, it may do so without government interference.

That said, while the government may neither censor a media platform’s speech nor demand that the platform publish speakers it does not want to publish, government officials are allowed to express the government’s view on any topic. Indeed, as the Supreme Court said in Pleasant Grove v. Summum (2009), “it is not easy to imagine how government could function if it lacked this freedom.”

The government’s freedom to express its own views extends both to statements made to the general public and to statements made in private communications with business leaders. Federal officials may, for example, tell YouTube that the United States government believes that the company should pull down every ISIS recruitment video on the site. And those officials may also ask a social media company to pull down other content that the government deems to be harmful, dangerous, or even merely annoying.

Of course, the general principle that the government can say what it wants can sometimes be in tension with the rule against censorship. While the First Amendment allows, say, Florida Gov. Ron DeSantis (R) to make a hypothetical statement saying that he opposes all books that present transgender people in a positive light, DeSantis would cross an impermissible line if he sends a police officer to a bookstore to make a thinly veiled threat — such as if the cop told the storeowner that “bad things happen to people who sell these kinds of books.”

But a government statement to a private business must be pretty egregious before it crosses the line into impermissible censorship. As the Court held in Blum v. Yaretsky (1982), the government may be held responsible for a private media company’s decision to alter its speech only when the government “has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.”

So how should these First Amendment principles apply to government-mandated content moderation?

In fairness, none of the Supreme Court decisions discussed in the previous section involve social media companies. So it’s at least possible that these longstanding First Amendment principles need to be tweaked to deal with a world where, say, a single billionaire can buy up a single website and fundamentally alter public discourse among important political and media figures.

But there are two powerful reasons to tread carefully before remaking the First Amendment to deal with The Problem of Elon Musk.

One is that no matter how powerful Musk or Mark Zuckerberg or any other media executive may become, they will always be categorically different from the government. If Facebook doesn’t like what you have to say, it can kick you off Facebook. But if the government doesn’t like what you say (and if there are no constitutional safeguards against government overreach), it can send armed police officers to haul you off to prison forever.

The other is that the actual law that Texas passed to deal with the Texas GOP’s concerns about social media companies is so poorly designed that it suggests that a world where the government can regulate social media speech would be much worse than one where important content moderation decisions are made by Musk.

That law, which Texas Gov. Greg Abbott (R) claims was enacted to stop a “dangerous movement by social media companies to silence conservative viewpoints and ideas,” prohibits the major social media companies from moderating content based on “the viewpoint of the user or another person” or on “the viewpoint represented in the user’s expression or another person’s expression.”

Such a sweeping ban on viewpoint discrimination is incompatible with any meaningful moderation of abusive content. Suppose, for example, that a literal Nazi posts videos on YouTube calling for the systematic extermination of all Jewish people. Texas’s law prohibits YouTube from banning this user or from pulling down his Nazi videos, unless it also takes the same action against users who express the opposite viewpoint — that is, the view that Jews should not be exterminated.

In any event, the Supreme Court already blocked the Texas law once, so it’s unlikely that it will reverse course when it hears the case a second time (the Court could announce that it will rehear the NetChoice case soon after its next conference, which will take place on Tuesday).

What should happen when the government merely asks a social media company to remove content?

But what about a case like Murthy? That case is currently before the Supreme Court on its shadow docket — a mix of emergency motions and other matters that the Court sometimes decides on an expedited basis — so the Court could decide any day now whether to leave the Fifth Circuit’s decision censoring the Biden administration in effect.

The Fifth Circuit’s Murthy decision spends about 14 pages describing cases where various federal officials, including some in the Biden White House, asked social media companies to remove content — often because federal officials thought the content was harmful to public health because it contained misinformation about Covid-19.

In many cases, these conversations happened because those companies proactively reached out to the government to solicit its views. As the Fifth Circuit admits, for example, platforms often “sought answers” from the Centers for Disease Control and Prevention about “whether certain controversial claims were ‘true or false’” in order to inform their own independent decisions about whether or not to remove those claims.

That said, the Fifth Circuit also lists some examples where government officials appear to have initiated a particular conversation. In one instance, for example, a White House official allegedly told an unidentified platform that it “remain[ed] concerned” that some of the content on the platform encouraged vaccine hesitancy. In another instance, Surgeon General Vivek Murthy allegedly “asked the platforms to take part in an ‘all-of-society’ approach to COVID by implementing stronger misinformation ‘monitoring’ program.”

It’s difficult to assess the wisdom of these communications between the government and the platforms because the Fifth Circuit offers few details about what content was being discussed or why the government thought this content was sufficiently harmful that the platforms should intervene. Significantly, however, the Fifth Circuit does not identify a single example — not one — of a government official taking coercive action against a platform or threatening such action.

The court does attempt to spin a couple of examples where the White House endorsed policy changes as such a threat. In a 2022 news conference, for example, the White House press secretary said that President Biden supports reforms that would impact the social media industry — including “reforms to section 230, enacting antitrust reforms, requiring more transparency, and more.” But the president does not have the authority to enact legislative reforms without congressional approval. And the platforms themselves did not behave as if they faced any kind of threat.

Indeed, the Fifth Circuit’s own data suggests that the platforms felt perfectly free to deny the government’s requests, even when those requests came from law enforcement. The FBI often reached out to social media platforms to flag content by “Russian troll farms” and other malign foreign actors. But, as the Fifth Circuit concedes, the platforms rejected the FBI’s requests to pull down this content about half of the time.

And, regardless of how one should feel about the government communicating with media sites about whether Russian and anti-vax disinformation should remain online, the Fifth Circuit’s approach to these communications is ham-handed and unworkable.

At several points in its opinion, for example, the Fifth Circuit faults government officials who “entangled themselves in the platforms’ decision-making processes.” But the court never defines this term “entangled,” or even provides any meaningful hints about what it might mean, other than using equally vague adjectives to describe the administration’s communications with the platforms, such as “consistent and consequential.”

The Biden administration, in other words, appears to have been ordered not to have “consistent and consequential” communications with social media companies — whatever the hell that means. Normally, when courts hand down injunctions binding the government, they define the scope of that injunction clearly enough that it’s actually possible to figure out what the government is and is not allowed to do.

The common element in NetChoice and Murthy is that, in both cases, government officials (the Texas legislature in NetChoice and three Fifth Circuit judges in Murthy) were concerned about certain views being suppressed on social media. And, in both cases, they came up with a solution that is so poorly thought out that it is worse than whatever perceived problem they were trying to solve.

Thanks to the Fifth Circuit, for example, the FBI has no idea what it is allowed to do if it discovers that Vladimir Putin is flooding Facebook, YouTube, and Twitter with content that is actively trying to incite an insurrection within the United States. And, thanks to the Fifth Circuit, there’s now one First Amendment that Democrats must comply with, and a different, weaker First Amendment that applies to Republican officials.

We can only hope that the Supreme Court decides to step back and hit pause on this debate, at least until someone can come up with a sensible and workable framework that can address whatever problems the Texas legislature and the Fifth Circuit thought they were solving.

