“I would never have called myself ‘disabled’ if it hadn’t been for Twitter, if I hadn’t realized that the umbrella of disability was much larger than my understanding of it was,” says Lauren Allen.
The Saskatoon-raised theatre artist and social media professional regularly shares her experiences of complex post-traumatic stress disorder and anxiety online. She first got immersed in disability-centred social media spaces by following writers like Alice Wong (@SFDirewolf) and Imani Barbarin (@Imani_Barbarin).
“One of the more insidious mindsets that Ableds quietly promote through their pandemic behavior is the belief that disabled people don’t have lives that we want to get back to. Outside of consuming us for their inspiration, we don’t exist.” https://t.co/jPBLHkCaeQ
But alongside the sense of identity she found among disabled people on Twitter, Allen has also found a reluctance by many to acknowledge and engage with those who have less visible disabilities.
“The care that people receive when their physical limitations are apparent to able-bodied people is different than people who have mental health limitations or who have something like chronic pain, for example,” she said.
While social media platforms like Twitter, TikTok and Facebook have become a haven for disabled people looking to connect with those who have similar experiences, they are also brimming with triggering discussions of trauma. Balancing the search for support with cultivating bits of joy in a digital place that is often unforgiving has become a critical challenge for many disabled people.
Like Allen, Brit Sippola found their way to identifying with disability largely through social media.
“When I started using the [TikTok] app, it started pushing me autistic content and that was the first time that I had ever seriously considered that I could be autistic,” said the Regina-based engineer and disability activist.
Sippola had turned to TikTok seeking comfort when their other disabilities, which include fibromyalgia, endometriosis, attention deficit hyperactivity disorder (ADHD), and borderline personality disorder, were having a severe impact on their health. These online interactions have helped Sippola learn about possible symptoms from what they call “the hive mind,” before having them affirmed by a medical professional.
“It’s really great to have a sounding board, especially with fibromyalgia, because it’s a condition that can change from day to day so insanely that it can be really weird to parse out like, ‘OK, this is a symptom; this isn’t a symptom.'”
Sippola said mental and emotional support in relation to their disabilities comes largely from digital sources because they know “very few disabled people in real life.”
Being seen and heard
Creating a space for those with chronic illnesses or invisible disabilities was part of the impetus for Brianne Benness to launch the No End in Sight podcast and accompanying Twitter hashtag, #NEISVOID. There, people with various disabilities share their experiences and look for support, while the podcast collects what she calls “oral histories of chronic, complex, and contested illnesses.”
“Welcome to the void, here’s what you need to know:” https://t.co/1wDX6bfYSz
Part of her frustration was the tendency of other social media spaces to prioritize the voices of those who are already diagnosed, or who are privileged in other ways, such as by race or class.
She also wanted to offer a safe space where people could feel heard and be offered support on their own terms.
“I don’t need you to comfort me – that’s not what I’m asking for here,” said Benness, who hails from Hamilton, Ont. “What I’m asking for is a place where I can just name my experience out loud and be seen and feel seen for it.”
For disabled people, access to the internet is often synonymous with access to friendship and peer support, mutual aid and assistance with health issues. With the COVID pandemic limiting travel – something that was already restricted for many disabled people – that dependence has become even more pronounced.
One of the challenges, then, is how disabled people can step away from social media when these sites feel like their only lifeline.
Some people move their friendships offline, if you will, choosing to engage by text message, for instance.
Others opt to engage on social media in a different way.
Sippola said that their engagement with social media isn’t always about deep conversation related to identity, but about giving permission to care for themselves after their job as an engineer leaves them spent.
“It’s not like I have the energy to be doing anything else. My old life? Yeah, I would go for a bike ride or any number of random things. But in my current life, like in the body I currently live in, I can’t be doing the fun things I want to do all the time, and sometimes I need to just rest.”
The solution isn’t necessarily so simple when being glued to social media is part of your job.
Liam O’Dell is a multiply-disabled journalist and founder of the Twitter Community “Disability Twitter” (a feature that allows users to gather together in a more intimate, direct way).
“[F]or me, social media is not only a lifeline, as a disabled person, to connect with my disabled community, but it’s my work. … And so to pull myself away from that is exceptionally difficult.”
When asked about managing time on social media, Allen offered a few tools that she suggests to her clients: “Set timers to limit your engagement on a platform. Have specific goals like, ‘I’m going to like and comment on X number of posts; I’m going to create X number of posts today.'”
For Benness, part of the solution is offering an alternative. She aims to cultivate joy in the digital sphere to give people a break from the constant pain and fear being shared online. That means regularly sharing a Twitter list of animal accounts – keeping her chosen audience in mind, who are often disabled people with sensory overwhelm.
“Most people who I am talking to on Twitter … are using Twitter specifically in that state of illness,” she said. “And so when I approach Twitter, I’m approaching it as if I’m talking to people whose alternative is lying there with their eyes closed in pain.”
Meanwhile, Sippola seeks out artists on TikTok for inspiration to get creative instead of simply “social media couch surfing.”
Finally, in a digital space where you’re bombarded with discussions of trauma, stepping away can mean finding joy elsewhere. Allen engages in fun offline activities, like jigsaw or sudoku puzzles.
“Just something to take my mind off of it, get me into a different space,” said Allen, “usually a space that isn’t inundated with other people’s thoughts, so that I can form my own opinions and become more solid in my own experience.”
The Fifth Circuit’s opinion in NetChoice v. Paxton upheld an unconstitutional law that requires social media companies to publish content produced by their users that they do not wish to publish, but that the government of Texas insists that they must publish. That potentially includes content by Nazis, Ku Klux Klansmen, and other individuals calling for the outright extermination of minority groups.
Meanwhile, earlier this month the same Fifth Circuit handed down a decision that effectively prohibits the Biden administration from asking social media companies to pull down or otherwise moderate content. According to the Justice Department, the federal government often asks these platforms to remove content that seeks to recruit terrorists, that was produced by America’s foreign adversaries, or that spreads disinformation that could harm public health.
Again, the Fifth Circuit’s more recent decision, which is known as Murthy v. Missouri, would devastate a Democratic administration’s ability to ask media companies to voluntarily remove content. Meanwhile, the NetChoice decision holds that Texas’s Republican government may compel those same companies to adopt a government-mandated editorial policy.
These two decisions obviously cannot be reconciled, unless you believe that the First Amendment applies differently to Democrats and Republicans. And the Supreme Court has already signaled, albeit in a 5-4 decision, that a majority of the justices believe that the Fifth Circuit has gone off the rails. Soon after the Fifth Circuit first signaled that it would uphold Texas’s law, the Supreme Court stepped in with a brief order temporarily putting the law on ice.
Yet, while the Fifth Circuit’s approach to social media has been partisan and hackish, these cases raise genuinely difficult policy questions. Social media companies control powerful platforms that potentially allow virtually anyone to communicate their views to millions of people at a time. These same companies also have the power to exclude anyone they want from these platforms either for good reasons (because someone is a recruiter for the terrorist group ISIS, for example), or for arbitrary or malicious reasons (such as if the company’s CEO disagrees with an individual’s ordinary political views).
Worse, once a social media platform develops a broad user base, it is often difficult for other companies to build competing social networks. After Twitter, now known as X, implemented a number of unpopular new policies that favored trolls and hate speech, for example, at least eight other platforms tried to muscle into this space with Twitter-like apps of their own. Thus far, however, these new platforms have struggled to consolidate the kind of user base that can rival Twitter’s. And the one that most likely presents the greatest threat to Twitter, Threads, is owned by social media giant Meta.
It is entirely reasonable, in other words, for consumers to be uncomfortable with so few corporations wielding so much authority over public discourse. What is less clear is what role the government legitimately can play in dealing with this concentration of power.
What the First Amendment actually says about the government’s relationship with media companies
Before we dive into the details of the NetChoice and Murthy decisions, it’s helpful to understand a few basics about First Amendment doctrine, and just how much pressure the government may place on a private media company before that pressure crosses the line into illegal coercion.
First, the First Amendment protects against both government actions that censor speech and government actions that attempt to compel someone to speak against their will. As the Supreme Court explained in Rumsfeld v. Forum for Academic and Institutional Rights (2006), “freedom of speech prohibits the government from telling people what they must say.”
Second, the First Amendment also protects speech by corporations. This principle became controversial after the Supreme Court’s decision in Citizens United v. FEC (2010) held that corporations may spend unlimited sums of money to influence elections, but it also long predates Citizens United. Indeed, a world without First Amendment protections for corporations is incompatible with freedom of the press. Vox Media, the New York Times, the Washington Post, and numerous other media companies are all corporations. That doesn’t mean that the government can tell them what to print.
Third, the First Amendment specifically protects the right of traditional media companies to decide what content they carry and what content they reject. Thus, in Miami Herald v. Tornillo (1974), the Supreme Court held that a news outlet’s “choice of material to go into a newspaper” is subject only to the paper’s “editorial control and judgment,” and that “it has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press.”
Fourth, this regime applies equally to internet-based media. The Supreme Court’s decision in Reno v. ACLU (1997) acknowledged that the internet is distinct from other mediums because it “can hardly be considered a ‘scarce’ expressive commodity” — that is, unlike a newspaper, there is no physical limit on how much content can be published on a website. But Reno concluded that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.”
Taken together, these four principles establish that neither Texas nor any other governmental body may require a media company, social or otherwise, to publish content that the company does not want to print. If Twitter announces tomorrow that it will delete all tweets written by someone named “Jake,” for example, the government may not pass a law requiring Twitter to publish tweets by Jake Tapper. Similarly, if a social media company announces that it will only publish content by Democrats, and not by Republicans, it may do so without government interference.
That said, while the government may neither censor a media platform’s speech nor demand that the platform publish speakers it does not want to publish, government officials are allowed to express the government’s view on any topic. Indeed, as the Supreme Court said in Pleasant Grove v. Summum (2009), “it is not easy to imagine how government could function if it lacked this freedom.”
The government’s freedom to express its own views extends both to statements made to the general public and to statements made in private communications with business leaders. Federal officials may, for example, tell YouTube that the United States government believes that the company should pull down every ISIS recruitment video on the site. And those officials may also ask a social media company to pull down other content that the government deems to be harmful, dangerous, or even merely annoying.
Of course, the general principle that the government can say what it wants can sometimes be in tension with the rule against censorship. While the First Amendment allows, say, Florida Gov. Ron DeSantis (R) to make a hypothetical statement saying that he opposes all books that present transgender people in a positive light, DeSantis would cross an impermissible line if he sends a police officer to a bookstore to make a thinly veiled threat — such as if the cop told the storeowner that “bad things happen to people who sell these kinds of books.”
But a government statement to a private business must be pretty egregious before it crosses the line into impermissible censorship. As the Court held in Blum v. Yaretsky (1982), the government may be held responsible for a private media company’s decision to alter its speech only when the government “has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.”
So how should these First Amendment principles apply to government-mandated content moderation?
In fairness, none of the Supreme Court decisions discussed in the previous section involve social media companies. So it’s at least possible that these longstanding First Amendment principles need to be tweaked to deal with a world where, say, a single billionaire can buy up a single website and fundamentally alter public discourse among important political and media figures.
But there are two powerful reasons to tread carefully before remaking the First Amendment to deal with The Problem of Elon Musk.
One is that no matter how powerful Musk or Mark Zuckerberg or any other media executive may become, they will always be categorically different from the government. If Facebook doesn’t like what you have to say, it can kick you off Facebook. But if the government doesn’t like what you say (and if there are no constitutional safeguards against government overreach), it can send armed police officers to haul you off to prison forever.
The other is that the actual law that Texas passed to deal with the Texas GOP’s concerns about social media companies is so poorly designed that it suggests that a world where the government can regulate social media speech would be much worse than one where important content moderation decisions are made by Musk.
That law, which Texas Gov. Greg Abbott (R) claims was enacted to stop a “dangerous movement by social media companies to silence conservative viewpoints and ideas,” prohibits the major social media companies from moderating content based on “the viewpoint of the user or another person” or on “the viewpoint represented in the user’s expression or another person’s expression.”
Such a sweeping ban on viewpoint discrimination is incompatible with any meaningful moderation of abusive content. Suppose, for example, that a literal Nazi posts videos on YouTube calling for the systematic extermination of all Jewish people. Texas’s law prohibits YouTube from banning this user or from pulling down his Nazi videos, unless it also takes the same action against users who express the opposite viewpoint — that is, the view that Jews should not be exterminated.
What should happen when the government merely asks a social media company to remove content?
But what about a case like Murthy? That case is currently before the Supreme Court on its shadow docket — a mix of emergency motions and other matters that the Court sometimes decides on an expedited basis — so the Court could decide any day now whether to leave the Fifth Circuit’s decision censoring the Biden administration in effect.
The Fifth Circuit’s Murthy decision spends about 14 pages describing cases where various federal officials, including some in the Biden White House, asked social media companies to remove content — often because federal officials thought the content was harmful to public health because it contained misinformation about Covid-19.
In many cases, these conversations happened because those companies proactively reached out to the government to solicit its views. As the Fifth Circuit admits, for example, platforms often “sought answers” from the Centers for Disease Control and Prevention about “whether certain controversial claims were ‘true or false’” in order to inform their own independent decisions about whether or not to remove those claims.
That said, the Fifth Circuit also lists some examples where government officials appear to have initiated a particular conversation. In one instance, for example, a White House official allegedly told an unidentified platform that it “remain[ed] concerned” that some of the content on the platform encouraged vaccine hesitancy. In another instance, Surgeon General Vivek Murthy allegedly “asked the platforms to take part in an ‘all-of-society’ approach to COVID” by implementing stronger misinformation “monitoring” programs.
It’s difficult to assess the wisdom of these communications between the government and the platforms because the Fifth Circuit offers few details about what content was being discussed or why the government thought this content was sufficiently harmful that the platforms should intervene. Significantly, however, the Fifth Circuit does not identify a single example — not one — of a government official taking coercive action against a platform or threatening such action.
The court does attempt to spin a couple of examples where the White House endorsed policy changes as such a threat. In a 2022 news conference, for example, the White House press secretary said that President Biden supports reforms that would impact the social media industry — including “reforms to section 230, enacting antitrust reforms, requiring more transparency, and more.” But the president does not have the authority to enact legislative reforms without congressional approval. And the platforms themselves did not behave as if they faced any kind of threat.
Indeed, the Fifth Circuit’s own data suggests that the platforms felt perfectly free to deny the government’s requests, even when those requests came from law enforcement. The FBI often reached out to social media platforms to flag content by “Russian troll farms” and other malign foreign actors. But, as the Fifth Circuit concedes, the platforms rejected the FBI’s requests to pull down this content about half of the time.
And, regardless of how one should feel about the government communicating with media sites about whether Russian and anti-vax disinformation should remain online, the Fifth Circuit’s approach to these communications is ham-handed and unworkable.
At several points in its opinion, for example, the Fifth Circuit faults government officials who “entangled themselves in the platforms’ decision-making processes.” But the court never defines this term “entangled,” or even provides any meaningful hints about what it might mean, other than using equally vague adjectives to describe the administration’s communications with the platforms, such as “consistent and consequential.”
The Biden administration, in other words, appears to have been ordered not to have “consistent and consequential” communications with social media companies — whatever the hell that means. Normally, when courts hand down injunctions binding the government, they define the scope of that injunction clearly enough that it’s actually possible to figure out what the government is and is not allowed to do.
The common element in NetChoice and Murthy is that, in both cases, government officials (the Texas legislature in NetChoice and three Fifth Circuit judges in Murthy) were concerned about certain views being suppressed on social media. And, in both cases, they came up with a solution that is so poorly thought out that it is worse than whatever perceived problem they were trying to solve.
Thanks to the Fifth Circuit, for example, the FBI has no idea what it is allowed to do if it discovers that Vladimir Putin is flooding Facebook, YouTube, and Twitter with content that is actively trying to incite an insurrection within the United States. And, thanks to the Fifth Circuit, there’s now one First Amendment that Democrats must comply with, and a different, weaker First Amendment that applies to Republican officials.
We can only hope that the Supreme Court decides to step back and hit pause on this debate, at least until someone can come up with a sensible and workable framework that can address whatever problems the Texas legislature and the Fifth Circuit thought they were solving.
At the centre of the affair is the June shooting death of Hardeep Singh Nijjar in Surrey, B.C. Canadian news accounts usually refer to him as a “Sikh community leader,” a “peaceful advocate for Sikh independence” or as president of the Guru Nanak Sikh Gurdwara in Surrey, B.C.
But almost every Indian mention of Nijjar is very explicit about noting that he is considered a terrorist by the Indian government. A recent column in the Times of India, for one, called him a “terrorist fugitive from India who emigrated to Canada.” In fact, when Trudeau made his abortive trip to India in 2018, the then-chief minister of Punjab, Amarinder Singh, made a point of handing Trudeau a list of “Khalistani operatives” in Canada. As an Indian Express story noted this week, Nijjar was one of the names.
In just the last week, Nijjar’s lawyer, Gurpatwant Singh Pannu, recorded a viral video urging Indo-Canadian Hindus to “leave Canada.” “You not only support India, but you are also supporting the suppression of speech and expression of pro-Khalistan Sikhs,” he said. When Indian media reported on the video, they were issued a warning by the Indian Ministry of Information alleging that Pannu is also considered a terrorist, and that giving him a “platform” risked disturbing “public order.”
Canada is a hotbed for Sikh nationalist extremism
It’s nothing new that the government of Indian Prime Minister Narendra Modi sees Canada as a safe harbour for extremist Khalistanis: Sikhs who seek an independent ethnostate in what is now the Indian state of Punjab. To date, the worst mass murder in Canadian history remains the 1985 Air India bombing, a terrorist attack organized by Khalistani extremists based out of British Columbia. And according to the Modi government, Canada is doing next to nothing to prevent any future violence from erupting out of its Sikh expat communities.