The United States is a political ground of polarizing chaos that often feels like watching a circus performance. And certainly not an elegant Cirque du Soleil show, but an exhausting display of ideological gymnastics that is insufferable to watch.
After five minutes of the candidates shouting over each other in the first presidential debate, one might deduce that an official from the Serie B football league would be more effective than the moderator (no disrespect to Italian footballers).
It’s clear that the imaginary lines separating politics and sports have all but disintegrated. Donald Trump has invoked the importance of sport during his time in office, rebuking athletes who have kneeled during the anthem to support campaigns against anti-Blackness and police brutality.
Athletes and electoral mobilization have always been connected in the American political scene; even the women’s suffrage movement in the U.S. was shaped by women in sport.
Historically, athletes in the margins have been outspoken. Jackie Robinson and Muhammad Ali waded into the complex world of political commentary. Following their examples, more and more athletes have spoken up and spoken out.
Athlete activism in the political sphere has taken a central role in this U.S. election, and women are starting at centre court.
WNBA players have been incredibly vocal about league-wide initiatives on racial justice, LGBTIQ rights, and pay equity.
Players have been actively campaigning against Atlanta Dream co-owner Kelly Loeffler, a Republican Senator from Georgia and an ardent supporter of Trump, in her re-election race. In a declaration of political support, many WNBA players mobilized against Loeffler and wore T-shirts supporting her opponent, Rev. Raphael Warnock.
Dr. Amira Rose Davis, assistant professor of History, African American Studies, Women, Gender and Sexuality Studies at Penn State University and co-host of the Burn It All Down podcast, has called this “unprecedented.”
In an interview with Sports Illustrated, Dr. Davis remarked: “Certainly you had times where individual athletes or what seems like a considerable amount of athletes from a certain team or a certain league being in political alliance, but I think what we’re seeing with the WNBA and Kelly Loeffler is very different because it has been very coordinated and it’s been strategic.”
The WNBA also released a video encouraging eligible voters to get educated and get involved at every level of elections: local, state and federal. WNBA.com also hosts election information and civic engagement resources.
There are athletes who have gone on record to state they would not engage in the political process. In 2016, tennis legend Serena Williams declared she would not be voting for Trump or “anyone else,” including his then-opponent Hillary Clinton. Although Williams has advocated for Black women’s maternal health, racial justice, and pay equity, she admits she does not vote. Williams cited her religion as the main reason – she is a Jehovah’s Witness.
But 2020 US Open winner and Nike athlete Naomi Osaka was recently featured in a video alongside LeBron James available on Nike’s website.
In September, while Osaka was on her way to claiming her second US Open title, she wore seven different masks recognizing the victims of violent racism. The day that she won the tournament, she wore a face mask with the name of Tamir Rice on it. Rice was 12 years old when he was shot and killed by police in 2014. Her views on politics and policy are not divorced from her competition.
There are many examples of women in sports being active in areas of politics – particularly now. The rise of female athletes to the political forefront has not only been in endorsing candidates, but also supporting voter registration and political participation.
U.S. soccer star Megan Rapinoe has been actively encouraging Americans to get involved in the political discourse. Over the summer, Rapinoe created an HBO special about politics called ‘Seeing America with Megan Rapinoe.’
“I’m trying to break it down for people and make it a little more relatable and then get people energized in the civic process, and getting involved in just being more active in their communities, also for themselves,” she told comedian Jimmy Fallon.
Making politics “cool” is definitely one way to encourage youth and first-time voters to get out and cast their ballots. Preliminary voter turnout data shows that first-time voting is at a high, despite allegations of intentional barriers created to render the process needlessly complicated.
Twenty-year-old Olympic gold-medal gymnast Laurie Hernandez voted as soon as she was eligible. The 19th Amendment, passed 100 years ago, gave women the right to vote. But Black women and other minority women (such as those from Latinx communities) could not exercise that right freely until the Voting Rights Act dismantled the Jim Crow laws that continued to discriminate against racialized and ethnic Americans.
Hernandez is aware of the history and feels compelled to act.
“For women, and especially women of colour, we didn’t have the right [to vote] in the first place,” she said. “We had to fight for it.”
American tennis legend Billie Jean King has been tweeting about the importance of voting. A longtime advocate of political participation, King has been so explicit as to remind voters to remember the names of Black victims George Floyd and Breonna Taylor as they cast their ballots.
There is a plethora of ways to get involved in the political process, and in civic and community engagement. Los Angeles Sparks player Chiney Ogwumike publicly announced last week that she will be working the polls in her hometown of Houston on election day.
Female athletes sharing information on social media, and encouraging education and mobilization, is a testament to their influence as leaders on and off the playing field. As athletes continue their roles as ambassadors of sport, and as full participants in the citizenry, the ebbs and flows of U.S. politics will leave fewer and fewer spaces in which female athletes do not power forward with their passion and politicking.
Shireen Ahmed is a writer, TEDx Speaker, and award-winning sports activist who focuses on Muslim women, and the intersections of racism and misogyny in sports. She is co-creator and co-host of the “Burn It All Down” feminist sports podcast. She lives in the Greater Toronto Area with her children and her cat.
The Politics Of Student Debt Cancellation – Forbes
President-elect Joe Biden has promised to provide student debt cancellation of some kind to student borrowers. While he and Vice President-elect Kamala Harris both pitched that plan during the campaign, the policy has become central to many political debates in recent weeks.
But there is division around the issue, both within the Democratic Party and across the aisle. Progressives are pushing Biden to cancel student debt, while moderate Democrats are less supportive and Republican elected officials are fully opposed.
Part of the case some are making against student debt cancellation is around the fairness of the policy. One columnist recently went viral on Twitter when he said the policy would anger those who didn’t go to college or who already paid off their debts.
While the latter may be less compelling, the former argument is one that could have real political implications. About 45.1 million Americans have student debt, but that’s less than 15% of the population of the United States. There are already political divides across education where those without a college degree are more likely to vote for Republican candidates.
New data shows that these divides are becoming even more stark. Many worry that enacting a cancellation would ignite a backlash from those who never went to college, particularly given the income divides across levels of educational attainment. Of course, in a large legislative package, many policies could be enacted at the same time, so the backlash might be easier to avoid.
But because Democrats did not win as many Senate races as expected, Biden’s legislative proposals will be more difficult to pass. Even if Democrats win both Georgia runoff elections, that would yield a 50–50 split with Vice President-elect Kamala Harris being the tie breaker. And that best-case scenario assumes all Democrats support each proposal and support eliminating the filibuster. That has put a lot of focus on what President-elect Biden can do without Congress.
Part of the reason that student debt cancellation has received so much focus is that some believe student debt cancellation can be done via executive authority rather than only legislatively. Senator Elizabeth Warren made the first high-profile push for this idea into the political discourse during her presidential campaign. In September, Warren and Senate Minority Leader Chuck Schumer introduced a resolution calling on the next president to use this authority to cancel up to $50,000 in student debt per borrower.
This issue is a high-profile one, and many on the left will see it as a significant win if Biden is able to forgive some debt. A number of Democrats are pushing the president-elect to use this executive authority to provide some form of relief, especially if Senate Republicans don’t agree to the larger economic relief package Biden has proposed. Biden will have to weigh both sides of the political spectrum as he decides what to do.
Tackling Online Abuse and Disinformation Targeting Women in Politics – Carnegie Endowment for International Peace
In 2017, soon after then Ukrainian member of parliament Svitlana Zalishchuk gave a speech to the United Nations on the impact of the Russian-Ukrainian conflict on women, a fake tweet began to circulate on social media claiming that she had promised to run naked through the streets of Kiev if Russia-backed separatists won a critical battle. Zalishchuk said, “The story kept circulating on the Internet for a year,” casting a shadow over her political accomplishments.
Zalishchuk is not alone in her experience. Around the world, women in politics receive an overwhelming amount of online abuse, harassment, and gendered defamation via social media platforms. For example, a recent analysis of the 2020 U.S. congressional races found that female candidates were significantly more likely to receive online abuse than their male counterparts. On Facebook, female Democrats running for office received ten times more abusive comments than male Democratic candidates. Similar trends have been documented in India, the UK, Ukraine, and Zimbabwe.
Social media companies have come under increasing pressure to take a tougher stance against all forms of hate speech and harassment on their platforms, including against women, racial minorities, and other marginalized groups. Yet their patchwork approach to date has proven insufficient. Governments and international institutions need to press for more action and develop new standards for platform transparency and accountability that can help address the widespread toxicity that is currently undermining online political debate. If effectively designed and implemented, the EU’s Digital Services Act and U.S. President-elect Joe Biden’s proposed National Task Force on Online Harassment and Abuse will represent steps in the right direction.
The Global Challenge
Online abuse against politicians is often misunderstood as inevitable: after all, most public figures occasionally find themselves on the receiving end of vitriolic attacks. Yet over the past several years, the gendered and racialized nature of the phenomenon has received increasing policy attention, as women appear to be disproportionately targeted by online abuse and disinformation attacks.
This pattern tends to be even more pronounced for female political leaders from racial, ethnic, religious, or other minority groups; for those who are highly visible in the media; and for those who speak out on feminist issues. In India, for example, an Amnesty International investigation found that one in every seven tweets that mentioned women politicians was problematic or abusive—and that both Muslim women politicians and women politicians belonging to marginalized castes received substantially more abuse than those from other social groups.
Female politicians are not only targeted disproportionately but also subjected to different forms of harassment and abuse. Attacks targeting male politicians mostly relate to their professional duties, whereas online harassment directed at female politicians is more likely to focus on their physical appearance and sexuality and include threats of sexual violence and humiliating or sexualized imagery. Women in politics are also frequent targets of gendered disinformation campaigns, defined as the spreading of deceptive or inaccurate information and images. Such campaigns often create story lines that draw on misogyny and gender stereotypes. For example, a recent analysis shows that immediately following Kamala Harris’s nomination as the 2020 U.S. vice presidential candidate, false claims about Harris were being shared at least 3,000 times per hour on Twitter, in what appeared to be a coordinated effort. Similar tactics have been used throughout Europe and in Brazil.
The disproportionate and often strategic targeting of women politicians and activists has direct implications for the democratic process: it can discourage women from running for office, push women out of politics, or lead them to disengage from online political discourse in ways that harms their political effectiveness. For those women who persevere, the abuse can cause psychological harm and waste significant energy and time, particularly if politicians struggle to verify whether or when online threats pose real-life dangers to their safety.
What’s Driving Gendered Online Abuse
Some political scientists and social psychologists point to gender role theory to explain harassment and threats targeting female politicians. In many societies, the characteristics traditionally associated with politicians—such as ambition and assertiveness—tend to be coded “male,” which means that women who display these traits may be perceived as transgressing traditional social norms. Online harassment of women seeking political power could thus be understood as a form of gender role enforcement, facilitated by anonymity.
However, online abuse and sexist narratives targeting politically active women are not just the product of everyday misogyny: they are reinforced by political actors and deployed as a political strategy. Illiberal political actors often encourage online abuse against female political leaders and activists as a deliberate tactic to silence oppositional voices and push feminist politicians out of the political arena.
Laura Boldrini, an Italian politician and former UN official who served as president of the country’s Chamber of Deputies, experienced this situation firsthand: following sexist attacks by Matteo Salvini, leader of the far-right Northern League party, and other male politicians, she was targeted by a wave of threatening and misogynistic abuse both online and offline. “Today, in my country, threats of rape are used to intimidate women politicians and push them out of the public sphere—even by public figures,” notes Boldrini. “Political leaders themselves unleash this type of reaction.”1
What Can Be Done
In recent years, women politicians and activists have launched campaigns to raise awareness of the problem and its impact on democratic processes. Last August, the U.S. Democratic Women’s Caucus sent a letter to Facebook urging the company to protect women from rampant online attacks on the platform and to revise algorithms that reward extremist content. Similar advocacy initiatives have proliferated in different parts of the world, from the global #NotTheCost campaign to Reclaim the Internet in the UK, #WebWithoutViolence in Germany, and the #BetterThanThis campaign in Kenya.
Civil society organizations that support women running for office are also spearheading new strategies to respond to gendered online abuse. Some are offering specialized training and toolkits to help women political leaders protect themselves and counter sexualized and racialized disinformation. In Canada, a social enterprise created ParityBOT, a bot that detects problematic tweets about women candidates and responds with positive messages, thus serving both as a monitoring mechanism and a counterbalancing tool.
Yet despite rising external pressure from politicians and civil society, social media companies’ responses have so far been inadequate to tackle a problem as vast and complex as gendered disinformation and online abuse—whether it targets female politicians, activists, or ordinary citizens. For example, Facebook recently created an Oversight Board tasked with improving the platform’s decisionmaking around content moderation—yet many experts are highly skeptical of the board’s ability to drive change given its limited scope and goals. Twitter reportedly increased enforcement of its hate speech and abuse policies in the second half of 2019, as well as expanded its definition of dehumanizing speech. However, its policies to date lack a clear focus on the safety of women and other marginalized groups. Broader reforms are urgently needed.
Increase Platform Transparency and Accountability
Major social media platforms should do more to ensure transparency, accountability, and gender sensitivity in their mechanisms for content moderation, complaints, and redress. They should also take steps to proactively prevent the spread of hateful speech online, including through changes in risk assessment practices and product design.
To date, most tech companies still have inadequate and unclear content moderation systems. For example, social media companies currently do not disclose their exact guidelines on what constitutes hate speech and harassment or how they implement those guidelines. To address this problem, nonprofits such as Glitch and ISD have suggested that social media platforms allow civil society organizations and independent researchers to access and analyze their data on the number and nature of complaints received, disaggregated by gender, country, and the redress actions taken. According to Amnesty International, tech companies should also be more transparent about their language detection mechanisms, the number of content moderators employed by region and language, the volume of reports handled, and how moderators are trained to recognize culturally specific and gendered forms of abuse. To this day, most tech companies focus on tackling online abuse primarily in Europe and the United States, resulting in an enforcement gap in the Global South. Greater transparency about companies’ current content moderation capacity would enable governments and civil society to better identify shortcomings and push for targeted resource investments.
The move to more automated content moderation is unlikely to solve the problem of widespread and culturally specific gendered and racialized online abuse. Until now, social media companies have used automated tools primarily for content that is easier to identify computationally. Yet these tools are blunt and often biased. So far during the coronavirus pandemic, Facebook, Twitter, and Google have all relied more heavily on automation to remove harmful content. As a result, significantly more accounts have been suspended and more content has been flagged and removed than in the months leading up to the pandemic. But some of this content was posted by human rights activists who had no mechanism for appealing those decisions, and some clearly hateful content—such as racist and anti-Semitic hate speech in France—remained online. “Machine learning will always be a limited tool, given that context plays an enormous part of how harassment and gendered disinformation work online,” notes Chloe Colliver, the head of digital policy and strategy at ISD. “We need some combination of greater human resources and expertise along with a focus on developing AI systems that are more accurate in detecting gendered disinformation.”2
The proliferation of online harassment, hate speech, and disinformation is not only driven by gaps in content moderation but also by a business model that monetizes user engagement with little regard for risk. At the moment, Twitter and other platforms rely on deep learning algorithms that prioritize disseminating content with greater engagement. Inflammatory posts often quickly generate comments and retweets, which means that newsfeed algorithms will show them to more users. Online abuse that relies on sensational language and images targeting female politicians thus tends to spread rapidly. Higher levels of engagement generate more user behavior data that brings in advertising revenue, which means social media companies currently have few financial incentives to change the status quo.
Advocates and experts have put forward different proposals to tackle this problem. For example, social media companies could proactively tweak their recommendation systems to prevent users from being nudged toward hateful content. They also could improve their mechanism for detecting and suspending algorithms that amplify gendered and racialized hate speech—a step that some organizations have suggested to help address pandemic-related mis/disinformation. As part of this process, companies could disclose and explain their content-shaping algorithms and ad-targeting systems, which currently operate almost entirely beyond public scrutiny.
In addition, they could improve their risk assessment practices prior to launching new products or tools or before expanding into a new political and cultural context. At the moment, content moderation is often siloed from product design and engineering, which means that social media companies are permanently focused on investigating and redressing complaints instead of building mechanisms that “increase friction” for users and make it harder for gendered hate speech and disinformation to spread in the first place. Moreover, decisions around risk are often taken by predominantly male, white senior staffers: this type of homogeneity frequently leads to gender and race blindness in product development and rollout. Across all of these domains, experts call for greater transparency and collaboration with outside expertise, including researchers working on humane technology and ethical design.
Step Up Government Action
Given tech companies’ limited action to date, democratic governments also have a responsibility to do more. Rather than asking social media companies to become the final arbiters of online speech, they should advance broader regulatory frameworks that require platforms to become more transparent about their moderation practices and algorithmic decisionmaking, as well as ensure compliance through independent monitoring and accountability mechanisms. Governments also have an important role to play in supporting civil society advocacy, research, and public education on gendered and racialized patterns of online abuse, including against political figures.
The first wave of legislation aimed at mitigating abuse, harassment, and hate speech on social media platforms focused primarily on criminalizing and removing different types of harmful online content. Some efforts have targeted individual perpetrators. For example, in the UK, legal guidelines issued in 2016 and in 2018 enable the Crown Prosecution Service to prosecute internet trolls who create derogatory hashtags, engage in virtual mobbing (inciting people to harass others), or circulate doctored images. In 2019, Mexico passed a new law that specifically targets gendered online abuse: it punishes, with up to nine years in prison, those who create or disseminate intimate images or videos of women or attack women on social networks. The law also includes the concept of “digital violence” in the Mexican penal code.
Such legal reforms are important steps, particularly if they are paired with targeted resources and training for law enforcement. Female politicians often report that law enforcement officials do not take their experiences with online threats and abuse seriously enough; legal reforms and prosecution guidelines can help change this pattern. However, efforts to go after individual perpetrators are insufficient to tackle the current scale of misogynistic online harassment and abuse targeting women politicians and women and girls more generally: even if applicable legal frameworks exist, thresholds for prosecution are often set very high and not all victims want to press charges. Moreover, anonymous perpetrators can be difficult to trace, and the caseload easily exceeds current policing capacity. In the UK, for example, fewer than 1 percent of cases taken up by the police unit charged with tackling online hate crimes have resulted in charges.
Other countries have passed laws that make social media companies responsible for the removal of illegal material. For example, in 2017, Germany introduced a new law that requires platforms to remove hate speech or illegal content within twenty-four hours or risk millions of dollars in fines. However, this approach has raised strong concerns among human rights activists, who argue that this measure shifts the responsibility to social media companies to determine what constitutes legal speech without providing adequate mechanisms for judicial oversight or judicial remedy. In June 2020, the French constitutional court struck down a similar law due to concerns about overreach and censorship. French feminist and antiracist organizations had previously criticized the measure, noting that it could restrict the speech of those advocating against hate and extremism online and that victims would benefit more from sustained investments in existing legal remedies.
In light of these challenges, many researchers and advocates have started calling for regulatory approaches focused on systemic harm prevention rather than case-by-case content removal. One example of this approach is the UK’s 2019 Online Harms White Paper, which “proposes establishing in law a new duty of care towards users” to deal proactively with possible risks that platform users might encounter, under the oversight of an independent regulator. The proposed regulatory framework—which is set to result in a new UK law in early 2021—would “outline the systems, procedures, technologies and investment, including in staffing, training and support of human moderators, that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users.” It would also set strict standards for transparency and require companies to ensure that their algorithms do not amplify extreme and unreliable material for the sake of user engagement. The EU’s Digital Services Act, currently in development, is another opportunity to advance a regulatory approach focused on harm prevention. The act should demand greater transparency from social media platforms about content moderation practices and algorithmic systems, as well as require better risk assessment practices. It also should incentivize companies to move away from a business model that values user engagement above everything else.
Of course, governments can take action beyond passing and enforcing platform regulations. They can promote digital citizenship education in school curricula to ensure that teenagers and young adults develop the skills to recognize and report inappropriate online conduct and to communicate respectfully online. In Europe, as part of negotiations around the Digital Services Act, activists are demanding that governments dedicate part of the Digital Services Tax to fund broader efforts to tackle online abuse, including additional research on patterns of gendered and racialized online harassment. In the United States, Biden’s proposal to set up a national task force—bringing together federal and state agencies, advocates, law enforcement, and tech companies—to tackle online harassment and abuse and understand its connection to violence against women and extremism represents a welcome and important step toward developing longer-term solutions. Equally welcome are his proposals to allocate new funding for law enforcement training on online harassment and threats and to support legislation that establishes a civil and criminal cause of action for unauthorized disclosure of intimate images.
Who Is Responsible
The problem of gendered and racialized harassment and abuse targeting women political leaders extends far beyond the online realm: traditional media outlets, political parties, and civil society all have crucial roles to play in committing to and modeling a more respectful and humane political discourse.
However, social media companies have the primary responsibility to prevent the amplification of online abuse and disinformation—a responsibility that they are currently failing to meet. As the coronavirus pandemic has further accelerated the global shift to online campaigning and mobilization, there is now an even greater need for governments to hold these companies accountable for addressing all forms of hate speech, harassment, and disinformation on their platforms. Both Biden’s proposed national task force and the EU’s Digital Services Act represent key opportunities for developing new regulatory approaches mandating greater transparency and accountability in content moderation, algorithmic decisionmaking, and risk assessment.
These reform efforts need to include a gender lens. As Boldrini emphasizes, “It is extremely important to speak out against sexism and misogyny in our societies, particularly in light of the global movement against women’s rights inspired by the far right. The time has come to start a new feminist revolution to defend the rights we already have—as well as to acquire new rights.” Ensuring that all women political leaders and activists can engage in democratic processes online without fear of harassment, threats, and abuse will be a central piece of this struggle.3
1 Authors’ interview with Laura Boldrini, written communication, November 1, 2020.
2 Authors’ interview with Chloe Colliver, video call, October 28, 2020.
3 Authors’ interview with Laura Boldrini, written communication, November 1, 2020.
Politics Podcast: Why Did Down-Ballot Democrats Have Such A Mediocre Showing? – FiveThirtyEight
Biden’s 2020 electoral map was ultimately a pretty good one for Democrats. While several states may have been closer than Democrats would have liked, Biden won back the “blue wall” states in the Upper Midwest and expanded Democrats’ map in the Sun Belt. He also won a record-breaking 80 million votes nationally. Democrats down-ballot can’t quite say the same, though. In this installment of the FiveThirtyEight Politics podcast, the crew discusses the challenges that the party faced in House, Senate and state legislature races. They also ask whether it was a good use of polling to survey preferences for the 2024 Republican primary before Trump has even left the White House.
You can listen to the episode by clicking the “play” button in the audio player above or by downloading it in iTunes, the ESPN App or your favorite podcast platform. If you are new to podcasts, learn how to listen.
The FiveThirtyEight Politics podcast is recorded Mondays and Thursdays. Help new listeners discover the show by leaving us a rating and review on iTunes. Have a comment, question or suggestion for “good polling vs. bad polling”? Get in touch by email, on Twitter or in the comments.