
Commons showdown highlights tension between politics and science – Humboldt Journal



OTTAWA — Monday’s vote on a Conservative motion to launch an in-depth review of the Liberal government’s COVID-19 response highlights a key challenge of pandemic politics: how to hold a government accountable for decisions based on science, when the science itself is changing nearly every day.

The opposition wants a committee probe into everything from why regulators are taking so long to approve rapid testing to an early decision not to close the border to international travel, and what concerns the Liberals is how that probe is being framed.

article continues below

“One of the narratives that I find most distressing coming from the opposition, is that somehow because advice changed at some point that the government was hiding information or that the government was giving misinformation,” Health Minister Patty Hajdu said late last week.

“And nothing could be further from the truth.”

It’s not the science itself that’s up for debate, said Conservative Leader Erin O’Toole.

“In a pandemic, borders, since the Middle Ages, have been part of a stop of spreading of the virus and that was a failure of elected officials to put the health of Canadians first,” O’Toole told reporters last week.

“There has been conflicting information on masks and other things. My concern is that the Trudeau government relies more on open source data from China than our own science and intelligence experts.”

The relationship between a nation’s scientists and their senior politicians is a challenging one, said Ian Culbert, executive director of the Canadian Public Health Association.

Chief Public Health Officer Dr. Theresa Tam provides what scientific evidence there is, but at the end of the day, it is the politicians who make the call, he said.

A decision on whether or not to close the borders is a good example, he said.

In the early days of the pandemic, the World Health Organization cautioned against widespread border closures. Scientific research has suggested there’s little medical benefit to them and the economic impacts can be severe and wide-ranging.

But the optics of border closures, the idea that if countries can keep a virus out they will be immune, create political pressure to act, Culbert said.

“The tension between what is in the public’s good, as opposed to all of the varying political considerations the politicians have to take into consideration — there’s always a tension there,” Culbert said.

While heated, the interplay between the Liberal government and the Opposition Conservatives is a far cry from the hyper-partisanship around pandemic response in the U.S., where even the president has circulated misinformation and challenged that country’s top scientists.

Canadian researchers studying the response of political elites here in the early days of the pandemic found no evidence of MPs casting doubt on the seriousness of the pandemic, or spreading conspiracy theories about it. In fact, there was a cross-partisan consensus around how seriously it needed to be taken.

“As far as we can tell, that story hasn’t changed,” said Eric Merkley, a University of Toronto political scientist who led the study.

Both he and Culbert said a review of the Liberals’ pandemic response is warranted, but a balancing act is required.

“Everyone has 20/20 hindsight and thinks that they can go, look back, and point to points at which bad decisions were made,” Culbert said.

“But that’s with the knowledge that we have today. We didn’t have that knowledge back in March.”

The Liberals have sometimes hit back at criticism by pointing to how the previous Conservative government handled the science and health files, including budget cuts and efforts to muzzle scientists.

But critics can’t be painted as anti-science for asking questions, Merkley said.

“There’s plenty of scope for democratic debate about proper responses to the pandemic, there’s plenty of scope for disagreement,” Merkley said.

“And just because there’s that disagreement and an Opposition party holding government accountable, that’s not necessarily a bad thing. In fact, that’s a sign of a healthy democracy.”

This report by The Canadian Press was first published Oct. 25, 2020.


Trump's fights with fellow Republicans have political consequences beyond 2020 – NBC News



WASHINGTON — President Trump is leaving office the same way he started his political career — by attacking fellow Republicans.

But the fights he’s picked with Gov. Brian Kemp and Secretary of State Brad Raffensperger in Georgia, as well as with Gov. Doug Ducey in Arizona, are different from the insults he hurled at John McCain, Mitt Romney, Jeb Bush and Rick Perry in 2015.

One, they’re taking place with the top elected leaders in onetime GOP-leaning states that just turned blue in 2020 — and with one of them (Georgia) holding twin runoffs in January that will decide the control of the Senate.

And two, Trump is upset that these Republican officials aren’t helping him overturn election results in states that he narrowly yet clearly lost.


Let us repeat that again: He. Wants. Them. To. Reverse. The. Results.

“ALL 15 counties in Arizona — counties run by both parties — certified their results,” Ducey replied to Trump via Twitter. “That’s the law. I’ve sworn an oath to uphold it, and I take my responsibility seriously.”

“Georgia law prohibits the governor from interfering in elections,” Gov. Kemp’s spokesman said in a statement, per NBC’s Vaughn Hillyard.

These intraparty fights not only complicate the Senate runoffs in Georgia, but also future statewide contests in these two states.

As NBC’s Ed Demaria reminds us, Ducey might be the best Republican on paper who could win both a GOP Senate primary and a general election in Arizona in either 2022 or 2024. But what if Trump decides to sink his chances?

And that’s the dilemma for Republicans if Trump — once out of office — becomes the face of the GOP opposition to Biden.

Does he use his powers to help the party? Or exact revenge?


NYT: Trump has raised $170 million since Election Day

“President Trump has raised about $170 million since Election Day as his campaign operation has continued to aggressively solicit donations with hyped-up appeals that have funded his fruitless attempts to overturn the election,” the New York Times reports, citing one person familiar with the matter.

The rub: The fine print on the president’s call for donations to his “Official Election Defense Fund” shows that the vast majority of donations don’t necessarily support a recount at all. Most of the money instead is headed for the president’s personal leadership PAC, which he’ll be able to use to fund his post-presidential political activity, and to the Republican National Committee.

It’s not surprising, but it’s still astonishing.

Data Download: The numbers you need to know today

6,238,766: Joe Biden’s lead in the popular vote at the time of publication.

13,624,624: The number of confirmed cases of coronavirus in the United States, per the most recent data from NBC News and health officials. (That’s 170,294 more than yesterday morning.)

268,990: The number of deaths in the United States from the virus so far. (That’s 1,394 more than yesterday morning.)

192.77 million: The number of coronavirus tests that have been administered in the United States so far, according to researchers at The COVID Tracking Project.

96,039: The number of people currently hospitalized with coronavirus.

35: The number of days until the Jan. 5 Senate runoffs.

50: The number of days until Inauguration Day.

Biden rolls out his economic team

“President-elect Joe Biden on Tuesday will formally introduce his picks for his economic policy team, including Janet Yellen for treasury secretary,” NBC’s Geoff Bennett and Rebecca Shabad write.

Biden Cabinet/Transition Watch

State: Tony Blinken (announced)

Treasury: Janet Yellen (announced)

Homeland Security: Alejandro Mayorkas (announced)

UN Ambassador: Linda Thomas-Greenfield (announced)

Director of National Intelligence: Avril Haines (announced)

Defense: Michèle Flournoy, Jeh Johnson, Illinois Sen. Tammy Duckworth

Attorney General: Doug Jones, Xavier Becerra, Sally Yates

HHS: New Mexico Gov. Michelle Lujan Grisham, Calif. Rep. Raul Ruiz, Calif. Rep. Karen Bass, Dr. Vivek Murthy

Interior: Deb Haaland

Agriculture: Heidi Heitkamp

Labor: Andy Levin, Bernie Sanders, Marty Walsh

Education: Lily Eskelsen Garcia, Randi Weingarten

OMB Director: Neera Tanden (announced)

CIA: Michael Morell

Chief of Staff: Ron Klain (announced)

National Security Adviser: Jake Sullivan (announced)

Climate Envoy: John Kerry (announced)

White House Communications Director: Kate Bedingfield (announced)

White House Press Secretary: Jen Psaki (announced)

VP Communications Director: Ashley Etienne (announced)

VP Chief Spokesperson: Symone Sanders (announced)

Georgia Runoff Watch by Ben Kamisar

Today’s Runoff Watch checks in on the enormous amount of money pouring into Georgia over the next few months.

As of now, there’s been $293 million devoted to both runoffs (this includes TV and radio advertising already spent and booked, per Advertising Analytics). The special runoff (Loeffler vs. Warnock) has $158 million devoted to it, compared to the other runoff’s (Perdue vs. Ossoff) $135 million, with Republican groups outspending Democrats in both.

If no one else commits a dime to either race, the special runoff alone (from Nov. 4 on) will have more TV and radio spending in it than every single 2020 Senate race except for three (North Carolina, Iowa and Arizona). And in the Perdue-Ossoff runoff, that $135 million spent and booked between Nov. 4 and Jan. 5 is virtually the same amount spent on the race by Election Day.

But of course, it’s almost certain that there will be a lot more money flooding the state as both parties dig deep into the piggybank to help decide control of the Senate.

The Lid: Man! I feel like a woman

Don’t miss the pod from yesterday, when we looked at the rise of women in Congress and now in Joe Biden’s Cabinet picks.

Shameless plug

All this week, NBC News will have in-depth coverage on the “Race for a Vaccine” across its programs and platforms, including NBC Nightly News with Lester Holt, TODAY, Dateline NBC, MSNBC, NBCNews.com and NBC News NOW.

ICYMI: What else is happening in the world

Joe Biden is outpacing or exceeding Barack Obama and Donald Trump’s timelines for choosing Cabinet members.

Some progressives aren’t happy about the business ties of some of Biden’s top picks for White House jobs.

Politico looks at how Janet Yellen became Biden’s pick for Treasury secretary.

The Washington Post looks at how Neera Tanden has become a lightning rod for the transition.

Biden received his first presidential daily briefing yesterday.

Controversial White House coronavirus advisor Scott Atlas has resigned.

Some House Republicans want to challenge the Electoral College count on the House floor.

Georgia voters will choose John Lewis’s short-term successor in a runoff election today.

There could be a huge economic recovery next year. But a lot depends on the winter.

What does an inauguration look like in a pandemic?


The Politics Of Student Debt Cancellation – Forbes



President-elect Joe Biden has promised to provide student debt cancellation of some kind to student borrowers. While he and Vice President-elect Kamala Harris both pitched that plan during the campaign, the policy has become central to many political debates in recent weeks.

But there is division around the issue, both within the Democratic Party and across the aisle. Progressives are pushing Biden to cancel student debt, while moderate Democrats are less supportive and Republican elected officials are fully opposed.

Part of the case some are making against student debt cancellation is around the fairness of the policy. One columnist recently went viral on Twitter when he said the policy would anger those who didn’t go to college or who already paid off their debts.

While the latter may be less compelling, the former argument is one that could have real political implications. About 45.1 million Americans have student debt, but that’s less than 15% of the population of the United States. There are already political divides across education where those without a college degree are more likely to vote for Republican candidates.

New data shows that these divides are becoming even more stark. Many worry that enacting a cancellation would ignite a backlash from those who never went to college, particularly given the income divides across levels of educational attainment. Of course, in a large legislative package, many policies could be enacted at the same time, so the backlash might be easier to avoid.

But because Democrats did not win as many Senate races as expected, Biden’s legislative proposals will be more difficult to pass. Even if Democrats win both Georgia runoff elections, that would yield a 50-50 split, with Vice President-elect Kamala Harris as the tiebreaker. And that best-case scenario assumes all Democrats support each proposal and support eliminating the filibuster. That has put a lot of focus on what President-elect Biden can do without Congress.

Part of the reason that student debt cancellation has received so much focus is that some believe it can be done via executive authority rather than only legislatively. Senator Elizabeth Warren made the first high-profile push to bring this idea into the political discourse during her presidential campaign. In September, Warren and Senate Minority Leader Chuck Schumer introduced a resolution calling on the next president to use this authority to cancel up to $50,000 in student debt per borrower.

This is a high-profile issue, and many on the left will see it as a significant win if Biden is able to forgive some debt. A number of Democrats are pushing the president-elect to use this executive authority to provide some form of relief, especially if Senate Republicans don’t agree to the larger economic relief package Biden has proposed. Biden will have to weigh both sides of the political spectrum as he decides what to do.


Related Readings:

Will The Biden Administration Give Students $10,000 In Loan Relief?

Joe Biden’s Plan For Student Debt Cancellation


Tackling Online Abuse and Disinformation Targeting Women in Politics – Carnegie Endowment for International Peace



In 2017, soon after then Ukrainian member of parliament Svitlana Zalishchuk gave a speech to the United Nations on the impact of the Russian-Ukrainian conflict on women, a fake tweet began to circulate on social media claiming that she had promised to run naked through the streets of Kiev if Russia-backed separatists won a critical battle. Zalishchuk said, “The story kept circulating on the Internet for a year,” casting a shadow over her political accomplishments.

Zalishchuk is not alone in her experience. Around the world, women in politics receive an overwhelming amount of online abuse, harassment, and gendered defamation via social media platforms. For example, a recent analysis of the 2020 U.S. congressional races found that female candidates were significantly more likely to receive online abuse than their male counterparts. On Facebook, female Democrats running for office received ten times more abusive comments than male Democratic candidates. Similar trends have been documented in India, the UK, Ukraine, and Zimbabwe.

Social media companies have come under increasing pressure to take a tougher stance against all forms of hate speech and harassment on their platforms, including against women, racial minorities, and other marginalized groups. Yet their patchwork approach to date has proven insufficient. Governments and international institutions need to press for more action and develop new standards for platform transparency and accountability that can help address the widespread toxicity that is currently undermining online political debate. If effectively designed and implemented, the EU’s Digital Services Act and U.S. President-elect Joe Biden’s proposed National Task Force on Online Harassment and Abuse will represent steps in the right direction.

The Global Challenge

Online abuse against politicians is often misunderstood as inevitable: after all, most public figures occasionally find themselves on the receiving end of vitriolic attacks. Yet over the past several years, the gendered and racialized nature of the phenomenon has received increasing policy attention, as women appear to be disproportionately targeted by online abuse and disinformation attacks.

This pattern tends to be even more pronounced for female political leaders from racial, ethnic, religious, or other minority groups; for those who are highly visible in the media; and for those who speak out on feminist issues. In India, for example, an Amnesty International investigation found that one in every seven tweets that mentioned women politicians was problematic or abusive—and that both Muslim women politicians and women politicians belonging to marginalized castes received substantially more abuse than those from other social groups.

Lucina Di Meco

Lucina Di Meco is a women’s rights and gender equality expert, advocate, and author. She currently serves as senior director of the Girls’ Education & Gender Equality program at Room to Read and as a member of the Advisory Board at Fund Her.

Female politicians are not only targeted disproportionately but also subjected to different forms of harassment and abuse. Attacks targeting male politicians mostly relate to their professional duties, whereas online harassment directed at female politicians is more likely to focus on their physical appearance and sexuality and include threats of sexual violence and humiliating or sexualized imagery. Women in politics are also frequent targets of gendered disinformation campaigns, defined as the spreading of deceptive or inaccurate information and images. Such campaigns often create story lines that draw on misogyny and gender stereotypes. For example, a recent analysis shows that immediately following Kamala Harris’s nomination as the 2020 U.S. vice presidential candidate, false claims about Harris were being shared at least 3,000 times per hour on Twitter, in what appeared to be a coordinated effort. Similar tactics have been used throughout Europe and in Brazil.

The disproportionate and often strategic targeting of women politicians and activists has direct implications for the democratic process: it can discourage women from running for office, push women out of politics, or lead them to disengage from online political discourse in ways that harm their political effectiveness. For those women who persevere, the abuse can cause psychological harm and waste significant energy and time, particularly if politicians struggle to verify whether or when online threats pose real-life dangers to their safety.

What’s Driving Gendered Online Abuse

Some political scientists and social psychologists point to gender role theory to explain harassment and threats targeting female politicians. In many societies, the characteristics traditionally associated with politicians—such as ambition and assertiveness—tend to be coded “male,” which means that women who display these traits may be perceived as transgressing traditional social norms. Online harassment of women seeking political power could thus be understood as a form of gender role enforcement, facilitated by anonymity.

However, online abuse and sexist narratives targeting politically active women are not just the product of everyday misogyny: they are reinforced by political actors and deployed as a political strategy. Illiberal political actors often encourage online abuse against female political leaders and activists as a deliberate tactic to silence oppositional voices and push feminist politicians out of the political arena.

Saskia Brechenmacher

Fellow
Democracy, Conflict, and Governance Program

Saskia Brechenmacher is a PhD candidate at the University of Cambridge and a fellow in Carnegie’s Democracy, Conflict, and Governance Program, where her research focuses on gender, civil society, and democratic governance.

Laura Boldrini, an Italian politician and former UN official who served as president of the country’s Chamber of Deputies, experienced this situation firsthand: following sexist attacks by Matteo Salvini, leader of the far-right Northern League party, and other male politicians, she was targeted by a wave of threatening and misogynistic abuse both online and offline. “Today, in my country, threats of rape are used to intimidate women politicians and push them out of the public sphere—even by public figures,” notes Boldrini. “Political leaders themselves unleash this type of reaction.”1

What Can Be Done

In recent years, women politicians and activists have launched campaigns to raise awareness of the problem and its impact on democratic processes. Last August, the U.S. Democratic Women’s Caucus sent a letter to Facebook urging the company to protect women from rampant online attacks on the platform and to revise algorithms that reward extremist content. Similar advocacy initiatives have proliferated in different parts of the world, from the global #NotTheCost campaign to Reclaim the Internet in the UK, #WebWithoutViolence in Germany, and the #BetterThanThis campaign in Kenya.

Civil society organizations that support women running for office are also spearheading new strategies to respond to gendered online abuse. Some are offering specialized training and toolkits to help women political leaders protect themselves and counter sexualized and racialized disinformation. In Canada, a social enterprise created ParityBOT, a bot that detects problematic tweets about women candidates and responds with positive messages, thus serving both as a monitoring mechanism and a counterbalancing tool.

Yet despite rising external pressure from politicians and civil society, social media companies’ responses have so far been inadequate to tackle a problem as vast and complex as gendered disinformation and online abuse—whether it targets female politicians, activists, or ordinary citizens. For example, Facebook recently created an Oversight Board tasked with improving the platform’s decisionmaking around content moderation—yet many experts are highly skeptical of the board’s ability to drive change given its limited scope and goals. Twitter reportedly increased enforcement of its hate speech and abuse policies in the second half of 2019, as well as expanded its definition of dehumanizing speech. However, its policies to date lack a clear focus on the safety of women and other marginalized groups. Broader reforms are urgently needed.

Increase Platform Transparency and Accountability

Major social media platforms should do more to ensure transparency, accountability, and gender sensitivity in their mechanisms for content moderation, complaints, and redress. They should also take steps to proactively prevent the spread of hateful speech online, including through changes in risk assessment practices and product design.

To date, most tech companies still have inadequate and unclear content moderation systems. For example, social media companies currently do not disclose their exact guidelines on what constitutes hate speech and harassment or how they implement those guidelines. To address this problem, nonprofits such as Glitch and ISD have suggested that social media platforms allow civil society organizations and independent researchers to access and analyze their data on the number and nature of complaints received, disaggregated by gender, country, and the redress actions taken. According to Amnesty International, tech companies should also be more transparent about their language detection mechanisms, the number of content moderators employed by region and language, the volume of reports handled, and how moderators are trained to recognize culturally specific and gendered forms of abuse. To this day, most tech companies focus on tackling online abuse primarily in Europe and the United States, resulting in an enforcement gap in the Global South. Greater transparency about companies’ current content moderation capacity would enable governments and civil society to better identify shortcomings and push for targeted resource investments.

The move to more automated content moderation is unlikely to solve the problem of widespread and culturally specific gendered and racialized online abuse. Until now, social media companies have used automated tools primarily for content that is easier to identify computationally. Yet these tools are blunt and often biased. So far during the coronavirus pandemic, Facebook, Twitter, and Google have all relied more heavily on automation to remove harmful content. As a result, significantly more accounts have been suspended and more content has been flagged and removed than in the months leading up to the pandemic. But some of this content was posted by human rights activists who had no mechanism for appealing those decisions, and some clearly hateful content—such as racist and anti-Semitic hate speech in France—remained online. “Machine learning will always be a limited tool, given that context plays an enormous part of how harassment and gendered disinformation work online,” notes Chloe Colliver, the head of digital policy and strategy at ISD. “We need some combination of greater human resources and expertise along with a focus on developing AI systems that are more accurate in detecting gendered disinformation.”2

The proliferation of online harassment, hate speech, and disinformation is not only driven by gaps in content moderation but also by a business model that monetizes user engagement with little regard for risk. At the moment, Twitter and other platforms rely on deep learning algorithms that prioritize disseminating content with greater engagement. Inflammatory posts often quickly generate comments and retweets, which means that newsfeed algorithms will show them to more users. Online abuse that relies on sensational language and images targeting female politicians thus tends to spread rapidly. Higher levels of engagement generate more user behavior data that brings in advertising revenue, which means social media companies currently have few financial incentives to change the status quo.

Advocates and experts have put forward different proposals to tackle this problem. For example, social media companies could proactively tweak their recommendation systems to prevent users from being nudged toward hateful content. They also could improve their mechanism for detecting and suspending algorithms that amplify gendered and racialized hate speech—a step that some organizations have suggested to help address pandemic-related mis/disinformation. As part of this process, companies could disclose and explain their content-shaping algorithms and ad-targeting systems, which currently operate almost entirely beyond public scrutiny.

In addition, they could improve their risk assessment practices prior to launching new products or tools or before expanding into a new political and cultural context. At the moment, content moderation is often siloed from product design and engineering, which means that social media companies are permanently focused on investigating and redressing complaints instead of building mechanisms that “increase friction” for users and make it harder for gendered hate speech and disinformation to spread in the first place. Moreover, decisions around risk are often taken by predominantly male, white senior staffers: this type of homogeneity frequently leads to gender and race blindness in product development and rollout. Across all of these domains, experts call for greater transparency and collaboration with outside expertise, including researchers working on humane technology and ethical design.

Step Up Government Action

Given tech companies’ limited action to date, democratic governments also have a responsibility to do more. Rather than asking social media companies to become the final arbiters of online speech, they should advance broader regulatory frameworks that require platforms to become more transparent about their moderation practices and algorithmic decisionmaking, as well as ensure compliance through independent monitoring and accountability mechanisms. Governments also have an important role to play in supporting civil society advocacy, research, and public education on gendered and racialized patterns of online abuse, including against political figures.

The first wave of legislation aimed at mitigating abuse, harassment, and hate speech on social media platforms focused primarily on criminalizing and removing different types of harmful online content. Some efforts have targeted individual perpetrators. For example, in the UK, legal guidelines issued in 2016 and in 2018 enable the Crown Prosecution Service to prosecute internet trolls who create derogatory hashtags, engage in virtual mobbing (inciting people to harass others), or circulate doctored images. In 2019, Mexico passed a new law that specifically targets gendered online abuse: it punishes, with up to nine years in prison, those who create or disseminate intimate images or videos of women or attack women on social networks. The law also includes the concept of “digital violence” in the Mexican penal code.

Such legal reforms are important steps, particularly if they are paired with targeted resources and training for law enforcement. Female politicians often report that law enforcement officials do not take their experiences with online threats and abuse seriously enough; legal reforms and prosecution guidelines can help change this pattern. However, efforts to go after individual perpetrators are insufficient to tackle the current scale of misogynistic online harassment and abuse targeting women politicians and women and girls more generally: even if applicable legal frameworks exist, thresholds for prosecution are often set very high and not all victims want to press charges. Moreover, anonymous perpetrators can be difficult to trace, and the caseload easily exceeds current policing capacity. In the UK, for example, fewer than 1 percent of cases taken up by the police unit charged with tackling online hate crimes have resulted in charges.

Other countries have passed laws that make social media companies responsible for the removal of illegal material. For example, in 2017, Germany introduced a new law that requires platforms to remove hate speech or illegal content within twenty-four hours or risk millions of dollars in fines. However, this approach has raised strong concerns among human rights activists, who argue that this measure shifts the responsibility to social media companies to determine what constitutes legal speech without providing adequate mechanisms for judicial oversight or judicial remedy. In June 2020, the French constitutional court struck down a similar law due to concerns about overreach and censorship. French feminist and antiracist organizations had previously criticized the measure, noting that it could restrict the speech of those advocating against hate and extremism online and that victims would benefit more from sustained investments in existing legal remedies.

In light of these challenges, many researchers and advocates have started to push for regulatory approaches that focus on preventing harm rather than only removing individual pieces of harmful content. One example of this approach is the UK’s 2019 Online Harms White Paper, which “proposes establishing in law a new duty of care towards users” to deal proactively with possible risks that platform users might encounter, under the oversight of an independent regulator. The proposed regulatory framework—which is set to result in a new UK law in early 2021—would “outline the systems, procedures, technologies and investment, including in staffing, training and support of human moderators, that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users.” It would also set strict standards for transparency and require companies to ensure that their algorithms do not amplify extreme and unreliable material for the sake of user engagement. The EU’s Digital Services Act, currently in development, is another opportunity to advance a regulatory approach focused on harm prevention. The act should demand greater transparency from social media platforms about content moderation practices and algorithmic systems, as well as require better risk assessment practices. It also should incentivize companies to move away from a business model that values user engagement above everything else.

Of course, governments can take action beyond passing and enforcing platform regulations. They can promote digital citizenship education in school curricula to ensure that teenagers and young adults develop the skills to recognize and report inappropriate online conduct and to communicate respectfully online. In Europe, as part of negotiations around the Digital Services Act, activists are demanding that governments dedicate part of the Digital Services Tax to fund broader efforts to tackle online abuse, including additional research on patterns of gendered and racialized online harassment. In the United States, Biden’s proposal to set up a national task force—bringing together federal and state agencies, advocates, law enforcement, and tech companies—to tackle online harassment and abuse and understand its connection to violence against women and extremism represents a welcome and important step toward developing longer-term solutions. Equally welcome are his proposals to allocate new funding for law enforcement training on online harassment and threats and to support legislation that establishes a civil and criminal cause of action for unauthorized disclosure of intimate images.

Who Is Responsible

The problem of gendered and racialized harassment and abuse targeting women political leaders extends far beyond the online realm: traditional media outlets, political parties, and civil society all have crucial roles to play in committing to and modeling a more respectful and humane political discourse.

However, social media companies have the primary responsibility to prevent the amplification of online abuse and disinformation—a responsibility that they are currently failing to meet. As the coronavirus pandemic has further accelerated the global shift to online campaigning and mobilization, there is now an even greater need for governments to hold these companies accountable for addressing all forms of hate speech, harassment, and disinformation on their platforms. Both Biden’s proposed national task force and the EU’s Digital Services Act represent key opportunities for developing new regulatory approaches mandating greater transparency and accountability in content moderation, algorithmic decisionmaking, and risk assessment.

These reform efforts need to include a gender lens. As Boldrini emphasizes, “It is extremely important to speak out against sexism and misogyny in our societies, particularly in light of the global movement against women’s rights inspired by the far right. The time has come to start a new feminist revolution to defend the rights we already have—as well as to acquire new rights.” Ensuring that all women political leaders and activists can engage in democratic processes online without fear of harassment, threats, and abuse will be a central piece of this struggle.3

Notes

1 Authors’ interview with Laura Boldrini, written communication, November 1, 2020.

2 Authors’ interview with Chloe Colliver, video call, October 28, 2020.

3 Authors’ interview with Laura Boldrini, written communication, November 1, 2020.
