
Politics

Tackling Online Abuse and Disinformation Targeting Women in Politics – Carnegie Endowment for International Peace


In 2017, soon after Svitlana Zalishchuk, then a member of Ukraine’s parliament, gave a speech to the United Nations on the impact of the Russian-Ukrainian conflict on women, a fake tweet began to circulate on social media claiming that she had promised to run naked through the streets of Kyiv if Russia-backed separatists won a critical battle. The fabricated story, which Zalishchuk said “kept circulating on the Internet for a year,” cast a shadow over her political accomplishments.

Zalishchuk is not alone in her experience. Around the world, women in politics receive an overwhelming amount of online abuse, harassment, and gendered defamation via social media platforms. For example, a recent analysis of the 2020 U.S. congressional races found that female candidates were significantly more likely to receive online abuse than their male counterparts. On Facebook, female Democrats running for office received ten times more abusive comments than male Democratic candidates. Similar trends have been documented in India, the UK, Ukraine, and Zimbabwe.

Social media companies have come under increasing pressure to take a tougher stance against all forms of hate speech and harassment on their platforms, including against women, racial minorities, and other marginalized groups. Yet their patchwork approach to date has proven insufficient. Governments and international institutions need to press for more action and develop new standards for platform transparency and accountability that can help address the widespread toxicity that is currently undermining online political debate. If effectively designed and implemented, the EU’s Digital Services Act and U.S. President-elect Joe Biden’s proposed National Task Force on Online Harassment and Abuse would represent steps in the right direction.

The Global Challenge

Online abuse against politicians is often misunderstood as inevitable: after all, most public figures occasionally find themselves on the receiving end of vitriolic attacks. Yet over the past several years, the gendered and racialized nature of the phenomenon has received increasing policy attention, as women appear to be disproportionately targeted by online abuse and disinformation attacks.

This pattern tends to be even more pronounced for female political leaders from racial, ethnic, religious, or other minority groups; for those who are highly visible in the media; and for those who speak out on feminist issues. In India, for example, an Amnesty International investigation found that one in every seven tweets that mentioned women politicians was problematic or abusive—and that both Muslim women politicians and women politicians belonging to marginalized castes received substantially more abuse than those from other social groups.

Lucina Di Meco

Lucina Di Meco is a women’s rights and gender equality expert, advocate, and author. She currently serves as senior director of the Girls’ Education & Gender Equality program at Room to Read and as a member of the Advisory Board at Fund Her.

Female politicians are not only targeted disproportionately but also subjected to different forms of harassment and abuse. Attacks targeting male politicians mostly relate to their professional duties, whereas online harassment directed at female politicians is more likely to focus on their physical appearance and sexuality and include threats of sexual violence and humiliating or sexualized imagery. Women in politics are also frequent targets of gendered disinformation campaigns, defined as the spreading of deceptive or inaccurate information and images. Such campaigns often create story lines that draw on misogyny and gender stereotypes. For example, a recent analysis shows that immediately following Kamala Harris’s nomination as the 2020 U.S. vice presidential candidate, false claims about Harris were being shared at least 3,000 times per hour on Twitter, in what appeared to be a coordinated effort. Similar tactics have been used throughout Europe and in Brazil.

The disproportionate and often strategic targeting of women politicians and activists has direct implications for the democratic process: it can discourage women from running for office, push women out of politics, or lead them to disengage from online political discourse in ways that harm their political effectiveness. For those women who persevere, the abuse can cause psychological harm and waste significant energy and time, particularly if politicians struggle to verify whether or when online threats pose real-life dangers to their safety.

What’s Driving Gendered Online Abuse

Some political scientists and social psychologists point to gender role theory to explain harassment and threats targeting female politicians. In many societies, the characteristics traditionally associated with politicians—such as ambition and assertiveness—tend to be coded “male,” which means that women who display these traits may be perceived as transgressing traditional social norms. Online harassment of women seeking political power could thus be understood as a form of gender role enforcement, facilitated by anonymity.

However, online abuse and sexist narratives targeting politically active women are not just the product of everyday misogyny: they are reinforced by political actors and deployed as a political strategy. Illiberal political actors often encourage online abuse against female political leaders and activists as a deliberate tactic to silence oppositional voices and push feminist politicians out of the political arena.

Saskia Brechenmacher


Saskia Brechenmacher is a PhD candidate at the University of Cambridge and a fellow in Carnegie’s Democracy, Conflict, and Governance Program, where her research focuses on gender, civil society, and democratic governance.

Laura Boldrini, an Italian politician and former UN official who served as president of the country’s Chamber of Deputies, experienced this situation firsthand: following sexist attacks by Matteo Salvini, leader of the far-right Northern League party, and other male politicians, she was targeted by a wave of threatening and misogynistic abuse both online and offline. “Today, in my country, threats of rape are used to intimidate women politicians and push them out of the public sphere—even by public figures,” notes Boldrini. “Political leaders themselves unleash this type of reaction.”1

What Can Be Done

In recent years, women politicians and activists have launched campaigns to raise awareness of the problem and its impact on democratic processes. Last August, the U.S. Democratic Women’s Caucus sent a letter to Facebook urging the company to protect women from rampant online attacks on the platform and to revise algorithms that reward extremist content. Similar advocacy initiatives have proliferated in different parts of the world, from the global #NotTheCost campaign to Reclaim the Internet in the UK, #WebWithoutViolence in Germany, and the #BetterThanThis campaign in Kenya.

Civil society organizations that support women running for office are also spearheading new strategies to respond to gendered online abuse. Some are offering specialized training and toolkits to help women political leaders protect themselves and counter sexualized and racialized disinformation. In Canada, a social enterprise created ParityBOT, a bot that detects problematic tweets about women candidates and responds with positive messages, thus serving both as a monitoring mechanism and a counterbalancing tool.
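ParityBOT’s internals are not public; the sketch below is only a hypothetical illustration, in Python, of the pattern described here: score incoming tweets about women candidates for abuse and counter flagged ones with a positive message. The `toxicity_score` function and the threshold value are invented stand-ins for a real trained classifier.

```python
import random

def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for a real toxicity classifier.
    Returns a score in [0, 1]; a production system would call a
    trained model rather than matching keywords."""
    abusive_markers = ("ugly", "stupid", "shut up", "go home")
    hits = sum(marker in text.lower() for marker in abusive_markers)
    return min(1.0, hits / 2)

TOXICITY_THRESHOLD = 0.5  # assumed cutoff; a real system would tune this

POSITIVE_MESSAGES = [
    "Women's voices make our politics stronger.",
    "Thank you to every woman who stands for office.",
]

def counter_message(tweet_text: str) -> str | None:
    """Return a counterbalancing positive message if the tweet scores
    as abusive; otherwise return None (the tweet is left alone)."""
    if toxicity_score(tweet_text) >= TOXICITY_THRESHOLD:
        return random.choice(POSITIVE_MESSAGES)
    return None

print(counter_message("She is too stupid and ugly to hold office"))  # positive message
print(counter_message("Interesting policy platform, worth a read"))  # None
```

The dual role the paragraph describes falls out of this structure: the scoring step doubles as a monitoring log of abusive activity, while the reply step provides the counterbalance.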

Yet despite rising external pressure from politicians and civil society, social media companies’ responses have so far been inadequate to tackle a problem as vast and complex as gendered disinformation and online abuse—whether it targets female politicians, activists, or ordinary citizens. For example, Facebook recently created an Oversight Board tasked with improving the platform’s decisionmaking around content moderation—yet many experts are highly skeptical of the board’s ability to drive change given its limited scope and goals. Twitter reportedly increased enforcement of its hate speech and abuse policies in the second half of 2019, as well as expanded its definition of dehumanizing speech. However, its policies to date lack a clear focus on the safety of women and other marginalized groups. Broader reforms are urgently needed.

Increase Platform Transparency and Accountability

Major social media platforms should do more to ensure transparency, accountability, and gender sensitivity in their mechanisms for content moderation, complaints, and redress. They should also take steps to proactively prevent the spread of hateful speech online, including through changes in risk assessment practices and product design.

To date, most tech companies still have inadequate and unclear content moderation systems. For example, social media companies currently do not disclose their exact guidelines on what constitutes hate speech and harassment or how they implement those guidelines. To address this problem, nonprofits such as Glitch and ISD have suggested that social media platforms allow civil society organizations and independent researchers to access and analyze their data on the number and nature of complaints received, disaggregated by gender, country, and the redress actions taken. According to Amnesty International, tech companies should also be more transparent about their language detection mechanisms, the number of content moderators employed by region and language, the volume of reports handled, and how moderators are trained to recognize culturally specific and gendered forms of abuse. To this day, most tech companies focus on tackling online abuse primarily in Europe and the United States, resulting in an enforcement gap in the Global South. Greater transparency about companies’ current content moderation capacity would enable governments and civil society to better identify shortcomings and push for targeted resource investments.
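To make the disaggregation request concrete, here is a minimal sketch assuming a platform exported complaint records as simple tuples; the field names and categories are invented for illustration, since no platform currently publishes data in this form.

```python
from collections import Counter
from typing import NamedTuple

class Complaint(NamedTuple):
    gender: str   # self-reported gender of the targeted user
    country: str  # where the complaint was filed
    action: str   # redress action taken: "removed", "warned", "no_action"

# Hypothetical sample records; real data would come from the platform.
complaints = [
    Complaint("female", "IN", "no_action"),
    Complaint("female", "IN", "removed"),
    Complaint("male",   "UK", "removed"),
    Complaint("female", "UK", "no_action"),
]

# Counting by (gender, country, action) lets researchers spot
# enforcement gaps, e.g. abuse reports against women closed without action.
report = Counter((c.gender, c.country, c.action) for c in complaints)
for (gender, country, action), n in sorted(report.items()):
    print(f"{country} | {gender:6} | {action:9} | {n}")
```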

The move to more automated content moderation is unlikely to solve the problem of widespread and culturally specific gendered and racialized online abuse. Until now, social media companies have used automated tools primarily for content that is easier to identify computationally. Yet these tools are blunt and often biased. So far during the coronavirus pandemic, Facebook, Twitter, and Google have all relied more heavily on automation to remove harmful content. As a result, significantly more accounts have been suspended and more content has been flagged and removed than in the months leading up to the pandemic. But some of this content was posted by human rights activists who had no mechanism for appealing those decisions, and some clearly hateful content—such as racist and anti-Semitic hate speech in France—remained online. “Machine learning will always be a limited tool, given that context plays an enormous part of how harassment and gendered disinformation work online,” notes Chloe Colliver, the head of digital policy and strategy at ISD. “We need some combination of greater human resources and expertise along with a focus on developing AI systems that are more accurate in detecting gendered disinformation.”2
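A toy example illustrates why context-blind automation is blunt. The keyword filter below, a deliberately naive sketch with an invented blocklist, flags counter-speech that quotes a slur in order to condemn it, while missing a coded misspelling of the same slur:

```python
BLOCKLIST = {"witch", "harpy"}  # invented slur list, for illustration only

def keyword_flag(post: str) -> bool:
    """Context-blind moderation: flag any post containing a listed term."""
    lowered = post.lower()
    return any(term in lowered for term in BLOCKLIST)

# False positive: counter-speech quoting abuse in order to condemn it.
print(keyword_flag("Calling a senator a 'witch' is misogyny, report it."))  # True

# False negative: a coded spelling sails straight past the list.
print(keyword_flag("what a w1tch, she should quit politics"))  # False
```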

The proliferation of online harassment, hate speech, and disinformation is not only driven by gaps in content moderation but also by a business model that monetizes user engagement with little regard for risk. At the moment, Twitter and other platforms rely on deep learning algorithms that prioritize disseminating content with greater engagement. Inflammatory posts often quickly generate comments and retweets, which means that newsfeed algorithms will show them to more users. Online abuse that relies on sensational language and images targeting female politicians thus tends to spread rapidly. Higher levels of engagement generate more user behavior data that brings in advertising revenue, which means social media companies currently have few financial incentives to change the status quo.
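The dynamic described here can be made concrete with a deliberately simplified ranking sketch, in which all posts, weights, and engagement counts are invented: when a feed orders posts purely by predicted engagement, an inflammatory post that provokes replies and shares outranks sober content.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    shares: int

def engagement_score(p: Post) -> float:
    # Replies and shares are weighted heavily because they drive the most
    # downstream activity; the weights here are purely illustrative.
    return 1.0 * p.likes + 3.0 * p.replies + 5.0 * p.shares

feed = [
    Post("Committee passes the budget amendment; full text below", 40, 2, 3),
    Post("This candidate is a disgrace [abusive meme]", 25, 60, 80),
]

# Pure engagement ranking surfaces the inflammatory post first.
for p in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):6.0f}  {p.text}")
```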

Advocates and experts have put forward different proposals to tackle this problem. For example, social media companies could proactively tweak their recommendation systems to prevent users from being nudged toward hateful content. They also could improve their mechanisms for detecting and suspending algorithms that amplify gendered and racialized hate speech—a step that some organizations have suggested to help address pandemic-related mis/disinformation. As part of this process, companies could disclose and explain their content-shaping algorithms and ad-targeting systems, which currently operate almost entirely beyond public scrutiny.
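One of those proposals can be expressed as a single re-ranking step: instead of removing content after the fact, scale a post’s reach down by its predicted toxicity so the recommender stops amplifying it. This is a sketch of the general idea only; the penalty form and the numbers, which reuse the engagement scores from the previous sketch, are assumptions.

```python
def downranked_score(raw_engagement: float, toxicity: float,
                     penalty: float = 1.0) -> float:
    """Down-rank rather than delete: scale the engagement score by
    predicted toxicity in [0, 1]. With penalty=1.0, a fully toxic
    post keeps none of its algorithmic reach."""
    return raw_engagement * (1.0 - penalty * toxicity)

# The abusive post (raw score 605, toxicity 0.95) now ranks below
# the sober one (raw score 61, toxicity 0.05).
print(downranked_score(605.0, toxicity=0.95))  # 30.25
print(downranked_score(61.0, toxicity=0.05))   # 57.95
```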

In addition, they could improve their risk assessment practices prior to launching new products or tools or before expanding into a new political and cultural context. At the moment, content moderation is often siloed from product design and engineering, which means that social media companies are permanently focused on investigating and redressing complaints instead of building mechanisms that “increase friction” for users and make it harder for gendered hate speech and disinformation to spread in the first place. Moreover, decisions around risk are often taken by predominantly male, white senior staffers: this type of homogeneity frequently leads to gender and race blindness in product development and rollout. Across all of these domains, experts call for greater transparency and collaboration with outside expertise, including researchers working on humane technology and ethical design.

Step Up Government Action

Given tech companies’ limited action to date, democratic governments also have a responsibility to do more. Rather than asking social media companies to become the final arbiters of online speech, they should advance broader regulatory frameworks that require platforms to become more transparent about their moderation practices and algorithmic decisionmaking, as well as ensure compliance through independent monitoring and accountability mechanisms. Governments also have an important role to play in supporting civil society advocacy, research, and public education on gendered and racialized patterns of online abuse, including against political figures.

The first wave of legislation aimed at mitigating abuse, harassment, and hate speech on social media platforms focused primarily on criminalizing and removing different types of harmful online content. Some efforts have targeted individual perpetrators. For example, in the UK, legal guidelines issued in 2016 and in 2018 enable the Crown Prosecution Service to prosecute internet trolls who create derogatory hashtags, engage in virtual mobbing (inciting people to harass others), or circulate doctored images. In 2019, Mexico passed a new law that specifically targets gendered online abuse: it punishes, with up to nine years in prison, those who create or disseminate intimate images or videos of women or attack women on social networks. The law also includes the concept of “digital violence” in the Mexican penal code.

Such legal reforms are important steps, particularly if they are paired with targeted resources and training for law enforcement. Female politicians often report that law enforcement officials do not take their experiences with online threats and abuse seriously enough; legal reforms and prosecution guidelines can help change this pattern. However, efforts to go after individual perpetrators are insufficient to tackle the current scale of misogynistic online harassment and abuse targeting women politicians and women and girls more generally: even if applicable legal frameworks exist, thresholds for prosecution are often set very high and not all victims want to press charges. Moreover, anonymous perpetrators can be difficult to trace, and the caseload easily exceeds current policing capacity. In the UK, for example, fewer than 1 percent of cases taken up by the police unit charged with tackling online hate crimes have resulted in charges.

Other countries have passed laws that make social media companies responsible for the removal of illegal material. For example, in 2017, Germany introduced a new law that requires platforms to remove hate speech or illegal content within twenty-four hours or risk millions of dollars in fines. However, this approach has raised strong concerns among human rights activists, who argue that this measure shifts the responsibility to social media companies to determine what constitutes legal speech without providing adequate mechanisms for judicial oversight or judicial remedy. In June 2020, the French constitutional court struck down a similar law due to concerns about overreach and censorship. French feminist and antiracist organizations had previously criticized the measure, noting that it could restrict the speech of those advocating against hate and extremism online and that victims would benefit more from sustained investments in existing legal remedies.

In light of these challenges, many researchers and advocates have started pushing for regulatory approaches that focus on systemic harm prevention rather than on removing individual pieces of content. One example of this approach is the UK’s 2019 Online Harms White Paper, which “proposes establishing in law a new duty of care towards users” to deal proactively with possible risks that platform users might encounter, under the oversight of an independent regulator. The proposed regulatory framework—which is set to result in a new UK law in early 2021—would “outline the systems, procedures, technologies and investment, including in staffing, training and support of human moderators, that companies need to adopt to help demonstrate that they have fulfilled their duty of care to their users.” It would also set strict standards for transparency and require companies to ensure that their algorithms do not amplify extreme and unreliable material for the sake of user engagement. The EU’s Digital Services Act, currently in development, is another opportunity to advance a regulatory approach focused on harm prevention. The act should demand greater transparency from social media platforms about content moderation practices and algorithmic systems, as well as require better risk assessment practices. It also should incentivize companies to move away from a business model that values user engagement above everything else.

Of course, governments can take action beyond passing and enforcing platform regulations. They can promote digital citizenship education in school curricula to ensure that teenagers and young adults develop the skills to recognize and report inappropriate online conduct and to communicate respectfully online. In Europe, as part of negotiations around the Digital Services Act, activists are demanding that governments dedicate part of the Digital Services Tax to fund broader efforts to tackle online abuse, including additional research on patterns of gendered and racialized online harassment. In the United States, Biden’s proposal to set up a national task force—bringing together federal and state agencies, advocates, law enforcement, and tech companies—to tackle online harassment and abuse and understand its connection to violence against women and extremism represents a welcome and important step toward developing longer-term solutions. Equally welcome are his proposals to allocate new funding for law enforcement trainings on online harassment and threats and to support legislation that establishes a civil and criminal cause of action for unauthorized disclosure of intimate images.

Who Is Responsible

The problem of gendered and racialized harassment and abuse targeting women political leaders extends far beyond the online realm: traditional media outlets, political parties, and civil society all have crucial roles to play in committing to and modeling a more respectful and humane political discourse.

However, social media companies have the primary responsibility to prevent the amplification of online abuse and disinformation—a responsibility that they are currently failing to meet. As the coronavirus pandemic has further accelerated the global shift to online campaigning and mobilization, there is now an even greater need for governments to hold these companies accountable for addressing all forms of hate speech, harassment, and disinformation on their platforms. Both Biden’s proposed national task force and the EU’s Digital Services Act represent key opportunities for developing new regulatory approaches mandating greater transparency and accountability in content moderation, algorithmic decisionmaking, and risk assessment.

These reform efforts need to include a gender lens. As Boldrini emphasizes, “It is extremely important to speak out against sexism and misogyny in our societies, particularly in light of the global movement against women’s rights inspired by the far right. The time has come to start a new feminist revolution to defend the rights we already have—as well as to acquire new rights.” Ensuring that all women political leaders and activists can engage in democratic processes online without fear of harassment, threats, and abuse will be a central piece of this struggle.3

Notes

1 Authors’ interview with Laura Boldrini, written communication, November 1, 2020.

2 Authors’ interview with Chloe Colliver, video call, October 28, 2020.

3 Authors’ interview with Laura Boldrini, written communication, November 1, 2020.


Politics

Boris Johnson hails Biden as ‘a big breath of fresh air’


British Prime Minister Boris Johnson hailed U.S. President Joe Biden on Thursday as “a big breath of fresh air”, and praised his determination to work with allies on important global issues ranging from climate change and COVID-19 to security.

Johnson did not draw an explicit parallel between Biden and his predecessor Donald Trump after talks with the Democratic president in the English seaside resort of Carbis Bay on the eve of a summit of the Group of Seven (G7) advanced economies.

But his comments made clear Biden had taken a much more multilateral approach to talks than Trump, whose vision of the world at times shocked, angered and bewildered many of Washington’s European allies.

“It’s a big breath of fresh air,” Johnson said of a meeting that lasted about an hour and 20 minutes.

“It was a long, long, good session. We covered a huge range of subjects,” he said. “It’s new, it’s interesting and we’re working very hard together.”

The two leaders appeared relaxed as they admired the view across the Atlantic alongside their wives, with Jill Biden wearing a jacket embroidered with the word “LOVE”.

“It’s a beautiful beginning,” she said.

Though Johnson said the talks were “great”, Biden brought grave concerns about a row between Britain and the European Union that he said could threaten peace in the British region of Northern Ireland. Since Britain’s departure from the EU, Northern Ireland sits on the United Kingdom’s frontier with the bloc, as it borders EU member state Ireland.

The two leaders did not have a joint briefing after the meeting: Johnson spoke to British media while Biden made a speech about a U.S. plan to donate half a billion vaccines to poorer countries.

NORTHERN IRELAND

Biden, who is proud of his Irish heritage, was keen to prevent difficult negotiations between Brussels and London undermining a 1998 U.S.-brokered peace deal known as the Good Friday Agreement that ended three decades of bloodshed in Northern Ireland.

White House national security adviser Jake Sullivan told reporters aboard Air Force One on the way to Britain that Biden had a “rock-solid belief” in the peace deal and that any steps that imperilled the accord would not be welcomed.

Yael Lempert, the top U.S. diplomat in Britain, issued London with a demarche – a formal diplomatic reprimand – for “inflaming” tensions, the Times newspaper reported.

Johnson sought to play down the differences with Washington.

“There’s complete harmony on the need to keep going, find solutions, and make sure we uphold the Belfast Good Friday Agreement,” said Johnson, one of the leaders of the 2016 campaign to leave the EU.

Asked if Biden had made his alarm about the situation in Northern Ireland very clear, he said: “No he didn’t.

“America, the United States, Washington, the UK, plus the European Union have one thing we absolutely all want to do,” Johnson said. “And that is to uphold the Belfast Good Friday Agreement, and make sure we keep the balance of the peace process going. That is absolutely common ground.”

The 1998 peace deal largely brought an end to the “Troubles” – three decades of conflict between Irish Catholic nationalist militants and pro-British Protestant “loyalist” paramilitaries in which 3,600 people were killed.

Britain’s exit from the EU has strained the peace in Northern Ireland. The 27-nation bloc wants to protect its markets but a border in the Irish Sea cuts off the British province from the rest of the United Kingdom.

Although Britain formally left the EU in 2020, the two sides are still trading threats over the Brexit deal after London unilaterally delayed the implementation of the Northern Irish clauses of the deal.

Johnson’s Downing Street office said he and Biden agreed that both Britain and the EU “had a responsibility to work together and to find pragmatic solutions to allow unencumbered trade” between Northern Ireland, Britain and Ireland.

(Reporting by Steve Holland, Andrea Shalal, Padraic Halpin, John Chalmers; Writing by Guy Faulconbridge; Editing by Giles Elgood, Emelia Sithole-Matarise, Mark Potter and Timothy Heritage)


Politics

U.S. senator slams Apple, Amazon, Nike for enabling forced labor in China


A U.S. senator on Thursday slammed American companies, including Amazon.com Inc, Apple Inc and Nike Inc, for turning a blind eye to allegations of forced labor in China, arguing they were making American consumers complicit in Beijing’s repressive policies.

Speaking at a Senate Foreign Relations Committee hearing on China’s crackdown on Uyghurs and other Muslim minorities in its western Xinjiang region, Republican Senator Marco Rubio said many U.S. companies had not woken up to the fact that they were “profiting” from the Chinese government’s abuses.

“For far too long companies like Nike and Apple and Amazon and Coca-Cola were using forced labor. They were benefiting from forced labor or sourcing from suppliers that were suspected of using forced labor,” Rubio said. “These companies, sadly, were making all of us complicit in these crimes.”

Senator Ed Markey, who led the hearing with fellow Democrat Tim Kaine, said a number of U.S. technology companies had profited from the Chinese government’s “authoritarian surveillance industry,” and that many of their products “are being used in Xinjiang right now.”

Thermo Fisher Scientific said in 2019 it would stop selling genetic sequencing equipment into Xinjiang after rights groups and media documented how authorities there were building a DNA database for Uyghurs. But critics say the move didn’t go far enough.

“All evidence is that they continue to provide these products which enabled these human rights abuses,” Rubio said of Thermo Fisher, noting that he had written the Massachusetts-based company repeatedly about the matter.

“Whenever we receive proof of forced labor, we take action and suspend privileges to sell,” an Amazon spokesperson said.

Coca-Cola declined to comment. The other companies mentioned did not respond immediately to Reuters’ questions.

U.S. lawmakers are seeking to pass legislation that would ban imports of goods made in Xinjiang over concerns about forced labor.

Rights groups, researchers, former residents and some western lawmakers say Xinjiang authorities have facilitated forced labor by arbitrarily detaining around a million Uyghurs and other primarily Muslim minorities in a network of camps since 2016.

The United States government and parliaments in countries including Britain and Canada have described China’s policies toward Uyghurs as genocide. China denies abuses, saying the camps are for vocational training and to counter religious extremism.

Sophie Richardson, China director for Human Rights Watch, told the Senate panel that Beijing’s “extreme repression and surveillance” made human rights due diligence for companies impossible.

“Inspectors cannot visit facilities unannounced or speak to workers without fear of reprisal. Some companies seem unwilling or unable to ascertain precise information about their own supply chains,” she said.

 

(Reporting by Michael Martina, Richa Naidu, Aishwarya Venugopal and Jeffrey Dastin; editing by Jonathan Oatis)


Health

Biden’s vaccine pledge ups pressure on rich countries to give more


The United States on Thursday raised the pressure on other Group of Seven leaders to share their vaccine hoards to bring an end to the pandemic by pledging to donate 500 million doses of the Pfizer coronavirus vaccine to the world’s poorest countries.

The largest ever vaccine donation by a single country will cost the United States $3.5 billion but Washington expects no quid pro quo or favours for the gift, a senior Biden administration official told reporters.

U.S. President Joe Biden’s move, on the eve of a summit of the world’s richest democracies, is likely to prompt other leaders to stump up more vaccines, though even vast numbers of vaccines would still not be enough to inoculate all of the world’s poor.

G7 leaders want to vaccinate the world by the end of 2022 to try to halt the COVID-19 pandemic that has killed more than 3.9 million people and devastated the global economy.

A senior Biden administration official described the gesture as a “major step forward that will supercharge the global effort” with the aim of “bringing hope to every corner of the world.”

“We really want to underscore that this is fundamentally about a singular objective of saving lives,” the official said, adding that Washington was not seeking favours in exchange for the doses.

Vaccination efforts so far are heavily correlated with wealth: the United States, Europe, Israel and Bahrain are far ahead of other countries. A total of 2.2 billion people have been vaccinated so far out of a world population of nearly 8 billion, based on Johns Hopkins University data.

U.S. drugmaker Pfizer and its German partner BioNTech have agreed to supply the U.S. with the vaccines, delivering 200 million doses in 2021 and 300 million doses in the first half of 2022.

The shots, which will be produced at Pfizer’s U.S. sites, will be supplied at a not-for-profit price.

“Our partnership with the U.S. government will help bring hundreds of millions of doses of our vaccine to the poorest countries around the world as quickly as possible,” said Pfizer Chief Executive Albert Bourla.

‘DROP IN THE BUCKET’

Anti-poverty campaign group Oxfam called for more to be done to increase global production of vaccines.

“Surely, these 500 million vaccine doses are welcome as they will help more than 250 million people, but that’s still a drop in the bucket compared to the need across the world,” said Niko Lusiani, Oxfam America’s vaccine lead.

“We need a transformation toward more distributed vaccine manufacturing so that qualified producers worldwide can produce billions more low-cost doses on their own terms, without intellectual property constraints,” he said in a statement.

Another issue, especially in some poor countries, is the infrastructure for transporting the vaccines which often have to be stored at very cold temperatures.

Biden has also backed calls for a waiver of some vaccine intellectual property rights but there is no international consensus yet on how to proceed.

The new vaccine donations come on top of 80 million doses Washington has already pledged to donate by the end of June. There is also $2 billion in funding earmarked for the COVAX programme led by the World Health Organization (WHO) and the Global Alliance for Vaccines and Immunization (GAVI), the White House said.

GAVI and the WHO welcomed the initiative.

Washington is also taking steps to support local production of COVID-19 vaccines in other countries, including through its Quad initiative with Japan, India and Australia.

(Reporting by Steve Holland in St. Ives, England, Andrea Shalal in Washington and Caroline Copley in Berlin; Writing by Guy Faulconbridge and Keith Weir; Editing by Leslie Adler, David Evans, Emelia Sithole-Matarise, Giles Elgood and Jane Merriman)
