
Media

Study: Social Media Easily Manipulated – VOA Learning English

New research shows that social media companies differ in their ability to stop social media manipulation.

The NATO Strategic Communications Center of Excellence carried out the study. Two American senators took part.

Researchers from the center, based in Riga, Latvia, paid three Russian companies for fake social media engagement. For around $368, researchers got 337,768 fake likes, views and shares of posts on social media, including Facebook, Instagram, Twitter, YouTube and TikTok.

Some of those fake likes, views, and shares appeared on the verified accounts of Senators Chuck Grassley and Chris Murphy. Verified accounts are those that social media companies have confirmed as owned and controlled by the individual or group named on the account.

Grassley’s office confirmed that the Republican from Iowa took part in the study.

Murphy, a Democrat from the state of Connecticut, said in a statement that he agreed to take part in the study. The senator said it is important to understand that even verified accounts are at risk of manipulation. It is easy to use social media as a tool to interfere with election campaigns and incite political unrest, Murphy said.

“It’s clear that social media companies are not doing enough to combat misinformation and paid manipulation…,” he said.

NATO StratCom director Janis Sarts told The Associated Press that social media manipulation hurts business markets and is a threat to national security.

Sarts said that fake accounts are being employed to trick computer programs that decide what information is popular.

“That in turn deepens divisions and thus weakens us as a society,” he explained.

More than 98 percent of the fake engagements – likes, views, shares – remained active after four weeks, researchers found. And 97 percent of the accounts they reported for fake activity were still active five days later.

NATO StratCom ran a similar test in 2019 with the accounts of European officials. Researchers found that Twitter is now taking down fake content faster and that Facebook has made it harder to create fake accounts.

“We’ve spent years strengthening our detection systems against fake engagement with a focus on stopping the accounts that have the potential to cause the most harm,” a Facebook company spokesperson said in an email.

But YouTube and Facebook-owned Instagram remain open to risk, researchers said, and TikTok appeared “defenseless.”

Researchers said that for the study they pushed content that was not political. They wanted to avoid any possible interference in the U.S. election.

So, the researchers posted pictures of dogs and food.

Ben Scott is executive director of Reset.tech, a group that works to fight digital threats to democracy. Scott said the investigation showed how easy it is to manipulate political communication and how little social media companies have done to fix the problems.

In an email to the Associated Press, Yoel Roth, Twitter’s head of site integrity, described the issue as an “evolving challenge.”

Roth added that the study shows the big “effort that Twitter has made to improve the health of the public conversation.”

YouTube said it has put in place safety measures to find fake activity on its site. It noted that more than 2 million videos were removed from the site recently for breaking its policies.

TikTok said it removes content or accounts that support fake engagement or other untrue information that may cause harm.

I’m John Russell.

Erika Kinetz reported on this story for the Associated Press. John Russell adapted it for Learning English. Caty Weaver was the editor.

_____________________________________________________________

Words in This Story

fake – adj. not true or real

engagement – n. the act or state of being involved with something

content – n. the ideas, facts, or images that are in a book, article, speech, movie, etc.

detection – n. the act or process of discovering, finding, or noticing something


New Messengers and Social Media Platforms on the Rise in Belarus – On Central Europe, from Central Europe – Visegrad Insight

How significant were online media and social media platforms in the solidarity and mobilisation of Belarusians against Alyaksandr Lukashenka’s regime? Data shows that platforms such as Telegram and YouTube are becoming a force to be reckoned with and may outcompete even Russian media.

Mikhail Doroshevich is a well-known ‘internet veteran’ in Belarus but also a media analyst who organises many studies on the Belarusian media audience.

Alexander Morozov

Alexander Morozov is an expert at iSANS and a fellow at the Department of Philosophy at the Charles University, Prague.



EDITORIAL: Social media giants content to pass baton on policing hate – TheChronicleHerald.ca

Perhaps the easiest explanation is that businesses prefer a firm set of ground rules to uncertainty — even if those ground rules are going to get tougher, thanks to other people’s problems.

Canada’s federal government was already looking at regulating social media before the U.S. presidential election and the Jan. 6 violence at the U.S. Capitol. After the riot, the necessity of that regulation came even more to the fore.

And, perhaps surprisingly, social media companies are looking forward to having guidelines that they can actually apply. That may be because regulation would take the onus off the companies themselves — they would simply be applying this country’s rules, rather than haphazardly trying to control the Hydra-like spread of internet hate on their own.

And it may also be because those same social media companies — and their employees — are uncomfortable about the continued presence of violent and hateful language on their platforms, yet feel to a large extent unable to handle, monitor or moderate the far reaches of their own empires.


The government of Prime Minister Justin Trudeau made regulating social media an election promise in 2019, saying in Heritage Minister Steven Guilbeault’s mandate letter that the minister was charged with taking “action on combatting hate groups and online hate and harassment, ideologically motivated violent extremism and terrorist organizations.”

Australia, France and Germany have all taken steps to address social media hate, with a German law giving social media companies a 24-hour deadline to remove material or face fines.

Though the structure of Canada’s legislation hasn’t been revealed yet, Guilbeault told the Globe and Mail that, “While preserving a fundamental right to freedom of expression, our approach will require online platforms to monitor and eliminate illegal content that appears on their platforms. That includes hate speech, terrorist propaganda, violent content, child sexual exploitation and the non-consensual sharing of intimate images.”

Google, Facebook and Twitter have all indicated that they would be in favour of clear rules from the federal government about what the government considers to be illegal content.

And that shouldn’t come as a surprise to anyone.

Companies spend a fair amount of time analyzing what constitutes a risk to their core business. Anyone who has ever worked in private business at the managerial level knows that there are two prongs that managers are supposed to be aware of — not only opportunities, but threats as well.

Those threats can hit the core of a company. At least one right-leaning social media firm, the much more permissive Parler, saw scores of its suppliers, including its web-hosting company, flee after the Jan. 6 violence brought hate-fuelled Parler postings to the fore.

Even in the world of social media, chickens can come home to roost.

Someone else’s rules can take some of that pressure away.



Ottawa ready to give police more powers to go after social media companies and the people who use them – StCatharinesStandard.ca

Additional law enforcement powers and an independent appeal process could be part of a new regulatory regime aimed at social media companies that Ottawa is in the final stages of completing, according to Heritage Minister Steven Guilbeault.

During an interview with the Star, Guilbeault also said that a new regulator will be set up to oversee the rules Ottawa is bringing in to curb the sharing of illegal content — including hate-speech, child pornography and non-consensual intimate images — on platforms like those owned by Facebook and Google.

The regulator will have auditing powers and likely will be able to “look under the hood” to observe how algorithms at the companies work, Guilbeault said, but stressed that they wouldn’t “go after proprietary information.”

“This would have to be well defined,” he said, “but it’s to understand and to be able to see whether or not the platforms are doing what they should be doing.”

Steep fines would be in place for those that are found in non-compliance of the regulations, which are expected to be introduced in February or March.

Guilbeault said the government is in the final stages of exploring an independent appeal process wherein individuals who have had their content removed on social media platforms can take it up with the regulator.

There will also be a complaint process that people can go through with the regulator.

Guilbeault also said he expects additional law enforcement measures would be put in place under the new regime. There will be a mechanism for the “off-ramping” of cases to law enforcement, he said, and “more means for law enforcement in Canada to prosecute those.”

“If you’re doing something illegal on these platforms, we will give ourselves the means to go after you,” he said.

“Law enforcement will have the ability to get information from the platforms to prosecute the individuals or groups of individuals in question.”

The implementation of an appeal process has some concerned that the government could go too far in intervening in the private practices of companies. Experts also say that giving police additional powers to get information from social media companies is a complicated process.

Private companies have their own standards for removing content they deem illegal or inappropriate. A Facebook official, who spoke to the Star on the condition of anonymity, said that the idea of a government regulator having the power to hear appeals from people who take issue with that company’s policies concerns them.

It’s one thing for a government regulator to enforce its own rules around illegal content on websites — something Facebook and other tech companies have publicly welcomed — but another thing entirely for that regulator to be able to consider decisions to remove content made by a private entity, said the source.

“I think we should all pause on that,” they said.

Vivek Krishnamurthy, a law professor at the University of Ottawa, said he wants more transparency from the government around its plans for a new regulator with auditing powers.

“What are the constraints on this auditing mechanism,” he said. “Are they going to audit the content? Are they going to audit the decision-making processes of the social media companies?”

Jordan Donich, a Toronto-based criminal defence lawyer, said it will be tough to give law enforcement additional powers to gather information since the companies will want to protect their customers’ privacy.

“I don’t think (the companies will) compromise the vast majority of lawful users by appearing to just flagrantly provide information to the police,” he said.

Currently, tech companies do co-operate with law enforcement and have in-house teams that police illegal content as well, said Donich.

Sometimes tech companies deny law enforcement’s request for information and ask for a court order, said Donich.

“This is what we want,” he said. “We want our information to be protected, because, you know, illegal or not, the police should have some check and balance on their power.”


According to recent reports and surveys, there’s broad public support for government regulation of social media companies in Canada.

The Canadian Race Relations Foundation commissioned Abacus Data to survey 2,000 randomly selected Canadians between Jan. 15 and 18. The survey found that 60 per cent support the federal government doing more to prevent hate speech and racism online. Additionally, 80 per cent agreed that social media companies should be required to remove hate speech and racist content within 24 hours.

It also found that 79 per cent supported expanding the law so that people can be held accountable for what they do and say online.
