Facebook Announces New Policy to Crackdown on Manipulated Media – Social Media Today
With other social media platforms looking at how they can turn manipulated media, including deepfakes, into features, Facebook has announced the first iteration of its policy to stop the spread of misleading fake videos, as part of its broader effort to pre-empt the potential rise of problematic deepfake videos.
Facebook says that it’s been meeting with experts in the field to formulate its policy, including people with “technical, policy, media, legal, civic and academic backgrounds”.
As per Facebook:
“As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Facebook says that its new policies do not extend to content that is parody or satire, “or video that has been edited solely to omit or change the order of words”. The latter may seem somewhat problematic, but this type of editing is already covered by Facebook’s existing rules – though Facebook also notes that:
“Videos which don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages. If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
So why doesn’t Facebook just remove these as well? If Facebook has the capacity to identify content as fake, and it’s reported as a violation, Facebook could simply remove all of it, deepfake or not, and eliminate the problem.
But Facebook says that this approach could be counterproductive, because those same images and videos would still be available elsewhere online.
“This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
So Facebook is framing its decision not to remove some manipulated content as a civic duty, similar to its approach to political ads, which it won’t subject to fact-checking because:
“People should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.”
So it’s helping, it’s serving the public interest – and Facebook in no way benefits from hosting such content, and the subsequent engagement it generates, on its platform, as opposed to removing it, and then, potentially, seeing users migrate to some other social network in order to facilitate the same discussion. That’s got nothing to do with it. Purely to benefit the public.
Skepticism aside, deepfakes are clearly an area of concern for the major networks heading into 2020, with Twitter, Google and Facebook all running their own, independent research projects to establish the best ways to detect and remove such content. They’re not doing this for no reason – with so much emphasis on the potential dangers of deepfakes for manipulative messaging, it seems likely that the platforms are seeing increased focus on this type of activity from bad actors, and they’re working to head it off before it has a chance to cause problems.
Given the focus on misinformation since 2016, and the willingness of some to believe what they choose to, you can imagine that deepfakes could indeed be a major weapon for political activists. And worse, in many cases, even when a fake video has been proven false, it’s already too late. The damage has been done, the anger embedded, the opinion formed.
Case in point: a video has been circulating on Facebook for a few years, purportedly depicting a Muslim refugee smashing up a Christian statue in Italy with a hammer.
Except it’s not a Christian statue, he’s not a refugee, and the video wasn’t recorded in Italy. The actual incident occurred in 2017 in Algeria – a majority-Muslim nation – where the statue of a naked woman has long been a subject of religious debate.
This misleading framing of the video has been debunked, repeatedly, and reported. But it still comes up every now and then, sparking anti-Muslim sentiment, even though the details are completely false (this version alone was viewed more than 1.1 million times).
This video is not a deepfake, but as noted, even though people can scroll through the comments and find out that it’s false, even though it’s been debunked over and over, it largely doesn’t matter. The social media news cycle moves fast, and sharing is easy. Most users view things like this once, take it at face value, pass it on, then move on to what’s next.
You can imagine the same dynamic applying to deepfakes – what happens, for example, if someone posts a deepfake of Joe Biden saying something damning? Various obviously manipulated Biden videos are already circulating through Facebook’s network – a deepfake would likely gain traction very fast, probably too fast to rein in. Opinions solidified, responses felt.
You can see why, then, all the major players are working so hard to head off this next level of manipulation at the pass.
As noted, this also comes as TikTok is reportedly working on a new tool which will turn deepfakes into a feature, of sorts.
TikTok says that it has no plans to release the feature in markets outside of China – it’s actually being tested in Douyin, the Chinese version of TikTok. But given the app’s potential requirement to share its data with the Chinese Government, that could be even more concerning – to use the feature, users would need to provide a biometric face scan, which TikTok could then, theoretically, store on its servers.
The Chinese Government operates the most sophisticated citizen surveillance network in the world, comprising more than 170 million CCTV cameras – the equivalent of one for every 12 people in the country. These cameras are equipped with advanced facial recognition capabilities, and China has already used them to identify Uighur Muslims, people who have evaded fines, and protesters in Hong Kong.
Imagine if it also had a database of TikTok users, made available by this feature. You could argue that most adults have a driver’s license, and that would be enough to feed the system regardless, but only around 369 million Chinese people are registered to drive, out of 1.39 billion citizens, while TikTok users can sign up from the age of 13. That’s a lot of valuable data.
Aside from concerns about manipulation via deepfakes, TikTok may also have found a new issue to contend with (note: TikTok has said that the functionality, which has not been approved, would only be available to older users).
In summary, deepfakes could become a major problem, on several fronts, which is why Facebook is putting in the work now to stop the next major misinformation trend.
As Ben Smith, the Editor in Chief of BuzzFeed noted recently:
“I think the media is totally prepared not to repeat the mistakes of the last [election] cycle… but I’m sure we will **** it up in some new way we aren’t expecting.”
Could deepfakes be the thing that throws the next election cycle off balance?
Definitely an element to watch in 2020.
Vatican singles out bishops in urging reflective not reactive social media use
VATICAN CITY (AP) — The Vatican on Monday urged the Catholic faithful, and especially bishops, to be “reflective, not reactive” on social media, issuing guidelines to try to tame the toxicity on Catholic Twitter and other social media platforms and encourage users to instead be “loving neighbors.”
The Vatican’s communications office issued a “pastoral reflection” to respond to questions it has fielded for years about a more responsible, Christian use of social media and the risks online that accompany the rise of fake news and artificial intelligence.
For decades the Holy See has offered such thoughts on different aspects of communications technologies, welcoming the chances for encounter they offer but warning of the pitfalls. Pope Francis of late has warned repeatedly about the risk of young people being so attached to their cell phones that they stop face-to-face friendships.
The new document highlights the divisions that can be sown on social media, and the risk of users remaining in their “silos” of like-minded thinkers and rejecting those who hold different opinions. Such tendencies can result in exchanges that “can cause misunderstanding, exacerbate division, incite conflict, and deepen prejudices,” the document said.
It warned that such problematic exchanges are particularly worrisome “when it comes from church leadership: bishops, pastors, and prominent lay leaders. These not only cause division in the community but also give permission and legitimacy for others likewise to promote similar type of communication,” the message said.
The message could be directed at the English-speaking Catholic Twittersphere, where some prominent Catholic figures, including bishops, frequently engage in heated debates or polemical arguments that criticize Francis and his teachings.
The prefect of the communications office, Paolo Ruffini, said it wasn’t for him to rein in divisive bishops and it was up to their own discernment. But he said the general message is one of not feeding the trolls or taking on “behavior that divides rather than unites.”
Russia says U.S. Senator should say if Ukraine took his words out of context
MOSCOW, May 29 (Reuters) – Russia on Monday said U.S. Senator Lindsey Graham should say publicly if he believes his words were taken out of context by a Ukrainian state video edit of his comments about the war that provoked widespread condemnation in Moscow.
In an edited video released by the Ukrainian president’s office of Graham’s meeting with Volodymyr Zelenskiy in Kyiv on Friday, Graham was shown saying “the Russians are dying” and then saying U.S. support was the “best money we’ve ever spent”.
After Russia criticised the remarks, Ukraine released a full video of the meeting on Sunday which showed the two remarks were not directly linked.
Russia’s foreign ministry said Western media had sought to shield the senator from criticism and said that Graham should publicly state if he feels his words were taken out of context by the initial Ukrainian video edit.
“If U.S. Senator Lindsey Graham considers his words were taken out of context by the Ukrainian regime and he doesn’t actually think in the way presented then he can make a statement on video with his phone,” Foreign Ministry Spokeswoman Maria Zakharova said in a video posted on Telegram.
“Only then will we know: does he think the way that was said or was it a performance by the Kyiv regime?”
Graham’s office did not immediately respond to a request for comment.
The initial video of Graham’s remarks triggered criticism from across Moscow, including from the Kremlin, Putin’s powerful Security Council and from the foreign ministry.
Graham said he had simply praised the spirit of Ukrainians in resisting a Russian invasion with assistance provided by Washington.
Graham said he had mentioned to Zelenskiy “that Ukraine has adopted the American mantra, ‘Live Free or Die.’ It has been a good investment by the United States to help liberate Ukraine from Russian war criminals.”
Russia’s interior ministry has put Graham on a wanted list after the Investigative Committee said it was opening a criminal probe into his comments. It did not specify what crime he was suspected of.
In response, Graham said: “I will wear the arrest warrant issued by Putin’s corrupt and immoral government as a Badge of Honor.
“…I will continue to stand with and for Ukraine’s freedom until every Russian soldier is expelled from Ukrainian territory.”
A South Carolina Republican known for his hawkish foreign policy views, Graham has been an outspoken champion of increased military support for Ukraine in its battle against Russia.
Jamie Sarkonak: Liberals bring identity quotas to Canada Media Fund
In 2021, the Liberals said they would dramatically boost funding for the Canada Media Fund. And they did — but that funding came with diversity quotas and a new emphasis on diversity, equity and inclusion (DEI).
The Canada Media Fund is supposed to oversee a funding pool that supports the creation of Canadian media projects in the areas of drama, kids’ programming, documentaries and even video games. According to its most recent annual report, about half its revenue ($184 million) comes from the federal government through the Department of Canadian Heritage (another near-half comes from broadcasting companies through the country’s broadcasting regulator, the CRTC). The department also has the power to appoint two of the fund’s board members.
The Canada Media Fund is doing a lot more than broadly funding content creation, though. With more federal funding brought in after the past election, it is now responsible for greenlighting projects to meet identity quotas set out by the Liberals.
According to the Canada Media Fund’s contract with Canadian Heritage, which has been obtained by the National Post through a previously completed access-to-information request, the number of projects funded with government-sourced dollars and led by “people of equity-deserving groups” will have to amount to 45 by 2024. The number of “realized projects” for people of these groups must amount to 25 by 2024. Finally, by 2024, a quarter of funded “key creative positions” must be held by people from designated diversity groups.
These funding quotas are similar to the CBC’s new diversity requirements for budgeting. When the CBC’s broadcasting licence was renewed by the CRTC last year, it was required to dedicate 30 per cent of its independent content production budget to diverse groups, which will rise to 35 per cent in 2026. While the CRTC is arm’s-length from government, a Liberal-appointed CRTC commissioner appeared eager to impose quotas that were on par with the governing party’s agenda on diversity, equity and inclusion (DEI).
The government’s agreement with the Canada Media Fund also sets aside $20 million of the new money explicitly for people considered diverse enough to check a box — anyone from “sovereignty-seeking” and “equity-seeking” groups.
“’Sovereignty- and Equity-Seeking Community’ refers to the individuals who identify as women, First Nations, Métis, Inuit, Racialized, 2SLGBTQ+, Persons with disabilities/Disabled Persons, Regional, and Official Language Minority Community,” reads the Canada Media Fund’s explainer on who gets diversity status.
Aside from getting mandatory coverage through the use of quotas, the groups listed above are shielded by “narrative positioning” policies that took effect this year. If the main character, key storyline, or subject matter has anything to do with the above groups, creators must either be from that group or take “comprehensive measures that have and will be undertaken to create the content responsibly, thoughtfully and without harm.” These can include consultations, sharing of ownership rights, and hiring from the community. While narrative requirements weren’t mandated by the Liberals in their grant to the fund, they complement the overall DEI strategy.
Storytellers vying for certain grants have to sign an attestation form agreeing with the narrative policy and write a compliance plan if their works have anything to do with the above groups. Plainly, it’s a force of narrative control.
This doesn’t go both ways; women can make documentaries about men consult-free, non-white people can make TV dramas about white people consult-free, and so on.
Diversity is being tracked statistically on an internal system that logs the identities of key staff and leadership on every Canada Media Fund project. The diversity repository was rolled out this year. Internal documents indicate these stats will be used to monitor program progress and adjust policy going forward.
These changes are all directly linked to a Liberal platform point on media modernization. In the 2021 Liberal platform, the party committed to doubling the government’s contribution to the fund. Since then, the Liberal platform has been cited directly in internal documents outlining the Canada Media Fund’s three-year growth strategy (which explains how the new money will be used, in part, to ramp up DEI efforts).
Together, it looks like both the fund and the party responsible for doubling its taxpayer support are more concerned about the identities of filmmakers and TV producers than the actual media being produced.
Creators should be able to tell stories about others without the narrative department’s oversight — the more narrative control, the more it starts to sound like propaganda. Good creators wanting to tell an authentic story should conduct research and be respectful of the people they cover — but they shouldn’t be bound to consultations and ownership agreements.