One of the big rules on Facebook is that you have to use your real name. This policy has created some controversy over the years, since it makes life harder for some activists, and crime victims, and (most famously) drag queens. But Facebook has always said that the service works so well because you can trust that the person you’re talking to is actually your friend or family member, and not someone who is faking their identity to influence you toward some malign purpose.
If you break that rule, you are engaging in what the company calls “inauthentic behavior,” which it defines as “the use of Facebook or Instagram assets (accounts, pages, groups, or events) to mislead people or Facebook:
about the identity, purpose, or origin of the entity that they represent
about the popularity of Facebook or Instagram content or assets
about the purpose of an audience or community
about the source or origin of content
to evade enforcement under our Community Standards.”
Anyway, along with the other big tech platforms, Facebook is currently involved in a bunch of regulatory fights over privacy and competition issues. And in some crucial areas, public opinion does not appear to be on the company’s side. In this year’s Verge Tech Survey, a national poll found that 72 percent of respondents believe Facebook has too much power, and 56 percent said the government should break up tech companies if they control too much of the economy.
One thing you can do when public opinion turns against you is hire a phalanx of lobbyists and public relations people to make your case in your own name, and Facebook has done just that. The company spent about $81 million on lobbying between 2010 and 2019, and has increased its spending over 2019 levels again this year.
Another thing you can do, though, is hire a bunch of people to defend you in someone else’s name. Broadly speaking, this practice — masking the true sponsor of an idea to make it appear as though it originates from average citizens — is called astroturfing, and it has a long history. The Wikipedia entry for astroturfing notes that Shakespeare’s Julius Caesar begins with Cassius writing fake letters from “the public” to convince Brutus to assassinate the title character, and the idea has inspired the business community ever since.
For example, in 2011, Facebook paid a PR firm called Burson-Marsteller to plant negative stories about Google in the US media. The idea, which was based on the very-funny-in-retrospect notion that an all-but-incomprehensible Gmail feature called Google Social Circle might pose a threat to Facebook, was to scare everyone about the privacy implications of … whatever Google Social Circle was. But then Facebook got caught and apologized, and we didn’t hear a lot about Facebook-led astroturfing for a long time. (Also, Google Social Circle went the way of all Google social products and rapidly faded into obscurity.)
Now, however, Tony Romm reports in The Washington Post:
Facebook is working behind the scenes to help launch a new political advocacy group that would combat U.S. lawmakers and regulators trying to rein in the tech industry, escalating Silicon Valley’s war with Washington at a moment when government officials are threatening to break up large companies.
The organization is called American Edge, and it aims through a barrage of advertising and other political spending to convince policymakers that Silicon Valley is essential to the U.S. economy and the future of free speech, according to three people familiar with the matter as well as documents reviewed by The Washington Post.
According to Romm, American Edge is set up to “navigate a thicket of tax laws in such a way that it can raise money, and blitz the airwaves with ads, without the obligation of disclosing all of its donors.”
Might that mislead people about the identity, purpose, or origin of the entity that American Edge represents? What about the source or origin of the content it produces?
The behavior … it feels somehow … inauthentic. At least to me.
Of course, one reason why these groups exist is that rival companies fund advocacy campaigns of their own to undermine their enemies. And Facebook, to its credit, takes the unusual step of listing the advocacy groups to which it contributes on a public page alongside its lobbying disclosures. Those groups, though, typically don’t list their donors after every ghostwritten op-ed.
David Espinoza appeared unhappy when Arizona joined scores of states investigating Google last year. The Phoenix-based owner of a shoe-and-leather store wrote in a local newspaper he was “amazed and a little dumbfounded” by regulators’ campaign to “change how digital platforms operate.”
“The current system is working for small businesses, and as the old saying goes: if it ain’t broke, don’t fix it,” he wrote.
But Espinoza’s words, published in September by the Arizona Capitol Times, weren’t entirely his own. They were written on his behalf by an advocacy group that’s backed by Google and other tech behemoths, reflecting Silicon Valley’s stealthy new attempts to shape and weaponize public perception in response to heightened antitrust scrutiny.
Romm goes on to explain that Google, Facebook, and Amazon are all funding advocacy groups that are engaging in letter-writing campaigns, polling, and placing op-eds in an effort to shift the conversation — often without any fingerprints from the companies themselves. This is made possible by an arrangement in which the advocacy groups take a huge portion of their funding from these companies, implement a variety of strategies designed to help those companies, and then swear that there is no connection between those two things.
None of this is new or unique to the tech industry, of course. But at a time when conspiracy theories are dominating the news, it feels worthwhile to point out a conspiracy that’s actually real: a group of giant corporations working in the shadows to manipulate public opinion without always disclosing their involvement.
It’s stuff you largely couldn’t do on Facebook. But you can do it if you are Facebook.
The Ratio
Today in news that could affect public perception of the big tech platforms.
Thank you to everyone who wrote in with their thoughts on the future of this item! Based on your responses, we’ve decided to retire the tracker as a daily feature. Instead, we’ll include less-frequent updates as new hotspots emerge around the world. When we do, we will still include the number of cases, deaths, and tests. But we’ll also include more analysis about what we’re seeing and why.
As always, let us know what you think of the changes.
Amazon has faced constant criticism over the years for selling access to Rekognition to police departments, with artificial intelligence researchers, activists, and lawmakers citing concerns about the lack of oversight into how the tech is used in investigations and the potential for built-in bias that makes it unreliable and ripe for racial discrimination.
Now it appears Amazon has decided that police cannot be trusted to use the technology responsibly — although the company has never disclosed just how many police departments actually use the tech. As of last summer, only two — one in Oregon and one in Florida — appeared to be actively using Rekognition, and Orlando has since stopped. A much more widely used facial recognition system appears to be that of Clearview AI, a secretive company now facing down a number of privacy lawsuits after scraping social media sites for photos and building a database of more than 3 billion images that it sells to law enforcement.
Each platform has had its share of content catastrophes, but in India, where many of the most downloaded apps are from China, a contentious border dispute has brought anti-China sentiments to the forefront of the TikTok-YouTube rivalry. An app called Remove China Apps, which scans devices and detects and discards Chinese-origin apps such as TikTok, had notched 5 million downloads before Google removed it from its Play Store for policy violations. Many Indians also feel that Chinese content apps flout local values by showing inappropriate content such as vulgar dance moves. Following the Siddiqui brothers controversy, the rate of TikTok app downloads fell. Local alternatives to TikTok and YouTube — such as Mitron and Bolo Indya — are sprouting up.
it is the human’s first time trying to change the world. and they are exhausted. so while they rest for a little bit. i have stolen their sign. and will trot proudly around the house with it. until they are ready. to fight again
The federal government is ordering the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but stopped short of ordering people to stay off the app.
Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.
“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.
The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.
However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”
Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.
A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.
“We will challenge this order in court,” the spokesperson said.
“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”
The federal Liberals ordered a national security review of TikTok in September 2023, but it was not public knowledge until The Canadian Press reported in March that it was investigating the company.
At the time, it said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide any further details about what expansion it was reviewing.
A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”
Even before the review, ByteDance and TikTok were lightning rods for privacy and safety concerns because Chinese national security laws compel organizations in the country to assist with intelligence gathering.
Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.
Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.
Canada’s review was carried out through the Investment Canada Act, which allows the government to investigate any foreign investment with potential to harm national security.
While cabinet can make investors sell parts of the business or shares, Champagne has said the act doesn’t allow him to disclose details of the review.
Wednesday’s dissolution order was made in accordance with the act.
The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.
— With files from Anja Karadeglija in Ottawa
This report by The Canadian Press was first published Nov. 6, 2024.
LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?
It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.
Here’s how you can prepare your digital life for your survivors:
Apple
The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up you’ll need an Apple device with a fairly recent operating system — iPhones and iPads need iOS or iPadOS 15.2, and MacBooks need macOS Monterey 12.1.
For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.
You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.
Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.
Google
Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.
When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.
You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.
There’s also an option to automatically delete your account after three months of inactivity, so your contacts will have to download any data before that deadline.
Facebook and Instagram
Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.
When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.
The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.
You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.
TikTok
The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.
Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.
X
It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.
Passwords
Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?
Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.
But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.
___
Is there a tech challenge you need help figuring out? Write to us at onetechtip@ap.org with your questions.
LONDON (AP) — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.
The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.
The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.
“Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”
San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.
Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”
“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.
The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.