Few apps made by a Big Tech company have improved more over the years than Google Maps. When it launched in 2005, it was a moderately better alternative to AOL’s MapQuest. With the rise of smartphones, it became truly essential to the lives of millions — upending incumbents whose entire business had been selling expensive, subscription-based in-car navigation systems. And with each passing year it improves: offering advice about when to change lanes, rerouting you to avoid traffic, and even telling you which exit to take when climbing out of the New York subway. Today is its 15th birthday.
It’s a happy story in a relatively dark time for consumer tech, so it makes sense that Google would want to celebrate. The company marked the occasion with a lightly refreshed design, including a good-looking new pin-shaped logo. It also sat for a portrait in Wired, where Alphabet CEO Sundar Pichai took a victory lap with Lauren Goode and Boone Ashworth:
“Overall, I think computing should work in a way where it’s much more intuitive to the way people live and not the other way around,” Pichai says. “AR and Maps is really in the sweet spot of that, because as humans we’re walking around the world, perceiving a lot, trying to understand a lot.” Pichai says he sees a future in which Maps users are walking around and an AR layer of information is popping up in Maps, showing them vegetarian menu options at nearby restaurants.
That doesn’t mean AR in Google Maps works like magic now—or will in the near future. “We talk about the double-edge sword of AR,” says Alex Komoroske, director of product management at Maps. “If you get it exactly right, it’s extremely intuitive. But if we get it wrong, it is actively confusing. It’s worse than showing nothing.”
People walking around and finding themselves subject to ubiquitous computing — whether they like it or not — is a subject that has been in the news constantly of late, as we debate the rise of for-profit facial recognition and tools like Clearview AI. It’s a story that, to my mind, starts with the rise of Google Maps.
But first, a bit of history.
“Worse than showing nothing” is what Google Maps was accused of a decade ago in Germany, where in the aftermath of the Nazi regime, privacy-conscious Germans objected to the latest feature added to the app in the name of progress: Street View, which photographed everyone’s homes and allowed anyone to browse them at their leisure. In response to criticism, then-Google CEO Eric Schmidt famously suggested that people angry about the loss of privacy should simply move. (To where?!) Angry Germans sued, but ultimately lost. The courts ruled that, because the photos had been taken from a public road, and people could opt out of having their homes shown, their privacy had not been violated.
Of course, one reason that people object to these massive data-collection schemes is that they almost always gather more data than even their creators intend. Street View cars, for example, connected to unsecured Wi-Fi networks as they made their rounds between 2008 and 2010 — and when they did, slurped up “snippets of e-mails, photographs, passwords, chat messages, [and] postings on websites and social networks,” according to a 2012 story in the New York Times.
Google said it had all been a mistake and apologized, and Germany fined the company just shy of the maximum for a data privacy breach on that scale: a hilarious 145,000 euros. (I am not leaving out any zeroes by accident there.) In the intervening years, like most data privacy scandals, it has been more or less forgotten.
Still, the case feels freshly relevant in light of the past month’s news about Clearview AI. Like Google in 2008, Clearview slurps up public data — in this case, photos of people posted publicly on the internet — to build a for-profit tool without the permission of anyone involved.
In fact, much of the news in the past week has been companies (including Google!) leaping up to insist that Clearview does not have permission to build its Google-for-faces tool, which the company says it sells only to law enforcement. Twitter, Facebook, LinkedIn, and Venmo have sent similar cease-and-desist letters.
No one seems terribly confident those letters will be effective, though. Last year, another for-profit company won its case against LinkedIn over scraping the site’s public content. There are arguably some good reasons for that — the ability to scrape public sites is useful to journalists and academics, for example.
The uses and potential misuses of Clearview’s technology strike me as plainly dangerous in a way that Street View never was. Google offered you a view of an address you could have visited yourself, and — critically — allowed homeowners to opt out of the program, blurring the view of their houses. Like other Google Maps features, it was conceived as a tool for helping people get around, not as a means of empowering the prison-industrial complex.
Still, for everything Google Maps did right — and I am a highly satisfied customer — it also heralded a new era in networked photography. You cannot make a previously unseen world visible without making it, at least in some ways, less secure. Look at the once-sleepy neighborhoods transformed into clogged wrecks the moment that Google Maps (through its acquisition of Waze) gained visibility into traffic patterns, and began rerouting the world in the name of efficiency. Once again, making something easier to see made a large group of people feel less safe.
On the whole, at least for me, I’d say it has been a good bargain. But as Maps turns 15, it seems worth noting that there’s a straight line from Street View to Clearview. We’re beginning to understand in America what Germans knew a decade ago — that whatever miracles technology can provide must always be weighed against the value of simply being left alone.
The Ratio
Today in news that could affect public perception of the big tech platforms.
After the 2016 election, much was made of the threats posed to American democracy by foreign disinformation. Stories of Russian troll farms and Macedonian fake-news mills loomed in the national imagination. But while these shadowy outside forces preoccupied politicians and journalists, Trump and his domestic allies were beginning to adopt the same tactics of information warfare that have kept the world’s demagogues and strongmen in power.
Every presidential campaign sees its share of spin and misdirection, but this year’s contest promises to be different. In conversations with political strategists and other experts, a dystopian picture of the general election comes into view—one shaped by coordinated bot attacks, Potemkin local-news sites, micro-targeted fearmongering, and anonymous mass texting. Both parties will have these tools at their disposal. But in the hands of a president who lies constantly, who traffics in conspiracy theories, and who readily manipulates the levers of government for his own gain, their potential to wreak havoc is enormous.
The two filed a class-action lawsuit against Facebook and Cognizant on Wednesday, alleging the companies made content moderators work under dangerous conditions that caused debilitating physical and psychological harm and did little to help them cope with the traumas they suffered as a result. Jeudy also has filed a discrimination charge against Cognizant with the Equal Employment Opportunity Commission.
The lawsuit says the two companies ignored the very safety standards they helped create. It also alleges that Facebook’s outsourcing relationship with Cognizant is a way for the social media giant to avoid accountability for the mental health issues that result from moderating graphic content on the platform.
In the statement for users, TikTok said that it was “extremely sad about this tragedy” and guaranteed that its top priority was to “foster a secure and positive environment on the application.” The company wrote, “We have measures in place to protect users from misusing the app, including simple mechanisms that allow you to report content that violates our terms of use.” Insofar as these mechanisms exist, however, they had clearly not worked as well as advertised. […]
According to the ByteDance source, TikTok’s chief of operations in Brazil and Latin America advised employees of the Brazilian office not to say anything about what had occurred. “Her orders were clear: ‘Don’t let it go viral,’” the source told me.
Cycling is dangerous, but emoji are cute. So naturally:
Here comes Ford with a novel solution: an emoji jacket. As part of its “Share the Road” campaign to improve cycling safety, the automaker’s European division designed a cycling jacket with an LED display on the back that lights up with various emoji to convey the cyclist’s mood. A smiley face indicates a happy cyclist, a frowny face a less happy one, and so on. There are also directional symbols for when a cyclist intends to make a turn and a hazard symbol when they may be experiencing a flat tire.
The federal government is ordering the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but stopped short of ordering people to stay off the app.
Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.
“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.
The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.
However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”
Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.
A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.
“We will challenge this order in court,” the spokesperson said.
“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”
The federal Liberals ordered a national security review of TikTok in September 2023, but it was not public knowledge until The Canadian Press reported in March that it was investigating the company.
At the time, it said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide any further details about what expansion it was reviewing.
A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”
Even before the review, ByteDance and TikTok were lightning rods for privacy and safety concerns because Chinese national security laws compel organizations in the country to assist with intelligence gathering.
Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.
Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.
Canada’s review was carried out through the Investment Canada Act, which allows the government to investigate any foreign investment with the potential to harm national security.
While cabinet can make investors sell parts of the business or shares, Champagne has said the act doesn’t allow him to disclose details of the review.
Wednesday’s dissolution order was made in accordance with the act.
The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.
— With files from Anja Karadeglija in Ottawa
This report by The Canadian Press was first published Nov. 6, 2024.
LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?
It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.
Here’s how you can prepare your digital life for your survivors:
Apple
The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up you’ll need an Apple device with a fairly recent operating system — iPhones and iPads need iOS or iPadOS 15.2, and Macs need macOS Monterey 12.1.
For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.
You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.
Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.
Google
Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.
When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.
You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.
There’s also an option to automatically delete your account after three months of inactivity, so your contacts will have to download any data before that deadline.
Facebook and Instagram
Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.
When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.
The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.
You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.
TikTok
The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.
Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.
X
It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.
Passwords
Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?
Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.
But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.
LONDON (AP) — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.
The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.
The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.
“Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”
San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.
Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”
“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.
The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.