Apple and Google's COVID-19 Exposure Notification API: Questions and Answers – EFF

Apple and Google are undertaking an unprecedented team effort to build a system for Androids and iPhones to interoperate in the name of technology-assisted COVID-19 contact tracing.

The companies’ plan is part of a torrent of proposals to use Bluetooth signal strength to enhance manual contact tracing with proximity-based mobile apps. As Apple and Google are an effective duopoly in the mobile operating system space, their plan carries special weight. Apple and Google’s tech would be largely decentralized, keeping most of the data on users’ phones and away from central databases. This kind of app has some unavoidable privacy tradeoffs, as we’ll discuss below, and Apple and Google could do more to prevent privacy leaks. Still, their model is engineered to reduce the privacy risks of Bluetooth proximity tracking, and it’s preferable to other strategies that depend on a central server.

Proximity tracking apps might be, at most, a small part of a larger public health response to COVID-19. This use of Bluetooth technology is unproven and untested, and it’s designed for use in smartphone apps that won’t reach everyone. The apps built on top of Apple and Google’s new system will not be a “magic bullet” technosolution to the current state of shelter-in-place. Their effectiveness will rely on numerous tradeoffs and sufficient trust for widespread public adoption. Insufficient privacy protections will reduce that trust and thus undermine the apps’ efficacy.

How Will It Work?

As soon as today, Apple and Google are beginning to roll out parts of the iPhone and Android infrastructure that developers need to be able to build Bluetooth-based proximity tracking apps. If you download one of these apps, it will use your phone’s Bluetooth chip to do what Bluetooth does: emit little radio pings to find other devices. Usually, these pings are looking for your external speakers or wireless mouse. In the case of COVID-19 proximity tracking apps, they will be reaching out to nearby people who have also opted into using Bluetooth for this purpose. Their phones will also be emitting and listening for those pings. The apps will use Bluetooth signal strength to estimate the distance between the two phones. If they are sufficiently close—6 feet or closer, based on current CDC guidance—both will log a contact event.
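
To make that concrete, here is a minimal sketch in Python of the kind of logging an app might do when it hears a ping. The path-loss formula, the calibration constants, and the two-meter threshold are illustrative assumptions, not values from Apple and Google's specification.

```python
import time

# Illustrative calibration constants (assumptions, not values from the spec):
# MEASURED_POWER_DBM is the expected signal strength at one meter, and
# PATH_LOSS_EXPONENT models how quickly the signal decays with distance.
MEASURED_POWER_DBM = -59
PATH_LOSS_EXPONENT = 2.0
CONTACT_DISTANCE_METERS = 2.0  # roughly the CDC's 6-foot guidance

def estimate_distance(rssi_dbm: float) -> float:
    """Estimate distance from Bluetooth signal strength with a log-distance
    path-loss model (a common but very noisy heuristic)."""
    return 10 ** ((MEASURED_POWER_DBM - rssi_dbm) / (10 * PATH_LOSS_EXPONENT))

contact_log = []  # (rpid, timestamp) tuples kept on the device

def handle_ping(rpid: bytes, rssi_dbm: float) -> None:
    """Log a contact event when a nearby ping looks close enough."""
    if estimate_distance(rssi_dbm) <= CONTACT_DISTANCE_METERS:
        contact_log.append((rpid, time.time()))
```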

There are now many different proposals to do basically this same thing, with slightly different considerations for efficiency, security, and privacy. The rest of this post looks at Apple and Google’s proposal (version 1.1) in particular.

Each phone will generate a new special-purpose private key each day, known as a “temporary exposure key.” It will then use that key to generate random identification numbers called “rolling proximity identifiers” (RPIDs). Pings will go out at least once every five minutes when Bluetooth is enabled. Each ping will contain the phone’s current RPID, which will change every 10 to 20 minutes. This is meant to reduce the risk that third-party trackers can use the pings to passively track people’s locations. The operating system will save all of its temporary exposure keys, and log all the RPIDs it comes into contact with, for the past 2 weeks.
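
A simplified sketch of that key schedule might look like the following. It uses HMAC-SHA256 from Python's standard library as a stand-in for the HKDF and AES primitives in the actual specification; the key size, label, and rotation interval here are illustrative assumptions.

```python
import hashlib
import hmac
import os

RPID_INTERVAL_SECONDS = 15 * 60  # assumed rotation interval (spec: every 10 to 20 minutes)
KEY_LIFETIME_DAYS = 14           # keys and observed RPIDs are kept for two weeks

def new_temporary_exposure_key() -> bytes:
    """Generate the day's random temporary exposure key (16 bytes assumed)."""
    return os.urandom(16)

def rolling_proximity_identifier(exposure_key: bytes, interval_number: int) -> bytes:
    """Derive the RPID broadcast during one interval of the day.
    (Stand-in construction: HMAC-SHA256 truncated to 16 bytes; the real
    protocol derives a separate key with HKDF and encrypts with AES.)"""
    msg = b"EN-RPI" + interval_number.to_bytes(4, "little")
    return hmac.new(exposure_key, msg, hashlib.sha256).digest()[:16]

# Example: one daily key yields a fresh identifier every interval, 96 per day
# at the assumed 15-minute rotation.
tek = new_temporary_exposure_key()
todays_rpids = [rolling_proximity_identifier(tek, i) for i in range(96)]
```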

Proximity tracking apps might be, at most, a small part of a larger public health response to COVID-19.

If an app user learns they are infected, they can grant a public health authority permission to publicly share their temporary exposure keys. In order to prevent people from flooding the system with false alarms, health authorities need to verify that the user is actually infected before they may upload their keys. After they are uploaded, a user’s temporary exposure keys are known as “diagnosis keys.” The diagnosis keys are stored in a public registry and available to everyone else who uses the app. 

The diagnosis keys contain all the information needed to re-generate the full set of RPIDs associated with each infected user’s device. Participating apps can use the registry to compare the RPIDs a user has been in contact with against the RPIDs of confirmed COVID-19 carriers. If the app finds a match, the user gets a notification of their risk of infection.
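
Continuing the stand-in construction from the sketch above, the on-device matching step could look roughly like this; the registry format and the 96-intervals-per-day figure are assumptions for illustration.

```python
def rpids_for_diagnosis_key(diagnosis_key: bytes, intervals_per_day: int = 96) -> set:
    """Re-generate every RPID an infected user's device broadcast that day."""
    return {rolling_proximity_identifier(diagnosis_key, i)
            for i in range(intervals_per_day)}

def check_exposure(diagnosis_keys: list, contact_log: list) -> bool:
    """Compare the public diagnosis keys against the RPIDs this phone has
    observed; any overlap means the user may have been exposed."""
    observed = {rpid for rpid, _timestamp in contact_log}
    for key in diagnosis_keys:
        if observed & rpids_for_diagnosis_key(key):
            return True
    return False
```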

The program will roll out in two phases. In phase 1, Google and Apple are building a new API into their respective platforms. This API will contain the bare-bones functionality necessary to make their proximity-tracing scheme work on both iPhones and Androids. Other developers will have to build apps that actually use the new API. Draft specifications for the API have already been published, and it could be available for developers to use this week. In phase 2, the companies say that proximity tracking “will be introduced at the operating system level to help ensure broad adoption.” We know a lot less about this second phase.

Will It Work?

Several technical and social challenges stand in the way of automated proximity tracking. First, these apps assume that “cell phone = human.” But even in the U.S., cell phone adoption is far from universal. Elderly people and low-income households are less likely to own smartphones, which could leave out many people at the highest risk for COVID-19. Many older phones won’t have the technology necessary for Bluetooth proximity tracking. Phones can be turned off, left at home, run out of battery, or be set to airplane mode. So even a proximity tracking system with near-universal adoption is going to miss millions of contacts each day.

These apps assume that “cell phone = human,” but cell phone adoption is far from universal.

Second, proximity tracking apps have to make the profound leap from “there is a strong Bluetooth signal near me” to “two humans are experiencing an epidemiologically relevant contact.” Bluetooth technology was not made for this. An app may log a connection when two people wearing masks briefly pass each other on a windy sidewalk, or when two cars with windows up sit next to each other in traffic. The proximity of a patient to a nurse in full PPE may look the same to Bluetooth as the proximity of two people kissing. Also, Bluetooth can be disrupted by large concentrations of water, like the human body. In some situations, although two people may be close enough to touch, their phones may not be able to establish radio contact. Accurately estimating the distance between two devices is even more difficult. 

Third, Apple and Google’s proposal currently specifies that phones will broadcast signals as seldom as once every five minutes. So even under otherwise optimal conditions, two phones may not log a contact unless they have been near each other for several minutes.

Fourth, a significant portion of the population must actually use the apps. In Singapore, a government-developed app has only achieved about 20% adoption after several weeks. As a mobile platform duopoly, Apple and Google are in perhaps the best position possible to encourage the deployment of a new piece of software at scale. Even so, adoption may be slow, and it will never be universal.

Will It Be Private and Secure?

The truth is, nobody really knows how effective proximity tracking apps will be. Further, we need to weigh the potential benefits against the very real risks to privacy and security.

First, any proximity tracking system that checks a public database of diagnosis keys against RPIDs on a user’s device—as the Apple-Google proposal does—leaves open the possibility that the contacts of an infected person will figure out which of the people they encountered is infected. For example, if you have a contact with a friend, and your friend reports that they are infected, you could use your own device’s contact log to learn that they are sick. Taken to an extreme, bad actors could collect RPIDs en masse, connect them to identities using face recognition or other tech, and create a database of who’s infected. Other proposals, like the EU’s PEPP-PT and France and Germany’s ROBERT, purport to prevent this kind of attack, or at least make it more difficult, by performing matching on a central server; but this introduces more serious risks to privacy.

Second, Apple and Google’s choice to have infected users publicly share their once-per-day diagnosis keys—instead of just their every-few-minute RPIDs—exposes those people to linkage attacks. A well-resourced adversary could collect RPIDs from many different places at once by setting up static Bluetooth beacons in public places, or by convincing thousands of users to install an app. The tracker will receive a firehose of RPIDs at different times and places. With just the RPIDs, the tracker has no way of linking its observations together. 

[Figure: a plain street map scattered with red Bluetooth ping dots.] If a bad actor were to set up a Bluetooth beacon or use an app to collect the location of people’s RPIDs, all they would get is a map like this: lots of different pings, but no indication of which pings belong to which individual.

But once a user uploads their daily diagnosis keys to the public registry, the tracker can use them to link together all of that person’s RPIDs from a single day. 

[Figure: the same plain street map, this time with a line connecting some of the red Bluetooth ping dots into one person’s daily route.]

If someone uploads their daily diagnosis keys to a central server, a bad actor could then use those keys to link together multiple RPID pings. This can expose their daily routine, such as where they live and work.

This can create a map of the user’s daily routine, including where they work, live, and spend time. Such maps are highly unique to each person, so they could be used to identify the person behind the uploaded diagnosis key. Furthermore, they can reveal a person’s home address, place of employment, and trips to sensitive locations like a church, an abortion clinic, a gay bar, or a substance abuse support group. The risk of location tracking is not unique to Bluetooth apps, and actors with the resources to pull off an attack like this likely have other ways of acquiring similar information from cell towers or third-party data brokers. But the risks associated with Bluetooth proximity tracking in particular should be reduced wherever possible.
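
To make the linkage risk concrete, here is a sketch of what a tracker could do once a daily diagnosis key is published. It reuses the stand-in RPID derivation above, and the (rpid, time, place) sightings format is hypothetical.

```python
def link_sightings(diagnosis_key: bytes, sightings: list) -> list:
    """Given one published daily diagnosis key and a log of (rpid, time, place)
    sightings collected by beacons or apps, recover that person's movements
    for the day."""
    person_rpids = rpids_for_diagnosis_key(diagnosis_key)
    route = [(when, where) for rpid, when, where in sightings
             if rpid in person_rpids]
    return sorted(route)  # a chronological route: home, commute, workplace...
```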

This risk can be mitigated by shortening the time that a single diagnosis key is used to generate RPIDs, at the cost of increasing the download size of the exposure database. Similar projects, like MIT’s PACT, propose using hourly keys instead of daily keys. 

Third, police may seek data created by proximity apps. Each user’s phone will store a log of their physical proximity to the phones of other people, and thus of their intimate and expressive associations with some of those people, for several weeks. Anyone who has access to the proximity app data from two users’ phones will be able to see whether, and on what days, they have logged contacts with each other. This risk is likely inherent to any proximity tracking protocol. It should be mitigated by giving users the option to selectively turn off the app and delete proximity data from certain time periods. Like many other privacy threats, it should also be mitigated with strong encryption and passwords.

Apple and Google’s protocol may be susceptible to other kinds of attacks. For example, there’s currently no way to verify that the device sending an RPID is actually the one that generated it, so trolls could collect RPIDs from others and rebroadcast them as their own. Imagine a network of Bluetooth beacons set up on busy street corners that rebroadcast all the RPIDs they observe. Anyone who passes by a “bad” beacon would log the RPIDs of everyone else who was near any one of the beacons. This would lead to a lot of false positives, which might undermine public trust in proximity tracing apps—or worse, in the public health system as a whole.

What Should App Developers Do?

Apple and Google’s phase 1 is an API, which leaves it to the rest of the world to develop the actual apps that use the new API. Google and Apple have said they intend “public health authorities” to make apps. But most health authorities won’t have the in-house technical resources to do that, so it’s likely they will partner with private companies. Anyone who builds an app on top of the interface will have to do a lot of things right to make sure it’s private and secure. 

Bad-faith app developers may try to tear down the tech giants’ carefully constructed privacy guarantees. For example, although a user’s data is supposed to stay on their device, an app with access to the API might be able to upload everything to a remote server. It could then link daily private keys to a mobile ad ID or other identifier, and exploit users’ association history to profile them. It could also use the app as a “Trojan horse” to convince users to agree to a whole suite of more invasive tracking.

So, what’s a responsible app developer to do? For starters, they should respect the protocol they’re building on. Developers shouldn’t try to graft a more “centralized” protocol, which shares more data with a central authority, on top of Apple and Google’s more “decentralized” model that keeps users’ data on their devices. Also, developers shouldn’t share any data over the Internet beyond what is absolutely necessary: just uploading diagnosis keys when an infected user chooses to do so.

Developers should be extremely up-front with their users about what data the app is collecting and how to stop it. Users should be able to stop and start sharing RPIDs at any time. They also should be able to see the list of the RPIDs they’ve received, and delete some or all of that contact history.
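
A minimal sketch of what that could look like, using the same hypothetical in-memory contact log of (rpid, timestamp) tuples as above:

```python
from datetime import datetime

def list_contacts(contact_log: list) -> list:
    """Let the user see every RPID the device has received, with timestamps."""
    return [(rpid.hex(), datetime.fromtimestamp(ts)) for rpid, ts in contact_log]

def delete_contacts_between(contact_log: list, start_ts: float, end_ts: float) -> list:
    """Drop all contact entries logged in a user-chosen time window."""
    return [(rpid, ts) for rpid, ts in contact_log
            if not (start_ts <= ts <= end_ts)]
```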

The whole system depends on trust.

Equally important is what not to do. This is a public health crisis, not a chance to grow a startup. Developers should not force users to sign up for an account for anything. Also, they shouldn’t ship a contact tracing app with extra, unnecessary features. The app should do its job and get out of the way, not try to onboard users to a new service. 

Obviously, proximity tracing apps shouldn’t have anything to do with ads (and the exploitative, data-sucking mess that comes with them). Likewise, they shouldn’t use analytics libraries that share data with third parties. In general, developers should use strong, transparent technical and policy safeguards to wall this data off to COVID-19 purposes and only COVID-19 purposes.

The whole system depends on trust. If users don’t trust that an app is working in their best interests, they will not use it. So developers need to be as transparent as possible about how their apps work and what risks are involved. They should publish source code and documentation so that tech-savvy users and independent technologists can check their work. And they should invite security audits and penetration testing from professionals to be as confident as possible that their apps actually do what they say they will.

All of this will take time. There’s a lot that can go wrong, and too much is at stake to afford rushed, sloppy software. Public health authorities and developers should take a step back and make sure they get things right. And users should be wary of any apps that ship out in the days following Apple and Google’s first API release.

What Should Apple and Google Do?

Apple and Google should be transparent about exactly what their criteria are.

During the first phase, Apple and Google have said that the API can “only [be] used for contact tracing by public health authorities apps,” which “will receive approval based on a specific set of criteria designed to ensure they are only administered in conjunction with public health authorities, meet our privacy requirements, and protect user data.” Apple and Google should be transparent and specific about exactly what these criteria are. Through these criteria, the companies can control what other permissions apps have. For example, they could prevent COVID-19 proximity tracking apps from accessing mobile ad IDs or other device identifiers. They could also make more detailed policy prescriptions, like requiring that any app using the API have a clear mechanism for users to go back and delete parts of their contact log. Apple and Google’s app store approval criteria and related restrictions must also be evenly applied; if Apple and Google make exceptions for governments or companies that they are friendly with, they would undermine the trust necessary for informed consent.

In the second phase, the companies will build the proximity tracking technology directly into Android and iOS. This means that no app will be needed initially, though Apple and Google propose that the user be prompted to download a public health app if an exposure match is detected. All of the recommendations for app developers above also apply to Apple and Google here. Critically, the promised opt-in must obtain specific, informed consent from each user before activating any kind of proximity tracking. They need to make it easy for users who opt in to later opt out, and to view and delete the data that the device has collected. They should create strong technical barriers between the data collected for proximity tracking and everything else. And they should open-source their implementations so that independent security analysts can check their work.

This program must sunset when the COVID-19 crisis is over.

Finally, this program must sunset when the COVID-19 crisis is over. Proximity tracking apps should not be repurposed for other things, like tracking milder seasonal flu outbreaks or finding witnesses to a crime. Google and Apple have said that they “can disable the exposure notification system on a regional basis when it is no longer needed.” This is an important ability, and Apple and Google should establish a clear, concrete plan for ending this program and removing the APIs from their operating systems. They should publicly state how they will define “the end of the crisis,” including what criteria they will look for, and which public health authorities will guide them.

There will be no quick tech solution to COVID-19. No app will let us return to business as usual. App-assisted contact tracing will have serious limitations, and we don’t yet know the scope of the benefits. If Apple and Google are going to spearhead this grand social experiment, they must do it in a way that keeps privacy risks to an absolute minimum. And if they want it to succeed, they must earn and keep the public’s trust.

Ottawa orders TikTok’s Canadian arm to be dissolved

The federal government is ordering the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but stopped short of ordering people to stay off the app.

Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.

“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.

The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.

However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”

Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.

A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.

“We will challenge this order in court,” the spokesperson said.

“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”

The federal Liberals ordered a national security review of TikTok in September 2023, but it was not public knowledge until The Canadian Press reported in March that it was investigating the company.

At the time, it said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide any further details about what expansion it was reviewing.

A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”

Even before the review, ByteDance and TikTok were a lightning rod for privacy and safety concerns because Chinese national security laws compel organizations in the country to assist with intelligence gathering.

Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.

Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.

Canada’s review was carried out through the Investment Canada Act, which allows the government to investigate any foreign investment that could harm national security.

While cabinet can make investors sell parts of the business or shares, Champagne has said the act doesn’t allow him to disclose details of the review.

Wednesday’s dissolution order was made in accordance with the act.

The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.

— With files from Anja Karadeglija in Ottawa

This report by The Canadian Press was first published Nov. 6, 2024.

The Canadian Press. All rights reserved.

Here is how to prepare your online accounts for when you die

LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?

It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.

Here’s how you can prepare your digital life for your survivors:

Apple

The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up you’ll need an Apple device with a fairly recent operating system: iPhones and iPads need iOS or iPadOS 15.2, and MacBooks need macOS Monterey 12.1.

For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.

You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.

Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.

Google

Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.

When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.

You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.

There’s also an option to automatically delete your account after three months of inactivity, so your contacts will have to download any data before that deadline.

Facebook and Instagram

Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.

When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.

The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.

You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.

TikTok

The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.

Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.

X

It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.

Passwords

Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?

Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.

But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.

___

Is there a tech challenge you need help figuring out? Write to us at onetechtip@ap.org with your questions.

Google’s partnership with AI startup Anthropic faces a UK competition investigation

LONDON (AP) — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.

The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.

The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.

“Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”

San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.

Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”

“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.

The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.

The Canadian Press. All rights reserved.
