Apple and Google are undertaking an unprecedented team effort to build a system for Androids and iPhones to interoperate in the name of technology-assisted COVID-19 contact tracing.
The companies’ plan is part of a torrent of proposals to use Bluetooth signal strength to enhance manual contact tracing with proximity-based mobile apps. As Apple and Google are an effective duopoly in the mobile operating system space, their plan carries special weight. Apple and Google’s tech would be largely decentralized, keeping most of the data on users’ phones and away from central databases. This kind of app has some unavoidable privacy tradeoffs, as we’ll discuss below, and Apple and Google could do more to prevent privacy leaks. Still, their model is engineered to reduce the privacy risks of Bluetooth proximity tracking, and it’s preferable to other strategies that depend on a central server.
Proximity tracking apps might be, at most, a small part of a larger public health response to COVID-19. This use of Bluetooth technology is unproven and untested, and it’s designed for use in smartphone apps that won’t reach everyone. The apps built on top of Apple and Google’s new system will not be a “magic bullet” technosolution to the current state of shelter-in-place. Their effectiveness will rely on numerous tradeoffs and sufficient trust for widespread public adoption. Insufficient privacy protections will reduce that trust and thus undermine the apps’ efficacy.
How Will It Work?
As soon as today, Apple and Google are beginning to roll out parts of the iPhone and Android infrastructure that developers need to be able to build Bluetooth-based proximity tracking apps. If you download one of these apps, it will use your phone’s Bluetooth chip to do what Bluetooth does: emit little radio pings to find other devices. Usually, these pings are looking for your external speakers or wireless mouse. In the case of COVID-19 proximity tracking apps, they will be reaching out to nearby people who have also opted into using Bluetooth for this purpose. Their phones will also be emitting and listening for those pings. The apps will use Bluetooth signal strength to estimate the distance between the two phones. If they are sufficiently close—6 feet or closer, based on current CDC guidance—both will log a contact event.
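How an app turns signal strength into "6 feet or closer" is left to developers. A common approach is the log-distance path-loss model; the sketch below is a hypothetical illustration, and the calibration constants in it are assumptions, not values from any published spec.

```python
# Hypothetical sketch: estimating distance from Bluetooth signal strength
# with the log-distance path-loss model. The calibration constants are
# illustrative assumptions; real devices need per-model calibration.

def estimate_distance_m(rssi_dbm: float,
                        tx_power_dbm: float = -59.0,
                        path_loss_exponent: float = 2.0) -> float:
    """Rough distance in meters from a received signal strength reading.

    tx_power_dbm: assumed RSSI at 1 meter (varies by device).
    path_loss_exponent: ~2.0 in free space; higher indoors or near bodies.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

CONTACT_THRESHOLD_M = 1.83  # roughly 6 feet, per current CDC guidance

if estimate_distance_m(rssi_dbm=-62.0) <= CONTACT_THRESHOLD_M:
    print("log a contact event")  # ~1.4 m under these assumed constants
```

As the later discussion of Bluetooth's physical quirks makes clear, the constants in a model like this are exactly where real-world accuracy breaks down.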
Each phone will generate a new special-purpose private key each day, known as a “temporary exposure key.” It will then use that key to generate random identification numbers called “rolling proximity identifiers” (RPIDs). Pings will go out at least once every five minutes when Bluetooth is enabled. Each ping will contain the phone’s current RPID, which will change every 10 to 20 minutes. This is meant to reduce the risk that third-party trackers can use the pings to passively track people’s locations. The operating system will save all of its temporary exposure keys, and log all the RPIDs it comes into contact with, for the past 2 weeks.
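To make the key schedule concrete, here is a minimal Python sketch loosely modeled on the published draft specification. The derivation details (HKDF parameters, padding layout) are simplified and may differ from the final spec.

```python
# Simplified sketch of the draft key schedule: a random daily temporary
# exposure key (TEK) deterministically yields rolling proximity identifiers
# (RPIDs) that rotate every 10 to 20 minutes. Loosely modeled on the
# published draft spec; exact derivation details may differ.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def new_temporary_exposure_key() -> bytes:
    """Fresh 16-byte random key, generated once per day."""
    return os.urandom(16)

def rpid_key(tek: bytes) -> bytes:
    """Derive the RPID-encryption key from the daily TEK (HKDF-SHA256)."""
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"EN-RPIK").derive(tek)

def rpid(tek: bytes, interval_number: int) -> bytes:
    """RPID for one ~10-minute interval: AES-128 over padded interval data."""
    padded = b"EN-RPI" + bytes(6) + interval_number.to_bytes(4, "little")
    encryptor = Cipher(algorithms.AES(rpid_key(tek)), modes.ECB()).encryptor()
    return encryptor.update(padded) + encryptor.finalize()

tek = new_temporary_exposure_key()
print(rpid(tek, interval_number=2_650_000).hex())  # changes every interval
```

The important property is one-way derivation: anyone holding the daily key can regenerate that day's RPIDs, but an observer who only sees RPIDs cannot work backward to the key or link pings together.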
Proximity tracking apps might be, at most, a small part of a larger public health response to COVID-19.
If an app user learns they are infected, they can grant a public health authority permission to publicly share their temporary exposure keys. In order to prevent people from flooding the system with false alarms, health authorities need to verify that the user is actually infected before they may upload their keys. After they are uploaded, a user’s temporary exposure keys are known as “diagnosis keys.” The diagnosis keys are stored in a public registry and available to everyone else who uses the app.
The diagnosis keys contain all the information needed to re-generate the full set of RPIDs associated with each infected user’s device. Participating apps can use the registry to compare the RPIDs a user has been in contact with against the RPIDs of confirmed COVID-19 carriers. If the app finds a match, the user gets a notification of their risk of infection.
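In code, the matching step might look something like the following hypothetical sketch, which reuses the rpid() helper from the sketch above. The interval math assumes one key per day and a 10-minute RPID rotation.

```python
# Hypothetical sketch of on-device matching: regenerate every RPID each
# diagnosis key could have produced, and check them against the local
# contact log. Reuses rpid() from the earlier sketch.

INTERVALS_PER_DAY = 144  # 24 hours / assumed 10-minute rotation

def exposure_detected(diagnosis_keys, day_starts, local_log) -> bool:
    """diagnosis_keys: published daily keys of confirmed carriers.
    day_starts: first interval number of each key's validity day.
    local_log: set of RPIDs this phone observed in the past 14 days."""
    for key, start in zip(diagnosis_keys, day_starts):
        for interval in range(start, start + INTERVALS_PER_DAY):
            if rpid(key, interval) in local_log:
                return True  # notify the user of a possible exposure
    return False
```

Because the comparison runs entirely on the user's phone against a downloaded registry, the server never learns whom anyone has been near; that is the core of the decentralized design.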
The program will roll out in two phases. In phase 1, Google and Apple are building a new API into their respective platforms. This API will contain the bare-bones functionality necessary to make their proximity-tracing scheme work on both iPhones and Androids. Other developers will have to build the apps that actually use the new API. Draft specifications for the API have already been published, and it could be available for developers to use this week. In phase 2, the companies say that proximity tracking “will be introduced at the operating system level to help ensure broad adoption.” We know a lot less about this second phase.
Will It Work?
Several technical and social challenges stand in the way of automated proximity tracking. First, these apps assume that “cell phone = human.” But even in the U.S., cell phone adoption is far from universal. Elderly people and low-income households are less likely to own smartphones, which could leave out many people at the highest risk for COVID-19. Many older phones won’t have the technology necessary for Bluetooth proximity tracking. Phones can be turned off, left at home, set to airplane mode, or simply run out of battery. So even a proximity tracking system with near-universal adoption is going to miss millions of contacts each day.
These apps assume that “cell phone = human,” but cell phone adoption is far from universal.
Second, proximity tracking apps have to make the profound leap from “there is a strong Bluetooth signal near me” to “two humans are experiencing an epidemiologically relevant contact.” Bluetooth technology was not made for this. An app may log a connection when two people wearing masks briefly pass each other on a windy sidewalk, or when two cars with windows up sit next to each other in traffic. The proximity of a patient to a nurse in full PPE may look the same to Bluetooth as the proximity of two people kissing. Also, Bluetooth can be disrupted by large concentrations of water, like the human body. In some situations, although two people may be close enough to touch, their phones may not be able to establish radio contact. Accurately estimating the distance between two devices is even more difficult.
Third, Apple and Google’s proposal currently specifies that phones will broadcast signals as seldom as once every five minutes. So even under otherwise optimal conditions, two phones may not log a contact until they’ve been near each other for the requisite amount of time.
Fourth, a significant portion of the population must actually use the apps. In Singapore, a government-developed app has only achieved about 20% adoption after several weeks. As a mobile platform duopoly, Apple and Google are in perhaps the best position possible to encourage the deployment of a new piece of software at scale. Even so, adoption may be slow, and it will never be universal.
Will It Be Private and Secure?
The truth is, nobody really knows how effective proximity tracking apps will be. Further, we need to weigh the potential benefits against the very real risks to privacy and security.
First, any proximity tracking system that checks a public database of diagnosis keys against RPIDs on a user’s device—as the Apple-Google proposal does—leaves open the possibility that the contacts of an infected person will figure out which of the people they encountered is infected. For example, if you have a contact with a friend, and your friend reports that they are infected, you could use your own device’s contact log to learn that they are sick. Taken to an extreme, bad actors could collect RPIDs en masse, connect them to identities using face recognition or other tech, and create a database of who’s infected. Other proposals, like the EU’s PEPP-PT and France and Germany’s ROBERT, purport to prevent this kind of attack, or at least make it more difficult, by performing matching on a central server; but this introduces more serious risks to privacy.
Second, Apple and Google’s choice to have infected users publicly share their once-per-day diagnosis keys—instead of just their every-few-minute RPIDs—exposes those people to linkage attacks. A well-resourced adversary could collect RPIDs from many different places at once by setting up static Bluetooth beacons in public places, or by convincing thousands of users to install an app. The tracker will receive a firehose of RPIDs at different times and places. With just the RPIDs, the tracker has no way of linking its observations together.
If a bad actor were to set up a Bluetooth beacon or use an app to collect people’s RPIDs, all they would get is a scatter of pings with no indication of which pings belong to which individual.
But once a user uploads their daily diagnosis keys to the public registry, the tracker can use them to link together all of that person’s RPIDs from a single day.
If someone uploads their daily diagnosis keys to a central server, a bad actor could then use those keys to link together multiple RPID pings. This can expose their daily routine, such as where they live and work.
This can create a map of the user’s daily routine, including where they work, live, and spend time. Such maps are highly unique to each person, so they could be used to identify the person behind the uploaded diagnosis key. Furthermore, they can reveal a person’s home address, place of employment, and trips to sensitive locations like a church, an abortion clinic, a gay bar, or a substance abuse support group. The risk of location tracking is not unique to Bluetooth apps, and actors with the resources to pull off an attack like this likely have other ways of acquiring similar information from cell towers or third-party data brokers. But the risks associated with Bluetooth proximity tracking in particular should be reduced wherever possible.
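To illustrate the attack, here is a hypothetical sketch of what a tracker could do with one published diagnosis key and a log of beacon sightings; it reuses rpid() and INTERVALS_PER_DAY from the earlier sketches.

```python
# Hypothetical sketch of the linkage attack: once a daily diagnosis key is
# public, a tracker that logged (rpid, time, place) tuples from static
# Bluetooth beacons can group a full day of sightings under one key,
# tracing one (still unnamed) person's movements.

def link_sightings(diagnosis_key, day_start, beacon_log):
    """beacon_log: list of (rpid, timestamp, beacon_location) observations."""
    days_rpids = {rpid(diagnosis_key, i)
                  for i in range(day_start, day_start + INTERVALS_PER_DAY)}
    # Every matching sighting belongs to the same person.
    return [(t, place) for r, t, place in beacon_log if r in days_rpids]
```

Note that the attacker never needs to break any cryptography; the published key does the linking for them.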
This risk can be mitigated by shortening the time that a single diagnosis key is used to generate RPIDs, at the cost of increasing the download size of the exposure database. Similar projects, like MIT’s PACT, propose using hourly keys instead of daily keys.
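The tradeoff is easy to estimate on the back of an envelope. The figures below are illustrative assumptions (16-byte keys, 14 days of history, 100,000 diagnoses), not published numbers:

```python
# Back-of-envelope: shorter key lifetimes mean more keys per diagnosis,
# and therefore a larger registry for every phone to download. All
# figures here are illustrative assumptions, not published numbers.
KEY_BYTES = 16
DAYS_OF_HISTORY = 14

def registry_mb(diagnoses: int, keys_per_day: int) -> float:
    return diagnoses * DAYS_OF_HISTORY * keys_per_day * KEY_BYTES / 1e6

print(registry_mb(100_000, keys_per_day=1))   # daily keys:  ~22.4 MB
print(registry_mb(100_000, keys_per_day=24))  # hourly keys: ~537.6 MB
```

Under these assumptions, hourly keys multiply the download by 24, which is the price of shrinking the window an attacker can link.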
Third, police may seek data created by proximity apps. Each user’s phone will store a log of their physical proximity to the phones of other people, and thus of their intimate and expressive associations with some of those people, for several weeks. Anyone who has access to the proximity app data from two users’ phones will be able to see whether, and on what days, they have logged contacts with each other. This risk is likely inherent to any proximity tracking protocol. It should be mitigated by giving users the option to selectively turn off the app and delete proximity data from certain time periods. Like many other privacy threats, it should also be mitigated with strong encryption and passwords.
Apple and Google’s protocol may be susceptible to other kinds of attacks. For example, there’s currently no way to verify that the device sending an RPID is actually the one that generated it, so trolls could collect RPIDs from others and rebroadcast them as their own. Imagine a network of Bluetooth beacons set up on busy street corners that rebroadcast all the RPIDs they observe. Anyone who passes by a “bad” beacon would log the RPIDs of everyone else who was near any one of the beacons. This would lead to a lot of false positives, which might undermine public trust in proximity tracing apps—or worse, in the public health system as a whole.
What Should App Developers Do?
Apple and Google’s phase 1 is an API, which leaves it to the rest of the world to develop the actual apps that use the new API. Google and Apple have said they intend “public health authorities” to make apps. But most health authorities won’t have the in-house technical resources to do that, so it’s likely they will partner with private companies. Anyone who builds an app on top of the interface will have to do a lot of things right to make sure it’s private and secure.
Bad-faith app developers may try to tear down the tech giants’ carefully constructed privacy guarantees. For example, although a user’s data is supposed to stay on their device, an app with access to the API might be able to upload everything to a remote server. It could then link daily private keys to a mobile ad ID or other identifier, and exploit users’ association history to profile them. It could also use the app as a “Trojan horse” to convince users to agree to a whole suite of more invasive tracking.
So, what’s a responsible app developer to do? For starters, they should respect the protocol they’re building on. Developers shouldn’t try to graft a more “centralized” protocol, which shares more data with a central authority, on top of Apple and Google’s more “decentralized” model that keeps users’ data on their devices. Also, developers shouldn’t share any data over the Internet beyond what is absolutely necessary: just uploading diagnosis keys when an infected user chooses to do so.
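Under that constraint, the only network call a minimal app should make might look like this hypothetical sketch; the endpoint URL and verification-token scheme are invented for illustration.

```python
# Hypothetical sketch: the one upload a minimal app needs. The user's own
# diagnosis keys are sent once, with explicit consent, after a health
# authority has verified the diagnosis. The endpoint and token scheme
# below are invented for illustration.
import json
import urllib.request

def upload_diagnosis_keys(keys, verification_token, consent_given: bool):
    if not consent_given:
        return  # nothing leaves the device without the user's say-so
    body = json.dumps({"keys": [k.hex() for k in keys],
                       "token": verification_token}).encode()
    req = urllib.request.Request(
        "https://health-authority.example/v1/diagnosis-keys",  # hypothetical
        data=body, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # no identifiers, no locations, no contacts
```

Anything beyond this, such as attaching device identifiers, location, or the user's own contact log, goes further than the protocol requires.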
Developers should be extremely up-front with their users about what data the app is collecting and how to stop it. Users should be able to stop and start sharing RPIDs at any time. They also should be able to see the list of the RPIDs they’ve received, and delete some or all of that contact history.
The whole system depends on trust.
Equally important is what not to do. This is a public health crisis, not a chance to grow a startup. Developers should not require users to sign up for an account. Also, they shouldn’t ship a contact tracing app with extra, unnecessary features. The app should do its job and get out of the way, not try to onboard users to a new service.
Obviously, proximity tracing apps shouldn’t have anything to do with ads (and the exploitative, data-sucking mess that comes with them). Likewise, they shouldn’t use analytics libraries that share data with third parties. In general, developers should use strong, transparent technical and policy safeguards to wall this data off to COVID-19 purposes and only COVID-19 purposes.
The whole system depends on trust. If users don’t trust that an app is working in their best interests, they will not use it. So developers need to be as transparent as possible about how their apps work and what risks are involved. They should publish source code and documentation so that tech-savvy users and independent technologists can check their work. And they should invite security audits and penetration testing from professionals to be as confident as possible that their apps actually do what they say they will.
All of this will take time. There’s a lot that can go wrong, and too much is at stake to afford rushed, sloppy software. Public health authorities and developers should take a step back and make sure they get things right. And users should be wary of any apps that ship out in the days following Apple and Google’s first API release.
What Should Apple and Google Do?
Apple and Google should be transparent about exactly what their criteria are.
During the first phase, Apple and Google have said that the API can “only [be] used for contact tracing by public health authorities apps,” which “will receive approval based on a specific set of criteria designed to ensure they are only administered in conjunction with public health authorities, meet our privacy requirements, and protect user data.” Apple and Google should be transparent and specific about exactly what these criteria are. Through these criteria, the companies can control what other permissions apps have. For example, they could prevent COVID-19 proximity tracking apps from accessing mobile ad IDs or other device identifiers. They could also make more detailed policy prescriptions, like requiring that any app using the API have a clear mechanism for users to go back and delete parts of their contact log. Apple and Google’s app store approval criteria and related restrictions must also be evenly applied; if Apple and Google make exceptions for governments or companies that they are friendly with, they would undermine the trust necessary for informed consent.
In the second phase, the companies will build the proximity tracking technology directly into Android and iOS. This means that no app will be needed initially, though Apple and Google propose that the user be prompted to download a public health app if an exposure match is detected. All of the recommendations for app developers above also apply to Apple and Google here. Critically, the promised opt-in must obtain specific, informed consent from each user before activating any kind of proximity tracking. They need to make it easy for users who opt in to later opt out, and to view and delete the data that the device has collected. They should create strong technical barriers between the data collected for proximity tracking and everything else. And they should open-source their implementations so that independent security analysts can check their work.
This program must sunset when the COVID-19 crisis is over.
Finally, this program must sunset when the COVID-19 crisis is over. Proximity tracking apps should not be repurposed for other things, like tracking milder seasonal flu outbreaks or finding witnesses to a crime. Google and Apple have said that they “can disable the exposure notification system on a regional basis when it is no longer needed.” This is an important ability, and Apple and Google should establish a clear, concrete plan for ending this program and removing the APIs from their operating systems. They should publicly state how they will define “the end of the crisis,” including what criteria they will look for, and which public health authorities will guide them.
There will be no quick tech solution to COVID-19. No app will let us return to business as usual. App-assisted contact tracing will have serious limitations, and we don’t yet know the scope of the benefits. If Apple and Google are going to spearhead this grand social experiment, they must do it in a way that keeps privacy risks to an absolute minimum. And if they want it to succeed, they must earn and keep the public’s trust.
A new mock-up of the 5.5-inch 2021 iPhone shared by Macotakara today suggests that a notchless screen and USB-C instead of a Lightning port (or no port at all) could be in the works. The prototype also shows what could be a different camera setup compared to what we’re expecting on the iPhone 12 later this year.
At the end of last year, we learned that Ming-Chi Kuo expects the highest-end 2021 iPhone to be a fully wireless device, ditching the Lightning port and also skipping the USB-C port. However, today’s alleged 5.5-inch 2021 iPhone prototype shared by Macotakara suggests that the entry-level model could make the switch to USB-C along with a notchless screen.
This 2021 iPhone mock-up was made based on data from Alibaba, so it’s worth taking this rumor with a grain of salt.
A 5.5-inch 2021 iPhone likely means it would be the entry model, based on what we’re expecting for the 2020 iPhone lineup: the more affordable iPhone 12 models coming in 5.4- and 6.1-inch sizes and the iPhone 12 Pro landing with 6.1- and 6.7-inch displays. Macotakara does mention that this is just one prototype that Apple is considering, so naturally there’s no guarantee this design and its features will make it to market.
Macotakara says the case dimensions of this prototype are the same as the 5.4-inch 2020 iPhone but with a slightly larger screen at 5.5 inches. However, one interesting part of this prototype would be the entry-level 2021 iPhone gaining what could be a three- or four-camera setup. One major way Apple has differentiated its iPhone lineup is with camera hardware and features, like the 11 Pro having an additional lens over the iPhone 11.
Apple has been working toward making an iPhone with a “single slab of glass” design for many years. The iPhone X display design is still seen today in the iPhone 11 lineup (and is expected in the iPhone 12 series too), so totally removing the notch that houses the Face ID components and TrueDepth camera would be a big step forward for the screen-to-body ratio and the evolution of the iPhone display’s design.
The iPhone 12 lineup may feature slightly smaller notches, but if this prototype rings true and the 5.5-inch entry-level model goes notchless, the entire 2021 iPhone lineup would likely follow.
The Macotakara video below suggests that Apple could launch its first under-screen front-facing camera with the 2021 iPhone lineup to make this potential notchless design happen.
A $5 billion class-action lawsuit filed in a California federal court alleges that Google’s Chrome incognito mode collects browser data without people’s knowledge or consent.
Google faces a $5 billion class-action lawsuit over claims that it has been collecting people’s browsing information without their knowledge even when they use the incognito browsing mode that’s meant to keep their online activities private.
The lawsuit, filed in the federal court in San Jose, California, alleges that Google compiles user data through Google Analytics, Google Ad Manager and other applications and website plug-ins, including smartphone apps, regardless of whether users click on Google-supported ads, according to a report in Reuters.
Google uses this data to learn about private browsing habits of Chrome users, ranging from seemingly innocuous data that can be used for ad-targeting—such as information about hobbies, interests and favorite foods—to the “most intimate and potentially embarrassing things” that people may search for online, according to the complaint. Google “cannot continue to engage in the covert and unauthorized data collection from virtually every American with a computer or phone,” the complaint said, according to the report.
The technology at the root of the lawsuit is a Chrome feature called incognito mode, which, ironically, is supposed to protect people when surfing the internet. Chrome users can turn on incognito mode to shield their browsing history, sessions and cookies from websites that want to use this information for marketing or ad-targeting purposes.
However, the feature has long had a flaw: even when using this mode, people’s activity has still been detectable by websites “for years” due to a FileSystem API implementation, Google Chrome developer Paul Irish tweeted last year.
Though Google said it implemented the FileSystem API in a different way in Chrome 76, released last year, the problem persists even in Chrome 83, the latest version, which was released last month, according to a report published Thursday by ZDNet.
It is still possible to detect incognito mode in Chrome, as well as in other Chromium-based browsers that share the core of Chrome’s codebase, such as Edge, Opera, Vivaldi, and Brave, according to the report, which said Google still has not set a timeframe to fix the issue.
Developers have even adapted the detection scripts from the Chrome codebase to browsers that don’t share it, including Firefox and Safari, expanding websites’ ability to block incognito-mode users from browsing, the report said.
Ironically, the problem that’s put Google in legal hot water is nearly the same as the one the company accused browser rival Apple of having earlier this year in its Safari browser.
In January, Google researchers said they identified a number of security flaws in Safari’s anti-tracking feature, called Intelligent Tracking Prevention, that allow people’s browsing behavior to be tracked by third parties. Apple responded by saying it had already fixed the flaws in an update to the WebKit technology in Safari.
Search-engine rival DuckDuckGo used news of the class-action suit as an opportunity to laud its own technology, which it offers as an alternative to Google search that allows people to search and use the web privately.
“Incognito mode isn’t private. It never was,” the company said on Twitter. “DuckDuckGo is private. Will always be.”
Dr. Robert Epstein, a longtime Google critic, author, psychologist and researcher for the American Institute for Behavioral Research and Technology, also took to Twitter to reiterate his longstanding public opinion on Google’s privacy violations.
“#Google #Surveillance & Advertising just got sued for $5 BILLION for lying about its bogus ‘incognito’ mode on its Chrome browser,” he tweeted. “As I’ve always said, you’re STILL being tracked when you’re in that mode.”
The current case against the technology giant is Brown et al v Google LLC et al, U.S. District Court, Northern District of California, No. 20-03664. The New York-based law firm Boies Schiller & Flexner is representing the plaintiffs in the class-action suit, Chasom Brown, Maria Nguyen and William Byatt.
A federal judge said Apple Inc must face part of a lawsuit claiming it fraudulently concealed falling demand for iPhones, especially in China, leading to tens of billions of dollars in shareholder losses.
While dismissing most claims, U.S. District Judge Yvonne Gonzalez Rogers ruled late Tuesday that shareholders can sue over Chief Executive Tim Cook’s comments touting strong iPhone demand on a Nov. 1, 2018 analyst call, only a few days before Apple told its largest manufacturers to curb production.
“Absent some natural disaster or other intervening reason, it is simply implausible that Cook would not have known that iPhone demand in China was falling mere days before cutting production lines,” Rogers wrote.
The Oakland, California-based judge also said a decision by Apple to stop reporting iPhone unit sales “plausibly suggests that defendants expected unit sales to decline.”
Apple did not immediately respond on Wednesday to requests for comment.
The complaint, led by the Employees’ Retirement System of the State of Rhode Island, came after Cook, on Jan. 2, 2019, unexpectedly reduced Apple’s quarterly revenue forecast by up to $9 billion, in part because of U.S.-China trade tensions.
It was the first time since the iPhone’s 2007 launch that the Cupertino, California-based company had cut its revenue forecast. Apple stock fell 10% the next day, erasing $74 billion of market value.
Cook had said on the analyst call that the iPhone XS and XS Max had a “really great start,” and that while some emerging markets faced downward sales pressures “I would not put China in that category.”
By mid-November 2018, Apple had told the manufacturers Foxconn and Pegatron to halt plans for new iPhone production lines, and a key supplier had been told to materially reduce shipments, the complaint said.
The case is In re Apple Inc Securities Litigation, U.S. District Court, Northern District of California, No. 19-02033.