Toronto police have admitted some of their officers have used Clearview AI — a powerful and controversial facial recognition tool that scrapes billions of images from the internet — one month after denying using the tool.
Spokesperson Meaghan Gray said in an email that some members of the force began using the technology in October 2019, but didn’t say what for or how many times it had been used.
Chief Mark Saunders directed those officers to stop using the technology when he became aware of its use on Feb. 5, she said. Gray didn’t say who originally approved the use of the app.
Clearview AI has the capacity to turn up search results, including a person’s name and other information such as their phone number, address or occupation, based on nothing more than a photo. The program is not available for public use.
Gray said officers were “informally testing this new and evolving technology.” She did not say how the chief found out about its use.
Concerns began mounting about the software earlier this year after a New York Times investigation revealed the software had extracted more than three billion photos from public websites like Facebook and Instagram and used them to create a database used by more than 600 law enforcement agencies in the U.S., Canada and elsewhere.
In January, Toronto police told CBC News they used facial recognition, but denied using Clearview AI. It’s unclear if police purchased the technology — if so, it was never disclosed publicly — or were allowed to demo the app.
At the time, Ontario Provincial Police also said they used facial recognition technology, but wouldn’t specify which tools they used. The RCMP would not say what tools it uses.
Vancouver’s police department said it had never used the software and had no intention of doing so.
Toronto police seek external review
Toronto police have now asked Ontario’s Information and Privacy Commissioner and the Crown attorneys’ office to work with the service to review whether Clearview AI is an appropriate investigative tool, Gray said.
“Until a fulsome review of the product is completed, it will not be used by the Toronto Police Service.”
There are growing concerns about how the technology is used and whether it infringes on civil liberties.
The Toronto Police Services Board said it was not aware of the technology being used by the force.
“A report on this issue has never been the subject of consideration by the board,” Sandy Murray said, speaking for the board.
A spokesperson for Toronto Mayor John Tory said the mayor was notified Thursday about the use of the tool by Toronto police.
“We understand that Chief Mark Saunders has directed its use be halted immediately and that Toronto Police is now working with the Information and Privacy Commissioner and the Crown Attorneys’ Office to review the technology and its appropriateness as an investigative tool. The Mayor supports this decision,” said the statement from spokesperson Don Peat.
U.S. lawsuit filed against Clearview AI
In Illinois, a lawsuit seeking class-action status was recently filed against Clearview AI, claiming the company broke privacy laws — namely the state’s Biometric Information Privacy Act (BIPA), which is meant to safeguard residents from having their biometric data used without consent.
The lawsuit, which is seeking, among other things, an injunction to stop Clearview from continuing its business, argues that the company “used the internet to covertly gather information on millions of American citizens, collecting approximately three billion pictures of them, without any reason to suspect any of them of having done anything wrong, ever.”
It’s unclear if Toronto police have made any arrests based on information generated by the app.
The Toronto revelation raises longer-term questions such as how any data that was gathered will be stored and whether it will ever be used as evidence in an Ontario court.