Federal and provincial regulators are launching an investigation into whether Clearview AI, the company that makes facial recognition technology used by at least four Ontario police forces, breaks Canadian privacy laws.
The investigation was initiated “in the wake of numerous media reports that have raised questions and concerns about whether the company is collecting and using personal information without consent,” according to a joint statement.
The probe will be carried out by the privacy commissioners of Canada, Quebec, Alberta and British Columbia, which have jurisdiction over how businesses collect and use citizens’ private information.
In the last week, Toronto police, as well as the services in Peel, Halton and Durham regions, have all confirmed to the Star that they used Clearview AI, an app that allows officers to match pictures of unidentified people against what the U.S. company claims is a database of three billion images scraped off the web, including from social media sites.
Chiefs at all four police departments have since ordered officers to stop using the app. The OPP and the RCMP would not reveal whether they had also used Clearview AI.
Ontario’s privacy commissioner also said Thursday that he and his counterparts across the country are at “the formative stages” of creating guidance for use of biometric technologies, including facial recognition.
“The potential privacy dangers of facial recognition have really come to a head around the Clearview AI application,” Brian Beamish told the Star.
“I think we all understand that there are some legitimate law enforcement uses for biometrics and even facial recognition. But I think we have to be really careful about going down that road.”
Beamish said he is also writing this week to the Ontario Association of Chiefs of Police, with the message that “they should be telling their membership that if they’re using Clearview, they should stop.”
Independent experts and the privacy watchdogs themselves have argued that their enforcement powers are woefully inadequate for the massive challenges posed by modern technology — and in particular, artificial intelligence.
“One of the things that is important for Canadian authorities to have is some kind of proper monetary penalty that can be put in place to deter the bad guys. We don’t have that in Canada, anywhere,” British Columbia privacy commissioner Michael McEvoy told the Star on Wednesday.
McEvoy noted that the federal privacy commissioner doesn’t have the power to issue orders against companies that run afoul of the law.
“That is, in this day and age, absolutely ludicrous in my opinion,” he said.
Last April, McEvoy and Canada’s privacy commissioner, Daniel Therrien, released the results of an investigation into Facebook’s handling of Canadians’ personal data. They slammed the tech giant for its “serious failure” to comply with privacy laws.
Facebook disputed the findings of the investigation and refused to address the problems the probe identified, according to the federal commissioner.
Without order-making powers of its own, Therrien’s office asked a federal court this month to declare that Facebook had broken the law, and to issue several orders forcing the social media giant to start complying. It is only the first step in what is likely to be a lengthy court proceeding.
In a recent speech at the University of Ottawa’s Centre for Law, Technology and Society, Therrien addressed the insufficiency of the laws his office enforces, saying that “for good and for bad, data-driven technologies are a disruptive force,” according to a transcript of his remarks.
“My predecessors and I have for a long time called for reform of privacy laws. This was ignored for an equally long time, I think because privacy is such an abstract concept. Now it has become real.”