Controversial face matcher Clearview AI set to be fined over $20m

The UK data protection regulator has announced its intention to issue a fine of £17m (about $23m) to controversial facial recognition company Clearview AI.

Clearview AI, as you’ll know if you’ve read any of our numerous previous articles about the company, essentially pitches itself as a social-network contact-finding service with extraordinary reach, even though no one in its immense facial recognition database ever signed up to “belong” to the “service”.

Simply put, the company crawls the web looking for facial images from what it calls “public-only sources, including news media, mugshot websites, public social media, and other open sources.”
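To make the “crawling” step concrete, here’s a minimal, hypothetical sketch of that sort of scraping: fetch a public web page, extract the image URLs it embeds, and pass them on for face processing. The page URL, class names and overall structure are our own illustration, assuming plain HTML pages; none of it comes from Clearview.

```python
# Hypothetical scraping sketch (illustration only, not Clearview's code).
# Requires the third-party 'requests' package: pip install requests
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests


class ImageLinkParser(HTMLParser):
    """Collect the src attribute of every <img> tag on a page."""

    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)


def scrape_image_urls(page_url: str) -> list[str]:
    """Fetch one page and return absolute URLs for the images it embeds."""
    response = requests.get(page_url, timeout=10)
    response.raise_for_status()
    parser = ImageLinkParser()
    parser.feed(response.text)
    return [urljoin(page_url, src) for src in parser.image_urls]


if __name__ == "__main__":
    # example.com is a placeholder; a real crawler would queue the links it
    # discovers and repeat this across millions of pages.
    for url in scrape_image_urls("https://example.com/"):
        print(url)
```

Note that whether a page is technically fetchable says nothing about whether harvesting it is lawful, which is precisely the point the regulators discussed below went on to make.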

The company claims to have a database of more than 10 billion facial images, and pitches itself as a friend of law enforcement, able to search for matches against mug shots and scene-of-crime footage to help track down alleged offenders who might otherwise never be found.

That’s the theory, at any rate: find criminals who would otherwise evade both recognition and justice.
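As for the “recognition” step, face search systems of this general sort typically reduce each face to a fixed-length numeric vector (an “embedding”) and then look for the stored vectors nearest to a probe image. The sketch below illustrates that nearest-neighbour idea with made-up embeddings, names and similarity threshold; it’s an assumption-laden illustration, not Clearview’s actual algorithm or code.

```python
# Hypothetical matching sketch (illustration only, not Clearview's code).
import numpy as np

# Pretend each scraped face has already been reduced to a fixed-length
# embedding vector by some face recognition model (not shown here).
rng = np.random.default_rng(42)
EMBEDDING_DIM = 128

# A stand-in "database" of embeddings for previously scraped faces.
database = {
    "person_A": rng.normal(size=EMBEDDING_DIM),
    "person_B": rng.normal(size=EMBEDDING_DIM),
    "person_C": rng.normal(size=EMBEDDING_DIM),
}


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means same direction (a likely match); near 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def best_match(probe: np.ndarray, threshold: float = 0.8):
    """Return the closest database entry, or None if nothing clears the bar."""
    name, score = max(
        ((n, cosine_similarity(probe, emb)) for n, emb in database.items()),
        key=lambda pair: pair[1],
    )
    return (name, score) if score >= threshold else None


# Simulate a "scene-of-crime" probe that closely resembles person_B.
probe = database["person_B"] + rng.normal(scale=0.1, size=EMBEDDING_DIM)
print(best_match(probe))  # prints something like ('person_B', 0.995...)
```

Real systems index billions of such vectors with approximate nearest-neighbour structures rather than a linear scan, but the privacy concern is the same either way: the database only “works” because it holds biometric templates of people who never opted in.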

In practice, of course, any picture in which you appeared that was ever posted to a social media site such as Facebook could be used to “recognise” you as a suspect or other person of interest in a criminal investigation.

Importantly, this “identification” would take place not only without your consent but also without you knowing that the system had alleged some sort of connection between you and criminal activity.

Any expectations you may have had about how your likeness was going to be used and licensed when it was uploaded to the relevant service (if you even knew it had been uploaded in the first place) would thus be ignored entirely.

Understandably, this attitude provoked an enormous privacy backlash, including from giant social media brands such as Facebook, Twitter, YouTube and Google.

You can’t do that!

Early in 2020, those behemoths firmly told Clearview AI, “Stop leeching image data from our services.”

You don’t have to like any of those companies, or their own data-slurping terms and conditions of service, to sympathise with their position.

Uploaded images, no matter how publicly they may be displayed, don’t suddenly stop being personal information just because they’re published, and the terms and conditions applied to their ongoing use don’t magically evaporate as soon as they appear online.

Clearview, it seemed, was having none of this, with its self-confident and unapologetic founder Hoan Ton-That claiming that:

There is […] a First Amendment right to public information. So the way we have built our system is to only take publicly available information and index it that way.

The other side of that coin, as a commenter pointed out on the CBS video from which the above quote is taken, is the observation that:

You were so preoccupied with whether or not you could, you didn’t stop to think if you should.

Clearview AI has apparently continued scraping internet images heartily over the 22 months since that video aired, given that it claimed at that time to have processed 3 billion images, but now claims more than 10 billion images in its database.

That’s despite the obvious public opposition implied by lawsuits brought against it, including a class action suit in Illinois, which has some of the strictest biometric data processing regulations in the USA, and an action brought by the American Civil Liberties Union (ACLU) and four community organisations.

UK and Australia enter the fray

Claiming First Amendment protection is an intriguing ploy in the US, but it is meaningless in other jurisdictions, including the UK and Australia, which have completely different constitutions (and, in the case of the UK, an entirely different constitutional apparatus) from the US.

Those two countries decided to pool their resources and conduct a joint investigation into Clearview, with both countries’ privacy regulators recently publishing reports on what they found, and interpreting the results in local terms.

The Office of the Australian Information Commissioner (OAIC) decided that Clearview “interfered with the privacy of Australian individuals” because the company:

  • Collected sensitive information without consent;
  • Collected information by unlawful or unfair means;
  • Did not notify individuals of data that was collected; and
  • Did not ensure that the information was accurate and up-to-date.

Their counterparts at the ICO (Information Commissioner’s Office) in the UK came to similar conclusions, including that Clearview:

  • Had no lawful reason for collecting the information in the first place;
  • Did not process information in a way that people were likely to expect;
  • Had no process to stop the data being retained indefinitely;
  • Did not meet the “higher data protection standards” required for biometric data; and
  • Did not tell anyone what was happening to their data.

Loosely speaking, both the OAIC and the ICO clearly concluded that an individual’s right to privacy trumps any consideration of “fair use” or “free speech”, and both regulators explicitly decried Clearview’s data collection as unlawful.

The ICO has now decided what it actually plans to do, as well as what it thinks about Clearview’s business model.

The proposed intervention includes: the aforementioned £17m (about $23m) fine; a requirement not to touch UK residents’ data any more; and a notice to delete all data on British people that Clearview already holds.

The Aussies don’t seem to have proposed a financial penalty, but also demanded that Clearview must not scrape Australian data in future; must delete all data already collected from Australians; and must show in writing within 90 days that it has done both of those things.

What next?

According to reports, Clearview CEO Hoan Ton-That has reacted to these unequivocally adverse findings with an opening sentiment that would surely not be out of place in a tragic love song:

It breaks my heart that Clearview AI has been unable to assist when receiving urgent requests from UK law enforcement agencies seeking to use this technology to investigate cases of severe sexual abuse of children in the UK.

Clearview AI may, however, find its plentiful opponents replying with song lyrics of their own:

Cry me a river. (Don’t act like you don’t know it.)

What do you think?

Is Clearview AI providing a genuinely useful and acceptable service to law enforcement, or merely taking the proverbial? (Let us know in the comments. You may remain anonymous.)

