Clearview AI draws ire with image scraping for facial recognition

Although many tech journalists are now looking at facial recognition in the context of the ongoing relationship between security and privacy, the biometric method is in the news today for a different reason.

 

It seems that YouTube and its parent company Google have issued a legal cease-and-desist letter to a company called Clearview AI, which has been scraping billions of photos from the Internet to feed the facial recognition programs and services it offers to hundreds of law enforcement departments around the country.

 

Citing a First Amendment right to use publicly available data, Clearview AI CEO Hoan Ton-That is telling people that the company’s practices are similar to what Google does with its search engine.

 

Google disagrees. Company spokespersons characterize Clearview AI’s methods as inappropriately intrusive and “in violation of rules” around user privacy.

 

Facebook and Twitter are also piling on with their own legal responses, and advocacy groups are talking about how methods like Clearview’s may pose a threat to the civil liberties of citizens.

 

However, the key question here is this: where are the photos?

 

In one sense, they’re out on the Internet, and all of us understand intuitively that there is no privacy for publicly posted photos. Any human user can do the same thing that Clearview AI’s machine is doing, so where’s the expectation of privacy?
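
To see how low the technical barrier is, here is a minimal sketch of the basic step any scraper bot automates at scale: pulling the publicly posted image links out of a web page. This is an illustration only, using Python’s standard-library HTML parser and a hypothetical inline sample page rather than any real site; an actual crawler would fetch pages over the network and repeat this across billions of URLs.

```python
from html.parser import HTMLParser

class ImageLinkCollector(HTMLParser):
    """Collects the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

# A small inline sample stands in for a real, publicly served
# profile page (the URLs here are hypothetical).
sample_page = """
<html><body>
  <img src="https://example.com/photos/alice.jpg" alt="profile">
  <img src="https://example.com/photos/bob.png">
</body></html>
"""

collector = ImageLinkCollector()
collector.feed(sample_page)
print(collector.image_urls)
```

The point is that nothing in this step is exotic: the same few lines a hobbyist might write for one page become mass aggregation only when run continuously against the whole public web.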

 

Clearview AI critics would say, though, that while the information is there, the expectation is that no user will perform the kind of broad aggregation that Clearview AI’s bots are able to do. In other words, this kind of practice smells like Big Brother and makes people concerned about uberveillance even if the underlying data is publicly available online.

 

In some senses, it’s not the scraping of any one particular photo, but the aggregation practice that’s leaving a bad taste in people’s mouths.

 

Adding to the ambiguity, Ton-That indicates that he feels the photos are in some broad sense in the public domain.

 

He also says the company is fighting back.

 

“Our legal counsel has reached out to [Twitter] and are handling it accordingly,” Ton-That reportedly told CNET. “But there is also a First Amendment right to public information. So the way we have built our system is to only take publicly available information and index it that way.”

 

Still, many users are crying foul as they scrutinize the ethics of the Clearview AI method.

 

“There is (sic) rules and engagement for photo acquisition from you to Facebook,” writes Kirielson, a moderator at The Verge. “But there is no particular opt in or opt out between you and Clearview. More importantly, there’s no opt in, or even continue consent for using your image. The fact that many people in law enforcement will be able to use this tech to find criminals within a certain match, would make sense if you can guarantee the accuracy, which you can’t, or the removal of bias, which you can’t. The reason why you can’t is because there’s no authority or outside referee actually checking for all of these issues.”

 

Keep an eye on these developments in the greater context of how we view facial recognition and its myriad uses in our societies.
