Vermont AG blasts Clearview, pledges no law enforcement use within the state

Some of today’s most cutting-edge technology is getting serious pushback from privacy advocates and others, including some government officials.

A report today at The Verge shows Vermont AG Thomas Donovan blasting Clearview AI’s facial recognition program.

“This practice is unscrupulous, unethical, and contrary to public policy,” Donovan said. “I will continue to fight for the privacy of Vermonters, particularly our most vulnerable.”

Vermont, he says, is also taking the unusual step of not using Clearview AI’s facial recognition program for any state law enforcement efforts.

Clearview AI’s facial recognition program operates by scraping photos of individuals, reportedly billions of them, that are freely available on the public Internet.

From that trove of images, the company has built tools to identify individuals and sells access to that capability.
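To make the general scrape-and-match model concrete, here is a minimal, hypothetical sketch. It is not Clearview’s code or system; it uses the open-source face_recognition library, and the placeholder URLs and file names are assumptions made purely for illustration.

```python
# Illustrative sketch only -- NOT Clearview's actual system. It shows the general
# "scrape public photos, then match faces against them" model described above,
# using the open-source face_recognition library and a tiny in-memory index.
import face_recognition  # open-source library built on dlib
import numpy as np

# Step 1 (the "scraping" stage): in a real system these would be images pulled
# from the public web along with the page they came from. Here, local files
# stand in for scraped photos; the URLs are placeholders.
scraped_photos = {
    "https://example.com/profile1": "photo1.jpg",
    "https://example.com/profile2": "photo2.jpg",
}

# Step 2: build an index of face "encodings" (128-number vectors), each keyed
# by the web address the photo was found at.
index = []  # list of (source_url, encoding) pairs
for url, path in scraped_photos.items():
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        index.append((url, encoding))

# Step 3 (the "identification" stage): given a new probe photo, find the
# closest stored face by Euclidean distance between encodings.
def match(probe_path, threshold=0.6):
    probe = face_recognition.load_image_file(probe_path)
    encodings = face_recognition.face_encodings(probe)
    if not encodings or not index:
        return None  # no face in the probe, or nothing indexed yet
    probe_encoding = encodings[0]
    distances = [np.linalg.norm(probe_encoding - enc) for _, enc in index]
    best = int(np.argmin(distances))
    # Smaller distance means more similar; 0.6 is the library's common cutoff.
    return index[best][0] if distances[best] < threshold else None

print(match("unknown_person.jpg"))  # prints the source URL of the best match, or None
```

The point of the sketch is the shape of the pipeline: public images go in, each face becomes a searchable vector tied back to where it was found, and a single probe photo can surface that source.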

Jon Porter’s report, referenced above, shows that children are not immune to this sort of data harvesting, either.

Clearview’s rationale is that the system’s raw material is freely accessible; the photos are public. But privacy advocates argue that when people posted those images, they never anticipated that powerful AI programs would be able to do with them what Clearview does.

“Clearview AI operates in a manner similar to search engines like Google and Bing. Clearview AI, however, collects far less data than Google and Bing, because Clearview AI only collects public images and their web address. That’s all,” a company spokesperson recently told CNET. “Google, Bing and Facebook collect far more data, including names, addresses, financial and health information and shopping habits.”

But again, the amount of data collected isn’t the issue: modern AI can, in principle, put even limited data to disturbing use.

In other words, critics like Donovan who want to curtail Clearview’s operations argue that the problem is not the images themselves but what artificial intelligence does with them.

“Concerns are mounting so high that Police have stopped using Clearview, and Twitter and others have sent cease and desist letters to the firm,” wrote Kate O’Flaherty at Forbes Feb. 28, who also noted that Clearview has suffered a wide-ranging data breach, pushing opposition to its model even higher.

While law enforcement goals often serve as a Trojan horse for this kind of invasive surveillance, this time Clearview seems to be meeting real resistance. Keep an eye on this story to see where the limits of societal uberveillance get drawn.
