Clearview server hack compounds problematic facial recognition ops

Don’t look now, but Clearview AI has a new problem.

Until now, most of the controversy around the company has centered on its operating model itself: scraping public photos from the web to build disturbing new facial recognition capabilities.

Today, tech media reports show that an analyst at a security firm called Spidersilk stumbled onto a hole with troubling ramifications for the facial recognition giant: Mossab Hussein, Spidersilk's chief security officer, reportedly found he could walk right into parts of the company's systems through a server that let any visitor register as a new user.
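The reports don't describe Clearview's actual stack, so the sketch below is purely illustrative: a hypothetical Flask registration endpoint that contrasts the failure mode described (any visitor who finds the server can create an account) with a version gated behind a server-issued invite token. The endpoint name, token scheme, and framework are assumptions for the demo, not details from the reporting.

```python
# Hypothetical sketch -- not Clearview's code or stack.
# Illustrates the reported failure mode: a /register endpoint that
# accepts any visitor, versus one gated by a pre-issued invite token.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)
VALID_INVITES = {"token-issued-to-a-vetted-agency"}  # assumed for the demo
users = {}

@app.route("/register", methods=["POST"])
def register():
    data = request.get_json(force=True)
    # The insecure version stops here: anyone who finds the server
    # can create an account and reach whatever sits behind it.
    # The gated version requires a pre-issued invite first:
    if data.get("invite") not in VALID_INVITES:
        abort(403)  # unknown visitor: refuse to create an account
    users[data["email"]] = {"approved": True}
    return jsonify({"status": "created"}), 201

if __name__ == "__main__":
    app.run()
```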

“Since its work became public,” writes Jon Porter today at The Verge, “Clearview AI has defended itself by saying that its software is only available to law enforcement agencies (although reports claim that Clearview has been marketing its system to private businesses including Macy’s and Best Buy). Poor cybersecurity practices like these, however, could allow this powerful tool to fall into the wrong hands outside of the company’s client list.”

Hang on a second: what about this private business marketing stuff?

Leaving that whole ball of wax aside for now, if this sounds like an open invitation to hackers, consider how much a security snafu of this kind will exacerbate Clearview's existing problems, at a moment when even some law enforcement departments are turning down its facial recognition tools. And when police do opt in, civil rights advocates say, the problem is even more evident:

“Face recognition and similar technologies make it possible to identify and track people in real time, including at lawful political protests and other sensitive gatherings,” writes Jennifer Lynch at the Electronic Frontier Foundation. “Widespread use of face recognition by the government—especially to identify people secretly when they walk around in public—will fundamentally change the society in which we live. It will, for example, chill and deter people from exercising their First Amendment protected rights to speak, assemble, and associate with others. Countless studies have shown that when people think the government is watching them, they alter their behavior to try to avoid scrutiny. And this burden falls disproportionately on communities of color, immigrants, religious minorities, and other marginalized groups.”

What are the chances that a company with this kind of business model will comply with rules like the EU's GDPR?

In an era when tech companies face ever-closer scrutiny of their privacy compliance, Clearview is likely to come up short, despite top leadership's protestations that no actual personal information was exposed through the server.

Which raises an obvious question: even if this particular lapse didn't compromise users much, what about the many hypothetical ways a hole like this could have been exploited?

As for what Hussein's probe actually turned up, Porter's report notes that “secret keys and cloud storage credentials, and even copies of its apps” were accessible.
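To see why leaked cloud storage credentials are such a serious find, consider how little effort it takes to read out an entire storage bucket once a key pair is in hand. The snippet below is a general illustration only; the AWS-style keys, the boto3 client, and the bucket name are assumptions for the demo, not details from the Clearview reports.

```python
# Hypothetical illustration -- the provider, key names, and bucket
# are assumptions, not details from the Clearview reports.
import boto3

# Anyone holding a leaked access key pair can authenticate as its owner.
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...LEAKED",          # placeholder leaked key
    aws_secret_access_key="leaked-secret-key",  # placeholder leaked secret
)

# With valid credentials, enumerating and downloading stored objects
# (photos, app builds, backups) is a handful of API calls.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket="example-bucket"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])
```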

Facial recognition is coming of age at a time when governments and advocacy groups alike are stepping in to represent consumers in the struggle to balance security and data privacy. Clearview doesn't have much room left for error in this context.
