What do we know about the world’s largest ‘facial network’?

We speak to Clearview AI's CEO and co-founder Hoan Ton-That about the privacy concerns surrounding the company's facial recognition technology.


With a database of more than 10 billion facial images from web sources, US company Clearview AI has created an intelligence platform that uses facial recognition technology to aid law enforcement officials. 

The company was founded in 2017 and has since signed contracts with the US Air Force, Immigration and Customs Enforcement (ICE) and the FBI.

Clearview says its mission is to safeguard communities and support government agencies in identifying victims and perpetrators of crimes via image-search. 

However, the company has also been mired in controversy over alleged attempts to venture into other sectors, such as offering its facial recognition services for identity verification in apps.

Earlier this month, CEO and co-founder Hoan Ton-That told VICE News’ Motherboard that technology companies such as Uber, Lyft, and Airbnb “have expressed interest in Clearview's facial recognition technology for the purposes of consent-based identity verification.”

Although all three tech giants denied any interest in Clearview’s technology, Ton-That’s statement stoked concerns that the company continues to eye an expansion into platforms more widely used by the public.

When asked by TRT World, Ton-That said “there are no current plans to work with the companies mentioned.” 

He said they are “examples of the types of firms who have expressed interest in Clearview AI's facial recognition technology for the purposes of consent-based identity verification, since there are a lot of issues with crimes that happen on their platforms.”

Ton-That also stressed that Clearview does not plan to sell its facial recognition technology to non-governmental industries.

“Previous reporting has highlighted past trial use of Clearview’s technology by private companies, and we have not decided to sell the service that uses our image database to non-governmental industries at this time,” Ton-That wrote in an email.

However, critics like Jack Poulson, executive director of tech accountability group Tech Inquiry, aren’t convinced.

“Clearview AI has a pattern of deception: the company has been publicly defending its mass surveillance by claiming it will only sell to law enforcement while privately pitching an expansion into finance, retail and entertainment,” Poulson told Reuters.

Last year, the US face-tracking company faced several lawsuits, prompting it to cancel all accounts with customers not associated with law enforcement or government agencies.

This came after a report found that the company worked with more than 2,200 law enforcement agencies, private companies and individuals around the world.



How does it work?

Clearview matches faces to a database of more than 10 billion images lifted from the Internet, including social media platforms like Facebook.

Users upload photos of people to the platform which then scans the individual’s biometric information and provides users with other existing images and personal information, such as social media accounts, found online.
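In broad terms, face-search systems of this kind compare compact numeric "embeddings" of faces rather than raw pixels: a query photo is converted to a vector, which is then matched against pre-computed vectors in the database by similarity. The following is a minimal illustrative sketch of that nearest-match step in Python; the embedding values, names, and threshold are invented for illustration, and this is not Clearview's actual code.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, database, threshold=0.9):
    """Return (identity, score) pairs above the threshold, best match first."""
    scored = [(name, cosine_similarity(query_embedding, emb))
              for name, emb in database.items()]
    return sorted(((n, s) for n, s in scored if s >= threshold),
                  key=lambda pair: pair[1], reverse=True)

# Toy "database" of pre-computed face embeddings (invented values).
database = {
    "person_a": [0.1, 0.9, 0.3],
    "person_b": [0.8, 0.2, 0.5],
}

# A query embedding close to person_a's vector returns person_a first.
matches = search([0.11, 0.88, 0.31], database)
```

Real systems use much higher-dimensional embeddings produced by a neural network and approximate nearest-neighbour indexes to search billions of vectors quickly, but the comparison principle is the same.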

The technology has been used to solve murders, child exploitation cases, drug smuggling networks, theft, and more.

The most recent success story reported by Clearview occurred in October 2021, when the technology was used to identify a perpetrator in a major child sexual abuse case in Las Vegas that led to a 35-year prison sentence.

The New York Times first threw Clearview, founded by Ton-That and Richard Schwartz, into the spotlight in 2020.

“The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew,” the Times wrote.

The following year, the company made headlines after law enforcement officials used its technology to try to identify rioters from the January 6, 2021 attack on the US Capitol.

Clearview identified rioters from photos and videos posted online during the siege, and saw a surge in usage shortly after the attack with a “26 percent increase of searches.”


Privacy concerns

Alongside legal threats from Google and Facebook, global data protection authorities have criticised Clearview’s technology.

They say it violates privacy norms, and some US lawmakers have called for an end to the use of Clearview’s “particularly dangerous” products.

Earlier this month, a group of Democratic senators and representatives called for a ban on the technology in letters to five federal departments that use it for domestic law enforcement: the Departments of Homeland Security, Justice, Defense, Interior, and Health and Human Services.

“Facial recognition tools pose a serious threat to the public’s civil liberties and privacy rights, and Clearview AI’s product is particularly dangerous,” said US Senators Edward J Markey and Jeff Merkley, and Representatives Pramila Jayapal and Ayanna Pressley.

When asked about these concerns, Ton-That told TRT World that the company’s “image-search engine functions within the framework of applicable laws.”

“Clearview AI searches only publicly available information, like Google, Bing, Yahoo or any other search engine,” Ton-That added.


Racial bias

The US lawmakers also raised concerns over the threats posed by facial recognition technology for communities of colour and immigrant communities. 

“Communities of colour are systematically subjected to over-policing, and the proliferation of biometric surveillance tools is, therefore, likely to disproportionately infringe upon the privacy of individuals in Black, Brown, and immigrant communities,” they wrote in their letter.

According to the National Institute of Standards and Technology (NIST), people of colour are up to 100 times more likely to be misidentified than white men by facial recognition technology.

Ton-That countered these claims of racial bias, saying that “as a person of mixed race this is highly important to me.”

In the NIST’s 1:1 Face Recognition Vendor Test (FRVT), Clearview’s algorithm “consistently achieved greater than 99 percent accuracy across all demographics,” according to Ton-That.

Clearview also “correctly matched the correct face out of a lineup of 12 million photos at an accuracy rate of 99.85 percent” in the FRVT.

“According to the Innocence Project, 70 percent of wrongful convictions result from eyewitness lineups. Accurate facial recognition technology like Clearview AI is able to help create a world of bias-free policing,” said Ton-That.

Despite concerns, the 50-person company is looking to further expand this year by hiring 18 more people, Ton-That recently told Reuters.

Clearview also plans “to add enhancement tools” and possibly AI technology that generates younger and older renderings of subjects, so that people could be matched to their childhood photos decades later.

