Texas Attorney General Ken Paxton has filed a lawsuit against Meta over Facebook’s facial recognition practices, his office announced on Monday. The news was first reported by The Wall Street Journal, which notes that the lawsuit seeks civil penalties in the hundreds of billions of dollars. The lawsuit alleges that the company’s use of facial recognition technology, which it has now discontinued, violated the state’s privacy protections regarding biometric data.
A press release announcing the lawsuit alleges that Facebook has been storing millions of biometric identifiers contained in photos and videos uploaded by users. Attorney General Paxton says that Facebook exploited the personal information of users “to grow its empire and reap historic windfall profits.”
“Facebook will no longer take advantage of people and their children with the intent to turn a profit at the expense of one’s safety and well-being,” Paxton said in a statement. “This is yet another example of Big Tech’s deceitful business practices and it must stop. I will continue to fight for Texans’ privacy and security.”
A spokesperson from Meta told TechCrunch in an email that “these claims are without merit and we will defend ourselves vigorously.”
The lawsuit alleges that Facebook deceived the public by concealing the nature of its practices and that Texans who used the app were oblivious to the fact that Facebook was capturing biometric information from photos and videos. It also alleges, without providing further context, that users were unaware that Facebook was disclosing users’ personal information to other entities who further exploited it.
“Facebook often failed to destroy collected biometric identifiers within a reasonable time, exposing Texans to ever-increasing risks to their well-being, safety and security,” the lawsuit reads. “Facebook knowingly captured biometric information for its own commercial benefit, to train and improve its facial recognition technology, and thereby create a powerful artificial intelligence apparatus that reaches all corners of the world and ensnares even those who have intentionally avoided using Facebook services.”
In November 2021, Meta announced it was shutting down its Face Recognition system on Facebook and would no longer automatically identify opted-in users in photos and videos. It also said it would delete over a billion individual facial recognition templates as part of the shutdown. But Texas officials asked Meta to preserve this data for their investigation, likely delaying the system's full closure.
This isn’t the first time that Meta has faced legal action for its facial recognition practices. Last March, Facebook was ordered to pay $650 million for running afoul of an Illinois law designed to protect the state’s residents from invasive privacy practices. That law, the Biometric Information Privacy Act (BIPA), is a powerful state measure that’s tripped up tech companies in recent years. The suit against Facebook was first filed in 2015, alleging that Facebook’s practice of tagging people in photos using facial recognition without their consent violated state law.
Following the ruling, 1.6 million Illinois residents received at least $345 each under the final settlement approved in California federal court. The final figure was $100 million higher than the $550 million Facebook proposed in 2020, which a judge deemed inadequate. Facebook had already disabled automatic facial recognition tagging in 2019, making the feature opt-in instead and addressing some of the privacy criticisms echoed in the Illinois class-action suit.
A $650 million settlement would have been enough to significantly impact any normal company, but Facebook brushed it off as it did with the FTC’s record-setting $5 billion penalty in 2019 following its probe into the social media giant’s privacy issues.
The new Texas lawsuit shows that broad privacy laws could have a significant impact not only on Meta's operations but also on the practices of other big technology companies. In recent years, a cluster of lawsuits has accused Microsoft, Google and Amazon of breaking similar laws when users' faces were used to train their facial recognition systems without explicit consent.