YESTERDAY'S LESSON
FTC TARGETS HIDDEN AI TRAINING, BUT BIOMETRIC SURVEILLANCE GROWS
What Happened
The clearest concrete development came from the FTC. The agency finalized its case involving OkCupid parent Match Group Americas and facial-recognition company Clarifai, taking the position that undisclosed use of app photos and profile data to train AI can violate Section 5 of the FTC Act. The complaint alleged that Clarifai used about three million OkCupid images, plus related demographic and location data, without disclosure or a data-sharing agreement. The result is a 20-year order and a 10-year enhanced compliance program with recordkeeping, reporting, and monitoring duties, even though no fine was imposed.
Fresh reporting and conference demonstrations underscored why biometrics remain the most concrete privacy fight right now. New coverage of the Angela Lipps case described how North Dakota investigators used a Clearview AI lead and follow-on photo comparisons in a bank-fraud case before arresting a Tennessee woman, who was later cleared by bank records showing she was elsewhere. At RSAC 2026, ESET's Jake Moore showed how cheap AI tools can spoof facial recognition and liveness checks, including in a bank onboarding flow.
Yet deployment is still expanding. North Yorkshire Police announced a live facial-recognition rollout with promises of immediate deletion for non-matches, while Stockton, California approved a $3.15 million expansion of Flock license-plate readers. Separately, more than 60 U.S. groups asked Meta, the FTC, DOJ, and the White House to scrutinize reported plans for facial recognition in Ray-Ban smart glasses, though Meta has not confirmed the feature.
Key points
- The FTC’s OkCupid-Clarifai order makes undisclosed AI training on user images and profile data a live consumer-protection issue, not just a policy or copyright argument.
- The alleged OkCupid dataset included not only photos but also demographic and location information, widening the compliance lesson for AI vendors and data-sharing partners.
- Fresh reporting on the Lipps case shows how a facial-recognition lead can still cascade into arrest and months of detention even after additional human review.
- Face-based verification is getting easier to spoof at the same time police and local governments keep expanding live facial recognition and vehicle-tracking systems.
- Backlash around smart glasses and public-space biometrics is growing, but the day’s concrete moves were still deployments and oversight orders rather than broad new limits.
Implications
For companies, yesterday’s biggest change is practical. The FTC is signaling that if customer images or profile fields end up in AI training without clear disclosure and valid sharing arrangements, the problem can turn into a long-tail enforcement matter with years of oversight. That puts more weight on vendor contracts, training-data inventories, secondary-use notices, retention rules, and internal records showing exactly how datasets were sourced and approved.
This also continues the recent shift toward use-specific scrutiny of biometrics: where the images came from, whether a match is reliable enough to act on, and what happens when the system is wrong. For product teams and public agencies alike, face matching looks harder to defend as a standalone control. Human review, auditable deletion, access limits, and fallback methods that do not depend on a face alone are becoming less optional.
Things to watch
- Whether the FTC uses the OkCupid-Clarifai order as a template for other cases involving scraped, licensed, or partner-supplied personal data used in AI training.
- Whether Angela Lipps pursues civil-rights litigation, and whether discovery reveals what safeguards, confidence thresholds, or audit steps were used before her arrest.
- Whether Meta confirms any facial-recognition feature for smart glasses, and whether local police rollouts in the UK and U.S. draw formal legal or procurement challenges over retention, sharing, and accuracy.