United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares images against billions of photos scraped from the web.
The deal extends access to Clearview tools to Border Patrol's headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to "disrupt, degrade, and dismantle" people and networks viewed as security threats.
The contract states that Clearview provides access to "over 60+ billion publicly available images" and will be used for "tactical targeting" and "strategic counter-network analysis," indicating the service is meant to be embedded in analysts' day-to-day intelligence work rather than reserved for isolated investigations. CBP says its intelligence units draw from a "variety of sources," including commercially available tools and publicly available data, to identify people and map their connections for national security and immigration operations.
The agreement anticipates analysts handling sensitive personal data, including biometric identifiers such as face images, and requires nondisclosure agreements for contractors who have access. It does not specify what kinds of photos agents will upload, whether searches may include US citizens, or how long uploaded images or search results will be retained.
The Clearview contract lands as the Department of Homeland Security faces mounting scrutiny over how face recognition is used in federal enforcement operations far beyond the border, including large-scale actions in US cities that have swept up US citizens. Civil liberties groups and lawmakers have questioned whether face-search tools are being deployed as routine intelligence infrastructure, rather than limited investigative aids, and whether safeguards have kept pace with expansion.
Last week, Senator Ed Markey introduced legislation that would bar ICE and CBP from using face recognition technology altogether, citing concerns that biometric surveillance is being embedded without clear limits, transparency, or public consent.
CBP did not immediately respond to questions about how Clearview would be integrated into its systems, what types of images agents are authorized to upload, and whether searches may include US citizens.
Clearview's business model has drawn scrutiny because it relies on scraping photos from public websites at scale. Those images are converted into biometric templates without the knowledge or consent of the people photographed.
Clearview also appears in DHS's recently released artificial intelligence inventory, linked to a CBP pilot initiated in October 2025. The inventory entry ties the pilot to CBP's Traveler Verification System, which conducts face comparisons at ports of entry and other border-related screenings.
CBP states in its public privacy documentation that the Traveler Verification System does not use information from "commercial sources or publicly available data." It is more likely, at launch, that Clearview access would instead be tied to CBP's Automated Targeting System, which links biometric galleries, watch lists, and enforcement data, including records tied to recent Immigration and Customs Enforcement operations in areas of the US far from any border.
Clearview AI did not immediately respond to a request for comment.
Recent testing by the National Institute of Standards and Technology, which evaluated Clearview AI among other vendors, found that face-search systems can perform well on "high-quality visa-like photos" but falter in less controlled settings. Images captured at border crossings that were "not originally intended for automated face recognition" produced error rates that were "much higher, often in excess of 20 percent, even with the more accurate algorithms," federal scientists say.
The testing underscores a central limitation of the technology: NIST found that face-search systems cannot reduce false matches without also increasing the risk that the systems fail to recognize the correct person.
As a result, NIST says agencies may operate the software in an "investigative" setting that returns a ranked list of candidates for human review rather than a single confirmed match. When systems are configured to always return candidates, however, searches for people not already in the database will still generate "matches" for review. In those cases, the results will always be 100 percent wrong.
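To make that failure mode concrete, here is a minimal sketch in Python, not drawn from NIST's testing code or any vendor's software, of the two configurations described above. The gallery, embedding size, and function names are hypothetical; the point is that a similarity threshold trades false matches against missed matches, while a fixed-rank "investigative" mode returns candidates no matter what, so a probe of someone absent from the database still yields a full list of wrong matches.

```python
# Illustrative sketch only: not Clearview's or NIST's actual code.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery: 1,000 enrolled face embeddings (unit vectors).
gallery = rng.normal(size=(1000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def search_threshold(probe, threshold):
    """Return only gallery entries whose similarity clears the threshold.

    Raising the threshold suppresses false matches but rejects more genuine
    matches; lowering it does the reverse. Moving this one dial cannot
    reduce both error rates at once, which is the tradeoff NIST describes.
    """
    scores = gallery @ probe
    return np.flatnonzero(scores >= threshold)

def search_top_k(probe, k=10):
    """Fixed-rank 'investigative' mode: always return the k most similar
    entries for human review, however weak the similarities are."""
    scores = gallery @ probe
    return np.argsort(scores)[::-1][:k]

# A probe of someone NOT in the gallery still yields k "candidates".
# By construction, every one of them is a wrong match.
outsider = rng.normal(size=128)
outsider /= np.linalg.norm(outsider)
print(search_top_k(outsider, k=10))  # 10 candidates, 100 percent wrong
```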

