The FTC Brings Algorithmic Bias into Sharp Focus
In May 2023, the Federal Trade Commission (FTC) warned that it would closely monitor alleged misuses of biometric information; it has now acted on that warning.[1] For the first time, the FTC has brought an enforcement action under Section 5 of the FTC Act based on alleged algorithmic bias. What’s more, the FTC did so with steep proposed remedies, including deletion of the data, models, and algorithms derived from the biometrics at issue.
On December 19, 2023, the FTC filed a complaint and proposed stipulated order against Rite Aid Corporation for the company’s alleged use of facial recognition technology without appropriate safeguards, including sufficient bias testing.[2] Although this marks the first time the FTC has used its authority to address algorithmic bias, it is not the first time the FTC has addressed a company’s use of facial recognition technology.[3]
According to a statement by FTC Commissioner Alvaro M. Bedoya, the action marks a new focus by the FTC on companies that deploy biometrics and artificial intelligence (AI) systems that may have biased impacts on consumers.[4]
In its complaint, the FTC alleges that Rite Aid used facial recognition technology “to identify patrons that it had previously deemed likely to engage in shoplifting or other criminal behavior.”[5] The FTC contends that Rite Aid did so “without taking reasonable steps to address the risks that their deployment of such technology was likely to result in harm to consumers as a result of false-positive facial recognition match alerts.”[6]
According to the complaint, the technology at issue sent alerts to Rite Aid employees when a patron who matched an entry in “Rite Aid’s watchlist database” entered the store.[7] A match allegedly led to Rite Aid employees subjecting such patrons “to increased surveillance; banning them from entering or making purchases at the Rite Aid stores; publicly and audibly accusing them of past criminal activity in front of friends, family, acquaintances, and strangers; detaining them or subjecting them to searches; and calling the police to report that they had engaged in criminal activity.”[8] These actions allegedly occurred in numerous instances involving false-positive matches, meaning the technology incorrectly identified a person who had entered a store as someone in Rite Aid’s watchlist database.[9]
The FTC claims that Rite Aid deployed the technology without taking reasonable steps to prevent harm to consumers, including purportedly failing to assess and test the technology’s accuracy before and after deployment, to enforce image quality standards, and to train and oversee the employees who operated it.[10]
These failures, the FTC contends, most acutely injured Black, Asian, and Latino consumers and women, all of whom the technology allegedly misidentified as matches at disproportionately high rates.[11]
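To make the alleged disparity concrete, a basic bias test of this kind compares false-positive match rates across demographic groups. The Python sketch below is purely illustrative: the alert log, field names, and groups are invented for the example, and neither the complaint nor the proposed order prescribes this particular methodology.

```python
from collections import defaultdict

# Hypothetical match-alert log. Each record notes the consumer's
# demographic group and whether the watchlist "match" turned out to
# be a false positive. All data and field names are invented.
alerts = [
    {"group": "A", "false_positive": False},
    {"group": "A", "false_positive": True},
    {"group": "B", "false_positive": True},
    {"group": "B", "false_positive": True},
    {"group": "B", "false_positive": False},
]

def false_positive_rates(records):
    """Return each group's share of match alerts that were false positives."""
    totals, false_pos = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        false_pos[record["group"]] += record["false_positive"]
    return {group: false_pos[group] / totals[group] for group in totals}

rates = false_positive_rates(alerts)
print(rates)  # {'A': 0.5, 'B': 0.666...}

# A large ratio between the highest and lowest group rates is the kind
# of disparity the FTC alleges Rite Aid never tested for.
print(f"disparity ratio: {max(rates.values()) / min(rates.values()):.2f}")
```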
The proposed order would impose significant obligations on Rite Aid. It also signals the FTC’s expectations with respect to the use of automated technologies not only for surveillance, but also for other automated decision-making processes, such as hiring, advertising, and pricing.
The proposed order contains several key requirements for Rite Aid, among them deletion of the photos, videos, and any data, models, or algorithms derived from them; a multiyear prohibition on deploying facial recognition surveillance technology; and the implementation of a comprehensive monitoring program subject to independent assessment.
In his Statement, Commissioner Bedoya put it plainly: “Biased face surveillance hurts people.”[23] His message to industry was clear:
I want industry to understand that this Order is a baseline for what a comprehensive algorithmic fairness program should look like. Beyond giving notice, industry should carefully consider how and when people can be enrolled in an automated decision-making system, particularly when that system can substantially injure them. In the future, companies that violate the law when using these systems should be ready to accept the appointment of an independent assessor to ensure compliance.[24]
The action against Rite Aid offers several insights into how the FTC may use its enforcement authority, as well as into the potential ripple effects beyond the use of biometrics.
Notably, the implications are not limited to face surveillance. Indeed, Commissioner Bedoya framed the action as “part of a much broader trend of algorithmic unfairness” and cited résumé-screening models, advertising platforms, and pricing models as presenting similar issues of algorithmic bias that could violate Section 5.[25]
In sum, the FTC has now set forth a detailed list of expectations for companies that operate biometric surveillance systems that impact or could harm consumers. Reasonable algorithmic fairness practices, along with ongoing monitoring and testing of a company’s systems and tools, will be crucial to avoiding regulatory scrutiny.
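As a purely illustrative sketch of what the “ongoing monitoring” piece might look like in practice, the check below flags any demographic group whose false-positive rate drifts beyond a chosen tolerance of the overall rate. The function, threshold, and data are assumptions for this example, not requirements drawn from the proposed order.

```python
def flag_fpr_drift(group_rates: dict[str, float],
                   overall_rate: float,
                   tolerance: float = 1.25) -> list[str]:
    """Flag groups whose false-positive rate exceeds the overall rate by
    more than the tolerance ratio. The 1.25 threshold is illustrative."""
    if overall_rate <= 0:
        return []
    return [group for group, rate in group_rates.items()
            if rate / overall_rate > tolerance]

# Example: with an overall rate of 4%, a group at 6% (ratio 1.5) is flagged.
print(flag_fpr_drift({"A": 0.03, "B": 0.06}, overall_rate=0.04))  # ['B']
```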
[1] See FTC Warns About Misuses of Biometric Information and Harm to Consumers (May 18, 2023).
[2] Complaint for Permanent Injunction and Other Relief (“Compl.”) ¶ 3, Fed. Trade Comm’n v. Rite Aid Corp., No. 2:23-cv-05023 (E.D. Pa. Dec. 19, 2023), ECF No. 1.
[3] See, e.g., FTC Finalizes Settlement with Photo App Developer Related to Misuse of Facial Recognition Technology (May 7, 2021); FTC Imposes $5 Billion Penalty and Sweeping New Privacy Restrictions on Facebook (July 24, 2019).
[4] Statement of Commissioner Bedoya on FTC v. Rite Aid Corp. (“Statement”), Comm’n File No. 202-3190 (Dec. 19, 2023).
[5] Compl. ¶ 3.
[6] Id. ¶ 140.
[7] Id. ¶ 3.
[8] Id. ¶ 4.
[9] Id.
[10] Id. ¶ 5.
[11] Id. ¶ 6.
[12] Proposed Stipulated Order for Permanent Injunction and Other Relief, Ex. A, Decision and Order (“Proposed Order”) at 6, Fed. Trade Comm’n v. Rite Aid Corp., No. 2:23-cv-05023 (E.D. Pa. Dec. 19, 2023), ECF No. 2-2.
[13] Id.
[14] Id.
[15] Id. at 7.
[16] Id.
[17] Id. at 7-9.
[18] Id. at 9-12.
[19] Id. at 12, 14.
[20] Id. at 13-16.
[21] Id. at 15-16.
[22] Id. at 17-23.
[23] Statement at 1.
[24] Id. at 4.
[25] Id. at 5.