FTC Rolls Out Targeted AI Enforcement
Marking the opening of a new enforcement sweep called “Operation AI Comply,” the U.S. Federal Trade Commission (FTC) announced that it has brought actions against five companies for their allegedly deceptive or unfair use of AI. These efforts follow a summer filled with FTC activity relating to AI, including comments submitted by the FTC to the Federal Communications Commission outlining the tools the FTC intends to wield to address potential risks that AI presents to consumers and business; Dos and Don’ts on AI chatbots from the FTC Business Blog; and a joint statement alongside the Department of Justice and international enforcers on AI competition issues.
While these recent actions come with the eye-catching new Operation AI Comply tag, the enforcement tools the FTC is wielding are familiar. The complaints range from traditional FTC claims targeting deceptive money-making schemes supposedly powered by AI to more novel claims that an AI tool was being used for illegal purposes, but all were packaged as claims under Section 5 of the FTC Act, consistent with the FTC's position that there is no AI exception to consumer protection law. The current targeted operation follows the FTC's growing enforcement record in recent years against companies (such as Rite Aid) that used AI tools in violation of Section 5. In the absence of new regulation, agencies continue to adapt existing frameworks to tackle AI schemes, including through algorithmic disgorgement, the mandated deletion of algorithms created using illegally collected data, which has become an emerging and significant part of the FTC's AI enforcement strategy.
Three of the FTC's actions involved fairly standard money-making schemes dressed up in claims of new AI capabilities. In complaints against Ascend Ecom, Ecommerce Empire Builders, and FBA Machine, the FTC alleged that the companies' representations that their AI tools would enable customers to easily generate income on the internet were false and misleading. In each case, the companies offered to provide customers with, and support customers in running, online businesses, including online storefronts utilizing AI-powered tools. The FTC alleged that customers were misled, that the services did not deliver the promised success, and that the companies refused to refund customers' upfront investments.
In its complaint against DoNotPay, which offers a "robot lawyer" powered by AI, the FTC alleged that the company's claims that its services could quickly generate legal documents and check small business websites for compliance violations had not been tested and were misleading to consumers. The FTC alleged that the company did not conduct testing to determine whether its AI chatbot's output was equal to the level of a human lawyer, and that the company did not hire or retain any attorneys. The counts of false or unsubstantiated performance claims and false claims were brought under Section 5(a) of the FTC Act. DoNotPay has agreed to a proposed Commission order settling the charges against it, agreeing to pay $193,000 and to notify customers of the limitations of the services offered. The order also prohibits the company from making claims about its ability to substitute for any professional service without supporting evidence.
Finally, the FTC filed a complaint against Rytr, which markets and sells an AI writing assistant service offering several tools, including a "Testimonial & Review" generator. The complaint alleged that this tool allows subscribers to generate false consumer reviews and testimonials with misleading content not based on users' input, which could deceive potential buyers. For example, given only a product name and the desired tone of the review, the tool would generate a detailed review filled with positive feedback unrelated to any actual user's experience. The FTC alleged that the tool allowed users to create false and deceptive consumer reviews and that providing this service was an unfair business practice in violation of Section 5(a) of the FTC Act. In the 3-2 vote to file the complaint, Commissioners Holyoak and Ferguson dissented, arguing that the potential harm was speculative because the complaint did not allege that any such reviews were actually posted or relied upon by consumers.
These complaints demonstrate that the FTC will continue to use its existing tools, while also leveraging new theories, to prevent companies from using AI to mislead consumers. While it remains to be seen how courts will respond to these latest enforcement efforts, what is clear, as FTC Chair Lina M. Khan herself has said, is that "there is no AI exemption from the laws on the books."