Don’t use Americans’ data on the sly to train AI, FTC warns businesses

US companies may find themselves under federal scrutiny if they “quietly” try to funnel customers’ personal information into training artificial intelligence models, the government warned this week.

The warning by the Federal Trade Commission, the nation’s top privacy and consumer protection agency, highlights the enormous value of Americans’ personal data. Troves of digital information already help Netflix determine what you might like to watch next, or help Amazon figure out what you’re likely to buy, or help Google understand what shops are nearby.

Now, however, much of that same information could be fed into ever more sophisticated AI models amid the rush to adopt a hot new technology, the FTC wrote in a blog post Tuesday.

“You may have heard that ‘data is the new oil,’” the agency said, referencing an adage describing the way personal information is a critical input powering the machinery of Big Tech. “There is perhaps no data refinery as large-capacity and as data-hungry as AI.”

Many companies disclose how they use customer or user information in their privacy policies. But simply updating a privacy policy to say that personal data originally collected for other purposes will now be used to train AI isn't transparent enough and could violate the law, the FTC said.

Consumer protection regulators won’t hesitate to crack down on companies “surreptitiously re-writing their privacy policies or terms of service to allow themselves free rein to use consumer data for product development,” the agency said. “Ultimately, there’s nothing intelligent about obtaining artificial consent.”

The blog post highlights how, amid a lack of congressional action to regulate AI, federal agencies are increasingly trying to apply existing law to AI’s potential risks and harms.

The FTC’s guidance this week coincided with a warning Tuesday by Gary Gensler, the head of the Securities and Exchange Commission, that publicly traded companies risk violating US securities law if they mislead investors by overhyping what their AI tools can do, or if they claim to use AI when in fact they do not.

The FTC has similarly warned companies not to make overheated claims about AI, pointing to its power to enforce the Fair Credit Reporting Act, the Equal Credit Opportunity Act and the FTC Act, which authorizes the agency to go after “unfair or deceptive practices,” including false claims in marketing or privacy policies.
