BBB National Programs Issues Compliance Warning for Use of AI in Child-Directed Advertising and Data Collection

McLean, VA – May 1, 2024 – BBB National Programs’ Children’s Advertising Review Unit (CARU) today issued a new compliance warning on the application of CARU’s Advertising and Privacy Guidelines to the use of Artificial Intelligence (AI).

The CARU compliance warning puts advertisers, brands, endorsers, developers, toy manufacturers, and others on notice that CARU’s Advertising and Privacy Guidelines apply to the use of AI in advertising and the collection of personal data from children. 

The warning states that CARU will strictly enforce its Advertising and Privacy Guidelines in connection with the use of AI, given the risks it may pose in areas such as manipulative practices, influencer marketing, deceptive claims, and privacy. Additionally, according to the warning, marketers should be particularly careful to avoid deceiving children about what is real and what is not when children engage with realistic AI-powered experiences and content. 

"Our Compliance Warning stresses the importance of responsible advertising and data privacy practices in the children’s space, particularly in AI-driven advertising and data collection," said Dona Fraser, Senior Vice President, Privacy Initiatives at BBB National Programs. "We call on marketers, brands, and developers to prioritize transparency, safety, and compliance with CARU’s Advertising and Privacy Guidelines as we all work to maintain the well-being of children as AI becomes commonplace.”

CARU’s Guidelines are widely recognized industry standards designed to assure that advertising directed to children is not deceptive, unfair, or inappropriate for its intended audience and that, in an online environment, children’s data is collected and handled responsibly. CARU monitors child-directed media to ensure compliance with its Guidelines, seeking the voluntary cooperation of companies and, where necessary, referral for enforcement action to an appropriate federal regulatory body, usually the Federal Trade Commission (FTC), or to a state Attorney General. 

CARU’s Advertising Guidelines 

CARU’s Advertising Guidelines apply to advertising in all media, regardless of whether AI is used to create or disseminate the ads, including digital worlds where altered, simulated, and synthetic content is powered by AI. 

Brands using AI in advertising should be particularly cautious of the potential to mislead or deceive a child in the following areas:

  • AI-generated deepfakes; simulated elements, including realistic renderings of people, places, or things; or AI-powered voice-cloning techniques within an ad. 
  • Product depictions, including copy, sound, and visual presentations generated or enhanced with AI, that indicate product or performance characteristics.
  • Fantasy, via techniques such as animation and AI-generated imagery, that could unduly exploit a child’s imagination, create unattainable performance expectations, or exploit a child’s difficulty in distinguishing between the real and the fanciful.
  • The creation of character avatars and simulated influencers that directly engage with children and can mislead them into believing they are interacting with a real person.

CARU’s Advertising Guidelines also note that advertisers using generative AI to depict people should take measures to ensure the depictions reflect the diversity of humanity and do not promote harmful negative stereotypes. 

CARU’s Privacy Guidelines

CARU’s Privacy Guidelines apply to online data collection and other privacy-related practices for online services that target children under 13 years of age, and to operators that have actual knowledge they are collecting personal information from children under 13 years of age.  

Because AI offers unique opportunities to interact with children who may not understand the nature of the information being sought or its intended use, brands using AI in online services should be particularly cautious in the following areas:

  • Requirements and responsibilities when collecting personal information from a child under the Children’s Online Privacy Protection Act (COPPA).
  • Reliance upon third-party generative AI technology to operate and process data, which may require verifiable parental consent (VPC).
  • The difficulty of honoring a parent’s deletion request once a child’s personal information has been input into an AI system, where it may be nearly impossible to retrieve and delete. 
  • The obligation of AI-connected toys and online services to collect VPC and properly disclose their collection practices in their Privacy Policy prior to any collection, use, or sharing of children’s personal information, whether through their own online service or with a third-party generative AI service (see the illustrative sketch below). 
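
To make the last two points concrete for developers, the following is a minimal, hypothetical sketch (in Python) of how an operator might gate a child’s input behind verifiable parental consent before it reaches a third-party generative AI service, and handle a parent’s deletion request locally. The class and function names are illustrative assumptions, not anything prescribed by CARU’s Guidelines, COPPA, or a real vendor API.

```python
# Hypothetical sketch only: ConsentRecord, ConsentStore, send_to_generative_ai,
# and handle_deletion_request are illustrative names, not a real library.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ConsentRecord:
    """Verifiable parental consent (VPC) on file for one child."""
    child_id: str
    granted_by: str               # parent or guardian identifier
    granted_at: datetime
    covers_third_party_ai: bool   # consent must cover sharing with outside AI services


@dataclass
class ConsentStore:
    """In-memory stand-in for wherever an operator keeps consent records."""
    records: dict[str, ConsentRecord] = field(default_factory=dict)

    def has_vpc_for_ai(self, child_id: str) -> bool:
        rec = self.records.get(child_id)
        return rec is not None and rec.covers_third_party_ai


def send_to_generative_ai(child_id: str, message: str, store: ConsentStore) -> str:
    """Refuse to forward a child's input to an external AI service without VPC."""
    if not store.has_vpc_for_ai(child_id):
        # No consent on file: do not collect, use, or share the input.
        raise PermissionError("Verifiable parental consent required before sharing data.")
    # Placeholder for the actual third-party call; child identifiers are not forwarded.
    return f"[forwarded to AI service without child identifier]: {message}"


def handle_deletion_request(child_id: str, store: ConsentStore) -> None:
    """On a parent's deletion request, remove the operator's local records.

    Data already processed by an external AI system may be impractical to
    retrieve, which is why the gate above avoids sending it in the first place.
    """
    store.records.pop(child_id, None)


if __name__ == "__main__":
    store = ConsentStore()
    store.records["child-1"] = ConsentRecord(
        child_id="child-1",
        granted_by="parent-1",
        granted_at=datetime.now(),
        covers_third_party_ai=True,
    )
    print(send_to_generative_ai("child-1", "tell me a story", store))
    handle_deletion_request("child-1", store)
```

Refusing to forward any data at all until consent is on file reflects the warning’s point that personal information already processed by an external AI system may be nearly impossible to retrieve and delete.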

View CARU’s AI Compliance Warning here.