New AI Disclosure Rules For Political And Social Issue Ads On Facebook

Facebook will soon require new disclosures from advertisers who use AI to generate political or social issue ads. It’s good for normie users, but it could probably go a bit further, IMHO. The new disclosure rules go into effect in early January.

We’re announcing a new policy to help people understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered, including through the use of AI. This policy will go into effect in the new year and will be required globally.

Advertisers will have to disclose whenever a social issue, electoral, or political ad contains a photorealistic image or video, or realistic sounding audio, that was digitally created or altered to:

  • Depict a real person as saying or doing something they did not say or do; or

  • Depict a realistic-looking person that does not exist or a realistic-looking event that did not happen, or alter footage of a real event that happened; or

  • Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.

Advertisers running these ads do not need to disclose when content is digitally created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad. This may include image size adjusting, cropping an image, color correction, or image sharpening, unless such changes are consequential or material to the claim, assertion, or issue raised in the ad.

Meta will add information on the ad when an advertiser discloses in the advertising flow that the content is digitally created or altered. This information will also appear in the Ad Library. If we determine that an advertiser doesn’t disclose as required, we will reject the ad and repeated failure to disclose may result in penalties against the advertiser. We will share additional details about the specific process advertisers will go through during the ad creation process.

As always, we remove content that violates our policies whether it was created by AI or a person. Our independent fact-checking partners review and rate viral misinformation and we do not allow an ad to run if it’s rated as False, Altered, Partly False, or Missing Context. For example, fact-checking partners can rate content as “Altered” if they determine it was created or edited in ways that could mislead people, including through the use of AI or other digital tools.
