Google announced it will soon require all election advertisers to add disclaimers when their ads contain content altered or created using artificial intelligence (AI) tools.
The policy update, which takes effect in mid-November, means that election advertisers using Google’s platforms will have to alert viewers when their ads use images, video, or audio generated or modified by generative AI.
Google writes in the changelog:
we are updating our Political content policy to require that all verified election advertisers in regions where verification is required must prominently disclose when their ads contain synthetic content that inauthentically depicts real or realistic-looking people or events. This disclosure must be clear and conspicuous, and must be placed in a location where it is likely to be noticed by users. This policy will apply to image, video, and audio content.
Examples of ads that will require a disclaimer include those that create the false impression that someone said or did something they never actually said or did. This also encompasses ads that manipulate real event footage to fabricate scenes that never actually occurred.
Ads that contain synthetic content altered or generated in such a way that is inconsequential to the claims made in the ad will be exempt from these disclosure requirements. This includes editing techniques such as image resizing, cropping, color or brightening corrections, defect correction (for example, “red eye” removal), or background edits that do not create realistic depictions of actual events.
Google told Bloomberg that the new policy does not extend to videos uploaded to YouTube that are not paid advertising, even when such videos are uploaded by political campaigns.
The new policy could improve the transparency of election ads on Google's platforms. It arrives shortly after Google moved forward with its targeted ad-tracking system in the Chrome browser.