YouTube broadens AI deepfake detection to protect politicians, officials, and journalists
YouTube expands its AI deepfake detection program to include politicians, government officials, and journalists to curb the spread of manipulated videos and protect public figures.
YouTube is widening access to its likeness detection technology, which is designed to identify AI-generated deepfakes, by opening a pilot program for government officials, political candidates, and journalists, the company announced Tuesday. Participants in the pilot will get access to a tool that can detect unauthorised AI-generated content and allow them to request its removal if they believe it violates YouTube's rules.
The technology was launched last year for around 4 million YouTube creators in the YouTube Partner Program, following earlier testing.
Much like YouTube's existing Content ID system, which identifies copyrighted material in user-uploaded videos, the likeness detection feature is designed to spot AI-generated simulations of a person's face. Such deepfakes are sometimes used to spread misinformation and distort people's understanding of reality, making it appear as though prominent public figures, such as politicians or other government officials, said or did things they never actually said or did.
YouTube says it is trying to strike a balance between protecting free expression and managing the risks posed by AI systems capable of creating convincing digital likenesses of public figures.
"This expansion is really about the integrity of the public conversation," Leslie Miller, YouTube's vice president of Government Affairs and Public Policy, said during a press briefing held ahead of Tuesday's announcement. "We know that the risks of AI impersonation are particularly high for those in the civic space. But while we are providing this new shield, we're also being careful about how we use it," she added.
Miller said not every detected match would automatically be removed upon the affected person's request. Instead, YouTube will review each request under its existing privacy policy framework to determine whether the content qualifies as parody or political criticism, both of which are protected forms of expression.
The company also said it is pushing for similar safeguards at the federal level by supporting the NO FAKES Act in Washington, D.C. That proposal would regulate the use of AI to create unauthorised recreations of a person's voice or visual likeness.
To use the new tool, eligible participants in the pilot must first verify their identity by uploading a selfie along with a government-issued ID. After that, they can create a profile, review any matches that appear, and choose whether to request removal. YouTube said it eventually wants to give people the option to block violating uploads before they are published, or to monetise those videos, in a way similar to how Content ID works today.
The company did not say which politicians or officials will be part of the first testing group, but it said the longer-term goal is to make the technology more broadly available.
Videos identified as using AI will carry labels, though those labels will not always appear in the same place. In some cases, the label will show up in the video description, while videos dealing with more "sensitive topics" will display the label more prominently on the video itself. That is the same general approach YouTube currently applies to all AI-generated material.
"There's a lot of content that's produced with AI, but that distinction's actually not material to the content itself," Amjad Hanif, YouTube's vice president of Creator Products, said when discussing how labels are placed. "It could be a cartoon that is generated with AI. And so I think there's a judgment on whether it's a category that maybe merits a very visible disclaimer," he said.
YouTube is not currently disclosing how many AI deepfake removals have been processed through the detection system already in creators' hands, but said the amount of content removed so far has been "very small."
"I think for a lot of [creators], it's just been the awareness of what's being created, but the volume of actually removal requests is really, really low because most of it turns out to be fairly benign or additive to their overall business," Hanif said.
That dynamic may look very different when the deepfakes involve government officials, political figures, or journalists.
Over time, YouTube says it plans to expand its deepfake detection technology into other areas, including recognisable spoken voices and other forms of intellectual property, such as well-known fictional characters.