Facebook is implementing a policy against deepfaked images and other manipulated media
In an effort to counter the spread of misinformation, Facebook is implementing a new policy to remove manipulated media from the social network if it attempts to slander or mischaracterize its subjects.
In a post on Facebook's official newsroom, Monika Bickert, Facebook's Vice President of Global Policy Management, stated that these new measures are the result of consulting with over 50 global experts from a variety of technical, policy, media, legal, civic, and academic backgrounds.
Any audio, photos, or videos that meet the following criteria will be removed:
• Edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.
• A product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.
Crucially, this new policy does not extend to satirical or parody content, or "video that has been edited solely to omit or change the order of words."
Bickert concludes her post by noting that Facebook's policies on manipulated media will continue to evolve based on the company's internal insights as well as new information arising from its partnerships with experts.