On Tuesday, Twitter announced changes to its policy on deceptively manipulated posts, including deepfakes, or AI-altered videos that distort reality, ahead of the 2020 elections.
In a blog post, Twitter announced changes to the company's synthetic and manipulated media policy, which it defines as covering any photo, audio, or video that's been significantly altered or fabricated to mislead people or change the original meaning of the content. Under the new rules, Twitter will remove this kind of media if the company finds it likely to cause harm, such as content that threatens people's physical safety or could cause widespread civil unrest. If Twitter doesn't think manipulated media posts are likely to cause harm, it may still label the tweets as containing manipulated media, warn users who try to share them, and deprioritize the content in users' feeds. The changes will go into effect on March 5.
Twitter is the latest social media company, after Facebook, YouTube, and Reddit, to move in recent months to restrict increasingly controversial deepfakes and other kinds of manipulated media on its platform. The vast majority of deepfakes (about 96 percent, according to a recent study) are nonconsensual pornographic material, often altering images of women to show them participating in sexual acts without their permission. These videos are already in violation of Twitter's longstanding nonconsensual nudity policy.
But a different kind of manipulated media posted on social platforms has been causing controversy of late: deceptively edited videos of prominent politicians. One of the most famous examples so far is from May, when a doctored video of House Speaker Nancy Pelosi, slowed down to make her speech seem inebriated, went viral on social media platforms, including Twitter. Similarly, a clip of former Vice President Joe Biden went viral online after being misleadingly edited to make it falsely appear he was making racist remarks.
Under the new rules, Twitter says that in the future, it would at minimum label videos like the ones of Pelosi and Biden as manipulated, since their speech was deceptively altered. Beyond the Pelosi example, political deepfakes have become a concern for US lawmakers and other government officials, who warn that they could be used by malicious actors to undermine US democracy and influence elections. The increasingly tough rules from Twitter and other companies are in part a response to these fears, particularly ahead of the 2020 presidential elections.
The changes Twitter and others have announced are a long time coming. In May, YouTube removed the controversial Pelosi video, although it continued to gain traction on other platforms, prompting right-wing cable news pundits to question Pelosi's mental health and fitness to serve in office. Facebook eventually placed a warning label on the video for users who shared it. The company also added links to vetted content explaining that the video was manipulated.
Twitter was comparatively the most permissive, letting tweets containing the video stand without any intervention. President Donald Trump went on to retweet one of the doctored videos of Pelosi, which has racked up around 90,000 likes. On a press call on Thursday, Twitter's head of site integrity, Yoel Roth, said the company would at minimum label the Pelosi video, and that, depending on what the tweet sharing the video says, it might choose to take down specific tweets. A spokesperson for Twitter said the new rules will not be applied retroactively, so Trump's tweet sharing the doctored video won't be labeled as such for now.
Twitter has been working on updates to its manipulated media policy for some time. Back in October, it announced Tuesday's changes in a draft post and invited the public to take a survey regarding the proposed rules, or to tweet their feedback. The company says it received more than 6,500 responses in that period and also consulted with several academics and civic organizations. According to Twitter, while 90 percent of individuals who participated in the survey wanted the company to label significantly altered tweets and alert them before they share such media, respondents were more divided on whether Twitter should delete such media: 55 percent of those surveyed in the US said Twitter should take down such content, while many others expressed concerns about the impact on free expression and the risk of censorship if it were to remove such content.
Twitter's changes to its manipulated media policy are a positive step. But they also leave a gray area around what kind of content will pass Twitter's tests for what counts as synthetic or manipulated media that's been shared deceptively, and for whether something is likely to cause harm. It's easy to see how one person could argue that a deceptively edited video of a politician is harmful to the democratic process, whereas others would argue that taking down such a video would amount to unnecessary censorship.
Twitter said it will prioritize applying its new policies to content that poses a physical threat to people's safety.
And aside from the big question of how strictly Twitter will enforce these new policies, there's also a question of how it will be able to find all the posts that potentially break the rules in the first place.
Twitter says it isn't coming out with any new tools to discover manipulated or synthetic media but will instead continue its existing process, which largely relies on users to find and report tweets that may violate the company's content moderation policies. The company said it will also partner with third parties to help identify manipulated content.
Overall, Twitter took a significant step Tuesday in tightening its rules against deceptive media, but exactly how this plays out will depend on how quickly and decisively the company can act on the inevitably tricky and politically charged cases that will arise in the months ahead.