After working on its manipulated media policy over the last few months, Twitter has this week unveiled its official rule against the posting of deceptive manipulated content, while it's also launching a new label for edited material it detects.
As per Twitter, its new official rule on manipulated media usage is:
“You may not deceptively share synthetic or manipulated media that are likely to cause harm. In addition, we may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide context.”
Twitter showcased its new label in a separate announcement tweet:
We know that some Tweets include manipulated photos or videos that can cause people harm. Today we’re introducing a new rule and a label that will address this and give people more context around these Tweets pic.twitter.com/P1ThCsirZ4
— Twitter Safety (@TwitterSafety) February 4, 2020
With deepfakes potentially on the rise as a tool for misinformation, Twitter’s approach seems like a good one, allowing for satirical or light-hearted uses of the evolving form, while implementing clear warnings on potentially damaging content.
Indeed, deepfakes have become a major focus for online providers in recent times, with both Google and Facebook also launching new research initiatives to help them detect and act on such content. From a consumer perspective, deepfakes haven't had a major impact as yet (that we know of), but the increased action from the major online players suggests that a new wave is coming. As the technology becomes easier to use and more accessible, it will inevitably be seen as a tool for manipulating public opinion.
And it's easy to imagine it becoming a bigger issue. These days, people tend to believe what they choose to online – if a news story doesn't align with your world view, you can dismiss it and find another source that does. That approach has been emboldened by world leaders who've increasingly dismissed press reports as 'fake news' in public. In this media climate, you can bet that manipulated videos depicting what people want to believe will spread quickly – so it's important for Twitter, and other platforms, to take proactive steps.
Hopefully, these added labels will at least slow any such momentum, and prompt people to reconsider before sharing. It'll likely be impossible to stop deepfakes from having any impact, but prompt labeling like this could act as a significant deterrent.