Facebook Announces New Policy to Crack Down on Manipulated Media
With other social media platforms looking at how they might turn manipulated media, including deepfakes, into features, Facebook has announced the first iteration of its policy to stop the spread of misleading fake videos, part of its broader effort to pre-empt the potential rise of problematic deepfakes.
Facebook says that it’s been meeting with experts in the field to formulate its policy, including people with “technical, policy, media, legal, civic and academic backgrounds”.
As per Facebook:
“As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
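In plainer terms, this is a two-part test, with both parts needing to hold before a video is removed. As a rough illustration only (this is not Facebook’s code, and every name here is invented for the sketch), the test might be expressed like so:

```python
# Rough sketch of the two-part removal test as stated above - not actual
# Facebook code; every name here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Video:
    edited_beyond_clarity: bool           # edited/synthesized beyond clarity or quality tweaks
    edit_apparent_to_average_person: bool
    misleads_about_subjects_speech: bool  # viewer would think the subject said words they didn't
    produced_by_ai_ml: bool               # AI/ML merged, replaced or superimposed content
    appears_authentic: bool

def meets_removal_criteria(video: Video) -> bool:
    # Criterion 1: a non-obvious edit or synthesis, likely to mislead
    # viewers about what the subject actually said.
    misleading_edit = (video.edited_beyond_clarity
                       and not video.edit_apparent_to_average_person
                       and video.misleads_about_subjects_speech)
    # Criterion 2: the product of AI/ML, made to appear authentic.
    ai_product = video.produced_by_ai_ml and video.appears_authentic
    # Facebook joins the two criteria with "And": both must hold.
    return misleading_edit and ai_product
```

The “And” joining the two criteria is doing a lot of work here: a crudely edited hoax fails the first part, a conventionally edited one fails the second, so only AI-generated fakes that convincingly put words in someone’s mouth qualify for removal.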
Facebook says that its new policies do not extend to content which is parody or satire, “or video that has been edited solely to omit or change the order of words”. The latter may seem somewhat problematic, but this type of editing is already covered by Facebook’s existing rules, though Facebook also notes that:
“Videos which don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages. If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
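For material that fails the removal test but gets flagged by fact-checkers, the handling Facebook describes reduces reach rather than deleting. Here is a minimal sketch of that flow, assuming invented names and a placeholder down-ranking factor (Facebook hasn’t published the actual mechanics):

```python
# Invented sketch of the handling Facebook describes for fact-checked media
# that falls short of removal - not actual Facebook code or real factors.
from dataclasses import dataclass

@dataclass
class Post:
    feed_distribution: float = 1.0  # relative News Feed reach
    eligible_as_ad: bool = True
    warning_label: str = ""

def apply_fact_check(post: Post, rating: str) -> Post:
    if rating in ("false", "partly false"):
        # "Significantly reduce its distribution in News Feed"
        # (the 0.1 factor is a placeholder, not a published number).
        post.feed_distribution *= 0.1
        # "Reject it if it's being run as an ad."
        post.eligible_as_ad = False
        # Viewers and sharers "will see warnings alerting them that it's false."
        post.warning_label = f"Independent fact-checkers rated this content {rating}."
    return post
```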
So why doesn’t Facebook just remove these as well? If Facebook has the capacity to identify content as fake, and it’s reported as a violation, Facebook could remove all of it, deepfake or not, and eliminate the problem entirely.
But Facebook says that this approach could be counterproductive, because those same images and videos would still be available elsewhere online.
“This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
So Facebook is framing its decision not to remove some manipulated content as a civic duty, similar to its approach to political ads, which it won’t subject to fact-checking because:
“People should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.”
So it’s helping, it’s serving the public interest – and Facebook in no way benefits from hosting such content, and the subsequent engagement it generates, on its platform, as opposed to removing it, and then, potentially, seeing users migrate to some other social network in order to facilitate the same discussion. That’s got nothing to do with it. Purely to benefit the public.
Skepticism aside, deepfakes are clearly an area of concern for the major networks heading into 2020, with Twitter, Google and Facebook all running their own, independent research projects to establish the best ways to detect and remove such content. They’re not doing this for no reason. The emphasis on the potential dangers of deepfakes for manipulative messaging may well suggest that the platforms are seeing increased interest in this type of activity from bad actors, and that they’re working to head it off before it has a chance to cause problems.
Given the focus on misinformation since 2016, and the willingness of some to believe what they choose to, you can imagine that deepfakes could indeed be a major weapon for political activists. And worse, in many cases, even when a fake video has been proven false, it’s already too late. The damage has been done, the anger embedded, the opinion formed.
Case in point: one video has been circulating on Facebook for a few years, purportedly depicting a Muslim refugee smashing up a Christian statue in Italy with a hammer.
Except it’s not a Christian statue, he’s not a refugee, and the video wasn’t recorded in Italy. The actual incident occurred in 2017 in Algeria, a majority-Muslim nation, where the statue of a naked woman has long been a subject of religious debate.
This misleading framing of the video has been debunked, repeatedly, and reported. But it still resurfaces every now and then, sparking anti-Muslim sentiment, even though the details are completely false (one version alone was viewed more than 1.1 million times).
This video is not a deepfake, but as noted, even though people can scroll through the comments and find out that it’s false, and even though it’s been debunked over and over, it largely doesn’t matter. The social media news cycle moves fast, and sharing is easy. Most users view things like this once, take it at face value, pass it on, then move on to what’s next.
You can imagine the same pattern applying to deepfakes. What happens, for example, if someone posts a deepfake of Joe Biden saying something damning? Various obviously manipulated Biden videos are already circulating through Facebook’s network; a deepfake would likely gain traction very fast, probably too fast to rein in. Opinions solidified, responses felt.
You can see why, then, all the major players are working so hard to head off this next level of manipulation at the pass.
As noted, this also comes as TikTok is reportedly working on a new tool which will turn deepfakes into a feature, of sorts.
TikTok says that it has no plans to release the feature in markets outside of China; it’s actually being tested in Douyin, the Chinese version of the app. But given TikTok’s potential obligation to share its data with the Chinese Government, that could be even more concerning. To use the feature, people would need to provide a biometric face scan, which TikTok could then, theoretically, store on its servers.
The Chinese Government has the most sophisticated citizen surveillance network in the world, comprising more than 170 million CCTV cameras, the equivalent of one for every 12 people in the country. Many of these cameras are equipped with advanced facial recognition capabilities, and China has already used the system to identify Uighur Muslims, people who have evaded fines, and protesters in Hong Kong.
Imagine if it also had a database of TikTok users’ face scans, made available by this feature. You could argue that most adults have a driver’s license photo that could feed the system regardless, but only around 369 million Chinese people are registered to drive, out of 1.39 billion citizens, leaving roughly a billion people unaccounted for, and TikTok users can sign up from the age of 13. That’s a lot of valuable data.
Aside from the manipulation concerns around deepfakes, TikTok may have also found a new issue to contend with (note: TikTok has said that the functionality, which has not been approved for release, would only be available to older users).
In summary, deepfakes could become a major problem, on several fronts, which is why Facebook is putting in the work now to stop the next major misinformation trend.
As Ben Smith, the Editor-in-Chief of BuzzFeed, noted recently:
“I think the media is totally prepared not to repeat the mistakes of the last [election] cycle… but I’m sure we will **** it up in some new way we aren’t expecting.”
Could deepfakes be the thing that throws the next election cycle off balance?
Definitely an element to watch in 2020.