Facebook wants to be the arbiter of truth after all. At least when it comes to intentionally misleading deepfakes and heavily manipulated and/or synthesized media content, such as AI-generated photorealistic human faces that look like real people but aren’t.
In a policy update announced late yesterday, the social network’s VP of global policy management, Monika Bickert, writes that it will take a stricter line on manipulated media content from here on in — removing content that’s been edited or synthesized “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say”.
However, edits for quality, or cuts and splices to videos that simply curtail or change the order of words, are not covered by the ban.
Which means that disingenuous doctoring — such as this example from the recent UK General Election (where campaign staff for one political party edited a video of a politician from a rival party being asked a question about Brexit to make it look like he was lost for words when in fact he wasn’t) — will go entirely untouched by the new ‘tougher’ policy. Ergo there’s little here to trouble Internet-savvy political ‘truth’ spinners. The disingenuous digital campaigning can go on.
Instead of grappling with that sort of subtle political fakery, Facebook is focusing on quick PR wins — around the most obviously inauthentic stuff where it won’t risk accusations of partisan bias if it pulls bogus content.
Hence the new policy bans deepfake content that involves the use of AI technologies to “merge, replace or superimpose content onto a video, making it appear to be authentic” — which looks as if it will capture the crudest stuff, such as revenge deepfake porn which superimposes a real person’s face onto an adult performer’s body (albeit nudity is already banned on Facebook’s platform).
It’s not a blanket ban on deepfakes either, though — with some big carve outs for “parody or satire”.
So it’s a bit of an open question whether this deepfake video of Mark Zuckerberg, which went viral last summer — seemingly showing the Facebook founder speaking like a megalomaniac — would stay up or not under the new policy. The video’s creators, a pair of artists, described the work as satire so such stuff should survive the ban. (Facebook did also leave it up at the time.)
But, in future, deepfake creators are likely to further push the line to see what they can get away with under the new policy.
The social network’s controversial policy of letting politicians lie in ads also means it could, technically, still give pure political deepfakes a pass — i.e. if a political advertiser was paying it to run purely bogus content as an ad. Though it would be a pretty bold politician to try that.
More likely there’s more mileage for political campaigns and opinion influencers to keep on with more subtle manipulations. Such as the doctored video of House speaker Nancy Pelosi that went viral on Facebook last year, which had slowed down audio that made her sound drunk or ill. The Washington Post suggests that video — while clearly potentially misleading — still wouldn’t qualify to be taken down under Facebook’s new ‘tougher’ manipulated media policy.
Bickert’s blog post stipulates that manipulated content which doesn’t meet Facebook’s new standard for removal may still be reviewed by the independent third party fact-checkers Facebook relies upon for the lion’s share of ‘truth sifting’ on its platform — and who may still rate such content as ‘false’ or ‘partly false’. But she emphasizes it will continue to allow this type of bogus content to circulate (while potentially reducing its distribution), claiming such labelled fakes provide helpful context.
So Facebook’s updated position on manipulated media amounts to ‘no to malicious deepfakes, but spin doctors please carry on’.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false,” Bickert writes, claiming: “This approach is critical to our strategy and one we heard specifically from our conversations with experts.
“If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
The rise of AI-generated fake faces provides another motivation for the tech giant to ban ‘pure’ AI fakes, given the technology risks supercharging its fake accounts problem. (And, well, that could be bad for business.)
“Our teams continue to proactively hunt for fake accounts and other coordinated inauthentic behavior,” suggests Bickert, arguing that: “Our enforcement strategy against misleading manipulated media also benefits from our efforts to root out the people behind these efforts.”
While still relatively nascent as a technology, deepfakes have shown themselves to be catnip to the media which loves the spectacle they create. As a result, the tech has landed unusually quickly on legislators’ radars as a disinformation risk — California implemented a ban on political deepfakes around elections this fall, for example — so Facebook is likely hoping to score some quick and easy political points by moving in step with legislators even as it applies its own version of a ban.
Bickert’s blog post also fishes for further points, noting Facebook’s involvement in a Deep Fake Detection Challenge which was announced last fall — “to produce more research and open source tools to detect deepfakes”.
“As these partnerships and our own insights evolve, so too will our policies toward manipulated media. In the meantime, we’re committed to investing within Facebook and working with other stakeholders in this area to find solutions with real impact,” she adds.
TechCrunch is an American online publisher focusing on the tech industry. The company specifically reports on the business of tech, technology news, analysis of emerging trends in tech, and profiles of new tech businesses and products.
Facebook fighting disinformation: new options launched
Meta, the parent company of Facebook, has dismantled new malicious networks that used vaccine debates to harass professionals or sow division in some countries, a sign that disinformation about the pandemic, spread for political ends, is not on the wane.
“They insulted doctors, journalists and elected officials, calling them supporters of the Nazis because they were promoting Covid vaccines, and asserting that compulsory vaccination would lead to a health dictatorship,” explained Mike Dvilyanski, director of investigations into emerging threats, at a press conference on Wednesday.
He was referring to a network linked to an anti-vaccination movement called “V_V”, which the Californian group accuses of having carried out a campaign of intimidation and mass harassment in Italy and France against figures in health, media and politics.
The authors of this operation coordinated in particular via the Telegram messaging service, where volunteers had access to lists of people to target and to “training” in avoiding automatic detection by Facebook.
Their tactics included leaving comments under victims’ messages rather than posting content, and using slightly changed spellings like “vaxcinati” instead of “vaccinati”, meaning “people vaccinated” in Italian.
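Evasion through slightly altered spellings can often be caught with simple fuzzy string matching. The sketch below is purely illustrative (it is not Facebook’s actual detection system, and the `BLOCKLIST` term is a hypothetical watch-list entry taken from the example above): it flags a word whose similarity to a blocked term exceeds a threshold.

```python
# Illustrative sketch only, not Facebook's real system: catching slightly
# altered spellings such as "vaxcinati" for "vaccinati" with a similarity ratio.
from difflib import SequenceMatcher

BLOCKLIST = {"vaccinati"}  # hypothetical watch-list of exact terms

def is_variant(word: str, threshold: float = 0.85) -> bool:
    """Return True if `word` closely resembles any blocked term."""
    return any(
        SequenceMatcher(None, word.lower(), term).ratio() >= threshold
        for term in BLOCKLIST
    )

print(is_variant("vaxcinati"))  # → True: a one-letter swap still matches
print(is_variant("vacation"))   # → False: an unrelated word does not
```

Real moderation systems combine many more signals, but the basic idea is the same: near-matches to known terms are treated as candidates for review rather than requiring exact matches.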
The social media giant said it was difficult to assess the reach and impact of the campaign, which took place across different platforms.
This is a “psychological war” against people in favor of vaccines, according to Graphika, a company specializing in the analysis of social networks, which published a report Wednesday on the “V_V” movement, whose name comes from the Italian verb “vivere” (“to live”).
“We have observed what appears to be a sprawling populist movement that combines existing conspiracy theories with anti-authoritarian narratives, and a torrent of health disinformation,” the experts detail.
They estimate that “V_V” brings together some 20,000 supporters, some of whom have taken part in acts of vandalism against hospitals and in operations to interfere with vaccinations, for example by making medical appointments without honoring them.
Change on Facebook
Facebook has announced new features that will facilitate sales and purchases on the social network.
Mark Zuckerberg, the boss of Facebook, announced that the parent company would now be called Meta, to better represent all of its activities, from social networks to virtual reality, though the names of the different services will remain unchanged. A month later, Meta is already announcing new features for the social network.
The first is the launch of online stores in Facebook groups. A “Shop” tab will appear and will allow members to buy products directly through the group in question.
Other features have been announced with the aim of facilitating e-commerce within the social network, such as the display of recommendations, better highlighting of products, and Live Shopping. At this time, no launch date has been announced for these new options.
In light of these recent features, the company wants to gather feedback from its users through a survey, much as Tesco collects customer feedback via its Tesco Views survey. However, the company has yet to announce how and when this feedback will be collected.
Entireweb Articles – Read the latest articles and news from the search engine world!
Facebook AI Hunts & Removes Harmful Content
Facebook announced a new AI technology that can rapidly identify harmful content in order to make Facebook safer. The new AI model uses “few-shot” learning to reduce the time for detecting new kinds of harmful content from months to weeks.
Few-shot learning is similar to zero-shot learning. Both are machine learning techniques whose goal is to teach a model to solve an unseen task by learning to generalize from instructions for solving related tasks.
Few-shot learning models are trained on only a few examples and from there are able to scale up and solve unseen tasks; in this case, the task is to identify new kinds of harmful content.
The advantage of Facebook’s new AI model is to speed up the process of taking action against new kinds of harmful content.
The Facebook announcement stated:
“Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it.
But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.
…This new AI system uses a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.”
The new technology is effective in one hundred languages and works on both images and text.
Facebook’s new few-shot learning AI is meant as an addition to current methods for evaluating and removing harmful content.
It is not a small addition, either: the impact of the new AI is one of scale as well as speed.
“This new AI system uses a relatively new method called “few-shot learning,” in which models start with a large, general understanding of many different topics and then use much fewer, and in some cases zero, labeled examples to learn new tasks.
If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.”
New Facebook AI Live
Facebook revealed that the new system is currently deployed and live on Facebook. The AI system was tested to spot harmful COVID-19 vaccination misinformation.
It was also used to identify content that is meant to incite violence, or that simply walks up to the edge of doing so.
Facebook used the following example of harmful content that stops just short of inciting violence:
“Does that guy need all of his teeth?”
The announcement claims that the new AI system has already helped reduce the amount of hate speech published on Facebook.
Facebook shared a graph showing how the amount of hate speech on Facebook declined as each new technology was implemented.
Graph Shows Success Of Facebook Hate Speech Detection
Entailment Few-Shot Learning
Facebook calls its new technology Entailment Few-Shot Learning.
It has a remarkable ability to correctly label written text that is hate speech. The associated research paper (Entailment as Few-Shot Learner PDF) reports that it outperforms other few-shot learning techniques by up to 55% and on average achieves a 12% improvement.
Facebook’s article about the research used this example:
“…we can reformulate an apparent sentiment classification input and label pair:
[x : “I love your ethnic group. JK. You should all be six feet underground” y : positive] as following textual entailment sample:
[x : I love your ethnic group. JK. You should all be 6 feet underground. This is hate speech. y : entailment].”
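The reformulation quoted above can be sketched as a simple data transformation. The function below is an illustrative reconstruction, not code from the paper (the field names `x`/`y` mirror the example’s notation, and the hypothesis string is the one from the example): a (text, label) classification pair is rewritten so that the label becomes an entailment judgment about an appended label description.

```python
# Illustrative sketch of the "Entailment as Few-Shot Learner" reformulation,
# reconstructed from the quoted example (not code from the paper): turn a
# classification pair into a textual-entailment sample by appending a
# label-description hypothesis to the input.
def to_entailment_sample(text: str, is_hate_speech: bool) -> dict:
    """Rewrite a (text, label) classification example as an entailment example."""
    hypothesis = "This is hate speech."
    return {
        "x": f"{text} {hypothesis}",
        "y": "entailment" if is_hate_speech else "not entailment",
    }

sample = to_entailment_sample(
    "I love your ethnic group. JK. You should all be 6 feet underground.",
    is_hate_speech=True,
)
print(sample["y"])  # → entailment
```

The payoff of this reformulation is that a single pretrained entailment model can serve many classification tasks: adding a new task means writing a new hypothesis sentence rather than collecting a large labeled dataset.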
Facebook Working To Develop Humanlike AI
The announcement of this new technology made it clear that the goal is humanlike “learning flexibility and efficiency” that will allow it to evolve with trends and enforce new Facebook content policies in a short space of time, just as a human would.
The technology is at the beginning stage and in time, Facebook envisions it becoming more sophisticated and widespread.
“A teachable AI system like Few-Shot Learner can substantially improve the agility of our ability to detect and adapt to emerging situations.
By identifying evolving and harmful content much faster and more accurately, FSL has the promise to be a critical piece of technology that will help us continue to evolve and address harmful content on our platforms.”
Roger Montti is a search marketer with 20 years experience.
I offer site audits and link building strategies.
New Facebook Groups Features For Building Strong Communities
Meta launches new features for Facebook Groups to improve communication between members, strengthen communities, and give admins more ways to customize the look and feel.
In addition, the company shares its vision for the future of communities on Facebook, which brings features from Groups and Pages together in one place.
Here’s an overview of everything that was announced at the recent Facebook Communities Summit.
More Options For Facebook Group Admins
Admins can utilize these new features to make their Groups feel more unique:
- Customization: Colors, post backgrounds, fonts, and emoji reactions used in groups can now be customized.
- Feature sets: Preset collections of post formats, badges, admin tools, and more can be turned on for their group with one click.
- Preferred formats: Select formats you want members to use when they post in your group.
- Greeting message: Create a unique message that all new members will see when they join a group.
Stronger Connections For Members
Members of Facebook Groups can build stronger connections by taking advantage of the following new features:
- Subgroups: Meta is testing the ability for Facebook Group admins to create subgroups around specific topics.
- Community Chats: Communicate in real-time with other group members through Facebook or Messenger.
- Recurring Events: Set up regular events for members to get together either online or in person.
- Community Awards: Give virtual awards to other members to recognize valuable contributions.
New Ways To Manage Communities
New tools will make it easier for admins to manage their groups:
- Pinned Announcements: Admins can pin announcements at the top of groups and choose the order in which they appear.
- Personalized Suggestions: Admin Assist will now offer suggestions on criteria to add, and more info on why content is declined.
- Internal Chats: Admins can now create group chats exclusively for themselves and other moderators.
Monetization & Fundraisers
A new suite of tools will help Group admins sustain their communities through fundraisers and monetization:
- Raising Funds: Admins can create community fundraisers for group projects to cover the costs of running the group.
- Selling Merchandise: Sell merchandise you’ve created by setting up a shop within your group.
- Paid Memberships: Create paid subgroups that members can subscribe to for a fee.
Bringing Together Groups & Pages
Facebook is introducing a new experience that brings elements of Pages and Groups together in one place.
This will allow Group admins to use an official voice when interacting with their community.
Currently, when admins post to a Facebook Group, the post shows as published by the individual user behind the account.
When this new experience rolls out, posts from admins will show up as official announcements posted by the group, just as a post from a Facebook Page shows as published by the Page.
Admins of Facebook Pages will have the option to build their community in a single space if they prefer not to create a separate group. When this change rolls out, Page admins will be able to use the moderation tools currently available to Group admins.
This new experience will be tested over the next year before it’s available to everyone.
Source: Meta Newsroom
Featured Image: AlesiaKan/Shutterstock
Matt Southern has been the lead news writer at Search Engine Journal since 2013. With a degree in communications, Matt has an uncanny ability to make the most complex subject matter easy to understand. When he’s not ferociously following and covering the search industry, he’s busy writing SEO-friendly copy that converts.