FACEBOOK

Facebook partially documents its content recommendation system

Algorithmic recommendation systems on social media sites like YouTube, Facebook and Twitter have shouldered much of the blame for the spread of misinformation, propaganda, hate speech, conspiracy theories and other harmful content. Facebook, in particular, has come under fire in recent days for allowing QAnon conspiracy groups to thrive on its platform and for helping militia groups scale their membership. Today, Facebook is attempting to combat claims that its recommendation systems are in any way at fault for how people are exposed to troubling, objectionable, dangerous, misleading and untruthful content.

The company has, for the first time, made public how its content recommendation guidelines work.

In new documentation available in Facebook’s Help Center and Instagram’s Help Center, the company details how Facebook and Instagram’s algorithms work to filter out content, accounts, Pages, Groups and Events from its recommendations.

Currently, Facebook’s Suggestions may appear as Pages You May Like, “Suggested For You” posts in News Feed, People You May Know, or Groups You Should Join. Instagram’s suggestions are found within Instagram Explore, Accounts You May Like and IGTV Discover.

The company says Facebook’s existing guidelines have been in place since 2016 under a strategy it references as “remove, reduce, and inform.” This strategy focuses on removing content that violates Facebook’s Community Standards, reducing the spread of problematic content that does not violate its standards, and informing people with additional information so they can choose what to click, read or share, Facebook explains.

The Recommendation Guidelines typically fall under Facebook’s efforts in the “reduce” area, and are designed to maintain a higher standard than Facebook’s Community Standards, because they push users to follow new accounts, groups, Pages and the like.

Facebook, in the new documentation, details five key categories that are not eligible for recommendations. Instagram’s guidelines are similar. However, the documentation offers no deep insight into how Facebook actually chooses what to recommend to a given user. That’s a key piece to understanding recommendation technology, and one Facebook intentionally left out.

One obvious category of content that may not be eligible for recommendation includes content that would impede Facebook’s “ability to foster a safe community,” such as content focused on self-harm, suicide, eating disorders, violence, sexually explicit material, regulated goods like tobacco or drugs, or content shared by non-recommendable accounts or entities.

Facebook also claims to not recommend sensitive or low-quality content, content users frequently say they dislike, and content associated with low-quality publishing. These categories include clickbait, deceptive business models, payday loans, products making exaggerated health claims or offering “miracle cures,” content promoting cosmetic procedures, contests, giveaways, engagement bait, unoriginal content stolen from another source, content from websites that get a disproportionate number of clicks from Facebook versus other places on the web, and news that doesn’t include transparent information about its authorship or staff.

In addition, Facebook claims it won’t recommend fake or misleading content, such as content making claims found false by independent fact-checkers, vaccine-related misinformation, and content promoting the use of fraudulent documents.

It says it will also “try” not to recommend accounts or entities that recently violated Community Standards, shared content Facebook tries not to recommend, posted vaccine-related misinformation, engaged in purchasing “Likes,” have been banned from running ads, posted false information, or are associated with movements tied to violence.

The latter claim, of course, follows recent news that a Kenosha militia Facebook Event remained on the platform after being flagged 455 times, having been cleared by four moderators as non-violating content. The associated Page had issued a “call to arms” and hosted comments from people asking what types of weapons to bring. Ultimately, two people were killed and a third was injured at protests in Kenosha, Wisconsin, when a 17-year-old armed with an AR-15-style rifle broke curfew, crossed state lines and shot at protesters.

Given Facebook’s track record, it’s worth asking how capable Facebook is of abiding by its own stated guidelines. Plenty of people have found their way to what should be ineligible content, such as conspiracy theories, dangerous health content and COVID-19 misinformation, by clicking through on suggestions at times when the guidelines failed. QAnon, it’s been reported, grew through Facebook recommendations.

It’s also worth noting that there are many gray areas that guidelines like these fail to cover.

Militia groups and conspiracy theories are only a couple of examples. Amid the pandemic, U.S. users who disagreed with government guidelines on business closures could easily find themselves pointed toward various “reopen” groups where members don’t just discuss politics, but openly brag about not wearing masks in public, even when required to do so at their workplaces. They offer tips on how to get away with not wearing masks, and celebrate their successes with selfies. These groups may not technically break rules by their descriptions alone, but they encourage behavior that constitutes a threat to public health.

Meanwhile, even if Facebook doesn’t directly recommend a group, a quick search for a topic will direct you to what would otherwise be ineligible content within Facebook’s recommendation system.

For instance, a quick search for the word “vaccines” currently suggests a number of groups focused on vaccine injuries, alternative cures and general anti-vax content; these even outnumber the pro-vaccine groups. At a time when the world’s scientists are trying to develop protection against the novel coronavirus in the form of a vaccine, handing anti-vaxxers a massive public forum to spread their ideas is just one example of how Facebook enables the spread of ideas that may ultimately become a global public health threat.

The more complicated question, however, is where Facebook draws the line between policing users having these discussions and fostering an environment that supports free speech. With few government regulations in place, Facebook ultimately gets to make this decision for itself.

Recommendations are only one part of Facebook’s overall engagement system, and the one most often blamed for directing users to harmful content. But much of the harmful content users find comes from the groups and Pages that show up at the top of Facebook search results when users turn to Facebook for general information on a topic. Facebook’s search engine favors engagement and activity, such as how many members a group has or how often users post, rather than how closely its content aligns with accepted truths or medical guidelines.
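To make the engagement-versus-accuracy tension concrete, here is a toy sketch of an engagement-weighted ranking function. This is an assumption for illustration only, not Facebook's actual search algorithm; the weights and fields are invented. The point it demonstrates is that a score built purely from activity signals says nothing about content quality.

```python
# Toy illustration (NOT Facebook's real algorithm) of engagement-based
# ranking: the score depends only on activity signals, so a highly
# active misinformation group can outrank a smaller, accurate one.

def engagement_score(group):
    # Arbitrary illustrative weights over activity signals.
    return group["members"] * 0.5 + group["posts_per_day"] * 10

groups = [
    {"name": "Evidence-based vaccine info", "members": 2000, "posts_per_day": 3},
    {"name": "Vaccine injury stories", "members": 5000, "posts_per_day": 40},
]

# Sort purely by engagement, highest first.
ranked = sorted(groups, key=engagement_score, reverse=True)
print([g["name"] for g in ranked])
# -> ['Vaccine injury stories', 'Evidence-based vaccine info']
```

Nothing in the score measures whether a group's content aligns with medical guidelines, which is precisely the gap the paragraph above describes.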

Facebook’s search algorithms, meanwhile, have not been documented in similar detail.

TechCrunch

FACEBOOK

Facebook fights disinformation, launches new options

Meta, the parent company of Facebook, has dismantled new malicious networks that used vaccine debates to harass professionals or sow division in some countries, a sign that disinformation about the pandemic, spread for political ends, is not on the wane.

“They insulted doctors, journalists and elected officials, calling them Nazi supporters because they were promoting vaccines against Covid, and claiming that compulsory vaccination would lead to a health dictatorship,” explained Mike Dvilyanski, director of investigations into emerging threats, at a press conference on Wednesday.

He was referring to a network linked to an anti-vaccination movement called “V_V,” which the Californian group accuses of carrying out a campaign of intimidation and mass harassment in Italy and France against health, media and political figures.

The authors of this operation coordinated in particular via the Telegram messaging app, where volunteers had access to lists of people to target and to “training” on how to avoid automatic detection by Facebook.

Their tactics included leaving comments under victims’ posts rather than publishing content of their own, and using slightly altered spellings such as “vaxcinati” instead of “vaccinati,” meaning “vaccinated people” in Italian.

The social media giant said it was difficult to assess the reach and impact of the campaign, which took place across different platforms.

This is a “psychological war” against people in favor of vaccines, according to Graphika, a company specializing in the analysis of social networks, which on Wednesday published a report on the “V_V” movement, whose name comes from the Italian verb “vivere” (“to live”).

“We have observed what appears to be a sprawling populist movement that combines existing conspiracy theories with anti-authoritarian narratives, and a torrent of health disinformation,” the experts detail.

They estimate that “V_V” brings together some 20,000 supporters, some of whom have taken part in acts of vandalism against hospitals and in operations to interfere with vaccination efforts, for example by making medical appointments without honoring them.

Change on Facebook

Facebook has also announced news that will facilitate buying and selling on the social network.

Mark Zuckerberg, the boss of Facebook, announced that the parent company would now be called Meta to better represent all of its activities, from social networks to virtual reality, though the names of the different services remain unchanged. A month later, Meta is already announcing news for the social network.

The first is the launch of online stores in Facebook groups. A “Shop” tab will appear and will allow members to buy products directly through the group in question.

Other features have been announced with the aim of facilitating e-commerce within the social network, such as the display of recommendations, clearer product mentions, and Live Shopping. At this time, no launch date has been announced for these new options.

In light of these recent features, the company also wants feedback from its users via a survey, and says it will have more to announce on this front sooner rather than later.

FACEBOOK

Facebook AI Hunts & Removes Harmful Content

Facebook announced a new AI technology that can rapidly identify harmful content in order to make Facebook safer. The new AI model uses “few-shot” learning to reduce the time for detecting new kinds of harmful content from months down to weeks.

Few-Shot Learning

Few-shot learning is similar to zero-shot learning: both are machine learning techniques whose goal is to teach a model to solve an unseen task by learning to generalize from instructions for solving related tasks.

Few-shot learning models are trained on only a handful of examples and from there are able to scale up and solve unseen tasks; in this case, the task is identifying new kinds of harmful content.
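The core idea can be illustrated with a minimal sketch, under heavy simplifying assumptions: this is not Facebook's system, and the 2-dimensional vectors below merely stand in for the text embeddings a real model would produce. It implements a nearest-centroid ("prototype") classifier, a standard way to frame few-shot classification: a few labeled examples per class define the task, and a query is assigned the label of the closest class prototype.

```python
# Minimal, illustrative few-shot classifier (nearest-centroid style).
# Toy vectors stand in for learned text embeddings; labels are invented.
import math

def centroid(vectors):
    """Average a list of equal-length vectors into one class prototype."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def few_shot_classify(query, support):
    """support maps each label to a few example vectors ("shots").
    The query gets the label of the most similar prototype."""
    prototypes = {label: centroid(examples) for label, examples in support.items()}
    return max(prototypes, key=lambda label: cosine(query, prototypes[label]))

# A few labeled examples per class are enough to define the task.
support = {
    "harmful": [[0.9, 0.1], [0.8, 0.2]],
    "benign":  [[0.1, 0.9], [0.2, 0.8]],
}
print(few_shot_classify([0.85, 0.15], support))  # -> harmful
```

The practical appeal is exactly what the article describes: adding a new category of content requires only a few new labeled examples, not months of collecting and labeling millions of them.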

The advantage of Facebook’s new AI model is that it speeds up the process of taking action against new kinds of harmful content.

The Facebook announcement stated:

“Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it.

But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.

…This new AI system uses a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.”

The new technology is effective in one hundred languages and works on both images and text.

Facebook’s new few-shot learning AI is meant as an addition to current methods for evaluating and removing harmful content.

Although it supplements current methods, it is not a small addition: the impact of the new AI is one of scale as well as speed.

“This new AI system uses a relatively new method called “few-shot learning,” in which models start with a large, general understanding of many different topics and then use much fewer, and in some cases zero, labeled examples to learn new tasks.

If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.”

New Facebook AI Live

Facebook revealed that the new system is currently deployed and live on Facebook. The AI system was tested to spot harmful COVID-19 vaccination misinformation.

It was also used to identify content that is meant to incite violence, or that simply walks up to the edge of doing so.

Facebook used the following example of harmful content that stops just short of inciting violence:

“Does that guy need all of his teeth?”

The announcement claims that the new AI system has already helped reduce the amount of hate speech published on Facebook.

Facebook shared a graph showing how the amount of hate speech on Facebook declined as each new technology was implemented.

Entailment Few-Shot Learning

Facebook calls its new technology Entailment Few-Shot Learning.

It has a remarkable ability to correctly label written text as hate speech. The associated research paper (Entailment as Few-Shot Learner, PDF) reports that it outperforms other few-shot learning techniques by up to 55%, and by 12% on average.

Facebook’s article about the research used this example:

“…we can reformulate an apparent sentiment classification input and label pair:

[x : “I love your ethnic group. JK. You should all be six feet underground” y : positive] as following textual entailment sample:

[x : I love your ethnic group. JK. You should all be 6 feet underground. This is hate speech. y : entailment].”
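The transformation quoted above can be sketched as a small data-preparation step. This is an illustrative reconstruction, not Facebook's code: the function name, label set, and label descriptions are invented for the example. The idea from the paper is that each (text, label) classification pair is recast as a premise plus a natural-language label description, with the target becoming "entailment" for the true label and "not entailment" for the others.

```python
# Illustrative sketch (NOT Facebook's code) of recasting a classification
# example as textual-entailment examples, per the idea in
# "Entailment as Few-Shot Learner". Names and labels are invented.

# Each class label gets a natural-language description (hypothesis).
LABEL_DESCRIPTIONS = {
    "hate_speech": "This is hate speech.",
    "benign": "This is benign content.",
}

def to_entailment_samples(text, true_label):
    """Turn one (text, label) pair into entailment training samples:
    the true label's description entails, the others do not."""
    samples = []
    for label, description in LABEL_DESCRIPTIONS.items():
        samples.append({
            "x": f"{text} {description}",
            "y": "entailment" if label == true_label else "not_entailment",
        })
    return samples

samples = to_entailment_samples(
    "I love your ethnic group. JK. You should all be six feet underground",
    "hate_speech",
)
for s in samples:
    print(s["y"], "|", s["x"])
```

Framing classification as entailment lets a single model, pretrained on entailment data, handle new categories just by writing a new label description, which is what makes the approach attractive in a few-shot setting.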

Facebook Working To Develop Humanlike AI

The announcement of this new technology made clear that the goal is humanlike “learning flexibility and efficiency” that will allow the system to evolve with trends and enforce new Facebook content policies in a short space of time, just like a human.

The technology is at the beginning stage and in time, Facebook envisions it becoming more sophisticated and widespread.

“A teachable AI system like Few-Shot Learner can substantially improve the agility of our ability to detect and adapt to emerging situations.

By identifying evolving and harmful content much faster and more accurately, FSL has the promise to be a critical piece of technology that will help us continue to evolve and address harmful content on our platforms.”

Citations

Read Facebook’s Announcement Of New AI

Our New AI System to Help Tackle Harmful Content

Article About Facebook’s New Technology

Harmful content can evolve quickly. Our new AI system adapts to tackle it

Read Facebook’s Research Paper

Entailment as Few-Shot Learner (PDF)

Searchenginejournal.com

FACEBOOK

New Facebook Groups Features For Building Strong Communities

Meta launches new features for Facebook Groups to improve communication between members, strengthen communities, and give admins more ways to customize the look and feel.

In addition, the company shares its vision for the future of communities on Facebook, which brings features from Groups and Pages together in one place.

Here’s an overview of everything that was announced at the recent Facebook Communities Summit.

More Options For Facebook Group Admins

Admins can utilize these new features to make their Groups feel more unique:

  • Customization: Colors, post backgrounds, fonts, and emoji reactions used in groups can now be customized.
  • Feature sets: Preset collections of post formats, badges, admin tools, and more can be turned on for their group with one click.
  • Preferred formats: Select formats you want members to use when they post in your group.
  • Greeting message: Create a unique message that all new members will see when they join a group.
(Screenshot from about.fb.com/news, November 2021.)

Stronger Connections For Members

Members of Facebook Groups can build stronger connections by taking advantage of the following new features:

  • Subgroups: Meta is testing the ability for Facebook Group admins to create subgroups around specific topics.
  • Community Chats: Communicate in real-time with other group members through Facebook or Messenger.
  • Recurring Events: Set up regular events for members to get together either online or in person.
  • Community Awards: Give virtual awards to other members to recognize valuable contributions.

New Ways To Manage Communities

New tools will make it easier for admins to manage their groups:

  • Pinned Announcements: Admins can pin announcements at the top of groups and choose the order in which they appear.
  • Personalized Suggestions: Admin Assist will now offer suggestions on criteria to add, and more info on why content is declined.
  • Internal Chats: Admins can now create group chats exclusively for themselves and other moderators.

Monetization & Fundraisers

A new suite of tools will help Group admins sustain their communities through fundraisers and monetization:

  • Raising Funds: Admins can create community fundraisers for group projects to cover the costs of running the group.
  • Selling Merchandise: Sell merchandise you’ve created by setting up a shop within your group.
  • Paid Memberships: Create paid subgroups that members can subscribe to for a fee.

Bringing Together Groups & Pages

Facebook is introducing a new experience that brings elements of Pages and Groups together in one place.

This will allow Group admins to use an official voice when interacting with their community.

Currently, when an admin posts to a Facebook Group, the post shows as published by the individual user behind the account.

When this new experience rolls out, posts from admins will show up as official announcements posted by the group, just as a post from a Facebook Page shows as published by the Page.

Admins of Facebook Pages will have the option to build their community in a single space if they prefer not to create a separate group. When this change rolls out, Page admins can utilize moderation tools accessible to Group admins.

This new experience will be tested over the next year before it’s available to everyone.

Source: Meta Newsroom


Featured Image: AlesiaKan/Shutterstock

Searchenginejournal
