
FACEBOOK

3 reforms social media platforms should make in light of ‘The Social Dilemma’


“The Social Dilemma” is opening eyes and changing digital lives for Netflix bingers across the globe. The filmmakers explore social media and its effects on society, raising some crucial points about impacts on mental health, politics and the myriad ways firms leverage user data. It interweaves interviews from industry executives and developers who discuss how social sites can manipulate human psychology to drive deeper engagement and time spent within the platforms.

Despite the glaring issues present with social media platforms, people still crave digital attention, especially during a pandemic, when in-person connections are strained if not impossible.

So, how can the industry change for the better? Here are three ways social media should adapt to create happier and healthier interpersonal connections and news consumption.

Stop censoring

On most platforms, like Facebook and Instagram, the company determines some of the information presented to users. This opens the platform to manipulation by bad actors and raises questions about who exactly is dictating what information is seen and what is not. What are the motivations behind those decisions? Some platforms even dispute their role in this process, with Mark Zuckerberg saying in 2019, “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online.”

Censorship concerns can be addressed by restructuring the social platform itself. For example, consider a platform that does not rely on advertiser dollars. If a social platform is free for basic users but monetized by a subscription model, there is no need to use an information-gathering algorithm to determine which news and content are served to users.


This type of platform is not a ripe target for manipulation because users only see information from people they know and trust, not advertisers or random third parties. Manipulation on major social channels happens frequently when people create zombie accounts that flood content with fake “likes” and “views” to artificially inflate how that content is ranked and displayed. It’s commonly exposed as a tactic for election meddling, where agents use social media to promote false statements. This type of action exploits a fundamental flaw of social algorithms that use AI to decide what to censor as well as what to promote.

Don’t treat users like products

The issues raised by “The Social Dilemma” should reinforce the need for social platforms to self-regulate their content and user dynamics and operate ethically. They should review their most manipulative technologies that cause isolation, depression and other issues and instead find ways to promote community, progressive action and other positive attributes.

A major change required to bring this about is to eliminate or reduce in-platform advertising. An ad-free model means the platform does not need to aggressively push unsolicited content from unsolicited sources. When ads are the main driver for a platform, then the social company has a vested interest in using every psychological and algorithm-based trick to keep the user on the platform. It’s a numbers game that puts profit over users.

More users multiplied by more time on the site equals more ad exposure and ad engagement, and that means revenue. An ad-free model frees a platform from trying to elicit emotional responses based on a user’s past actions, all to keep them trapped on the site, perhaps to an addictive degree.


Encourage connections without clickbait

A common form of clickbait is found on the typical social search page. A user clicks on an image or preview video that suggests a certain type of content, but upon clicking they are brought to unrelated content. It’s a technique that can be used to spread misinformation, which is especially dangerous for viewers who rely on social platforms for their news consumption instead of traditional outlets. According to the Pew Research Center, 55% of adults get their news from social media “often” or “sometimes.” This becomes a significant problem when clickbait makes it easier to spread distorted “fake news” stories.

Unfortunately, when users engage with clickbait content, they are effectively “voting” for that information. That seemingly innocuous action creates a financial incentive for others to create and disseminate further clickbait. Social media platforms should aggressively ban or limit clickbait. Management at Facebook and other firms often counter with a “free speech” argument when it comes to stopping clickbait. However, the intent is not to act as a censor of controversial topics but to protect users from false content. It’s about cultivating trust and information sharing, which is much easier to accomplish when post content is backed by facts.

“The Social Dilemma” is rightfully an important film that encourages a vital dialogue about the role social media and social platforms play in everyday life. The industry needs to change to create more engaged and genuine spaces for people to connect without preying on human psychology.

A tall order, but one that should benefit both users and platforms in the long term. Social media still creates important digital connections and functions as a catalyst for positive change and discussion. It’s time for platforms to take note and take responsibility for these needed changes; opportunities will arise for smaller, emerging platforms that take a different, less manipulative approach.


TechCrunch

FACEBOOK

Facebook Fights Disinformation: New Options Launched


Meta, the parent company of Facebook, has dismantled new malicious networks that used vaccine debates to harass professionals or sow division in some countries, a sign that pandemic disinformation spread for political ends is not on the wane.

“They insulted doctors, journalists and elected officials, calling them Nazi supporters because they were promoting vaccines against Covid, and claiming that compulsory vaccination would lead to a health dictatorship,” explained Mike Dvilyanski, director of investigations into emerging threats, at a press conference on Wednesday.

He was referring to a network linked to an anti-vaccination movement called “V_V”, which the Californian group accuses of having carried out a campaign of intimidation and mass harassment in Italy and France against health figures, media outlets and politicians.

The authors of this operation coordinated in particular via the Telegram messaging system, where the volunteers had access to lists of people to target and to “training” to avoid automatic detection by Facebook.

Their tactics included leaving comments under victims’ messages rather than posting content, and using slightly changed spellings like “vaxcinati” instead of “vaccinati”, meaning “people vaccinated” in Italian.

The social media giant said it was difficult to assess the reach and impact of the campaign, which took place across different platforms.

This is a “psychological war” against people in favor of vaccines, according to Graphika, a company specializing in the analysis of social networks, which published Wednesday a report on the movement “V_V”, whose name comes from the Italian verb “vivere” (“to live”).

“We have observed what appears to be a sprawling populist movement that combines existing conspiracy theories with anti-authoritarian narratives, and a torrent of health disinformation,” the experts write.


They estimate that “V_V” brings together some 20,000 supporters, some of whom have taken part in acts of vandalism against hospitals and operations to interfere with vaccinations, by making medical appointments without honoring them, for example.

Change on Facebook

Facebook has announced new features that will make it easier to buy and sell on the social network.

Mark Zuckerberg, the boss of Facebook, announced that the parent company would now be called Meta, to better represent all of its activities, from social networks to virtual reality, though the names of the different services will remain unchanged. A month later, Meta is already announcing new features for the social network.

The first is the launch of online stores in Facebook groups. A “Shop” tab will appear and will allow members to buy products directly through the group in question.

Other features have been announced with the aim of facilitating e-commerce within the social network, such as the display of recommendations, better product tagging, and Live Shopping. At this time, no date has been announced for the launch of these new options.

In light of these recent features, the company says it wants to gather feedback from its users through a survey, and will announce more in this regard sooner rather than later.


FACEBOOK

Facebook AI Hunts & Removes Harmful Content



Facebook announced a new AI technology that can rapidly identify harmful content in order to make Facebook safer. The new AI model uses “few-shot” learning to reduce the time for detecting new kinds of harmful content from months to weeks.

Few-Shot Learning

Few-shot learning is similar to zero-shot learning: both are machine learning techniques that aim to teach a model to solve an unseen task by generalizing from how related tasks are solved.

Few-shot learning models are trained on only a handful of examples and from there are able to scale up and solve unseen tasks; in this case, the task is identifying new kinds of harmful content.

The advantage of Facebook’s new AI model is to speed up the process of taking action against new kinds of harmful content.
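As a rough illustration of the few-shot idea, the sketch below classifies a new message by comparing it to class “prototypes” averaged from a handful of labeled examples. The toy bag-of-words embedding, the example messages, and the labels are all invented for illustration; Facebook’s actual system builds on large pretrained language models, not anything this simple.

```python
from collections import Counter
import math

def embed(text):
    # Toy embedding: L2-normalized bag-of-words counts.
    # A real few-shot system would use a pretrained language model instead.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(v * v for v in counts.values()))
    return {w: v / norm for w, v in counts.items()}

def cosine(a, b):
    return sum(v * b.get(w, 0.0) for w, v in a.items())

def prototype(examples):
    # Average the embeddings of the few labeled examples per class.
    proto = Counter()
    for text in examples:
        for w, v in embed(text).items():
            proto[w] += v / len(examples)
    return dict(proto)

def classify(text, protos):
    # Assign the label whose prototype is nearest in cosine similarity.
    emb = embed(text)
    return max(protos, key=lambda label: cosine(emb, protos[label]))

# A handful of labeled examples per class: the "few shots".
protos = {
    "harmful": prototype(["you people should disappear", "they deserve harm"]),
    "benign": prototype(["have a great day everyone", "lovely photo of the beach"]),
}
print(classify("you all deserve harm", protos))  # -> harmful
```

The point of the sketch is that adding a new category only requires a few labeled examples to build a new prototype, rather than retraining on thousands of samples.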

The Facebook announcement stated:

“Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it.

But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.

…This new AI system uses a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.”

The new technology is effective across one hundred languages and works on both images and text.


Facebook’s new few-shot learning AI is meant as an addition to current methods for evaluating and removing harmful content.

Although it is an addition to existing methods, it is not a small one: the impact of the new AI is one of scale as well as speed.

“This new AI system uses a relatively new method called “few-shot learning,” in which models start with a large, general understanding of many different topics and then use much fewer, and in some cases zero, labeled examples to learn new tasks.

If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.”

New Facebook AI Live

Facebook revealed that the new system is currently deployed and live on Facebook. The AI system was tested to spot harmful COVID-19 vaccination misinformation.

It was also used to identify content that is meant to incite violence, or that walks right up to that edge.

Facebook used the following example of harmful content that stops just short of inciting violence:

“Does that guy need all of his teeth?”

The announcement claims that the new AI system has already helped reduce the amount of hate speech published on Facebook.

Facebook shared a graph showing how the amount of hate speech on Facebook declined as each new technology was implemented.

Graph Shows Success Of Facebook Hate Speech Detection


Entailment Few-Shot Learning

Facebook calls its new technology Entailment Few-Shot Learning.

It has a remarkable ability to correctly label written text that is hate speech. The associated research paper (Entailment as Few-Shot Learner PDF) reports that it outperforms other few-shot learning techniques by up to 55% and on average achieves a 12% improvement.


Facebook’s article about the research used this example:

“…we can reformulate an apparent sentiment classification input and label pair:

[x : “I love your ethnic group. JK. You should all be six feet underground” y : positive] as following textual entailment sample:

[x : I love your ethnic group. JK. You should all be 6 feet underground. This is hate speech. y : entailment].”
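The reformulation quoted above can be sketched in a few lines: each (text, label) classification pair becomes one entailment pair per candidate label, where the hypothesis is a natural-language description of that label. The `to_entailment_samples` helper and the `label_descriptions` mapping are hypothetical names invented for this sketch, not part of Facebook’s code.

```python
def to_entailment_samples(text, label, label_descriptions):
    # Turn one (text, label) classification pair into entailment pairs:
    # the matching label description is "entailment", all others are not.
    samples = []
    for name, description in label_descriptions.items():
        samples.append({
            "premise": text,
            "hypothesis": description,
            "label": "entailment" if name == label else "not_entailment",
        })
    return samples

# Hypothetical label descriptions, phrased as hypotheses to test.
descriptions = {
    "hate_speech": "This is hate speech.",
    "benign": "This is a friendly message.",
}

samples = to_entailment_samples(
    "I love your ethnic group. JK. You should all be six feet underground",
    "hate_speech",
    descriptions,
)
for s in samples:
    print(s["hypothesis"], "->", s["label"])
```

Framed this way, a single entailment model pretrained on natural language inference data can handle many classification tasks, because adding a task only means writing a new hypothesis sentence.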

Facebook Working To Develop Humanlike AI

The announcement of this new technology made it clear that the goal is a humanlike “learning flexibility and efficiency” that will allow it to evolve with trends and enforce new Facebook content policies in a short space of time, just like a human.

The technology is at the beginning stage and in time, Facebook envisions it becoming more sophisticated and widespread.

“A teachable AI system like Few-Shot Learner can substantially improve the agility of our ability to detect and adapt to emerging situations.

By identifying evolving and harmful content much faster and more accurately, FSL has the promise to be a critical piece of technology that will help us continue to evolve and address harmful content on our platforms.”

Citations

Read Facebook’s Announcement Of New AI

Our New AI System to Help Tackle Harmful Content

Article About Facebook’s New Technology

Harmful content can evolve quickly. Our new AI system adapts to tackle it

Read Facebook’s Research Paper

Entailment as Few-Shot Learner (PDF)

Searchenginejournal.com


FACEBOOK

New Facebook Groups Features For Building Strong Communities


Meta launches new features for Facebook Groups to improve communication between members, strengthen communities, and give admins more ways to customize the look and feel.

In addition, the company shares its vision for the future of communities on Facebook, which brings features from Groups and Pages together in one place.

Here’s an overview of everything that was announced at the recent Facebook Communities Summit.

More Options For Facebook Group Admins

Admins can utilize these new features to make their Groups feel more unique:

  • Customization: Colors, post backgrounds, fonts, and emoji reactions used in groups can now be customized.
  • Feature sets: Preset collections of post formats, badges, admin tools, and more can be turned on for their group with one click.
  • Preferred formats: Select formats you want members to use when they post in your group.
  • Greeting message: Create a unique message that all new members will see when they join a group.
Screenshot from about.fb.com/news, November 2021.

Stronger Connections For Members

Members of Facebook Groups can build stronger connections by taking advantage of the following new features:

  • Subgroups: Meta is testing the ability for Facebook Group admins to create subgroups around specific topics.
  • Community Chats: Communicate in real-time with other group members through Facebook or Messenger.
  • Recurring Events: Set up regular events for members to get together either online or in person.
  • Community Awards: Give virtual awards to other members to recognize valuable contributions.
Screenshot from about.fb.com/news, November 2021.

New Ways To Manage Communities

New tools will make it easier for admins to manage their groups:

  • Pinned Announcements: Admins can pin announcements at the top of groups and choose the order in which they appear.
  • Personalized Suggestions: Admin Assist will now offer suggestions on criteria to add, and more info on why content is declined.
  • Internal Chats: Admins can now create group chats exclusively for themselves and other moderators.
Screenshot from about.fb.com/news, November 2021.

Monetization & Fundraisers

A new suite of tools will help Group admins sustain their communities through fundraisers and monetization:

  • Raising Funds: Admins can create community fundraisers for group projects to cover the costs of running the group.
  • Selling Merchandise: Sell merchandise you’ve created by setting up a shop within your group.
  • Paid Memberships: Create paid subgroups that members can subscribe to for a fee.
Screenshot from about.fb.com/news, November 2021.

Bringing Together Groups & Pages

Facebook is introducing a new experience that brings elements of Pages and Groups together in one place.


This will allow Group admins to use an official voice when interacting with their community.

Currently, when admins post to a Facebook Group, the post shows as published by the individual user behind the account.

When this new experience rolls out, posts from admins will show up as official announcements posted by the group, just like a post from a Facebook Page shows that it’s published by the Page.

Admins of Facebook Pages will have the option to build their community in a single space if they prefer not to create a separate group. When this change rolls out, Page admins can utilize moderation tools accessible to Group admins.

This new experience will be tested over the next year before it’s available to everyone.

Source: Meta Newsroom



Searchenginejournal

