

Instagram ‘pods’ game the algorithm by coordinating likes and comments on millions of posts



Researchers at NYU have identified hundreds of groups of Instagram users, some with thousands of members, that systematically exchange likes and comments in order to game the service’s algorithms and boost visibility. In the process, they also trained machine learning agents to identify whether a post has been juiced in this way.

“Pods,” as they’ve been dubbed, straddle the line between real and fake engagement, making them tricky to detect or take action against. And while they used to be a niche threat (and still are compared with fake account and bot activity), the practice is growing in volume and efficacy.

Pods are easily found via searching online, and some are open to the public. The most common venue for them is Telegram, as it’s more or less secure and has no limit to the number of people who can be in a channel. Posts linked in the pod are liked and commented on by others in the group, with the effect of those posts being far more likely to be spread widely by Instagram’s recommendation algorithms, boosting organic engagement.

Reciprocity as a service

The practice of groups mutually liking one another’s posts is called reciprocity abuse, and social networks are well aware of it, having removed setups of this type before. But the practice has never been studied or characterized in detail, the team from NYU’s Tandon School of Engineering explained.

“In the past they’ve probably been focused more on automated threats, like giving credentials to someone to use, or things done by bots,” said lead author of the study Rachel Greenstadt. “We paid attention to this because it’s a growing problem, and it’s harder to take measures against.”

On a small scale it doesn’t sound too threatening, but the study found nearly 2 million posts that had been manipulated by this method, with more than 100,000 users taking part in pods. And that’s just the ones in English, found using publicly available data. The paper describing the research was published in the Proceedings of the World Wide Web Conference and can be read here.


Importantly, the reciprocal liking does more than inflate apparent engagement. Posts submitted to pods got large numbers of artificial likes and comments, yes, but that activity deceived Instagram’s algorithm into promoting them further, leading to much more engagement even on posts not submitted to the pod.

When contacted for comment, Instagram initially said that this activity “violates our policies and we have numerous measures in place to stop it,” and said that the researchers had not collaborated with the company on the research.

In fact the team was in contact with Instagram’s abuse team from early on in the project, and it seems clear from the study that whatever measures are in place have not, at least in this context, had the desired effect. I pointed this out to the representative and will update this post if I hear back with any more information.

“It’s a grey area”

But don’t reach for the pitchforks just yet — the fact is this kind of activity is remarkably hard to detect, because really it’s identical in many ways to a group of friends or like-minded users engaging with each other’s content in exactly the way Instagram would like. And really, even classifying the behavior as abuse isn’t so simple.

“It’s a grey area, and I think people on Instagram think of it as a grey area,” said Greenstadt. “Where does it end? If you write an article and post it on social media and send it to friends, and they like it, and they sometimes do that for you, are you part of a pod? The issue here is not necessarily that people are doing this, but how the algorithm should treat this action, in terms of amplifying or not amplifying that content.”


Obviously if people are doing it systematically with thousands of users and even charging for access (as some groups do), that amounts to abuse. But drawing the line isn’t easy.

More important is that the line can’t be drawn unless you first define the behavior, which the researchers did by carefully inspecting the differences in patterns of likes and comments on pod-boosted and ordinary posts.

“They have different linguistic signatures,” explained co-author Janith Weerasinghe. “What words they use, the timing patterns.”

As you might expect, strangers obligated to comment on posts they don’t actually care about tend to use generic language, saying things like “nice pic” or “wow” rather than more personal remarks. Some groups actually warn against this, Weerasinghe said, but not many.

The list of top words used reads, predictably, like the comment section on any popular post, though perhaps that speaks to a more general lack of expressiveness on Instagram than anything else.

But statistical analysis of thousands of such posts, both pod-powered and normal, showed a distinctly higher prevalence of “generic support” comments, often showing up in a predictable pattern.

This data was used to train a machine learning model, which when set loose on posts it had never seen, was able to identify posts given the pod treatment with as high as 90% accuracy. This could help surface other pods — and make no mistake, this is only a small sample of what’s out there.
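The researchers’ classifier isn’t public, but the core idea — that pod comments lean on generic “support” language while organic comments are more specific — can be sketched with a toy Naive Bayes text classifier. Everything below, including the example comments and the two class names, is hypothetical and purely illustrative; the actual study used far richer linguistic and timing features.

```python
from collections import Counter
import math

# Hypothetical training data: pod-boosted posts attract generic comments
# ("nice pic", "wow"), while organic comments tend to be more personal.
TRAIN = [
    ("nice pic", "pod"), ("wow amazing", "pod"), ("great shot", "pod"),
    ("love it", "pod"), ("so cool", "pod"),
    ("remember when we hiked this trail last summer", "organic"),
    ("the lighting on the left ridge is stunning", "organic"),
    ("did you use the 35mm lens for this", "organic"),
    ("your dog photobombed again haha", "organic"),
]

def train(samples):
    """Count word frequencies and total word counts per class."""
    counts = {"pod": Counter(), "organic": Counter()}
    totals = Counter()
    for text, label in samples:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the higher add-one-smoothed log likelihood."""
    vocab = len(set(w for c in counts.values() for w in c))
    scores = {}
    for label in counts:
        score = 0.0
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (totals[label] + vocab))
        scores[label] = score
    return max(scores, key=scores.get)

counts, totals = train(TRAIN)
print(classify("wow nice pic", counts, totals))                       # → "pod"
print(classify("which lens did you use for this shot", counts, totals))  # → "organic"
```

A real system would also weigh timing patterns — the study notes pod comments arrive in predictable bursts — but even this word-level signal separates the two toy classes.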

“We got a pretty good sample for the time period of the easily accessible, easily findable pods,” said Greenstadt. “The big part of the ecosystem that we’re missing is pods that are smaller but more lucrative, that have to have a certain presence on social media already to join. We’re not influencers, so we couldn’t really measure that.”

The number of pods, and of the posts they manipulate, has grown steadily over the last two years. About 7,000 posts were found during March of 2017. A year later that number had jumped to nearly 55,000. March of 2019 saw over 100,000, and the number continued to increase through the end of the study’s data. It’s safe to say that pods are now posting over 4,000 times a day — and each one is getting a large amount of engagement, both artificial and organic. Pods now have 900 users on average, and some had over 10,000.


You may be thinking: “If a handful of academics using publicly available APIs and Google could figure this out, why hasn’t Instagram?”

As mentioned before, it’s possible the teams there have simply not considered this to be a major threat and consequently have not created policies or tools to prevent it. Rules proscribing using a “third party app or service to generate fake likes, follows, or comments” arguably don’t apply to these pods, since in many ways they’re identical to perfectly legitimate networks of users (though Instagram clarified that it considers pods as violating the rule). And certainly the threat from fake accounts and bots is of a larger scale.

And while it’s possible that pods could be used as a venue for state-sponsored disinformation or other political purposes, the team didn’t notice anything happening along those lines (though they were not looking for it specifically). So for now the stakes are still relatively small.

That said, Instagram clearly has access to data that would help to define and detect this kind of behavior, and its policies and algorithms could be changed to accommodate it. No doubt the NYU researchers would love to help.



Facebook fighting disinformation: new options launched



Meta, the parent company of Facebook, has dismantled new malicious networks that used vaccine debates to harass professionals or sow division in some countries, a sign that disinformation about the pandemic, spread for political ends, is not on the wane.

“They insulted doctors, journalists and elected officials, calling them supporters of the Nazis because they were promoting Covid vaccines, and asserting that compulsory vaccination would lead to a health dictatorship,” explained Mike Dvilyanski, director of investigations into emerging threats, at a press conference on Wednesday.

He was referring to a network linked to an anti-vaccination movement called “V_V”, which the Californian group accuses of having carried out a campaign of intimidation and mass harassment in Italy and France against figures in health, media and politics.

The authors of this operation coordinated in particular via the Telegram messaging system, where the volunteers had access to lists of people to target and to “training” to avoid automatic detection by Facebook.

Their tactics included leaving comments under victims’ posts rather than publishing content of their own, and using slightly altered spellings like “vaxcinati” instead of “vaccinati”, Italian for “vaccinated”.
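A moderation system hunting for evasive spellings like “vaxcinati” might rely on approximate string matching rather than exact keyword filters. The sketch below, built on Python’s standard `difflib` with a hypothetical watchlist and similarity threshold, flags near-miss variants; it is an illustration of the general technique, not Facebook’s actual detection method.

```python
import difflib

# Hypothetical watchlist of terms a moderation system might track.
WATCHLIST = ["vaccinati", "vaccine", "vaccination"]

def near_matches(word, watchlist, threshold=0.85):
    """Flag words whose similarity to a watched term meets the threshold,
    catching evasive variants like 'vaxcinati' for 'vaccinati'."""
    hits = []
    for term in watchlist:
        ratio = difflib.SequenceMatcher(None, word.lower(), term).ratio()
        if ratio >= threshold:
            hits.append((term, round(ratio, 2)))
    return hits

print(near_matches("vaxcinati", WATCHLIST))  # → [('vaccinati', 0.89)]
print(near_matches("pizza", WATCHLIST))      # → []
```

The threshold is the key tuning knob: too low and ordinary words get flagged, too high and a single swapped letter slips through. Production systems typically combine fuzzy matching like this with context signals rather than using it alone.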

The social media giant said it was difficult to assess the reach and impact of the campaign, which took place across different platforms.

This is a “psychological war” against people in favor of vaccines, according to Graphika, a company specializing in the analysis of social networks, which published a report Wednesday on the “V_V” movement, whose name comes from the Italian verb “vivere” (“to live”).

“We have observed what appears to be a sprawling populist movement that combines existing conspiracy theories with anti-authoritarian narratives, and a torrent of health disinformation,” its experts wrote.


They estimate that “V_V” brings together some 20,000 supporters, some of whom have taken part in acts of vandalism against hospitals and operations to interfere with vaccinations, by making medical appointments without honoring them, for example.

Change on Facebook

Facebook has announced new features that will make buying and selling on the social network easier.

Mark Zuckerberg, the boss of Facebook, announced that the parent company would now be called Meta, to better represent all of its activities, from social networks to virtual reality, though the names of the different services remain unchanged. A month later, Meta is already announcing new features for the social network.

The first is the launch of online stores in Facebook groups. A “Shop” tab will appear and will allow members to buy products directly through the group in question.

Other features aimed at facilitating e-commerce within the social network have also been announced, such as recommendations, better product tagging, and Live Shopping. At this time, no date has been given for the launch of these new options.

In light of these recent features, the company wants to gather feedback from its users. However, it has not yet announced how or when that feedback will be collected.



Facebook AI Hunts & Removes Harmful Content




Facebook announced a new AI technology that can rapidly identify harmful content in order to make Facebook safer. The new AI model uses “few-shot” learning to reduce the time for detecting new kinds of harmful content from months to weeks.

Few-Shot Learning

Few-shot learning is similar to zero-shot learning: both are machine learning techniques that aim to teach a model to solve tasks it has never seen by generalizing from instructions or a handful of examples.

Few-shot models are trained on only a few examples and from there are able to scale up to unseen tasks — in this case, identifying new kinds of harmful content.

The advantage of Facebook’s new AI model is that it speeds up the process of taking action against new kinds of harmful content.
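To make the few-shot idea concrete, here is a minimal sketch: a nearest-centroid classifier over bag-of-words vectors that learns each class from just three labeled examples. All of the example texts and class names below are invented for illustration; Facebook’s actual system is a large pretrained model, not a word-count toy, but the few-shot principle — generalizing from a handful of labeled samples — is the same.

```python
from collections import Counter
import math

def vectorize(text):
    """Bag-of-words vector represented as a word → count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(texts):
    """Average the word counts of a handful of labeled examples."""
    total = Counter()
    for t in texts:
        total.update(vectorize(t))
    return Counter({w: c / len(texts) for w, c in total.items()})

# Few-shot setup: only three (invented) labeled examples per class.
support = {
    "harmful": centroid([
        "this vaccine is poison do not take it",
        "they are hiding the truth the shot will kill you",
        "spread the word the cure is a hoax",
    ]),
    "benign": centroid([
        "got my second dose today feeling fine",
        "the clinic staff were friendly and quick",
        "booking an appointment was easy online",
    ]),
}

def classify(text, support):
    """Assign the class whose centroid is most similar to the input."""
    return max(support, key=lambda label: cosine(vectorize(text), support[label]))

print(classify("do not take the shot it is poison", support))        # → "harmful"
print(classify("booking my dose at the clinic was easy", support))   # → "benign"
```

Swapping the bag-of-words vectors for embeddings from a large pretrained model is what turns this toy into something closer to a real few-shot learner: the pretraining supplies the “general understanding of many different topics” that the quote below describes.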

The Facebook announcement stated:

“Harmful content continues to evolve rapidly — whether fueled by current events or by people looking for new ways to evade our systems — and it’s crucial for AI systems to evolve alongside it.

But it typically takes several months to collect and label thousands, if not millions, of examples necessary to train each individual AI system to spot a new type of content.

…This new AI system uses a method called “few-shot learning,” in which models start with a general understanding of many different topics and then use much fewer — or sometimes zero — labeled examples to learn new tasks.”

The new technology is effective in one hundred languages and works on both images and text.


Facebook’s new few-shot learning AI is meant as an addition to its current methods for evaluating and removing harmful content.

It is not a small addition, though: the impact of the new AI is one of scale as well as speed.

“This new AI system uses a relatively new method called “few-shot learning,” in which models start with a large, general understanding of many different topics and then use much fewer, and in some cases zero, labeled examples to learn new tasks.

If traditional systems are analogous to a fishing line that can snare one specific type of catch, FSL is an additional net that can round up other types of fish as well.”

New Facebook AI Live

Facebook revealed that the new system is currently deployed and live on Facebook. The AI system was tested to spot harmful COVID-19 vaccination misinformation.

It was also used to identify content that is meant to incite violence, or that merely walks up to the edge of doing so.

Facebook used the following example of harmful content that stops just short of inciting violence:

“Does that guy need all of his teeth?”

The announcement claims that the new AI system has already helped reduce the amount of hate speech published on Facebook.

Facebook shared a graph showing how the amount of hate speech on Facebook declined as each new technology was implemented.


Entailment Few-Shot Learning

Facebook calls its new technology Entailment Few-Shot Learning.

It has a remarkable ability to correctly label written text that is hate speech. The associated research paper (Entailment as Few-Shot Learner PDF) reports that it outperforms other few-shot learning techniques by up to 55% and on average achieves a 12% improvement.


Facebook’s article about the research used this example:

“…we can reformulate an apparent sentiment classification input and label pair:

[x : “I love your ethnic group. JK. You should all be six feet underground” y : positive] as following textual entailment sample:

[x : I love your ethnic group. JK. You should all be 6 feet underground. This is hate speech. y : entailment].”
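The reformulation in the quote above can be sketched as a simple data transformation: each (text, label) classification pair becomes a set of entailment pairs, in which the hypothesis derived from the true label is entailed and those from other labels are not. The label names and hypothesis sentences below are hypothetical stand-ins, not Facebook’s actual label set.

```python
# Hypothetical label descriptions that turn class names into
# natural-language hypotheses, per the entailment reformulation.
LABEL_HYPOTHESES = {
    "hate_speech": "This is hate speech.",
    "benign": "This is a harmless message.",
}

def to_entailment_samples(text, true_label):
    """Rewrite one classification example as entailment pairs: the true
    label's hypothesis is entailed, every other label's is not."""
    samples = []
    for label, hypothesis in LABEL_HYPOTHESES.items():
        samples.append({
            "premise": text,
            "hypothesis": hypothesis,
            "label": "entailment" if label == true_label else "not_entailment",
        })
    return samples

for s in to_entailment_samples(
        "I love your ethnic group. JK. You should all be six feet underground",
        "hate_speech"):
    print(s["hypothesis"], "->", s["label"])
```

The payoff of this framing is that a single entailment model can serve many classification tasks: adding a new content policy means writing a new hypothesis sentence, not collecting and labeling a new training set.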

Facebook Working To Develop Humanlike AI

The announcement of this new technology made it clear that the goal is a humanlike “learning flexibility and efficiency” that will allow it to evolve with trends and enforce new Facebook content policies within a short space of time, just like a human.

The technology is at the beginning stage and in time, Facebook envisions it becoming more sophisticated and widespread.

“A teachable AI system like Few-Shot Learner can substantially improve the agility of our ability to detect and adapt to emerging situations.

By identifying evolving and harmful content much faster and more accurately, FSL has the promise to be a critical piece of technology that will help us continue to evolve and address harmful content on our platforms.”


Read Facebook’s Announcement Of New AI

Our New AI System to Help Tackle Harmful Content

Article About Facebook’s New Technology

Harmful content can evolve quickly. Our new AI system adapts to tackle it

Read Facebook’s Research Paper

Entailment as Few-Shot Learner (PDF)



New Facebook Groups Features For Building Strong Communities



Meta launches new features for Facebook Groups to improve communication between members, strengthen communities, and give admins more ways to customize the look and feel.

In addition, the company shares its vision for the future of communities on Facebook, which brings features from Groups and Pages together in one place.

Here’s an overview of everything that was announced at the recent Facebook Communities Summit.

More Options For Facebook Group Admins

Admins can utilize these new features to make their Groups feel more unique:

  • Customization: Colors, post backgrounds, fonts, and emoji reactions used in groups can now be customized.
  • Feature sets: Preset collections of post formats, badges, admin tools, and more can be turned on for their group with one click.
  • Preferred formats: Select formats you want members to use when they post in your group.
  • Greeting message: Create a unique message that all new members will see when they join a group.
Facebook Groups new features. Screenshot, November 2021.

Stronger Connections For Members

Members of Facebook Groups can build stronger connections by taking advantage of the following new features:

  • Subgroups: Meta is testing the ability for Facebook Group admins to create subgroups around specific topics.
  • Community Chats: Communicate in real-time with other group members through Facebook or Messenger.
  • Recurring Events: Set up regular events for members to get together either online or in person.
  • Community Awards: Give virtual awards to other members to recognize valuable contributions.
Facebook Groups new features. Screenshot, November 2021.

New Ways To Manage Communities

New tools will make it easier for admins to manage their groups:

  • Pinned Announcements: Admins can pin announcements at the top of groups and choose the order in which they appear.
  • Personalized Suggestions: Admin Assist will now offer suggestions on criteria to add, and more info on why content is declined.
  • Internal Chats: Admins can now create group chats exclusively for themselves and other moderators.
Facebook Groups new features. Screenshot, November 2021.

Monetization & Fundraisers

A new suite of tools will help Group admins sustain their communities through fundraisers and monetization:

  • Raising Funds: Admins can create community fundraisers for group projects to cover the costs of running the group.
  • Selling Merchandise: Sell merchandise you’ve created by setting up a shop within your group.
  • Paid Memberships: Create paid subgroups that members can subscribe to for a fee.
Facebook Groups new features. Screenshot, November 2021.

Bringing Together Groups & Pages

Facebook is introducing a new experience that brings elements of Pages and Groups together in one place.


This will allow Group admins to use an official voice when interacting with their community.

Currently, when an admin posts to a Facebook Group, the post shows as published by the individual user behind the account.

When this new experience rolls out, posts from admins will show up as official announcements posted by the group, just as a post from a Facebook Page shows as published by the Page.

Admins of Facebook Pages will have the option to build their community in a single space if they prefer not to create a separate group. When this change rolls out, Page admins can utilize moderation tools accessible to Group admins.

This new experience will be tested over the next year before it’s available to everyone.

Source: Meta Newsroom

Featured Image: AlesiaKan/Shutterstock


