
Pinterest Provides New Overview of its Evolving Scam Detection and Deactivation Processes

With malicious actors increasingly looking to use social media platforms to spread misinformation, run phishing scams, or pursue other nefarious ends, it’s important that every platform continues to evolve its detection processes, in order to protect users from exploitation and keep their feeds free of unwanted distractions, which in turn helps to maximize engagement.

This week, Pinterest provided an overview of its evolving efforts to detect and remove spam and malicious content, which details how it identifies problematic Pins at the domain level, based on where scammers and spammers are looking to send Pinners once they click through from on-platform content.

As explained by Pinterest:

“One tactic malicious actors enact is misusing a Pin’s image and linking to a malicious external website. Our models detect spam vectors, like Pin links, as well as users engaging in spammy behaviors. We quickly limit distribution of Pins with spam links and take direct action against users identified with a high confidence to be engaging in spammy behavior.”
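To make that concrete, here’s a minimal, hypothetical sketch of how a link check like this might gate a Pin’s distribution. The domain scores, threshold, and function names are illustrative assumptions, not Pinterest’s actual system:

```python
from urllib.parse import urlparse

# Illustrative spam scores per domain, as produced by an upstream
# classifier (assumed values, not real Pinterest data).
DOMAIN_SPAM_SCORES = {
    "legit-recipes.example": 0.02,
    "win-free-stuff.example": 0.97,
}

# Assumed cutoff above which a Pin's distribution would be limited.
LIMIT_THRESHOLD = 0.8

def should_limit_distribution(pin_link: str) -> bool:
    """Return True if the Pin's outbound link points at a likely spam domain."""
    domain = urlparse(pin_link).netloc.lower()
    score = DOMAIN_SPAM_SCORES.get(domain, 0.0)  # unknown domains pass through
    return score >= LIMIT_THRESHOLD

print(should_limit_distribution("https://win-free-stuff.example/free-iphone"))  # True
print(should_limit_distribution("https://legit-recipes.example/banana-bread"))  # False
```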

The visual nature of Pinterest, along with its increasing focus on shopping, which has changed how people use the app, makes it a less receptive platform for straightforward spam messaging. Even so, as Pinterest notes, scammers will still try to lure Pinners to their websites via misleading means.

Which is why Pinterest’s domain-level approach is a key deterrent: by focusing on detecting scam websites, Pinterest can systematically eliminate all links to each site, quickly deactivating those links and protecting users.

“We perform a manual review for those identified with low confidence to limit false positives, and we notify users of our actions to maintain transparency and also provide an option of appeal against our decision. To maximize impact, our model learns to classify a domain as spam rather than a link. We apply the same enforcement to all Pins with links belonging to the same domain.”
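As a rough illustration of that confidence-based routing, here’s a small, hypothetical sketch; the thresholds and helper functions are invented purely to show the flow Pinterest describes (automatic enforcement at high confidence, human review at low confidence, and a notification with an appeal option):

```python
manual_review_queue = []

def deactivate_link(pin_id: str) -> None:
    # Stand-in for the real per-Pin enforcement action (hypothetical).
    print(f"deactivated link on Pin {pin_id}")

def notify_owner(pin_id: str) -> None:
    # Stand-in for the transparency notice and appeal option described above.
    print(f"notified owner of Pin {pin_id}; appeal available")

def enforce_domain(domain: str, confidence: float, pins_by_domain: dict) -> str:
    """Apply one domain-level verdict to every Pin linking to that domain."""
    HIGH, LOW = 0.95, 0.60  # assumed confidence bands, not real thresholds
    if confidence >= HIGH:
        # High confidence: act on all Pins sharing the domain in one step.
        for pin_id in pins_by_domain.get(domain, []):
            deactivate_link(pin_id)
            notify_owner(pin_id)
        return "enforced"
    if confidence >= LOW:
        # Borderline call: route to a human to limit false positives.
        manual_review_queue.append(domain)
        return "queued_for_review"
    return "no_action"

pins = {"win-free-stuff.example": ["pin_1", "pin_2"]}
print(enforce_domain("win-free-stuff.example", 0.98, pins))  # -> enforced
```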

This is a good approach, and while not all platforms apply the same domain-level strategy, in Pinterest’s case it’s an effective way to respond more rapidly to such threats, and to provide more protection for users.


Pinterest also uses a similar process to detect problematic individuals.

“We use features created from user attributes and their past behaviors as inputs. We also use user-domain interaction, summarized as a domain scores distribution for each user where domain scores are reused from the spam domain model, as an input.”
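In other words, each user is partly represented by how spammy the domains they interact with tend to be. One plausible way to build such a feature is sketched below, assuming domain scores arrive as a simple mapping and are summarized as a normalized histogram (the histogram summary is my assumption, not a detail Pinterest has confirmed):

```python
import numpy as np

def user_domain_feature(user_domains: list, domain_scores: dict, bins: int = 5):
    """Summarize a user's domain interactions as a normalized histogram
    over domain spam scores, reusing scores from the domain model."""
    scores = [domain_scores.get(d, 0.0) for d in user_domains]
    hist, _ = np.histogram(scores, bins=bins, range=(0.0, 1.0))
    total = hist.sum()
    return hist / total if total else hist.astype(float)

# Example: a user who mostly interacts with high-scoring (spammy) domains.
domain_scores = {"a.example": 0.10, "b.example": 0.90, "c.example": 0.95}
print(user_domain_feature(["b.example", "c.example", "b.example"], domain_scores))
# -> [0. 0. 0. 0. 1.]  (all interactions fall in the top score bucket)
```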

So again, Pinterest leans on broader, domain-level signals to weed out potentially problematic individuals, then marks those accounts for possible enforcement.

“We cluster users on attributes which can successfully isolate suspicious groups with high accuracy. Experts identify these attributes by exploring the behavior of suspicious users and their use of resources for creating spammy content.”
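As a toy illustration of that kind of attribute-based clustering, the sketch below runs DBSCAN from scikit-learn over made-up attribute vectors (the attributes and values are assumptions, not Pinterest’s real features); the point is that near-identical, high-volume new accounts fall into one dense, suspicious cluster:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical per-user attributes: account age (days), Pins created
# per day, and share of links pointing at high-scoring spam domains.
users = np.array([
    [900.0, 2.0, 0.01],  # long-lived, low-volume, clean links
    [850.0, 1.5, 0.00],
    [3.0, 400.0, 0.95],  # brand-new, high-volume, spammy links
    [2.0, 380.0, 0.97],
    [4.0, 410.0, 0.93],
])

# DBSCAN groups densely packed points without a preset cluster count;
# in production you would standardize the features first.
labels = DBSCAN(eps=60.0, min_samples=2).fit_predict(users)
print(labels)  # e.g. [0 0 1 1 1]: the last three accounts form one suspicious group
```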


Through these expanded detection systems, Pinterest is able to take a more wide-reaching, blanket approach to eliminating spammy behavior, again better protecting users and the user experience by stamping out scam operations before they can have any real impact.

It seems like an effective approach to stopping Pin misuse for such activity, with broad-reaching enforcement by domain facilitating faster shutdowns of scam networks.

Of course, no process is perfect, and there are still examples of Pins being used to spread misinformation and other scams. But this system makes it much more difficult for scammers to operate, by quickly deactivating large swathes of Pins referring to problematic domains in one step.

You can read more about Pinterest’s evolving spam detection processes here.

Source: Socialmediatoday.com


UK teen died after ‘negative effects of online content’: coroner

Molly Russell was exposed to online material ‘that may have influenced her in a negative way’ – Copyright POOL/AFP/File Philip FONG

A 14-year-old British girl died from an act of self-harm while suffering from the “negative effects of online content”, a coroner said Friday in a case that shone a spotlight on social media companies.

Molly Russell was “exposed to material that may have influenced her in a negative way and, in addition, what had started as a depression had become a more serious depressive illness,” Andrew Walker ruled at North London Coroner’s Court.

The teenager “died from an act of self-harm while suffering depression”, he said, but added it would not be “safe” to conclude it was suicide.

Some of the content she viewed was “particularly graphic” and “normalised her condition,” said Walker.

Russell, from Harrow in northwest London, died in November 2017, leading her family to set up a campaign highlighting the dangers of social media.

“There are too many others similarly affected right now,” her father Ian Russell said after the ruling.


“At this point, I just want to say however dark it seems, there is always hope.

“I hope that this will be an important step in bringing about much needed change,” he added.

The week-long hearing became heated when the family’s lawyer, Oliver Sanders, took an Instagram executive to task.

A visibly angry Sanders asked Elizabeth Lagone, the head of health and wellbeing at Meta, Instagram’s parent company, why the platform allowed children to use it when it was “allowing people to put potentially harmful content on it”.

“You are not a parent, you are just a business in America. You have no right to do that. The children who are opening these accounts don’t have the capacity to consent to this,” he said.

Lagone apologised after being shown footage, viewed by Russell, that “violated our policies”.

Of the 16,300 posts Russell saved, shared or liked on Instagram in the six-month period before her death, 2,100 related to depression, self-harm or suicide, the inquest heard.

Children’s charity NSPCC said the ruling “must be a turning point”.


“Tech companies must be held accountable when they don’t make children’s safety a priority,” tweeted the charity.

“This must be a turning point,” it added, stressing that any delay to a government bill dealing with online safety “would be inconceivable to parents”.


