Meta Releases New ‘Widely Viewed Content’ Report for Facebook, Which Continues to be a Baffling Overview



It’s safe to say that Meta’s efforts to refute the idea that Facebook amplifies divisive political content are not going exactly as it would have hoped.

As a quick recap, last year, Meta published its first-ever ‘Widely Viewed Content’ report for Facebook, launched largely in response to this Twitter account, created by New York Times journalist Kevin Roose, which highlights the most popular Facebook posts each day, based on listings from Facebook’s own CrowdTangle monitoring platform.

The listings are regularly dominated by right-wing spokespeople and Pages, which gives the impression that Facebook amplifies this type of content specifically, via its algorithms.

Understandably, Facebook was unhappy with this characterization, so first, it disbanded the CrowdTangle team after a dispute over what content the app should display. Then it launched its own, more favorable report, based on what it considers to be more indicative data, which it vowed to share each quarter moving forward, as a transparency measure.

Which sounds good in principle – more insight into what’s actually happening is always welcome. Yet the actual report doesn’t really clarify or refute all that much.


For example, Facebook includes this chart in each of the Widely Viewed Content reports, to show that news content really isn’t that big of a deal in the app.

So posts from friends and family are the most prominent – which doesn’t really tell you much, because those posts could, of course, be shares of content from news Pages, or opinions on the news of the day, based on publisher content.

Which is the real focus of the report – in the first Widely Viewed Content report, Meta showed that it wasn’t actually news content that was getting the most traction in the app, but rather spam, junk and recipes that were seeing the most exposure.

Meta’s latest Widely Viewed Content report, released today, shows similar results – with one particularly notable exception:

[Image: Facebook Widely Viewed Content report – most viewed Pages]

Note the issue here?

The first listed Page here – the most viewed Facebook Page for the quarter, in a report that Meta is using to show that its platform isn’t a negative influence – has actually been banned by Meta itself for violating its Community Standards.

That’s not a great look – while the rest of the listings in the report, once again, highlight that spam, junk and random Pages (a tyre lettering company, letters to Santa via UPS) also gained major traction throughout the period.

Really, this latest report further underlines concerns with Facebook’s distribution, as a Page that Meta itself identified as sharing questionable posts, for whatever reason (it won’t clarify the details), gained huge traction in the app before it was eventually shut down.

Worth also noting that this report covers a three-month period (in this case, October 1, 2021 to December 31, 2021), which makes news content less likely to be listed anyway – the news cycle moves quickly, and even major stories typically only see heavy traction on a given day, limiting their cumulative views over a full quarter.


You could argue, then, that if the same right-wing news outlets that are regularly highlighted in Roose’s Daily Top 10 list were actually indicative of broader Facebook sharing trends, they’d show up in this list.

[Image: Facebook Widely Viewed Content report – top viewed domains]

But for one, many of these Facebook Pages share YouTube links, and we don’t have context on the specifics of that referral traffic (YouTube being the top domain source), while it’s also questionable how many users actually click on the links shared by each Page.

Often, the headline is enough to spark outrage and debate, with the comment sections going crazy with responses, without users actually reading the post.

If somebody shares a post with a divisive headline, is its capacity for division diminished if people don’t actually click through to read it?

Basically, there are a lot of gaps in the logic Meta’s using here, which leaves a lot of room for interpretation. And really, it’s impossible to argue that Facebook’s algorithm doesn’t incentivize divisive, argumentative posts, because its system does indeed look to fuel engagement, keeping users interacting as a means of keeping them in the app.

What fuels engagement online? Emotionally charged posts, with anger and joy being among the most highly shareable emotions. As any social media marketer knows, trigger these responses in your audience and you’ll generate engagement, because more emotional pull means more comments, more reactions – and in Facebook’s case, more reach, because the algorithm will give your content more exposure based on that activity.
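To illustrate the basic mechanic, here’s a minimal sketch of an engagement-driven feed ranker. To be clear, this is hypothetical – the `Post` fields and weights are invented for illustration, not Meta’s actual code – but it captures the incentive being described:

```python
# Hypothetical sketch of engagement-based ranking - illustrative only,
# not Meta's actual algorithm; signal names and weights are invented.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    comments: int
    reactions: int
    shares: int

def engagement_score(post: Post) -> float:
    # Comments and shares are stronger engagement signals than passive
    # reactions, so an engagement-driven ranker weights them more heavily.
    return post.comments * 3.0 + post.shares * 2.0 + post.reactions * 1.0

def rank_feed(posts: list[Post]) -> list[Post]:
    # More engagement -> more distribution -> more engagement: the feedback
    # loop that rewards emotionally charged, reaction-baiting content.
    return sorted(posts, key=engagement_score, reverse=True)
```

Under any scoring of this shape, a post that sparks a flood of angry comments will outrank one that people quietly read and scroll past.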

It makes sense, then, that Facebook has helped to fuel a whole industry of emotion-charged takes, in the battle for audience attention – and the subsequent ad dollars that this increased exposure can bring.

People have often pinned social media, in general, as the key element that’s sparked greater societal division, and there’s an argument for that as well, in terms of having more exposure to everyone’s thoughts on every issue. But the algorithmic incentive, the dopamine rush of Likes and comments, the buzz of notifications – all of these elements play into the more partisan media landscape, and the impetus to share increasingly incendiary takes.

Take the biggest issue of the day, come up with the worst take you can on it. Then press ‘Post’. Like it or not, that’s now an effective strategy in many cases, and honestly, it’s pretty ridiculous the lengths that Meta continues to go to in order to try and suggest that this isn’t the case.


Either way, that is the direction that Meta has taken, and its Widely Viewed Content reports continue to show, essentially, that the time people spend on Facebook is mostly spent on mindless junk.

But mindless rubbish is better than divisive misinformation. That’s better.

Right?

Honestly, I don’t know, but I do know that this report is doing Meta no favors in terms of overall perception.

You can view Meta’s ‘Widely Viewed Content’ report for Q4 2021 here.





Twitter Faces Advertiser Boycott Due to Failures to Police Child Abuse Material


Twitter’s no good, very bad year continues, with the company this week being forced to inform some advertisers that their ads had been displayed in the app alongside tweets soliciting child pornography and other abuse material.

As reported by Reuters:

“Brands ranging from Walt Disney, NBCUniversal and Coca-Cola, to a children’s hospital, were among some 30 advertisers that have appeared on the profile pages of Twitter accounts that peddle links to the exploitative material.”

The discovery was made by cybersecurity group Ghost Data, which worked with Reuters to uncover the ad placement concerns, dealing another big blow to the app’s ongoing business prospects.

Already in a state of disarray amid the ongoing Elon Musk takeover saga, and following recent revelations from its former security chief that it’s lax on data security and other measures, Twitter’s now also facing an advertiser exodus, with big brands including Dyson, Mazda and Ecolab suspending their Twitter campaigns in response.

Which, really, is the least concerning element of the discovery, with the Ghost Data report also identifying more than 500 accounts that openly shared or requested child sexual abuse material over a 20-day period.

Ghost Data says that Twitter failed to remove more than 70% of the accounts during the time of the study.


The findings raise further questions about Twitter’s inability, or unwillingness, to address potentially harmful material, with The Verge reporting late last month that Twitter ‘cannot accurately detect child sexual exploitation and non-consensual nudity at scale’.

That finding stemmed from an investigation into Twitter’s proposed plan to give adult content creators the ability to begin selling OnlyFans-style paid subscriptions in the app.

Rather than working to address the abundance of pornographic material on the platform, Twitter instead considered leaning into it – which would undoubtedly raise the risk factor for advertisers who do not want their promotions to appear alongside potentially offensive tweets.

Which is likely happening, at an even greater scale than this new report suggests, because Twitter’s own internal investigation into its OnlyFans-esque proposal found that:

“Twitter could not safely allow adult creators to sell subscriptions because the company was not – and still is not – effectively policing harmful sexual content on the platform.”

In other words, Twitter couldn’t risk facilitating the monetization of exploitative material in the app, and because it has no way of effectively policing such content, it had to scrap the proposal before it gained any real traction.

With that in mind, these new findings are no surprise – but again, the advertiser backlash is likely to be significant, which could force Twitter to launch a new crackdown either way.

For its part, Twitter says that it is investing more resources in child safety, ‘including hiring for new positions to write policy and implement solutions’.


So, great, Twitter’s taking action now. But these reports, based on Twitter’s own internal examinations, show that the company has been aware of this broader issue for some time – not child exploitation specifically, but adult content that it has no effective way of policing.

In fact, Twitter openly assists in the promotion of adult content, albeit inadvertently. For example, in the ‘For You’ section of my ‘Explore’ tab (i.e. the front page of Explore in the app), Twitter continuously recommends that I follow ‘Facebook’ as a topic, based on my tweets and the people I follow in the app.

Here are the tweets that it highlighted as some of the top topical tweets for ‘Facebook’ yesterday:

It’s not pornographic material as such, but I’m tipping that if I tapped through on any of these profiles, I’d find it pretty quickly. And again, these tweets are highlighted based on Twitter’s own topical tweets algorithm, which keys off engagement with tweets that mention the topic term. These completely unrelated and off-topic tweets are then being pushed by Twitter itself, to users who haven’t expressed any interest in adult content.
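For context on how that can happen, here’s a minimal, hypothetical sketch – not Twitter’s actual system, and the tweet fields are invented – of keyword-plus-engagement topic selection, which includes no real check that a tweet is genuinely about the topic:

```python
# Hypothetical sketch of keyword-plus-engagement topic selection -
# illustrative only, not Twitter's actual system; field names are invented.
def top_topical_tweets(tweets: list[dict], topic: str, n: int = 5) -> list[dict]:
    # Any tweet that merely mentions the topic term qualifies...
    mentioning = [t for t in tweets if topic.lower() in t["text"].lower()]
    # ...and the most-engaged-with mentions win, regardless of whether
    # they're actually about the topic, or who posted them.
    return sorted(mentioning, key=lambda t: t["likes"] + t["retweets"], reverse=True)[:n]
```

A selection rule of this shape will happily surface spam accounts that stuff popular topic terms into otherwise unrelated tweets.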

It’s clear, based on all the available evidence, that Twitter does have a porn problem, and it’s doing little to address it.

Distributors of adult content view Twitter as the best social network for advertising, because it’s less restrictive than Facebook, and has much broader reach than niche adult sites, while Twitter gains the usage and engagement benefits of hosting material that other social platforms would simply not allow.

Which is likely why it’s been willing to turn a blind eye for so long – to the point that it’s now being highlighted as a much bigger problem.

Though it is important to note that adult content, in itself, is not inherently problematic, among consenting adult users at least. It’s Twitter’s approach to child abuse and exploitative content that’s the real issue at hand.


And Twitter’s systems are reportedly ‘woefully inadequate’ in this respect.

As reported by The Verge:

“A 2021 report found that the processes Twitter uses to identify and remove child sexual exploitation material are woefully inadequate – largely manual at a time when larger companies have increasingly turned to automated systems that can catch material that isn’t flagged by PhotoDNA. Twitter’s primary enforcement software is “a legacy, unsupported tool” called RedPanda, according to the report. “RedPanda is by far one of the most fragile, inefficient, and under-supported tools we have on offer,” one engineer quoted in the report said.”

Indeed, additional analysis of Twitter’s CSE detection systems found that of the 1 million reports submitted each month, 84% contain newly-discovered material, ‘none of which would be flagged’ by Twitter’s systems.
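That figure makes sense once you consider how hash-based matching works. As a rough sketch (PhotoDNA uses perceptual hashing rather than the plain cryptographic hash shown here, and the hash database is a stand-in), a system like this can only flag files it has seen before:

```python
# Simplified illustration of hash-based matching. PhotoDNA-style systems
# use perceptual hashes; a plain SHA-256 lookup is shown for clarity, and
# KNOWN_HASHES stands in for a database of previously catalogued material.
import hashlib

KNOWN_HASHES: set[str] = set()  # hashes of already-identified material

def is_flagged(image_bytes: bytes) -> bool:
    digest = hashlib.sha256(image_bytes).hexdigest()
    # Only matches against previously catalogued material are caught;
    # newly created content has no hash on file, so it passes unflagged.
    return digest in KNOWN_HASHES
```

Catching new material requires either human review or classifiers that assess the content itself, which is exactly where a largely manual process falls short at Twitter’s scale.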

So while it’s advertisers that are putting the pressure back on the company in this instance, it’s clear that Twitter’s issues stem far beyond ad placement concerns alone.

Hitting Twitter’s bottom line, however, may be the only way to force the platform to take action – though it’ll be interesting to see just how willing and able Twitter is to enact a broader plan to address these issues amid its ongoing ownership battle.

Within its takeover agreement with Elon Musk, there’s a provision which states that Twitter needs to:

“Use its commercially reasonable efforts to preserve substantially intact the material components of its current business organization.”


In other words, Twitter can’t make any significant changes to its operational structure while it’s in the transition phase – which is currently in dispute, as it heads for a courtroom battle with Musk.

Would initiating a significant update to its CSE detection models qualify as a substantial change – substantial enough to alter the operating structure of the company at the time of the initial agreement?

In essence, Twitter likely doesn’t want to make any major changes. But it might have to, especially if more advertisers join this new boycott, and push the company to take immediate action.

It’s likely to be a mess either way, but this is a huge concern for Twitter, which should be rightfully held to account for its systemic failures in this respect.

