Can Effective Regulation Reduce the Impact of Divisive Content on Social Networks?




Amid a new storm of controversy sparked by The Facebook Files, an exposé of internal research projects which, in some ways, suggests that Facebook isn’t doing enough to protect users from harm, the core question that needs to be addressed is often distorted by inherent bias, and by the specific targeting of Facebook, the company, as opposed to social media and algorithmic content amplification as concepts.

That is: what do we do to fix it? What can realistically be done that will actually make a difference? What changes to regulation or policy could feasibly be implemented to reduce the amplification of the harmful, divisive posts that are fueling more angst within society, as the influence of social media apps grows?

It’s important to consider social media more broadly here, because every social platform uses algorithms to define content distribution and reach. Facebook is by far the biggest, and has more influence on key elements like news content – and of course, the research insights in this case came from Facebook itself.

The focus on Facebook, specifically, makes sense, but Twitter also amplifies content that sparks more engagement, LinkedIn sorts its feed based on what it determines will be most engaging, and TikTok’s algorithm is highly attuned to your interests.

The problem, as highlighted by Facebook whistleblower Frances Haugen, is algorithmic distribution, not Facebook itself – so what ideas do we have that could realistically improve that element?

And the further question then is, will social platforms be willing to make such changes, especially if they present a risk to their engagement and user activity levels?

Haugen, who’s an expert in algorithmic content matching, has proposed that social networks should be forced to stop using engagement-based algorithms altogether, via reforms to Section 230 laws, which currently protect social media companies from legal liability for what users share in their apps.

As explained by Haugen:

“If we had appropriate oversight, or if we reformed [Section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking.”

The concept here is that Facebook – and by extension, all social platforms – would be held accountable for the ways in which they amplify certain content. So if more people end up seeing, say, COVID misinformation because of algorithmic intervention, Facebook could be held legally liable for any impacts.

That would add significant risk to any decision-making around the construction of such algorithms, and as Haugen notes, that may then see the platforms forced to take a step back from measures which boost the reach of posts based on how users interact with such content.

Essentially, that would likely see social platforms forced to return to pre-algorithm days, when Facebook and other apps would simply show you a listing of the content from the pages and people you follow in chronological order, based on post time. That, in turn, would then reduce the motivation for people and brands to share more controversial, engagement-baiting content in order to play into the algorithm’s whims.

The idea has some merit – as various studies have shown, sparking emotional response with your social posts is key to maximizing engagement, and thus reach, via algorithmic amplification, and the most effective emotions in this respect are humor and anger. Jokes and funny videos still do well on all platforms, fueled by algorithmic reach, but so too do anger-inducing hot takes, which partisan news outlets and personalities have run with, and which could well be a key source of the division and angst we now see online.

To be clear, Facebook cannot be held solely responsible for this. Partisan publishers and controversial figures have long played a role in broader discourse, and they were sparking attention and engagement with their divisive opinions long before Facebook arrived. The difference now is that social networks facilitate far broader reach, while also, through Likes and other forms of engagement, providing direct incentive for such behavior – individual users get a dopamine hit from triggering responses, while publishers drive more referral traffic and gain more exposure through provocation.

Really, a key issue to consider here is that everyone now has a voice, and when everyone has a platform to share their thoughts and opinions, we’re all far more exposed to them, and far more aware of them. In the past, you likely had no idea about your uncle’s political persuasions, but now you know, because social media reminds you every day, and that type of peer sharing is also playing a role in broader division.

Haugen’s argument, however, is that Facebook incentivizes this – for example, one of the reports Haugen leaked to the Wall Street Journal outlines how Facebook updated its News Feed algorithm in 2018 to put more emphasis on engagement between users, and reduce political discussion, which had become an increasingly divisive element in the app. Facebook did this by changing its weighting for different types of engagement with posts.

Facebook algorithm diagram

The idea was that this would incentivize more discussion by weighting replies more heavily – but as you can imagine, putting more value on comments in order to drive more reach also prompted more publishers and Pages to share increasingly divisive, emotionally charged posts, in order to incite more reactions and gain higher share scores as a result. With this update, Likes were no longer the key driver of reach, as they had been; Facebook made comments and Reactions (including ‘Angry’) increasingly important. As such, posts sparking discussion around political trends became more prominent, exposing more users to such content in their feeds.

The suggestion then, based on this internal data, is that Facebook knew this – it knew that this change had ramped up divisive content – but it opted not to revert, or implement another update, because engagement, a key measure of its business success, had indeed increased as a result.

In this sense, removing the algorithmic motivation would make sense – or maybe, you could remove algorithm incentives for certain post types, like political discussion, while still maximizing the reach of more engaging posts from friends, catering to both engagement goals and concerns around division.

That’s what Facebook’s Dave Gillis, who works on the platform’s product safety team, pointed to in a tweet thread in response to the revelations.

As per Gillis:

At the end of the WSJ piece about algorithmic feed ranking, it’s mentioned – almost in passing – that we switched away from engagement-based ranking for civic and health content in News Feed. But hang-on – that’s kind of a big deal, no? It’s probably reasonable to rank, say, cat videos and baby photos by likes etc. but handle other kinds of content with greater care. And that is, in fact, what our teams advocated to do: use different ranking signals for health and civic content, prioritizing quality + trustworthiness over engagement. We worked hard to understand the impact, get leadership on board – yep, Mark too – and it’s an important change.

This could be a way forward: using different ranking signals for different types of content may enable optimal amplification, boosting beneficial user engagement while also lessening the motivation for certain actors to post divisive material to feed into algorithmic reach.

Would that work? Again, it’s hard to say, because people would still be able to share posts, comment, and re-distribute material online – there are still many ways that amplification can happen outside of the algorithm itself.

In essence, there are merits to both suggestions: that social platforms could treat different types of content differently, or that engagement-based algorithms could be eliminated to reduce the amplification of such material.

And as Haugen notes, focusing on the systems themselves is important, because content-based solutions open up various complexities when the material is posted in other languages and regions.

“In the case of Ethiopia, there are 100 million people and six languages. Facebook only supports two of those languages for integrity systems. This strategy of focusing on language-specific, content-specific systems for AI to save us is doomed to fail.”

Maybe, then, removing algorithms, or at least changing the regulations around how algorithms operate, would be an optimal solution, which could help to reduce the impacts of negative, rage-inducing content across the social media sphere.

But then we’re back to the original problem that Facebook’s algorithm was designed to solve. Back in 2015, Facebook explained that it needed the News Feed algorithm not only to maximize user engagement, but also to help ensure that people saw the updates of most relevance to them.

As it explained, the average Facebook user, at that time, had around 1,500 posts eligible to appear in their News Feed on any given day, based on Pages they’d liked and their personal connections – while for some more active users, that number was more like 15,000. It’s simply not possible for people to read every single one of these updates every day, so Facebook’s key focus with the initial algorithm was to create a system that uncovered the best, most relevant content for each individual, in order to provide users with the most engaging experience, and subsequently keep them coming back.

As Facebook’s chief product officer Chris Cox explained to Time Magazine:

“If you could rate everything that happened on Earth today that was published anywhere by any of your friends, any of your family, any news source, and then pick the 10 that were the most meaningful to know today, that would be a really cool service for us to build. That is really what we aspire to have News Feed become.”

The News Feed approach has evolved a lot since then, but the fundamental challenge that it was designed to solve remains. People have too many connections, they follow too many Pages, they’re members of too many groups to get all of their updates, every day. Without the feed algorithm, they will miss relevant posts, relevant updates like family announcements and birthdays, and they simply won’t be as engaged in the Facebook experience.

Without the algorithm, Facebook would lose out by failing to optimize for audience desires – and as highlighted in another of the reports shared as part of the Facebook Files, it’s actually already seeing engagement declines in some demographic subsets.

Facebook engagement over time

You can imagine that if Facebook were to eliminate the algorithm, or be forced to change direction on this, this graph would only get worse over time.

Zuck and Co. are therefore not likely to be keen on that solution, so a compromise, like the one proposed by Gillis, may be the best that can be expected. But that comes with its own flaws and risks.

Either way, the focus of the debate needs to shift to algorithms more broadly, not just to Facebook alone, and to whether there is actually a viable, workable way to change the incentives around algorithm-based systems, in order to limit the distribution of more divisive elements.

Because that is a problem, no matter how Facebook or anyone else tries to spin it, which is why Haugen’s stance is important, as it may well be the spark that leads us to a new, more nuanced debate around this key element.



With outburst, Musk puts X’s survival in the balance



Even after Elon Musk gutted the staff by two-thirds, X, formerly Twitter, still has around 2,000 employees, and incurs substantial fixed costs like data servers and real estate
– Copyright POOL/AFP/File Leon Neal


Elon Musk’s verbal assault on advertisers who have shunned X (formerly Twitter) threatens to sink the social network further, with the tycoon warning of the platform’s demise, just one year after taking control.

“If somebody’s gonna try to blackmail me with advertising, go fuck yourself,” a visibly furious Musk told an interviewer in New York in front of an audience of the US business elite this week.

Musk was lashing out at the advertisers who had abandoned his platform after Media Matters, a left-wing media watchdog group, warned big companies that their ads were running alongside posts by neo-Nazis.

Walmart on Friday was the latest to join the exodus, following the footsteps of IBM, Disney, Paramount, NBCUniversal, Lionsgate and others.

The latest controversy broke earlier this month when Musk declared a tweet promoting an antisemitic conspiracy theory to be the “absolute truth.”

Musk apologized for his tweet, even taking a trip to Israel to meet with Prime Minister Benjamin Netanyahu, but on Wednesday he targeted his anger squarely at advertisers.

“It doesn’t take a social media expert to know that publicly and personally attacking the people in companies that pay X’s bills is not going to be good for business,” said analyst Jasmine Enberg of Insider Intelligence.

“Most advertiser boycotts on social media companies, including X, have been short lived. There’s a potential for this one to be longer,” she added.

Musk said the survival of X could be at stake.

“What this advertising boycott is going to do is kill the company,” Musk said.

“Everybody will know” that advertisers were those responsible, he angrily added.

– Bankruptcy looms? –

Even before the latest bust-up, Insider Intelligence was forecasting a 54-percent contraction in X’s ad sales this year, to $1.9 billion.

“The advertising exodus at X could accelerate with Musk not playing nice in the sandbox,” said Dan Ives of Wedbush Securities.

According to data provided to AFP by market data analysis company SensorTower, as many as half of the social network’s top 100 US advertisers in October 2022 have already stopped spending altogether.

But by dropping X, “you are opening yourself up for competitors to step into your territory,” warned Kellis Landrum, co-founder of digital marketing agency True North Social.

Advertisers may also choose to stay for lack of an equivalent alternative.

Meta’s new Threads platform and other upstarts have yet to prove worthy adversaries for the time being, Landrum argued.

Analyst Enberg insisted that “X is not an essential platform for many advertisers, so withdrawing temporarily tends to be a pretty painless decision.”

Privately held, X does not release official figures, but all estimates point to a significant drop in the number of users.

SensorTower estimates that monthly users fell 45 percent at the start of the fourth quarter, compared with the same period last year.

Added to this is the disengagement of dozens of highly followed accounts, including major brands such as Coca-Cola, PepsiCo, JPMorgan and Starbucks, as well as many celebrities and media personalities who have stopped or reduced usage.

The corporate big names haven’t posted any content for weeks, when they used to be an everyday presence.

None of the dozen or so companies contacted by AFP responded to requests for comments.

In normal conditions, Twitter or X “was always much larger than its ad dollars,” said Enberg.

It was “an important place for brands and companies to connect with consumers and customers,” she said.

Even after Musk gutted the staff by two-thirds, X still has around 2,000 employees, and incurs substantial fixed costs like data servers and real estate.

Another threat is the colossal debt contracted by Musk for his acquisition, but now carried by X, which must meet a payment of over a billion dollars each year.

In his tense interview on Wednesday, Musk hinted that he would not come to the rescue if the coffers run dry, even if he has ample means to do so.

“If the company fails… it will fail because of an advertiser boycott and that will bankrupt the company,” Musk said.




Walmart says it has stopped advertising on Elon Musk’s X platform




Walmart said Friday that it is scaling back its advertising on X, the social media company formerly known as Twitter, because “we’ve found some other platforms better for reaching our customers.”

Walmart’s decision has been in the works for a while, according to a person familiar with the move. Yet it comes as X faces an advertiser exodus following billionaire owner Elon Musk’s support for an antisemitic post on the platform. 

The retailer spends about $2.7 billion on advertising each year, according to MarketingDive. In an email to CBS MoneyWatch, X’s head of operations, Joe Benarroch, said Walmart still has a large presence on X. He added that the company stopped advertising on X in October, “so this is not a recent pausing.”

“Walmart has a wonderful community of more than a million people on X, and with a half a billion people on X, every year the platform experiences 15 billion impressions about the holidays alone with more than 50% of X users doing most or all of their shopping online,” Benarroch said.

Musk struck a defiant pose earlier this week at the New York Times’ Dealbook Summit, where he cursed out advertisers that had distanced themselves from X, telling them to “go f— yourself.” He also complained that companies are trying to “blackmail me with advertising” by cutting off their spending with the platform, and cautioned that the loss of big advertisers could “kill” X.

“And the whole world will know that those advertisers killed the company,” Musk added.


Dozens of advertisers — including players such as Apple, Coca-Cola and Disney — have bailed on X since Musk tweeted that a post on the platform which claimed Jews fomented hatred against White people, echoing antisemitic stereotypes, was “the actual truth.”

Advertisers generally shy away from placing their brands and marketing messages next to controversial material, for fear that their image with consumers could get tarnished by incendiary content. 

The loss of major advertisers could deprive X of up to $75 million in revenue, according to a New York Times report.

Musk said Wednesday that his endorsement of the antisemitic post was “one of the most foolish” things he’d ever posted on X.

“I am quite sorry,” he said, adding, “I should in retrospect not have replied to that particular post.”




US Judge Blocks Montana’s Effort to Ban TikTok




TikTok has won another reprieve in the U.S., with a district judge blocking Montana’s effort to ban the app for all users in the state.

Back in May, Montana Governor Greg Gianforte signed legislation to ban TikTok outright from operating in the state, in order to protect residents from alleged intelligence-gathering by China. There’s no definitive evidence that TikTok has participated in such activity, but Gianforte opted for a full ban, going further than the government device bans issued in other regions.

As explained by Gianforte at the time:

“The Chinese Communist Party using TikTok to spy on Americans, violate their privacy, and collect their personal, private, and sensitive information is well-documented. Today, Montana takes the most decisive action of any state to protect Montanans’ private data and sensitive personal information from being harvested by the Chinese Communist Party.”

In response, a collection of TikTok users challenged the proposed ban, arguing that it violated their First Amendment rights, which led to this latest court challenge, and District Court Judge Donald Molloy’s decision to block Montana’s ban effort.

Montana’s TikTok ban had been set to go into effect on Jan. 1, 2024.

In issuing a preliminary injunction to stop Montana from imposing a full ban on the app, Molloy said that Montana’s legislation does indeed violate the Constitution and “oversteps state power.”

Molloy’s judgment is primarily centered on the fact that Montana has essentially sought to exercise foreign policy authority in enacting a TikTok ban, which is only enforceable by federal authorities. Molloy also noted that there was “a pervasive undertone of anti-Chinese sentiment” within Montana’s proposed legislation.

TikTok has welcomed the ruling, issuing a brief statement in response.

Montana’s attorney general, meanwhile, has said that the office is considering next steps to advance the proposed TikTok ban.

The news is a win for TikTok, though the Biden Administration is still weighing a full TikTok ban in the U.S., which may still happen, even though the process has been delayed by legal and legislative challenges.

As I’ve noted previously, my sense here would be that TikTok won’t be banned in the U.S. unless there’s a significant shift in U.S.-China relations, and that relationship is always somewhat tense, and volatile to a degree.

If the U.S. government has new reason to be concerned, it may well move to ban the app. But doing so would be a significant step, and would prompt further response from the C.C.P.

Which is why I suspect that the U.S. government won’t act, unless it feels that it has to. And right now, there’s no clear impetus to implement a ban, and stop a Chinese-owned company from operating in the region, purely because of its origin.

Which is the real crux of the issue here. A TikTok ban is not just a ban on a social media company; it’s a block on cross-border commerce, because the company is Chinese-owned. That will remain the underlying logic unless clear evidence arises that TikTok has been used as a vector for gathering information on U.S. citizens.

Banning a Chinese-owned app purely because it is Chinese-owned is a statement that goes beyond concerns about a social app, and the U.S. is right to tread carefully in considering how such a move might impact other industries.

So right now, TikTok is not going to be banned, in Montana, or anywhere else in the U.S. But that could still change, very quickly.
