
Will the Capitol Riots Prompt a Re-Think as to How Social Platforms Address Free Speech?


Will the events of this week mark a turning point for social media platforms and their approach to content moderation and censorship, particularly around dangerous movements and politically motivated debate?

Following the riots at the Capitol building on Wednesday, which had been instigated, at least in part, by US President Donald Trump, who had called on his supporters to mobilize, and even fight for him, in a last-ditch attempt to overturn the election result, all of the major social platforms took action against the President and his avid supporters, in varying ways.

Many believe that these actions were long overdue, given Trump’s history of using social media as a megaphone for his divisive agenda, but others have rightly noted that this was the first time that the resulting violence could be directly tied to social media itself.

There’s been a range of previous movements, like QAnon, which have been linked to criminal acts, but the actual role that social media played in facilitating them has been up for debate. That’s somewhat similar to Russian interference in the 2016 US Election – we know now that Russian-based groups did seek to interfere, and influence voter actions in the lead-up to the poll. But did those efforts actually work? Did people really change their voting behavior as a result? The actual impact is difficult to accurately measure.

This week’s Capitol riots, however, can be clearly and directly linked to social media activity.

As explained by ProPublica:

“For weeks, the far-right supporters of President Donald Trump railed on social media that the election had been stolen. They openly discussed the idea of violent protest on the day Congress met to certify the result.”

The specific details of their planned action are also tied into Trump’s social posts – for example, after Trump tweeted:

[Embedded tweet: Trump tweet about Pence]

The focus of the protesters turned to VP Pence, with various reports suggesting that they intended to kidnap or hold Pence hostage in order to force Congress to reinstate the President.

This was the first time that the full plan of action, from inception to outcome, was traceable via social posts and activity, with the President actively playing a part in provoking and inciting the mob. Which is why the platforms moved to stronger mitigation efforts in response – but does that mean they’ll look to change how they view similar incidents in future?

In many ways, Trump himself is an anomaly, a hugely popular celebrity turned politician, who then used his celebrity status to share his messaging via social platforms. Having a well-known personality become a politician is not uncommon, so it’s not an entirely new approach, but the way Trump weaponized his social media following was unlike anything we’ve seen before.

As Trump himself told Fox Business in 2017:

“I doubt I would be here if it weren’t for social media, to be honest with you. […] When somebody says something about me, I’m able to go bing, bing, bing and I take care of it. The other way, I would never be get the word out.”

In this way, Trump essentially used social platforms as his own propaganda outlet, deriding any stories critical of him and his administration as ‘fake news’, while treating any positive coverage as 100% accurate. This led to various contradictions – one week, The New York Times was ‘the enemy of the people’, publishing false information at will, then the next, when it published a poll in his favor, it was acceptable once again.

Yet, despite these inconsistencies, Trump’s supporters lapped it up, and over time, he’s been able to use his social media presence to build a cult-like empire, which eventually led to him being able to effectively incite a coup in an attempt to keep himself in power. This is despite there being no solid evidence to support his claims of massive voter fraud, which, in Trump’s view, invalidates the election result.

Trump’s approach has been different from what we’ve seen in the past – but that doesn’t mean it can’t happen again. And if it does, will the social platforms look to take a tougher stand earlier on? Will they now see that the end result of taking a more ‘hands-off’ approach is more dangerous than actually addressing such issues in their initial stages?

A key example here is QAnon – as far back as 2016, various experts had warned Facebook about the dangers posed by the ‘pizzagate’ conspiracy movement, which had been gaining momentum and support across its platforms. Facebook refused to take action, citing its ‘free speech’ ethos, and that initial seed then evolved into a more organized movement, which then morphed into the QAnon group. An internal investigation conducted by Facebook last year found that the platform had provided a home for thousands of QAnon groups and Pages, with millions of members and followers. And as threats of violence and dangerous activity were increasingly linked to the group, Facebook finally chose to act, first cracking down on QAnon groups in August last year, before announcing a full ban on QAnon-related content in October.

Facebook will argue that it acted based on the evidence it saw, and in line with its evolving approach to such. But it does seem likely that, had Facebook sought to take a stronger stance in those initial stages, QAnon may not have ever been able to develop the momentum that it did. And while QAnon was only one of the many groups at play in the Capitol riot, you could argue that the situation could have been avoided had there been a more concentrated effort to draw a line on misinformation and dangerous speech much earlier in the piece.

Yet at the same time, Facebook has been calling for a more comprehensive approach to such – as noted by Instagram chief Adam Mosseri this week:

“We, at Facebook and Instagram, have been clear for years that we believe regulation around harmful content would be a good thing. That gets tricky when elected officials start violating rules, but is still an idea worth pursuing.”

Facebook itself has implemented its own, independent Oversight Board to assist with content decisions, a team of experts from a range of fields that will help the company implement better approaches to content moderation, and rulings on what should and should not be allowed on its platforms.

The Oversight Board has only just begun its work, and it remains an experiment in many respects, so we don’t know what sort of impact it’ll end up having. But Facebook sees it as a micro-example of what the entire industry should be seeking.

Again from Mosseri:

“We’ve suggested third-party bodies to set standards for harmful content and to measure companies against those standards. Regulation could set baselines for what’s allowed and require companies to build systems accordingly.”

In Facebook’s view, it shouldn’t be up to the platforms themselves to rule on what’s allowed in this respect; it should come down to a panel of independent experts to establish parameters for all platforms, in order to ensure uniformity in approach, and lessen the burden of censorship on private organizations – which clearly have different motivations based on business strategy.

In some ways, this means that Facebook is agreeing with critics that it hasn’t adequately addressed such concerns, because it’s working to balance different goals, while it’s also learning as it goes in many respects. No company has ever been in Facebook’s situation before, serving over 2.7 billion users, in virtually every region of the world, and when you’re working to monitor the actions of so many people, in so many different places, with so many different concerns, inevitably, things are going to slip through the cracks.

But at that scale, when things do slip, the consequences, as we’ve seen, can be significant. And many also overlook, or are unaware of, the impact that Facebook has had in smaller Asian and African regions, where it’s also seen as a major influence on local politics, elections and unrest.

Maybe now, however, with these scenes playing out on the doorstep of American democracy, and Senators locked in their offices to avoid the violence, there’ll be an increased push for change, and for more stringent rules around what action needs to be taken to stamp out concerning movements before they can take root.

Whether that comes from the platforms themselves, or via increased external regulation, the Capitol riots could be a turning point for social media more broadly.

Of course, there will always be those who seek to push the limits, no matter where those limits are set, and there will always be elements that ride the line, and could easily veer into more dangerous territory. But it seems clear now that something must be done, with the weaponization of social media posing major risks. 

Will that spark a new debate around the limits of free speech, and the responsibility of big tech? It seems like now is the time to ask the big questions.  

Socialmediatoday.com


Snap making changes to direct response advertising business


The company posted a net loss of $288.5 million, or 18 cents a share, including $34 million in charges from its workforce restructuring. That compared to a profit of $23 million, or one cent, a year earlier.

Snap ended the fourth quarter with 375 million daily users, a 17% increase. In the first three months of the year, the company estimates 382 million to 384 million people will use its platform daily.

Snap has become a bellwether for other digital advertising companies. Last year, it was the first to raise concerns about the slowdown in marketer spending online and to fire a significant number of employees—20% of its workforce—to cut costs in the face of falling revenue.

The company has spent the last two quarters refocusing the organization, cutting projects that don’t contribute to user and revenue growth.

In the first quarter, Snap expects the environment to “remain challenging as we expect the headwinds we have faced over the past year to persist.”

Investors will get additional information about the state of the digital ad market when Meta and Alphabet report earnings later this week.

—Bloomberg News


Twitter Outlines New Platform Rules Which Emphasize Reduced Reach, as Opposed to Suspensions


After reinstating thousands of previously suspended accounts, as part of new chief Elon Musk’s ‘amnesty’ initiative, Twitter has now outlined how it will be enforcing its rules from now on, which includes less restrictive measures for some violations.

As explained by Twitter:

“We have been proactively reinstating previously suspended accounts […] We did not reinstate accounts that engaged in illegal activity, threats of harm or violence, large-scale spam and platform manipulation, or when there was no recent appeal to have the account reinstated. Going forward, we will take less severe actions, such as limiting the reach of policy-violating Tweets or asking you to remove Tweets before you can continue using your account.”

This is in line with Musk’s previously stated ‘freedom of speech, not freedom of reach’ approach, which will see Twitter leaning more towards leaving content active in the app, but reducing its impact algorithmically, if it breaks any rules.

Which means a lot of tweets that would have previously been deemed violative will now remain in the app, and while Musk notes that no ads will be displayed against such content, that could be difficult to enforce, given the way the tweet timeline functions.

But it does align with Musk’s free speech approach, and reduces the onus on Twitter, to some degree, in moderating speech. It will still need to assess each instance, case-by-case, but users themselves will be less aware of penalties – though Musk has also flagged adding more notifications and explainers to outline any reach penalties as well.

“Account suspension will be reserved for severe or ongoing, repeat violations of our policies. Severe violations include but are not limited to: engaging in illegal content or activity, inciting or threatening violence or harm, privacy violations, platform manipulation or spam, and engaging in targeted harassment of our users.”

Which still means that a lot of the content these users had previously been suspended for would result in suspension now as well, and it leaves a lot up to Twitter management in assessing the severity of certain actions.

How do you definitively measure threats of violence or harm, for example? Former President Donald Trump was sanctioned under this policy, but many, including Musk, were critical of Twitter’s decision to do so, given that Trump is an elected representative.

In other nations, too, Twitter has been pressured to remove tweets under these policies, and it’ll be interesting to see how Twitter 2.0 handles such, given its stated more lax approach to moderation, despite its rules remaining largely the same.

Already, questions have been raised on this front – Twitter recently removed links to a BBC documentary that’s critical of the Indian Government, at the request of India’s PM. Twitter hasn’t offered any official explanation for the action, but with Musk also working with the Indian Government to secure partnerships for his other business, Tesla, questions have been raised as to how he will manage these competing interests.

In essence, Twitter’s approach now changes as and when it chooses, and the rules, as such, will effectively be governed by Musk himself. And as we’ve already seen, he will make drastic rule changes based on personal agendas and experience.

Twitter says that, starting February 1st, any previously suspended users will be able to appeal their suspension, and be evaluated under its new criteria for reinstatement.

It’s also targeting February for the launch of its new account penalty notifications.




4 new social media features you need to know about this week



Social media never stands still. Every week there are new features — and it’s hard for the busy comms pro to stay up-to-date on it all.

We’ve got you covered.

Here’s what you need to know about this week.

LinkedIn

Social media sleuth Matt Navarra reported on Twitter that LinkedIn will soon make the newsletters you subscribe to through the site visible to other users.

This should aid newsletter discovery by adding in an element of social proof: if it’s good enough for this person I like and respect, it’s good enough for me. It also might be an opportunity to get your toe in the water with LinkedIn’s newsletter features.

Instagram

After admitting it went a little crazy on Reels and ignored its bread and butter of photographs, Instagram continues to refine its platform and algorithm. Although there were big changes over the last few weeks, these newer changes are subtler but still significant.

First, the animated avatars will be more prominent on profiles. Users can now choose to flip between the cartoony, waving avatar and their more traditional profile picture, rather than picking one or the other, TechCrunch reported, seemingly part of a push to incorporate metaverse-esque elements into the app.

Instagram also appears to have added an option to include a lead form on business profiles. We say “appears” because, as Social Media Today reports, the feature is not yet listed as an official feature, though it has rolled out broadly.

The feature will allow businesses to use standard forms or customize their own, including multiple-choice questions or short answers.

Twitter

In the chaotic world of Twitter updates, this week is fairly staid — with a useful feature for advertisers.

The platform will roll out the ability to promote tweets among search results. As Twitter’s announcement points out, someone actively searching for a term could signal stronger intent than someone merely passively scrolling a feed.

Which of these new features are you most interested in? That LinkedIn newsletter tool could be great for spreading the word — and for discovering new reads.

Allison Carter is executive editor of PR Daily. Follow her on Twitter or LinkedIn.
