

Facebook’s Stance on Political Ads Once Again Highlights a Common Flaw in its Policy Approach



There’s an old Saturday Night Live sketch, starring Norm Macdonald, called ‘Bible Challenge’, in which the competition is based on honesty, with each contestant confessing to their knowledge of the Bible, and winning points based on whether they knew the answer or not (apologies for the low-quality clip).

This came to mind when contemplating Facebook’s current approach to political ads and posted commentary from elected officials. Facebook’s stance once again came under scrutiny this week after Twitter decided to add a fact-check warning to two tweets from US President Donald Trump. That prompted Trump to call for changes to the laws that protect social platforms from liability for the content their users post – if the platforms are going to edit people’s posts, then the current protections should no longer apply, according to the Trump administration.

If any change is made to these laws, that would impact Facebook as well, and arguably even more so, given The Social Network has more than 10x as many daily active users as Twitter. So where does Facebook stand on the matter?

As you would expect, Facebook says that any such change would have an adverse impact.

“Repealing or limiting Section 230 […] will restrict more speech online, not less. By exposing companies to potential liability for everything that billions of people around the world say, this would penalize companies that choose to allow controversial speech and encourage platforms to censor anything that might offend anyone.”

But Facebook has also stood by its decision not to take action on the same statements from President Trump that Twitter has, which Trump has also re-posted on his Facebook Page.

Facebook post from Donald Trump

As per Facebook CEO Mark Zuckerberg:

“I disagree strongly with how the President spoke about this, but I believe people should be able to see this for themselves, because ultimately, accountability for those in positions of power can only happen when their speech is scrutinized out in the open.”

So Facebook is still sticking to its guns, and will not subject posts from politicians to fact checks.


Is that the right approach, or does it create a dangerous situation where influential leaders can say whatever they want, unchecked, unhindered, and able to reach a very large audience?

The answer is not simple – for better or worse, Facebook’s approach does make some sense.

Facebook’s stance, which I don’t think it has communicated well, is that these people are elected officials, the leaders that the voting public has chosen. Therefore, we have a right to hear what they have to say, good or bad, true or not. As the leaders chosen by the majority, it should be up to the people to judge their public actions, which are based, in part, on what they say. If anything, social platforms merely add transparency – with every post and comment, voters can judge for themselves how they feel, and make a more informed decision on their support (or not).

It’s not an illogical position, but that process yet again highlights a common flaw in Facebook’s policy approach – Facebook errs on the side of optimism, assuming the best in people, while overlooking the potential negatives.

That’s what led to the Cambridge Analytica situation – Facebook gave various academic groups access to vast collections of user data, on the promise that none of them would use such insight for any purpose beyond their stated research. Of course, at least one of them did, and in retrospect, it seems overly optimistic of Facebook to have assumed that nobody would be tempted to misuse its powerful audience insights in this way. But Facebook didn’t have any systems or processes in place to stop this; it just assumed nothing would go wrong. Until it did.


The same thing happened with Facebook’s SDK – Facebook gave developers full access to user insights, on the provision that they would only gather people’s personal information where they needed it. Many apps ended up sucking in huge amounts of personal data, not only from the users of those apps, but also from their friends and family, who were connected to them through Facebook’s expanded network.

The developers shouldn’t have been able to access so much data, and Facebook has since implemented significant restrictions on such. But Facebook, again, didn’t consider the potential negatives of this process – it simply, seemingly, hoped that developers just wouldn’t misuse its tools.

At best, Facebook failed to consider the potential for misuse in both cases. Which brings us back to its stance on political ads.

As noted, Facebook’s approach does make some sense – these are the people that we have elected, and we have a right to hear whatever they have to say. The flaw, however, is in how Facebook assumes that will play out.

As explained by Zuckerberg last October:

“I believe in giving people a voice, because at the end of the day, I believe in people. And as long as enough of us keep fighting for this, I believe that more people’s voices will eventually help us work through these issues together and write a new chapter in our history — where from all of our individual voices and perspectives, we can bring the world closer together.”

Again, the principle is that the people can judge for themselves, but that also assumes that people have the capacity to separate fact from fiction on their own, and that leaders won’t share outright lies or misinformation to muddy that process. Which, as we’ve seen repeatedly, is not the case.

As an example, back in March, Brazilian President Jair Bolsonaro urged cities to remain open as normal amid the COVID-19 outbreak, saying that:


“We must return to normality – the few states and city halls should abandon their scorched-earth policies.”

Bolsonaro has consistently downplayed the pandemic, labeling it ‘a little flu’ and ‘nothing to be afraid of’.

This is an elected official, so according to Facebook’s approach, the Brazilian people should be free to decide if these statements are true or not. But in this case, that’s an extremely dangerous approach.

Many people will take Bolsonaro’s advice on his word alone, which could lead to them heading out, against official health advice. Almost 29,000 Brazilians have died of COVID-19 thus far, while the WHO has said that the nation is now the new epicenter of the pandemic. As such, Bolsonaro’s statement, in retrospect, was clearly dangerous, and the fact that the President advised as much has given significant weight to a counter-narrative that has almost undoubtedly led to more deaths.

The risks in this case would outweigh the transparency benefits.

But that’s basically Facebook’s approach, which brings me back to that SNL sketch from years ago. The flaw in Facebook’s stance on political content is that it won’t fact-check such posts because it trusts that political leaders simply won’t gratuitously misuse their reach.

Norm Macdonald’s character wins ‘Bible Challenge’ not because he’s the most honest, but because he’s the opposite – yet the rules of the game don’t account for his behavior.

In principle, the idea of not implementing fact checks on posts from political leaders makes some sense. But in practice, it might just make it easier for the most unscrupulous candidates to win.



Twitter Expands its Test of User-Reported Misinformation, Expanding Platform Insight




After seeing success with its initial test of a new, manual reporting option, enabling users to flag tweets that contain potentially misleading claims, Twitter is now expanding the test to more regions, with users in Brazil, Spain, and the Philippines now set to get access.

Launched in August last year, Twitter’s latest effort to combat misinformation focuses on audience trends, and users’ perception of tweet content, as a means to determine common issues on the platform – what people feel compelled to report points to what they don’t want to see.

The process adds an additional ‘It’s misleading’ option to your tweet reporting tools, providing another means to flag concerning claims.

This is obviously not a foolproof way to detect and remove misleading content – but as noted, the idea is focused less on direct enforcement, and more on broader trends, based on how many people report certain tweets, and what they report.

As Twitter explained as part of the initial launch:

“Although we may not take action on this report or respond to you directly, we will use this report to develop new ways to reduce misleading info. This could include limiting its visibility, providing additional context, and creating new policies.”

So essentially, the concept is that if, say, 100, or 1,000 people report the same tweet for ‘political misinformation’, that’ll likely get Twitter’s attention, which may help Twitter identify what users don’t want to see, and want the platform to take action against, even if it’s not actually in violation of the current rules.


So it’s more of a research tool than an enforcement option – which is a better approach, because enabling users to dictate removals by mass-reporting in this way could definitely lead to misuse.

That, in some ways, has been borne out in its initial testing – as explained by Twitter’s Head of Site Integrity Yoel Roth:

On average, only about 10% of misinfo reports were actionable – compared to 20-30% for other policy areas. A key driver of this was “off-topic” reports that don’t contain misinfo at all.

In other words, many of the tweets reported through this manual option were not an actual concern, which highlights the challenges in using user reports as an enforcement measure.

But Roth notes that the data they have gathered has been valuable either way:

We’re already seeing clear benefits from reporting for the second use case (aggregate analysis) – especially when it comes to non-text-based misinfo, such as media and URLs linking to off-platform misinformation.

So it may not be a great avenue for direct action on each reported tweet, but as a research tool, the initiative has helped Twitter determine more areas of focus, contributing to its broader effort to eliminate misinformation within the tweet ecosystem.

A big element of this is bots, with various research reports indicating that Twitter bots are key amplifiers of misinformation and politically biased information.

In early 2020, at the height of the Australian bushfire crisis, researchers from Queensland University of Technology detected a massive network of Twitter bots that had been spreading misinformation about the fires and amplifying anti-climate change conspiracy theories in opposition to established facts. Other examinations have found that bot profiles, at times, contribute up to 60% of tweet activity around some trending events.


Twitter is constantly working to better identify bot networks and eliminate any influence they may have, but this expanded reporting process may help to identify additional bot trends, as well as providing insight into the actual reach of bot pushes via expanded user reporting.

There are various ways in which such insight could be of value, even if it doesn’t result in direct action against offending tweets, as such. And it’ll be interesting to see how Twitter’s expansion of the program improves the initiative, and how it also pairs with its ongoing ‘Birdwatch’ reporting program to detect platform misuse.

Essentially, this program won’t drive a sudden influx of direct removals, eliminating offending tweets based on the variable sensibilities of each user. But it will help to identify key content trends and user concerns, which will contribute to Twitter’s broader effort to better detect these movements, and reduce their influence.




Twitter’s Latest Promotional Campaign Focuses on Celebrities Who’ve Manifested Success Via Tweet




Twitter has launched a new advertising campaign which is focused on ‘manifesting’ via tweet, highlighting how a range of successful athletes and entertainers made initial commitments to their success via Twitter long before their public achievements.

Through a new set of billboard ads across the US, Twitter will showcase 12 celebrities that ‘tweeted their dreams into existence’.

As explained by Twitter:

“To honor these athletes and other celebrities for Tweeting their dreams into existence, Twitter turned their famous Tweets into 39+ billboards! Located across 8 cities (NYC, LA, SF, Chicago, Toronto, Houston, Tampa, Talladega), most of the billboards can be found in the hometowns or teams’ locations of the stars who manifested their dreams, such as Bubba Wallace in Talladega and Diamond DeShields in Chicago.”

Twitter Manifest campaign

Beyond the platform promotion alone, the billboards align with usage trends at this time of year, as people work to stick to their New Year’s resolutions and adopt new habits to improve their lives. Seeing big-name stars who’ve achieved their own dreams, which they publicly communicated via tweet, could provide another prompt to hold firm on such commitments, while Twitter also notes that tweets about manifestation are at an all-time high, seeing 100% year-over-year growth.

Maybe that’s the key. By sharing your ambitions and goals publicly, that additional accountability may better ensure that you stick to your commitments – or maybe it’s all just mental, and by adding that extra public push, you’ll feel more compelled to keep going, because it’s there for all to see.


In addition to the promotional value of the campaign, Twitter’s also donating nearly $1 million to charities as selected by each of the featured celebrities.

“Some of the charities include Boys and Girls Club, Destination Crenshaw, The 3-D Foundation, and UNICEF Canada.”

It’s an interesting push, which again comes at the right time of year. Getting into a new routine is tough, as is changing careers, publishing your first artwork, speaking in public, etc. Maybe, by seeing how these stars began as regular people, tweeting their dreams like you or me, you’ll be motivated to believe that you too can achieve what you set out to do – and that by posting it publicly, you’re making a commitment, not to the random public, but to yourself, that you will do it this year.

Sure, 2022 hasn’t exactly gotten off to a great start, with a COVID resurgence threatening to derail things once again. But maybe this extra push could be the thing that keeps you focused, like these celebrities, even amid external distractions.




Snapchat Adds New Limits on Adults Seeking to Connect with Minors in the App




After Instagram added similar measures last year, Snapchat is now implementing new restrictions to limit adults from sending messages to users under the age of 18 in the app.

As reported by Axios, Snapchat is changing its “Quick Add” friend suggestion process so that it’s not possible for people to add users aged under 18 “unless there are a certain number of friends in common between the two users”. That won’t stop such connection completely, but it does add another barrier in the process, which could reduce harm.

The move is a logical and welcome step, which will help improve the safety of youngsters in the app, but its impact on Snap itself could be far more significant, given the platform is predominantly used by younger people.

Indeed, Snapchat reported last year that around 20% of its total user base was aged under 18, with the majority of its audience being in the 13-24 year-old age bracket. That means that interaction between these age groups is likely a significant element of the Snap experience, and restricting such could have big impacts on overall usage, even if it does offer greater protection for minors.

Which is why this is a particularly significant commitment from Snap – though it is worth noting that Snapchat won’t necessarily stop older users from connecting with younger ones in the app, it just won’t make it as easy through initial recommendations, via the Quick Add feature.

So it’s not a huge change, as such. But again, given the interplay between these age groups in the app, it is a marker of Snap’s commitment to protection, and to finding new ways to ensure that youngsters are not exposed to potential harm within the app.


Snapchat has faced several issues on this front, with the ephemeral focus of the app providing fertile ground for predators, as it automatically erases any evidence trail in the app. With that in mind, Snap does have a way to go in providing more protection, but it is good to see the company looking at ways to limit such interactions, and combat potentially harmful misuse.


