New Report Shows That 74% of People Don’t Believe Tech Platforms Will Be Able to Stop Political Manipulation

In what will likely come as little surprise, a new study from Pew Research has shown that 74% of Americans have little to no confidence that tech companies, including Facebook, Twitter and Google, will be able to prevent the misuse of their platforms to influence the outcome of the 2020 presidential election.

[Chart: Pew Research – confidence in tech companies to prevent election misuse]

As you can see here, trust levels are similar across the political divide this time around, a slight shift from the results of the same survey in 2018.

As per Pew:

“Confidence in technology companies to prevent the misuse of their platforms is even lower than it was in the weeks before the 2018 midterm elections, when about two-thirds of adults had little confidence these companies would prevent election influence on their platforms.”

Again, that’s not a major surprise. Another survey published by Pew earlier this month also found that both Republicans and Democrats “register far more distrust than trust of social media sites as sources for political and election news”, with 59% of respondents specifically noting that they do not trust the news content they see on Facebook.

So people already don’t trust the news they’re seeing on social platforms. Given that, it makes sense that they also have little faith that manipulation can be prevented. And while each platform has implemented new measures to better protect users, and weed out “inauthentic” actions, the data would suggest that it’s not enough.

It’s worth noting too that this latest survey was conducted between January 6th and January 19th, 2020, and incorporates responses from 12,638 people.

But while users don’t currently have a lot of faith, they do believe that technology companies should do more to stop the spread of misinformation. 

[Chart: Pew Research – views on tech companies' responsibility to prevent platform misuse]

As you can see here, 78% of respondents agree that tech companies “have a responsibility to prevent the misuse of their platforms to influence the 2020 Presidential election”. 

So, to recap, people don’t trust the news they’re seeing on digital platforms, and have little faith that the situation will improve – even though they feel that the providers have a responsibility to do so.

The bigger question then is “does that matter?”

I don’t mean that in a moralistic sense – of course it matters that people are potentially being manipulated. But I mean in terms of what impacts that will have – will people, for example, stop getting their news content from Facebook and other platforms as a result of this lack of trust, as noted in their responses?

Do you want to know the answer?

Historical evidence suggests that people won’t stop using Facebook as a result of these trends. They probably should, right? If people believe that they may well be manipulated by social media news coverage, maybe it’d be better to get off these apps, and stop sourcing their news from them. But that won’t happen.

Case in point – in yet another Pew Research report, researchers found that in 2016, the year of the last Presidential election, 62% of Americans got at least some of their news content from social media. In 2018, after all the discussion around foreign interference and manipulation, and all the coverage of social media misuse by political activists, guess what happened?

[Chart: Pew Research – share of Americans getting news from social media]

More people now get more of their news exposure through social media. So while it’s one thing for people to say ‘we don’t trust what we see’, it’s another to get them to act on that sentiment, and actively stop using social channels to source news content.

Because that’s hard to do. More than just content, social platforms provide engagement, and the dopamine rush of likes and shares. That can be addictive – so while people don’t necessarily agree with what they’re seeing online, they do like to engage with it, to argue against it, to virtue signal in the comments. If you’re looking for the reason why we’re so divided along political lines these days, look to the engagement people find in disagreement, the allure of the battle that few can resist.

Sure, I might dislike my uncle’s views on climate change, for example, which he regularly shares on Facebook. But you can bet that in quiet moments, I’m going to check in on his posts. Because it’s addictive, the anger and outrage, like poking a wound to feel that little twinge of pain. It solidifies you in your beliefs – and when you finally feel the need to respond and call him out, there’s a rush in that engagement.

It’s not surprising that people distrust Facebook as a news source in this sense. But they’re still going there for the fight. And I would argue that Facebook is okay with that, as opposed to feeling any significant need to play referee and quell disagreement.   

So while this new survey doesn’t reveal any amazing insights, it is interesting to note what it suggests, in terms of broader behavioral trends, and what that means for civic discussion and engagement.

YouTube Tests Improved Comment Removal Notifications, Updated Video Performance and Hashtag Insights

YouTube’s looking to provide more context on content removals and violations, while it’s also experimenting with new analytics on average video performance benchmarks, along with improved hashtag discovery, all of which could impact your planning and process.

First off, on policy violations – YouTube’s looking to provide more context on comment removals via an updated system that will link users through to the exact policy that they’ve violated when a comment is removed.

As explained by YouTube’s Conor Kavanagh:

“Many users have told us that they would like to know if and when their comment has been removed for violating one of our Community Guidelines. Additionally, we want to protect creators from a single user’s ability to negatively impact the community via comments, either on a single channel or multiple channels.”

The new comment removal notification aims to address this, by providing more context as to when a comment has been removed for violating the platform’s Community Guidelines.

Expanding on this, YouTube will also put some users into timeout if they keep breaking the rules. Literally:

“If someone leaves multiple abusive comments, they may receive a temporary timeout which will block the ability to comment for up to 24 hours.”

YouTube says that this will hopefully reduce the number of abusive comments across the platform, while also adding more transparency to the process, in order to help people understand how they’ve broken the rules, which could also help to guide future behavior.

On a similar note, YouTube’s also expanding its test of timestamps in Community Guidelines policy violation notifications for publishers, which provide more specific details on when a violation has occurred in video clips.

Initially only available for violations of its ‘Harmful and Dangerous’ policy, YouTube’s now expanding these notifications to violations related to ‘Child Safety’, ‘Suicide and Self-Harm’, and ‘Violent or Graphic’.

“If you’re in the experiment, you’ll see these timestamps in YouTube Studio as well as over email if we believe a violation has occurred. We hope these timestamps are useful in understanding why your video violated our policies and we hope to expand to more policies over time.”

On another front, YouTube’s also testing a new analytics card in YouTube Studio which will show creators the typical number of views they get on different formats, including VODs, Shorts, and live streams.

[Image: YouTube Studio analytics card showing average video performance by format]

As you can see in this example, the new data card will provide insight into the average number of views you see in each format, based on your last 10 uploads in each, which could provide more comparative context on performance.

Finally, YouTube’s also launched a test that aims to showcase more relevant hashtags on video clips.

“We’re launching an experiment to elevate the hashtags on a video’s watch page that we’ve found viewers are interested in, instead of just the first few added to the video’s description. Hashtags are still chosen by creators themselves – nothing is changing there – the goal of the experiment is simply to drive more engagement with hashtags while connecting viewers with content they will likely enjoy.”

So YouTube will be looking to highlight more relevant hashtags on video clips, as a means to better connect users with more content on the same topic.

That could put more emphasis on hashtag use, so it could be time to upgrade your hashtag research approach in line with the latest trending topics.

All of these updates are fairly minor, but they could impact your YouTube approach, and it’s worth considering the potential effects on your process.
