For the last four years, the 2020 US Election has been projected as the ultimate test of social media networks and their capacity to respond to allegations of mass manipulation, foreign interference, fake news and more.
Facebook and Twitter, in particular, have worked to add a range of new measures and tools to better protect the integrity of the critical vote, including new ad transparency measures, improved detection systems to stamp out ‘coordinated inauthentic behavior’, and updated response processes.
So how have they done? Amidst the ongoing confusion around the results, as they continue to trickle in, how have the major social networks performed with respect to combating mass manipulation and counter-messaging through their apps?
Here’s a look at some of the key points of focus, and action, over the past week.
Facebook has been working to keep users updated on the latest, accurate polling information by adding prominent banners to feeds, in both Facebook and Instagram, which remind users that the votes are still being counted.
Yesterday, Facebook announced that it will soon share a new banner, once a winner of the election is projected by official sources, providing more context on the process.
Once Reuters and a majority of independent decision desks at major media outlets, including ABC, CBS, Fox, NBC, CNN and AP project a winner, we’ll update the notifications running across the top of Facebook and Instagram with the projected winner of the election. pic.twitter.com/5asIe2hZip
— Facebook Newsroom (@fbnewsroom) November 6, 2020
These official updates serve as a counter to the speculation online, and do seem to be having some impact in quelling angst around the result.
Speculation about voter fraud, as provoked by the US President, is also having an impact – one which Facebook is reportedly measuring, in real time, via its own internal systems.
According to BuzzFeed News, Facebook has an internal measurement tool which it uses to predict the likelihood of potential real-world violence, based on rising discussion trends.
As explained by BuzzFeed:
“In a post to a group on Facebook’s internal message board, one employee alerted their colleagues to a nearly 45% increase in the metric, which assesses the potential for danger based on hashtags and search terms, over the last five days. The post notes that trends were previously rising “slowly,” but that recently “some conspiracy theory and general unhappiness posts/hashtags” were gaining popularity.”
Which is interesting, for several reasons.
For one, it shows that Facebook is aware of the influence it can have on real-world action, and that it understands that allowing certain trends to spread and grow can be dangerous. That means that Facebook is constantly measuring how far it can push things, and when it needs to slow trends down in order to avoid them spilling over.
So for all its playing down of its potential to influence major incidents, Facebook knows that it does, and it allows certain controversial discussions to continue, uninhibited, until it deems that it needs to act.
It's concerning that Facebook is putting itself in a position to manage the balance between maximizing engagement and facilitating potential unrest.
The very existence of the metric shows that Facebook could likely do more to stop such movements before they gain traction.
It also shows that Facebook can’t play dumb on other topics, like QAnon or anti-vaxxers, as based on this data, it would be well aware of their potential for harm, well before they become problematic.
Many experts had called on Facebook to do more about QAnon for years, before Facebook finally took action. This metric, if it does exist, suggests that Facebook knew the risk all along, and only chose to act when it felt it was absolutely necessary. Which could be too late for any victims caught in the early crossfire. So who decides when the risk is too high, and should Facebook be in charge of making that call?
This will no doubt come under further investigation in the wake of the election.
Based on these insights, Facebook also took the additional step this week of blocking certain hashtags linked to the rising criticism of the vote-counting process.
Following US President Donald Trump’s unsubstantiated claims that the vote tallying process was ‘illegal’ and ‘rigged’, groups of Trump supporters gathered outside several vote counting centers across the US and began calling for poll workers inside to ‘stop the count’.
Again, with the threat of real-world violence rising, Facebook took action, blocking the hashtags #sharpiegate (in relation to claims that ballots marked with Sharpie pens were invalidated), #stopthesteal and #riggedelection. TikTok also blocked these hashtags, while Twitter has continued to add warnings to all posts it detects which may contain election misinformation.
Indeed, incoming US Representative Marjorie Taylor Greene has seen a raft of her tweets hidden due to violations of Twitter's election misinformation policy.
In addition to this, Facebook has also removed several groups which had been created on the back of questions around the election results, due to concerns that they could be used to organize violent protests in response.
Facebook is also reportedly looking to add more friction to the sharing of political posts, in order to slow the momentum of conspiracy-fueling content.
As per The New York Times:
“Facebook plans to add more “friction” – such as an additional click or two – before people can share posts and other content. The company will also demote content on the News Feed if it contains election-related misinformation, making it less visible, and limit the distribution of election-related Facebook Live streams.”
Again, potentially based on its violence predictor, Facebook is looking to add more measures to reduce the spread of harmful misinformation, and material that could incite further tension within the community.
But a lot of content is still getting through – it’s not difficult to find various videos and posts on the platform which raise questions about the voting process.
It's likely, of course, that Facebook can't stop all of it, which is why adding more friction could be key to at least slowing its spread.
In another significant move, former Trump advisor Steve Bannon has been permanently banned from Twitter after calling for the beheading of two top American public servants, in order to send a warning to others.
Bannon made the suggestion in a video, which has since also been removed from Facebook and YouTube as well.
Bannon was looking to equate modern-day politics to medieval times, saying:
“Second term kicks off with firing [FBI Director Christopher] Wray, firing [Dr. Anthony] Fauci. Now I actually want to go a step farther but I realize the President is a kind-hearted man and a good man. I’d actually like to go back to the old times of Tudor England. I’d put the heads on pikes, right, I’d put them at the two corners of the White House as a warning to federal bureaucrats – ‘you either get with the program or you’re gone – time to stop playing games’.”
Bannon's point was rhetorical, not literal, but the concern is that not all of his listeners and supporters would necessarily derive the same meaning.
Bannon’s statement is another example of the importance of curbing violent rhetoric in public address, regardless of intention, as it can have significant consequences. Now, Bannon faces a ban, on several platforms, and could find it much harder to gain amplification for his future messaging as a result.
While the results of the election are not yet known, and may not be for some time, it does seem, at this stage, that the additional efforts and measures implemented by the major social platforms have been mostly effective in limiting the spread of misinformation, and quelling at least some of the angst around the results.
But we won’t know for a while yet. At present, there seems to be little discussion about foreign manipulation or similar, but it’s possible that it just hasn’t been detected. And while Facebook and Twitter are working quickly to add warning labels and limit distribution now, once the final results are announced, that could prove to be another key test.
There’ll still be a lot of assessment to come, and division in US society is still significant. But there are positive signs that the platforms themselves have done all they can, by most assessments.
Why Your Website Should be Optimized for Mobile [Infographic]
Let’s face it – we’re about to move into 2022, and if you don’t understand the importance of having a mobile-friendly website by now, maybe you never will.
Mobile usage is now a key connector, in an increasing range of ways, and providing an optimized, engaging mobile experience is key to maximizing your business performance.
Yes, it can cost time and money to optimize your mobile presence, but in the vast majority of cases, that investment will pay off. More and more people are seeking information on their mobile devices, and they're more likely to go with the provider whose website is responsive, engaging, and answers their key queries quickly and easily.
There's a reason Google factors mobile performance into its search rankings.
Underlining this, the team from 2Flow have put together this infographic outlining the importance of having a mobile-friendly website, now and into the future.
It could be worth re-checking your site as part of your 2022 preparations.
12 Graphic Design Trends to Watch in 2022 [Infographic]
What will be the big visual trends of 2022?
The team from graphic design service 99designs by Vista have taken a look at the trends they’ve seen throughout the year to make some predictions on coming presentation shifts, which could help you stay ahead of the next wave.
There are some interesting ideas here too, including '90s nostalgia, Y2K and a grunge revival.
Engaging visuals are key to standing out in busy social media feeds, and as such, it’s worth taking note of trend listings like this and considering whether you need to update your brand style and format.
Take a look at the full overview from 99designs in the infographic below.
LinkedIn Shares New Insights into the Gen Z Audience on the Platform [Infographic]
As you’ve no doubt read many times, Gen Z is increasingly becoming a critical market for all businesses, and will soon be the largest generation of consumers around the world.
Which, of course, makes sense – that's essentially how time works. As younger people grow up and become adults, employees and consumers, they become a larger focus for businesses in every way.
That’s not really a revelation, but it is worth noting the interests and consumption behaviors of Gen Z, and the factors that can influence their actions, in regards to marketing outreach.
Which is what this new overview from LinkedIn provides. LinkedIn has analyzed data from the 78 million Gen Z members on its platform to look at how they engage, how they spend, why they purchase from different companies, and more.
You can check out LinkedIn’s full report here, or take a look at the infographic overview below.