As TikTok continues to grow, Facebook continues to look for new ways to limit its expansion, and stop users from migrating away from its own apps.
A key weapon for Facebook on this front is scale. It may not be able to compete with TikTok in terms of fun, nor with the personalization of TikTok’s algorithm. But Facebook can offer creators more reach, by showcasing their Reels clips to Instagram’s billion-plus users.
And it can actually provide significantly more reach potential than just that.
Back in December, Facebook began testing a new option which would enable Reels creators to also share their Reels clips into the Facebook News Feed and to Facebook Watch, facilitating a potentially huge expansion of Reels exposure.
And now, it seems that Facebook’s advancing on this front, with a new, more polished prompt spotted in testing.
As you can see in this new, full-screen explainer, posted by user @VarunBanur (and shared by social media expert Matt Navarra), Facebook is now prompting some Reels creators to share their Reels clips to Facebook, where they can be recommended to any of Facebook’s 2.8 billion users, greatly expanding reach.
That could prove to be a powerful lure for Reels creators – though interestingly, this test currently appears to be limited to India, where Instagram initially launched Reels as a replacement for the banned TikTok.
TikTok reportedly had around 200 million Indian users at the time of the ban, and Instagram swooped in to launch Instagram Reels within days of TikTok’s removal, seeking to capitalize on this newly orphaned audience. Instagram says that Reels has seen steady growth in the Indian market, and adding an option to share Reels clips to Facebook would expand its audience potential further, likely putting its reach on par with YouTube, which is also trying to gather up former TikTok users in India via its own ‘Shorts’ option.
Given this, the Facebook integration may not necessarily be about beating out TikTok as such, given that TikTok isn’t even present in the test region. But it would still provide Facebook with another powerful lure for Reels, which could help negate competitors, while also, potentially, boosting the revenue potential of its short video option.
Right now, the option is not functional, so it’s not at the full live testing phase as yet. But it’s an interesting consideration – by providing more audience potential, Facebook could look to win over TikTok creators by giving them more opportunity to build an audience, and generate income from their efforts.
TikTok is yet to establish a solid structure for creators to make money from the platform – but with projections that the app will reach a billion users in 2021, it’s evolving fast, and it likely won’t take long for TikTok to establish a more sustainable revenue-generation process.
Which probably means that Facebook needs to work quickly to win more creators over. There’s no word on if or when this new option might go live, or be released to more regions. But it once again underlines the rising influence of TikTok, and how it’s spooked The Social Network with its rapid growth.
Twitter’s Rules Around Speech are Focused on Avoiding Harm, Not Maintaining Control
An inevitable element of the Elon Musk takeover at Twitter is political division, with Elon essentially using left and right-wing antagonism to stoke debate, and boost engagement in the app.
Musk is a vocal proponent of free speech, and of social platforms in particular allowing users to say whatever they want, within the bounds of local laws. Which makes sense, but at the same time, social platforms, which can effectively provide reach to billions of people, also have some responsibility to manage that capacity, and ensure that it’s not misused to amplify messages that could potentially cause real world harm.
Like, for example, when the President tweets this:
Free speech proponents will say that he’s the President, and he should be allowed to say what he wants as the nation’s democratically elected leader. But at the same time, there’s a very real possibility that the President effectively saying that people are allowed to shoot looters, or that protesters will be shot, could lead to direct, real world harm.
“No it won’t, only snowflakes think that, real people don’t take these things literally.”
But the thing is, some people do, and it’s generally only in retrospect that we can assess the angst, confusion, and indeed harm that such messaging can cause.
Social platforms know this. For years, in various nations, social media apps have been used to spread messaging that’s led to violence, civil unrest, and even revolts and riots. In many instances, this has been because social apps have allowed the spread of messaging that’s not technically illegal, but is potentially harmful.
There have been ethnic tensions in Myanmar, fueled by Facebook posts, the mobilization of violent groups in Zimbabwe, the targeting of Sikhs in India, Zika chaos in South Africa. All of these have been traced back to social media posts as early, incendiary elements.
And then there was this:
The series of tweets that ultimately saw Trump banned from Twitter effectively called on his millions of supporters to storm the Capitol building, in a misguided effort to overturn the result of the 2020 election.
Politicians were cornered in their offices, fearing for their lives (especially those that Trump had called out by name, including former VP Mike Pence), while several people were killed in the ensuing confusion, as Trump supporters entered the Capitol building and looted, vandalized and terrorized all in their path.
That action had essentially been endorsed, even goaded, by Trump, with Twitter providing the means to amplify his messaging. Twitter recognized this, and decided that it did not want to play a part in a political coup, so it banned Trump for this and his repeated violations of its rules.
Many disagreed with Twitter’s decision (note: Facebook also banned Trump). But again, this wasn’t the first time that Twitter had seen its platform used to fuel political unrest. It’s just that now, it was in the US, on the biggest stage possible, and in the midst of what many still view as a ‘culture war’ between the woke left, who want to restrict speech in line with their own agenda, and the freedom-loving right, who want to be able to say whatever they like, without fear of consequence.
Musk himself was opposed to Twitter’s decision.
Elon, of course, has his own history of issues based on his tweets, including his infamous ‘taking Tesla private at $420’ comment, which resulted in the SEC effectively forcing him to step down as chairman of Tesla, and his 2018 tweet which accused a cave diver of being a pedophile, despite having no basis at all to make such a claim. Musk saw no problem with either, even in retrospect – and he even went as far as hiring a private investigator to dig up dirt on the cave diver to dilute the man’s defamation suit.
Free speech, as Musk sees it, should enable him to say such things, and people should be able to judge for themselves what they mean. Even if it impacts investors or harms an innocent person’s reputation, Musk sees no harm in making such statements.
As such, it’s unsurprising that Musk has now overseen Trump’s account being reinstated, as part of his broader push to overturn Twitter’s years of perceived suppression of free speech.
If enough people sign up, he can reduce the platform’s reliance on ads, and make the rules around speech in the app whatever he wants, and get a win for his army of dedicated supporters – but the thing is, the ‘war’ that Elon’s pushing here doesn’t actually exist.
The majority of Twitter users don’t see there being a divide between the ‘elite’ blue checkmark accounts and the ‘regular’ users. The majority don’t have some fundamental opposition to people posting whatever they like, and there’s no broader push from on-high to control what can and cannot be shared, and who or what you can talk about. The only significant action that Twitter’s taken in the past on this front has been specifically to avoid harm, and to limit the potential for dangerous actions that might be inspired by tweets.
Which, in amongst all the ‘free speech’, ‘culture war’ propaganda, is what could eventually end up being overlooked.
Again, it’s only in retrospect that we can clearly see the connections between what’s shared online and real world harm; it was only after years of seeing the anger bubbles swell on Facebook and Twitter that things truly started to boil over. The risk now is that we’re about to see these bubbles grow once again. Despite the lessons of the past, and despite seeing what can happen when we allow dangerous movements to grow via every borderline tweet and comment, Musk is leading a new charge to fan the flames of division.
Which is really the only thing that journalists and commentators are warning against. It’s not driven by corporate leanings or government control, it’s not some ‘woke agenda’ that’s being infused throughout the mainstream media, in order to stop people from learning ‘the truth’. It’s because we’ve seen what happens when regulations are loosened, and when social platforms with huge reach potential allow the worst elements to propagate. We know what happens when speech that may not be illegal, but can cause harm, is amplified to many, many more people.
The ideal of true free speech is that it allows us to address even the most sensitive of topics, and make progress on the key issues of the day, by hearing all sides, no matter how disagreeable we personally may find them. But we know, from very recent history, that this is not the most likely outcome of loosening the safeguards online.
Which is the fallacy of Musk’s ‘culture wars’ push. On the face of it, there’s a battle to be won, there’s a side to choose, there’s an ‘us’ and a ‘them’ – but in reality, there’s not.
In reality, there’s risk and there’s harm. And while there are extremes of cultural sensitivity, on either side of the debate, the risk is that by getting caught up in a fictional conflict, we end up overlooking, or worse, ignoring the markers of the next violent surge.
That could lead to even more significant harm than we’ve seen thus far, and the only beneficiaries will be those stoking the flames.