As it looks towards the future of digital connection, Meta is developing a range of tools to facilitate that shift, with a major focus on VR development, and on making VR a more immersive, realistic, and responsive experience that people can use however they like.
But we’re not there yet. The current VR experience is impressive in terms of functional, untethered headsets, but the actual in-world elements remain a way off from where Meta wants them to be, with the blocky, legless graphic experience serving more as a framework for the next stage.
Which is where Meta is focused, and today, Meta CEO Mark Zuckerberg has offered a glimpse of what’s coming, with an overview of various VR units and experiments that Meta is working on to facilitate the next stage.
As explained by Zuckerberg, the various headsets have been built to focus on specific elements of VR development, including retinal resolution for enhanced visual experiences, focal depth, which enables you to focus on different objects on screen, and high dynamic range for optimal color realism in VR spaces.
The challenge, then, is to incorporate all of these elements into a single VR unit, which Zuckerberg says Meta has done with ‘Holocake II’, a working prototype of its most advanced holographic VR unit.
Which is not available to consumers as yet, and won’t be for some time, if ever – but the various experiments provide some perspective on how Meta’s looking to solve the key challenges of VR, with a view to it becoming the portal to its advanced metaverse experience.
Zuckerberg’s showcase is the latest in Meta’s new push to provide more perspective on the future, and what people will be able to do within its advanced, idealistic VR worlds.
Over the past month, Meta has been releasing new glimpses into the future of VR, which comes amid concerns that Meta’s high spending on development, and slowly shrinking ad revenue, could become problematic for the company at some stage.
As we noted recently, in some ways it seems like Meta may have gone too early with its metaverse push, as we’re still so far away from a functional metaverse experience being a reality. At the same time, given that it’s investing billions into VR development, Meta needed to provide some indication of its eventual product roadmap, in order to appease investors and the broader market.
Whether that comes to fruition is another question. Sure, the metaverse looks amazing, and it very well may be the future of connection for the next generation of consumers. But there are no guarantees, and as Meta tries to chase TikTok, and recoup losses as a result of Apple’s privacy permissions update, there is a risk that, if the metaverse fails to catch on, it could end up hurting the company’s long term growth ambitions.
Of course, no one’s betting against Zuck, and clearly, modern gaming and interaction trends do point to in-world social engagement being the future, in some form. But it’s hard, right now, to bridge the gap between the current VR experience – which is fairly clunky and nauseating for any length of time – and Meta’s imagined immersive worlds, where everything and anything will be possible, all of the time.
It seems likely that’s where things are headed, but there are many stepping stones in between, which is what Meta is now trying to explain to consumers, and the market, as it progresses towards its next stage.
Twitter’s Rules Around Speech are Focused on Avoiding Harm, Not Maintaining Control
An inevitable element of the Elon Musk takeover at Twitter is political division, with Elon essentially using left and right-wing antagonism to stoke debate, and boost engagement in the app.
Musk is a vocal proponent of free speech, and of social platforms in particular allowing users to say whatever they want, within the bounds of local laws. Which makes sense, but at the same time, social platforms, which can effectively provide reach to billions of people, also have some responsibility to manage that capacity, and ensure that it’s not misused to amplify messages that could potentially cause real world harm.
Like, for example, the President’s infamous ‘when the looting starts, the shooting starts’ tweet.
Free speech proponents will say that he’s the President, and he should be allowed to say what he wants as the nation’s democratically elected leader. But at the same time, there’s a very real possibility that the President effectively saying that people are allowed to shoot looters, or that protesters will be shot, could lead to direct, real world harm.
“No it won’t, only snowflakes think that, real people don’t take these things literally.”
But the thing is, some people do, and it’s generally only in retrospect that we can assess such messaging, and trace the angst, confusion, and indeed harm that it can cause.
Social platforms know this. For years, in various nations, social media apps have been used to spread messaging that’s led to violence, civil unrest, and even revolts and riots. In many instances, this has been because social apps have allowed messaging to be spread which is not technically illegal, but is potentially harmful.
There have been ethnic tensions in Myanmar, fueled by Facebook posts, the mobilization of violent groups in Zimbabwe, the targeting of Sikhs in India, Zika chaos in South Africa. All of these have been traced back to social media posts as early, incendiary elements.
And then there was the storming of the Capitol. The series of tweets that ultimately saw Trump banned from Twitter effectively called on his millions of supporters to storm the Capitol building, in a misguided effort to overturn the result of the 2020 election.
Politicians were cornered in their offices, fearing for their lives (especially those that Trump had called out by name, including former VP Mike Pence), while several people were killed in the ensuing confusion, as Trump supporters entered the Capitol building and looted, vandalized and terrorized all in their path.
That action had essentially been endorsed, even goaded, by Trump, with Twitter providing the means to amplify his messaging. Twitter recognized this, and decided that it did not want to play a part in a political coup, so it banned Trump for this and his repeated violations of its rules.
Many disagreed with Twitter’s decision (note: Facebook also banned Trump). But again, this wasn’t the first time that Twitter had seen its platform used to fuel political unrest. It’s just that now, it was in the US, on the biggest stage possible, and in the midst of what many still view as a ‘culture war’ between the woke left, who want to restrict speech in line with their own agenda, and the freedom-loving right, who want to be able to say whatever they like, without fear of consequence.
Musk himself was opposed to Twitter’s decision.
Elon, of course, has his own history of issues based on his tweets, including his infamous ‘taking Tesla private at $420’ comment, which resulted in the SEC effectively forcing him to step down as chairman of Tesla, and his 2018 tweet which accused a cave diver of being a pedophile, despite having no basis at all for such a claim. Musk saw no problem with either, even in retrospect – and he even went as far as hiring a private investigator to dig up dirt on the cave diver to undermine the man’s defamation suit.
Free speech, as Musk sees it, should enable him to say such things, and people should be able to judge for themselves what they mean. Even if it impacts investors or harms an innocent person’s reputation, Musk sees no harm in making such statements.
As such, it’s unsurprising that Musk has now overseen Trump’s account being reinstated, as part of his broader push to overturn Twitter’s years of perceived suppression of free speech.
If enough people sign up, he can reduce the platform’s reliance on ads, make the rules around speech in the app whatever he wants, and score a win for his army of dedicated supporters – but the thing is, the ‘war’ that Elon’s pushing here doesn’t actually exist.
The majority of Twitter users don’t see there being a divide between the ‘elite’ blue checkmark accounts and the ‘regular’ users. The majority don’t have some fundamental opposition to people posting whatever they like, and there’s no broader push from on-high to control what can and cannot be shared, and who or what you can talk about. The only significant action that Twitter’s taken in the past on this front has been specifically to avoid harm, and to limit the potential for dangerous actions that might be inspired by tweets.
Which, in amongst all the ‘free speech’, ‘culture war’ propaganda, is what could eventually end up being overlooked.
Again, it’s only in retrospect that we can clearly see the connections between what’s shared online and real-world harm; it was only after years of watching the anger bubbles swell on Facebook and Twitter that things truly started to boil over. The risk now is that we’re about to see those bubbles grow once again. Despite the lessons of the past, despite seeing what can happen when we allow dangerous movements to build via every borderline tweet and comment, Musk is leading a new charge to fan the flames of division.
Which is really the only thing that journalists and commentators are warning against. It’s not driven by corporate leanings or government control, it’s not some ‘woke agenda’ that’s being infused throughout the mainstream media, in order to stop people from learning ‘the truth’. It’s because we’ve seen what happens when regulations are loosened, and when social platforms with huge reach potential allow the worst elements to propagate. We know what happens when speech that may not be illegal, but can cause harm, is amplified to many, many more people.
The ideal of true free speech is that it allows us to address even the most sensitive of topics, and make progress on the key issues of the day, by hearing all sides, no matter how disagreeable we personally may find them. But we know, from very recent history, that this is not the most likely outcome of loosening the safeguards online.
Which is the fallacy of Musk’s ‘culture wars’ push. On the face of it, there’s a battle to be won, a side to choose, an ‘us’ and a ‘them’ – but in reality, there’s not.
In reality, there’s risk and there’s harm. And while there are extremes of cultural sensitivity, on either side of the debate, the risk is that by getting caught up in a fictional conflict, we end up overlooking, or worse, ignoring the markers of the next violent surge.
That could lead to even more significant harm than we’ve seen thus far, and the only beneficiaries will be those stoking the flames.