

Facebook’s Oversight Board already ‘a bit frustrated’ — and it hasn’t made a call on Trump ban yet


The Facebook Oversight Board (FOB) is already feeling frustrated by the binary choices it’s expected to make as it reviews Facebook’s content moderation decisions, according to one of its members who was giving evidence to a UK House of Lords committee today which is running an enquiry into freedom of expression online.

The FOB is currently considering whether to overturn Facebook’s ban on former US president Donald Trump. The tech giant banned Trump “indefinitely” earlier this year after his supporters stormed the US Capitol.

The chaotic insurrection on January 6 led to a number of deaths and widespread condemnation of how mainstream tech platforms had stood back and allowed Trump to use their tools as megaphones to whip up division and hate rather than enforcing their rules in his case.

Yet, after finally banning Trump, Facebook almost immediately referred the case to its self-appointed and self-styled Oversight Board for review — opening up the prospect that its Trump ban could be reversed in short order via an exceptional review process that Facebook has fashioned, funded and staffed.

Alan Rusbridger, a former editor of the British newspaper The Guardian — and one of 20 FOB members selected as an initial cohort (the Board’s full headcount will be double that) — avoided making a direct reference to the Trump case today, given the review is ongoing, but he implied that the binary choices it has at its disposal at this early stage aren’t as nuanced as he’d like.

“What happens if — without commenting on any high profile current cases — you didn’t want to ban somebody for life but you wanted to have a ‘sin bin’ so that if they misbehaved you could chuck them back off again?” he said, suggesting he’d like to be able to issue a soccer-style “yellow card” instead.

“I think the Board will want to expand in its scope. I think we’re already a bit frustrated by just saying take it down or leave it up,” he went on. “What happens if you want to… make something less viral? What happens if you want to put an interstitial?

“So I think all these things are things that the Board may ask Facebook for in time. But we have to get our feet under the table first — we can do what we want.”

“At some point we’re going to ask to see the algorithm, I feel sure — whatever that means,” Rusbridger also told the committee. “Whether we can understand it when we see it is a different matter.”

To many people, Facebook’s Trump ban is uncontroversial — given the risk of further violence posed by letting Trump continue to use its megaphone to foment insurrection. There are also clear and repeat breaches of Facebook’s community standards if you want to be a stickler for its rules.

Among supporters of the ban is Facebook’s former chief security officer, Alex Stamos, who has since been working on wider trust and safety issues for online platforms via the Stanford Internet Observatory.

Stamos was urging both Twitter and Facebook to cut Trump off before everything kicked off, writing in early January: “There are no legitimate equities left and labeling won’t do it.”

But in the wake of big tech moving almost as a unit to finally put Trump on mute, a number of world leaders and lawmakers were quick to express misgivings at the big tech power flex.

Germany’s chancellor called Twitter’s ban on him “problematic”, saying it raised troubling questions about the power of the platforms to interfere with speech. While other lawmakers in Europe seized on the unilateral action — saying it underlined the need for proper democratic regulation of tech giants.

The sight of the world’s most powerful social media platforms being able to mute a democratically elected president (even one as divisive and unpopular as Trump) made politicians of all stripes feel queasy.

Facebook’s entirely predictable response was, of course, to outsource this two-sided conundrum to the FOB. After all, that was its whole plan for the Board. The Board would be there to deal with the most headachey and controversial content moderation stuff.

And on that level Facebook’s Oversight Board is doing exactly the job Facebook intended for it.

But it’s interesting that this unofficial ‘supreme court’ is already feeling frustrated by the limited binary choices it’s being asked to make. (In the Trump case, either reversing the ban entirely or continuing it indefinitely.)

The FOB’s unofficial message seems to be that the tools are simply far too blunt. Although Facebook has never said it will be bound by any wider policy suggestions the Board might make — only that it will abide by the specific individual review decisions. (Which is why a common critique of the Board is that it’s toothless where it matters.)

How aggressive the Board will be in pushing Facebook to be less frustrating very much remains to be seen.

“None of this is going to be solved quickly,” Rusbridger went on to tell the committee in more general remarks on the challenges of moderating speech in the digital era. Getting to grips with the Internet’s publishing revolution could in fact, he implied, take the work of generations — making the customary reference to the long tail of societal disruption that flowed from Gutenberg inventing the printing press.

If Facebook was hoping the FOB would kick hard (and thorny-in-its-side) questions around content moderation into long and intellectual grasses it’s surely delighted with the level of beard stroking which Rusbridger’s evidence implies is now going on inside the Board. (If, possibly, slightly less enchanted by the prospect of its appointees asking it if they can poke around its algorithmic black boxes.)

Kate Klonick, an assistant professor at St John’s University Law School, was also giving evidence to the committee — having written an article on the inner workings of the FOB, published recently in the New Yorker, after she was given wide-ranging access by Facebook to observe the process of the body being set up.

The Lords committee was keen to learn more on the workings of the FOB and pressed the witnesses several times on the question of the Board’s independence from Facebook.

Rusbridger batted away concerns on that front — saying “we don’t feel we work for Facebook at all”. Though Board members are paid by Facebook via a trust it set up to put the FOB at arm’s length from the corporate mothership. And the committee didn’t shy away from raising the payment point to query how genuinely independent Board members can be.

“I feel highly independent,” Rusbridger said. “I don’t think there’s any obligation at all to be nice to Facebook or to be horrible to Facebook.”

“One of the nice things about this Board is occasionally people will say but if we did that that will scupper Facebook’s economic model in such and such a country. To which we answer well that’s not our problem. Which is a very liberating thing,” he added.

Of course it’s hard to imagine a sitting member of the FOB being able to answer the independence question any other way — unless they were simultaneously resigning their commission (which, to be clear, Rusbridger wasn’t).

He confirmed that Board members can serve three terms of three years apiece — so he could have almost a decade of beard-stroking on Facebook’s behalf ahead of him.

Klonick, meanwhile, emphasized the scale of the challenge it had been for Facebook to try to build from scratch a quasi-independent oversight body and create distance between itself and its claimed watchdog.

“Building an institution to be a watchdog institution — it is incredibly hard to transition to institution-building and to break those bonds [between the Board and Facebook] and set up these new people with frankly this huge set of problems and a new technology and a new back end and a content management system and everything,” she said.

Rusbridger had said the Board went through an extensive training process which involved participation from Facebook representatives during the ‘onboarding’. But he went on to describe a moment when the training had finished and the FOB realized some Facebook reps were still joining their calls — saying that at that point the Board felt empowered to tell Facebook to leave.

“This was exactly the type of moment — having watched this — that I knew had to happen,” added Klonick. “There had to be some type of formal break — and it was told to me that this was a natural moment that they had done their training and this was going to be moment of push back and breaking away from the nest. And this was it.”

However if your measure of independence is not having Facebook literally listening in on the Board’s calls you do have to query how much Kool Aid Facebook may have successfully doled out to its chosen and willing participants over the long and intricate process of programming its own watchdog — including to extra outsiders it allowed in to observe the set up.

The committee was also interested in the fact the FOB has so far mostly ordered Facebook to reinstate content its moderators had previously taken down.

In January, when the Board issued its first decisions, it overturned four out of five Facebook takedowns — including in relation to a number of hate speech cases. The move quickly attracted criticism over the direction of travel. After all, the wider critique of Facebook’s business is that it’s far too reluctant to remove toxic content (it only banned Holocaust denial last year, for example). And lo! Here’s its self-styled ‘Oversight Board’ taking decisions to reverse hate speech takedowns…

The unofficial and oppositional ‘Real Facebook Board’ — which is truly independent and heavily critical of Facebook — pounced and decried the decisions as “shocking”, saying the FOB had “bent over backwards to excuse hate”.

Klonick said the reality is that the FOB is not Facebook’s supreme court — but rather it’s essentially just “a dispute resolution mechanism for users”.

If that assessment is true — and it sounds spot on, so long as you recall the fantastically tiny number of users who get to use it — the amount of PR Facebook has been able to generate off of something that should really just be a standard feature of its platform is truly incredible.

Klonick argued that the Board’s early reversals were the result of it hearing from users objecting to content takedowns — which had made it “sympathetic” to their complaints.

“Absolute frustration at not knowing specifically what rule was broken or how to avoid breaking the rule again or what they did to be able to get there or to be able to tell their side of the story,” she said, listing the kinds of things Board members had told her they were hearing from users who had petitioned for a review of a takedown decision against them.

“I think that what you’re seeing in the Board’s decision is, first and foremost, to try to build some of that back in,” she suggested. “That is the signal that they’re sending back to Facebook — and it’s pretty low hanging fruit to be honest. Which is let people know the exact rule, give them a fact-to-fact type of analysis or application of the rule to the facts and give them that kind of read in to what they’re seeing and people will be happier with what’s going on.

“Or at least just feel a little bit more like there is a process and it’s not just this black box that’s censoring them.”

In his response to the committee’s query, Rusbridger discussed how he approaches review decision-making.

“In most judgements I begin by thinking well why would we restrict freedom of speech in this particular case — and that does get you into interesting questions,” he said, having earlier summed up his school of thought on speech as akin to the ‘fight bad speech with more speech’ Justice Brandeis type view.

“The right not to be offended has been engaged by one of the cases — as opposed to the borderline between being offended and being harmed,” he went on. “That issue has been argued about by political philosophers for a long time and it certainly will never be settled absolutely.

“But if you went along with establishing a right not to be offended that would have huge implications for the ability to discuss almost anything in the end. And yet there have been one or two cases where essentially Facebook, in taking something down, has invoked something like that.”

“Harm as opposed to offence is clearly something you would treat differently,” he added. “And we’re in the fortunate position of being able to hire in experts and seek advisors on the harm here.”

While Rusbridger didn’t sound troubled about the challenges and pitfalls facing the Board when it may have to set the “borderline” between offensive speech and harmful speech itself — being able to (further) outsource expertise presumably helps — he did raise a number of other operational concerns during the session. Including over the lack of technical expertise among current board members (who were purely Facebook’s picks).

Without technical expertise, how can the Board ‘examine the algorithm’, as he suggested it would want to? It won’t be able to understand Facebook’s content distribution machine in any meaningful way.

The Board’s current lack of technical expertise also raises wider questions about its function — and whether its first learned cohort might not be played as useful idiots from Facebook’s self-interested perspective — by helping it gloss over and deflect deeper scrutiny of its algorithmic, money-minting choices.

If you don’t really understand how the Facebook machine functions, technically and economically, how can you conduct any kind of meaningful oversight at all? (Rusbridger evidently gets that — but is also content to wait and see how the process plays out. No doubt the intellectual exercise and insider view is fascinating. “So far I’m finding it highly absorbing,” as he admitted in his evidence opener.)

“People say to me you’re on that Board but it’s well known that the algorithms reward emotional content that polarises communities because that makes it more addictive. Well I don’t know if that’s true or not — and I think as a board we’re going to have to get to grips with that,” he went on to say. “Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying.”

“I do think our responsibility will be to understand what these machines are — the machines that are going in rather than the machines that are moderating,” he added. “What their metrics are.”

Both witnesses raised another concern: that the kind of complex, nuanced moderation decisions the Board is making won’t be able to scale — suggesting they’re too specific to be able to generally inform AI-based moderation. Nor will they necessarily be able to be acted on by the staffed moderation system that Facebook currently operates (which gives its thousands of human moderators a fantastically tiny amount of thinking time per content decision).

Despite that, the issue of Facebook’s vast scale vs the Board’s limited and Facebook-defined function — to fiddle at the margins of its content empire — was one overarching point that hung uneasily over the session, without being properly grappled with.

“I think your question about ‘is this easily communicated’ is a really good one that we’re wrestling with a bit,” Rusbridger said, conceding that he’d had to brain up on a whole bunch of unfamiliar “human rights protocols and norms from around the world” to feel qualified to rise to the demands of the review job.

Scaling that level of training to the tens of thousands of moderators Facebook currently employs to carry out content moderation would of course be eye-wateringly expensive. Nor is it on offer from Facebook. Instead it’s hand-picked a crack team of 40 very expensive and learned experts to tackle an infinitesimally smaller number of content decisions.

“I think it’s important that the decisions we come to are understandable by human moderators,” Rusbridger added. “Ideally they’re understandable by machines as well — and there is a tension there because sometimes you look at the facts of a case and you decide it in a particular way with reference to those three standards [Facebook’s community standard, Facebook’s values and “a human rights filter”]. But in the knowledge that that’s going to be quite a tall order for a machine to understand the nuance between that case and another case.

“But, you know, these are early days.”



We’re Already Living in the Metaverse


“Do a Dance”

The trend started, as so many do, on TikTok. Amazon customers, watching packages arrive through Ring doorbell devices, asked the people making the deliveries to dance for the camera. The workers—drivers for “Earth’s most customer-centric company” and therefore highly vulnerable to customer ratings—complied. The Ring owners posted the videos. “I said bust a dance move for the camera and he did it!” read one caption, as an anonymous laborer shimmied, listlessly. Another customer wrote her request in chalk on the path leading up to her door. DO A DANCE, the ground ordered, accompanied by a happy face and the word SMILE. The driver did as instructed. His command performance received more than 1.3 million likes.

Explore the March 2023 Issue


Watching that video, I did what I often do when taking in the news these days: I stared in disbelief, briefly wondered about the difference between the dystopian and the merely weird, and went about my business. But I kept thinking about those clips, posted by customers who saw themselves as directors and populated by people who, in the course of doing one job, had been stage-managed into another.

Dystopias often share a common feature: Amusement, in their skewed worlds, becomes a means of captivity rather than escape. George Orwell’s 1984 had the telescreen, a Ring-like device that surveilled and broadcast at the same time. The totalitarian regime of Ray Bradbury’s Fahrenheit 451 burned books, yet encouraged the watching of television. Aldous Huxley’s Brave New World described the “feelies”—movies that, embracing the tactile as well as the visual, were “far more real than reality.” In 1992, Neal Stephenson’s sci-fi novel Snow Crash imagined a form of virtual entertainment so immersive that it would allow people, essentially, to live within it. He named it the metaverse.

In the years since, the metaverse has leaped from science fiction and into our lives. Microsoft, Alibaba, and ByteDance, the parent company of TikTok, have all made significant investments in virtual and augmented reality. Their approaches vary, but their goal is the same: to transform entertainment from something we choose, channel by channel or stream by stream or feed by feed, into something we inhabit. In the metaverse, the promise goes, we will finally be able to do what science fiction foretold: live within our illusions.

No company has placed a bigger bet on this future than Mark Zuckerberg’s. In October 2021, he rebranded Facebook as Meta to plant a flag in this notional landscape. For its new logo, the company redesigned the infinity symbol, all twists with no end. The choice was apt: The aspiration of the renamed company is to engineer a kind of endlessness. Why have mere users when you can have residents?

For now, Meta’s promise of immersive entertainment seems as clunky as the goggles required to access all that limitless fun. But the promise is also redundant: Zuckerberg positions himself as an innovator, but the environment that Meta is marketing already exists. Where were those Amazon drivers doing their dancing, if not in the metaverse?

In the future, the writers warned, we will surrender ourselves to our entertainment. We will become so distracted and dazed by our fictions that we’ll lose our sense of what is real. We will make our escapes so comprehensive that we cannot free ourselves from them. The result will be a populace that forgets how to think, how to empathize with one another, even how to govern and be governed.

That future has already arrived. We live our lives, willingly or not, within the metaverse.

A Vaster Wasteland

When scholars warn of the United States becoming a “post-truth” society, they typically focus on the ills that poison our politics: the misinformation, the mistrust, the president who apparently thought he could edit a hurricane with a Sharpie. But the encroachments of a post-truth world are matters of culture as well.

In 1961, Newton Minow, just appointed by President John F. Kennedy to lead the Federal Communications Commission, gave a speech before a convocation of TV-industry leaders. He was blunt. The executives, he said, were filling the air with “a procession of game shows, formula comedies about totally unbelievable families, blood and thunder, mayhem, violence, sadism, murder, Western bad men, Western good men, private eyes, gangsters, more violence, and cartoons.” They were turning TV into “a vast wasteland.”

The epithet stuck. Minow’s speech is best remembered for its criticism of TV, but it was also a prescient acknowledgment of the medium’s power. TV beamed its illusions into home after home, brain after brain. It shaped people’s views of the world even as it distracted them from reality.

Minow made his speech in an era when television was contained to three broadcast channels, to certain hours of the day, and, for that matter, to the living room. Today, of course, screens are everywhere; the entertainment environment is so vast, you can get lost in it. When we finish one series, the streaming platforms humbly suggest what we might like next. When the algorithm gets it right, we binge, disappearing into a fictional world for hours or even days at a time, less couch potato than lotus-eater.

Social media, meanwhile, beckons from the same devices with its own promises of unlimited entertainment. Instagram users peer into the lives of friends and celebrities alike, and post their own touched-up, filtered story for others to consume. TikTok’s endless talent show is so captivating that members of the intelligence community fear China could use the platform to spy on Americans or to disseminate propaganda—feelies as a weapon of war. Even the less photogenic Twitter invites users to enter an alternate realm. As the New York Times columnist Ross Douthat has observed, “It’s a place where people form communities and alliances, nurture friendships and sexual relationships, yell and flirt, cheer and pray.” It’s “a place people don’t just visit but inhabit.”

I’ve inhabited Twitter in that way too—just as I’ve inhabited Instagram and Hulu and Netflix. I don’t want to question the value of entertainment itself—that would be foolish and, in my case, deeply hypocritical. But I do want to question the hold that all of the immersive amusement is gaining over my life, and maybe yours.

Dwell in this environment long enough, and it becomes difficult to process the facts of the world through anything except entertainment. We’ve become so accustomed to its heightened atmosphere that the plain old real version of things starts to seem dull by comparison. A weather app recently sent me a push notification offering to tell me about “interesting storms.” I didn’t know I needed my storms to be interesting. Or consider an email I received from TurboTax. It informed me, cheerily, that “we’ve pulled together this year’s best tax moments and created your own personalized tax story.” Here was the entertainment imperative at its most absurd: Even my Form 1040 comes with a highlight reel.

Such examples may seem trivial, harmless—brands being brands. But each invitation to be entertained reinforces an impulse: to seek diversion whenever possible, to avoid tedium at all costs, to privilege the dramatized version of events over the actual one. To live in the metaverse is to expect that life should play out as it does on our screens. And the stakes are anything but trivial. In the metaverse, it is not shocking but entirely fitting that a game-show host and Twitter personality would become president of the United States.

In the years since Minow delivered his speech, the language of television has come to saturate the way Americans talk about the world around us. People who are deluded, we say, have “lost the plot”; people who have become pariahs have been “canceled.” In earlier ages, people attributed their circumstances to the will of gods and the whims of fate; we attribute ours to the artistic choices of “the writers” and lament that we may be living through America’s final season. These are jokes, of course, but they have an uneasy edge. They suggest a creeping realization that we truly have come to inhabit our entertainment.


Last May, 19 children and two of their teachers were murdered at Robb Elementary School in Uvalde, Texas. The next day, Quinta Brunson, the creator and star of the ABC sitcom Abbott Elementary, shared a message—one of many—that she’d received in response to the massacre: a request from a fan that she write a school-shooting story line into her comedy. “People are that deeply removed from demanding more from the politicians they’ve elected and are instead demanding ‘entertainment,’ ” Brunson wrote on Twitter. “I can’t ask ‘are yall ok’ anymore because the answer is ‘no.’ ”

Brunson’s frustration was understandable. Yet it’s also hard to blame the fans who, as they grieved a real shooting, sought comfort in a fictional one. They have been conditioned to expect that the news will instantaneously become entertainment.

Almost as soon as a big event happens, a production company repurposes it as a pseudo-fiction. In 2019, two Boeing 737 Max airplanes crashed, killing 346 people; by early 2020, Variety was announcing, “Boeing 737 Max Disaster Series in Works.” In July 2020, The Hollywood Reporter shared that Adam McKay’s next project at HBO would “take on the timeliest of subjects: the race to develop a vaccine for COVID-19.” In January 2021, Reddit users collaborated to inflate the stock of the video-game store GameStop; a week later, MGM announced that it had landed the film rights to a book proposal—a book proposal, not an actual book—about the story. In the metaverse, history repeats itself, first as tragedy, then as wry dramedy on HBO Max.

Producers have been ripping plots from the headlines for as long as there have been headlines to rip them from. The difference today is the speed and the scale of the conversion. There are commercial reasons for this frenzy of optioning. In general, plundering reality is much easier and cheaper than inventing something new. The streaming platforms wouldn’t keep making the series, however, if viewers didn’t watch them. And watching them can be disorienting.

The tagline at the start of every episode of Inventing Anna, the 2022 Netflix series, neatly sums up the approach of the new “ripped from the headlines” genre: “This whole story is completely true. Except for all of the parts that are totally made up.” Inventing Anna is the lavishly fictionalized story of Anna Sorokin (more commonly known by her alias, Anna Delvey), a Russian woman who pretended to be a German heiress to gain the trust and then the money of rich people in New York City. It is a tale about lies so brazen that they revealed some well-disguised truths—about the magical thinking of high finance, about America’s enduring susceptibility to the con artist.

Inventing Anna is based on a 2018 New York magazine story by the journalist Jessica Pressler. The show weaves the article—lyrically rendered but truthfully told—into its own version of the story. Inventing Anna is by turns flashy, cheeky, and insightful. It operates in the realm that the postmodernists call hyperreality: Its colors are saturated; its pace is frenetic; it plays, sometimes, less as a drama than as a music video. Most of all, the show sells the idea that an unstable relationship between fact and fiction is its own kind of fun.

In that, Inventing Anna is typical. WeCrashed, Super Pumped: The Battle for Uber, The Dropout, and many other series repurpose high-profile news events as glossy amusements. Gaslit, Winning Time, A Friend of the Family, Pam & Tommy, and American Crime Story do similar work with history so recent, it can barely be considered history at all. Many of them are self-consciously products of “prestige TV,” and many of them are quite good: smartly written, slickly produced, and performed by talented actors.

The shows also deliver a voyeuristic thrill that can be difficult for even the most thoroughly reported and artfully told journalism to rival. The promise of the metaverse has always been the ability to inhabit realms that would otherwise be closed to us: In a recent ad, Meta’s Quest 2 headset transports one young woman into an NFL scrum and another into the Ironman suit. A series like The Crown provides a similar experience. We sit with the Royal Family in their bedrooms. We see them fighting. We see them weeping. This is a biopic about lives still being lived.

Of course, such voyeurism is possible only because the shows are not bound by the rules of nonfiction. Like so many entries in the genre, The Crown combines finicky photorealism and breezy artistic license. The series offers a stitch-by-stitch re-creation of the “revenge dress” that Princess Diana debuted after Prince Charles’s infidelity came to light; it also fabricates dialogue, events, and entire characters. In 2020, the United Kingdom’s culture secretary asked Netflix to add a disclaimer to the show making clear that it is, fundamentally, a work of fiction. Netflix declined, saying it was confident that viewers knew the show was fiction. Yet its executives surely understand that the series is appealing precisely because it presents its fictions with the swagger of settled fact.

One night this past fall, my partner and I were watching an episode of Gaslit (about the life of the Watergate celebrity Martha Mitchell). We were both side-screening with our phones, and at some point we realized we were doing the exact same thing: combing Wikipedia to find out whether the scene we’d just watched had actually happened. In this, we were missing the point. When you’re watching a show like Gaslit or The Crown, you are supposed to accept that the story is true in a broad sense, not a specific one. You are not meant to question the difference between nonfiction and a story that’s been “lightly” fictionalized. And you are definitely not supposed to be on Wikipedia, trying to cross-reference the real history against the one you’re seeing on Starz.

Illustration: infinite stacked smartphone screens receding into the distance, the center phone with an abstract human face

Here my TV-loving self interrupts, indignantly and a little defensively: It’s just TV. It’s all in good fun. And that’s true. I enjoyed Gaslit. And when Super Pumped cast Uma Thurman as Arianna Huffington and gave her one apparent note—more camp—I had no choice but to watch. Taken together, though, such series start to destabilize our sense of what is true and what has been invented—or elided—to tell a good story.

Consider the Theranos scandal. Elizabeth Holmes’s company was covered meticulously in real time by journalists, most prominently at The Wall Street Journal, and the full arc of her deceptions was described masterfully by the Journal’s John Carreyrou in his book, Bad Blood. But the fraud has proved so irresistible that it is now also the subject of a documentary, a true-crime podcast called The Dropout, a Hulu drama also called The Dropout, and, soon, an Adam McKay feature film, adapted from Carreyrou’s Bad Blood, which will also be called Bad Blood. The consumer of all this news and entertainment can be forgiven for mixing up where she got her facts — and whether they’re facts at all.

In a surreal twist, the fictionalization of the Theranos debacle has now become part of the nonfiction story line. Last March, the fraud trial of the former Theranos COO Sunny Balwani was complicated when two of the potential jurors who had been selected to hear the case were dismissed; they had seen episodes of The Dropout and might have been prejudiced by its depiction of the events at issue in the trial.

In the 1990s, media critics worried—rightly—that the news was becoming frivolous, whether in the form of histrionic shoutfests like Crossfire, lurid news magazines like Dateline, or the overheated coverage of the O. J. Simpson trial. Then came a boom in entertainment that pretended to be news and to many viewers was indistinguishable from it: Jon Stewart, Stephen Colbert, Samantha Bee. Today, the critiques that the news channels were obsessed with ratings, or that too many people had abandoned the 6 o’clock news for The Daily Show, seem quaint. There is no longer any distinction: The news has become entertainment, and entertainment has become the news.

In January 2021, Britain’s Sky TV announced that Kenneth Branagh would be starring as Boris Johnson in a miniseries about the coronavirus pandemic. Asked about the role in September 2022—asked, in particular, about the logic of airing a history of an event that was still unfolding—Branagh demurred. “I think these events are unusual,” he said, “and part of what we must do is acknowledge them.”

Neither a pandemic that has now killed more than 200,000 Britons nor a leader who bungled his way through the disaster was in danger of going unacknowledged by the BBC or The Times of London. Yet Branagh’s comment was telling. The rise of these hyperreal TV shows coincides with the decline of the institutions that report on the world as it is. The semi-fictions stake their claims while journalism flails. We have gradually accommodated ourselves to the idea that if an event doesn’t become a limited series or a movie, it hasn’t happened. When news breaks, we shrug. We’ll wait for the miniseries. And take for granted that its version of the story will be true—except for the parts that are totally made up.

The Main Character

By the mid-20th century, the historian Warren Susman argued, a great shift was taking place. American values had traditionally emphasized a collection of qualities we might shorthand as “character”: honesty, diligence, an abiding sense of duty. The rise of mass media changed those terms, Susman wrote. In the media-savvy and consumption-oriented society that Americans were building, people came to value—and therefore demand—what Susman called “personality”: charm, likability, the talent to entertain. “The social role demanded of all in the new Culture of Personality was that of a performer,” Susman wrote. “Every American was to become a performing self.”

That demand remains. Now, though, the value is not merely interpersonal charm, but the ability to broadcast it to mass audiences. Social media has truly made each of us a performing self. “All the world’s a stage” was once a metaphor; today, it’s a dull description of life in the metaverse. As the journalist Neal Gabler foresaw in his book Life: The Movie, performance, as a language but also as a value, bleeds into nearly every facet of experience.

A recent H&M ad campaign promised that the brand would make sure that “you are the main character of each day.” In September, my partner booked a hotel room for a weekend trip; the confirmation email vowed that the stay would allow him to “craft your next story.” My iPhone is now in the habit of transforming photographs and videos from my camera roll into mini-movies. The bespoke videos come with a soundtrack selected by the operating system. They also come unprompted: I was recently served up a slideshow, set to strings that Ken Burns might appreciate, of pictures I’d taken of my dog. The aim, of course, is commercial. What better way to encourage customers to be loyal than to tell them their life should be a movie? A life so full that it gets optioned: the new American dream.

Or the new American nightmare. On Twitter, “the main character” is shorthand for the person who will be a given day’s subject of communal scorn. The strangers who pile on, often with vehemence, may be reacting to the target’s legitimate failings or merely to perceived ones. Regardless, they may be engaging in what the psychologist John Suler has described as the online disinhibition effect: the tendency for people in digital spaces to act in ways they never would offline. The disinhibition might originate in an assumption that the digital world differs from the “real” world, or in a sense that online interactions amount to a low-stakes game. But it can lead people to treat the humans on the other side of the screen as not human—not real—at all.

Last July, while Lilly Simon was commuting on the subway in New York, a stranger began filming her without her knowledge or consent. This was when monkeypox, recently declared a global health emergency, was spreading in the city. Simon has a genetic condition that causes tumors to grow at her nerve endings; some of the growths are visible on her skin. The tumors are usually benign, but can lead to painful complications. They are not contagious. The person recording her knew none of this. Instead, the videographer zoomed in on Simon’s legs and arms, analyzing her, and posted the results of their “investigation” on TikTok. Simon, after learning of the video’s existence, posted a reply. “I will not let any of y’all reverse any years of therapy and healing that I had to endure to deal with the condition,” she said in it. In short order, her response went viral, the original video was taken down, and Simon gave an interview about the experience to The New York Times.

A happy ending, of sorts, to an otherwise grim tale of what life can be like in the metaverse: A person, simply trying to get from one place to another, is transformed into a reluctant star of a movie she didn’t know she was in. The dynamics are simple, and stark. The people on our screens look like characters, so we begin to treat them like characters. And characters are, ultimately, expendable; their purpose is to serve the story. When their service is no longer required, they can be written off the show.

Insurrection for the ’Gram

Disinhibition may begin in the online world, but it doesn’t stay there. The dystopian aspects of the metaverse take on a political dimension, though not necessarily in the way that the 20th-century visionaries anticipated. Those writers imagined a populace pacified by empty entertainments. They didn’t foresee that the telescreen might instead incite them to political violence.

My colleague Tom Nichols has argued that one of the primary motivations driving the January 6 insurrectionists was boredom—and a sense that they had a right to be the heroes of their own American Revolution. Certainly, to watch the attack live on TV, as I did that day, was to be struck by how many of the people ransacking the Capitol were having a grand old time. They posed for (incriminating) photos. They livestreamed their vandalism for their followers. They were doing insurrection for the ’gram. Indeed, a striking number of the participants performed their sedition dressed as superheroes. Several tied Trump 2020 flags around their necks, the wrinkled nylon streaking behind them as they plundered.

Some insurrectionists dressed as heroes from another fictional universe: not Marvel or DC, but QAnon. The origins of the QAnon conspiracy theory are convoluted, and its ongoing appeal has a range of explanations. But it has thrived, at least in part, because it is so well suited to the metaverse. Its adherents have filter-bubbled and siloed and red-pilled themselves so completely that they live in a universe of fiction; they trust, above all, in the anonymous showrunner who is writing and directing and producing reality, every once in a while dropping tantalizing clues about what might happen in the next episode. The hero of the show is Donald Trump, the man who has mastered, like perhaps no one else in American history, TV’s powers of manipulation. Its villains are the members of the “deep state,” thousands of demi-humans united in their pedophiliac designs on America’s children.

The efforts to hold the instigators of the insurrection to account have likewise unfolded as entertainment. “Opinion: January 6 Hearings Could Be a Real-Life Summer Blockbuster,” read a CNN headline in May—the unstated corollary being that if the hearings failed at the box office, they would fail at their purpose. (“Lol no one is watching this,” the account of the Republican members of the House Judiciary Committee tweeted as the hearings were airing, attempting to suggest such a failure.)

The hearings did not fail, though; on the contrary, the first one was watched by some 20 million people—ratings similar to those earned by a Sunday Night Football broadcast. And the success came in part because the January 6 committee so ably turned its findings into compelling TV. The committee summoned well-spoken and, in many cases, telegenic witnesses. It made a point of transforming that day’s chaos into a comprehensive plot. Its production was so successful that The New York Times included the hearings on its list of 2022’s best TV shows.

The committee understood that for people to care about January 6—for people to take an interest in the greatest coup attempt in American history—the violence and treason had to be translated into that universal American language: a good show.

Illustration: stacks of glowing smartphones and tablets that each display a different part of an abstract human figure, hand, and eye

In September, Florida Governor Ron DeSantis arranged for a group of people seeking asylum in the U.S. to board airplanes. They were told that housing, financial assistance, and employment would be waiting for them when they landed. Instead, the planes flew to Martha’s Vineyard, where there was nothing waiting for the confused travelers except a group of equally confused locals. But those locals gave the travelers food and shelter. Immigration lawyers came to help. Journalists obtained copies of the brochures that had been handed out to the asylum seekers, and informed the public of the series of false promises through which human beings had been turned into props.

The send-them-to-the-Vineyard plan had been fueled by TV. After Texas Governor Greg Abbott began busing migrants to places where they would supposedly become a burden to Democrats, “shipping migrants” became a regular topic of conversation on the morning show Fox & Friends, and Fox News in general. The hosts filled their airtime joking about the conveyances that would be necessary to ship people to the Vineyard. The idea was repeated so steadily that, as often happens, the joke became the plan, and then the plan became the reality, and then the asylum seekers, desperate and misled, were sent like Amazon Prime packages to a place selected because Barack Obama vacations there.

And the producers of the whole thing, rather than questioning the premise of their show after it did little besides expose a community rallying to help people in need, instead promised more performances. Senator Ted Cruz—whose father, as it happens, sought asylum in the U.S.—announced that another group of asylum seekers would be shipped to Joe Biden’s vacation spot. (“Rehoboth Beach, Delaware next,” he said.) Abbott continued busing migrants out of Texas—this time the drop-off location was in front of Vice President Kamala Harris’s Washington, D.C., residence. The National Republican Senatorial Committee, not to be outdone, brought audience participation to the show: A fundraising email asked recipients where Republican governors should “ship” migrants next.

“The propagandist’s purpose,” Aldous Huxley observed, “is to make one set of people forget that certain other sets of people are human.” Donald Trump had a habit of demeaning his opponents, en masse, as “vicious, horrible” people. The images have only grown more hallucinatory. In September, Representative Marjorie Taylor Greene told a gathering of young people in Texas that her Democratic colleagues are “kind of night creatures, like witches and vampires and ghouls.”

The rhetoric may seem absurd, but it serves a purpose. This is language designed to dehumanize. And it is language that has gained traction. Last year, the Public Religion Research Institute published an analysis of QAnon’s hold over Americans. The group asked nearly 20,000 survey respondents whether they agreed with the QAnon belief that “the government, media, and financial worlds are controlled by Satan-worshiping pedophiles.” Nearly a sixth—16 percent—said they did.

“I’m a Real Person”

In his 1985 book, Amusing Ourselves to Death, the critic Neil Postman described a nation that was losing itself to entertainment. What Newton Minow had called “a vast wasteland” in 1961 had, by the Reagan era, led to what Postman diagnosed as a “vast descent into triviality.” Postman saw a public that confused authority with celebrity, assessing politicians, religious leaders, and educators according not to their wisdom, but to their ability to entertain. He feared that the confusion would continue. He worried that the distinction that informed all others—fact or fiction—would be obliterated in the haze.

In late 2022, The New York Times revealed that George Santos, a newly elected Republican representative from Long Island, had invented or wildly inflated not just his résumé (a familiar political sin) but his entire biography. Santos had, in essence, run as a fictional character and won. His lies and obfuscations—about his education, his employment history, his charitable work, even his religion—were shocking in their brazenness. They were also met, by many, with a collective shrug. “Everyone fabricates their résumé,” one of his constituents told the Times. Another vowed her continued support: “He was never untruthful with me,” she said. Their reactions are reminiscent of the Obama voter who explained to Politico, in 2016, why he would be switching his allegiances: “At least Trump is fun to watch.”

These are Postman’s fears in action. They are also Hannah Arendt’s. Studying societies held in the sway of totalitarian dictators—the very real dystopias of the mid-20th century—Arendt concluded that the ideal subjects of such rule are not the committed believers in the cause. They are instead the people who come to believe in everything and nothing at all: people for whom the distinction between fact and fiction no longer exists.

A republic requires citizens; entertainment requires only an audience. In 2020, a former health official worried aloud that “viewers will get tired of another season of coronavirus.” The concern, it turned out, was warranted: Americans have struggled to make sense of a pandemic that refuses to conform to a tidy narrative structure—digestible plots, cathartic conclusions.

Life in the metaverse brings an aching contradiction: We have never been able to share so much of ourselves. And, as study after study has shown, we have never felt more alone. Fictions, at their best, expand our ability to understand the world through other people’s eyes. But fiction can flatten, too. Recall how many Americans, in the grim depths of the pandemic, refused to understand the wearing of masks as anything but “virtue signaling”—the performance of a political view, rather than a genuine public-health measure. Note how many pundits have dismissed well-documented tragedies—children massacred at school, families separated by a callous state—as the work of “crisis actors.” In a functioning society, “I’m a real person” goes without saying. In ours, it is a desperate plea.

This could be how we lose the plot. This could be the somber finale of America: The Limited Series. Or perhaps it’s not too late for us to do what the denizens of the fictional dystopias could not: look up from the screens, seeing the world as it is and one another as we are. Be transported by our entertainment but not bound by it.

“Are you not entertained?” Maximus, the hero of Gladiator, yells to the Roman throngs who treat his pain as their show. We might see something of ourselves in both the captive warrior and the crowd. We might feel his righteous fury. We might recognize their fun. We have never been more entertained. That is our luxury—and our burden.

This article appears in the March 2023 print edition with the headline “We’re Already in the Metaverse.”




This Bangladeshi-Pakistani Couple Named Their Son ‘India’, Here’s Why



Last Updated: January 30, 2023, 09:46 IST

Bangladeshi-Pakistani couple named their kid ‘India’. (Credits: Facebook/Omar Esa)

This Bangladeshi-Pakistani couple named their kid ‘India’ and the reason will leave you in splits.

A Bangladeshi-origin and Pakistani-origin couple have named their kid India and their rationale behind the decision is leaving the Internet in splits. Omar Esa, a popular nasheed singer, took to Facebook to share a hilarious photo of him and his wife with their kid lying between them. Yep, just like the three neighbouring countries lie next to each other on the map. Originally called Ibrahim, the kid has been given a ‘new name’ by his parents: India.

“A WARNING to all new parents and condolences to all the parents who did what we did, so me and my begum made the silly mistake to let our firstborn Ibrahim sleep in our bed from when he was a little baby, you know new parents and that, we were so protective over him,” Esa wrote on Facebook.

“Well now this little guy is used to this sleeping arrangement and always ends up in the middle of us when we are sleeping even though he has his own bedroom. So as I’m Pakistani origin and my wife is Bangladeshi origin, we have given Ibrahim a new name, we call him India now as he’s right in the middle of his Pakistani and Bangladeshi parents, India causing mad issues in my life,” he added.

“Photographer may be America,” quipped one Facebook user. “This is crazy because my sister took this photo and she lives in America and is an American citizen,” Esa replied. In the comments, while many shared stories of their own problems with their kids taking up their beds, matters also got a bit heated with citizens from the three neighbours exchanging barbs. Well, here’s hoping the entire thing was a joke. For Ibrahim’s sake more than anyone else’s.

‘India’ is not all that weird of a name for a kid if you consider some of the other names we’ve heard in the past. For instance, a restaurant owner jokingly told everyone that her granddaughter was named ‘Pakora’ after the most popular dish at the restaurant. After her post started going viral, Hilary Braniff said that she made the whole thing up to bring some cheer in the industry amid rising costs and increasing energy bills, reported BelfastLive. The real baby, actually born on August 24 last year, is Braniff’s first granddaughter. She is, thankfully, named Grace.





Awkward police mishap on popular Aussie beach: ‘Not a great idea’



Two police officers were left with bruised egos on Saturday after their car became bogged at a popular beach in Western Australia.

The male officers were new to the Esperance area, Western Australia Police revealed in a post on Facebook, and their interesting attempt to meet the locals left the masses amused.

Pictures of the incident were shared online and show the Toyota Camry — a front-wheel drive — parked up nice and close to the water’s edge at the popular Wharton Beach. The officers are seen trying to free the wheels of their car from the sand but eventually turned to locals for help.

Police officers became bogged in the sand at Wharton Beach in WA. Source: Facebook/WA Police Force

The beach is popular for surfing and allows vehicle access, but only for 4WDs, as smaller cars may struggle, so it’s no surprise the Toyota Camry got stuck. Another beachgoer eventually came to the rescue; a video shows a white 4WD pulling the police vehicle from the sand with a rope.

The unusual scenes caught the attention of dozens of beachgoers who watched as the car was being towed. Photos were also shared on a popular Facebook page ‘Bogged’ where it racked up thousands of likes and comments from amused social media users.

WA Police respond to amusing beach blunder

Esperance Police has increased patrols over the holiday period in response to an increased number of “hoon drivers” on popular beaches in the area. It’s believed these officers were carrying out their patrol when they became stuck. Goldfields-Esperance District WA Police shared details of the ordeal on Facebook and thanked those who rushed to the officers’ aid.

“Esperance Police would like to welcome SC Neville to the team who started today and found a new way to engage with our wonderful local community,” the post said, alongside photos of the incident.

No one was injured and no damage was done to the car, police said. The egos of the officers involved, however, “are in for repair,” the police force joked after the beach blunder was made public.


Locals on the beach rushed to the aid of the officers. Source: Facebook/WA Police Force

“Imagine driving a Camry on the beach,” one person mocked on Facebook, with others agreeing it’s “not a great idea” to be driving a two-wheel drive on the sand.

“When you’re asked to ‘patrol the beaches’ and you take it literally….,” another joked.

“Hahaha they’re only human too… good to see the locals and tourists helping where needed! Well done,” one other said.


A beachgoer in the white 4WD helped tow the police car off the sand. Source: Facebook/Bogged

Do you have a story tip? Email: [email protected]

You can also follow us on Facebook, Instagram, TikTok and Twitter and download the Yahoo News app from the App Store or Google Play.

