Facebook’s Oversight Board already ‘a bit frustrated’ — and it hasn’t made a call on Trump ban yet

The Facebook Oversight Board (FOB) is already feeling frustrated by the binary choices it’s expected to make as it reviews Facebook’s content moderation decisions, according to one of its members, who gave evidence today to a UK House of Lords committee running an enquiry into freedom of expression online.

The FOB is currently considering whether to overturn Facebook’s ban on former US president Donald Trump. The tech giant banned Trump “indefinitely” earlier this year after his supporters stormed the US Capitol.

The chaotic insurrection on January 6 led to a number of deaths and widespread condemnation of how mainstream tech platforms had stood back and allowed Trump to use their tools as megaphones to whip up division and hate rather than enforcing their rules in his case.

Yet, after finally banning Trump, Facebook almost immediately referred the case to its self-appointed and self-styled Oversight Board for review — opening up the prospect that the ban could be reversed in short order via an exceptional review process that Facebook has fashioned, funded and staffed.

Alan Rusbridger, a former editor of the British newspaper The Guardian — and one of 20 FOB members selected as an initial cohort (the Board’s full headcount will be double that) — avoided making a direct reference to the Trump case today, given the review is ongoing, but he implied that the binary choices it has at its disposal at this early stage aren’t as nuanced as he’d like.

“What happens if — without commenting on any high profile current cases — you didn’t want to ban somebody for life but you wanted to have a ‘sin bin’ so that if they misbehaved you could chuck them back off again?” he said, suggesting he’d like to be able to issue a soccer-style “yellow card” instead.

“I think the Board will want to expand in its scope. I think we’re already a bit frustrated by just saying take it down or leave it up,” he went on. “What happens if you want to… make something less viral? What happens if you want to put an interstitial?

“So I think all these things are things that the Board may ask Facebook for in time. But we have to get our feet under the table first — we can do what we want.”

“At some point we’re going to ask to see the algorithm, I feel sure — whatever that means,” Rusbridger also told the committee. “Whether we can understand it when we see it is a different matter.”

To many people, Facebook’s Trump ban is uncontroversial — given the risk of further violence posed by letting Trump continue to use its megaphone to foment insurrection. There are also clear and repeated breaches of Facebook’s community standards, if you want to be a stickler for its rules.

Among supporters of the ban is Facebook’s former chief security officer, Alex Stamos, who has since been working on wider trust and safety issues for online platforms via the Stanford Internet Observatory.

Stamos was urging both Twitter and Facebook to cut Trump off before everything kicked off, writing in early January: “There are no legitimate equities left and labeling won’t do it.”

But in the wake of big tech moving almost as a unit to finally put Trump on mute, a number of world leaders and lawmakers were quick to express misgivings at the big tech power flex.

Germany’s chancellor called Twitter’s ban on Trump “problematic”, saying it raised troubling questions about the power of the platforms to interfere with speech, while other lawmakers in Europe seized on the unilateral action as underlining the need for proper democratic regulation of tech giants.

The sight of the world’s most powerful social media platforms being able to mute a democratically elected president (even one as divisive and unpopular as Trump) made politicians of all stripes feel queasy.

Facebook’s entirely predictable response was, of course, to outsource this two-sided conundrum to the FOB. After all, that was its whole plan for the Board. The Board would be there to deal with the most headachey and controversial content moderation stuff.

And on that level Facebook’s Oversight Board is doing exactly the job Facebook intended for it.

But it’s interesting that this unofficial ‘supreme court’ is already feeling frustrated by the limited binary choices Facebook asks of it (in the Trump case, either reversing the ban entirely or continuing it indefinitely).

The FOB’s unofficial message seems to be that the tools are simply far too blunt. Although Facebook has never said it will be bound by any wider policy suggestions the Board might make — only that it will abide by the specific individual review decisions. (Which is why a common critique of the Board is that it’s toothless where it matters.)

How aggressive the Board will be in pushing Facebook to be less frustrating very much remains to be seen.

“None of this is going to be solved quickly,” Rusbridger went on to tell the committee in more general remarks on the challenges of moderating speech in the digital era. Getting to grips with the Internet’s publishing revolution could in fact, he implied, take the work of generations — making the customary reference to the long tail of societal disruption that flowed from Gutenberg’s invention of the printing press.

If Facebook was hoping the FOB would kick hard (and thorny-in-its-side) questions around content moderation into the long and intellectual grass, it’s surely delighted with the level of beard stroking that Rusbridger’s evidence implies is now going on inside the Board. (If, possibly, slightly less enchanted by the prospect of its appointees asking to poke around its algorithmic black boxes.)

Kate Klonick, an assistant professor at St John’s University Law School, also gave evidence to the committee — having written an article on the inner workings of the FOB, published recently in the New Yorker, after Facebook gave her wide-ranging access to observe the process of the body being set up.

The Lords committee was keen to learn more on the workings of the FOB and pressed the witnesses several times on the question of the Board’s independence from Facebook.

Rusbridger batted away concerns on that front — saying “we don’t feel we work for Facebook at all”. Board members are, though, paid by Facebook via a trust it set up to put the FOB at arm’s length from the corporate mothership, and the committee didn’t shy away from raising the payment point to query how genuinely independent they can be.

“I feel highly independent,” Rusbridger said. “I don’t think there’s any obligation at all to be nice to Facebook or to be horrible to Facebook.”

“One of the nice things about this Board is occasionally people will say but if we did that that will scupper Facebook’s economic model in such and such a country. To which we answer well that’s not our problem. Which is a very liberating thing,” he added.

Of course it’s hard to imagine a sitting member of the FOB being able to answer the independence question any other way — unless they were simultaneously resigning their commission (which, to be clear, Rusbridger wasn’t).

He confirmed that Board members can serve three terms of three years apiece — so he could have almost a decade of beard-stroking on Facebook’s behalf ahead of him.

Klonick, meanwhile, emphasized the scale of the challenge it had been for Facebook to try to build from scratch a quasi-independent oversight body and create distance between itself and its claimed watchdog.

“Building an institution to be a watchdog institution — it is incredibly hard to transition to institution-building and to break those bonds [between the Board and Facebook] and set up these new people with frankly this huge set of problems and a new technology and a new back end and a content management system and everything,” she said.

Rusbridger said the Board went through an extensive training process that involved participation from Facebook representatives during the ‘onboarding’. He went on to describe a moment when, after the training had finished, the FOB realized some Facebook reps were still joining its calls — at which point the Board felt empowered to tell Facebook to leave.

“This was exactly the type of moment — having watched this — that I knew had to happen,” added Klonick. “There had to be some type of formal break — and it was told to me that this was a natural moment that they had done their training and this was going to be a moment of push back and breaking away from the nest. And this was it.”

However, if your measure of independence is simply that Facebook isn’t literally listening in on the Board’s calls, you do have to query how much Kool-Aid Facebook may have successfully doled out to its chosen and willing participants over the long and intricate process of programming its own watchdog — including to the outsiders it allowed in to observe the setup.

The committee was also interested in the fact the FOB has so far mostly ordered Facebook to reinstate content its moderators had previously taken down.

In January, when the Board issued its first decisions, it overturned four out of five Facebook takedowns — including in relation to a number of hate speech cases. The move quickly attracted criticism over the direction of travel. After all, the wider critique of Facebook’s business is that it’s far too reluctant to remove toxic content (it only banned Holocaust denial last year, for example). And lo! Here’s its self-styled ‘Oversight Board’ taking decisions to reverse hate speech takedowns…

The unofficial and oppositional ‘Real Facebook Board’ — which is truly independent and heavily critical of Facebook — pounced and decried the decisions as “shocking”, saying the FOB had “bent over backwards to excuse hate”.

Klonick said the reality is that the FOB is not Facebook’s supreme court — but rather it’s essentially just “a dispute resolution mechanism for users”.

If that assessment is true — and it sounds spot on, so long as you recall the fantastically tiny number of users who get to use it — the amount of PR Facebook has been able to generate off of something that should really just be a standard feature of its platform is truly incredible.

Klonick argued that the Board’s early reversals were the result of it hearing from users objecting to content takedowns — which had made it “sympathetic” to their complaints.

“Absolute frustration at not knowing specifically what rule was broken or how to avoid breaking the rule again or what they did to be able to get there or to be able to tell their side of the story,” she said, listing the kinds of things Board members had told her they were hearing from users who had petitioned for a review of a takedown decision against them.

“I think that what you’re seeing in the Board’s decision is, first and foremost, to try to build some of that back in,” she suggested. “Is that the signal that they’re sending back to Facebook — that it’s pretty low hanging fruit to be honest. Which is let people know the exact rule, give them a fact to fact type of analysis or application of the rule to the facts and give them that kind of read in to what they’re seeing and people will be happier with what’s going on.

“Or at least just feel a little bit more like there is a process and it’s not just this black box that’s censoring them.”

In his response to the committee’s query, Rusbridger discussed how he approaches review decision-making.

“In most judgements I begin by thinking well why would we restrict freedom of speech in this particular case — and that does get you into interesting questions,” he said, having earlier summed up his school of thought on speech as akin to the ‘fight bad speech with more speech’ Justice Brandeis type view.

“The right not to be offended has been engaged by one of the cases — as opposed to the borderline between being offended and being harmed,” he went on. “That issue has been argued about by political philosophers for a long time and it certainly will never be settled absolutely.

“But if you went along with establishing a right not to be offended that would have huge implications for the ability to discuss almost anything in the end. And yet there have been one or two cases where essentially Facebook, in taking something down, has invoked something like that.”

“Harm as opposed to offence is clearly something you would treat differently,” he added. “And we’re in the fortunate position of being able to hire in experts and seek advisors on the harm here.”

While Rusbridger didn’t sound troubled about the challenges and pitfalls facing the Board when it may have to set the “borderline” between offensive speech and harmful speech itself — being able to (further) outsource expertise presumably helps — he did raise a number of other operational concerns during the session. Including over the lack of technical expertise among current board members (who were purely Facebook’s picks).

Without technical expertise, how can the Board ‘examine the algorithm’, as he suggested it will want to, when it won’t be able to understand Facebook’s content distribution machine in any meaningful way?

The Board’s current lack of technical expertise raises wider questions about its function — and whether its first learned cohort might be played as useful idiots, from Facebook’s self-interested perspective, by helping it gloss over and deflect deeper scrutiny of its algorithmic, money-minting choices.

If you don’t really understand how the Facebook machine functions, technically and economically, how can you conduct any kind of meaningful oversight at all? (Rusbridger evidently gets that — but is also content to wait and see how the process plays out. No doubt the intellectual exercise and insider view is fascinating. “So far I’m finding it highly absorbing,” as he admitted in his evidence opener.)

“People say to me you’re on that Board but it’s well known that the algorithms reward emotional content that polarises communities because that makes it more addictive. Well I don’t know if that’s true or not — and I think as a board we’re going to have to get to grips with that,” he went on to say. “Even if that takes many sessions with coders speaking very slowly so that we can understand what they’re saying.”

“I do think our responsibility will be to understand what these machines are — the machines that are going in rather than the machines that are moderating,” he added. “What their metrics are.”

Both witnesses raised another concern: that the kind of complex, nuanced moderation decisions the Board is making won’t be able to scale — suggesting they’re too specific to generally inform AI-based moderation. Nor will they necessarily be able to be acted on by the staffed moderation system Facebook currently operates (which gives its thousands of human moderators a fantastically tiny amount of thinking time per content decision).

Despite that, the issue of Facebook’s vast scale vs the Board’s limited and Facebook-defined function — to fiddle at the margins of its content empire — was one overarching point that hung uneasily over the session without being properly grappled with.

“I think your question about ‘is this easily communicated’ is a really good one that we’re wrestling with a bit,” Rusbridger said, conceding that he’d had to brain up on a whole bunch of unfamiliar “human rights protocols and norms from around the world” to feel qualified to rise to the demands of the review job.

Scaling that level of training to the tens of thousands of moderators Facebook currently employs to carry out content moderation would of course be eye-wateringly expensive. Nor is it on offer from Facebook. Instead it’s hand-picked a crack team of 40 very expensive and learned experts to tackle a vanishingly small number of content decisions.

“I think it’s important that the decisions we come to are understandable by human moderators,” Rusbridger added. “Ideally they’re understandable by machines as well — and there is a tension there because sometimes you look at the facts of a case and you decide it in a particular way with reference to those three standards [Facebook’s community standard, Facebook’s values and “a human rights filter”]. But in the knowledge that that’s going to be quite a tall order for a machine to understand the nuance between that case and another case.

“But, you know, these are early days.”

TechCrunch

How much do we shape-shift across social media?

Like the spaces we frequent in the physical world, each social app serves a different, fairly obvious purpose. If LinkedIn is a job fair of some sort, Instagram is a playground, or a party — both of which can be simultaneously bright, loud, and exhausting. The distinctions between these platforms are well known.

But these are places we go to every day, and in each, we shift. We flick through a handful of apps every day, the more prominent ones arguably being TikTok, Twitter, WhatsApp, Facebook, Instagram, and LinkedIn. On some, our tone may be nonchalant; on another, indignant. These are emotions expressed daily, sometimes concurrently, with different interfaces displaying alternative views, moods, even personas.

How much do we actually shape-shift across social media? Turns out, a lot.

Samara Madhvani, who owns a boutique social media consultancy, says that what she shares on TikTok is vastly different from her posts on Instagram.

“Most of my friends don’t use [TikTok], so I feel like I can post more freely without being judged,” she tells Mashable. “It’s a great space to experiment with different kinds of content, that I would probably never share on Instagram.”

Similarly, brand management and development specialist MaryKate tells Mashable that she shows her “full authentic self” solely on Snapchat.

“Snapchat is for [my] innermost thoughts,” she says. Meanwhile, she uses Instagram to post “photos of things, travel and the occasional selfie”. TikTok is for more niche interests, where she posts “drone footage or animal footage”. Twitter is a point of conflict, where she feels more filtered.

“I feel like each social media platform is a different part of me,” she says.

At their core, these apps intend for users to be on display, in whatever curated form they desire. Apps like BeReal have attempted to offer a different side of social media, with the premise that users can be their most authentic selves. Yet it’s another platform that is, in reality, asking something of the user: who are you in this moment? What do you want to show?



Ria Chopra, a writer and journalist, says that she is guarded about her personal life and selective when it comes to posting across all platforms.

“The sides of my personality I choose to show differ from platform to platform,” she says. “When you look at our behaviour on social media as a whole, our personality on a platform depends on how we perceive its usage. LinkedIn is perceived by me to be a professional space, so I’m professional there. Instagram is for personal connections, so I’m more likely to put up birthday posts there, while Twitter is more stream-of-consciousness, simply because that’s the kind of stuff I see there and believe it’s for.”

Being human means having to change, situationally and socially, on the daily. This isn’t news to any adult. Who you are at work may be a far cry from who you are at home. What you show to your closest friends can be a deviation from who you are with your siblings. For Black people and people of color, code switching is even more habitual, particularly in the workplace, where bias based on factors like speech has long had a negative impact. These ever-so-subtle shifts are near instinctive for most. But when this applies to the internet, too, identity can be in constant flux.

For many users, this is a natural aspect of having more than one social media account. It’s almost a given: an exercise in construction and curation, for numerous reasons.

Being a woman or a marginalized person on social media, for instance, comes with its own set of complications, ones that can heavily shape what a person chooses to share and speak about on public platforms. Seyi Akiwowo, author of How to Stay Safe Online, addressed this extensively in her guidebook to the internet. “The idea that online platforms are neutral is a fairy tale. It’s not a few bad apples ruining the experience for the rest of us. The very DNA of these platforms is in conflict with the best interests of a large number of their users,” Akiwowo writes. “Women and girls across the globe are walking on eggshells because of the fear of online abuse.”

Research by Plan International in 2017, which Akiwowo cites, found that 43 percent of girls aged 11 to 18 admitted to holding back their opinions on social media for fear of being criticized. Self-censorship, while admittedly an issue for everyone on social apps, is heightened for young girls, who are doing so for their own safety online.

“Women can post on almost any topic — animal rights, climate change, healthcare — and abuse usually follows,” writes Akiwowo.

Then there are the lesser but significant factors everyone faces, like who your followers are and whether your account is private. These also play a natural role in how people choose to behave on a certain platform. This is perhaps what led to the surge of “finstas” — which now seem near extinct — a few years ago. These “fake” Instagram accounts allowed for privacy and exclusivity, but are now a dated concept, shadowed by integrated features like Instagram’s Close Friends and Twitter Circle. The demand for them also speaks to a greater desire to post and interact in different ways, even within the space of a single app.

Madhvani believes that total, complete authenticity is a far reach on any platform. “Even a comment or a like on someone else’s content will leave a digital footprint,” she says. “Today, everything that people post is somewhat curated. At the end of the day, you’re posting and sharing for a purpose whether it’s to look a certain way or to get more followers or even sell a product.”

Alex Quicho, head of futures at trends agency Canvas8, suggests there is a positive side to the transformations we undergo on apps, saying that social media can play a role in “trying out different facets of one’s persona”.

“Today’s crop of users are less concerned about projecting a stable image or personal brand,” says Quicho. “We’re seeing many Gen Zers adopt an exploratory attitude to how they appear on social platforms: seeing these false personas as creative and constructive.”

In this vein, having different sorts of social media can provide paths to traverse identity and to explore different interests. The possible trouble is not in utilizing these purpose-driven platforms. Instead, there is potential for burnout in these spaces, which is already a dangling possibility for anyone who uses social media.

Chopra says that she is increasingly “cross-posting” across platforms, in an endeavor to integrate content and show her comprehensive self.

“It’s unconscious, but maybe that’s my bid to be more ‘me’ everywhere. So I’ve posted my tweets on LinkedIn, my Instagram posts on Twitter, if I want to. And it’s paying off — I feel more authentic knowing that I’m reflecting a more holistic sense of my personality everywhere,” she explains.

Let’s face it: authenticity and social media are hardly interconnected. Some social media users are increasingly pursuing this concept, seeking to be themselves on platforms designed to allow the opposite. But living in the digital age — with an influx of apps at our disposal — means having to have more than one public face: a near constant metamorphosis.




This Facebook Page Shares 116 Memes That Might Teach You Something (New Pics)

What was your experience like in school? Were you a straight-A student, or were you more focused on upholding your reputation as class clown than finishing your homework on time? Regardless of how much you remember from the good (or bad) old days in the classroom, it’s likely that there weren’t many memes involved in the curriculum. If there were, I’m jealous! And if there weren’t, don’t worry. We’ve got you covered with your daily dose of educational memes down below!

We’ve gathered some of the best posts from Educational Memes on Instagram to remind you all that learning and laughing don’t have to be mutually exclusive. So, pandas, enjoy these pics that might take you back to the days of packed lunches, recess and raising your hand when you had a question, and be sure to upvote the ones that make you feel particularly intelligent.

Keep reading to also find a conversation with the creator of this hilarious account, Yashdeep Kanhai, and then if you’re interested in finding even more memes dedicated to living, laughing and learning, you can find Bored Panda’s previous article featuring Educational Memes right here!

More info: Instagram | Facebook



Soros-Funded Fake News Operation Pushes Facebook to Reinstate Trump Ban

Courier Newsroom also has ties to progressive megadonor Laurene Powell Jobs

A progressive billionaire-funded network of Democratic propaganda sites masquerading as legitimate news websites is leading the push to keep former president Donald Trump off Facebook and Twitter.

Courier Newsroom, which bills itself as the “largest left-leaning news network in the country,” organized a petition this week to pressure Facebook’s Mark Zuckerberg and Twitter’s Elon Musk to keep Trump off their platforms. Facebook said Thursday it would reinstate Trump “in the coming weeks.” Twitter reinstated Trump in November, but the former president has not posted on the site. Both sites banned him in January 2021, under pressure from Democratic lawmakers and liberal advocacy groups.

“We cannot allow him to rejoin these platforms and spread more hateful, inaccurate information. Sign the petition now to keep Trump off of Facebook and Twitter,” Courier’s petition says. Blue Amp Action, a Democratic consulting firm, circulated the petition in an email to Courier supporters. The consulting firm has worked for a number of Democratic campaigns. President Joe Biden’s presidential campaign paid Blue Amp Action around $230,000 for media production services in 2020.

Maintaining a ban on Trump could help Democrats in 2024 by depriving the early GOP favorite of access to two of the country’s biggest media platforms. And that aligns with the political goals of Courier Newsroom’s biggest backers.

Laurene Powell Jobs, who inherited $20 billion from her late husband, Apple founder Steve Jobs, was a major funder of ACRONYM, the digital media company behind Courier Newsroom, the Washington Free Beacon reported. While the exact nature of Powell Jobs’s ties to ACRONYM and Courier Newsroom is unclear, she is not the only progressive megadonor propping up the group.

In October 2021, the progressive billionaire George Soros and LinkedIn cofounder Reid Hoffman formed an organization called Good Information, Inc. The group acquired Courier Newsroom, which operates websites designed to look like legitimate local news publications. Soros, the Democratic Party’s biggest donor, gave $1.2 million to Courier Newsroom through his Open Society Foundations in 2021 to support the group’s “non-partisan journalism.”

While Courier Newsroom aims to root out political disinformation online, Hoffman has funded multiple projects that used disinformation to help elect Democrats. He funded a project in which tech firms created fake social media personas to suppress Republican voter turnout in Alabama’s 2017 special Senate election.

Other Soros-funded advocacy groups have pressured Facebook to maintain its ban on Trump.

Media Matters for America, which received $500,000 in Soros cash in 2021, partnered with Accountable Tech to form the “Keep Trump Off Facebook” campaign. MoveOn.org, one of the largest progressive groups in the country, has purchased ads on Facebook to circulate a petition to keep Trump off the platform. Soros donated $450,000 to MoveOn in 2021.
