

Twitter Announces an Expansion of its ‘Birdwatch’ Crowd-Sourced Fact-Checking Program




This seems… concerning.

Today, just weeks out from the US midterms, Twitter has announced that it will expand its experimental Birdwatch crowd-sourced fact-checking program, as a means to combat misinformation throughout the app.

Birdwatch, which Twitter first launched early last year, enables participants to flag information in Tweets that they believe is misleading, and to add notes that provide additional context.

Anyone can apply to become a Birdwatch contributor (where it’s available), so long as you have a verified phone number, no recent Twitter rule violations, and at least six months of history in the app. The system then cross-matches contributions from Birdwatch participants to surface the notes rated as most helpful, based on a range of qualifiers, with all Birdwatch notes available for anyone to see.
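As a rough illustration, the eligibility rules described above could be checked with logic along these lines. This is only a sketch: the field names and the exact "six months" cutoff are assumptions, since Twitter's actual implementation is not public.

```python
from datetime import datetime, timedelta

# Hypothetical applicant record; Twitter's real data model is not public.
def is_eligible(applicant: dict, now: datetime) -> bool:
    """Approximate the publicly described Birdwatch eligibility rules:
    a verified phone number, no recent rule violations, and at least
    six months of account history."""
    six_months = timedelta(days=182)  # assumed interpretation of "six months"
    return (
        applicant["phone_verified"]
        and not applicant["recent_rule_violations"]
        and now - applicant["account_created"] >= six_months
    )

applicant = {
    "phone_verified": True,
    "recent_rule_violations": False,
    "account_created": datetime(2021, 1, 15),
}
print(is_eligible(applicant, datetime(2022, 9, 1)))  # True
```

The note cross-matching step is considerably more involved in practice, since ratings from contributors with diverse viewpoints are weighed against one another rather than simply tallied.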

It’s an interesting approach to content moderation, putting more onus on the user community to dictate what is and isn’t acceptable, rather than leaving that call to internal moderation teams.

And it works. Twitter says that, according to its research, people who see a Birdwatch note are 20-40% less likely to agree with the substance of a potentially misleading Tweet than someone who sees the Tweet alone. Twitter also says that people who see Birdwatch notes are 15-35% less likely to Like or Retweet a Tweet than someone who sees the Tweet alone.

So, it’s having an impact, and it could be a good way to dispel misinformation, even if it does seem a little risky putting such rulings into the hands of users.


Either way, Twitter’s confident enough to move ahead with the experiment:

“We’ll start by adding larger groups of eligible applicants to the pilot on a more frequent basis. The process will be adjusted as needed as we closely monitor whether this change has any impact on either the quality or the frequency of contributions.”

So more applicants will now be accepted into the Birdwatch program, which will expand the pool of citizen fact-checkers.   

“The visibility of notes on public Tweets will also be increasing. In the coming weeks, more people using Twitter in the US will start to see notes on Tweets that Birdwatch contributors have collectively identified as Helpful. Importantly, this doesn’t mean you’ll start seeing notes on every Tweet, simply that a larger number of you will start seeing notes that have been rated Helpful.”

Twitter also says that it’s rolling out an updated Birdwatch onboarding process, which will better incentivize contributors to write and rate notes in a thoughtful way.

“New Birdwatch contributors who have met the eligibility criteria will begin with an initial Rating Impact score of zero, which they can increase by consistently rating other contributors’ notes and reliably identifying those that are Helpful and Not Helpful. Once a contributor’s score has risen to five, they can start writing notes. Contributors can further increase their Writing and Rating Impact scores by both writing Helpful notes and continuing to rate notes written by others.”
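The Rating Impact mechanic quoted above can be sketched as a simple state machine: a new contributor starts at zero, earns points for ratings that align with the eventual consensus, and unlocks note-writing at a score of five. The scoring details below (one point per aligned rating, no penalty otherwise) are assumptions for illustration; Twitter has not published the exact formula.

```python
class Contributor:
    """Minimal sketch of the Birdwatch Rating Impact mechanic."""

    WRITE_THRESHOLD = 5  # score required before writing notes (per Twitter's post)

    def __init__(self):
        self.rating_impact = 0  # all new contributors start at zero

    def record_rating(self, agreed_with_consensus: bool):
        # Assumed scoring: ratings matching the eventual Helpful/Not Helpful
        # consensus raise the score; others leave it unchanged.
        if agreed_with_consensus:
            self.rating_impact += 1

    def can_write_notes(self) -> bool:
        return self.rating_impact >= self.WRITE_THRESHOLD

# A new contributor cannot write notes until five aligned ratings accrue.
c = Contributor()
for _ in range(5):
    c.record_rating(agreed_with_consensus=True)
print(c.can_write_notes())  # True
```

Gating write access behind demonstrated rating quality is the design choice doing the work here: it forces would-be note authors to first show they can evaluate other people’s notes reliably.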


More fact-checkers, more notes highlighted, and more incentive for contributors to contribute to the quality of the ratings. It’s a significant expansion of the program, which, again, has shown promising results thus far.

But then again, there is also this:

“Twitter’s crowdsourced fact-checking program, Birdwatch, accepted a QAnon supporter account into its ranks, according to a leaked internal audit. To make matters even worse, Twitter had been warned by experts ahead of time that this exact scenario might be possible.”


As reported by Input Magazine, there may still be some potential flaws in Twitter’s Birdwatch system, with this incident highlighted by former Twitter security advisor Peiter Zatko in his recent revelations about flaws in Twitter’s security processes.

The individual in question was removed from the program before contributing notes, so any potential conflict was avoided in this instance. But Zatko has warned that there are significant flaws in this approach, which could be exploited by those seeking to infiltrate the system.

An expansion of the Birdwatch program – essentially upping the stakes for those looking to influence the conversation – will make it an even bigger target. As the system becomes more prominent, bad actors will pay even more attention to it as a vector for influence.

That’s not to say that Twitter can’t, or won’t, counter any attempts at misuse. But it is an important element to watch – and ahead of the US midterms, when political attention will be higher than ever, it could be a risky bet to expand the program at this stage.

It does seem like a well-conceived system. But even seemingly well-thought-out programs have been impacted by bad actors in the past.



UK teen died after ‘negative effects of online content’: coroner




Molly Russell was exposed to online material ‘that may have influenced her in a negative way’ – Copyright POOL/AFP/File Philip FONG

A 14-year-old British girl died from an act of self-harm while suffering from the “negative effects of online content”, a coroner said Friday in a case that shone a spotlight on social media companies.

Molly Russell was “exposed to material that may have influenced her in a negative way and, in addition, what had started as a depression had become a more serious depressive illness,” Andrew Walker ruled at North London Coroner’s Court.

The teenager “died from an act of self-harm while suffering depression”, he said, but added it would not be “safe” to conclude it was suicide.

Some of the content she viewed was “particularly graphic” and “normalised her condition,” said Walker.

Russell, from Harrow in northwest London, died in November 2017, leading her family to set up a campaign highlighting the dangers of social media.

“There are too many others similarly affected right now,” her father Ian Russell said after the ruling.


“At this point, I just want to say however dark it seems, there is always hope.

“I hope that this will be an important step in bringing about much needed change,” he added.

The week-long hearing became heated when the family’s lawyer, Oliver Sanders, took an Instagram executive to task.

A visibly angry Sanders asked Elizabeth Lagone, the head of health and wellbeing at Meta, Instagram’s parent company, why the platform allowed children to use it when it was “allowing people to put potentially harmful content on it”.

“You are not a parent, you are just a business in America. You have no right to do that. The children who are opening these accounts don’t have the capacity to consent to this,” he said.

Lagone apologised after being shown footage, viewed by Russell, that “violated our policies”.

Of the 16,300 posts Russell saved, shared or liked on Instagram in the six-month period before her death, 2,100 related to depression, self-harm or suicide, the inquest heard.

Children’s charity NSPCC said the ruling “must be a turning point”.


“Tech companies must be held accountable when they don’t make children’s safety a priority,” tweeted the charity.

“This must be a turning point,” it added, stressing that any delay to a government bill dealing with online safety “would be inconceivable to parents”.

