After a year of testing, Twitter has announced some new updates to its Birdwatch crowdsourced fact-checking program, which enables Twitter users to add notes to Tweets that they believe contain misleading information.
Through Birdwatch, users can add manual notes and tips to tweets, which can help provide more context for future readers. That could be problematic, in terms of people using it as a tool to silence dissenting opinions, but Birdwatch notes don’t limit a tweet’s reach or performance; they merely provide more context to those who seek it. And if a lot of people are flagging a claim as false, it probably is, while Twitter’s also working with official fact-checking groups and journalists to add more credibility to the notes.
And now, Twitter’s looking to take Birdwatch to the next stage:
“Starting today, a small (and randomized) group of people on Twitter in the US will see Birdwatch notes directly on some Tweets. They’ll also be able to rate notes, providing input that will help improve Birdwatch’s ability to add context that is helpful to people from different points of view.”
As shown in Twitter’s example screenshots, some users will now see Birdwatch notes displayed upfront on tweets in their timeline, and they’ll also be prompted to rate that supplemental information, which helps Twitter assess the quality of each note.
That’ll no doubt raise the ire of free speech activists, who already feel that social platforms are overstepping the fact-checking mark, but it could be a simple, valuable way to facilitate crowd-sourced fact-checking, adding context to questionable claims without restricting their reach.
But again, it could also be problematic. You can imagine that some groups will ‘brigade’ these reports if they can, in order to counter claims they don’t like or agree with – though Twitter does have some additional qualifiers for its displayed notes.
“To appear on a Tweet, notes first need to be rated helpful by enough Birdwatch contributors from different perspectives. Difference in perspectives is determined by how people have rated notes in the past, not based on demographics.”
So there is a weighting of some sort applied to Birdwatch responses, which could reduce bias, at least to some degree.
But the process is still a work in progress, which is why Twitter’s taking its time, and only launching this new update to a small group to begin with.
As explained by Twitter’s GM of Consumer Services Kayvon Beykpour:
“An open and community-driven program like this is extremely ambitious (we look to Wikipedia as a source of inspiration here), and ultimately only effective if it’s able to result in high quality and informative content consistently, at scale, and through self-correcting incentives. Everything we’ve learned so far makes us feel even more encouraged by the potential for impact as Birdwatch scales.”
Indeed, there have been some encouraging signs for the program thus far, with Twitter reporting that those who’ve viewed Birdwatch notes are 20-40% less likely to agree with the substance of a potentially misleading tweet, while the majority of users who’ve seen them have found the notes helpful.
Twitter’s still working through the full details, and it has made various changes to the process, including ensuring diversity among Birdwatch participants and adding Birdwatch aliases so people can make reports without fear of being identified and targeted for their comments.
It’s an ambitious program, and it’s still too early to say whether Birdwatch will prove valuable, but the concept has merit, using a Reddit-style crowd-sourcing system to refute questionable claims, without having to introduce downvotes like Reddit’s process.
And it could prove to be a valuable addition to the platform’s broader misinformation detection efforts. Despite the test running for a year, however, it’s still early days, and it seems right for Twitter to take a cautious approach with this next stage.
Meta’s Adding More Ad Targeting Information to its Ad Library Listings
In the wake of the Cambridge Analytica scandal, Meta has implemented a range of data protection measures to limit access to users’ personal data and insights, while at the same time working to provide more transparency into how its systems are used by different groups to target their messaging.
These conflicting approaches require a delicate balance, one which Meta has largely been able to maintain via its Ad Library, which enables anyone to see any ad being run by any Facebook Page in the recent past.
Now, Meta’s looking to add to that insight, with new information being added to the Ad Library on how Pages are using social issue, electoral or political ads in their process.
The updated Ad Library overview will include more specific information on how each advertiser is using these more sensitive targeting options, which could help researchers detect misuse or report concerns.
As explained by Meta:
“At the end of this month, detailed targeting information for social issue, electoral or political ads will be made available to vetted academic researchers through the Facebook Open Research and Transparency (FORT) environment […] Coming in July, our publicly available Ad Library will also include a summary of targeting information for social issue, electoral or political ads run after launch. This update will include data on the total number of social issue, electoral and political ads a Page ran using each type of targeting (such as location, demographics and interests) and the percentage of social issue, electoral and political ad spend used to target those options.”
That’s a significant update for Meta’s ad transparency efforts, which will help researchers better understand key trends in ad usage, and how they relate to messaging resonance and response.
Meta has come under scrutiny on this front in the past, with independent investigations finding that housing ads, for example, were illegally using race-based exclusions in their targeting. That led Meta to change its rules on how exclusions can be used, and this new expansion could eventually prompt similar changes, by making discriminatory ad targeting easier to identify, with direct examples from Meta’s system.
For regular advertisers, it could also provide some additional insight into your competitors’ tactics. You might find more detailed information on how other brands are homing in on specific audiences, which may not be discriminatory, but may highlight new angles for your own marketing efforts.
It’s a good transparency update, which should yield significant benefits for researchers trying to better understand how Meta’s intricate ad targeting system is being used in various ways.