NEWS

Twitter Announces Birdwatch – Volunteer Program Against Misinformation


Twitter announced a new anti-misinformation initiative. The plan is to use a transparent community-based approach to identifying misinformation. This is a pilot program restricted to the United States.

Twitter Birdwatch

Birdwatch is a system where contributors create “notes” about misinformation found on tweets. The notes are meant to provide context.

The notes will not initially be visible on the tweets themselves. Instead, they live on a separate site on a Twitter subdomain (birdwatch.twitter.com) that currently redirects to twitter.com/i/birdwatch.

The goal is to eventually show the notes on the tweets that are judged to contain misinformation.

That way, Twitter community members can be made aware that a tweet is low quality or misleading.

Community Approach

What Twitter is proposing is a passive form of content moderation.

Many forums and social media sites (including Twitter) have a way for members to report a post when it is problematic in some way. Typical reasons for reporting a post can be spam, bullying, or misinformation.

This kind of reporting is a passive form of community moderation, and it is the model closest to what Twitter is doing with Birdwatch.

Community-driven moderation is a more proactive step because trusted members, usually called moderators, can delete or edit a problematic post.

Birdwatch’s approach is limited to creating notes about a problematic post. Contributors will not be able to actually remove a bad post.

According to Twitter:

“Birdwatch allows people to identify information in Tweets they believe is misleading and write notes that provide informative context. We believe this approach has the potential to respond quickly when misleading information spreads, adding context that people trust and find valuable.

Eventually we aim to make notes visible directly on Tweets for the global Twitter audience, when there is consensus from a broad and diverse set of contributors.”

Example of How Birdwatch Works

Birdwatch has three components:

  1. Notes
  2. Ratings
  3. Birdwatch site

Birdwatch Notes

Birdwatch contributors attach notes to tweets they judge to be problematic. The note process begins with clicking the three-dot menu on a tweet and selecting the Notes option.

Screenshot of an example of the Twitter Birdwatch process

From there, the note process involves answering multiple-choice questions and adding written feedback in a text area.


Example of Twitter Birdwatch Notes

Screenshot of a Twitter Birdwatch note example

Here is an example of a helpful Birdwatch note:

Screenshot of a helpful Twitter Birdwatch note

This is an example of an unhelpful Birdwatch note:

Screenshot of an example of an unhelpful Birdwatch note

Note Ranking

The next component of Birdwatch is community rating of each other’s notes. Members can rate a note as helpful or unhelpful, so the best-rated notes rise to the top and can be treated as representative, accurate context for a tweet.

This is a way for the community to self-moderate notes so that only the best and non-manipulative notes make it to the top.
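
To illustrate the idea, here is a minimal sketch of how rating-based ranking of notes could work, using a simple helpfulness ratio. Twitter has not published Birdwatch’s actual ranking algorithm, so the data shape, the minimum-ratings threshold, and the scoring function below are all hypothetical.

```typescript
// Hypothetical sketch only: Birdwatch's real ranking algorithm is not public.

interface Note {
  id: string;
  text: string;
  helpfulRatings: number;
  notHelpfulRatings: number;
}

// Score a note by the share of raters who found it helpful,
// requiring a minimum number of ratings before it counts.
function helpfulnessRatio(note: Note, minRatings = 5): number {
  const total = note.helpfulRatings + note.notHelpfulRatings;
  if (total < minRatings) return 0; // not enough signal yet
  return note.helpfulRatings / total;
}

// Return notes ordered from most to least helpful.
function rankNotes(notes: Note[]): Note[] {
  return [...notes].sort((a, b) => helpfulnessRatio(b) - helpfulnessRatio(a));
}
```

The point of a minimum-ratings threshold in a sketch like this is to keep a single early rating from pushing a note to the top before the community has weighed in.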

Transparency of Twitter Birdwatch

The Birdwatch system is transparent. That means all the data is available to be downloaded and viewed by anyone from the Birdwatch Download page.

Try It and See Approach

Twitter acknowledges that the pilot program is a work in progress and that issues such as malicious efforts to manipulate the program will have to be dealt with as they turn up.

Twitter did not, however, offer a plan for dealing with community manipulation.

This is their statement:

“We know there are a number of challenges toward building a community-driven system like this — from making it resistant to manipulation attempts to ensuring it isn’t dominated by a simple majority or biased based on its distribution of contributors. We’ll be focused on these things throughout the pilot.”

This seems like an ad hoc approach: dealing with problems as they arise rather than anticipating them and having a plan in place.

Community-driven Fight Against Misinformation

I have almost twenty years of experience moderating and running forums. In my experience, community administrators identify trustworthy members and make them moderators, allowing the volunteer members to help run the community themselves.

In a healthy community, moderators do not function like police enforcing rules. In a well-run community, moderators act more like servants who help the community function better.

What Twitter is proposing falls short of true moderation. It’s more of an attempt to provide trustworthy feedback on problematic posts that contain misinformation.

Citations

Read the official announcement:
Introducing Birdwatch, a Community-based Approach to Misinformation

Sign up for Birdwatch
http://twitter.github.io/birdwatch/join

NEWS

Google Is Creating A New Core Web Vitals Metric


In a recent HTTPArchive Web Almanac article about CMS use worldwide, the author noted that all platforms score well on First Input Delay (FID), one of the Core Web Vitals metrics, and that Google is working on a new metric, one that might presumably replace FID.

Every year the HTTPArchive publishes multiple articles about the state of the web. Chapter 16 covers content management systems (CMS). The article was written by a Wix engineer (a Backend Group Manager and Head of Web Performance) and reviewed and analyzed by various Googlers and others.

The article raised an interesting point about how the First Input Delay metric has lost meaning and mentioned how Google was developing a new metric.

First Input Delay

Core Web Vitals are a group of user experience metrics designed to provide a snapshot of how well web pages perform for users and First Input Delay (FID) is one of those metrics.

FID measures how quickly a browser can begin responding to a user’s first interaction with a website, for example the delay between a user clicking a button and the browser being able to start handling that click.
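
For reference, FID can be observed in the browser with the standard PerformanceObserver API; the sketch below logs the delay between the first input and the moment its event handler could start running. Google’s web-vitals JavaScript library wraps this same logic in a convenience helper.

```typescript
// Observe the 'first-input' performance entry and compute FID from it.
const fidObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const e = entry as PerformanceEventTiming;
    // FID is the gap between the user's first interaction and the moment
    // the browser could start running that event's handler.
    const fid = e.processingStart - e.startTime;
    console.log(`First Input Delay: ${fid.toFixed(1)} ms`);
  }
});

fidObserver.observe({ type: 'first-input', buffered: true });
```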

The thing about FID is that all the major content management systems, such as WordPress, Wix, and Drupal, have lightning-fast FID scores.

Everyone Wins An FID Trophy

The article first mentions that most CMSs score exceptionally well for FID. The platforms that score less well still post relatively high scores, lagging behind by only a handful of percentage points.

The author wrote:

“FID is very good for most CMSs on desktop, with all platforms scoring a perfect 100%. Most CMSs also deliver a good mobile FID of over 90%, except Bitrix and Joomla with only 83% and 85% of origins having a good FID.”

What’s happened to FID is that it has become a metric where everyone gets a trophy. If almost every site scores exceptionally well, there isn’t much reason for the metric to exist, because the goal of getting this part of the user experience fixed has already been reached.

The article then mentions how Google (the Chrome team) is currently creating a new metric for measuring responsiveness and response latency.

The article continued:

“The fact that almost all platforms manage to deliver a good FID, has recently raised questions about the strictness of this metric.

The Chrome team recently published an article, which detailed the thoughts towards having a better responsiveness metric in the future.”

Input Response Delay Versus Full Event Duration

The article linked to a recent Google article published on Web.dev titled, Feedback wanted: An experimental responsiveness metric.

What’s important about this article is that it reveals Google is working on a new responsiveness metric. Knowing about it can give publishers a head start on preparing for what is coming.

The main point to understand about this new metric is that it doesn’t measure just single events. It measures groups of individual events that together make up a single user interaction.

While the HTTPArchive article pointed to a November 2021 post asking for publisher feedback, this new metric has been under development for a while.

A June 2021 Web.dev article outlined these goals for the new measurement:

“Consider the responsiveness of all user inputs (not just the first one)

Capture each event’s full duration (not just the delay).

Group events together that occur as part of the same logical user interaction and define that interaction’s latency as the max duration of all its events.

Create an aggregate score for all interactions that occur on a page, throughout its full lifecycle.”

The Web.dev article states that the goal is to design a better metric that encompasses a more meaningful measurement of the user experience.

“We want to design a metric that better captures the end-to-end latency of individual events and offers a more holistic picture of the overall responsiveness of a page throughout its lifetime.

…With this new metric we plan to expand that to capture the full event duration, from initial user input until the next frame is painted after all the event handlers have run.

We also plan to measure interactions rather than individual events. Interactions are groups of events that are dispatched as part of the same, logical user gesture (for example: pointerdown, click, pointerup).”

It’s also explained like this:

“The event duration is meant to be the time from the event hardware timestamp to the time when the next paint is performed after the event is handled.

But if the event doesn’t cause any update, the duration will be the time from event hardware timestamp to the time when we are sure it will not cause any update.”
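
The Event Timing API already exposes these full event durations in the browser. As a rough sketch, the observer below logs each qualifying event’s duration, which runs from the event’s hardware timestamp to the next paint after its handlers finish; the 16 ms durationThreshold used here is just an illustrative value (roughly one frame).

```typescript
// Observe full event durations via the Event Timing API ('event' entries).
const eventObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const e = entry as PerformanceEventTiming;
    const inputDelay = e.processingStart - e.startTime;
    // entry.name is the event type (e.g. 'click'); entry.duration is the
    // full duration from input timestamp to the next paint.
    console.log(
      `${entry.name}: full duration ${entry.duration.toFixed(1)} ms ` +
      `(input delay ${inputDelay.toFixed(1)} ms)`
    );
  }
});

// durationThreshold filters out very short events; 16 ms is an illustrative value.
eventObserver.observe({ type: 'event', durationThreshold: 16, buffered: true });
```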

Two Approaches To Interaction Latency Metric

Web.dev explains that the Chrome engineers are exploring two approaches for measuring the interaction latency:

  1. Maximum Event Duration
  2. Total Event Duration

Maximum Event Duration

An interaction consists of multiple events of varying durations. This measurement uses the largest single event duration in the group as the interaction’s latency.

Total Event Duration

This is a sum of all the event durations.
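
To make the difference concrete, here is a small illustration using made-up numbers for the events in a single tap interaction; it is not tied to any browser API.

```typescript
// Illustrative only: sample durations for the events in one tap interaction.
interface TimedEvent {
  name: string;     // e.g. pointerdown, click, pointerup
  duration: number; // ms, from input timestamp to next paint
}

const tapInteraction: TimedEvent[] = [
  { name: 'pointerdown', duration: 120 },
  { name: 'pointerup', duration: 30 },
  { name: 'click', duration: 80 },
];

// Approach 1: Maximum Event Duration – the worst single event defines the latency.
const maxDuration = Math.max(...tapInteraction.map((e) => e.duration)); // 120 ms

// Approach 2: Total Event Duration – all event durations are summed.
const totalDuration = tapInteraction.reduce((sum, e) => sum + e.duration, 0); // 230 ms

console.log({ maxDuration, totalDuration });
```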

Is FID Going Away?

It’s possible that FID could remain part of Core Web Vitals, but what’s the point if every site scores 100% on it?

For that reason, it’s not unreasonable to assume that FID is going away sometime in the relatively near future.

The Chrome team is soliciting feedback on different approaches to measuring interaction latency. Now is the time to speak up.

Citations

HTTPArchive Web Almanac: CMS

Feedback wanted: An experimental responsiveness metric

Towards a better responsiveness metric

NEWS

Gravatar “Breach” Exposes Data of 100+ Million Users


The security alert service HaveIBeenPwned notified users that the profile information of 114 million Gravatar users had been leaked online in what it characterized as a data breach. Gravatar denies that it was hacked.

Screenshot of the email sent to HaveIBeenPwned users, which characterized the Gravatar event as a data breach

Gravatar Enumeration Vulnerability

The user information of every person with a Gravatar account was open to being downloaded using software that “scrapes” the data.

While that is technically not a breach, the way Gravatar stored user information made it easy for a person with malicious intent to harvest it, and that information could then be used as part of another attack to gain passwords and access.

Gravatar accounts are public information. However, individual user profiles are not publicly listed in a way that can easily be browsed. Ordinarily a person would have to know account information, such as the username, in order to find the account and its publicly available information.

A security researcher discovered in late 2020 that Gravatar user accounts were numbered sequentially. A news report from the time described how the researcher, peeking into a JSON file linked from a profile page, found an ID number that corresponded to the sequential number assigned to that user.

The problem with that user identification number is that the profile could be reached using the number alone.

Because the numbers were sequential rather than randomly generated, anyone wishing to collect all of the Gravatar usernames could do so by requesting and scraping the user profiles in numerical order.
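
Conceptually, the scrape looks something like the sketch below. The endpoint and field names are hypothetical and are only meant to show why sequential IDs combined with weak rate limiting allow a bot to walk through every profile without knowing any usernames in advance.

```typescript
// Conceptual sketch of an enumeration scrape.
// The URL and the profile fields (username, emailHash) are hypothetical.
async function scrapeProfiles(maxId: number): Promise<void> {
  for (let id = 1; id <= maxId; id++) {
    // Because IDs are sequential, no prior knowledge of usernames is needed.
    const response = await fetch(`https://profiles.example.com/${id}.json`);
    if (!response.ok) continue; // skip gaps or deleted accounts
    const profile = await response.json();
    console.log(profile.username, profile.emailHash);
  }
}
```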

Data Scraping Event

A data breach is defined as when an unauthorized person gains access to information that is not publicly available.

The Gravatar information was publicly available, but an outsider would have to know a user’s username in order to reach that user’s profile. Additionally, each user’s email address was stored as an MD5 hash, a weak form of obfuscation.

An MD5 hash is insecure and can easily be reversed (also known as cracked). Storing email addresses as MD5 hashes provided only minor protection.

That means that once an attacker downloaded the usernames and the MD5 hashes of the email addresses, recovering the actual email addresses was a simple matter.
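
Gravatar’s documented scheme hashes the trimmed, lowercased email address with MD5. The sketch below shows that scheme and a toy dictionary attack against it; the sample addresses are made up. Real attackers would use huge lists of known email addresses and GPU hashing, which is why an unsalted, fast hash like MD5 offers so little protection.

```typescript
import { createHash } from 'node:crypto';

// Gravatar-style hashing: MD5 of the trimmed, lowercased email address.
function md5Email(email: string): string {
  return createHash('md5').update(email.trim().toLowerCase()).digest('hex');
}

// A toy "dictionary attack": hash candidate addresses and compare to the target.
function crackHash(targetHash: string, candidates: string[]): string | null {
  for (const candidate of candidates) {
    if (md5Email(candidate) === targetHash) return candidate;
  }
  return null;
}

// Made-up example addresses for illustration only.
const leakedHash = md5Email('jane.doe@example.com'); // what the scraper obtained
console.log(crackHash(leakedHash, ['john@example.com', 'jane.doe@example.com']));
// -> 'jane.doe@example.com'
```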

According to the security researcher who initially discovered the username enumeration vulnerability, Gravatar had “virtually no rate limiting,” which means a scraper bot could request millions of user profiles without being stopped or challenged for suspicious behavior.

According to the news report from October 2020 that originally divulged the vulnerability:

“While data provided by Gravatar users on their profiles is already public, the easy user enumeration aspect of the service with virtually no rate limiting raises concerns with regards to the mass collection of user data.”

Gravatar Minimizes User Data Collection

Gravatar tweeted public statements that minimized the impact of the user information collection.

The last tweet in the series from Gravatar encouraged readers to learn how Gravatar works:

“If you want to learn more about how Gravatar works or adjust the data shared on your profile, please visit http://Gravatar.com.”

Ironically, Gravatar linked to the insecure HTTP version of the URL, and upon reaching it there was no redirect to a secure (HTTPS) version of the page, which only undermined the effort to project a sense of security.

Twitter Users React

One Twitter user objected to the use of the word “breach” because the information was publicly available.

The person behind the HaveIBeenPwned website responded in a series of tweets.

Why the Gravatar Scraping Event Is Important

Troy Hunt, the person behind the HaveIBeenPwned website, explained in a series of tweets why the Gravatar scraping event is important.

Troy asserted that the data that users entrusted to Gravatar was used in a way that was unexpected.

Gravatar User Trust Eroded

Users Want Control Over Their Gravatar Information

Troy asserted that users want to be aware of how their information is used and accessed.

Were Gravatar Users Pwned?

An argument could be made that a Gravatar account can be public yet still should not be easily harvested en masse as step one of a hacking campaign by people with malicious intent.

Gravatar asserted that after the enumeration vulnerability was disclosed, it took steps to close it and prevent further downloading of user information.

So on the one hand, Gravatar took steps to prevent those with malicious intent from harvesting user information. On the other hand, it said reports of Gravatar being hacked are misinformation.

But the fact is that HaveIBeenPwned did not call it a hacking event; it called it a breach.

An argument could be made that Gravatar’s use of MD5 hashes for storing email data was insecure, and the moment attackers cracked those weak hashes, the abnormal scraping of “public information” became a breach.

Many Gravatar users aren’t particularly happy and are looking for answers.

NEWS

Google Watches For How Sites Fit Into Overall Internet


Google’s John Mueller answered a question about how long it takes for Google to recognize site quality. In the course of answering, he mentioned something that bears a closer look: the idea that it’s important for Google to understand how a website fits into the context of the overall Internet.

How A Website Fits Into The Internet

John Mueller’s statement that Google seeks to understand a website’s fit within the overall Internet as part of evaluating site quality is short on details. Yet the emphasis he puts on it, and his statement that the assessment can take months to complete, imply that this is something important.

  • Is he talking about linking patterns?
  • Is he talking about the text of the content?

If it’s important to Google then it’s important for SEO.

How Long Does It Take To Reassess A Website?

The person asking the question used the example of a site that goes down for a period of time and how long it might take Google to restore traffic and so-called “authority,” which isn’t something that Google uses.

This is the question about Google site quality:

“Are there any situations where Google negates a site’s authority that can’t be recovered, even if the cause has been rectified.

So, assuming that the cause was a short term turbulence with technical issues or content changes, how long for Google to reassess the website and fully restore authority, search position and traffic?

Does Google have a memory as such?”

How Google Determines Site Quality

Mueller first discusses the easy situation where a site goes down for a short period of time.

John Mueller’s answer:

“For technical things, I would say we pretty much have no memory in the sense that if we can’t crawl a website for awhile or if something goes missing for awhile and it comes back then we have that content again, we have that information again, we can show that again.

That’s something that pretty much picks up instantly again.

And this is something that I think we have to have because the Internet is sometimes very flaky and sometimes sites go offline for a week or even longer.

And they come back and it’s like nothing has changed but they fixed the servers.

And we have to deal with that and users are still looking for those websites.”

Overall Quality And Relevance Of A Website

Mueller next discusses the more difficult problem for Google of understanding the overall quality of a site and especially this idea of how a site fits into the rest of the Internet.

Mueller continues:

“I think it’s a lot trickier when it comes to things around quality in general where assessing the overall quality and relevance of a website is not very easy.

It takes a lot of time for us to understand how a website fits in with regards to the rest of the Internet.

And that means on the one hand it takes a lot of time for us to recognize that maybe something is not as good as we thought it was.

Similarly, it takes a lot of time for us to learn the opposite again.

And that’s something that can easily take, I don’t know, a couple of months, a half a year, sometimes even longer than a half a year, for us to recognize significant changes in the site’s overall quality.

Because we essentially watch out for …how does this website fit in with the context of the overall web and that just takes a lot of time.

So that’s something where I would say, compared to technical issues, it takes a lot longer for things to be refreshed in that regard.”

The Context Of A Site Within The Overall Web

How a site fits into the context of the overall web seems like the forest as opposed to the trees.

As SEOs and publishers, it seems we focus on the trees: headings, keywords, titles, site architecture, and inbound links.

But what about how the site fits into the rest of the Internet? Does that get considered? Is that a part of anyone’s internal site audit checklist?

Perhaps because the phrase “how a site fits into the overall Internet” is very general and can encompass a lot, I suspect it’s not always a top consideration in a site audit or in site planning.

A Hypothetical Site Quality Assessment of Example Site A

Let’s consider Example Site A. In the context of links, the phrase can refer to the sites that link to Example Site A, the sites that Example Site A links out to, the interconnected network those links create, and what that network reflects about topic and site quality.

That interconnected network might consist of sites or pages that are related by topic. Or it could be associated with spam through the sites that Example Site A links out to.

John Mueller could also be referring to the content itself: how that content differs from other sites on a similar topic, whether it includes more information, and how it compares, better or worse, with other sites.

And what are those other sites? Are they in comparison with top ranked sites? Or just in comparison with all normal non-spam sites?

Mueller keeps referencing how Google tries to understand how a site fits within the overall web and it might be useful to know a little more.

Citation

Time It Takes For Google To Understand How Site Fits Into Overall Internet

Watch John Mueller discuss how Google evaluates site quality at the 22:37 mark of the video.
