
NEWS

Twitter explains how it will handle misleading tweets about the US election results

Twitter recently updated its policies in advance of the U.S. elections to include specific rules that detailed how it would handle tweets making claims about election results before they were official. Today, the company offered more information about how it plans to prioritize the enforcement of its rules and how it will label any tweets that fall under the new guidelines.

In September, Twitter said it would either remove or attach a warning label to any premature claims of victory, with a focus on tweets that incite “unlawful conduct to prevent a peaceful transfer of power or orderly succession.”

This morning, Twitter added that it will prioritize labeling tweets about the presidential election and any other “highly contested races” where there may be significant issues with misleading information.

The company says tweets are eligible to be labeled if the account has a U.S. 2020 candidate label, including presidential candidates and campaigns — meaning the Trump and Biden campaigns will not be immune to the new policies.

Tweets can also be labeled if the account is U.S.-based with more than 100,000 followers, or if the tweet has significant engagement: the threshold is either 25,000 Likes or 25,000 Quote Tweets plus Retweets, the company says. This latter guideline aims to clamp down on misinformation going viral, even if the tweet in question was initiated by a smaller account.
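
Taken together, the rules amount to a simple decision: a tweet qualifies if the account carries a candidate label, if the account is large and U.S.-based, or if the tweet itself has enough engagement. Below is a minimal TypeScript sketch of that logic; the interface and field names are hypothetical, since Twitter has not published an API for this policy.

```typescript
// Hypothetical sketch of the eligibility rules described above.
// Twitter has not published an API for this policy; names are illustrative.
interface TweetContext {
  hasCandidateLabel: boolean; // account carries a U.S. 2020 candidate label
  isUsBased: boolean;
  followerCount: number;
  likeCount: number;
  quoteTweetCount: number;
  retweetCount: number;
}

function isEligibleForLabeling(t: TweetContext): boolean {
  const largeUsAccount = t.isUsBased && t.followerCount > 100_000;
  const viralEngagement =
    t.likeCount >= 25_000 || t.quoteTweetCount + t.retweetCount >= 25_000;
  return t.hasCandidateLabel || largeUsAccount || viralEngagement;
}
```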

Twitter also explained how it will determine if an election result is considered “official,” saying that the result will need to be announced by a state election official. Twitter also may consider an election result official if at least two of a select list of national news outlets make the call. These outlets include ABC News, The Associated Press, CBS News, CNN, Decision Desk HQ, Fox News and NBC News.

If a tweet is labeled as being “misleading information” under this new policy, users will be shown a prompt pointing them to credible information before they’re able to retweet or further amplify the post on Twitter. However, Twitter won’t stop retweets from being posted.


Twitter, however, recently made it more difficult to blindly retweet by forcing retweets to go through the “Quote Tweet” user interface instead. This change aims to slow people down from quickly retweeting posts without adding their own commentary.

In addition to labeling tweets with misleading information, Twitter says if it sees content “inciting interference with the election, encouraging violent action or other physical harms,” it may take additional measures, including adding a warning or even removing the tweet.

Issues around a contested election have been of increased concern, following reports that said President Trump has a plan to declare victory on Tuesday night if it looks like he’s ahead. Trump denied these claims on Sunday, but added he thinks it’s a “terrible thing when states are allowed to tabulate ballots for a long period of time after the election is over,” Axios reported.

TechCrunch


NEWS

Google Is Creating A New Core Web Vitals Metric via @sejournal, @martinibuster

In a recent HTTPArchive Almanac article about CMS use worldwide, the author mentioned that all platforms score well on First Input Delay (FID), a Core Web Vitals metric, and that Google is working on a new metric, one that might be presumed to eventually replace FID.

Every year the HTTPArchive publishes multiple articles about the state of the web. Chapter 16 is about content management systems (CMS). The article was written by a Wix engineer who serves as Backend Group Manager and Head of Web Performance, and it was reviewed and analyzed by various Googlers and others.

The article raised an interesting point about how the First Input Delay metric has lost meaning and mentioned how Google was developing a new metric.

First Input Delay

Core Web Vitals are a group of user experience metrics designed to provide a snapshot of how well web pages perform for users and First Input Delay (FID) is one of those metrics.

FID measures how quickly a browser can respond to a user’s first interaction with a website, such as how long it takes to begin responding after a user clicks a button.
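
In the browser, FID can be observed directly with the standard Event Timing API. Below is a minimal sketch of that measurement; production code typically uses Google’s web-vitals library instead.

```typescript
// Minimal sketch: log First Input Delay using the browser's Event Timing API.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    // FID is the gap between the user's first input and the moment the
    // browser was able to start running its event handlers.
    const fid = entry.processingStart - entry.startTime;
    console.log(`First Input Delay: ${fid.toFixed(1)} ms`);
  }
}).observe({ type: 'first-input', buffered: true });
```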


The thing about FID is that all major content management systems, like WordPress, Wix, Drupal, and others, have lightning-fast FID scores.

Everyone Wins An FID Trophy

The article first mentions that most CMSs score exceptionally well for FID, and even the platforms that score less well lag behind by only about five percentage points.

The author wrote:

“FID is very good for most CMSs on desktop, with all platforms scoring a perfect 100%. Most CMSs also deliver a good mobile FID of over 90%, except Bitrix and Joomla with only 83% and 85% of origins having a good FID.”

What’s happened to FID is that it’s basically a metric where everyone gets a trophy. If almost all sites score exceptionally well, there isn’t much reason for the metric to exist, because the goal of getting this part of the user experience fixed has already been reached.


The article then mentions how Google (the Chrome team) is currently creating a new metric for measuring responsiveness and response latency.

The article continued:

“The fact that almost all platforms manage to deliver a good FID, has recently raised questions about the strictness of this metric.

The Chrome team recently published an article, which detailed the thoughts towards having a better responsiveness metric in the future.”


Input Response Delay Versus Full Event Duration

The article linked to a recent Google article published on Web.dev titled, Feedback wanted: An experimental responsiveness metric.

What’s important about this article is that it reveals Google is working on a new responsiveness metric. Knowing about it gives publishers a head start in preparing for what is coming.

The main point to understand about this new metric is that it doesn’t measure just single events. It measures groups of individual events that together make up a single user interaction.

While the HTTPArchive article cited a November 2021 Web.dev article that asks for publisher feedback, this new metric has been under development for a while now.

A June 2021 Web.dev article outlined these goals for the new measurement:


“Consider the responsiveness of all user inputs (not just the first one)

Capture each event’s full duration (not just the delay).

Group events together that occur as part of the same logical user interaction and define that interaction’s latency as the max duration of all its events.

Create an aggregate score for all interactions that occur on a page, throughout its full lifecycle.”

The Web.dev article states that the goal is to design a better metric that encompasses a more meaningful measurement of the user experience.

“We want to design a metric that better captures the end-to-end latency of individual events and offers a more holistic picture of the overall responsiveness of a page throughout its lifetime.

…With this new metric we plan to expand that to capture the full event duration, from initial user input until the next frame is painted after all the event handlers have run.

We also plan to measure interactions rather than individual events. Interactions are groups of events that are dispatched as part of the same, logical user gesture (for example: pointerdown, click, pointerup).”


It’s also explained like this:

“The event duration is meant to be the time from the event hardware timestamp to the time when the next paint is performed after the event is handled.

But if the event doesn’t cause any update, the duration will be the time from event hardware timestamp to the time when we are sure it will not cause any update.”
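
In practice, that full duration is what the standard Event Timing API already reports for each qualifying event. The sketch below simply logs it; it illustrates the measurement being described and is not Google’s own code.

```typescript
// Minimal sketch: observe full event durations via the Event Timing API.
// entry.duration runs from the event's hardware timestamp to the next paint
// after its handlers finish (rounded to 8 ms granularity by the browser).
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`${entry.name}: full duration ${entry.duration} ms`);
  }
}).observe({ type: 'event', durationThreshold: 16, buffered: true });
```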


Two Approaches To Interaction Latency Metric

Web.dev explains that the Chrome engineers are exploring two approaches for measuring the interaction latency:

  1. Maximum Event Duration
  2. Total Event Duration

Maximum Event Duration

An interaction consists of multiple events of varying durations. This measurement uses the largest single event duration in the group.

Total Event Duration

This is a sum of all the event durations.
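
As a rough illustration of the difference between the two approaches, the sketch below groups Event Timing entries by their interactionId (a field defined in the Event Timing spec, where 0 marks events that are not part of an interaction) and computes both candidate values. It is conceptual only, not Google’s implementation.

```typescript
// Minimal sketch: group event durations by interaction and aggregate them
// two ways (maximum vs. total), mirroring the two approaches named above.
type EventEntry = PerformanceEventTiming & { interactionId?: number };

const durationsByInteraction = new Map<number, number[]>();

new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as EventEntry[]) {
    if (!entry.interactionId) continue; // 0/undefined: not part of an interaction
    const group = durationsByInteraction.get(entry.interactionId) ?? [];
    group.push(entry.duration);
    durationsByInteraction.set(entry.interactionId, group);
  }
}).observe({ type: 'event', durationThreshold: 16, buffered: true });

// Approach 1 (Maximum Event Duration): the longest single event in the interaction.
const maxDuration = (durations: number[]) => Math.max(...durations);

// Approach 2 (Total Event Duration): the sum of all event durations in the interaction.
const totalDuration = (durations: number[]) => durations.reduce((s, d) => s + d, 0);
```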

Is FID Likely Going Away?

It’s possible that FID could remain part of Core Web Vitals, but what’s the point if every site scores 100% on it?


For that reason, it’s not unreasonable to assume that FID is going away sometime in the relatively near future.

The Chrome team is soliciting feedback on different approaches to measuring interaction latency. Now is the time to speak up.

Citations

HTTPArchive Web Almanac: CMS

Feedback wanted: An experimental responsiveness metric

Towards a better responsiveness metric

Searchenginejournal.com


NEWS

Gravatar “Breach” Exposes Data of 100+ Million Users

The breach notification service HaveIBeenPwned notified users that the profile information of 114 million Gravatar users had been leaked online in what it characterized as a data breach. Gravatar denies that it was hacked.

Here’s a screenshot of the email that was sent to HaveIBeenPwned users, characterizing the Gravatar event as a data breach:

[Screenshot: “Gravatar Breach” notification email]

Gravatar Enumeration Vulnerability

The user information of every person with a Gravatar account was open to being downloaded using software that “scrapes” the data.

While technically that is not a breach, the manner in which Gravatar stored user information made it easy for a person with malicious intent to obtain data that could then be used as part of another attack to gain passwords and access.


Gravatar accounts are public information. However, individual user profiles are not publicly listed in a way that can easily be browsed. Ordinarily, a person would have to know account information such as the username in order to find an account and its publicly available information.

A security researcher discovered in late 2020 that Gravatar user accounts were numbered sequentially. A news report from the time described how the researcher peeked into a JSON file linked from a profile page and found an ID number corresponding to the sequential number assigned to that user.

The problem with that identification number is that the profile could be reached directly by using it.

Because the numbers were assigned sequentially rather than generated randomly, anyone wishing to harvest all of the Gravatar usernames could do so simply by requesting and scraping the user profiles in numerical order.

Data Scraping Event

A data breach is generally defined as an unauthorized person gaining access to information that is not publicly available.


The Gravatar information was publicly available, but an outsider would have to know a Gravatar user’s username in order to reach that user’s profile. Additionally, the email address of each user was exposed only as an MD5 hash, a weak form of protection.

An MD5 hash is insecure and can easily be cracked, typically by hashing candidate values and comparing them to the leaked hash. Storing email addresses as unsalted MD5 hashes provided only minor protection.

That means that once an attacker downloaded the usernames and email MD5 hashes, it was a simple matter to recover the underlying email addresses.
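
To see why, here is a minimal Node.js sketch of a dictionary attack against an unsalted, Gravatar-style email hash (MD5 of the trimmed, lowercased address). The candidate list is illustrative; real attackers compare against large dictionaries of known email addresses.

```typescript
// Minimal sketch: recovering an email address from an unsalted MD5 hash
// by hashing candidate addresses and comparing. Illustrative only.
import { createHash } from 'node:crypto';

// Gravatar-style hash: MD5 of the trimmed, lowercased email address.
const md5Email = (email: string) =>
  createHash('md5').update(email.trim().toLowerCase()).digest('hex');

function recoverEmail(targetHash: string, candidates: string[]): string | null {
  for (const email of candidates) {
    if (md5Email(email) === targetHash) return email; // hash matched
  }
  return null;
}

// Usage: the "leaked" hash stands in for one scraped from a profile.
const leakedHash = md5Email('jane.doe@example.com');
console.log(recoverEmail(leakedHash, ['john@example.com', 'jane.doe@example.com']));
// -> jane.doe@example.com
```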

According to the security researcher who initially discovered the username enumeration vulnerability, Gravatar had “virtually no rate limiting,” which means a scraper bot could request millions of user profiles without being stopped or challenged for suspicious behavior.
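
For contrast, even a very simple per-client limit would have forced a scraper to slow down or reveal itself. Below is a minimal sketch of a fixed-window rate limiter; the thresholds and the checkRateLimit() helper are hypothetical and say nothing about how Gravatar’s systems actually work.

```typescript
// Minimal sketch: fixed-window rate limiting per client. Hypothetical values.
const WINDOW_MS = 60_000;  // one-minute window
const MAX_REQUESTS = 60;   // profile requests allowed per window

const windows = new Map<string, { start: number; count: number }>();

function checkRateLimit(clientId: string, now = Date.now()): boolean {
  const w = windows.get(clientId);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(clientId, { start: now, count: 1 });
    return true; // request allowed, new window started
  }
  w.count += 1;
  return w.count <= MAX_REQUESTS; // false once the budget is exhausted
}
```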

According to the news report from October 2020 that originally divulged the vulnerability:


“While data provided by Gravatar users on their profiles is already public, the easy user enumeration aspect of the service with virtually no rate limiting raises concerns with regards to the mass collection of user data.”

Gravatar Minimizes User Data Collection

Gravatar tweeted public statements that minimized the impact of the user information collection.

The last tweet in the series from Gravatar encouraged readers to learn how Gravatar works:

“If you want to learn more about how Gravatar works or adjust the data shared on your profile, please visit http://Gravatar.com.”

Ironically, Gravatar linked to the insecure HTTP version of the URL, and upon reaching it there was no redirect to a secure (HTTPS) version of the page, which only undermined the company’s effort to project a sense of security.
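
For reference, the missing redirect is a small amount of server configuration. Here is a minimal Node.js sketch of an HTTP-to-HTTPS redirect; it is illustrative only and says nothing about how Gravatar’s servers are actually configured.

```typescript
// Minimal sketch: redirect any plain-HTTP request to its HTTPS equivalent.
import { createServer } from 'node:http';

createServer((req, res) => {
  const host = req.headers.host ?? 'example.com'; // fallback host is illustrative
  res.writeHead(301, { Location: `https://${host}${req.url ?? '/'}` });
  res.end();
}).listen(80); // port 80 requires elevated privileges on most systems
```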

Twitter Users React

One Twitter user objected to the use of the word “breach” because the information was publicly available.

The person behind the HaveIBeenPwned website responded:

[embedded content]


Why The Gravatar Scraping Event Is Important

Troy Hunt, the person behind the HaveIBeenPwned website, explained in a series of tweets why the Gravatar scraping event is important.

Troy asserted that the data that users entrusted to Gravatar was used in a way that was unexpected.

Gravatar User Trust Eroded

Users Want Control Over Their Gravatar Information

Troy asserted that users want to be aware of how their information is used and accessed.

Were Gravatar Users Pwned?

An argument could be made that a Gravatar account can be public and yet not be something that should be easily harvested as step one of a hacking event by people with malicious intent.

Gravatar asserted that, after the enumeration vulnerability was disclosed, it took steps to close it and prevent further downloading of user information.


So on the one hand Gravatar took steps to prevent those with malicious intent from harvesting user information. But on the other hand the company said reports of Gravatar being hacked are misinformation.

But the fact is that HaveIBeenPwned did not call it a hacking event; it called it a breach.

An argument could be made that Gravatar’s use of the MD5 hash for storing email data was insecure, and the moment attackers cracked those weak hashes, the mass scraping of “public information” became a breach.

Many Gravatar users aren’t particularly happy and are looking for answers:

[embedded content]

Searchenginejournal.com


NEWS

Google Watches For How Sites Fit Into Overall Internet via @sejournal, @martinibuster

Google’s John Mueller answered a question about how long it takes for Google to recognize site quality. In the course of answering, he mentioned something that bears a closer look: the idea that it is important to Google to understand how a website fits into the context of the overall Internet.

How A Website Fits Into The Internet

John Mueller’s statement that Google seeks to understand a website’s fit into the overall Internet as part of evaluating site quality is short on details. Yet the emphasis he places on this point, and his statement that the assessment can take months to complete, imply that it is something important.

  • Is he talking about linking patterns?
  • Is he talking about the text of the content?

If it’s important to Google then it’s important for SEO.


How Long Does It Take To Reassess A Website?

The person asking the question used the example of a site that goes down for a period of time and asked how long it might take Google to restore traffic and so-called “authority,” which isn’t a metric Google uses.

This is the question about Google site quality:


“Are there any situations where Google negates a site’s authority that can’t be recovered, even if the cause has been rectified.

So, assuming that the cause was a short term turbulence with technical issues or content changes, how long for Google to reassess the website and fully restore authority, search position and traffic?

Does Google have a memory as such?”

How Google Determines Site Quality

Mueller first discusses the easy situation where a site goes down for a short period of time.

John Mueller’s answer:

“For technical things, I would say we pretty much have no memory in the sense that if we can’t crawl a website for awhile or if something goes missing for awhile and it comes back then we have that content again, we have that information again, we can show that again.

That’s something that pretty much picks up instantly again.

And this is something that I think we have to have because the Internet is sometimes very flaky and sometimes sites go offline for a week or even longer.

And they come back and it’s like nothing has changed but they fixed the servers.

And we have to deal with that and users are still looking for those websites.”


Overall Quality And Relevance Of A Website

Mueller next discusses the more difficult problem for Google of understanding the overall quality of a site and especially this idea of how a site fits into the rest of the Internet.

Mueller continues:

“I think it’s a lot trickier when it comes to things around quality in general where assessing the overall quality and relevance of a website is not very easy.

It takes a lot of time for us to understand how a website fits in with regards to the rest of the Internet.

And that means on the one hand it takes a lot of time for us to recognize that maybe something is not as good as we thought it was.

Similarly, it takes a lot of time for us to learn the opposite again.

And that’s something that can easily take, I don’t know, a couple of months, a half a year, sometimes even longer than a half a year, for us to recognize significant changes in the site’s overall quality.

Because we essentially watch out for …how does this website fit in with the context of the overall web and that just takes a lot of time.

So that’s something where I would say, compared to technical issues, it takes a lot longer for things to be refreshed in that regard.”

The Context Of A Site Within The Overall Web

How a site fits into the context of the overall web seems like the forest as opposed to the trees.

As SEOs and publishers, it seems we focus on the trees: headings, keywords, titles, site architecture, and inbound links.


But what about how the site fits into the rest of the Internet? Does that get considered? Is that a part of anyone’s internal site audit checklist?

Because the phrase “how a site fits into the overall Internet” is very general and can encompass a lot, I suspect it’s not always a top consideration in a site audit or in site planning.

Hypothetical Site Quality Assessment Of Example Site A

Let’s consider Example Site A. In the context of links, the phrase can refer to the sites that link to Example Site A, the sites Example Site A links out to, the interconnected network those links create, and what that network reflects in terms of topic and site quality.


That interconnected network might consist of sites or pages that are related by topic. Or it could be associated with spam through the sites that Example Site A links out to.

John Mueller could also be referring to the content itself: how that content differs from other sites on a similar topic, whether it includes more information, and whether it is better or worse in comparison with other sites.

And what are those other sites? Is the comparison against top-ranked sites, or against all normal, non-spam sites?

Mueller keeps referencing how Google tries to understand how a site fits within the overall web and it might be useful to know a little more.

Citation

Time It Takes For Google To Understand How Site Fits Into Overall Internet

Watch John Mueller discuss how Google evaluates site quality at the 22:37 mark:


[embedded content]

Searchenginejournal.com

