

Core Web Vitals Might Include Noindexed Pages


Google Webmaster Trends Analyst John Mueller answered questions about Core Web Vitals and how the scores are calculated. He also discussed the possibility of noindexed pages being used as part of the Core Web Vitals calculation in the new ranking signal that is coming soon.

Core Web Vitals

The Core Web Vitals are user experience metrics: a group of measurements that Google chose to represent how quickly a web page loads and how good an experience it presents to site visitors.

There are three Core Web Vitals metrics:

  1. Largest Contentful Paint (LCP)
    How fast a web page is perceived to load
  2. First Input Delay (FID)
    How soon a visitor can interact with a web page
  3. Cumulative Layout Shift (CLS)
    How stable web page elements (like buttons, text, and images) remain while the page loads, without shifting around
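For publishers who want to see these metrics for real visitors, Google publishes an open-source web-vitals JavaScript library that reports all three from the browser. A minimal sketch, assuming a recent version of the library and a hypothetical /analytics collection endpoint:

```typescript
// Minimal field-measurement sketch using Google's open-source `web-vitals`
// library (npm install web-vitals). The `/analytics` endpoint is a
// hypothetical placeholder for wherever you collect metrics.
import { onLCP, onFID, onCLS, type Metric } from 'web-vitals';

// Send each metric to a collection endpoint as it becomes available.
function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({ name: metric.name, value: metric.value });
  // `sendBeacon` survives page unload; fall back to fetch if unavailable.
  if (navigator.sendBeacon) {
    navigator.sendBeacon('/analytics', body);
  } else {
    fetch('/analytics', { body, method: 'POST', keepalive: true });
  }
}

onLCP(sendToAnalytics); // Largest Contentful Paint
onFID(sendToAnalytics); // First Input Delay
onCLS(sendToAnalytics); // Cumulative Layout Shift
```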

Those three metrics are scheduled to become ranking factors sometime in 2021. That is why many publishers and SEOs are concerned about how Google calculates the Core Web Vitals score: as a ranking factor, it may impact rankings in certain scenarios.


Screenshot of Google’s John Mueller discussing why noindexed pages might be used to calculate the Core Web Vitals score

Lab Data and Field Data

Knowing what lab data and field data are is key to understanding John Mueller’s answer.

Lab data, in reference to web vitals scores, is an estimate of the score, generated in a simulated environment.

The goal with lab data is to give a publisher an idea of what could be problematic.


Field data is a score based on actual site visitors under real-world conditions.

It’s the field data that Google will be using to calculate the associated ranking signal score.

Publishers worried about their ability to rank want to know how the field data is calculated:

  • Does Google use the actual score of each individual page?
  • Does Google use an average of several pages to calculate the Core Web Vitals score?
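One way to check what field data Google has for a given page or site is the Chrome UX Report (CrUX) API, which accepts either a url (page-level) or an origin (site-wide) in its request body. A minimal sketch, assuming a valid API key (YOUR_API_KEY is a placeholder):

```typescript
// Sketch: query the Chrome UX Report (CrUX) API for field data at the
// page level (`url`) or the aggregated level (`origin`). Requires an
// API key from Google Cloud; YOUR_API_KEY is a placeholder.
const ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=YOUR_API_KEY';

async function queryCrux(body: { url?: string; origin?: string }) {
  const res = await fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`CrUX API error: ${res.status}`);
  return res.json();
}

// Page-level field data, if Google has enough samples for this URL...
const page = await queryCrux({ url: 'https://example.com/some-page' });
// ...otherwise fall back to the origin-wide aggregate.
const site = await queryCrux({ origin: 'https://example.com' });
console.log(page, site);
```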

Noindex and Core Web Vitals

Noindex is a signal that a publisher can use to tell Google not to include a web page in Google’s search results.

According to Google’s official documentation:

“You can prevent a page from appearing in Google Search by including a noindex meta tag in the page’s HTML code, or by returning a noindex header in the HTTP request.

When Googlebot next crawls that page and sees the tag or header, Googlebot will drop that page entirely from Google Search results, regardless of whether other sites link to it.”
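For illustration, here are both ways to apply noindex: a meta tag in the page’s HTML, or an X-Robots-Tag response header. A minimal sketch, using Express purely as an example server:

```typescript
// Illustration only: the two equivalent ways to apply noindex, shown with
// Express as an example server (npm install express).
import express from 'express';

const app = express();

app.get('/slow-page', (_req, res) => {
  // Option 1: HTTP response header, works for any content type.
  res.setHeader('X-Robots-Tag', 'noindex');
  // Option 2: meta tag in the page's HTML <head>.
  res.send(
    '<html><head><meta name="robots" content="noindex"></head>' +
    '<body>This page will be dropped from Google Search.</body></html>'
  );
});

app.listen(3000);
```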

The question asked of Google’s John Mueller was whether a noindexed page will be used to calculate the web vitals score.

What made this question important was that the publisher was noindexing these pages because they were very slow, and did not want them included in the calculation of the Core Web Vitals score.

This is the first question:

“With regards to core web vitals, field data is going to be the one to pay attention to, correct (in terms of ranking signals)?”

John Mueller’s response:

“Yes, yes, it’s the field data.”

Google May Aggregate Pages for Core Web Vitals

In the follow-up question, Mueller revealed that Google may in some cases calculate the Core Web Vitals score from a grouping of multiple pages.

This is the question:

“When this becomes a ranking signal… is it going to be page level or domain level?”

Mueller answered:

“…What happens with the field data is we don’t have data points for every page.

So we, for the most part, we need to have kind of groupings of individual pages.

And depending on the amount of data that we have, that can be a grouping of the whole website (kind of the domain).

…I think in the Chrome User Experience Report they use the origin which would be the subdomain and the protocol there.

So that would be kind of the overarching kind of grouping.

And if we have more data for individual parts of a website then we’ll try to use that.

And I believe that’s something you also see in search console where we’ll show like one URL and say… there’s so many other pages that are associated with that. And that’s kind of the grouping that we would use there.”


Mueller is clear that the Core Web Vitals score may not always be calculated on a page-by-page basis.
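To make the “origin” grouping Mueller mentions concrete: an origin is the protocol plus the (sub)domain, the same value the standard URL API exposes:

```typescript
// The "origin" grouping is the protocol plus (sub)domain. Two pages on
// different subdomains belong to different origins:
console.log(new URL('https://www.example.com/fast-page').origin);
// -> "https://www.example.com"
console.log(new URL('https://slow.example.com/slow-page').origin);
// -> "https://slow.example.com"
```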

Will Slow Pages Affect Overall CWV Score?

The person asking the follow-up question then explained that they have a set of slow, noindexed pages, and asked whether those pages can impact the Core Web Vitals score.

“We gave this set of pages that they are slow. And these we have a noindex on them… they are very slow. And that’s why we don’t want it to be accounted for.”

Mueller responded:

“I don’t know for sure how we would do things with a noindex there. But it’s not something you can easily determine ahead of time.

Like, will we see this as one website or will we see it as different groupings there.

Sometimes with the Chrome User Experience Report data you can see like, Does Google have data points for those noindex pages? Does Google have data points for the other pages there?

And then you can kind of figure out like okay, it can recognize that there is separate kinds of pages and can treat them individually.

And if that’s the case, then I don’t see a problem with that.

If it’s a smaller website where we just don’t have a lot of signals for the website then those noindex pages could be playing a role there as well.

So I’m not 100% sure but my understanding is that in the Chrome User Experience Report data we do include all kinds of pages that users access.

So there’s no specific kind of, will this page be indexed like this or not check that happens there because the indexability is sometimes quite complex with regards to canonicals and all of that.

So it’s not trivial to determine… on the Chrome side if this page will be indexed or not.

It might be the case that if a page has a clear noindex then even in Chrome we would be able to recognize that. But I’m not 100% sure if we actually do that.

I would also check the Chrome User Experience Report data. I think you can download data into BigQuery and you can play with that a little bit and figure out how is that happening for other sites, for similar sites that kind of fall in the same category as the site that you’re working on.”
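Mueller’s BigQuery suggestion refers to the public CrUX dataset. A sketch of what that exploration might look like, assuming the chrome-ux-report public project, its YYYYMM monthly table naming, and the Node.js BigQuery client:

```typescript
// Sketch: pull origin-level CrUX field data from the public BigQuery
// dataset (project `chrome-ux-report`). Assumes the Node.js client
// (npm install @google-cloud/bigquery) and default GCP credentials.
import { BigQuery } from '@google-cloud/bigquery';

const bigquery = new BigQuery();

// LCP histogram for one origin in a monthly table (YYYYMM naming).
const query = `
  SELECT bin.start AS lcp_ms, SUM(bin.density) AS density
  FROM \`chrome-ux-report.all.202101\`,
    UNNEST(largest_contentful_paint.histogram.bin) AS bin
  WHERE origin = 'https://example.com'
  GROUP BY lcp_ms
  ORDER BY lcp_ms`;

const [rows] = await bigquery.query({ query });
console.table(rows);
```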


Pages that Users Access

While Mueller hedged by saying that he wasn’t 100% certain whether Google uses noindexed pages, he did affirm that the Chrome User Experience Report includes all kinds of pages (which in this context presumably includes noindexed pages).

The reason they are included is because, according to Mueller:

“…we do include all kinds of pages that users access.”

The logic behind using noindexed pages may be that if users can access a page, it is going to be measured: a user experiences a noindexed page like any other, regardless of whether that page is blocked from Google’s index.

Though Mueller wasn’t 100% certain, until there is further clarification it may be prudent to assume that noindexed pages will be measured as part of the Core Web Vitals ranking score.

Citation

Watch the Office Hours Hangout on Searchenginejournal.com



OpenAI Introduces Fine-Tuning for GPT-4, Enabling Customized AI Models


OpenAI has today announced the release of fine-tuning capabilities for its flagship GPT-4 large language model, marking a significant milestone in the AI landscape. This new functionality empowers developers to create tailored versions of GPT-4 to suit specialized use cases, enhancing the model’s utility across various industries.

Fine-tuning has long been a desired feature for developers who require more control over AI behavior, and with this update, OpenAI delivers on that demand. The ability to fine-tune GPT-4 allows businesses and developers to refine the model’s responses to better align with specific requirements, whether for customer service, content generation, technical support, or other unique applications.

Why Fine-Tuning Matters

GPT-4 is a very flexible model that can handle many different tasks. However, some businesses and developers need more specialized AI that matches their specific language, style, and needs. Fine-tuning helps with this by letting them adjust GPT-4 using custom data. For example, companies can train a fine-tuned model to keep a consistent brand tone or focus on industry-specific language.

Fine-tuning also offers improvements in areas like response accuracy and context comprehension. For use cases where nuanced understanding or specialized knowledge is crucial, this can be a game-changer. Models can be taught to better grasp intricate details, improving their effectiveness in sectors such as legal analysis, medical advice, or technical writing.

Key Features of GPT-4 Fine-Tuning

The fine-tuning process leverages OpenAI’s established tools, but now it is optimized for GPT-4’s advanced architecture. Notable features include:

  • Enhanced Customization: Developers can precisely influence the model’s behavior and knowledge base.
  • Consistency in Output: Fine-tuned models can be made to maintain consistent formatting, tone, or responses, essential for professional applications.
  • Higher Efficiency: Compared to training models from scratch, fine-tuning GPT-4 allows organizations to deploy sophisticated AI with reduced time and computational cost.

Additionally, OpenAI has emphasized ease of use with this feature. The fine-tuning workflow is designed to be accessible even to teams with limited AI experience, reducing barriers to customization. For more advanced users, OpenAI provides granular control options to achieve highly specialized outputs.
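A rough sketch of that workflow using the official openai Node.js SDK; the training file name and model string below are illustrative assumptions, not values from the announcement:

```typescript
// Sketch of the fine-tuning workflow with the official `openai` Node.js
// SDK (npm install openai). File name and model string are illustrative.
import fs from 'node:fs';
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// 1. Upload a JSONL file of chat-formatted training examples.
const file = await client.files.create({
  file: fs.createReadStream('brand-tone-examples.jsonl'),
  purpose: 'fine-tune',
});

// 2. Start a fine-tuning job on the uploaded data.
const job = await client.fineTuning.jobs.create({
  training_file: file.id,
  model: 'gpt-4o-2024-08-06', // illustrative model string
});

console.log(`Fine-tuning job started: ${job.id}`);
```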

Implications for the Future

The launch of fine-tuning capabilities for GPT-4 signals a broader shift toward more user-centric AI development. As businesses increasingly adopt AI, the demand for models that can cater to specific business needs, without compromising on performance, will continue to grow. OpenAI’s move positions GPT-4 as a flexible and adaptable tool that can be refined to deliver optimal value in any given scenario.

By offering fine-tuning, OpenAI not only enhances GPT-4’s appeal but also reinforces the model’s role as a leading AI solution across diverse sectors. From startups seeking to automate niche tasks to large enterprises looking to scale intelligent systems, GPT-4’s fine-tuning capability provides a powerful resource for driving innovation.

OpenAI announced that fine-tuning GPT-4o will cost $25 for every million tokens used during training. After the model is set up, it will cost $3.75 per million input tokens and $15 per million output tokens. To help developers get started, OpenAI is offering 1 million free training tokens per day for GPT-4o and 2 million free tokens per day for GPT-4o mini until September 23. This makes it easier for developers to try out the fine-tuning service.
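A quick back-of-the-envelope check of those rates (the token volumes below are made-up examples):

```typescript
// Back-of-the-envelope cost estimate from the published GPT-4o rates.
// Token counts are made-up example numbers.
const TRAIN_PER_M = 25;    // $ per 1M training tokens
const INPUT_PER_M = 3.75;  // $ per 1M input tokens
const OUTPUT_PER_M = 15;   // $ per 1M output tokens

const trainingTokens = 4_000_000; // e.g. a 4M-token training set
const monthlyInput = 10_000_000;  // e.g. 10M input tokens per month
const monthlyOutput = 2_000_000;  // e.g. 2M output tokens per month

const trainingCost = (trainingTokens / 1e6) * TRAIN_PER_M; // $100 one-time
const monthlyCost =
  (monthlyInput / 1e6) * INPUT_PER_M +    // $37.50
  (monthlyOutput / 1e6) * OUTPUT_PER_M;   // $30.00
console.log({ trainingCost, monthlyCost }); // { trainingCost: 100, monthlyCost: 67.5 }
```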

As AI continues to evolve, OpenAI’s focus on customization and adaptability with GPT-4 represents a critical step in making advanced AI accessible, scalable, and more aligned with real-world applications. This new capability is expected to accelerate the adoption of AI across industries, creating a new wave of AI-driven solutions tailored to specific challenges and opportunities.



This Week in Search News: Simple and Easy-to-Read Update


Here’s what happened in the world of Google and search engines this week:

1. Google’s June 2024 Spam Update

Google finished rolling out its June 2024 spam update over a period of seven days. This update aims to reduce spammy content in search results.

2. Changes to Google Search Interface

Google has removed the continuous scroll feature for search results. Instead, it’s back to the old system of pages.

3. New Features and Tests

  • Link Cards: Google is testing link cards at the top of AI-generated overviews.
  • Health Overviews: There are more AI-generated health overviews showing up in search results.
  • Local Panels: Google is testing AI overviews in local information panels.

4. Search Rankings and Quality

  • Improving Rankings: Google said it can improve its search ranking system but will only do so on a large scale.
  • Measuring Quality: Google’s Elizabeth Tucker shared how they measure search quality.

5. Advice for Content Creators

  • Brand Names in Reviews: Google advises not to avoid mentioning brand names in review content.
  • Fixing 404 Pages: Google explained when it’s important to fix 404 error pages.

6. New Search Features in Google Chrome

Google Chrome for mobile devices has added several new search features to enhance user experience.

7. New Tests and Features in Google Search

  • Credit Card Widget: Google is testing a new widget for credit card information in search results.
  • Sliding Search Results: When making a new search query, the results might slide to the right.

8. Bing’s New Feature

Bing is now using AI to write “People Also Ask” questions in search results.

9. Local Search Ranking Factors

Menu items and popular times might be factors that influence local search rankings on Google.

10. Google Ads Updates

  • Query Matching and Brand Controls: Google Ads updated its query matching and brand controls, and advertisers are happy with these changes.
  • Lead Credits: Google will automate lead credits for Local Service Ads. Google says this is a good change, but some advertisers are worried.
  • tROAS Insights Box: Google Ads is testing a new insights box for tROAS (Target Return on Ad Spend) in Performance Max and Standard Shopping campaigns.
  • WordPress Tag Code: There is a new conversion code for Google Ads on WordPress sites.

These updates highlight how Google and other search engines are continuously evolving to improve user experience and provide better advertising tools.



Facebook Faces Yet Another Outage: Platform Encounters Technical Issues Again


Updated: It seems that today’s issues with Facebook haven’t affected as many users as last time. A smaller group of people appears to be impacted this time around, which is a relief compared to the larger incident before. Nevertheless, it’s still frustrating for those affected, and hopefully the Facebook team will resolve the issues soon.

Facebook had another problem today (March 20, 2024). According to Downdetector, a website that shows when other websites are not working, many people had trouble using Facebook.

This isn’t the first time Facebook has had issues. Just a little while ago, another problem stopped people from using the site. Today, when people tried to use Facebook, it didn’t work as it should: people couldn’t see their friends’ posts, and sometimes the website wouldn’t even load.

Downdetector, which watches out for problems on websites, showed that lots of people were having trouble with Facebook. People from all over the world said they couldn’t use the site, and they were not happy about it.

When websites like Facebook have problems, it affects a lot of people. It’s not just about not being able to see posts or chat with friends. It can also impact businesses that use Facebook to reach customers.

Since Facebook owns Messenger and Instagram, the problems with Facebook also meant that people had trouble using these apps. It made the situation even more frustrating for many users, who rely on these apps to stay connected with others.

During this recent problem, one thing is obvious: the internet is always changing, and even big websites like Facebook can have problems. While people wait for Facebook to fix the issue, it shows us how easily things online can go wrong. It’s a good reminder that we should have backup plans for staying connected online, just in case something like this happens again.

