A Technical SEO Guide To Lighthouse Performance Metrics

Maybe you’re here because you’re a die-hard fan of performance metrics. Or maybe you don’t know what Lighthouse is and are too afraid to ask.

Either is an excellent option. Welcome!

Together, we’re hoping to take your performance improvement efforts from “make all the numbers green” to some clear and meaningful action items.

Note: This article was updated for freshness in January 2022 to represent versions 8 and 9.

Technical SEO and Google Data Studio nerd Rachel Anderson joined me on this merry adventure into demystifying developer documentation.


We’re going to answer:

  • What is Lighthouse?
  • How is Lighthouse different from Core Web Vitals?
  • Why doesn’t Lighthouse match Search Console/CrUX reports?
  • How is Performance Score calculated?
  • Why is my score different each time I test?
  • Lighthouse Performance metrics explained
  • How to test performance using Lighthouse

What Is Lighthouse?

Performance is about measuring how quickly a browser can assemble a webpage.

Lighthouse uses a web browser called Chromium to build pages and runs tests on the pages as they’re built.  The tool is open-source (meaning it is maintained by the community and free to use).

Each audit falls into one of five categories:

  1. Performance.
  2. Accessibility.
  3. Best Practices.
  4. SEO.
  5. Progressive Web App.
Screenshot from Lighthouse, January 2022

For the purposes of this article, we’re going to use the name Lighthouse to refer to the series of tests executed by the shared GitHub repo, regardless of the execution method.

Version 9 is currently out on GitHub and is slated for large-scale rollout with the stable Chrome 98 release in February 2022.

Lighthouse And Core Web Vitals

On May 5, 2020, the Chromium project announced a set of three metrics with which the Google-backed open-source browser would measure performance.

The metrics, known as Web Vitals, are part of a Google initiative designed to provide unified guidance for quality signals.


The goal of these metrics is to measure web performance in a user-centric manner.

Within two weeks, Lighthouse v6 rolled out with a modified version of Core Web Vitals at the heart of the update.

July 2020 saw Lighthouse v6’s unified metrics adopted across Google products with the release of Chrome 84.

Chrome DevTools’ Audits panel was renamed to Lighthouse. PageSpeed Insights and Google Search Console also reference these unified metrics.

This change in focus sets new, more refined goals.

How Is Lighthouse Different From Core Web Vitals?

The three Core Web Vitals metrics are part of Lighthouse’s performance scoring.


Largest Contentful Paint, Total Blocking Time, and Cumulative Layout Shift comprise 70% of Lighthouse’s weighted performance score.

The scores you’ll see for CWV in Lighthouse are the result of emulated tests.

It’s the same metric but measured off a single page load rather than calculated from page loads around the world.

Why Doesn’t Lighthouse Match Search Console/CrUX Reports?

For real users, how quickly a page assembles is based on factors like their network connection, the device’s processing power, and even the user’s physical distance from the site’s servers.

Lighthouse performance data doesn’t account for all these factors.

Instead, the tool emulates a mid-range device and throttles CPU in order to simulate the average user.


These are lab tests collected within a controlled environment with predefined device and network settings.

Lab data is helpful for debugging performance issues.

It does not mean that the experience on your local machine in a controlled environment represents the experiences of real humans in the wild.

The good news is you don’t have to choose between Lighthouse and Core Web Vitals. They’re designed to be part of the same workflow.

Always start with field data from the Chrome User Experience Report to identify issues impacting real users.

Then leverage the expanded testing capabilities of Lighthouse to identify the code causing the issue.


If you’re working on a site pre-launch or QAing changes in a non-public environment, Lighthouse will be your new best #webperf friend.

Workflow for performance. Screenshot from Lighthouse, January 2022

How Is The Lighthouse Performance Score Calculated?

Performance scores from Lighthouse. Screenshot from Lighthouse, January 2022

In versions 8 and 9, Lighthouse’s performance score is made up of six metrics, each contributing a percentage of the total performance score.

Lighthouse metrics. Created by author, January 2022

Why Is My Score Different Each Time I Test?

Your score may change each time you test.

Browser extensions, internet connection, A/B tests, or even the ads displayed on that specific page load have an impact.

If you’re curious/furious to know more, check out the documentation on performance testing variability.

Lighthouse Performance Metrics Explained

Largest Contentful Paint (LCP)

  • What it represents: A user’s perception of loading experience.
  • Lighthouse Performance score weighting: 25%
  • What it measures: The point in the page load timeline when the page’s largest image or text block is visible within the viewport.
  • How it’s measured: Lighthouse extracts LCP data from Chrome’s tracing tool.
  • Is Largest Contentful Paint a Core Web Vital? Yes!

LCP Scoring

  • Goal: Achieve LCP in < 2.5 seconds.

LCP measurements. Created by author, January 2022

What Elements Can Be Part Of LCP?

  • Text.
  • Images.
  • Videos.
  • Background images.

What Counts As LCP On Your Page?

It depends! LCP typically varies by page template.

This means that you can measure a handful of pages using the same template and define the LCP for that template.

Lighthouse will provide you with the exact HTML of the LCP element, but it can be useful to know the node as well when communicating with developers.


The node name will be consistent while the exact on-page image or text may change depending on which content is rendered by the template.

How To Define LCP Using Chrome Devtools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Hover over the LCP marker in the Timings section.
  4. The element(s) that correspond to LCP are detailed in the Related Node field (a console snippet for checking this is sketched below).
Created by author, January 2022
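
If you’d rather confirm the LCP element with a few lines of script, the browser exposes the same data through the PerformanceObserver API. Here’s a minimal sketch (paste it into the DevTools Console; the buffered flag replays entries that already fired) that logs each LCP candidate and the element behind it:

  // Log each LCP candidate with its time and the element responsible.
  new PerformanceObserver((entryList) => {
    const entries = entryList.getEntries();
    const lastEntry = entries[entries.length - 1]; // the most recent LCP candidate
    console.log('LCP (ms):', lastEntry.startTime, 'element:', lastEntry.element);
  }).observe({ type: 'largest-contentful-paint', buffered: true });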

What Causes Poor LCP?

Poor LCP typically comes from four issues:

  1. Slow server response times.
  2. Render-blocking JavaScript and CSS.
  3. Resource load times.
  4. Client-side rendering.

How To Fix Poor LCP

If the cause is slow server response time:

  • Optimize your server.
  • Route users to a nearby CDN.
  • Cache assets.
  • Serve HTML pages cache-first.
  • Establish third-party connections early.

If the cause is render-blocking JavaScript and CSS:

  • Minify CSS.
  • Defer non-critical CSS.
  • Inline critical CSS.
  • Minify and compress JavaScript files.
  • Defer unused JavaScript.
  • Minimize unused polyfills.

If the cause is resource load times:

  • Optimize and compress images.
  • Preload important resources.
  • Compress text files.
  • Deliver different assets based on the network connection (adaptive serving).
  • Cache assets using a service worker.

If the cause is client-side rendering:

  • Minimize critical JavaScript.
  • Consider server-side rendering or pre-rendering.


Total Blocking Time (TBT)

  • What it represents: Responsiveness to user input.
  • Lighthouse Performance score weighting: 30%
  • What it measures: TBT measures the time between First Contentful Paint and Time to Interactive. TBT is the lab equivalent of First Input Delay (FID) – the field data used in the Chrome User Experience Report and Google’s upcoming Page Experience ranking signal.
  • How it’s measured: The total time in which the main thread is occupied by tasks taking more than 50ms to complete. If a task takes 80ms to run, 30ms of that time will be counted toward TBT. If a task takes 45ms to run, 0ms will be added to TBT (see the sketch after this list).
  • Is Total Blocking Time a Core Web Vital? Yes! It’s the lab data equivalent of First Input Delay (FID).
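
You can watch that 50ms budget in action with the browser’s Long Tasks API. The sketch below is an approximation, not Lighthouse’s internal implementation (Lighthouse only counts long tasks between First Contentful Paint and Time to Interactive); it simply sums the blocking portion of each long task as the page runs:

  // Sum the portion of each long task that exceeds the 50ms budget.
  let blockingTime = 0;
  new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      blockingTime += Math.max(entry.duration - 50, 0); // an 80ms task adds 30ms
    }
    console.log('Blocking time so far (ms):', Math.round(blockingTime));
  }).observe({ type: 'longtask', buffered: true });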

TBT Scoring

  • Goal: Achieve TBT score of less than 300 milliseconds.
Created by author, January 2022

First Input Delay, the field data equivalent to TBT, has different thresholds.

FID time in milliseconds. Created by author, January 2022

Long Tasks And Total Blocking Time

TBT measures long tasks – those taking longer than 50ms.


When a browser loads your site, there is essentially a single line queue of scripts waiting to be executed.

Any input from the user has to go into that same queue.

When the browser can’t respond to user input because other tasks are executing, the user perceives this as lag.

Essentially, long tasks are like that person at your favorite coffee shop who takes far too long to order a drink.

Like someone ordering a 2% venti four-pump vanilla, five-pump mocha whole-fat froth, long tasks are a major source of bad experiences.

Short tasks vs. long tasks. Screenshot by author, January 2022

What Causes A High TBT On Your Page?

Heavy JavaScript.


That’s it.

How To See TBT Using Chrome Devtools

Screenshot from Chrome Devtools, January 2022

How To Fix Poor TBT

  • Break up Long Tasks (see the sketch after this list).
  • Optimize your page for interaction readiness.
  • Use a web worker.
  • Reduce JavaScript execution time.
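
To illustrate the first item, one common pattern is to split a big chunk of work into smaller pieces and yield back to the main thread between them. This is a rough sketch; processItem is a hypothetical stand-in for whatever work your script actually does:

  // Process a large array in small chunks, yielding between chunks so the
  // browser can handle user input instead of queuing it behind one long task.
  async function processInChunks(items, chunkSize = 100) {
    for (let i = 0; i < items.length; i += chunkSize) {
      items.slice(i, i + chunkSize).forEach(processItem); // processItem is a placeholder
      await new Promise((resolve) => setTimeout(resolve, 0)); // yield to the main thread
    }
  }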


First Contentful Paint (FCP)

  • What it represents: FCP marks the time at which the first text or image is painted (visible).
  • Lighthouse Performance score weighting: 10%
  • What it measures: The time when I can see the page I requested is responding. My thumb can stop hovering over the back button.
  • How it’s measured: Your FCP score in Lighthouse is measured by comparing your page’s FCP to FCP times for real website data stored by the HTTP Archive.
  • Your FCP score increases if your page is faster than other pages in the HTTP Archive.
  • Is First Contentful Paint a Core Web Vital? No.

FCP Scoring

  • Goal: Achieve FCP in < 2 seconds.
FCP time. Created by author, January 2022

What Elements Can Be Part Of FCP?

The time it takes to render the first visible element to the DOM is the FCP.

Anything that happens before the first element renders non-white content to the page (excluding iframes) counts toward FCP.

Since iframes are not considered part of FCP, if they are the first content to render, FCP will continue counting until the first non-iframe content loads, but the iframe load time isn’t counted toward the FCP.

The documentation around FCP also calls out that it is often impacted by font load time, and there are tips for improving font loads.


How To See FCP Using Chrome Devtools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Click on the FCP marker in the Timings section.
  4. The summary tab has a timestamp with the FCP in ms.

How To Improve FCP

In order for content to be displayed to the user, the browser must first download, parse, and process all external stylesheets it encounters before it can render any content to the screen.

The fastest way to bypass the delay of external resources is to use in-line styles for above-the-fold content.

To keep your site sustainably scalable, use an automated tool like penthouse or Apache’s mod_pagespeed.

These solutions will come with some restrictions to functionalities, require testing, and may not be for everyone.

Universally, we can all improve our site’s time to First Contentful Paint by reducing the scope and complexity of style calculations.


If a style isn’t being used, remove it.

You can identify unused CSS with Chrome DevTools’ built-in Code Coverage functionality.

Use better data to make better decisions.

Similar to TTI, you can capture real user metrics for FCP using Google Analytics to correlate improvements with KPIs.
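
As a sketch of what that could look like, the snippet below observes the first-contentful-paint entry and forwards it to Google Analytics via gtag(). The event name and parameters are placeholders, and it assumes the gtag snippet is already installed on the page:

  new PerformanceObserver((entryList) => {
    const fcpEntry = entryList.getEntriesByName('first-contentful-paint')[0];
    if (fcpEntry) {
      // Send the metric to your analytics of choice; gtag() is just one option.
      gtag('event', 'FCP', { value: Math.round(fcpEntry.startTime) });
    }
  }).observe({ type: 'paint', buffered: true });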


Speed Index

  • What it represents: How much is visible at a time during load.
  • Lighthouse Performance score weighting: 10%
  • What it measures: The Speed Index is the average time at which visible parts of the page are displayed.
  • How it’s measured: Lighthouse’s Speed Index measurement comes from a node module called Speedline.

You’ll have to ask the kindly wizards at webpagetest.org for the specifics but roughly, Speedline scores vary by the size of the viewport (read as device screen) and have an algorithm for calculating the completeness of each frame.

Speed Index measurements. Screenshot by author, January 2022
  • Is Speed Index a Core Web Vital? No.

SI Scoring

  • Goal: achieve SI in < 4.3 seconds.
Speed Index metrics. Created by author, January 2022

How To Improve SI

Your Speed Index score reflects your site’s Critical Rendering Path.


A “critical” resource means that the resource is required for the first paint or is crucial to the page’s core functionality.

The longer and denser the path, the slower your site will be to provide a visual page.

If your path is optimized, you’ll give users content faster and score higher on Speed Index.

How The Critical Path Affects Rendering

Optimized rendering vs. unoptimized times. Screenshot by author, January 2022

Lighthouse recommendations commonly associated with a slow Critical Rendering Path include:

  • Minimize main-thread work.
  • Reduce JavaScript execution time.
  • Minimize Critical Requests Depth.
  • Eliminate Render-Blocking Resources.
  • Defer offscreen images.


Time To Interactive (TTI)

  • What it represents: Load responsiveness; identifying where a page looks responsive but isn’t yet.
  • Lighthouse Performance score weighting: 10%
  • What it measures: The time from when the page begins loading to when its main resources have loaded and are able to respond to user input.
  • How it’s measured: TTI measures how long it takes a page to become fully interactive. A page is considered fully interactive when:

1. The page displays useful content, which is measured by the First Contentful Paint.

2. Event handlers are registered for most visible page elements.

3. The page responds to user interactions within 50 milliseconds.

  • Is Time to Interactive a Core Web Vital? No.

TTI Scoring

Goal: achieve TTI score of less than 3.8 seconds.

TTI scoring system. Created by author, January 2022


Cumulative Layout Shift (CLS)

  • What it represents: A user’s perception of a page’s visual stability.
  • Lighthouse Performance score weighting: 15%
  • What it measures: It quantifies shifting page elements through the end of page load.
  • How it’s measured: Unlike other metrics, CLS isn’t measured in time. Instead, it’s a calculated metric based on the number of frames in which elements move and the total distance in pixels the elements moved.
CLS Layout Score formula. Created by author, January 2022
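
For a single shift, the formula works out to:

  layout shift score = impact fraction × distance fraction

As a hypothetical example: if an element taking up half the viewport gets pushed down by 25% of the viewport height, the union of its old and new positions covers 75% of the viewport (impact fraction 0.75) and it moved 25% of the height (distance fraction 0.25), so that single shift scores 0.75 × 0.25 ≈ 0.19, already above the 0.1 goal. The page’s CLS then aggregates individual shift scores across the load (newer versions of the metric group shifts into windows and report the worst one).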

CLS Scoring

  • Goal: achieve CLS score of less than 0.1.
CLS Scoring system. Created by author, January 2022

What Elements Can Be Part Of CLS?

Any visual element that appears above the fold at some point in the load.

That’s right – if you’re loading your footer first and then the hero content of the page, your CLS is going to hurt.

Causes Of Poor CLS

  • Images without dimensions.
  • Ads, embeds, and iframes without dimensions.
  • Dynamically injected content.
  • Web Fonts causing FOIT/FOUT.
  • Actions waiting for a network response before updating DOM.

How To Define CLS Using Chrome Devtools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Hover and move from left to right over the screenshots of the load (make sure the screenshots checkbox is checked).
  4. Watch for elements bouncing around after the first paint to identify elements causing CLS (a console snippet for this follows).
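
If you prefer a script-based check, the Layout Instability API reports the same shifts. This minimal sketch logs every unexpected shift (shifts caused by recent user input are excluded, just as they are from CLS) along with the elements that moved:

  new PerformanceObserver((entryList) => {
    for (const entry of entryList.getEntries()) {
      if (!entry.hadRecentInput) {
        console.log('Layout shift score:', entry.value, entry.sources);
      }
    }
  }).observe({ type: 'layout-shift', buffered: true });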

How To Improve CLS

Once you identify the element(s) at fault, you’ll need to update them to be stable during the page load.

For example, if slow-loading ads are causing a high CLS score, you may want to use placeholder images of the same size to fill that space as the ad loads to prevent the page from shifting.


Some common ways to improve CLS include:

  • Always include width and height attributes on images and video elements.
  • Reserve space for ad slots (and don’t collapse it).
  • Avoid inserting new content above existing content.
  • Take care when placing non-sticky ads near the top of the viewport.
  • Preload fonts.


How To Test Performance Using Lighthouse

Methodology Matters

Out of the box, Lighthouse audits a single page at a time.

A single page score doesn’t represent your site, and a fast homepage doesn’t mean a fast site.

Test multiple page types within your site.

Identify your major page types, templates, and goal conversion points (signup, subscribe, and checkout pages).

If 40% of your site is blog posts, make 40% of your testing URLs blog pages!


Example Page Testing Inventory

Example Page Testing Inventory. Created by author, January 2022

Before you begin optimizing, run Lighthouse on each of your sample pages and save the report data.

Record your scores and the to-do list of improvements.

Prevent data loss by saving the JSON results and utilizing Lighthouse Viewer when detailed result information is needed.
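
If you’re running Lighthouse from the command line (covered below), one way to save that JSON is with the output flags; the URL and file name here are placeholders:

  lighthouse https://www.example.com/ --output=json --output-path=./homepage-report.json

The saved file can then be loaded into the Lighthouse Viewer whenever you need the full detail.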

Get Your Backlog to Bite Back Using ROI

Getting development resources to action SEO recommendations is hard.

An in-house SEO professional could destroy their pancreas by having a birthday cake for every backlogged ticket’s birthday. Or at least learn to hate cake.


In my experience as an in-house enterprise SEO pro, the trick to getting performance initiatives prioritized is having the numbers to back the investment.

This starting data will become dollar signs that serve to justify and reward development efforts.

With Lighthouse testing, you can recommend specific and direct changes (think: preload this font file) and associate the change with a specific metric.

Chances are you’re going to have more than one area flagged during tests. That’s okay!

If you’re wondering which changes will have the most bang for the buck, check out the Lighthouse Scoring Calculator.

How To Run Lighthouse Tests

This is a case of many roads leading to Oz.


Sure, some scarecrow might be particularly loud about a certain shade of brick, but it’s about your goals.

Looking to test an entire staging site? Time to learn some NPM.

Have less than five minutes to prep for a prospective client meeting? A couple of one-off reports should do the trick.

Whichever way you execute, default to mobile unless you have a special use-case for desktop.

For One-Off Reports: PageSpeed Insights

Test one page at a time on PageSpeed Insights. Simply enter the URL.

Lab and field data available in PageSpeed Insights. Screenshot from PageSpeed Insights, January 2022

Pros Of Running Lighthouse From PageSpeed Insights

  • Detailed Lighthouse report is combined with URL-specific data from the Chrome User Experience Report.
  • Opportunities and Diagnostics can be filtered to specific metrics.  This is exceptionally useful when creating tickets for your engineers and tracking the resulting impact of the changes.
  • PageSpeed Insights is already running version 9.
    PageSpeed Insights opportunities and diagnostics filtered by metric. Screenshot from PageSpeed Insights, January 2022

Cons Of Running Lighthouse From PageSpeed Insights

  • One report at a time.
  • Only Performance tests are run (if you need SEO, Accessibility, or Best Practices, you’ll need to run those separately)
  • You can’t test local builds or authenticated pages.
  • Reports can’t be saved in JSON, HTML, or Gist format. (Saving as a PDF via the browser is an option.)
  • Requires you to manually save results.

For Comparing Test Results: Chrome DevTools Or Web.dev

Because the report will be emulating a user’s experience using your browser, use an incognito instance with all extensions disabled and the browser’s cache disabled.

Pro-tip: Create a Chrome profile for testing. Keep it local (no sync enabled, password saving, or association to an existing Google account) and don’t install extensions for the user.


How To Run A Lighthouse Test Using Chrome DevTools

  1. Open an incognito instance of Chrome.
  2. Navigate to the Network panel of Chrome Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Tick the box to disable cache.
  4. Navigate to the Lighthouse panel.
  5. Click Generate Report.
  6. Click the dots to the right of the URL in the report.
  7. Save in your preferred format (JSON, HTML, or Gist).
    Save options for Lighthouse Reports. Screenshot from Lighthouse Reports, January 2022

Note that your version of Lighthouse may change depending on what version of Chrome you’re using. v8.5 is used on Chrome 97.

Lighthouse v9 will ship with DevTools in Chrome 98.

How To Run A Lighthouse Test Using Web.dev

It’s just like DevTools but you don’t have to remember to disable all those pesky extensions!

  1. Go to web.dev/measure.
  2. Enter your URL.
  3. Click Run Audit.
  4. Click View Report.
    web.dev view report option. Screenshot by author, January 2022

Pros Of Running Lighthouse From DevTools/web.dev

  • You can test local builds or authenticated pages.
  • Saved reports can be compared using the Lighthouse CI Diff tool.
    Lighthouse CI Diff tool. Screenshot from Lighthouse CI Diff, January 2022

Cons Of Running Lighthouse From DevTools/web.dev

  • One report at a time.
  • Requires you to manually save results.

For Testing At Scale (and Sanity): Node Command Line

1. Install npm.
(Mac Pro-tip: Use Homebrew to avoid obnoxious dependency issues.)

2. Install the Lighthouse node module with npm:

npm install -g lighthouse

3. Run a single test with:

lighthouse <url>

4. Run tests on lists of URLs by running tests programmatically (a sketch follows).
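
Here’s a minimal sketch of that programmatic approach, assuming the lighthouse and chrome-launcher packages are installed. The URL list and file naming are placeholders to adapt to your own page inventory:

  const fs = require('fs');
  const lighthouse = require('lighthouse');
  const chromeLauncher = require('chrome-launcher');

  const urls = [
    'https://www.example.com/',
    'https://www.example.com/blog/sample-post/',
  ];

  (async () => {
    const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
    const options = { output: 'json', onlyCategories: ['performance'], port: chrome.port };

    for (let i = 0; i < urls.length; i++) {
      const result = await lighthouse(urls[i], options);
      const score = Math.round(result.lhr.categories.performance.score * 100);
      console.log(`${urls[i]}: ${score}`);
      // Save the full JSON report so it can be opened in the Lighthouse Viewer later.
      fs.writeFileSync(`report-${i}.json`, result.report);
    }

    await chrome.kill();
  })();

Pair a script like this with a scheduler or CI job and you have the “track change over time” setup mentioned below.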


Pros Of Running Lighthouse From Node

  • Many reports can be run at once.
  • Can be set to run automatically to track change over time.

Cons Of Running Lighthouse From Node

  • Requires some coding knowledge.
  • More time-intensive setup.

Conclusion

The complexity of performance metrics reflects the challenges facing all sites.

We use performance metrics as a proxy for user experience – that means factoring in some unicorns.

Tools like Google’s Test My Site and What Does My Site Cost? can help you make the conversion and customer-focused arguments for why performance matters.

Hopefully, once your project has traction, these definitions will help you translate Lighthouse’s single performance metric into action tickets for a skilled and collaborative engineering team.

Track your data and shout it from the rooftops.

As much as Google struggles to quantify qualitative experiences, SEO professionals and devs must decode how to translate a concept into code.


Test, iterate, and share what you learn! I look forward to seeing what you’re capable of, you beautiful unicorn.



Featured Image: Paulo Bobita/Search Engine Journal





Big Update To Google’s Ranking Drop Documentation

Google updated their guidance with five changes on how to debug ranking drops. The new version contains over 400 more words that address small and large ranking drops. There’s room to quibble about some of the changes but overall the revised version is a step up from what it replaced.

Change #1: Downplays Fixing Traffic Drops

The opening sentence was changed so that it offers less hope for bouncing back from an algorithmic traffic drop. Google also joined two sentences into one sentence in the revised version of the documentation.

The documentation previously said that most traffic drops can be reversed and that identifying the reasons for a drop isn’t straightforward. The part about most of them being reversible was completely removed.

Here are the original two sentences:

“A drop in organic Search traffic can happen for several reasons, and most of them can be reversed. It may not be straightforward to understand what exactly happened to your site”

Now there’s no hope offered that “most of them can be reversed,” and more emphasis on the fact that understanding what happened is not straightforward.


This is the new guidance:

“A drop in organic Search traffic can happen for several reasons, and it may not be straightforward to understand what exactly happened to your site.”

Change #2: Security Or Spam Issues

Google updated the traffic graph illustrations so that they precisely align with the causes for each kind of traffic decline.

The previous version of the graph was labeled:

“Site-level technical issue (Manual Action, strong algorithmic changes)”

The problem with the previous label is that manual actions and strong algorithmic changes are not technical issues, and the new version fixes that.

The updated version now reads:

“Large drop from an algorithmic update, site-wide security or spam issue”

Change #3: Technical Issues

There’s one more change to a graph label, also to make it more accurate.


This is how the previous graph was labeled:

“Page-level technical issue (algorithmic changes, market disruption)”

The updated graph is now labeled:

“Technical issue across your site, changing interests”

Now the graph and label are more specific about a sitewide change, and “changing interests” is more general, covering a wider range of changes than market disruption. Changing interests includes market disruption (where a new product makes a previous one obsolete or less desirable), but it also includes products that go out of style or lose their trendiness.


Change #4: Google Adds New Guidance For Algorithmic Changes

The biggest change by far is their brand new section for algorithmic changes which replaces two smaller sections, one about policy violations and manual actions and a second one about algorithm changes.

The old version of this one section had 108 words. The updated version contains 443 words.

A section that’s particularly helpful is where the guidance splits algorithmic update damage into two categories.


Two New Categories:

  • Small drop in position? For example, dropping from position 2 to 4.
  • Large drop in position? For example, dropping from position 4 to 29.

The two new categories are perfect and align with what I’ve seen in the search results for sites that have lost rankings. The reasons for moving up and down within the top ten are different from the reasons a site drops completely out of the top ten.

I don’t agree with the guidance for large drops. They recommend reviewing your site for large drops, which is good advice for some sites that have lost rankings. But in other cases there’s nothing wrong with the site, and this is where less experienced SEOs tend to get stuck, because there’s nothing to fix. Recommendations for improving EEAT, adding author bios, or filing link disavows don’t solve what’s going on, because there’s nothing wrong with the site. In some of these cases, the problem is something else entirely.

Here is the new guidance for debugging search position drops:

Algorithmic update
Google is always improving how it assesses content and updating its search ranking and serving algorithms accordingly; core updates and other smaller updates may change how some pages perform in Google Search results. We post about notable improvements to our systems on our list of ranking updates page; check it to see if there’s anything that’s applicable to your site.

If you suspect a drop in traffic is due to an algorithmic update, it’s important to understand that there might not be anything fundamentally wrong with your content. To determine whether you need to make a change, review your top pages in Search Console and assess how they were ranking:

Small drop in position? For example, dropping from position 2 to 4.
Large drop in position? For example, dropping from position 4 to 29.

Keep in mind that positions aren’t static or fixed in place. Google’s search results are dynamic in nature because the open web itself is constantly changing with new and updated content. This constant change can cause both gains and drops in organic Search traffic.

Small drop in position
A small drop in position is when there’s a small shift in position in the top results (for example, dropping from position 2 to 4 for a search query). In Search Console, you might see a noticeable drop in traffic without a big change in impressions.


Small fluctuations in position can happen at any time (including moving back up in position, without you needing to do anything). In fact, we recommend avoiding making radical changes if your page is already performing well.

Large drop in position
A large drop in position is when you see a notable drop out of the top results for a wide range of terms (for example, dropping from the top 10 results to position 29).

In cases like this, self-assess your whole website overall (not just individual pages) to make sure it’s helpful, reliable and people-first. If you’ve made changes to your site, it may take time to see an effect: some changes can take effect in a few days, while others could take several months. For example, it may take months before our systems determine that a site is now producing helpful content in the long term. In general, you’ll likely want to wait a few weeks to analyze your site in Search Console again to see if your efforts had a beneficial effect on ranking position.

Keep in mind that there’s no guarantee that changes you make to your website will result in noticeable impact in search results. If there’s more deserving content, it will continue to rank well with our systems.”

Change #5: Trivial Changes

The rest of the changes are relatively trivial but nonetheless make the documentation more precise.

For example, one of the headings was changed from this:


You recently moved your site

To this new heading:

Site moves and migrations

Google’s Updated Ranking Drops Documentation

Google’s updated documentation is well thought out, but I think the recommendations for large algorithmic drops are helpful in some cases and not in others. I have 25 years of SEO experience and have experienced every single Google algorithm update. There are certain updates where the problem is not solved by trying to fix things, and Google’s guidance used to be that sometimes there’s nothing to fix. The documentation is better, but in my opinion it can be improved even further.

Read the new documentation here:

Debugging drops in Google Search traffic

Review the previous documentation:

Internet Archive Wayback Machine: Debugging drops in Google Search traffic


Featured Image by Shutterstock/Tomacco


Google March 2024 Core Update Officially Completed A Week Ago

Google has officially completed its March 2024 Core Update, ending over a month of ranking volatility across the web.

However, Google didn’t confirm the rollout’s conclusion on its data anomaly page until April 26—a whole week after the update was completed on April 19.

Many in the SEO community had been speculating for days about whether the turbulent update had wrapped up.

The delayed transparency exemplifies Google’s communication issues with publishers and the need for clarity during core updates.

Google March 2024 Core Update Timeline & Status

First announced on March 5, the core algorithm update is complete as of April 19. It took 45 days to complete.


Unlike more routine core refreshes, Google warned this one was more complex.

Google’s documentation reads:

“As this is a complex update, the rollout may take up to a month. It’s likely there will be more fluctuations in rankings than with a regular core update, as different systems get fully updated and reinforce each other.”

The aftershocks were tangible, with some websites reporting losses of over 60% of their organic search traffic, according to data from industry observers.

The ripple effects also led to the deindexing of hundreds of sites that were allegedly violating Google’s guidelines.

Addressing Manipulation Attempts

In its official guidance, Google highlighted the criteria it looks for when targeting link spam and manipulation attempts:

  • Creating “low-value content” purely to garner manipulative links and inflate rankings.
  • Links intended to boost sites’ rankings artificially, including manipulative outgoing links.
  • The “repurposing” of expired domains with radically different content to game search visibility.

The updated guidelines warn:

“Any links that are intended to manipulate rankings in Google Search results may be considered link spam. This includes any behavior that manipulates links to your site or outgoing links from your site.”

John Mueller, a Search Advocate at Google, responded to the turbulence by advising publishers not to make rash changes while the core update was ongoing.


However, he suggested sites could proactively fix issues like unnatural paid links.

Mueller stated on Reddit:

“If you have noticed things that are worth improving on your site, I’d go ahead and get things done. The idea is not to make changes just for search engines, right? Your users will be happy if you can make things better even if search engines haven’t updated their view of your site yet.”

Emphasizing Quality Over Links

The core update made notable changes to how Google ranks websites.

Most significantly, Google reduced the importance of links in determining a website’s ranking.

In contrast to the description of links as “an important factor in determining relevancy,” Google’s updated spam policies stripped away the “important” designation, simply calling links “a factor.”

This change aligns with Google’s Gary Illyes’ statements that links aren’t among the top three most influential ranking signals.


Instead, Google is giving more weight to quality, credibility, and substantive content.

Consequently, long-running campaigns favoring low-quality link acquisition and keyword optimizations have been demoted.

With the update complete, SEOs and publishers are left to audit their strategies and websites to ensure alignment with Google’s new perspective on ranking.

Core Update Feedback

Google has opened a ranking feedback form related to this core update.

You can use this form until May 31 to provide feedback to Google’s Search team about any issues noticed after the core update.

While the feedback provided won’t be used to make changes for specific queries or websites, Google says it may help inform general improvements to its search ranking systems for future updates.


Google also updated its help documentation on “Debugging drops in Google Search traffic” to help people understand ranking changes after a core update.


Featured Image: Rohit-Tripathi/Shutterstock

FAQ

After the update, what steps should websites take to align with Google’s new ranking criteria?

After Google’s March 2024 Core Update, websites should:

  • Improve the quality, trustworthiness, and depth of their website content.
  • Stop heavily focusing on getting as many links as possible and prioritize relevant, high-quality links instead.
  • Fix any shady or spam-like SEO tactics on their sites.
  • Carefully review their SEO strategies to ensure they follow Google’s new guidelines.


Google Declares It The “Gemini Era” As Revenue Grows 15%

Alphabet Inc., Google’s parent company, announced its first quarter 2024 financial results today.

While Google reported double-digit growth in key revenue areas, the focus was on its AI developments, dubbed the “Gemini era” by CEO Sundar Pichai.

The Numbers: 15% Revenue Growth, Operating Margins Expand

Alphabet reported Q1 revenues of $80.5 billion, a 15% increase year-over-year, exceeding Wall Street’s projections.

Net income was $23.7 billion, with diluted earnings per share of $1.89. Operating margins expanded to 32%, up from 25% in the prior year.

Ruth Porat, Alphabet’s President and CFO, stated:


“Our strong financial results reflect revenue strength across the company and ongoing efforts to durably reengineer our cost base.”

Google’s core advertising units, such as Search and YouTube, drove growth. Google advertising revenues hit $61.7 billion for the quarter.

The Cloud division also maintained momentum, with revenues of $9.6 billion, up 28% year-over-year.

Pichai highlighted that YouTube and Cloud are expected to exit 2024 at a combined $100 billion annual revenue run rate.

Generative AI Integration in Search

Google experimented with AI-powered features in Search Labs before recently introducing AI overviews into the main search results page.

Regarding the gradual rollout, Pichai states:

“We are being measured in how we do this, focusing on areas where gen AI can improve the Search experience, while also prioritizing traffic to websites and merchants.”

Pichai reports that Google’s generative AI features have answered over a billion queries already:


“We’ve already served billions of queries with our generative AI features. It’s enabling people to access new information, to ask questions in new ways, and to ask more complex questions.”

Google reports increased Search usage and user satisfaction among those interacting with the new AI overview results.

The company also highlighted its “Circle to Search” feature on Android, which allows users to circle objects on their screen or in videos to get instant AI-powered answers via Google Lens.

Reorganizing For The “Gemini Era”

As part of the AI roadmap, Alphabet is consolidating all teams building AI models under the Google DeepMind umbrella.

Pichai revealed that, through hardware and software improvements, the company has reduced machine costs associated with its generative AI search results by 80% over the past year.

He states:

“Our data centers are some of the most high-performing, secure, reliable and efficient in the world. We’ve developed new AI models and algorithms that are more than one hundred times more efficient than they were 18 months ago.”

How Will Google Make Money With AI?

Alphabet sees opportunities to monetize AI through its advertising products, Cloud offerings, and subscription services.


Google is integrating Gemini into ad products like Performance Max. The company’s Cloud division is bringing “the best of Google AI” to enterprise customers worldwide.

Google One, the company’s subscription service, surpassed 100 million paid subscribers in Q1 and introduced a new premium plan featuring advanced generative AI capabilities powered by Gemini models.

Future Outlook

Pichai outlined six key advantages positioning Alphabet to lead the “next wave of AI innovation”:

  1. Research leadership in AI breakthroughs like the multimodal Gemini model
  2. Robust AI infrastructure and custom TPU chips
  3. Integrating generative AI into Search to enhance the user experience
  4. A global product footprint reaching billions
  5. Streamlined teams and improved execution velocity
  6. Multiple revenue streams to monetize AI through advertising and cloud

With upcoming events like Google I/O and Google Marketing Live, the company is expected to share further updates on its AI initiatives and product roadmap.


Featured Image: Sergei Elagin/Shutterstock
