A Technical SEO Guide To Lighthouse Performance Metrics
Maybe you’re here because you’re a die-hard fan of performance metrics. Or maybe you don’t know what Lighthouse is and are too afraid to ask.

Either is an excellent option. Welcome!

Together, we’re hoping to take your performance improvement efforts from “make all the numbers green” to some clear and meaningful action items.

Note: This article was updated in January 2022 to reflect Lighthouse versions 8 and 9.

Technical SEO and Google Data Studio nerd Rachel Anderson joined me on this merry adventure into demystifying developer documentation.

We’re going to answer:

  • What is Lighthouse?
  • How is Lighthouse different from Core Web Vitals?
  • Why doesn’t Lighthouse match Search Console/CrUX reports?
  • How is Performance Score calculated?
  • Why is my score different each time I test?
  • Lighthouse Performance metrics explained
  • How to test performance using Lighthouse

What Is Lighthouse?

Performance is about measuring how quickly a browser can assemble a webpage.

Lighthouse uses a web browser called Chromium to build pages and runs tests on the pages as they’re built. The tool is open-source (meaning it is maintained by the community and free to use).

Each audit falls into one of five categories:

  1. Performance.
  2. Accessibility.
  3. Best Practices.
  4. SEO.
  5. Progressive Web App.
Screenshot from Lighthouse, January 2022

For the purposes of this article, we’re going to use the name Lighthouse to refer to the series of tests executed by the shared GitHub repo, regardless of the execution method.

Version 9 is currently out on GitHub and is slated for large-scale rollout with the stable Chrome 98 release in February 2022.

Lighthouse And Core Web Vitals

On May 5, 2020, the Chromium project announced a set of three metrics with which the Google-backed open-source browser would measure performance.

The metrics, known as Web Vitals, are part of a Google initiative designed to provide unified guidance for quality signals.

The goal of these metrics is to measure web performance in a user-centric manner.

Within two weeks, Lighthouse v6 rolled out with a modified version of Core Web Vitals at the heart of the update.

July 2020 saw Lighthouse v6’s unified metrics adopted across Google products with the release of Chrome 84.

The Chrome DevTools Audits panel was renamed to Lighthouse. PageSpeed Insights and Google Search Console also reference these unified metrics.

This change in focus sets new, more refined goals.

How Is Lighthouse Different From Core Web Vitals?

The three Core Web Vitals metrics are part of Lighthouse performance scoring.

Largest Contentful Paint, Total Blocking Time, and Cumulative Layout Shift comprise 70% of Lighthouse’s weighted performance score.

The scores you’ll see for CWV in Lighthouse are the result of emulated tests.

It’s the same metric but measured off a single page load rather than calculated from page loads around the world.

Why Doesn’t Lighthouse Match Search Console/CrUX Reports?

For real users, how quickly a page assembles depends on factors like their network connection, their device’s processing power, and even their physical distance from the site’s servers.

Lighthouse performance data doesn’t account for all these factors.

Instead, the tool emulates a mid-range device and throttles CPU in order to simulate the average user.

These are lab tests collected within a controlled environment with predefined device and network settings.

Lab data is helpful for debugging performance issues.

It does not mean that the experience on your local machine in a controlled environment represents the experiences of real humans in the wild.

The good news is you don’t have to choose between Lighthouse and Core Web Vitals. They’re designed to be part of the same workflow.

Always start with field data from the Chrome User Experience Report to identify issues impacting real users.

Then leverage the expanded testing capabilities of Lighthouse to identify the code causing the issue.

If you’re working on a site pre-launch or QAing changes in a non-public environment, Lighthouse will be your new best #webperf friend.

Workflow for performance (Screenshot from Lighthouse, January 2022)

How Is The Lighthouse Performance Score Calculated?

Performance scores from Lighthouse (Lighthouse, January 2022)

In versions 8 and 9, Lighthouse’s performance score is made up of six metrics, each contributing a percentage of the total performance score.

Lighthouse metrics (Created by author, January 2022)

Why Is My Score Different Each Time I Test?

Your score may change each time you test.

Browser extensions, internet connection, A/B tests, or even the ads displayed on that specific page load have an impact.

If you’re curious/furious to know more, check out the documentation on performance testing variability.

Lighthouse Performance Metrics Explained

Largest Contentful Paint (LCP)

  • What it represents: A user’s perception of loading experience.
  • Lighthouse Performance score weighting: 25%
  • What it measures: The point in the page load timeline when the page’s largest image or text block is visible within the viewport.
  • How it’s measured: Lighthouse extracts LCP data from Chrome’s tracing tool.
  • Is Largest Contentful Paint a Core Web Vital? Yes!

LCP Scoring

  • Goal: Achieve LCP in < 2.5 seconds.
LCP measurements (Created by author, January 2022)

What Elements Can Be Part Of LCP?

  • Text.
  • Images.
  • Videos.
  • Background images.

What Counts As LCP On Your Page?

It depends! LCP typically varies by page template.

This means that you can measure a handful of pages that use the same template and determine the typical LCP element for that template.

Lighthouse will provide you with the exact HTML of the LCP element, but it can be useful to know the node as well when communicating with developers.

The node name will be consistent while the exact on-page image or text may change depending on which content is rendered by the template.

How To Define LCP Using Chrome DevTools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of DevTools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Hover over the LCP marker in the Timings section.
  4. The element(s) that correspond to LCP are detailed in the Related Node field.
LCP in the DevTools Timings section (Created by author, January 2022)
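
If you’d rather grab the LCP element programmatically, you can log LCP candidates straight from the console. Here’s a minimal sketch using the browser’s standard largest-contentful-paint performance entry (a plain browser API, not a Lighthouse feature):

// Log each LCP candidate as the browser reports it. With buffered: true,
// entries recorded before the observer was registered are replayed, so you
// can paste this into the console after the page has loaded.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // entry.element is the DOM node currently considered the LCP element.
    console.log('LCP candidate at', Math.round(entry.startTime), 'ms:', entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

The last candidate logged before the user interacts with the page is the element reported as LCP.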

What Causes Poor LCP?

Poor LCP typically comes from four issues:

  1. Slow server response times.
  2. Render-blocking JavaScript and CSS.
  3. Resource load times.
  4. Client-side rendering.

How To Fix Poor LCP

If the cause is slow server response time:

  • Optimize your server.
  • Route users to a nearby CDN.
  • Cache assets.
  • Serve HTML pages cache-first.
  • Establish third-party connections early.

If the cause is render-blocking JavaScript and CSS:

  • Minify CSS.
  • Defer non-critical CSS.
  • Inline critical CSS.
  • Minify and compress JavaScript files.
  • Defer unused JavaScript.
  • Minimize unused polyfills.

If the cause is resource load times:

  • Optimize and compress images.
  • Preload important resources.
  • Compress text files.
  • Deliver different assets based on the network connection (adaptive serving).
  • Cache assets using a service worker.

If the cause is client-side rendering:

  • Minimize critical JavaScript.
  • Use server-side rendering.
  • Use pre-rendering.

Total Blocking Time (TBT)

  • What it represents: Responsiveness to user input.
  • Lighthouse Performance score weighting: 30%
  • What it measures: TBT measures the time between First Contentful Paint and Time to Interactive. TBT is the lab equivalent of First Input Delay (FID) – the field data used in the Chrome User Experience Report and Google’s Page Experience ranking signal.
  • How it’s measured: The total time in which the main thread is occupied by tasks taking more than 50ms to complete. If a task takes 80ms to run, 30ms of that time will be counted toward TBT. If a task takes 45ms to run, 0ms will be added to TBT.
  • Is Total Blocking Time a Core Web Vital? Yes! It’s the lab data equivalent of First Input Delay (FID).

TBT Scoring

  • Goal: Achieve a TBT of less than 300 milliseconds.
TBT scoring (Created by author, January 2022)

First Input Delay, the field data equivalent to TBT, has different thresholds.

FID time in milliseconds (Created by author, January 2022)

Long Tasks And Total Blocking Time

TBT measures long tasks – those taking longer than 50ms.

When a browser loads your site, there is essentially a single-file queue of scripts waiting to be executed.

Any input from the user has to go into that same queue.

When the browser can’t respond to user input because other tasks are executing, the user perceives this as lag.

Essentially, long tasks are like that person at your favorite coffee shop who takes far too long to order a drink.

Like someone ordering a 2% venti four-pump vanilla, five-pump mocha whole-fat froth, long tasks are a major source of bad experiences.

Short tasks vs. long tasks (Screenshot by author, January 2022)
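
You can watch these long tasks stack up yourself using the browser’s Long Tasks API. The sketch below tallies blocking time the way TBT does, counting only the portion of each task beyond the 50ms budget (unlike Lighthouse, it isn’t bounded by FCP and TTI):

// Sum main-thread blocking time from long tasks as the page runs.
let blockingTime = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // An 80ms task contributes 30ms; a 45ms task never shows up here,
    // because only tasks over 50ms are reported as longtask entries.
    blockingTime += entry.duration - 50;
  }
  console.log('Blocking time so far:', Math.round(blockingTime), 'ms');
}).observe({ type: 'longtask', buffered: true });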

What Causes A High TBT On Your Page?

Heavy JavaScript.

That’s it.

How To See TBT Using Chrome DevTools

TBT in Chrome DevTools (Screenshot from Chrome DevTools, January 2022)

How To Fix Poor TBT

  • Break up Long Tasks (see the sketch after this list).
  • Optimize your page for interaction readiness.
  • Use a web worker.
  • Reduce JavaScript execution time.
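
To illustrate the first item: if a single loop is hogging the main thread, process the work in small batches and yield between them so user input can jump the queue. A hypothetical sketch (the function and parameter names are made up):

// Hypothetical example: split one long task into many short ones.
async function processInChunks(items, handleItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handleItem);
    // Yield back to the event loop so pending user input is handled
    // between batches instead of waiting for the whole job to finish.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}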


First Contentful Paint (FCP)

  • What it represents: FCP marks the time at which the first text or image is painted (visible).
  • Lighthouse Performance score weighting: 10%
  • What it measures: The time when I can see the page I requested is responding. My thumb can stop hovering over the back button.
  • How it’s measured: Your FCP score in Lighthouse is calculated by comparing your page’s FCP to FCP times for real websites stored in the HTTP Archive.
  • Your score increases if your page’s FCP is faster than other pages in the HTTP Archive.
  • Is First Contentful Paint a Core Web Vital? No.

FCP Scoring

  • Goal: Achieve FCP in < 2 seconds.
FCP time (Created by author, January 2022)

What Elements Can Be Part Of FCP?

The time it takes to render the first visible element to the DOM is the FCP.

Everything that happens before the first element renders non-white content to the page (excluding iframes) counts toward FCP.

Since iframes are not considered part of FCP, if they are the first content to render, FCP will continue counting until the first non-iframe content loads, but the iframe load time isn’t counted toward the FCP.

The documentation around FCP also calls out that it is often impacted by font load time, and it offers tips for improving font loads.

FCP Using Chrome DevTools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of DevTools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Click on the FCP marker in the Timings section.
  4. The summary tab has a timestamp with the FCP in ms.

How To Improve FCP

Before any content can be displayed to the user, the browser must download, parse, and process all external stylesheets it encounters.

The fastest way to bypass the delay of external resources is to use in-line styles for above-the-fold content.

To keep your site sustainably scalable, use an automated tool like penthouse or Apache’s mod_pagespeed.

These solutions will come with some restrictions to functionalities, require testing, and may not be for everyone.
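
As an example, penthouse can be run as a Node module. Here’s a sketch based on its documented usage; the URL and file paths are placeholders:

// Generate critical CSS with the penthouse package (npm install penthouse).
const fs = require('fs');
const penthouse = require('penthouse');

penthouse({
  url: 'https://example.com/', // placeholder page to analyze
  css: './dist/full.css', // your complete stylesheet
}).then((criticalCss) => {
  // Inline the result in a <style> tag in the <head>, then load
  // the full stylesheet asynchronously.
  fs.writeFileSync('./dist/critical.css', criticalCss);
});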

Universally, we can all improve our site’s time to First Contentful Paint by reducing the scope and complexity of style calculations.

If a style isn’t being used, remove it.

You can identify unused CSS with Chrome DevTools’ built-in Code Coverage functionality.

Use better data to make better decisions.

Similar to TTI, you can capture real user metrics for FCP using Google Analytics to correlate improvements with KPIs.
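
One way to do that is to observe the paint timing entry and forward it as an event. A sketch, assuming a standard gtag.js install (the event and category names below are placeholders you’d choose yourself):

// Report real-user FCP to Google Analytics (assumes gtag.js is loaded).
new PerformanceObserver((entryList) => {
  const fcp = entryList.getEntriesByName('first-contentful-paint')[0];
  if (fcp) {
    gtag('event', 'fcp', {
      event_category: 'Web Vitals', // placeholder names
      value: Math.round(fcp.startTime), // milliseconds
      non_interaction: true, // don't let this affect bounce rate
    });
  }
}).observe({ type: 'paint', buffered: true });

Google’s open-source web-vitals JavaScript library wraps this same pattern for FCP and the other metrics covered here.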


Speed Index

  • What it represents: How much is visible at a time during load.
  • Lighthouse Performance score weighting: 10%
  • What it measures: The Speed Index is the average time at which visible parts of the page are displayed.
  • How it’s measured: Lighthouse’s Speed Index measurement comes from a node module called Speedline.

You’ll have to ask the kindly wizards at webpagetest.org for the specifics, but roughly speaking, Speedline scores vary by the size of the viewport (read: device screen) and use an algorithm to calculate the visual completeness of each frame.

Speed Index measurements (Screenshot by author, January 2022)
  • Is Speed Index a Core Web Vital? No.

SI Scoring

  • Goal: achieve SI in < 4.3 seconds.
Speed Index metrics (Created by author, January 2022)

How To Improve SI

Your Speed Index score reflects your site’s Critical Rendering Path.

A “critical” resource means that the resource is required for the first paint or is crucial to the page’s core functionality.

The longer and denser the path, the slower your site will be to provide a visual page.

If your path is optimized, you’ll give users content faster and score higher on Speed Index.

How The Critical Path Affects Rendering

Optimized vs. unoptimized rendering times (Screenshot by author, January 2022)

Lighthouse recommendations commonly associated with a slow Critical Rendering Path include:

  • Minimize main-thread work.
  • Reduce JavaScript execution time.
  • Minimize Critical Requests Depth.
  • Eliminate Render-Blocking Resources.
  • Defer offscreen images.


Time To Interactive

  • What it represents: Load responsiveness; identifying where a page looks responsive but isn’t yet.
  • Lighthouse Performance score weighting: 10%
  • What it measures: The time from when the page begins loading to when its main resources have loaded and are able to respond to user input.
  • How it’s measured: TTI measures how long it takes a page to become fully interactive. A page is considered fully interactive when:

1. The page displays useful content, which is measured by the First Contentful Paint.

2. Event handlers are registered for most visible page elements.

3. The page responds to user interactions within 50 milliseconds.

  • Is Time to Interactive a Core Web Vital? No.

TTI Scoring

Goal: achieve a TTI of less than 3.8 seconds.

TTI scoring system (Created by author, January 2022)


Cumulative Layout Shift (CLS)

  • What it represents: A user’s perception of a page’s visual stability.
  • Lighthouse Performance score weighting: 15%
  • What it measures: It quantifies shifting page elements through the end of page load.
  • How it’s measured: Unlike the other metrics, CLS isn’t measured in time. Instead, it’s a calculated score based on how much of the viewport an unstable element impacts and the distance that element moved (see the sketch below).
CLS layout shift score formula (Created by author, January 2022)
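
Here’s that sketch: a simplified CLS tally using the browser’s layout-shift entries. Note that Lighthouse and CrUX also group shifts into session windows, which this version skips:

// Sum layout shift scores, ignoring shifts caused by recent user input.
let cls = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Each entry's value is the impact fraction of the viewport affected
    // multiplied by the distance fraction the elements moved.
    if (!entry.hadRecentInput) {
      cls += entry.value;
      console.log('Shift of', entry.value.toFixed(4), '| running CLS:', cls.toFixed(4));
    }
  }
}).observe({ type: 'layout-shift', buffered: true });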

CLS Scoring

  • Goal: achieve a CLS score of less than 0.1.
CLS scoring system (Created by author, January 2022)

What Elements Can Be Part Of CLS?

Any visual element that appears above the fold at some point in the load.

That’s right – if you’re loading your footer first and then the hero content of the page, your CLS is going to hurt.

Causes Of Poor CLS

  • Images without dimensions.
  • Ads, embeds, and iframes without dimensions.
  • Dynamically injected content.
  • Web Fonts causing FOIT/FOUT.
  • Actions waiting for a network response before updating DOM.

How To Define CLS Using Chrome DevTools

  1. Open the page in Chrome.
  2. Navigate to the Performance panel of DevTools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Hover and move from left to right over the screenshots of the load (make sure the screenshots checkbox is checked).
  4. Watch for elements bouncing around after the first paint to identify elements causing CLS.

How To Improve CLS

Once you identify the element(s) at fault, you’ll need to update them to be stable during the page load.

For example, if slow-loading ads are causing a high CLS score, you may want to use placeholder images of the same size to fill that space while the ad loads, preventing the page from shifting.

Some common ways to improve CLS include:

  • Always include width and height size attributes on images and video elements.
  • Reserve space for ad slots (and don’t collapse it).
  • Avoid inserting new content above existing content.
  • Take care when placing non-sticky ads near the top of the viewport.
  • Preload fonts.


How To Test Performance Using Lighthouse

Methodology Matters

Out of the box, Lighthouse audits a single page at a time.

A single page score doesn’t represent your site, and a fast homepage doesn’t mean a fast site.

Test multiple page types within your site.

Identify your major page types, templates, and goal conversion points (signup, subscribe, and checkout pages).

If 40% of your site is blog posts, make 40% of your testing URLs blog pages!

Example Page Testing Inventory

Example page testing inventory (Created by author, January 2022)

Before you begin optimizing, run Lighthouse on each of your sample pages and save the report data.

Record your scores and the to-do list of improvements.

Prevent data loss by saving the JSON results and utilizing Lighthouse Viewer when detailed result information is needed.

Get Your Backlog to Bite Back Using ROI

Getting development resources to action SEO recommendations is hard.

An in-house SEO professional could destroy their pancreas by having a birthday cake for every backlogged ticket’s birthday. Or at least learn to hate cake.

In my experience as an in-house enterprise SEO pro, the trick to getting performance initiatives prioritized is having the numbers to back the investment.

This starting data will become dollar signs that serve to justify and reward development efforts.

With Lighthouse testing, you can recommend specific and direct changes (think: preload this font file) and associate each change with a specific metric.

Chances are you’re going to have more than one area flagged during tests. That’s okay!

If you’re wondering which changes will have the most bang for the buck, check out the Lighthouse Scoring Calculator.

How To Run Lighthouse Tests

This is a case of many roads leading to Oz.

Sure, some scarecrow might be particularly loud about a certain shade of brick but it’s about your goals.

Looking to test an entire staging site? Time to learn some NPM.

Have less than five minutes to prep for a prospective client meeting? A couple of one-off reports should do the trick.

Whichever way you execute, default to mobile unless you have a special use case for desktop.

For One-Off Reports: PageSpeed Insights

Test one page at a time on PageSpeed Insights. Simply enter the URL.

Lab and field data available in PageSpeed Insights (Screenshot from PageSpeed Insights, January 2022)

Pros Of Running Lighthouse From PageSpeed Insights

  • Detailed Lighthouse report is combined with URL-specific data from the Chrome User Experience Report.
  • Opportunities and Diagnostics can be filtered to specific metrics. This is exceptionally useful when creating tickets for your engineers and tracking the resulting impact of the changes.
  • PageSpeed Insights is already running version 9.

PageSpeed Insights opportunities and diagnostics filtered by metric (Screenshot from PageSpeed Insights, January 2022)

Cons Of Running Lighthouse From PageSpeed Insights

  • One report at a time.
  • Only Performance tests are run (if you need SEO, Accessibility, or Best Practices, you’ll need to run those separately).
  • You can’t test local builds or authenticated pages.
  • Reports can’t be saved in JSON, HTML, or Gist format. (Save as PDF via the browser’s print functionality is an option.)
  • Requires you to manually save results.

For Comparing Test Results: Chrome DevTools Or Web.dev

Because the report will be emulating a user’s experience using your browser, use an incognito instance with all extensions and the browser’s cache disabled.

Pro-tip: Create a Chrome profile for testing. Keep it local (no sync enabled, password saving, or association to an existing Google account) and don’t install extensions for the user.

How To Run A Lighthouse Test Using Chrome DevTools

  1. Open an incognito instance of Chrome.
  2. Navigate to the Network panel of Chrome DevTools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
  3. Tick the box to disable cache.
  4. Navigate to the Lighthouse panel.
  5. Click Generate Report.
  6. Click the dots to the right of the URL in the report.
  7. Save in your preferred format (JSON, HTML, or Gist).

Save options for Lighthouse reports (Screenshot from Lighthouse Reports, January 2022)

Note that your version of Lighthouse may change depending on what version of Chrome you’re using. v8.5 is used on Chrome 97.

Lighthouse v9 will ship with DevTools in Chrome 98.

How To Run A Lighthouse Test Using Web.dev

It’s just like DevTools but you don’t have to remember to disable all those pesky extensions!

  1. Go to web.dev/measure.
  2. Enter your URL.
  3. Click Run Audit.
  4. Click View Report.
web.dev view report option (Screenshot by author, January 2022)

Pros Of Running Lighthouse From DevTools/web.dev

  • You can test local builds or authenticated pages.
  • Saved reports can be compared using the Lighthouse CI Diff tool.
Lighthouse CI Diff tool (Screenshot from Lighthouse CI Diff, January 2022)

Cons Of Running Lighthouse From DevTools/web.dev

  • One report at a time.
  • Requires you to manually save results.

For Testing At Scale (and Sanity): Node Command Line

1. Install Node.js and npm.
(Mac pro-tip: Use Homebrew to avoid obnoxious dependency issues.)

2. Install the Lighthouse node module:

npm install -g lighthouse

3. Run a single test with:

lighthouse <url>

4. Run tests on lists of URLs by running tests programmatically, as sketched below.
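
Here’s a minimal sketch of that programmatic approach, based on the documented Node usage of the lighthouse and chrome-launcher packages (the URLs and file names are placeholders):

// Run Lighthouse over a list of URLs and save each JSON report.
const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

// Placeholder sample pages; mirror your page-type inventory here.
const urls = [
  'https://example.com/',
  'https://example.com/blog/sample-post/',
];

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  for (const [i, url] of urls.entries()) {
    const result = await lighthouse(url, {
      port: chrome.port, // reuse one Chrome instance for every run
      output: 'json',
      onlyCategories: ['performance'],
    });
    // Keep the raw JSON so you can load it into Lighthouse Viewer later.
    fs.writeFileSync(`report-${i}.json`, result.report);
    console.log(url, '-', Math.round(result.lhr.categories.performance.score * 100));
  }
  await chrome.kill();
})();

Scheduled regularly against your sample pages, a script like this gives you the change-over-time tracking mentioned in the pros below.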

Pros Of Running Lighthouse From Node

  • Many reports can be run at once.
  • Can be set to run automatically to track change over time.

Cons Of Running Lighthouse From Node

  • Requires some coding knowledge.
  • More time-intensive setup.

Conclusion

The complexity of performance metrics reflects the challenges facing all sites.

We use performance metrics as a proxy for user experience – that means factoring in some unicorns.

Tools like Google’s Test My Site and What Does My Site Cost? can help you make the conversion and customer-focused arguments for why performance matters.

Hopefully, once your project has traction, these definitions will help you translate Lighthouse’s performance metrics into actionable tickets for a skilled and collaborative engineering team.

Track your data and shout it from the rooftops.

As much as Google struggles to quantify qualitative experiences, SEO professionals and devs must decode how to translate a concept into code.

Test, iterate, and share what you learn! I look forward to seeing what you’re capable of, you beautiful unicorn.

Featured Image: Paulo Bobita/Search Engine Journal

How to Execute the Skyscraper Technique (And Get Results)

In 2015, Brian Dean revealed a brand-new link building strategy. He called it the Skyscraper Technique.

With over 10,000 backlinks since the post was published, it’s fair to say that the Skyscraper Technique took the world by storm in 2015. But what is it exactly, how can you implement it, and can you still get results with this technique in 2023?

Let’s get started.

What is the Skyscraper Technique?

The Skyscraper Technique is a link building strategy where you improve existing popular content and replicate the backlinks. 

Brian named it so because in his words, “It’s human nature to be attracted to the best. And what you’re doing here is finding the tallest ‘skyscraper’ in your space… and slapping 20 stories to the top of it.”

Here’s how the technique works:

Three steps of the Skyscraper Technique

How to implement the Skyscraper Technique

Follow these three steps to execute the Skyscraper Technique.

1. Find relevant content with lots of backlinks

There are three methods to find relevant pages with plenty of links:

Use Site Explorer

Enter a popular site into Ahrefs’ Site Explorer. Next, go to the Best by backlinks report.

Best pages by backlinks report, via Ahrefs' Site Explorer

This report shows you a list of pages from the site with the highest number of referring domains. If there are content pieces with more than 50 referring domains, they’re likely to be good potential targets.

Sidenote.

Ignore homepages and other irrelevant content when eyeballing this report.

Use Content Explorer

Ahrefs’ Content Explorer is a searchable database of 10 billion pages. You can use it to find mentions of any word or phrase.

Let’s start by entering a broad topic related to your niche into Content Explorer. Next, set a Referring domains filter to a minimum of 50. 

We can also add:

  • Language filter to get only pages in our target language.
  • Exclude homepages to remove homepages from the results.
Ahrefs' Content Explorer search for "gardening," with filters

Eyeball the results to see if there are any potential pieces of content you could beat.

Use Keywords Explorer

Enter a broad keyword into Ahrefs’ Keywords Explorer. Next, go to the Matching terms report and set a Keyword Difficulty (KD) filter to a minimum of 40.

Matching terms report, via Ahrefs' Keywords Explorer

Why filter for KD? 

The reason is due to the method we use at Ahrefs to calculate KD. Our KD score is calculated from a trimmed mean of referring domains (RDs) to the top 10 ranking pages. 

In other words, the top-ranking pages for keywords with high KD scores have lots of backlinks on average.

From here, you’ll want to go through the report to find potential topics you could build a better piece of content around. 

2. Make it better

The core idea (or assumption) behind the Skyscraper Technique is that people want to see the best. 

Once you’ve found the content you want to beat, the next step is to make something even better.

According to Brian, there are four aspects worth improving:

  1. Length – If the post has 25 tips, list more.
  2. Freshness – Update any outdated parts of the original article with new images, screenshots, information, stats, etc.
  3. Design – Make it stand out with a custom design. You could even make it interactive.
  4. Depth – Don’t just list things. Fill in the details and make them actionable.

3. Reach out to the right people

The key to successfully executing the Skyscraper Technique is email outreach. But instead of spamming everyone you know, you reach out to those who have already linked to the specific content you have improved. 

The assumption: Since they’ve already linked to a similar article, they’re more likely to link to one that’s better.

You can find these people by pasting the URL of the original piece into Ahrefs’ Site Explorer and then going to the Backlinks report.

Backlinks report for ResumeGenius' how to write a resume, via Ahrefs' Site Explorer

This report shows all the backlinks to the page. In this case, there are 441 groups of links.

But not all of these links will make good prospects. So you’ll likely need to add some filters to clean them up. For example, you can:

  • Add a Language filter for the language you’re targeting (e.g., English).
  • Switch the tab to Dofollow for equity-passing links.
Backlinks report, with filters, via Ahrefs' Site Explorer

Does the Skyscraper Technique still work?

It’s been roughly eight years since Brian shared this link building strategy. Honestly speaking, the technique has been oversaturated. Given its widespread use, its effectiveness may even be limited. 

Some SEOs even say they wouldn’t recommend it.

So we asked our Twitter and LinkedIn following this question and received 1,242 votes. Here are the results:

Pie chart showing 61% of respondents feel the Skyscraper Technique still works

Clearly, many SEOs and marketers still believe the technique works.

Sidenote.

According to Aira’s annual State of Link Building report, only 18% of SEOs still use the Skyscraper Technique. It’s not a go-to for many SEOs, as it ranks #20 among the list of tactics. I suspect its popularity has waned because (1) it’s old and SEOs are looking for newer stuff and (2) SEOs believe that content is more important than links these days.

Why the Skyscraper Technique fails and how to improve your chances of success

Fundamentally, it makes sense that the Skyscraper Technique still works. After all, the principles behind (almost) any link building strategy are the same:

  1. Create great content
  2. Reach out to people and promote it

But why do people think it’s no longer effective? There are a few reasons why, and knowing them will help you improve your chances of success with the Skyscraper Technique.

Let’s start with:

1. Sending only Brian’s email template

In Brian’s original post, he suggested an email template for his readers to use:

Hey, I found your post: http://post1

<generic compliment>

It links to this post: http://post2

I made something better: http://post3

Please swap out the link for mine.

Unfortunately, many SEOs decided to use this exact template word for word. 

Link building doesn’t exist in a vacuum. If everyone in your niche decides to send this exact template to every possible website, it’ll burn out real fast. And that’s exactly what happened.

Now, if a website owner sees this template, chances are they’ll delete it right away. 

Sidenote.

Judging by my inbox, there are still people using this exact template. And, like everyone else, I delete the email immediately.

I’m not saying this to disparage templated emails. If you’re sending something at scale, templating is necessary. But move away from this template. Write your own, personalize it as much as possible, and follow the outreach principles here.

Even better, ask yourself:

“What makes my content unique and link-worthy?”

2. Not segmenting your prospects

People link for different reasons, so you shouldn’t send everyone the same pitch. 

Consider dividing your list of prospects into segments according to the context in which they linked. You can do this by checking the Anchors report in Site Explorer.

Anchors report, via Ahrefs' Site Explorer

You can clearly see people are linking to different statistics from our SEO statistics post. So, for example, if we were doing outreach for a hypothetical post, we might want to mention to the first group that we have a new statistic for “Over 90% of content gets no traffic from Google.”

Then, to the second group, we’ll mention that we have new statistics for “68% of online experiences.” And so on. 

In fact, that’s exactly what we did when we built links to this post. We shared the full case study on YouTube.

3. Not reaching out to enough people

Ultimately, link building is still a numbers game. If you don’t reach out to enough people, you won’t get enough links. 

Simply put: You need to curate a larger list of link prospects.

So rather than limiting yourself to only replicating the backlinks of the original content, you should replicate the backlinks from other top-ranking pages covering the same topic too.

To find these pages, enter the target keyword into Keywords Explorer and scroll down to the SERP overview.

SERP overview for "how to write a resume," via Ahrefs' Keywords Explorer

In this example, most top-ranking pages have tons of links, and all of them (after filtering, of course) could be potential link prospects.

Pro tip

Looking for even more prospects? Use Content Explorer.

Search for your keyword, set a Referring domains filter, and you’ll see relevant pages where you can “mine” for more skyscraper prospects.

Referring domains filters selected in Ahrefs' Content Explorer

4. Thinking bigger equals better

Someone creates a list with 15 tools. The next person ups it to 30. Another “skyscrapers” it to 50, and the next increases it to 100.

Not only is it a never-ending arms race, there’s also no value for the reader. 

No one wants to skim through 5,000 words or hundreds of items just to find what they need. Curation is where the value is.

When considering the four aspects mentioned by Brian, don’t improve things for the sake of improving them. Adding 25 mediocre tips to an existing list of 25 doesn’t make it “better.” Likewise for changing the publish date or adding a few low-quality illustrations. 

Example: My colleague, Chris Haines, recently published a post on the best niche site ideas. Even though he only included 10, he has already outperformed the other “skyscraper” articles:

Our blog post ranking #3 for the query, "niche site ideas," via Ahrefs' Keywords Explorer

He differentiated himself through his knowledge and expertise. After all, Chris has 10 years of experience in SEO. 

So when you’re creating your article, always look at any improvement through the lens of value:

Are you giving more value to the reader? 

5. Not considering brand

As Ross Hudgens says, “Better does not occur in a branding vacuum.”

Most of the time, content isn’t judged solely on its quality. It’s also judged by who it comes from. We discovered this ourselves too when we tried to build links to our keyword research guide.

Most of the time, people didn’t read the article. They linked to us because of our brand and reputation—they knew we were publishing great content consistently, and they had confidence that the article we were pitching was great too.

In other words, there are times where no matter how hard you “skyscraper” your content, people just won’t link to it because they don’t know who you are. 

Having your own personal brand is important these days. But think about it: What is a “strong brand” if not a consistent output of high-quality work that people enjoy? One lone skyscraper doesn’t make a city; many of them together do.

What I’m saying is this: Don’t be discouraged if your “skyscraper” article gets no results. And don’t be discouraged just because you don’t have a brand right now—you can work on that over time.

Keep on making great content—skyscraper or not—and results will come if you trust the process.

“Rome wasn’t built in a day, but they were laying bricks every hour.”

Final thoughts

The Skyscraper Technique is a legitimate link building tactic that works. But that can only happen if you make something genuinely more valuable, segment your prospects and personalize your outreach, reach out to enough of the right people, and keep building your brand along the way.

Any questions or comments? Let me know on Twitter.



13 Best High Ticket Affiliate Marketing Programs 2023

Are you looking for more ways to generate income for yourself or your business this year?

With high-ticket affiliate marketing programs, you earn money by recommending your favorite products or services to those who need them.

Affiliate marketers promote products through emails, blog posts, social media updates, YouTube videos, podcasts, and other forms of content with proper disclosure.

While not all affiliate marketers make enough to quit their 9-to-5, any additional income in the current economy can come in handy for individuals and businesses.

How To Get Started With Affiliate Marketing

Here’s a simple summary of how to get started with affiliate marketing.

  • Build an audience. You need websites with traffic, email lists with subscribers, or social media accounts with followers to promote a product – or ideally, a combination of all three.
  • Find products and services you can passionately promote to the audience you have built. The more you love something and believe in its efficacy, the easier it will be to convince someone else to buy it.
  • Sign up for affiliate and referral programs. These will be offered directly through the company selling the product or service, or a third-party affiliate platform.
  • Fill out your application and affiliate profile completely. Include your niche, monthly website traffic, number of email subscribers, and social media audience size. Companies will use that information to approve or reject your application.
  • Get your custom affiliate or referral link and share it with your audience, or the segment of your audience that would benefit most from the product you are promoting.
  • Look for opportunities to recommend products to new people. You can be helpful, make a new acquaintance, and earn a commission.
  • Monitor your affiliate dashboard and website analytics for insights into your clicks and commissions.
  • Adjust your affiliate marketing tactics based on the promotions that generate the most revenue.

Now, continue reading about the best high-ticket affiliate programs you can sign up for in 2023. They offer a high one-time payout, recurring commissions, or both.

The Best High-Ticket Affiliate Marketing Programs

What makes these affiliate marketing programs the “best” is subjective, but I chose them based on their payout amounts, number of customers, and average customer ratings. Customer ratings help determine whether a product is worth recommending. You can also use customer reviews to help you market the products or services, highlighting the impressive results customers gain and the features they love most.

1. Smartproxy

Smartproxy allows customers to access business data worldwide for competitor research, search engine results page (SERP) scraping, price aggregation, and ad verification.

836 reviewers gave it an average rating of 4.7 out of five stars.

Earn up to $2,000 per customer that you refer to Smartproxy using its affiliate program.

2. Thinkific

Thinkific is an online course creation platform used by over 50,000 instructors, who have delivered over 100 million courses.

669 reviewers gave it an average rating of 4.6 out of five stars.

Earn up to $1,700 per referral per year through the Thinkific affiliate program.

3. BigCommerce

BigCommerce is an ecommerce provider with open SaaS, headless integrations, omnichannel, B2B, and offline-to-online solutions.

648 reviewers gave it an average rating of 8.1 out of ten stars.

Earn up to $1,500 for new enterprise customers, or 200% of the customer’s first payment by signing up for the BigCommerce affiliate program.

4. Teamwork

Teamwork, project management software focused on maximizing billable hours, helps everyone in your organization become more efficient – from the founder to the project managers.

1,022 reviewers gave it an average rating of 4.4 out of five stars.

Earn up to $1,000 per new customer referral with the Teamwork affiliate program.

5. Flywheel

Flywheel provides managed WordPress hosting geared towards agencies, ecommerce, and high-traffic websites.

36 reviewers gave it an average rating of 4.4 out of five stars.

Earn up to $500 per new referral from the Flywheel affiliate program.

6. Teachable

Teachable is an online course platform used by over 100,000 entrepreneurs, creators, and businesses of all sizes to create engaging online courses and coaching businesses.

150 reviewers gave it a 4.4 out of five stars.

Earn around $450 per month (the average partner earnings) by joining the Teachable affiliate program.

7. Shutterstock

Shutterstock is a global marketplace for sourcing stock photographs, vectors, illustrations, videos, and music.

507 reviewers gave it an average rating of 4.4 out of five stars.

Earn up to $300 for new customers by signing up for the Shutterstock affiliate program.

8. HubSpot

HubSpot provides a CRM platform to manage your organization’s marketing, sales, content management, and customer service.

3,616 reviewers gave it an average rating of 4.5 out of five stars.

Earn an average payout of $264 per month (based on current affiliate earnings) with the HubSpot affiliate program, or more as a solutions partner.

9. Sucuri

Sucuri is a cloud-based security platform with experienced security analysts offering malware scanning and removal, protection from hacks and attacks, and better site performance.

251 reviewers gave it an average rating of 4.6 out of five stars.

Earn up to $210 per new sale by joining Sucuri referral programs for the platform, firewall, and agency products.

10. ADT

ADT is a security systems provider for residences and businesses.

588 reviewers gave it an average rating of 4.5 out of five stars.

Earn up to $200 per new customer that you refer through the ADT rewards program.

11. DreamHost

DreamHost web hosting supports WordPress and WooCommerce websites with basic, managed, and VPS solutions.

3,748 reviewers gave it an average rating of 4.7 out of five stars.

Earn up to $200 per referral and recurring monthly commissions with the DreamHost affiliate program.

12. Shopify

Shopify, a top ecommerce solution provider, encourages educators, influencers, review sites, and content creators to participate in its affiliate program. Affiliates can teach others about entrepreneurship and earn a commission for recommending Shopify.

Earn up to $150 per referral and grow your brand as a part of the Shopify affiliate program.

13. Kinsta

Kinsta is a web hosting provider that offers managed WordPress, application, and database hosting.

529 reviewers gave it a 4.3 out of five stars.

Earn $50 – $100 per new customer, plus recurring revenue via the Kinsta affiliate program.

Even More Affiliate Marketing Programs

In addition to the high-ticket affiliate programs listed above, you can find more programs to join with a little research.

  • Search for affiliate or referral programs for all of the products or services you have a positive experience with, personally or professionally.
  • Search for affiliate or referral programs for all of the places you shop online.
  • Search for partner programs for products and services your organization uses or recommends to others.
  • Search for products and services that match your audience’s needs on affiliate platforms like Shareasale, Awin, and CJ.
  • Follow influencers in your niche to see what products and services they recommend. They may have affiliate or referral programs as well.

A key to affiliate marketing success is to diversify the affiliate marketing programs you join.

This ensures that you continue to generate affiliate income even if one company changes or shutters its program.



Featured image: Shutterstock/fatmawati achmad zaenuri



The Current State of Google PageRank & How It Evolved

PageRank (PR) is an algorithm that improves the quality of search results by using links to measure the importance of a page. It considers links as votes, with the underlying assumption being that more important pages are likely to receive more links.

PageRank was created by Google co-founders Sergey Brin and Larry Page in 1997 when they were at Stanford University, and the name is a reference to both Larry Page and the term “webpage.” 

In many ways, it’s similar to a metric called “impact factor” for journals, where more cited = more important. It differs a bit in that PageRank considers some votes more important than others. 

By using links along with content to rank pages, Google delivered better results than its competitors. Links became the currency of the web.

Want to know more about PageRank? Let’s dive in.

Google still uses PageRank

In terms of modern SEO, PageRank is one of the algorithms comprising Experience, Expertise, Authoritativeness, and Trustworthiness (E-E-A-T).

Google’s algorithms identify signals about pages that correlate with trustworthiness and authoritativeness. The best known of these signals is PageRank, which uses links on the web to understand authoritativeness.

Source: How Google Fights Disinformation

We’ve also had confirmation from Google reps like Gary Illyes, who said that Google still uses PageRank and that links are used for E-A-T (now E-E-A-T).

When I ran a study to measure the impact of links and effectively removed the links using the disavow tool, the drop was obvious. Links still matter for rankings.

PageRank has also been a confirmed factor when it comes to crawl budget. It makes sense that Google wants to crawl important pages more often.

Fun math: why the PageRank formula was wrong

Crazy fact: The formula published in the original PageRank paper was wrong. Let’s look at why. 

PageRank was described in the original paper as a probability distribution—or how likely you were to be on any given page on the web. This means that if you sum up the PageRank for every page on the web together, you should get a total of 1.

Here’s the full PageRank formula from the original paper published in 1997:

PR(A) = (1-d) + d (PR(T1)/C(T1) + … + PR(Tn)/C(Tn))

Simplified a bit and assuming the damping factor (d) is 0.85 as Google mentioned in the paper (I’ll explain what the damping factor is shortly), it’s:

PageRank for a page = 0.15 + 0.85 (a portion of the PageRank of each linking page split across its outbound links)

In the paper, they said that the sum of the PageRank for every page should equal 1. But that’s not possible if you use the formula in the paper. Each page would have a minimum PageRank of 0.15 (1-d). Just a few pages would put the total at greater than 1. You can’t have a probability greater than 100%. Something is wrong!

The formula should actually divide that (1-d) by the number of pages on the internet for it to work as described. It would be:

PageRank for a page = (0.15/number of pages on the internet) + 0.85 (a portion of the PageRank of each linking page split across its outbound links)

It’s still complicated, so let’s see if I can explain it with some visuals.

1. A page is given an initial PageRank score based on the links pointing to it. Let’s say I have five pages with no links. Each gets a PageRank of (1/5) or 0.2.

PageRank example of five pages with no links yet

2. This score is then distributed to other pages through the links on the page. If I add some links to the five pages above and calculate the new PageRank for each, then I end up with this: 

PageRank example of five pages after one iteration

You’ll notice that the scores are favoring the pages with more links to them.

3. This calculation is repeated as Google crawls the web. If I calculate the PageRank again (called an iteration), you’ll see that the scores change. It’s the same pages with the same links, but the base PageRank for each page has changed, so the resulting PageRank is different.

PageRank example of five pages after two iterations

The PageRank formula also has a so-called “damping factor,” the “d” in the formula, which simulates the probability of a random user continuing to click on links as they browse the web. 

Think of it like this: The probability of you clicking a link on the first page you visit is reasonably high. But the likelihood of you then clicking a link on the next page is slightly lower, and so on and so forth.

If a strong page links directly to another page, it’s going to pass a lot of value. If the link is four clicks away, the value transferred from that strong page will be a lot less because of the damping factor.

Example showing PageRank damping factor
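
If code reads easier than diagrams, here’s a toy version of the iterative calculation above in JavaScript, using the corrected (1-d)/N formula. The five pages and their links are made up, and this is illustrative only, not Google’s production algorithm:

// Toy PageRank: five pages, each listing the pages it links to.
const pages = { a: ['b', 'c'], b: ['c'], c: ['a'], d: ['c'], e: ['a', 'c'] };
const names = Object.keys(pages);
const n = names.length;
const d = 0.85; // damping factor from the original paper

// Start with an even probability distribution: 1/N per page.
let pr = Object.fromEntries(names.map((name) => [name, 1 / n]));

for (let iteration = 0; iteration < 20; iteration++) {
  // Every page keeps the (1 - d) / N baseline...
  const next = Object.fromEntries(names.map((name) => [name, (1 - d) / n]));
  for (const [page, links] of Object.entries(pages)) {
    for (const target of links) {
      // ...and splits the damped share of its PageRank across its links.
      next[target] += (d * pr[page]) / links.length;
    }
  }
  pr = next; // repeat until the scores settle (converge)
}

console.log(pr); // scores sum to ~1 and favor the pages with more links in

Run enough iterations and the scores stop changing; that convergence step is what the faster replacement algorithm from 2006, quoted later in this article, reportedly did away with.
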
History of PageRank

The first PageRank patent was filed on January 9, 1998. It was titled “Method for node ranking in a linked database.” This patent expired on January 9, 2018, and was not renewed. 

Google first made PageRank public when the Google Directory launched on March 15, 2000. This was a version of the Open Directory Project but sorted by PageRank. The directory was shut down on July 25, 2011.

It was December 11, 2000, when Google launched PageRank in the Google toolbar, which was the version most SEOs obsessed over.

This is how it looked when PageRank was included in Google’s toolbar. 

PageRank 8/10 in Google's old toolbar

PageRank in the toolbar was last updated on December 6, 2013, and was finally removed on March 7, 2016.

The PageRank shown in the toolbar was a little different. It used a simple 0–10 numbering system, but the underlying scale is logarithmic, so achieving each higher number becomes increasingly difficult.

PageRank even made its way into Google Sitemaps (now known as Google Search Console) on November 17, 2005. It was shown in categories of high, medium, low, or N/A. This feature was removed on October 15, 2009.

Link spam

Over the years, there have been a lot of different ways SEOs have abused the system in the search for more PageRank and better rankings. Google has a whole list of link schemes that include:

  • Buying or selling links—exchanging links for money, goods, products, or services.
  • Excessive link exchanges.
  • Using software to automatically create links.
  • Requiring links as part of a terms of service, contract, or other agreement.
  • Text ads that don’t use nofollow or sponsored attributes.
  • Advertorials or native advertising that includes links that pass ranking credit.
  • Articles, guest posts, or blogs with optimized anchor text links.
  • Low-quality directories or social bookmark links.
  • Keyword-rich, hidden, or low-quality links embedded in widgets that get put on other websites.
  • Widely distributed links in footers or templates. For example, hard-coding a link to your website into the WP Theme that you sell or give away for free.
  • Forum comments with optimized links in the post or signature.

The systems to combat link spam have evolved over the years. Let’s look at some of the major updates.

Nofollow

On January 18, 2005, Google announced it had partnered with other major search engines to introduce the rel=“nofollow” attribute. It encouraged users to add the nofollow attribute to blog comments, trackbacks, and referrer lists to help combat spam.

Here’s an excerpt from Google’s official statement on the introduction of nofollow:

If you’re a blogger (or a blog reader), you’re painfully familiar with people who try to raise their own websites’ search engine rankings by submitting linked blog comments like “Visit my discount pharmaceuticals site.” This is called comment spam, we don’t like it either, and we’ve been testing a new tag that blocks it. From now on, when Google sees the attribute (rel=“nofollow”) on hyperlinks, those links won’t get any credit when we rank websites in our search results. 

Almost all modern systems use the nofollow attribute on blog comment links. 

SEOs even began to abuse nofollow—because of course we did. Nofollow was used for PageRank sculpting, where people would nofollow some links on their pages to make other links stronger. Google eventually changed the system to prevent this abuse.

In 2009, Google’s Matt Cutts confirmed that this would no longer work and that PageRank would be divided across all links on a page even if some were nofollowed (but only passed through the followed links).

Google added a couple more link attributes that are more specific versions of the nofollow attribute on September 10, 2019. These included rel=“ugc”, meant to identify user-generated content, and rel=“sponsored”, meant to identify paid or affiliate links.

Algorithms targeting link spam

As SEOs found new ways to game links, Google worked on new algorithms to detect this spam. 

When the original Penguin algorithm launched on April 24, 2012, it hurt a lot of websites and website owners. Google gave site owners a way to recover later that year by introducing the disavow tool on October 16, 2012.

When Penguin 4.0 launched on September 23, 2016, it brought a welcome change to how link spam was handled by Google. Instead of hurting websites, it began devaluing spam links. This also meant that most sites no longer needed to use the disavow tool. 

Google launched its first Link Spam Update on July 26, 2021. This recently evolved, and a Link Spam Update on December 14, 2022, announced the use of an AI-based detection system called SpamBrain to neutralize the value of unnatural links. 

The original version of PageRank hasn’t been used since 2006, according to a former Google employee. The employee said it was replaced with another less resource-intensive algorithm.

They replaced it in 2006 with an algorithm that gives approximately-similar results but is significantly faster to compute. The replacement algorithm is the number that’s been reported in the toolbar, and what Google claims as PageRank (it even has a similar name, and so Google’s claim isn’t technically incorrect). Both algorithms are O(N log N) but the replacement has a much smaller constant on the log N factor, because it does away with the need to iterate until the algorithm converges. That’s fairly important as the web grew from ~1-10M pages to 150B+.

Remember those iterations and how PageRank kept changing with each iteration? It sounds like Google simplified that system.

What else has changed?

Some links are worth more than others

Rather than splitting the PageRank equally between all links on a page, some links are valued more than others. There’s speculation from patents that Google switched from a random surfer model (where a user may go to any link) to a reasonable surfer model (where some links are more likely to be clicked than others so they carry more weight).

Some links are ignored

There have been several systems put in place to ignore the value of certain links. We’ve already talked about a few of them, including:

  • Nofollow, UGC, and sponsored attributes.
  • Google’s Penguin algorithm.
  • The disavow tool.
  • Link Spam updates.

Google also won’t count any links on pages that are blocked by robots.txt. It won’t be able to crawl these pages to see any of the links. This system was likely in place from the start.

Some links are consolidated

Google has a canonicalization system that helps it determine what version of a page should be indexed and to consolidate signals from duplicate pages to that main version.

Canonicalization signals

Canonical link elements were introduced on February 12, 2009, and allow users to specify their preferred version.

Redirects were originally said to pass the same amount of PageRank as a link. But at some point, this system changed and no PageRank is currently lost.

A bit is still unknown

When pages are marked as noindex, we don’t exactly know how Google treats the links. Even Googlers have conflicting statements.

According to John Mueller, pages that are marked noindex will eventually be treated as noindex, nofollow. This means that the links eventually stop passing any value.

According to Gary, Googlebot will discover and follow the links as long as a page still has links to it.

These aren’t necessarily contradictory. But if you go by Gary’s statement, it could be a very long time before Google stops crawling and counting links—perhaps never.

Can you still check your PageRank?

There’s currently no way to see Google’s PageRank.

URL Rating (UR) is a good replacement metric for PageRank because it has a lot in common with the PageRank formula. It shows the strength of a page’s link profile on a 100-point scale. The bigger the number, the stronger the link profile.

Screenshot showing UR score from Ahrefs overview 2.0

Both PageRank and UR account for internal and external links when being calculated. Many of the other strength metrics used in the industry completely ignore internal links. I’d argue link builders should be looking more at UR than metrics like DR, which only accounts for links from other sites.

However, it’s not exactly the same. UR does ignore the value of some links and doesn’t count nofollow links. We don’t know exactly what links Google ignores and don’t know what links users may have disavowed, which will impact Google’s PageRank calculation. We also may make different decisions on how we treat some of the canonicalization signals like canonical link elements and redirects.

So our advice is to use it but know that it may not be exactly like Google’s system.

We also have Page Rating (PR) in Site Audit’s Page Explorer. This is similar to an internal PageRank calculation and can be useful to see what the strongest pages on your site are based on your internal link structure.

Page rating in Ahrefs' Site Audit

How to improve your PageRank

Since PageRank is based on links, to increase your PageRank, you need better links. Let’s look at your options.

Redirect broken pages

Redirecting old pages on your site to relevant new pages can help reclaim and consolidate signals like PageRank. Websites change over time, and people don’t seem to like to implement proper redirects. This may be the easiest win, since those links already point to you but currently don’t count for you.

Here’s how to find those opportunities: enter your domain into Site Explorer, open the Best by links report, and filter for pages returning a 404 status code.

I usually sort this by “Referring domains.”

Best by links report filtered to 404 status code to show pages you may want to redirect

Take those pages and redirect them to the current pages on your site. If you don’t know exactly where they go or don’t have the time, I have an automated redirect script that may help. It looks at the old content from archive.org and matches it with the closest current content on your site. This is where you likely want to redirect the pages.

Internal links

Backlinks aren’t always within your control. People can link to any page on your site they choose, and they can use whatever anchor text they like.

Internal links are different. You have full control over them.

Internally link where it makes sense. For instance, you may want to link more to pages that are more important to you.

We have a tool within Site Audit called Internal Link Opportunities that helps you quickly locate these opportunities. 

This tool works by looking for mentions of keywords that you already rank for on your site. Then it suggests them as contextual internal link opportunities.

For example, the tool shows a mention of “faceted navigation” in our guide to duplicate content. As Site Audit knows we have a page about faceted navigation, it suggests we add an internal link to that page.

Example of an internal link opportunity

External links

You can also get more links from other sites to your own to increase your PageRank. We have a lot of guides around link building already.

Final thoughts

Even though PageRank has changed, we know that Google still uses it. We may not know all the details or everything involved, but it’s still easy to see the impact of links.

Also, Google just can’t seem to get away from using links and PageRank. It once experimented with not using links in its algorithm and decided against it.

So we don’t have a version like that that is exposed to the public but we have our own experiments like that internally and the quality looks much much worse. It turns out backlinks, even though there is some noise and certainly a lot of spam, for the most part are still a really really big win in terms of quality of search results.

We played around with the idea of turning off backlink relevance and at least for now backlinks relevance still really helps in terms of making sure that we return the best, most relevant, most topical set of search results.

Source: YouTube (Google Search Central)

If you have any questions, message me on Twitter.


