A Technical SEO Guide To Lighthouse Performance Metrics

Maybe you’re here because you’re a die-hard fan of performance metrics. Or maybe you don’t know what Lighthouse is and are too afraid to ask.
Either is an excellent option. Welcome!
Together, we’re hoping to take your performance improvement efforts from “make all the numbers green” to some clear and meaningful action items.
Note: This article was updated for freshness in January 2022 to represent versions 8 and 9.
Technical SEO and Google Data Studio nerd Rachel Anderson joined me on this merry adventure into demystifying developer documentation.
We’re going to answer:
- What is Lighthouse?
- How is Lighthouse different from Core Web Vitals?
- Why doesn’t Lighthouse match Search Console/CrUX reports?
- How is Performance Score calculated?
- Why is my score different each time I test?
- Lighthouse Performance metrics explained
- How to test performance using Lighthouse
What Is Lighthouse?
Performance is about measuring how quickly a browser can assemble a webpage.
Lighthouse uses a web browser called Chromium to build pages and runs tests on the pages as they’re built. The tool is open-source (meaning it is maintained by the community and free to use).
Each audit falls into one of five categories:
- Performance.
- Accessibility.
- Best Practices.
- SEO.
- Progressive Web App.
For the purposes of this article, we’re going to use the name Lighthouse to refer to the series of tests executed by the shared GitHub repo, regardless of the execution method.
Version 9 is currently out on GitHub and is slated for large-scale rollout with the stable Chrome 98 release in February 2022.
Lighthouse And Core Web Vitals
On May 5, 2020, the Chromium project announced a set of three metrics with which the Google-backed open-source browser would measure performance.
The metrics, known as Web Vitals, are part of a Google initiative designed to provide unified guidance for quality signals.
The goal of these metrics is to measure web performance in a user-centric manner.
Within two weeks, Lighthouse v6 rolled out with a modified version of Core Web Vitals at the heart of the update.
July 2020 saw Lighthouse v6’s unified metrics adopted across Google products with the release of Chrome 84.
The Chrome DevTools Audits panel was renamed to Lighthouse. PageSpeed Insights and Google Search Console also reference these unified metrics.
This change in focus sets new, more refined goals.
How Is Lighthouse Different From Core Web Vitals?
The three Core Web Vitals metrics are part of Lighthouse performance scoring.
Largest Contentful Paint, Total Blocking Time, and Cumulative Layout Shift comprise 70% of Lighthouse’s weighted performance score.
The Core Web Vitals scores you’ll see in Lighthouse are the result of emulated tests.
They’re the same metrics, but measured from a single lab page load rather than aggregated from real page loads around the world.
Why Doesn’t Lighthouse Match Search Console/CrUX Reports?
For real users, how quickly a page assembles depends on factors like their network connection, their device’s processing power, and even their physical distance from the site’s servers.
Lighthouse performance data doesn’t account for all these factors.
Instead, the tool emulates a mid-range device and throttles CPU in order to simulate the average user.
These are lab tests collected within a controlled environment with predefined device and network settings.
Lab data is helpful for debugging performance issues.
It does not mean that the experience on your local machine in a controlled environment represents the experiences of real humans in the wild.
The good news is you don’t have to choose between Lighthouse and Core Web Vitals. They’re designed to be part of the same workflow.
Always start with field data from the Chrome User Experience Report to identify issues impacting real users.
Then leverage the expanded testing capabilities of Lighthouse to identify the code causing the issue.
If you’re working on a site pre-launch or QAing changes in a non-public environment, Lighthouse will be your new best #webperf friend.

How Is The Lighthouse Performance Score Calculated?

In versions 8 and 9, Lighthouse’s performance score is made up of six metrics, each contributing a weighted percentage of the total performance score.
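To make the weighting concrete, here’s a minimal sketch of the arithmetic, assuming each metric has already been converted to a 0-1 score by Lighthouse’s log-normal scoring curves (which are derived from HTTP Archive data). The weights are the v8/v9 values listed in the metric sections below.

```js
// A minimal sketch of the weighted average behind the Performance score.
// Assumes each metric has already been converted to a 0-1 score by
// Lighthouse's scoring curves; the weights are the v8/v9 values
// referenced throughout this article.
const WEIGHTS = {
  'first-contentful-paint': 0.10,
  'speed-index': 0.10,
  'largest-contentful-paint': 0.25,
  'interactive': 0.10,          // Time to Interactive
  'total-blocking-time': 0.30,
  'cumulative-layout-shift': 0.15,
};

function performanceScore(metricScores) {
  let total = 0;
  for (const [id, weight] of Object.entries(WEIGHTS)) {
    total += weight * (metricScores[id] ?? 0);
  }
  return Math.round(total * 100); // 0-100, as shown in the report
}

console.log(performanceScore({
  'first-contentful-paint': 0.98,
  'speed-index': 0.95,
  'largest-contentful-paint': 0.92,
  'interactive': 0.97,
  'total-blocking-time': 0.85,
  'cumulative-layout-shift': 1.0,
})); // -> 93
```

In practice you rarely need to do this math by hand; the Lighthouse Scoring Calculator mentioned later in this article does the same calculation interactively.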

Why Is My Score Different Each Time I Test?
Your score may change each time you test.
Browser extensions, internet connection, A/B tests, or even the ads displayed on that specific page load have an impact.
If you’re curious/furious to know more, check out the documentation on performance testing variability.
Lighthouse Performance Metrics Explained
Largest Contentful Paint (LCP)
- What it represents: A user’s perception of loading experience.
- Lighthouse Performance score weighting: 25%
- What it measures: The point in the page load timeline when the page’s largest image or text block is visible within the viewport.
- How it’s measured: Lighthouse extracts LCP data from Chrome’s tracing tool.
- Is Largest Contentful Paint a Core Web Vital? Yes!
LCP Scoring
- Goal: Achieve LCP in < 2.5 seconds.

What Elements Can Be Part Of LCP?
- Text.
- Images.
- Videos.
- Background images.
What Counts As LCP On Your Page?
It depends! LCP typically varies by page template.
This means you can measure a handful of pages that use the same template and define the LCP for that template.
Lighthouse will provide you with the exact HTML of the LCP element, but it can be useful to know the node as well when communicating with developers.
The node name will be consistent while the exact on-page image or text may change depending on which content is rendered by the template.
How To Define LCP Using Chrome Devtools
- Open the page in Chrome.
- Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
- Hover over the LCP marker in the Timings section.
- The element(s) that correspond to LCP are detailed in the Related Node field.
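If you prefer the console, you can surface the same node with the Largest Contentful Paint API. This is a minimal sketch; the last element it logs is the page’s LCP element.

```js
// Log LCP candidates as the page loads (paste into the DevTools console).
// Each new entry supersedes the previous one; the final entry reported
// corresponds to the element Lighthouse flags as LCP.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', Math.round(entry.startTime), 'ms', entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });
```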

What Causes Poor LCP?
Poor LCP typically comes from four issues:
- Slow server response times.
- Render-blocking JavaScript and CSS.
- Resource load times.
- Client-side rendering.
How To Fix Poor LCP
If the cause is slow server response time:
- Optimize your server.
- Route users to a nearby CDN.
- Cache assets.
- Serve HTML pages cache-first.
- Establish third-party connections early.
If the cause is render-blocking JavaScript and CSS:
- Minify CSS.
- Defer non-critical CSS.
- Inline critical CSS.
- Minify and compress JavaScript files.
- Defer unused JavaScript.
- Minimize unused polyfills.
If the cause is resource load times:
- Optimize and compress images.
- Preload important resources.
- Compress text files.
- Deliver different assets based on the network connection (adaptive serving).
- Cache assets using a service worker (see the sketch below).
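On that last point, here’s a minimal sketch of a cache-first service worker. The cache name and asset paths are hypothetical placeholders; adapt the precache list to your own critical resources.

```js
// sw.js - a minimal cache-first service worker sketch.
// CACHE_NAME and PRECACHE_ASSETS are hypothetical placeholders.
const CACHE_NAME = 'static-v1';
const PRECACHE_ASSETS = ['/styles/main.css', '/scripts/app.js', '/images/hero.webp'];

self.addEventListener('install', (event) => {
  // Cache critical static assets at install time.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_ASSETS))
  );
});

self.addEventListener('fetch', (event) => {
  // Serve from the cache first, falling back to the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```

Register it from your page with navigator.serviceWorker.register('/sw.js').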
If the cause is client-side rendering:
- Minimize critical JavaScript.
- Use server-side rendering.
- Use pre-rendering.
Resources For Improving LCP
Total Blocking Time (TBT)
- What it represents: Responsiveness to user input.
- Lighthouse Performance score weighting: 30%
- What it measures: TBT measures the time between First Contentful Paint and Time to Interactive. TBT is the lab equivalent of First Input Delay (FID) – the field data used in the Chrome User Experience Report and Google’s upcoming Page Experience ranking signal.
- How it’s measured: The total time in which the main thread is occupied by tasks taking more than 50ms to complete. If a task takes 80ms to run, 30ms of that time will be counted toward TBT. If a task takes 45ms to run, 0ms will be added to TBT.
- Is Total Blocking Time a Core Web Vital? Yes! It’s the lab data equivalent of First Input Delay (FID).
TBT Scoring
- Goal: Achieve TBT score of less than 300 milliseconds.

First Input Delay, the field data equivalent to TBT, has different thresholds.

Long Tasks And Total Blocking Time
TBT measures long tasks – those taking longer than 50ms.
When a browser loads your site, there is essentially a single line queue of scripts waiting to be executed.
Any input from the user has to go into that same queue.
When the browser can’t respond to user input because other tasks are executing, the user perceives this as lag.
Essentially, long tasks are like that person at your favorite coffee shop who takes far too long to order a drink.
Like someone ordering a 2% venti four-pump vanilla, five-pump mocha whole-fat froth, long tasks are a major source of bad experiences.
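To make the 50ms rule concrete, here’s a small sketch that totals blocking time from a set of hypothetical main-thread task durations.

```js
// Hypothetical main-thread task durations (in ms) between FCP and TTI.
const taskDurations = [80, 45, 120, 30, 250];

// Only the portion of each task beyond 50ms counts as blocking time.
const BLOCKING_THRESHOLD_MS = 50;
const totalBlockingTime = taskDurations.reduce(
  (total, duration) => total + Math.max(0, duration - BLOCKING_THRESHOLD_MS),
  0
);

console.log(`TBT: ${totalBlockingTime}ms`); // 30 + 0 + 70 + 0 + 200 = 300ms
```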

What Causes A High TBT On Your Page?
Heavy JavaScript.
That’s it.
How To See TBT Using Chrome Devtools

How To Fix Poor TBT
- Break up Long Tasks (see the sketch after this list).
- Optimize your page for interaction readiness.
- Use a web worker.
- Reduce JavaScript execution time.
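As a sketch of the first item, one common pattern is to slice a long task into chunks that yield back to the main thread between slices, so user input can be handled in the gaps. The items and processItem names are hypothetical placeholders.

```js
// Break one long task into smaller chunks that yield to the main thread.
const yieldToMainThread = () => new Promise((resolve) => setTimeout(resolve, 0));

async function processInChunks(items, processItem, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    // Do a small slice of work...
    items.slice(i, i + chunkSize).forEach(processItem);
    // ...then give the browser a chance to respond to user input.
    await yieldToMainThread();
  }
}

// Usage (hypothetical): processInChunks(products, renderProductCard);
```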
Resources For Improving TBT
First Contentful Paint (FCP)
- What it represents: FCP marks the time at which the first text or image is painted (visible).
- Lighthouse Performance score weighting: 10%
- What it measures: The time when I can see the page I requested is responding. My thumb can stop hovering over the back button.
- How it’s measured: Your FCP score in Lighthouse is measured by comparing your page’s FCP to FCP times from real websites stored in the HTTP Archive.
- Your FCP score increases if your page is faster than other pages in the HTTP Archive.
- Is First Contentful Paint a Core Web Vital? No.
FCP Scoring
- Goal: Achieve FCP in < 2 seconds.

What Elements Can Be Part Of FCP?
The time it takes to render the first visible element to the DOM is the FCP.
Everything that happens before the first element renders non-white content to the page (excluding iframes) counts toward FCP.
Since iframes aren’t considered part of FCP, if an iframe is the first content to render, the clock keeps running until the first non-iframe content appears; the iframe’s load time isn’t counted toward FCP.
The FCP documentation also calls out that the metric is often impacted by font load time, and it includes tips for improving font loads.
FCP Using Chrome Devtools
- Open the page in Chrome.
- Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
- Click on the FCP marker in the Timings section.
- The summary tab has a timestamp with the FCP in ms.
How To Improve FCP
Before the browser can render any content to the user’s screen, it must download, parse, and process every external stylesheet it encounters.
The fastest way to bypass the delay of external resources is to use inline styles for above-the-fold content.
To keep your site sustainably scalable, use an automated tool like Penthouse or Apache’s mod_pagespeed.
These solutions will come with some restrictions to functionalities, require testing, and may not be for everyone.
Universally, we can all improve our site’s time to First Contentful Paint by reducing the scope and complexity of style calculations.
If a style isn’t being used, remove it.
You can identify unused CSS with Chrome Dev Tool’s built-in Code Coverage functionality.
Use better data to make better decisions.
Similar to TTI, you can capture real user metrics for FCP using Google Analytics to correlate improvements with KPIs.
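As a rough sketch of that idea, you can read FCP from real users with the Paint Timing API and forward it to your analytics. The gtag() call below assumes a standard Google Analytics 4 setup and is purely illustrative; swap in whatever reporting endpoint you actually use.

```js
// Report real-user FCP. The gtag() event is an assumption about your
// analytics setup; replace it with your own reporting call if needed.
new PerformanceObserver((entryList, observer) => {
  const fcpEntry = entryList.getEntriesByName('first-contentful-paint')[0];
  if (fcpEntry) {
    gtag('event', 'web_vitals', {
      metric_name: 'FCP',
      value: Math.round(fcpEntry.startTime),
    });
    observer.disconnect();
  }
}).observe({ type: 'paint', buffered: true });
```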
Resources For Improving FCP
Speed Index
- What it represents: How quickly content is visually populated during load.
- Lighthouse Performance score weighting: 10%
- What it measures: The Speed Index is the average time at which visible parts of the page are displayed.
- How it’s measured: Lighthouse’s Speed Index measurement comes from a node module called Speedline.
You’ll have to ask the kindly wizards at webpagetest.org for the specifics, but roughly, Speedline scores the visual completeness of each frame of the page load, and results vary with the size of the viewport (read: device screen).

- Is Speed Index a Core Web Vital? No.
SI Scoring
- Goal: Achieve SI in < 4.3 seconds.

How To Improve SI
Your Speed Index score reflects your site’s Critical Rendering Path.
A “critical” resource means that the resource is required for the first paint or is crucial to the page’s core functionality.
The longer and denser the path, the slower your site will be to provide a visual page.
If your path is optimized, you’ll give users content faster and score higher on Speed Index.
How The Critical Path Affects Rendering

Lighthouse recommendations commonly associated with a slow Critical Rendering Path include:
- Minimize main-thread work.
- Reduce JavaScript execution time.
- Minimize Critical Requests Depth.
- Eliminate Render-Blocking Resources.
- Defer offscreen images.
Resources For Improving SI
Time To Interactive
- What it represents: Load responsiveness; identifying where a page looks responsive but isn’t yet.
- Lighthouse Performance score weighting: 10%
- What it measures: The time from when the page begins loading to when its main resources have loaded and are able to respond to user input.
- How it’s measured: TTI measures how long it takes a page to become fully interactive. A page is considered fully interactive when:
1. The page displays useful content, which is measured by the First Contentful Paint.
2. Event handlers are registered for most visible page elements.
3. The page responds to user interactions within 50 milliseconds.
- Is Time to Interactive a Core Web Vital? No.
TTI Scoring
- Goal: Achieve a TTI score of less than 3.8 seconds.
Resources For Improving TTI
Cumulative Layout Shift (CLS)
- What it represents: A user’s perception of a page’s visual stability.
- Lighthouse Performance score weighting: 15%
- What it measures: It quantifies how much visible page elements shift through the end of the page load.
- How it’s measured: Unlike the other metrics, CLS isn’t measured in time. It’s a calculated score: each unexpected shift is the impact fraction (how much of the viewport the shifting elements affect) multiplied by the distance fraction (how far they move), and those shift scores are added up (see the sketch below).
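If you want to watch those shifts accumulate yourself, here’s a minimal console sketch using the Layout Instability API. It keeps a simplified running total rather than the exact session-window calculation, but it is enough to spot the offending elements.

```js
// Log layout shifts and a simplified running CLS total in the console.
// Shifts within 500ms of user input are excluded, matching the metric's definition.
let clsTotal = 0;
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    if (!entry.hadRecentInput) {
      clsTotal += entry.value;
      console.log('Shift:', entry.value.toFixed(4), 'Running total:', clsTotal.toFixed(4), entry.sources);
    }
  }
}).observe({ type: 'layout-shift', buffered: true });
```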

CLS Scoring
- Goal: Achieve a CLS score of less than 0.1.

What Elements Can Be Part Of CLS?
Any visual element that appears above the fold at some point in the load.
That’s right – if you’re loading your footer first and then the hero content of the page, your CLS is going to hurt.
Causes Of Poor CLS
- Images without dimensions.
- Ads, embeds, and iframes without dimensions.
- Dynamically injected content.
- Web Fonts causing FOIT/FOUT.
- Actions waiting for a network response before updating DOM.
How To Define CLS Using Chrome Devtools
- Open the page in Chrome.
- Navigate to the Performance panel of Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
- Hover and move from left to right over the screenshots of the load (make sure the screenshots checkbox is checked).
- Watch for elements bouncing around after the first paint to identify elements causing CLS.
How To Improve CLS
Once you identify the element(s) at fault, you’ll need to update them to be stable during the page load.
For example, if slow-loading ads are causing a high CLS score, you may want to use a placeholder of the same size to fill that space while the ad loads, preventing the page from shifting.
Some common ways to improve CLS include:
- Always include width and height attributes on images and video elements (see the sketch after this list).
- Reserve space for ad slots (and don’t collapse it).
- Avoid inserting new content above existing content.
- Take care when placing non-sticky ads near the top of the viewport.
- Preload fonts.
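For the first item, a quick way to audit a page is to list the media elements that are missing explicit dimensions. This console sketch is a rough starting point, not a complete CLS audit.

```js
// List images and videos without explicit width/height attributes
// (common CLS culprits). Paste into the DevTools console.
const missingDimensions = [...document.querySelectorAll('img, video')]
  .filter((el) => !el.hasAttribute('width') || !el.hasAttribute('height'));

console.table(
  missingDimensions.map((el) => ({
    tag: el.tagName.toLowerCase(),
    src: el.currentSrc || el.src,
  }))
);
```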
CLS Resources
How To Test Performance Using Lighthouse
Methodology Matters
Out of the box, Lighthouse audits a single page at a time.
A single page score doesn’t represent your site, and a fast homepage doesn’t mean a fast site.
Test multiple page types within your site.
Identify your major page types, templates, and goal conversion points (signup, subscribe, and checkout pages).
If 40% of your site is blog posts, make 40% of your testing URLs blog pages!
Example Page Testing Inventory

Before you begin optimizing, run Lighthouse on each of your sample pages and save the report data.
Record your scores and the to-do list of improvements.
Prevent data loss by saving the JSON results and utilizing Lighthouse Viewer when detailed result information is needed.
Get Your Backlog to Bite Back Using ROI
Getting development resources to action SEO recommendations is hard.
An in-house SEO professional could destroy their pancreas by having a birthday cake for every backlogged ticket’s birthday. Or at least learn to hate cake.
In my experience as an in-house enterprise SEO pro, the trick to getting performance initiatives prioritized is having the numbers to back the investment.
This starting data will become dollar signs that serve to justify and reward development efforts.
With Lighthouse testing, you can recommend specific and direct changes (think: preload this font file) and associate each change with a specific metric.
Chances are you’re going to have more than one area flagged during tests. That’s okay!
If you’re wondering which changes will have the most bang for the buck, check out the Lighthouse Scoring Calculator.
How To Run Lighthouse Tests
This is a case of many roads leading to Oz.
Sure, some scarecrow might be particularly loud about a certain shade of brick, but it’s about your goals.
Looking to test an entire staging site? Time to learn some NPM.
Have less than five minutes to prep for a prospective client meeting? A couple of one-off reports should do the trick.
Whichever way you execute, default to mobile unless you have a special use-case for desktop.
For One-Off Reports: PageSpeed Insights
Test one page at a time on PageSpeed Insights. Simply enter the URL.

Pros Of Running Lighthouse From PageSpeed Insights
- Detailed Lighthouse report is combined with URL-specific data from the Chrome User Experience Report.
- Opportunities and Diagnostics can be filtered to specific metrics. This is exceptionally useful when creating tickets for your engineers and tracking the resulting impact of the changes.
- PageSpeed Insights is already running version 9.
Screenshot from PageSpeed Insights, January 2022
Cons Of Running Lighthouse From PageSpeed Insights
- One report at a time.
- Only Performance tests are run (if you need SEO, Accessibility, or Best Practices, you’ll need to run those separately).
- You can’t test local builds or authenticated pages.
- Reports can’t be saved in JSON, HTML, or Gist format. (Save as PDF via browser functionality is an option.)
- Requires you to manually save results.
For Comparing Test Results: Chrome DevTools Or Web.dev
Because the report will be emulating a user’s experience in your browser, use an incognito instance with all extensions disabled and the browser’s cache disabled.
Pro-tip: Create a Chrome profile for testing. Keep it local (no sync enabled, password saving, or association to an existing Google account) and don’t install extensions for the user.
How To Run A Lighthouse Test Using Chrome DevTools
- Open an incognito instance of Chrome.
- Navigate to the Network panel of Chrome Dev Tools (Command + Option + I on Mac or Control + Shift + I on Windows and Linux).
- Tick the box to disable cache.
- Navigate to the Lighthouse panel.
- Click Generate Report.
- Click the dots to the right of the URL in the report.
- Save in your preferred format (JSON, HTML, or Gist).
Screenshot from Lighthouse Reports, January 2022
Note that your version of Lighthouse may change depending on what version of Chrome you’re using. v8.5 is used on Chrome 97.
Lighthouse v9 will ship with DevTools in Chrome 98.
How To Run A Lighthouse Test Using Web.dev
It’s just like DevTools but you don’t have to remember to disable all those pesky extensions!
- Go to web.dev/measure.
- Enter your URL.
- Click Run Audit.
- Click View Report.
Screenshot by author, January 2022
Pros Of Running Lighthouse From DevTools/web.dev
- You can test local builds or authenticated pages.
- Saved reports can be compared using the Lighthouse CI Diff tool.
Screenshot from Lighthouse CI Diff, January 2022
Cons Of Running Lighthouse From DevTools/web.dev
- One report at a time.
- Requires you to manually save results.
For Testing At Scale (and Sanity): Node Command Line
1. Install npm.
(Mac Pro-tip: Use Homebrew to avoid obnoxious dependency issues.)
2. Install the Lighthouse node module with npm install -g lighthouse.
3. Run a single test with lighthouse <url>.
4. Run tests on lists of URLs by running tests programmatically (see the sketch below).
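Here’s a minimal sketch of that programmatic approach using the lighthouse and chrome-launcher node modules. The URLs and file naming are placeholders, and depending on your Lighthouse version you may need ES module imports instead of require().

```js
// run-lighthouse.js - run Lighthouse against a list of URLs and save each JSON report.
// Assumes `npm install lighthouse chrome-launcher` has been run; URLs are placeholders.
const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

const urls = [
  'https://www.example.com/',
  'https://www.example.com/blog/sample-post/',
];

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { output: 'json', onlyCategories: ['performance'], port: chrome.port };

  for (const url of urls) {
    const result = await lighthouse(url, options);
    const score = Math.round(result.lhr.categories.performance.score * 100);
    console.log(`${url}: ${score}`);

    // Save the full JSON report for later comparison in Lighthouse Viewer.
    const fileName = url.replace(/[^a-z0-9]+/gi, '_') + '.json';
    fs.writeFileSync(fileName, result.report);
  }

  await chrome.kill();
})();
```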
Pros Of Running Lighthouse From Node
- Many reports can be run at once.
- Can be set to run automatically to track change over time.
Cons Of Running Lighthouse From Node
- Requires some coding knowledge.
- More time-intensive setup.
Conclusion
The complexity of performance metrics reflects the challenges facing all sites.
We use performance metrics as a proxy for user experience – that means factoring in some unicorns.
Tools like Google’s Test My Site and What Does My Site Cost? can help you make the conversion and customer-focused arguments for why performance matters.
Hopefully, once your project has traction, these definitions will help you translate Lighthouse’s single performance metric into action tickets for a skilled and collaborative engineering team.
Track your data and shout it from the rooftops.
As much as Google struggles to quantify qualitative experiences, SEO professionals and devs must decode how to translate a concept into code.
Test, iterate, and share what you learn! I look forward to seeing what you’re capable of, you beautiful unicorn.
More resources:
Featured Image: Paulo Bobita/Search Engine Journal
Firefox URL Tracking Removal – Is This A Trend To Watch?

Firefox recently announced that it is offering users a choice of whether or not to include tracking information in copied URLs, which comes on the heels of iOS 17 blocking user tracking via URLs. The momentum of removing tracking information from URLs appears to be gaining speed. Where is this all going, and should marketers be concerned?
Is it possible that blocking URL tracking parameters in the name of privacy will become a trend industrywide?
Firefox Announcement
Firefox recently announced that beginning in the Firefox Browser version 120.0, users will be able to select whether or not they want URLs that they copied to contain tracking parameters.
When users select a link to copy and open the contextual menu for it, Firefox now gives them a choice of copying the URL with or without any tracking parameters attached to it.
Screenshot Of Firefox 120 Contextual Menu
According to the Firefox 120 announcement:
“Firefox supports a new “Copy Link Without Site Tracking” feature in the context menu which ensures that copied links no longer contain tracking information.”
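Conceptually, the feature boils down to dropping known tracking parameters from the copied URL. The sketch below is purely illustrative: the parameter list is an assumption for demonstration, not Firefox’s actual blocklist.

```js
// An illustrative sketch of stripping common tracking parameters from a URL.
// The parameter list is an assumption for demonstration purposes only.
const TRACKING_PARAMS = [
  'fbclid', 'gclid', 'msclkid', 'mc_eid',
  'utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content',
];

function stripTracking(rawUrl) {
  const url = new URL(rawUrl);
  TRACKING_PARAMS.forEach((param) => url.searchParams.delete(param));
  return url.toString();
}

console.log(stripTracking('https://example.com/page?utm_source=news&fbclid=abc123&id=42'));
// -> https://example.com/page?id=42
```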
Browser Trends For Privacy
All browsers, including Google’s Chrome and Chrome variants, are adding new features that make it harder for websites to track users online through referrer information embedded in a URL when a user clicks from one site and leaves through that click to visit another site.
This trend for privacy has been ongoing for many years but it became more noticeable in 2020 when Chrome made changes to how referrer information was sent when users click links to visit other sites. Firefox and Safari followed with similar referrer behavior.
Whether the current Firefox implementation is disruptive, or whether the impact is overblown, is somewhat beside the point.
The point is whether what Firefox and Apple did to protect privacy is a trend, and whether that trend will extend to stronger blocking of URL parameters than what Firefox recently implemented.
I asked Kenny Hyder, CEO of online marketing agency Pixel Main, what his thoughts are about the potential disruptive aspect of what Firefox is doing and whether it’s a trend.
Kenny answered:
“It’s not disruptive from Firefox alone, which only has a 3% market share. If other popular browsers follow suit it could begin to be disruptive to a limited degree, but easily solved from a marketer’s perspective.
If it became more intrusive and they blocked UTM tags, it would take a while for them all to catch on if you were to circumvent UTM tags by simply tagging things in a series of subdirectories, i.e. site.com/landing/<tag1>/<tag2> etc.
Also, most savvy marketers are already integrating future-proof workarounds for these exact scenarios.
A lot can be done with pixel-based integrations rather than cookie-based or UTM tracking. When set up properly they can actually provide better and more accurate tracking and attribution. Hence the name of my agency, Pixel Main.
I think most marketers are aware that privacy is the trend. The good ones have already taken steps to keep it from becoming a problem while still respecting user privacy.”
Some URL Parameters Are Already Affected
For those who are on the periphery of what’s going on with browsers and privacy, it may come as a surprise that some tracking parameters are already affected by actions meant to protect user privacy.
Jonathan Cairo, Lead Solutions Engineer at Elevar, shared that a limited amount of tracking-related information is already stripped from URLs.
But he also explained that there are limits to how much information can be stripped from URLs because the resulting negative effects would cause important web browsing functionality to fail.
Jonathan explained:
“So far, we’re seeing a selective trend where some URL parameters, like ‘fbclid’ in Safari’s private browsing, are disappearing, while others, such as TikTok’s ‘ttclid’, remain.
UTM parameters are expected to stay since they focus on user segmentation rather than individual tracking, provided they are used as intended.
The idea of completely removing all URL parameters seems improbable, as it would disrupt key functionalities on numerous websites, including banking services and search capabilities.
Such a drastic move could lead users to switch to alternative browsers.
On the other hand, if only some parameters are eliminated, there’s the possibility of marketers exploiting the remaining ones for tracking purposes.
This raises the question of whether companies like Apple will take it upon themselves to prevent such use.
Regardless, even in a scenario where all parameters are lost, there are still alternative ways to convey click IDs and UTM information to websites.”
Brad Redding of Elevar agreed about the disruptive effect from going too far with removing URL tracking information:
“There is still too much basic internet functionality that relies on query parameters, such as logging in, password resets, etc, which are effectively the same as URL parameters in a full URL path.
So we believe the privacy crackdown is going to continue on known trackers by blocking their tracking scripts, cookies generated from them, and their ability to monitor user’s activity through the browser.
As this grows, the reliance on brands to own their first party data collection and bring consent preferences down to a user-level (vs session based) will be critical so they can backfill gaps in conversion data to their advertising partners outside of the browser or device.”
The Future Of Tracking, Privacy And What Marketers Should Expect
Elevar raises good points about how far browsers can go with blocking. Their view is that it’s down to brands to own their first-party data collection and adopt other strategies to accomplish analytics without compromising user privacy.
Given all the laws governing privacy and internet tracking that have been enacted around the world, it looks like privacy will continue to be a trend.
However, at this point in time, the advice is to keep monitoring how far browsers are going; there is no expectation that things will get out of hand.
How To Become an SEO Expert in 4 Steps

With 74.1% of SEOs charging clients upwards of $500 per month for their services, there’s a clear financial incentive to get good at SEO. But with no colleges offering degrees in the topic, it’s down to you to carve your own path in the industry.
There are many ways to do this; some take longer than others.
In this post, I’ll share how I’d go from zero to SEO pro if I had to do it all over again.
Understanding what search engine optimization really is and how it works is the first order of business. While you can do this by reading endless blog posts or watching YouTube videos, I wouldn’t recommend that approach for a few reasons:
- It’s hard to know where to start
- It’s hard to join the dots
- It’s hard to know who to trust
You can solve all of these problems by taking a structured course like our SEO course for beginners. It’s completely free (no signup required), consists of 14 short video lessons (2 hours total length), and covers:
- What SEO is and why it’s important
- How to do keyword research
- How to optimize pages for keywords
- How to build links (and why you need them)
- Technical SEO best practices
Here’s the first lesson to get you started:
It doesn’t matter how many books you read about golf, you’re never going to win a tournament without picking up a set of clubs and practicing. It’s the same with SEO. The theory is important, but there’s no substitute for getting your hands dirty and trying to rank a site.
If you don’t have a site already, you can get up and running fairly quickly with any major website platform. Some will set you back a few bucks, but they handle SEO basics out of the box. This saves you time sweating the small stuff.
As for what kind of site you should create, I recommend a simple hobby blog.
Here’s a simple food blog I set up in <10 minutes:


Once you’re set up, you’re ready to start practicing and honing your SEO skills. Specifically, doing keyword research to find topics, writing and optimizing content about them, and (possibly) building a few backlinks.
For example, according to Ahrefs’ Keywords Explorer, the keyword “neopolitan pizza dough recipe” has a monthly traffic potential of 4.4K as well as a relatively low Keyword Difficulty (KD) score:


Even better, there’s a weak website (DR 16) in the top three positions—so this should definitely be quite an easy topic to rank for.


Given that most of the top-ranking posts have at least a few backlinks, a page about this topic would also likely need at least a few backlinks to compete. Check out the resources below to learn how to build these.
It’s unlikely that your hobby blog is going to pay the bills, so it’s time to use the work you’ve done so far to get a job in SEO. Here are a few benefits of doing this:
- Get paid to learn. This isn’t the case when you’re home alone reading blog posts and watching videos or working on your own site.
- Get deeper hands-on experience. Agencies work with all kinds of businesses, which means you’ll get to build experience with all kinds of sites, from blogs to ecommerce.
- Build your reputation. Future clients or employers are more likely to take you seriously if you’ve worked for a reputable SEO agency.
To find job opportunities, start by signing up for SEO newsletters like SEO Jobs and SEOFOMO. Both of these send weekly emails and feature remote job opportunities:


You can also go the traditional route and search job sites for entry-level positions. The kinds of jobs you’re looking for will usually have “Junior” in their titles or at least mention that it’s a junior position in their description.


Beyond that, you can search for SEO agencies in your local area and check their careers pages.
Even if there are no entry-level positions listed here, it’s still worth emailing and asking if there are any upcoming openings. Make sure to mention any SEO success you’ve had with your website and where you’re at in your journey so far.
This might seem pushy, but many agencies actually encourage this—such as Rise at Seven:


Here’s a quick email template to get you started:
Subject: Junior SEO position?
Hey folks,
Do you have any upcoming openings for junior SEOs?
I’ve been learning SEO for [number] months, but I’m looking to take my knowledge to the next level. So far, I’ve taken Ahrefs’ Beginner SEO course and started my own blog about [topic]—which I’ve had some success with. It’s only [number] months old but already ranks for [number] keywords and gets an estimated [number] monthly search visits according to Ahrefs.
[Ahrefs screenshot]
I checked your careers page and didn’t see any junior positions there, but I was hoping you might consider me for any upcoming positions? I’m super enthusiastic, hard-working, and eager to learn.
Let me know.
[Name]
You can pull all the numbers and screenshots you need by creating a free Ahrefs Webmaster Tools account and verifying your website.
SEO is a broad industry. It’s impossible to be an expert at every aspect of it, so you should niche down and hone your skills in the area that interests you the most. You should have a reasonable idea of what this is from working on your own site and in an agency.
For example, link building was the area that interested me the most, so that’s where I focused on deepening my knowledge. As a result, I became what’s known as a “t-shaped SEO”—someone with broad skills across all things SEO but deep knowledge in one area.


Marie Haynes is another great example of a t-shaped SEO. She specializes in Google penalty recovery. She doesn’t build links or do on-page SEO. She audits websites with traffic drops and helps their owners recover.
In terms of how to build your knowledge in your chosen area, here are a few ideas:
Here are a few SEOs I’d recommend following and their (rough) specialties:
Final thoughts
K. Anders Ericsson famously theorized that it takes 10,000 hours of practice to master a new skill. Can it take less? Possibly. But the point is this: becoming an SEO expert is not an overnight process.
I’d even argue that it’s a somewhat unattainable goal because no matter how much you know, there’s always more to learn. That’s part of the fun, though. SEO is a fast-moving industry that keeps you on your toes, but it’s a very rewarding one, too.
Here are a few stats to prove it:
- 74.1% of SEOs charge clients upwards of $500 per month for their services (source)
- $49,211 median annual salary (source)
- ~$74k average salary for self-employed SEOs (source)
Got questions? Ping me on X (formerly Twitter).
A Year Of AI Developments From OpenAI

Today, ChatGPT celebrates one year since its launch in research preview.
Try talking with ChatGPT, our new AI system which is optimized for dialogue. Your feedback will help us improve it. https://t.co/sHDm57g3Kr
— OpenAI (@OpenAI) November 30, 2022
From its humble beginnings, ChatGPT has continually pushed the boundaries of what we perceive as possible with generative AI for almost any task.
a year ago tonight we were probably just sitting around the office putting the finishing touches on chatgpt before the next morning’s launch.
what a year it’s been…
— Sam Altman (@sama) November 30, 2023
In this article, we take a journey through the past year, highlighting the significant milestones and updates that have shaped ChatGPT into the versatile and powerful tool it is today.
a year ago tonight we were placing bets on how many total users we’d get by sunday
20k, 80k, 250k… i jokingly said “8B”.
little did we know… https://t.co/8YtO8GbLPy
— rapha gontijo lopes (@rapha_gl) November 30, 2023
ChatGPT: From Research Preview To Customizable GPTs
This story unfolds over the course of nearly a year, beginning on November 30, 2022, when OpenAI announced the launch of its research preview of ChatGPT.
As users began to offer feedback, improvements began to arrive.
Before the holiday, on December 15, 2022, ChatGPT received general performance enhancements and new features for managing conversation history.

As the calendar turned to January 9, 2023, ChatGPT saw improvements in factuality, and a notable feature was added to halt response generation mid-conversation, addressing user feedback and enhancing control.
Just a few weeks later, on January 30, the model was further upgraded for enhanced factuality and mathematical capabilities, broadening its scope of expertise.
February 2023 was a landmark month. On February 9, ChatGPT Plus was introduced, bringing new features and a faster ‘Turbo’ version to Plus users.
This was followed closely on February 13 with updates to the free plan’s performance and the international availability of ChatGPT Plus, featuring a faster version for Plus users.
March 14, 2023, marked a pivotal moment with the introduction of GPT-4 to ChatGPT Plus subscribers.


This new model featured advanced reasoning, complex instruction handling, and increased creativity.
Less than ten days later, on March 23, experimental AI plugins, including browsing and Code Interpreter capabilities, were made available to selected users.
On May 3, users gained the ability to turn off chat history and export data.
Plus users received early access to experimental web browsing and third-party plugins on May 12.
On May 24, the iOS app expanded to more countries with new features like shared links, Bing web browsing, and the option to turn off chat history on iOS.
June and July 2023 were filled with updates enhancing mobile app experiences and introducing new features.
The mobile app was updated with browsing features on June 22, and the browsing feature itself underwent temporary removal for improvements on July 3.
The Code Interpreter feature rolled out in beta to Plus users on July 6.
Plus customers enjoyed increased message limits for GPT-4 from July 19, and custom instructions became available in beta to Plus users the next day.
July 25 saw the Android version of the ChatGPT app launch in selected countries.
As summer progressed, August 3 brought several small updates enhancing the user experience.
Custom instructions were extended to free users in most regions by August 21.
The month concluded with the launch of ChatGPT Enterprise on August 28, offering advanced features and security for enterprise users.
Entering autumn, September 11 brought limited language support to the web interface.
Voice and image input capabilities in beta were introduced on September 25, further expanding ChatGPT’s interactive abilities.
An updated version of web browsing rolled out to Plus users on September 27.
The fourth quarter of 2023 began with the integration of DALL·E 3 in beta on October 16, allowing for image generation from text prompts.
The browsing feature moved out of beta for Plus and Enterprise users on October 17.
Customizable versions of ChatGPT, called GPTs, were introduced for specific tasks on November 6 at OpenAI’s DevDay.


On November 21, the voice feature in ChatGPT was made available to all users, rounding off a year of significant advancements and broadening the horizons of AI interaction.
And here, we have ChatGPT today, with a sidebar full of GPTs.


Looking Ahead: What’s Next For ChatGPT
The past year has been a testament to continuous innovation, but it is merely the prologue to a future rich with potential.
The upcoming year promises incremental improvements and leaps in AI capabilities, user experience, and integrative technologies that could redefine our interaction with digital assistants.
With a community of users and developers growing stronger and more diverse, the evolution of ChatGPT is poised to surpass expectations and challenge the boundaries of today’s AI landscape.
As we step into this next chapter, the possibilities seem limitless as generative AI continues to advance.
Featured image: photosince/Shutterstock