High-Quality Links vs. Low-Quality Links: What’s the Difference?

We spend a lot of time as SEO professionals going after links.

They are often seen as the most powerful way to rank a site.

But not every link is created equal.

Over time, the search engines have adapted their algorithms to account for links in different ways, refining how links are used to determine a webpage’s suitability as an answer to a search query.

In this post, you will learn what makes a high-quality link, where to find opportunities to build them, and how to evaluate whether a link is worth the budget and effort to get it.

How Do Search Engines Use Links?

Search engines use links pointing to a webpage to both discover its existence and also determine information about it.

Google mentions in its help documentation,

“Google interprets a link from page A to page B as a vote by page A for page B. Votes cast by pages that are themselves ‘important’ weigh more heavily and help to make other pages ‘important.’”

Bing states in its Webmaster Help and How-To guide,

“Bing prefers to see links built organically. This essentially means the links are built by people linking to your content because they find value in your content. This is an important signal to a search engine because it is seen as a vote of confidence in the content.”

What Is Valuable About a Link?

We know that Google uses links like votes.


A link from a well-regarded website will have more clout than one from a lesser-regarded website.

Authority

This is often discussed as “authority.”

Many SEO tools will try to assign an authority metric to a website or webpage in an attempt to quantify the value of a link from them.

An authoritative webpage linking to your webpage can be a strong signal that it is itself an authoritative source.

In essence, an authoritative website is one that is considered by the search engines to be a reputable source of information about a subject – an authority in it.

Google will, in part, look at that site’s backlinks to determine its expertise and trustworthiness in a subject.

For instance, suppose a website that is considered an expert in interior design links to a lesser-known website about interior design.

The expert site is confident enough in the content of the lesser-known site that it’s willing to send its visitors there.


That’s a good, impartial way for the search engines to determine the reputation of a site and its authority on a subject.

Relevance

Authority isn’t everything, however.

Think of it like this… you’re going on holiday to a city you’ve never visited.

Who would you rather ask for restaurant recommendations: your friend who lives in the city, or a tour guide for a city 5 hours away from it?

Your friend who lives in the city is likely a more relevant source of information on the restaurants in the area than the tour guide who doesn’t serve that area.

You might perceive a tour guide to be more knowledgeable about good restaurants, but not when the city isn’t their area of expertise.

In a similar way, the search engines will understand the value of a website in your industry linking to your webpage.

A website that reviews restaurants will be considered a more relevant source of information about restaurants than a local community group that had an outing to a restaurant.


Both sites may have a page talking about the “best sushi restaurant in New York,” but the restaurant review website will be more relevant in helping the search engines determine what to serve as an answer for “sushi restaurant in New York.”

Authority & Relevance

The best source of a link is a website that is both considered authoritative and relevant to your website.

What Makes a Link Low-Quality?

If we think of a quality link as one that is both relevant and authoritative, then it makes sense that the lowest quality link is one that is both irrelevant and not authoritative.

These sorts of links are usually easy to come by and can be self-created or requested.

For instance, a website that allows anyone to submit a link is unlikely to have the highly curated content that would make it authoritative.

The fact that anyone can add a link to the site means it isn’t likely to be particularly relevant to one industry or niche.

Links to your site from a website like this will be low-quality and generally useless.

At best, these links might have a small positive impact on your search rankings; at worst, they could be perceived as part of a manipulative linking scheme.


Google has strict guidelines on what is considered a manipulative link.

You might want to familiarize yourself with Bing and Yandex’s definitions, too.

A Word About Paid Links

We all know by now that paying for links to aid rankings is against the guidelines of most big search engines.

In a best-case scenario, the link won’t be identified as having been paid for and you won’t see a penalty from it.

However, if Google detects that you’ve acquired links from websites that sell links, you may find the pages they link to penalized.

There are legitimate reasons why links might be placed on websites for a fee.

It’s common practice to utilize banner advertising and affiliate marketing on the internet, for example.

In these instances, Google recommends that webmasters declare the links to be sponsored using the rel=”sponsored” attribute.


This indicates to Googlebot that the link has been paid for and should not be used for calculating PageRank.
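For illustration, a declared paid link looks like this in HTML (a minimal sketch; the URL and anchor text are placeholders):

<a href="https://example.com/partner" rel="sponsored">Our partner</a>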

These sorts of links have their own value for marketing and should not be discounted simply because they will not necessarily aid in search rankings.

A Word About NoFollow Links

Before Google introduced the use of the rel=”sponsored” attribute, it and other search engines were using the rel=”nofollow” attribute.

Putting a rel=”nofollow” attribute into the HTML for a link shows the search bots that they shouldn’t go to the destination of that link.

This is used by publishers to stop the search engines from visiting the page and ascribing any benefit of the link.
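As a minimal sketch (placeholder URL and anchor text), a publisher would mark such a link like this:

<a href="https://example.com/their-page" rel="nofollow">Their page</a>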

So, if a high-quality page links to your webpage with a link containing a rel=”nofollow” attribute, you won’t see any ranking benefit from that link.

Google recently announced that this attribute is treated as a hint, which means it might be ignored.

On the whole, this essentially makes a “nofollow” link useless for SEO link-building purposes as link equity will not pass through the link.


However, if people are following the link and discovering your webpage, I would argue it’s not useless at all!

What Do High-Quality Links Look Like?

Low-quality links are usually those that are either:

  • Irrelevant in helping the search engines determine your site’s authority on a subject.
  • Actually harmful.

I’m not addressing link penalties here, or even the sorts of link-building practices that will land you in hot water. For more information on that, see Chuck Price’s article on manual actions.

The low-quality links we’re talking about here are ones that you may well be going after but aren’t benefiting your site.

High-quality links are the Holy Grail of link-building.

They’re the links you show off in your “Team Wins” Slack channel and on Twitter.

They are hard to earn.

I also want to show you some “medium-quality” links.

These are the types of links that are good to get but perhaps won’t move the needle as much as you would like.


They form a part of a healthy backlink profile but aren’t worth your whole content marketing budget to land.

Low Quality: Low Authority/Low Relevance

The sorts of low-authority, low-relevance links you are likely to gain are ones that require no real effort to get.

For example, you might simply source the links and ask for them or, in some cases, add the link yourself.

Open Directories

These directory sites are very obviously low quality when you visit them. Typically they only offer one service – advertise your website here!

You do not need to pay for a link and everyone and their dog has taken advantage of this.

There will be links from websites in all sorts of industries with very little rhyme or reason as to why this directory exists.

Do note, however, that there are reputable local business directories that can help with verifying your business’s physical address and contact details—Yelp, for instance.


These listings are useful for local citations but are unlikely to really aid in boosting your site’s rankings.

The difference between reputable local directories and generic open directories is quite obvious when you visit them.

Comment Links

Forums and blogs can be very relevant to a particular industry.

However, due to the ease with which anyone can add content to a forum page or blog comments, any links in that user-generated content are usually discounted by the search engines.

In recent SEO history, blog and forum comments were easy targets for squeezing in a link to a site.

The search engines became wise to this and started devaluing those links.

Alongside the rel=”sponsored” attribute, Google released rel=”ugc”.


This is a way for webmasters to indicate that the links within their forums are user-generated.
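For example (again with placeholder values), a comment link marked as user-generated would look like this:

<a href="https://example.com/commenter-site" rel="ugc">Commenter's site</a>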

Low Quality: Low Effort & NoFollow

Social Media Posts

Most large social media sites apply “nofollow” tags to the links posted on them.

However, Google did recently say that “nofollow” tags would be taken as hints rather than concretely respected.

Despite this, social media sites are not the place to go looking for backlinks to help your rankings.

Although social media sites themselves are often authoritative, they are full of uncurated content.

Businesses can set up their own social media pages with links back to their websites. They can talk about their sites in their posts.

These links are not unbiased. Due to this, they are largely ignored by search engines.


Medium Quality: Low Authority but High Relevancy

Small Industry Blogs

Most industries have a proliferation of blogs: sites run by companies or individuals who want to share their knowledge and build their profile.

There are some highly relevant, niche blogs that might not be well-known enough to attract the backlinks that would boost their own authority metrics.

They are, however, full of decent content and very relevant to the website you are trying to grow.

Small industry blog writers are often less overrun with requests to share content and add links than the well-known ones.

They are, however, keen to write and build community.

A smaller blog featuring your site is still a good reinforcement of your relevance to your industry.

This can help enormously with showing your relevance to search topics associated with that industry.


Small Industry Brands

There will be some staple brands in your industry that aren’t necessarily competitors but are tangentially related.

Think of paper manufacturers to your office supply store, for example.

A link from the paper manufacturer showing your store as their distributor can help show your authority in the industry.

Medium Quality: Medium Authority & Medium/Low Relevancy

Local News Sites

Your local news site may report on anything to do with your community, or they might be more discerning.

Regardless, doing something considered locally newsworthy can get you featured far more easily than on a national news website.

These are especially good links to get if you are trying to boost your local SEO efforts.


A link from a website known as a source of reliable local information could help the search engines to see your relevance to that physical area.

High Quality: High Authority but Medium/Low Relevancy

Some sites are extremely authoritative and hard to get a link from. Links from them tend to be beneficial to your SEO efforts.

These sorts of links might not be highly relevant, however.

Although you will see a benefit to your search visibility, it may not help solidify your relevance for particular topics.

National News Sites

There are some national and international newspapers with extremely high authority websites. A link from these sites is worth the effort.

However, journalists are inundated with hundreds of press releases and article ideas every day.

It can be incredibly difficult to get featured, especially with a link.


The best way to get coverage in a national newspaper is to do something newsworthy.

Bringing it to the attention of the site’s journalists might help you get it covered, hopefully with a link back to your site.

High Quality: Medium Authority but High Relevancy

Big Industry Blogs

That website that everyone in the industry goes to for their news; your friends and family may not have heard of it, but your colleagues definitely have.

It’s likely to be a medium-authority site according to authority metrics, but it’s a leader in your industry.

It’s also very relevant to the website you’re promoting.

A link from a site like this will go a long way in showing your site’s expertise.

High Quality: High Authority & High Relevancy

Big Industry Brands

These are household names; the companies everyone in your industry (and possibly their families) know of.


These sites are likely to be medium to high authority according to the tools, but they are definite leaders in your industry.

If you are linked to as a supplier or distributor, or even just mentioned in a favorable review, you are likely to see the ranking benefit.

Conclusion

A wide and varied link profile is good for SEO.

If you are actively looking to increase links to your site in an organic manner, it’s imperative you know how to generate high-quality links.

Don’t waste your time going for easy links on unrelated and low-quality sites.

Instead, focus your energy and budget on creating truly newsworthy content and bringing it to the attention of authoritative and relevant publishers.

Using Python + Streamlit To Find Striking Distance Keyword Opportunities


Python is an excellent tool to automate repetitive tasks as well as gain additional insights into data.

In this article, you’ll learn how to build a tool that checks which keywords are close to ranking in positions one to three and advises whether there is an opportunity to naturally work those keywords into the page.

It’s perfect for Python beginners and pros alike and is a great introduction to using Python for SEO.

If you’d just like to get stuck in, there’s a handy Streamlit app available for the code. It’s simple to use and requires no coding experience.

There’s also a Google Colaboratory Sheet if you’d like to poke around with the code. If you can crawl a website, you can use this script!

Here’s an example of what we’ll be making today:

An Excel sheet documenting on-page keyword opportunities generated with Python (screenshot from Microsoft Excel, October 2021).

These keywords are found in the page title and H1, but not in the copy. Adding these keywords naturally to the existing copy would be an easy way to increase relevancy for these keywords.

By taking the hint from search engines and naturally including any missing keywords a site already ranks for, we give search engines more confidence to rank those keywords higher in the SERPs.


This report can be created manually, but it’s pretty time-consuming.

So, we’re going to automate the process using a Python SEO script.

Preview Of The Output

This is a sample of what the final output will look like after running the report:

Excel sheet showing an example of keywords that can be optimized using the striking distance report (screenshot from Microsoft Excel, October 2021).

The final output takes the top five opportunities by search volume for each page and neatly lays each one horizontally along with the estimated search volume.

It also shows the total search volume of all keywords a page has within striking distance, as well as the total number of keywords within reach.

The top five keywords by search volume are then checked to see if they are found in the title, H1, or copy, then flagged TRUE or FALSE.

This is great for finding quick wins! Just add the missing keyword naturally into the page copy, title, or H1.

Getting Started

The setup is fairly straightforward. We just need a crawl of the site (ideally with a custom extraction for the copy you’d like to check), and an exported file of all keywords a site ranks for.

This post will walk you through the setup, the code, and will link to a Google Colaboratory sheet if you just want to get stuck in without coding it yourself.


To get started, you will need a crawl of the site and an export of all the keywords it ranks for, as described below.

We’ve named this the Striking Distance Report as it flags keywords that are easily within striking distance.

(We have defined striking distance as keywords that rank in positions four to 20, but have made this a configurable option in case you would like to define your own parameters.)

Striking Distance SEO Report: Getting Started

1. Crawl The Target Website

  • Set a custom extractor for the page copy (optional, but recommended).
  • Filter out pagination pages from the crawl.

2. Export All Keywords The Site Ranks For Using Your Favorite Provider

  • Filter keywords that trigger as a site link.
  • Remove keywords that trigger as an image.
  • Filter branded keywords.
  • Use both exports to create an actionable Striking Distance report from the keyword and crawl data with Python.

Crawling The Site

I’ve opted to use Screaming Frog to get the initial crawl. Any crawler will work, so long as the CSV export uses the same column names or they’re renamed to match.

The script expects to find the following columns in the crawl CSV export:

"Address", "Title 1", "H1-1", "Copy 1", "Indexability"

Crawl Settings

The first thing to do is to head over to the main configuration settings within Screaming Frog:

Configuration > Spider > Crawl

The main settings to use are:

Crawl Internal Links, Canonicals, and the Pagination (Rel Next/Prev) setting.


(The script will work with everything else selected, but the crawl will take longer to complete!)

Recommended Screaming Frog crawl settings (screenshot from Screaming Frog, October 2021).

Next, it’s on to the Extraction tab.

Configuration > Spider > Extraction

Recommended Screaming Frog extraction settings (screenshot from Screaming Frog, October 2021).

At a bare minimum, we need to extract the page title, H1, and calculate whether the page is indexable as shown below.

Indexability is useful because it’s an easy way for the script to identify which URLs to drop in one go, leaving only keywords that are eligible to rank in the SERPs.

If the script cannot find the indexability column, it’ll still work as normal but won’t differentiate between pages that can and cannot rank.
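A guard along those lines (a sketch of the idea, not the script’s verbatim code) could be added to the processing step later on:

# filter to indexable rows only when the column exists; otherwise keep every row
if "Indexability" in df_crawl.columns:
    df_crawl = df_crawl[~df_crawl["Indexability"].isin(["Non-Indexable"])]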

Setting A Custom Extractor For Page Copy

In order to check whether a keyword is found within the page copy, we need to set a custom extractor in Screaming Frog.


Configuration > Custom > Extraction

Name the extractor “Copy” as seen below.

Screaming Frog custom extraction showing the default options for extracting the page copy (screenshot from Screaming Frog, October 2021).

Important: The script expects the extractor to be named “Copy” as above, so please double check!

Lastly, make sure Extract Text is selected to export the copy as text, rather than HTML.


There are many guides on using custom extractors online if you need help setting one up, so I won’t go over it again here.

Once the extraction has been set, it’s time to crawl the site and export the HTML file in CSV format.

Exporting The CSV File

Exporting the CSV file is as easy as changing the drop-down menu displayed underneath Internal to HTML and pressing the Export button.

Internal > HTML > Export

Screaming Frog export internal HTML settings (screenshot from Screaming Frog, October 2021).

After clicking Export, it’s important to make sure the type is set to CSV format.

The export screen should look like the below:

Screaming Frog internal HTML CSV export settings (screenshot from Screaming Frog, October 2021).

Tip 1: Filtering Out Pagination Pages

I recommend filtering out pagination pages from your crawl either by selecting Respect Next/Prev under the Advanced settings (or just deleting them from the CSV file, if you prefer).

Screaming Frog settings to respect rel next/prev (screenshot from Screaming Frog, October 2021).

Tip 2: Saving The Crawl Settings

Once you have set the crawl up, it’s worth just saving the crawl settings (which will also remember the custom extraction).

This will save a lot of time if you want to use the script again in the future.


File > Configuration > Save As

How to save a configuration file in Screaming Frog (screenshot from Screaming Frog, October 2021).

Exporting Keywords

Once we have the crawl file, the next step is to load your favorite keyword research tool and export all of the keywords a site ranks for.

The goal here is to export all the keywords a site ranks for, filtering out branded keywords and any which triggered as a sitelink or image.

For this example, I’m using the Organic Keyword Report in Ahrefs, but it will work just as well with Semrush if that’s your preferred tool.

In Ahrefs, enter the domain you’d like to check in Site Explorer and choose Organic Keywords.

Ahrefs Site Explorer settings (screenshot from Ahrefs.com, October 2021).

Site Explorer > Organic Keywords

How to export the organic keywords a site ranks for (screenshot from Ahrefs.com, October 2021).

This will bring up all keywords the site is ranking for.

Filtering Out Sitelinks And Image Links

The next step is to filter out any keywords triggered as a sitelink or an image pack.

The reason we need to filter out sitelinks is that they have no influence on the parent URL ranking. This is because only the parent page technically ranks for the keyword, not the sitelink URLs displayed under it.

Filtering out sitelinks will ensure that we are optimizing the correct page.

Pages ranking for sitelink keywords (screenshot from Ahrefs.com, October 2021).

Here’s how to do it in Ahrefs.

How to exclude images and sitelinks from a keyword export (screenshot from Ahrefs.com, October 2021).

Lastly, I recommend filtering out any branded keywords. You can do this by filtering the CSV output directly, or by pre-filtering in the keyword tool of your choice before the export.

Finally, when exporting make sure to choose Full Export and the UTF-8 format as shown below.

How to export keywords in UTF-8 format as a CSV file (screenshot from Ahrefs.com, October 2021).

By default, the script works with Ahrefs (v1/v2) and Semrush keyword exports. It can work with any keyword CSV file as long as the column names the script expects are present.

Processing

The following instructions pertain to running a Google Colaboratory sheet to execute the code.

There is now a simpler option for those that prefer it in the form of a Streamlit app. Simply follow the instructions provided to upload your crawl and keyword file.

Now that we have our exported files, all that’s left to be done is to upload them to the Google Colaboratory sheet for processing.

Select Runtime > Run all from the top navigation to run all cells in the sheet.

How to run the striking distance Python script from Google Colaboratory (screenshot from Colab.research.google.com, October 2021).

The script will prompt you to upload the keyword CSV from Ahrefs or Semrush first and the crawl file afterward.

How to upload the CSV files to Google Colaboratory (screenshot from Colab.research.google.com, October 2021).

That’s it! The script will automatically download an actionable CSV file you can use to optimize your site.

The Striking Distance final output (screenshot from Microsoft Excel, October 2021).

Once you’re familiar with the whole process, using the script is really straightforward.

Code Breakdown And Explanation

If you’re learning Python for SEO and interested in what the code is doing to produce the report, stick around for the code walkthrough!

Install The Libraries

Let’s install pandas to get the ball rolling.

!pip install pandas

Import The Modules

Next, we need to import the required modules.

import pandas as pd
from pandas import DataFrame, Series
from typing import Union
from google.colab import files

Set The Variables

Now it’s time to set the variables.


The script considers any keywords between positions four and 20 as within striking distance.


Changing the variables here will let you define your own range if desired. It’s worth experimenting with the settings to get the best possible output for your needs.

# set all variables here
min_volume = 10  # set the minimum search volume
min_position = 4  # set the minimum position  / default = 4
max_position = 20 # set the maximum position  / default = 20
drop_all_true = True  # If all checks (h1/title/copy) are true, remove the recommendation (Nothing to do)
pagination_filters = "filterby|page|p="  # filter patterns used to detect and drop paginated pages
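The walkthrough below doesn’t show where pagination_filters is applied, but a typical use would be a case-insensitive regex match against the crawl’s URL column (a sketch, assuming the Screaming Frog “Address” column):

# drop paginated pages whose URL matches any of the filter patterns
df_crawl = df_crawl[~df_crawl["Address"].str.contains(pagination_filters, case=False, na=False)]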

Upload The Keyword Export CSV File

The next step is to read in the list of keywords from the CSV file.

It is set up to accept an Ahrefs report (V1 and V2) as well as a Semrush export.

This code reads in the CSV file into a Pandas DataFrame.

upload = files.upload()
upload = list(upload.keys())[0]
df_keywords = pd.read_csv(
    (upload),
    on_bad_lines="skip",  # replaces error_bad_lines=False, which was removed in pandas 2.0
    low_memory=False,
    encoding="utf8",
    dtype={
        "URL": "str",
        "Keyword": "str",
        "Volume": "str",
        "Position": int,
        "Current URL": "str",
        "Search Volume": int,
    },
)
print("Uploaded Keyword CSV File Successfully!")

If everything went to plan, you’ll see a preview of the DataFrame created from the keyword CSV export. 

DataFrame showing successful upload of the keyword export file (screenshot from Colab.research.google.com, October 2021).

Upload The Crawl Export CSV File

Once the keywords have been imported, it’s time to upload the crawl file.

This fairly simple piece of code reads in the crawl with some error handling and creates a Pandas DataFrame named df_crawl.

upload = files.upload()
upload = list(upload.keys())[0]
df_crawl = pd.read_csv(
    (upload),
    on_bad_lines="skip",  # replaces error_bad_lines=False, which was removed in pandas 2.0
    low_memory=False,
    encoding="utf8",
    dtype="str",
)
print("Uploaded Crawl Dataframe Successfully!")

Once the CSV file has finished uploading, you’ll see a preview of the DataFrame.

DataFrame of the crawl file uploaded successfully (screenshot from Colab.research.google.com, October 2021).

Clean And Standardize The Keyword Data

The next step is to rename the column names to ensure standardization between the most common types of file exports.

Essentially, we’re getting the keyword DataFrame into a good state and filtering using cutoffs defined by the variables.

df_keywords.rename(
    columns={
        "Current position": "Position",
        "Current URL": "URL",
        "Search Volume": "Volume",
    },
    inplace=True,
)

# keep only the following columns from the keyword dataframe
cols = "URL", "Keyword", "Volume", "Position"
df_keywords = df_keywords.reindex(columns=cols)

try:
    # clean the data. (v1 of the ahrefs keyword export combines strings and ints in the volume column)
    df_keywords["Volume"] = df_keywords["Volume"].str.replace("0-10", "0")
except AttributeError:
    pass

# clean the keyword data
df_keywords = df_keywords[df_keywords["URL"].notna()]  # remove any missing values
df_keywords = df_keywords[df_keywords["Volume"].notna()]  # remove any missing values
df_keywords = df_keywords.astype({"Volume": int})  # change data type to int
df_keywords = df_keywords.sort_values(by="Volume", ascending=False)  # sort by highest vol to keep the top opportunity

# make new dataframe to merge search volume back in later
df_keyword_vol = df_keywords[["Keyword", "Volume"]]

# drop rows if minimum search volume doesn't match specified criteria
df_keywords.loc[df_keywords["Volume"] < min_volume, "Volume_Too_Low"] = "drop"
df_keywords = df_keywords[~df_keywords["Volume_Too_Low"].isin(["drop"])]

# drop rows ranking better than the minimum position
# note: strict comparisons keep positions 4-20 inclusive, matching the stated striking distance range
df_keywords.loc[df_keywords["Position"] < min_position, "Position_Too_High"] = "drop"
df_keywords = df_keywords[~df_keywords["Position_Too_High"].isin(["drop"])]
# drop rows ranking worse than the maximum position
df_keywords.loc[df_keywords["Position"] > max_position, "Position_Too_Low"] = "drop"
df_keywords = df_keywords[~df_keywords["Position_Too_Low"].isin(["drop"])]

Clean And Standardize The Crawl Data

Next, we need to clean and standardize the crawl data.

Essentially, we use reindex to keep only the “Address,” “Indexability,” “Title 1,” “H1-1,” and “Copy 1” columns, discarding the rest.

We use the handy “Indexability” column to only keep rows that are indexable. This will drop canonicalized URLs, redirects, and so on. I recommend enabling this option in the crawl.

Lastly, we standardize the column names so they’re a little nicer to work with.

# keep only the following columns from the crawl dataframe
cols = "Address", "Indexability", "Title 1", "H1-1", "Copy 1"
df_crawl = df_crawl.reindex(columns=cols)
# drop non-indexable rows
df_crawl = df_crawl[~df_crawl["Indexability"].isin(["Non-Indexable"])]
# standardise the column names
df_crawl.rename(columns={"Address": "URL", "Title 1": "Title", "H1-1": "H1", "Copy 1": "Copy"}, inplace=True)
df_crawl.head()

Group The Keywords

As we approach the final output, it’s necessary to group our keywords together to calculate the total opportunity for each page.

Here, we’re calculating how many keywords are within striking distance for each page, along with the combined search volume.

# groups the URLs (remove the dupes and combines stats)
# make a copy of the keywords dataframe for grouping - this ensures stats can be merged back in later from the OG df
df_keywords_group = df_keywords.copy()
df_keywords_group["KWs in Striking Dist."] = 1  # used to count the number of keywords in striking distance
df_keywords_group = (
    df_keywords_group.groupby("URL")
    .agg({"Volume": "sum", "KWs in Striking Dist.": "count"})
    .reset_index()
)
df_keywords_group.head()
DataFrame showing how many keywords were found within striking distance (screenshot from Colab.research.google.com, October 2021).

Once complete, you’ll see a preview of the DataFrame.

Display Keywords In Adjacent Rows

We use the grouped data as the basis for the final output. We use Pandas.unstack to reshape the DataFrame to display the keywords in the style of a GrepWords export.

DataFrame showing a grepwords-type view of keywords laid out horizontally (screenshot from Colab.research.google.com, October 2021).
# create a new df, combine the merged data with the original data. display in adjacent rows ala grepwords
df_merged_all_kws = df_keywords_group.merge(
    df_keywords.groupby("URL")["Keyword"]
    .apply(lambda x: x.reset_index(drop=True))
    .unstack()
    .reset_index()
)

# sort by biggest opportunity
df_merged_all_kws = df_merged_all_kws.sort_values(
    by="KWs in Striking Dist.", ascending=False
)

# reindex the columns to keep just the top five keywords
cols = "URL", "Volume", "KWs in Striking Dist.", 0, 1, 2, 3, 4
df_merged_all_kws = df_merged_all_kws.reindex(columns=cols)

# create union and rename the columns
df_striking: Union[Series, DataFrame, None] = df_merged_all_kws.rename(
    columns={
        "Volume": "Striking Dist. Vol",
        0: "KW1",
        1: "KW2",
        2: "KW3",
        3: "KW4",
        4: "KW5",
    }
)

# merges striking distance df with crawl df to merge in the title, h1 and category description
df_striking = pd.merge(df_striking, df_crawl, on="URL", how="inner")

Set The Final Column Order And Insert Placeholder Columns

Lastly, we set the final column order and merge in the original keyword data.


There are a lot of columns to sort and create!

# set the final column order and merge the keyword data in

cols = [
    "URL",
    "Title",
    "H1",
    "Copy",
    "Striking Dist. Vol",
    "KWs in Striking Dist.",
    "KW1",
    "KW1 Vol",
    "KW1 in Title",
    "KW1 in H1",
    "KW1 in Copy",
    "KW2",
    "KW2 Vol",
    "KW2 in Title",
    "KW2 in H1",
    "KW2 in Copy",
    "KW3",
    "KW3 Vol",
    "KW3 in Title",
    "KW3 in H1",
    "KW3 in Copy",
    "KW4",
    "KW4 Vol",
    "KW4 in Title",
    "KW4 in H1",
    "KW4 in Copy",
    "KW5",
    "KW5 Vol",
    "KW5 in Title",
    "KW5 in H1",
    "KW5 in Copy",
]

# re-index the columns to place them in a logical order + inserts new blank columns for kw checks.
df_striking = df_striking.reindex(columns=cols)

Merge In The Keyword Data For Each Column

This code merges the keyword volume data back into the DataFrame. It’s more or less the equivalent of an Excel VLOOKUP function.

# merge in keyword data for each keyword column (KW1 - KW5)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW1", right_on="Keyword", how="left")
df_striking['KW1 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW2", right_on="Keyword", how="left")
df_striking['KW2 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW3", right_on="Keyword", how="left")
df_striking['KW3 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW4", right_on="Keyword", how="left")
df_striking['KW4 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW5", right_on="Keyword", how="left")
df_striking['KW5 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
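Since the five merges above differ only by keyword number, they can be collapsed into an equivalent loop (a refactoring sketch with identical behavior under the same column names):

# merge in keyword volume data for KW1 - KW5 in a single loop
for i in range(1, 6):
    df_striking = pd.merge(df_striking, df_keyword_vol, left_on=f"KW{i}", right_on="Keyword", how="left")
    df_striking[f"KW{i} Vol"] = df_striking["Volume"]
    df_striking.drop(["Keyword", "Volume"], axis=1, inplace=True)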

Clean The Data Some More

The data requires additional cleaning to populate empty values (NaNs) as empty strings. This improves the readability of the final output by creating blank cells instead of cells populated with NaN string values.

Next, we convert the columns to lowercase so that they match when checking whether a target keyword is featured in a specific column.

# replace nan values with empty strings
df_striking = df_striking.fillna("")
# convert the title, h1 and copy to lower case so keywords can be matched against them
df_striking["Title"] = df_striking["Title"].str.lower()
df_striking["H1"] = df_striking["H1"].str.lower()
df_striking["Copy"] = df_striking["Copy"].str.lower()

Check Whether The Keyword Appears In The Title/H1/Copy and Return True Or False

This code checks if the target keyword is found in the page title/H1 or copy.

It’ll flag true or false depending on whether a keyword was found within the on-page elements.

df_striking["KW1 in Title"] = df_striking.apply(lambda row: row["KW1"] in row["Title"], axis=1)
df_striking["KW1 in H1"] = df_striking.apply(lambda row: row["KW1"] in row["H1"], axis=1)
df_striking["KW1 in Copy"] = df_striking.apply(lambda row: row["KW1"] in row["Copy"], axis=1)
df_striking["KW2 in Title"] = df_striking.apply(lambda row: row["KW2"] in row["Title"], axis=1)
df_striking["KW2 in H1"] = df_striking.apply(lambda row: row["KW2"] in row["H1"], axis=1)
df_striking["KW2 in Copy"] = df_striking.apply(lambda row: row["KW2"] in row["Copy"], axis=1)
df_striking["KW3 in Title"] = df_striking.apply(lambda row: row["KW3"] in row["Title"], axis=1)
df_striking["KW3 in H1"] = df_striking.apply(lambda row: row["KW3"] in row["H1"], axis=1)
df_striking["KW3 in Copy"] = df_striking.apply(lambda row: row["KW3"] in row["Copy"], axis=1)
df_striking["KW4 in Title"] = df_striking.apply(lambda row: row["KW4"] in row["Title"], axis=1)
df_striking["KW4 in H1"] = df_striking.apply(lambda row: row["KW4"] in row["H1"], axis=1)
df_striking["KW4 in Copy"] = df_striking.apply(lambda row: row["KW4"] in row["Copy"], axis=1)
df_striking["KW5 in Title"] = df_striking.apply(lambda row: row["KW5"] in row["Title"], axis=1)
df_striking["KW5 in H1"] = df_striking.apply(lambda row: row["KW5"] in row["H1"], axis=1)
df_striking["KW5 in Copy"] = df_striking.apply(lambda row: row["KW5"] in row["Copy"], axis=1)

Delete True/False Values If There Is No Keyword

This will delete the true/false values wherever there is no adjacent keyword.

# delete true / false values if there is no keyword
df_striking.loc[df_striking["KW1"] == "", ["KW1 in Title", "KW1 in H1", "KW1 in Copy"]] = ""
df_striking.loc[df_striking["KW2"] == "", ["KW2 in Title", "KW2 in H1", "KW2 in Copy"]] = ""
df_striking.loc[df_striking["KW3"] == "", ["KW3 in Title", "KW3 in H1", "KW3 in Copy"]] = ""
df_striking.loc[df_striking["KW4"] == "", ["KW4 in Title", "KW4 in H1", "KW4 in Copy"]] = ""
df_striking.loc[df_striking["KW5"] == "", ["KW5 in Title", "KW5 in H1", "KW5 in Copy"]] = ""
df_striking.head()

Drop Rows If All Values == True

This configurable option is really useful for reducing the amount of QA time required for the final output by dropping the keyword opportunity from the final output if it is found in all three columns.

def true_dropper(col1, col2, col3):
    drop = df_striking.drop(
        df_striking[
            (df_striking[col1] == True)
            & (df_striking[col2] == True)
            & (df_striking[col3] == True)
        ].index
    )
    return drop

if drop_all_true:
    df_striking = true_dropper("KW1 in Title", "KW1 in H1", "KW1 in Copy")
    df_striking = true_dropper("KW2 in Title", "KW2 in H1", "KW2 in Copy")
    df_striking = true_dropper("KW3 in Title", "KW3 in H1", "KW3 in Copy")
    df_striking = true_dropper("KW4 in Title", "KW4 in H1", "KW4 in Copy")
    df_striking = true_dropper("KW5 in Title", "KW5 in H1", "KW5 in Copy")

Download The CSV File

The last step is to download the CSV file and start the optimization process.

df_striking.to_csv('Keywords in Striking Distance.csv', index=False)
files.download("Keywords in Striking Distance.csv")

Conclusion

If you are looking for quick wins for any website, the striking distance report is a really easy way to find them.

Don’t let the number of steps fool you. It’s not as complex as it seems. It’s as simple as uploading a crawl and keyword export to the supplied Google Colab sheet or using the Streamlit app.

The results are definitely worth it!



Featured Image: aurielaki/Shutterstock

