8 of the Most Important HTML Tags for SEO

Are you utilizing HTML tags in your SEO process?

HTML tags are code elements within the back-end of all web pages, and certain HTML tags also provide search engines with key information for SERP display. Essentially, these elements highlight the parts of your content that are relevant for search, and they describe those parts for search crawlers.

That said, you don’t have to use all of these extra code tools. Search engines are getting smarter, and there’s much less use for HTML tags these days than there was in times past. But a few tags are still holding on – and some have even gained in SEO value.

In this post, we’ll go over some of the key SEO HTML tags that still make sense in 2020.

1. Title tag

Title tags are used to set up the clickable headlines that you see in the SERP.

Generally speaking, it’s up to Google to create a SERP headline for your page, and it could use any of the section headings from within the page – or it may even create a new headline altogether.

But the first place Google is going to check for headline ideas is the title tag, and where a title tag is present, Google will very likely make it the main headline in the relevant listing. As such, optimizing the title tag gives you some control over the way your page is represented in the SERP.

Best practices

On the one hand, your title should contain the keywords that will help it appear in search results. On the other, your title should be attractive enough for users to actually click through, so a balance is required between search optimization and user experience:

  • Watch the length – Google will only display the first 50-60 characters of your title, and cut the rest. It’s not a problem to have a title that’s longer than 60 characters, so long as you fit the important information before the cut-off point.
  • Include a reasonable number of keywords – Keyword stuffing is likely to get penalized, but one or two keywords will be fine. Just make sure that your title forms a coherent sentence.
  • Write good copy – Don’t be salesy, and don’t be generic. Create descriptive titles that highlight the value of your content, and set up proper expectations, so that users aren’t let down when they visit the page.
  • Add your brand name – If you have a recognizable brand that’s likely to increase your click-through, then feel free to add it to the title as well.

HTML code

Below is an illustrative snippet, modeled on a news article about coronavirus statistics, showing a properly set up title tag sitting right on top of the meta description tag – which is what we’re going to discuss next:
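
<!-- illustrative example, not actual markup from any real article -->
<head>
  <title>Coronavirus statistics: Tracking the global outbreak - Example News</title>
  <meta name="description" content="Charts and maps tracking confirmed coronavirus cases, deaths, and vaccinations around the world.">
</head>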

2. Meta description tag

Meta description tags are used to set up the descriptions within search result snippets.

Google doesn’t always use meta description tags to create these snippets, but if the meta tag is there, then there’s a good chance that your meta description will make it onto the SERP.

Keep in mind, however, that sometimes Google will ignore the meta description tag, and instead quote a bit of copy from the page. This normally happens when the quote is a better match for a particular query than the meta description would have been.

Basically, Google will choose the best option for increasing your chances of click-through.

Best practices

The rules for meta descriptions are not overly strict – after all, if you fail to write a good one, or fail to write one altogether, Google will write one for you.

  • Watch the length – Same as with headlines, Google will keep the first 150-160 characters of your meta description, and cut the rest. Ensure that the important aspects are included early on to maximize searcher interest.
  • Write good copy – While the meta description is not used for ranking, it’s still important to optimize it for search intent. The more relevant your description is to the respective query, the more likely a user will visit your page.
  • Consider skipping the meta description – It can be difficult to create good copy for particularly long-tail keywords, or for pages that target a variety of keywords. In those cases, consider leaving the meta description out – Google will scrape your page and populate your snippet with a few relevant quotes either way.

HTML code

Below is an illustrative snippet showing how, following the title tag, a meta description tag provides a brief summary of what the article is about:
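
<!-- illustrative example -->
<meta name="description" content="Charts and maps tracking confirmed coronavirus cases, deaths, and vaccinations around the world.">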

3. Heading (H1-H6) tags

Heading tags are used to structure your pages for both the reader and search engines.

It’s no secret that barely anyone reads through an article anymore – what we generally do instead is scan the article until we find a section we like, read that one section, and then bounce. And if the article isn’t split into sections, many will bounce right away, because it’s just too much. So, from a user perspective, headings are handy reading aids.

From the perspective of the search engine, however, heading tags form the core of the content, and help search crawler bots understand what the page is about. 

Best practices

The rules for headings are derived from the general copywriting practices – break your copy into bite-sized pieces and maintain consistent formatting:

  • Don’t use more than one H1 – The H1 heading stands apart from other headings because it’s treated by search engines as the title of the page. It’s not to be confused with the title tag – the title tag is displayed in search results, while the H1 tag is displayed on your website.
  • Maintain a shallow structure – There’s rarely a need to go below H3. Use H1 for the title, H2 for section headings, and H3 for subsections. Anything more tends to get confusing.
  • Form query-like headings – Treat each heading as an additional opportunity to rank in search. To this end, each heading should sound either like a query or an answer to a query – keywords included.
  • Be consistent with all headings – All of your headings should be written in such a way that if you were to remove all the text and keep only the headings, they would read like a list.

HTML code

Below is an illustrative snippet showing a properly set up H2 heading, followed by two paragraphs:
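
<!-- illustrative example -->
<h2>What do the case numbers tell us?</h2>
<p>Confirmed cases only capture infections that were picked up by testing, so the true numbers are likely higher.</p>
<p>Reported deaths are considered a more reliable indicator, although they lag behind infections by several weeks.</p>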

4. Image alt text

While the main goal of alt text is web accessibility, the SEO goal of the alt attribute is image indexing.

The key goal of image alt text is to help users understand the image when it cannot be viewed – say, by a visitor who is vision impaired, or when a loading problem prevents the image from rendering. In these cases, the alt text describes what’s in the image in place of the image itself.

From an SEO perspective, alt text is a big part of how images are indexed in Google search. So if there is a visual component to what you do – be it the images of your products, your work, your stock images, your art – then you should definitely consider using image alt texts.

Best practices

A prerequisite to adding alt text is finding all the images that lack it.

You can use a tool like WebSite Auditor to crawl your website and compile a list of images with missing alt text. 

Once you’ve created your list, apply these guidelines:

  • Be concise, but descriptive – Good alt text is about a line or two of text that would help a visually impaired person understand what’s pictured.
  • Don’t be too concise – One word, or even a few words, are probably not going to cut it – there would be no way to differentiate the image from other images. Think of all the possible properties displayed: object type, color, material, shape, finish, lighting, etc.
  • Don’t do keyword stuffing – There’s no place left where keyword stuffing still works, and alt text is no exception.

HTML code

Here’s an illustrative example of an alt text snippet for an image of disease cells:
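
<!-- illustrative example - the file name is a placeholder -->
<img src="coronavirus-cells.jpg" alt="Microscope image of round coronavirus cells with red spike proteins on a gray background">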

5. Schema markup

Schema markup is used to enhance regular SERP snippets with rich snippet features.

Schema.org hosts a collection of tags developed jointly by Google, Bing, Yahoo!, and Yandex, and the tags are used by webmasters to provide search engines with additional information about different types of pages. In turn, search engines use this information to enhance their SERP snippets with various rich features.

There is no certainty as to whether using Schema markup improves one’s chances of ranking – there’s no question, however, that the resulting snippets look much more attractive than regular snippets, and thus improve one’s standing in search.

Best practices

The only best practice is to visit schema.org and see whether they’ve got any tags that can be applied to your types of pages. There are hundreds, if not thousands, of tags, so there’s likely going to be an option that applies, and may help improve your website listings.

HTML code

Here’s an illustrative snippet of microdata specifying nutrition facts for a recipe. You can visit schema.org for a full list of items available for markup:
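
<!-- illustrative microdata sketch using schema.org's Recipe and NutritionInformation types -->
<div itemscope itemtype="https://schema.org/Recipe">
  <span itemprop="name">Banana bread</span>
  <div itemprop="nutrition" itemscope itemtype="https://schema.org/NutritionInformation">
    <span itemprop="calories">240 calories</span>
    <span itemprop="fatContent">9 grams fat</span>
  </div>
</div>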

6. HTML5 semantic tags

HTML5 elements are used for better descriptions of various page components.

Before the introduction of HTML5 elements, we mostly used div tags to split HTML code into separate components, and then we used classes and ids to further specify those components. Each webmaster specified the components in their own custom way, and as such, it ended up being a bit of a mess, and a challenge for search engines to understand what was what on each page.

With the introduction of semantic HTML5 elements, we’ve got a set of intuitive tags, each describing a separate page component. So, instead of tagging our content with a bunch of confusing divs, we now have a way of describing the components in an easy-to-understand, standardized way.

As you can imagine, search engines are very enthusiastic about semantic HTML5.

HTML code

Here are some of the handiest semantic HTML5 elements – use them to improve your communication with search engines (a skeleton example follows the list):

  • article – isolates a post from the rest of the code, makes it portable
  • section – isolates a group of posts within a blog or a group of headings within a post
  • aside – isolates supplementary content that is not part of the main content
  • header – isolates the top part of the document, article, section, may contain navigation
  • footer – isolates the bottom of the document, article, section, contains meta information
  • nav – isolates navigation menus, groups of navigational elements
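
<!-- illustrative page skeleton built from semantic HTML5 elements -->
<header>
  <nav><a href="/">Home</a> <a href="/blog/">Blog</a></nav>
</header>
<article>
  <h1>Post title</h1>
  <section>
    <h2>Section heading</h2>
    <p>Main content of the post.</p>
  </section>
  <aside>Related links and other supplementary content.</aside>
</article>
<footer>Copyright and other meta information.</footer>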

7. Meta robots tag

The robots meta tag is all about the rules of engagement between websites and search engines.

This is where website owners get to outline a set of rules for crawling and indexing their pages. Some of these rules are obligatory, while others are more like suggestions – not all crawlers will respect robots meta tags, but mainstream search engines often will. And if there is no meta robots tag, then the crawlers will do as they please.

Best practices

The meta robots tag should be placed in the head section of the page code, and it should specify which crawlers it addresses and which instructions apply:

  • Address robots by name – Use robots if your instructions are for all crawlers, but use specific names to address individual crawlers. Google’s standard web crawler, for example, is called Googlebot. Addressing individual robots is usually done to ban malicious crawlers from the page while allowing well-intentioned crawlers to carry on.
  • Match instructions to your goals – You’d normally want to use robots meta tags to keep search engines from indexing documents, internal search results, duplicate pages, staging areas, and whatever else you don’t want to show up in search.

HTML code

Below are some of the parameters most commonly used with robots meta tags. You can use any number of them in a single meta robots tag, separated by commas – a combined example follows the list:

  • noindex — page should not be indexed
  • nofollow — links on the page should not be followed
  • follow — links on the page should be followed, even if the page is not to be indexed
  • noimageindex — images on the page should not be indexed
  • noarchive — search results should not show a cached version of the page
  • unavailable_after — page should not be indexed beyond a certain date
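
For instance, here’s an illustrative sketch of a tag that asks all crawlers to skip indexing the page and following its links, plus one that addresses Googlebot by name:

<meta name="robots" content="noindex, nofollow">
<!-- addressing a specific crawler by name -->
<meta name="googlebot" content="noarchive">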

8. Canonical tag

The canonical tag spares you from the risks of duplicate content.

The gist of it is that any given page, through no fault of your own, can have several addresses. Sometimes these result from technical artifacts – like http/https variants and tracking tags – and other times they result from the sorting and customization options available in product catalogs.

It’s not a big problem, to be honest, except that all those addresses might be taxing on your crawl budget and your page authority, and they can also mess with your performance tracking. The solution is to use a canonical tag to tell search engines which of those page addresses is the main one.

Best practices

To avoid potential SEO complications, apply the canonical tag to the following pages:

  • Pages available via different URLs
  • Pages with very similar content
  • Dynamic pages that create their own URL parameters
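
HTML code

Here’s an illustrative example. Placed in the head section of a duplicate page, it tells search engines that the main version of the page lives at the canonical URL (example.com is a placeholder):

<link rel="canonical" href="https://www.example.com/main-page/">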

Final thoughts

These are my top HTML tags to still worry about in 2020, though I believe some of them are firmly on their way out. As noted, with search engines getting ever smarter, there’s less and less need for HTML tag optimization, because most things can now be deduced algorithmically. Also, most modern CMS systems automatically add these elements, at least in some capacity.

Still, I wouldn’t leave it entirely up to Google to interpret my content – it’s best to meet it halfway where you can.

Source: socialmediatoday.com

Using Python + Streamlit To Find Striking Distance Keyword Opportunities

Python is an excellent tool to automate repetitive tasks as well as gain additional insights into data.

In this article, you’ll learn how to build a tool that checks which keywords are close to ranking in positions one to three and advises whether there is an opportunity to naturally work those keywords into the page.

It’s perfect for Python beginners and pros alike and is a great introduction to using Python for SEO.

If you’d just like to get stuck in, there’s a handy Streamlit app available for the code. It’s simple to use and requires no coding experience.

There’s also a Google Colaboratory Sheet if you’d like to poke around with the code. If you can crawl a website, you can use this script!

Here’s an example of what we’ll be making today:

Screenshot from Microsoft Excel, October 2021: an Excel sheet documenting on-page keyword opportunities generated with Python.

These keywords are found in the page title and H1, but not in the copy. Adding these keywords naturally to the existing copy would be an easy way to increase relevancy for these keywords.

By taking the hint from search engines and naturally including any missing keywords a site already ranks for, we give search engines more confidence to rank those keywords higher in the SERPs.

This report can be created manually, but it’s pretty time-consuming.

So, we’re going to automate the process using a Python SEO script.

Preview Of The Output

This is a sample of what the final output will look like after running the report:

Screenshot from Microsoft Excel, October 2021: an example of keywords that can be optimized using the striking distance report.

The final output takes the top five opportunities by search volume for each page and neatly lays each one horizontally along with the estimated search volume.

It also shows the total search volume of all keywords a page has within striking distance, as well as the total number of keywords within reach.

The top five keywords by search volume are then checked to see if they are found in the title, H1, or copy, then flagged TRUE or FALSE.

This is great for finding quick wins! Just add the missing keyword naturally into the page copy, title, or H1.

Getting Started

The setup is fairly straightforward. We just need a crawl of the site (ideally with a custom extraction for the copy you’d like to check), and an exported file of all keywords a site ranks for.

This post will walk you through the setup, the code, and will link to a Google Colaboratory sheet if you just want to get stuck in without coding it yourself.

To get started, you will need two things: a crawl of the target website and an export of all the keywords the site ranks for.

We’ve named this the Striking Distance Report as it flags keywords that are easily within striking distance.

(We have defined striking distance as keywords that rank in positions four to 20, but have made this a configurable option in case you would like to define your own parameters.)

Striking Distance SEO Report: Getting Started

1. Crawl The Target Website

  • Set a custom extractor for the page copy (optional, but recommended).
  • Filter out pagination pages from the crawl.

2. Export All Keywords The Site Ranks For Using Your Favorite Provider

  • Filter out keywords that trigger as a sitelink.
  • Remove keywords that trigger as an image.
  • Filter out branded keywords.
  • Use both exports to create an actionable Striking Distance report from the keyword and crawl data with Python.

Crawling The Site

I’ve opted to use Screaming Frog to get the initial crawl. Any crawler will work, so long as the CSV export uses the same column names or they’re renamed to match.

The script expects to find the following columns in the crawl CSV export:

"Address", "Title 1", "H1-1", "Copy 1", "Indexability"

Crawl Settings

The first thing to do is to head over to the main configuration settings within Screaming Frog:

Configuration > Spider > Crawl

The main settings to use are:

Crawl Internal Links, Canonicals, and the Pagination (Rel Next/Prev) setting.

(The script will work with everything else selected, but the crawl will take longer to complete!)

Screenshot from Screaming Frog, October 2021: recommended crawl settings.

Next, it’s on to the Extraction tab.

Configuration > Spider > Extraction

Screenshot from Screaming Frog, October 2021: recommended extraction settings.

At a bare minimum, we need to extract the page title, H1, and calculate whether the page is indexable as shown below.

Indexability is useful because it’s an easy way for the script to identify which URLs to drop in one go, leaving only keywords that are eligible to rank in the SERPs.

If the script cannot find the indexability column, it’ll still work as normal but won’t differentiate between pages that can and cannot rank.

Setting A Custom Extractor For Page Copy

In order to check whether a keyword is found within the page copy, we need to set a custom extractor in Screaming Frog.

Configuration > Custom > Extraction

Name the extractor “Copy” as seen below.

Screenshot from Screaming Frog, October 2021: custom extraction showing the default options for extracting the page copy.

Important: The script expects the extractor to be named “Copy” as above, so please double check!

Lastly, make sure Extract Text is selected to export the copy as text, rather than HTML.

There are many guides on using custom extractors online if you need help setting one up, so I won’t go over it again here.

Once the extraction has been set it’s time to crawl the site and export the HTML file in CSV format.

Exporting The CSV File

Exporting the CSV file is as easy as changing the drop-down menu displayed underneath Internal to HTML and pressing the Export button.

Internal > HTML > Export

Screenshot from Screaming Frog, October 2021: export internal HTML settings.

After clicking Export, it’s important to make sure the type is set to CSV format.

The export screen should look like the below:

Screenshot from Screaming Frog, October 2021: internal HTML CSV export settings.

Tip 1: Filtering Out Pagination Pages

I recommend filtering out pagination pages from your crawl either by selecting Respect Next/Prev under the Advanced settings (or just deleting them from the CSV file, if you prefer).

Screenshot from Screaming Frog, October 2021: settings to respect rel=next/prev.

Tip 2: Saving The Crawl Settings

Once you have set the crawl up, it’s worth just saving the crawl settings (which will also remember the custom extraction).

This will save a lot of time if you want to use the script again in the future.

File > Configuration > Save As

Screenshot from Screaming Frog, October 2021: how to save a configuration file.

Exporting Keywords

Once we have the crawl file, the next step is to load your favorite keyword research tool and export all of the keywords a site ranks for.

The goal here is to export all the keywords a site ranks for, filtering out branded keywords and any which triggered as a sitelink or image.

For this example, I’m using the Organic Keyword Report in Ahrefs, but it will work just as well with Semrush if that’s your preferred tool.

In Ahrefs, enter the domain you’d like to check in Site Explorer and choose Organic Keywords.

Screenshot from Ahrefs.com, October 2021: Site Explorer settings.

Site Explorer > Organic Keywords

Screenshot from Ahrefs.com, October 2021: settings to export the organic keywords a site ranks for.

This will bring up all keywords the site is ranking for.

Filtering Out Sitelinks And Image Links

The next step is to filter out any keywords triggered as a sitelink or an image pack.

The reason we need to filter out sitelinks is that they have no influence on the parent URL ranking. This is because only the parent page technically ranks for the keyword, not the sitelink URLs displayed under it.

Filtering out sitelinks will ensure that we are optimizing the correct page.

Screenshot from Ahrefs.com, October 2021: pages ranking for sitelink keywords.

Here’s how to do it in Ahrefs.

Screenshot from Ahrefs.com, October 2021: how to exclude images and sitelinks from a keyword export.

Lastly, I recommend filtering out any branded keywords. You can do this by filtering the CSV output directly, or by pre-filtering in the keyword tool of your choice before the export.

Finally, when exporting make sure to choose Full Export and the UTF-8 format as shown below.

Screenshot from Ahrefs.com, October 2021: how to export keywords in UTF-8 format as a CSV file.

By default, the script works with Ahrefs (v1/v2) and Semrush keyword exports. It can work with any keyword CSV file as long as the column names the script expects are present.
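
For reference, here’s an illustrative keyword CSV using one set of column headers the script accepts (the keywords, positions, volumes, and URLs are made up); the script renames these columns on import:

Keyword,Current position,Current URL,Search Volume
seo tips,8,https://www.example.com/seo-tips/,1200
seo tips for beginners,12,https://www.example.com/seo-tips/,480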

Processing

The following instructions pertain to running a Google Colaboratory sheet to execute the code.

There is now a simpler option for those who prefer it, in the form of a Streamlit app. Simply follow the instructions provided to upload your crawl and keyword file.

Now that we have our exported files, all that’s left to be done is to upload them to the Google Colaboratory sheet for processing.

Select Runtime > Run all from the top navigation to run all cells in the sheet.

Screenshot from Colab.research.google.com, October 2021: how to run the striking distance Python script from Google Colaboratory.

The script will prompt you to upload the keyword CSV from Ahrefs or Semrush first and the crawl file afterward.

Screenshot from Colab.research.google.com, October 2021: how to upload the CSV files to Google Colaboratory.

That’s it! The script will automatically download an actionable CSV file you can use to optimize your site.

Screenshot from Microsoft Excel, October 2021: the striking distance final output.

Once you’re familiar with the whole process, using the script is really straightforward.

Code Breakdown And Explanation

If you’re learning Python for SEO and interested in what the code is doing to produce the report, stick around for the code walkthrough!

Install The Libraries

Let’s install pandas to get the ball rolling.

!pip install pandas

Import The Modules

Next, we need to import the required modules.

import pandas as pd
from pandas import DataFrame, Series
from typing import Union
from google.colab import files

Set The Variables

Now it’s time to set the variables.

The script considers any keywords between positions four and 20 as within striking distance.

Changing the variables here will let you define your own range if desired. It’s worth experimenting with the settings to get the best possible output for your needs.

# set all variables here
min_volume = 10  # set the minimum search volume
min_position = 4  # set the minimum position  / default = 4
max_position = 20 # set the maximum position  / default = 20
drop_all_true = True  # If all checks (h1/title/copy) are true, remove the recommendation (Nothing to do)
pagination_filters = "filterby|page|p="  # filter patterns used to detect and drop paginated pages

Upload The Keyword Export CSV File

The next step is to read in the list of keywords from the CSV file.

It is set up to accept an Ahrefs report (V1 and V2) as well as a Semrush export.

This code reads in the CSV file into a Pandas DataFrame.

upload = files.upload()
upload = list(upload.keys())[0]
df_keywords = pd.read_csv(
    (upload),
    error_bad_lines=False,
    low_memory=False,
    encoding="utf8",
    dtype={
        "URL": "str",
        "Keyword": "str",
        "Volume": "str",
        "Position": int,
        "Current URL": "str",
        "Search Volume": int,
    },
)
print("Uploaded Keyword CSV File Successfully!")

If everything went to plan, you’ll see a preview of the DataFrame created from the keyword CSV export. 

Screenshot from Colab.research.google.com, October 2021: DataFrame showing successful upload of the keyword export file.

Upload The Crawl Export CSV File

Once the keywords have been imported, it’s time to upload the crawl file.

This fairly simple piece of code reads in the crawl with some error handling options and creates a Pandas DataFrame named df_crawl.

upload = files.upload()
upload = list(upload.keys())[0]
df_crawl = pd.read_csv(
    (upload),
    error_bad_lines=False,
    low_memory=False,
    encoding="utf8",
    dtype="str",
)
print("Uploaded Crawl Dataframe Successfully!")

Once the CSV file has finished uploading, you’ll see a preview of the DataFrame.

Screenshot from Colab.research.google.com, October 2021: DataFrame of the crawl file uploaded successfully.

Clean And Standardize The Keyword Data

The next step is to rename the column names to ensure standardization between the most common types of file exports.

Essentially, we’re getting the keyword DataFrame into a good state and filtering using cutoffs defined by the variables.

df_keywords.rename(
    columns={
        "Current position": "Position",
        "Current URL": "URL",
        "Search Volume": "Volume",
    },
    inplace=True,
)

# keep only the following columns from the keyword dataframe
cols = "URL", "Keyword", "Volume", "Position"
df_keywords = df_keywords.reindex(columns=cols)

try:
    # clean the data. (v1 of the ahrefs keyword export combines strings and ints in the volume column)
    df_keywords["Volume"] = df_keywords["Volume"].str.replace("0-10", "0")
except AttributeError:
    pass

# clean the keyword data
df_keywords = df_keywords[df_keywords["URL"].notna()]  # remove any missing values
df_keywords = df_keywords[df_keywords["Volume"].notna()]  # remove any missing values
df_keywords = df_keywords.astype({"Volume": int})  # change data type to int
df_keywords = df_keywords.sort_values(by="Volume", ascending=False)  # sort by highest vol to keep the top opportunity

# make new dataframe to merge search volume back in later
df_keyword_vol = df_keywords[["Keyword", "Volume"]]

# drop rows if minimum search volume doesn't match specified criteria
df_keywords.loc[df_keywords["Volume"] < min_volume, "Volume_Too_Low"] = "drop"
df_keywords = df_keywords[~df_keywords["Volume_Too_Low"].isin(["drop"])]

# drop rows if minimum search position doesn't match specified criteria
df_keywords.loc[df_keywords["Position"] <= min_position, "Position_Too_High"] = "drop"
df_keywords = df_keywords[~df_keywords["Position_Too_High"].isin(["drop"])]
# drop rows if maximum search position doesn't match specified criteria
df_keywords.loc[df_keywords["Position"] >= max_position, "Position_Too_Low"] = "drop"
df_keywords = df_keywords[~df_keywords["Position_Too_Low"].isin(["drop"])]

Clean And Standardize The Crawl Data

Next, we need to clean and standardize the crawl data.

Essentially, we use reindex to only keep the “Address,” “Indexability,” “Title 1,” “H1-1,” and “Copy 1” columns, discarding the rest.

We use the handy “Indexability” column to only keep rows that are indexable. This will drop canonicalized URLs, redirects, and so on. I recommend enabling this option in the crawl.

Lastly, we standardize the column names so they’re a little nicer to work with.

# keep only the following columns from the crawl dataframe
cols = "Address", "Indexability", "Title 1", "H1-1", "Copy 1"
df_crawl = df_crawl.reindex(columns=cols)
# drop non-indexable rows
df_crawl = df_crawl[~df_crawl["Indexability"].isin(["Non-Indexable"])]
# standardise the column names
df_crawl.rename(columns={"Address": "URL", "Title 1": "Title", "H1-1": "H1", "Copy 1": "Copy"}, inplace=True)
df_crawl.head()

Group The Keywords

As we approach the final output, it’s necessary to group our keywords together to calculate the total opportunity for each page.

Here, we’re calculating how many keywords are within striking distance for each page, along with the combined search volume.

# groups the URLs (remove the dupes and combines stats)
# make a copy of the keywords dataframe for grouping - this ensures stats can be merged back in later from the OG df
df_keywords_group = df_keywords.copy()
df_keywords_group["KWs in Striking Dist."] = 1  # used to count the number of keywords in striking distance
df_keywords_group = (
    df_keywords_group.groupby("URL")
    .agg({"Volume": "sum", "KWs in Striking Dist.": "count"})
    .reset_index()
)
df_keywords_group.head()
Screenshot from Colab.research.google.com, October 2021: DataFrame showing how many keywords were found within striking distance.

Once complete, you’ll see a preview of the DataFrame.

Display Keywords In Adjacent Rows

We use the grouped data as the basis for the final output, reshaping the DataFrame with pandas’ unstack to display the keywords in the style of a GrepWords export.

Screenshot from Colab.research.google.com, October 2021: DataFrame showing a GrepWords-type view of keywords laid out horizontally.
# create a new df, combine the merged data with the original data. display in adjacent rows ala grepwords
df_merged_all_kws = df_keywords_group.merge(
    df_keywords.groupby("URL")["Keyword"]
    .apply(lambda x: x.reset_index(drop=True))
    .unstack()
    .reset_index()
)

# sort by biggest opportunity
df_merged_all_kws = df_merged_all_kws.sort_values(
    by="KWs in Striking Dist.", ascending=False
)

# reindex the columns to keep just the top five keywords
cols = "URL", "Volume", "KWs in Striking Dist.", 0, 1, 2, 3, 4
df_merged_all_kws = df_merged_all_kws.reindex(columns=cols)

# create union and rename the columns
df_striking: Union[Series, DataFrame, None] = df_merged_all_kws.rename(
    columns={
        "Volume": "Striking Dist. Vol",
        0: "KW1",
        1: "KW2",
        2: "KW3",
        3: "KW4",
        4: "KW5",
    }
)

# merges striking distance df with crawl df to merge in the title, h1 and category description
df_striking = pd.merge(df_striking, df_crawl, on="URL", how="inner")

Set The Final Column Order And Insert Placeholder Columns

Lastly, we set the final column order and merge in the original keyword data.

There are a lot of columns to sort and create!

# set the final column order and merge the keyword data in

cols = [
    "URL",
    "Title",
    "H1",
    "Copy",
    "Striking Dist. Vol",
    "KWs in Striking Dist.",
    "KW1",
    "KW1 Vol",
    "KW1 in Title",
    "KW1 in H1",
    "KW1 in Copy",
    "KW2",
    "KW2 Vol",
    "KW2 in Title",
    "KW2 in H1",
    "KW2 in Copy",
    "KW3",
    "KW3 Vol",
    "KW3 in Title",
    "KW3 in H1",
    "KW3 in Copy",
    "KW4",
    "KW4 Vol",
    "KW4 in Title",
    "KW4 in H1",
    "KW4 in Copy",
    "KW5",
    "KW5 Vol",
    "KW5 in Title",
    "KW5 in H1",
    "KW5 in Copy",
]

# re-index the columns to place them in a logical order + inserts new blank columns for kw checks.
df_striking = df_striking.reindex(columns=cols)

Merge In The Keyword Data For Each Column

This code merges the keyword volume data back into the DataFrame. It’s more or less the equivalent of an Excel VLOOKUP function.

# merge in keyword data for each keyword column (KW1 - KW5)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW1", right_on="Keyword", how="left")
df_striking['KW1 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW2", right_on="Keyword", how="left")
df_striking['KW2 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW3", right_on="Keyword", how="left")
df_striking['KW3 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW4", right_on="Keyword", how="left")
df_striking['KW4 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
df_striking = pd.merge(df_striking, df_keyword_vol, left_on="KW5", right_on="Keyword", how="left")
df_striking['KW5 Vol'] = df_striking['Volume']
df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True)
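
As an aside, since the five merges are identical apart from the column number, the same step could be written as a loop – a minimal equivalent sketch:

# equivalent loop over the five keyword columns (same behavior as the block above)
for i in range(1, 6):
    df_striking = pd.merge(df_striking, df_keyword_vol, left_on=f"KW{i}", right_on="Keyword", how="left")
    df_striking[f"KW{i} Vol"] = df_striking["Volume"]
    df_striking.drop(["Keyword", "Volume"], axis=1, inplace=True)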

Clean The Data Some More

The data requires additional cleaning to populate empty values (NaNs) as empty strings. This improves the readability of the final output by creating blank cells, instead of cells populated with NaN string values.

Next, we convert the columns to lowercase so that they match when checking whether a target keyword is featured in a specific column.

# replace nan values with empty strings
df_striking = df_striking.fillna("")
# convert the title, h1 and copy to lower case so kws can be matched against them
df_striking["Title"] = df_striking["Title"].str.lower()
df_striking["H1"] = df_striking["H1"].str.lower()
df_striking["Copy"] = df_striking["Copy"].str.lower()

Check Whether The Keyword Appears In The Title/H1/Copy and Return True Or False

This code checks if the target keyword is found in the page title/H1 or copy.

It’ll flag true or false depending on whether a keyword was found within the on-page elements.

df_striking["KW1 in Title"] = df_striking.apply(lambda row: row["KW1"] in row["Title"], axis=1)
df_striking["KW1 in H1"] = df_striking.apply(lambda row: row["KW1"] in row["H1"], axis=1)
df_striking["KW1 in Copy"] = df_striking.apply(lambda row: row["KW1"] in row["Copy"], axis=1)
df_striking["KW2 in Title"] = df_striking.apply(lambda row: row["KW2"] in row["Title"], axis=1)
df_striking["KW2 in H1"] = df_striking.apply(lambda row: row["KW2"] in row["H1"], axis=1)
df_striking["KW2 in Copy"] = df_striking.apply(lambda row: row["KW2"] in row["Copy"], axis=1)
df_striking["KW3 in Title"] = df_striking.apply(lambda row: row["KW3"] in row["Title"], axis=1)
df_striking["KW3 in H1"] = df_striking.apply(lambda row: row["KW3"] in row["H1"], axis=1)
df_striking["KW3 in Copy"] = df_striking.apply(lambda row: row["KW3"] in row["Copy"], axis=1)
df_striking["KW4 in Title"] = df_striking.apply(lambda row: row["KW4"] in row["Title"], axis=1)
df_striking["KW4 in H1"] = df_striking.apply(lambda row: row["KW4"] in row["H1"], axis=1)
df_striking["KW4 in Copy"] = df_striking.apply(lambda row: row["KW4"] in row["Copy"], axis=1)
df_striking["KW5 in Title"] = df_striking.apply(lambda row: row["KW5"] in row["Title"], axis=1)
df_striking["KW5 in H1"] = df_striking.apply(lambda row: row["KW5"] in row["H1"], axis=1)
df_striking["KW5 in Copy"] = df_striking.apply(lambda row: row["KW5"] in row["Copy"], axis=1)

Delete True/False Values If There Is No Keyword

This will delete true/false values when there is no keyword adjacent.

# delete true / false values if there is no keyword
df_striking.loc[df_striking["KW1"] == "", ["KW1 in Title", "KW1 in H1", "KW1 in Copy"]] = ""
df_striking.loc[df_striking["KW2"] == "", ["KW2 in Title", "KW2 in H1", "KW2 in Copy"]] = ""
df_striking.loc[df_striking["KW3"] == "", ["KW3 in Title", "KW3 in H1", "KW3 in Copy"]] = ""
df_striking.loc[df_striking["KW4"] == "", ["KW4 in Title", "KW4 in H1", "KW4 in Copy"]] = ""
df_striking.loc[df_striking["KW5"] == "", ["KW5 in Title", "KW5 in H1", "KW5 in Copy"]] = ""
df_striking.head()

Drop Rows If All Values == True

This configurable option is really useful for reducing the QA time required for the final output – it drops a keyword opportunity from the report if the keyword is already found in all three columns (nothing left to do).

def true_dropper(col1, col2, col3):
    drop = df_striking.drop(
        df_striking[
            (df_striking[col1] == True)
            & (df_striking[col2] == True)
            & (df_striking[col3] == True)
        ].index
    )
    return drop

if drop_all_true == True:
    df_striking = true_dropper("KW1 in Title", "KW1 in H1", "KW1 in Copy")
    df_striking = true_dropper("KW2 in Title", "KW2 in H1", "KW2 in Copy")
    df_striking = true_dropper("KW3 in Title", "KW3 in H1", "KW3 in Copy")
    df_striking = true_dropper("KW4 in Title", "KW4 in H1", "KW4 in Copy")
    df_striking = true_dropper("KW5 in Title", "KW5 in H1", "KW5 in Copy")

Download The CSV File

The last step is to download the CSV file and start the optimization process.

df_striking.to_csv('Keywords in Striking Distance.csv', index=False)
files.download("Keywords in Striking Distance.csv")

Conclusion

If you are looking for quick wins for any website, the striking distance report is a really easy way to find them.

Don’t let the number of steps fool you. It’s not as complex as it seems. It’s as simple as uploading a crawl and keyword export to the supplied Google Colab sheet or using the Streamlit app.

The results are definitely worth it!

