SEO Competitive Analysis: The Definitive Guide

Marketing is all about explaining to potential customers why your product fits their needs the best. To know that, you need to know what your competitors are offering.

The only way to do that is through competitor analysis – studying what your competitors are doing.

You have to build the best product and the best content you can, and knowing what your competitors are doing is a part of that.

This guide covers best practices that will help you identify your competitors, understand how they rank, and decide what you can do about it. And make sure to combine these tips with your favorite competitor research tools!

1. Identify Your SEO Competitors

You probably already know who the big players in your industry are, but can you name your main SEO rivals?

They aren’t necessarily the same.

In fact, you might have multiple SEO competitors who exist outside of your niche that you need to contend with in SERPs.

For example, a bakery in New York trying to rank for keywords like “best bread in New York” would be competing with other bakeries for first-page results.

But if that bakery was also trying to rank a helpful how-to blog, they’d be competing with publishing giants like Food Network and Taste of Home, too.

They’d have their work cut out for them if they wanted to break the top 10 in those SERPs!

This is true in every industry:

Your top SEO competitors are the ones who rank on the first search page of the keywords you’re targeting, regardless of whether they’re your business competitors.

If you operate in multiple niches, you may even have distinct lists of competitors for each service you offer with little-to-no overlap between them.

Fortunately, finding out who your competitors are is as easy as entering your top keywords into Google and writing down the domains of your main competitors (or entering your keywords into your competitor analysis tool and letting it do all of the heavy lifting for you).
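
If you'd rather script that tallying step, a few lines of Python will do it. Here's a minimal sketch, with made-up SERP data standing in for an export from your rank tracker:

```python
from collections import Counter
from urllib.parse import urlparse

# First-page URLs per target keyword -- hypothetical data standing in
# for an export from your rank tracker.
serps = {
    "best bread in new york": [
        "https://bakery-a.example/bread",
        "https://bakery-b.example/",
    ],
    "how to bake sourdough": [
        "https://foodnetwork.com/baking/sourdough",
        "https://bakery-a.example/blog/sourdough",
    ],
}

# Domains that keep appearing across your keywords are your real SEO competitors.
domains = Counter(
    urlparse(url).netloc for urls in serps.values() for url in urls
)
print(domains.most_common())
```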

Even if you’re using a tool, it’s in your best interest to monitor the SERP landscape you’re entering into (e.g., if your target keyword is dominated by videos, you probably want to think about creating video content to compete).

Pay special attention to competitors occupying local packs and position zero, too – you should definitely compete for these coveted spots!

2. Evaluate Keyword Difficulty

Before you begin analyzing specific link-building strategies or on-page SEO, it’s a good idea to assess the strength of your SEO competitors.

While you can theoretically beat out any competitor in any niche and for any keyword, the resources some keywords would require make them infeasible targets.

Use your competitor analysis tool to look at your competitors’ total domain strength, and then analyze the specific factors behind it.

Write down the information and look for any weaknesses that you can turn to your advantage.

The stronger a target competitor’s SEO, the higher their difficulty score and the harder it will be to outrank them.

Focus on competitors with lower overall scores that are still ranking well for niche keywords.
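
As a toy illustration of that filtering step (the field names, values, and threshold below are all hypothetical – substitute whatever metrics your tool actually exports):

```python
# Hypothetical export from a competitor analysis tool.
competitors = [
    {"domain": "bakery-a.example", "domain_strength": 72},
    {"domain": "bakery-b.example", "domain_strength": 38},
    {"domain": "foodnetwork.com", "domain_strength": 91},
]

# Weaker domains ranking for your niche keywords are the realistic targets.
beatable = [c for c in competitors if c["domain_strength"] < 50]
print([c["domain"] for c in beatable])  # ['bakery-b.example']
```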

3. Look For New Keyword Opportunities

Term frequency-inverse document frequency analysis (or, because that’s a mouthful, TF-IDF analysis) can be a useful method for enriching your existing content with “proper” keywords your competitors are using.

This allows you to properly optimize your pages for search engines, or to discover low-competition keywords you might have missed.

Simply put, TF-IDF multiplies how often a keyword appears on a page (term frequency) by how rare that keyword is across all pages in the collection (inverse document frequency) – so common filler words score low, while distinctive, topic-relevant terms score high.
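
Here's what that calculation looks like in a minimal Python sketch (assuming you've already fetched and tokenized each top-ranking page into a list of words; production tools, and libraries like scikit-learn's TfidfVectorizer, add normalization and smoothing on top of this):

```python
import math

def tf_idf(term, doc, corpus):
    """Score `term` for one tokenized document against a corpus of documents.

    tf:  how often the term appears in this document (normalized by length).
    idf: log of (corpus size / documents containing the term) -- rare,
         distinctive terms score high; ubiquitous words score near zero.
    """
    tf = doc.count(term) / len(doc)
    docs_with_term = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / (1 + docs_with_term))
    return tf * idf

# Toy corpus: three tokenized "pages."
corpus = [
    ["coffee", "brewing", "recipes", "filters", "roasting"],
    ["coffee", "beans", "blends", "roasting"],
    ["running", "shoes", "trail", "races"],
]
print(tf_idf("filters", corpus[0], corpus))  # distinctive term, scores > 0
```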

When you analyze TF-IDF you might discover that most top-ranking pages for your target keywords share many similar terms and phrases.

If you aren’t targeting those topic-relevant terms, then you need to either add them to existing applicable pages or create new content to boost your relevance in semantic search.

This concept is a little more complicated than the other strategies we’ll discuss, but it can quickly become a vital part of creating a comprehensive content strategy.

For example, using TF-IDF, we discovered that high-ranking content for the keyword “coffee brewing recipes” almost always contains specific information about different coffee bean blends, roasting techniques, and types of filters.

4. Analyze On-Page Optimization & On-Site Content

Using your competitive analysis tool to analyze your competitors’ on-site SEO will give you a veritable goldmine of new information to work with.

You’ll learn how often they’re publishing content, what types of content they’re publishing, and which keywords they’re targeting.

Pay special attention to:

  • Metadata.
  • Headline strategies (title length, keywords in the title, proper title tags, etc.).

Try to unravel their internal linking strategies, too. Use this information as a benchmark for your on-site SEO efforts.

Figure out what they’re doing well so you can learn from it, and what they’re missing so you can do it better.
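
For a quick spot-check of a competitor page's metadata and headline tags, here's a small sketch using the requests and BeautifulSoup libraries (the URL is hypothetical, and this only scratches the surface of what a dedicated tool reports):

```python
import requests
from bs4 import BeautifulSoup

def audit_page(url):
    """Pull the basic on-page elements worth benchmarking from one URL."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    title = soup.title.get_text(strip=True) if soup.title else ""
    desc = soup.find("meta", attrs={"name": "description"})
    return {
        "title": title,
        "title_length": len(title),  # benchmark against your own titles
        "meta_description": desc.get("content", "") if desc else "",
        "h1s": [h.get_text(strip=True) for h in soup.find_all("h1")],
        "internal_links": sum(
            1 for a in soup.find_all("a", href=True) if a["href"].startswith("/")
        ),
    }

print(audit_page("https://competitor.example/landing-page"))
```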

When analyzing content, you’ll want to keep track of:

  • Topical relevance.
  • What types of content or media they’re creating.
  • Video length or word count.
  • The depth of detail covered.

All of these play a significant role in how Google evaluates and ranks your pages once Googlebot crawls your website.

5. Dig Into Competitor Backlink Profiles

One of the most important parts of a competitive analysis is figuring out where your rivals are earning their backlinks from and using that information to build high-quality links for your website.

Dissecting your opponents’ link profiles is a great way to find new link opportunities.

Again, you’ll need a robust SEO tool for this step – it’s practically impossible to pull off manually.
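
Once your tool has done the heavy lifting, though, the comparison itself can be simple. A sketch of a basic "link gap" check, using hypothetical referring-domain exports:

```python
# Referring domains exported from your backlink tool (hypothetical data).
your_domains = {"example-blog.com", "citybakerynews.example"}
competitor_domains = {"example-blog.com", "bakerweekly.example", "foodnetwork.com"}

# Domains linking to your competitor but not to you: prime outreach targets.
link_gap = competitor_domains - your_domains
print(sorted(link_gap))
```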

6. Examine Site Structure & UX

If you don’t know that Google has been hyper-focused on improving user experience, then you haven’t been paying attention.

Almost all of the major algorithmic changes we’ve seen over the past few years have been focused on UX – better mobile experiences, faster pages, and improved search results.

If your website is slower than your competitors’, unresponsive, or more confusing to navigate, then that’s something you absolutely need to correct.

To see what your competitors are doing, you’ll want to take a look at their landing pages (a crawling sketch follows the list below):

  • Analyze their click depth.
  • See if they have any orphan pages.
  • Check out their PageRank distribution.
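
A minimal breadth-first crawler can approximate click depth for a small site. Here's a hypothetical sketch (respect robots.txt and rate limits before pointing it at anyone's site); pages that appear in a site's sitemap but never show up in the crawl are candidate orphan pages:

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def click_depths(start_url, max_pages=200):
    """Breadth-first crawl recording how many clicks each page is from the start."""
    site = urlparse(start_url).netloc
    depths = {start_url: 0}
    queue = deque([start_url])

    while queue and len(depths) < max_pages:
        url = queue.popleft()
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            # Stay on the same site and visit each page only once.
            if urlparse(link).netloc == site and link not in depths:
                depths[link] = depths[url] + 1
                queue.append(link)
    return depths

print(click_depths("https://competitor.example/"))
```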

If you analyze competitor sites and see that they’re ranking well despite having an outdated website or terrible mobile optimization, that’s a prime opportunity for you to gain some real estate on SERPs.

7. Learn How They’re Leveraging Social Media

The exact nature of how social media intersects with SEO is hotly contested, but few optimization specialists would disagree that it’s an important element of any healthy SEO strategy.

Of course, that’s because a good social listening tool does far more than keep you up to date on every new cat meme your competitor is tweeting.

A good social listening tool enables you to:

  • Increase website traffic by tracking linkless mentions on social media and engaging with your audience – especially when people are specifically using or looking for your product.
  • Track brand mentions off social media platforms and do the same thing (a good social listening tool should be able to monitor news sites, blogs, forums, etc.).
  • Monitor user sentiment.

Some easy research you can perform includes monitoring:

  • Which platforms your competitors are (or are not) using.
  • How often they publish new content.
  • How they communicate with their followers.
  • Which types of content get the most engagement.

You may even want to track competitor linkless mentions, user reviews, and PR so that you can see what their customers like about their product or service – and what you could do better.

8. Try To Track Competitor Ad Spend

If you’ve done everything you can to optimize your website and you’re still getting beat in the SERPs, it’s possible that your competitors are simply outspending you and using paid traffic campaigns to generate conversions and sales.

I recommend against trying to match each competitor’s spending tit for tat, but you may find it valuable to monitor their Google Ads campaigns, promoted content, banner ads, paid posts, and more so that you can at least gauge what other people in your niche are spending on advertising.

Conclusion

Now that you have a handle on competitive analysis, the only thing left to do is keep at it.

Continue making small improvements, keeping tabs on your competitors, and monitoring your rankings.

Eventually, your hard work should pay off and you’ll start to improve your position.

Are Contextual Links A Google Ranking Factor?

Inbound links are a ranking signal that can vary greatly in terms of how they’re weighted by Google.

One of the key attributes that experts say can separate a high value link from a low value link is the context in which it appears.

When a link is placed within relevant content, it’s thought to have a greater impact on rankings than a link randomly inserted within unrelated text.

Is there any merit to that claim?

Let’s dive deeper into what has been said about contextual links as a ranking factor to see whether there’s any evidence to support those claims.

The Claim: Contextual Links Are A Ranking Factor

A “contextual link” refers to an inbound link pointing to a URL that’s relevant to the content in which the link appears.

When an article links to a source to provide additional context for the reader, for example, that’s a contextual link.

Contextual links add value rather than being a distraction.

They should flow naturally with the content, giving the reader some clues about the page they’re being directed to.

Not to be confused with anchor text, which refers to the clickable part of a link, a contextual link is defined by the surrounding text.

A link’s anchor text could be related to the webpage it’s pointing to, but if it’s surrounded by content that’s otherwise irrelevant then it doesn’t qualify as a contextual link.

Contextual links are said to be a Google ranking factor, with claims that they’re weighted higher by the search engine than other types of links.

One of the reasons why Google might care about context when it comes to links is because of the experience it creates for users.

When a user clicks a link and lands on a page related to what they were previously looking at, it’s a better experience than getting directed to a webpage they aren’t interested in.

Modern guides to link building all recommend getting links from relevant URLs, as opposed to placing links on any site that will take them.

There’s now a greater emphasis on quality over quantity when it comes to link building, and a link is considered higher quality when its placement makes sense in context.

One high quality contextual link can, in theory, be worth more than multiple lower quality links.

That’s why experts advise site owners to gain at least a few contextual links, as that will get them further than building dozens of random links.

If Google weights the quality of links higher or lower based on context, it would mean Google’s crawlers can understand webpages and assess how closely they relate to other URLs on the web.

Is there any evidence to support this?

The Evidence For Contextual Links As A Ranking Factor

Evidence in support of contextual links as a ranking factor can be traced back to 2012 with the launch of the Penguin algorithm update.

Google’s original algorithm, PageRank, was built entirely on links. The more links pointing to a website, the more authority it was considered to have.

Site owners could catapult their websites to the top of Google’s search results by building as many links as possible. It didn’t matter if the links were contextual or arbitrary.

Google’s PageRank algorithm wasn’t as selective about which links it valued (or devalued) over others until it was augmented with the Penguin update.

Penguin brought a number of changes to Google’s algorithm that made it more difficult to manipulate search rankings through spammy link building practices.

In Google’s announcement of the launch of Penguin, former search engineer Matt Cutts highlighted a specific example of the link spam it was designed to target.

This example depicts the exact opposite of a contextual link, with Cutts saying:

“Here’s an example of a site with unusual linking patterns that is also affected by this change. Notice that if you try to read the text aloud you’ll discover that the outgoing links are completely unrelated to the actual content, and in fact, the page text has been “spun” beyond recognition.”

A contextual link, on the other hand, looks like the one a few paragraphs above linking to Google’s blog post.

Links with context share the following characteristics:

  • Placement fits in naturally with the content.
  • Linked URL is relevant to the article.
  • Reader knows where they’re going when they click on it.

All of the documentation Google has published about Penguin over the years is the strongest evidence available in support of contextual links as a ranking factor.

See: A Complete Guide to the Google Penguin Algorithm Update

Google will never outright say “contextual link building is a ranking factor,” however, because the company discourages any deliberate link building at all.

As Cutts adds at the end of his Penguin announcement, Google would prefer to see webpages acquire links organically:

“We want people doing white hat search engine optimization (or even no search engine optimization at all) to be free to focus on creating amazing, compelling web sites.”

Contextual Links Are A Ranking Factor: Our Verdict

Contextual links are probably a Google ranking factor.

A link is weighted higher when it’s used in context than if it’s randomly placed within unrelated content.

But that doesn’t necessarily mean links without context will negatively impact a site’s rankings.

External links are largely outside a site owner’s control.

If a website links to you out of context it’s not a cause for concern, because Google is capable of ignoring low value links.

On the other hand, if Google detects a pattern of unnatural links, then that could count against a site’s rankings.

If you have actively engaged in non-contextual link building in the past, it may be wise to consider using the disavow tool.


Latent Semantic Indexing (LSI): Is It A Google Ranking Factor?

Latent semantic indexing (LSI) is an indexing and information retrieval method used to identify patterns in the relationships between terms and concepts.

With LSI, a mathematical technique is used to find semantically related terms within a collection of text (an index) where those relationships might otherwise be hidden (or latent).

And in that context, this sounds like it could be super important for SEO.

Right?

After all, Google is a massive index of information, and we’re hearing all kinds of things about semantic search and the importance of relevance in the search ranking algorithm.

If you’ve heard rumblings about latent semantic indexing in SEO or been advised to use LSI keywords, you aren’t alone.

But will LSI actually help improve your search rankings? Let’s take a look.

The Claim: Latent Semantic Indexing As A Ranking Factor

The claim is simple: Optimizing web content using LSI keywords helps Google better understand it and you’ll be rewarded with higher rankings.

Backlinko defines LSI keywords in this way:

“LSI (Latent Semantic Indexing) Keywords are conceptually related terms that search engines use to deeply understand content on a webpage.”

By using contextually related terms, you can deepen Google’s understanding of your content. Or so the story goes.

That resource goes on to make some pretty compelling arguments for LSI keywords:

  • “Google relies on LSI keywords to understand content at such a deep level.”
  • “LSI Keywords are NOT synonyms. Instead, they’re terms that are closely tied to your target keyword.”
  • “Google doesn’t ONLY bold terms that exactly match what you just searched for (in search results). They also bold words and phrases that are similar. Needless to say, these are LSI keywords that you want to sprinkle into your content.”

Does this practice of “sprinkling” terms closely related to your target keyword help improve your rankings via LSI?

The Evidence For LSI As A Ranking Factor

Relevance is identified as one of five key factors that help Google determine which result is the best answer for any given query.

As Google explains in its How Search Works resource:

“To return relevant results for your query, we first need to establish what information you’re looking for – the intent behind your query.”

Once intent has been established:

“…algorithms analyze the content of webpages to assess whether the page contains information that might be relevant to what you are looking for.”

Google goes on to explain that the “most basic signal” of relevance is that the keywords used in the search query appear on the page. That makes sense – if you aren’t using the keywords the searcher is looking for, how could Google tell you’re the best answer?

Now, this is where some believe LSI comes into play.

If using keywords is a signal of relevance, using just the right keywords must be a stronger signal.

There are purpose-built tools dedicated to helping you find these LSI keywords, and believers in this tactic recommend using all kinds of other keyword research tactics to identify them as well.

The Evidence Against LSI As A Ranking Factor

Google’s John Mueller has been crystal clear on this one:

“…we have no concept of LSI keywords. So that’s something you can completely ignore.”

There’s a healthy skepticism in SEO that Google may say things to lead us astray in order to protect the integrity of the algorithm. So let’s dig in here.

First, it’s important to understand what LSI is and where it came from.

Latent semantic structure emerged as a methodology for retrieving textual objects from files stored in a computer system in the late 1980s. As such, it’s an example of one of the earlier information retrieval (IR) concepts available to programmers.

As computer storage capacity improved and electronically available sets of data grew in size, it became more difficult to locate exactly what one was looking for in that collection.

Researchers described the problem they were trying to solve in a patent application filed September 15, 1988:

“Most systems still require a user or provider of information to specify explicit relationships and links between data objects or text objects, thereby making the systems tedious to use or to apply to large, heterogeneous computer information files whose content may be unfamiliar to the user.”

Keyword matching was being used in IR at the time, but its limitations were evident long before Google came along.

Too often, the words a person used to search for the information they sought were not exact matches for the words used in the indexed information.

There are two reasons for this:

  • Synonymy: the diverse range of words used to describe a single object or idea results in relevant results being missed.
  • Polysemy: the different meanings of a single word result in irrelevant results being retrieved.

These are still issues today, and you can imagine what a massive headache it is for Google.

However, the methodologies and technology Google uses to solve for relevance long ago moved on from LSI.

What LSI did was automatically create a “semantic space” for information retrieval.

As the patent explains, LSI treated this unreliability of association data as a statistical problem.

Without getting too into the weeds, these researchers essentially believed that there was a hidden underlying latent semantic structure they could tease out of word usage data.

Doing so would reveal the latent meaning and enable the system to bring back more relevant results – and only the most relevant results – even if there’s no exact keyword match.

Here’s what that LSI process actually looks like:

[Diagram of the LSI process – Image created by author, January 2022]

And here’s the most important thing you should note about the above illustration of this methodology from the patent application: there are two separate processes happening.

First, the collection or index undergoes Latent Semantic Analysis.

Second, the query is analyzed and the already-processed index is then searched for similarities.
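
As a toy illustration of those two steps, here's a sketch using scikit-learn, whose TruncatedSVD applied to a TF-IDF matrix is essentially LSA (tiny made-up corpus; note that step 1 has to process the entire collection before any query can be answered):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

docs = [
    "how to brew pour-over coffee at home",
    "a pour-over brewing guide for light roasts",
    "best trail running shoes for muddy races",
]

# Step 1: build the "semantic space" from the whole collection up front.
lsa = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=2))
doc_vectors = lsa.fit_transform(docs)

# Step 2: project the query into that space and rank documents by similarity --
# "coffee brewing" matches the brewing docs even without exact keyword overlap.
query_vector = lsa.transform(["coffee brewing"])
scores = cosine_similarity(query_vector, doc_vectors)[0]
print(sorted(zip(scores, docs), reverse=True))
```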

And that’s where the fundamental problem with LSI as a Google search ranking signal lies.

Google’s index is massive at hundreds of billions of pages, and it’s growing constantly.

Each time a user inputs a query, Google is sorting through its index in a fraction of a second to find the best answer.

Using the above methodology in the algorithm would require that Google:

  1. Recreate that semantic space using LSA across its entire index.
  2. Analyze the semantic meaning of the query.
  3. Find all similarities between the semantic meaning of the query and documents in the semantic space created from analyzing the entire index.
  4. Sort and rank those results.

That’s a gross oversimplification, but the point is that this isn’t a scalable process.

This would be super useful for small collections of information. It was helpful for surfacing relevant reports inside a company’s computerized archive of technical documentation, for example.

The patent application illustrates how LSI works using a collection of nine documents. That’s the scale it was designed for; by modern standards of computerized information retrieval, LSI is primitive.

Latent Semantic Indexing As A Ranking Factor: Our Verdict

While the underlying principles of eliminating noise by determining semantic relevance have surely informed developments in search ranking since LSA/LSI was patented, LSI itself has no useful application in SEO today.

It hasn’t been ruled out completely, but there is no evidence that Google has ever used LSI to rank results. And Google definitely isn’t using LSI or LSI keywords today to rank search results.

Those who recommend using LSI keywords are latching on to a concept they don’t quite understand in an effort to explain why the ways in which words are related (or not) is important in SEO.

Relevance and intent are foundational considerations in Google’s search ranking algorithm.

Those are two of the big questions they’re trying to solve for in surfacing the best answer for any query.

Synonymy and polysemy are still major challenges.

Semantics – that is, our understanding of the various meanings of words and how they’re related – is essential in producing more relevant search results.

But LSI has nothing to do with that.


What Is a Google Broad Core Algorithm Update?

When Google announces a broad core algorithm update, many SEO professionals find themselves asking what exactly changed (besides their rankings).

Google’s acknowledgment of core updates is always vague and doesn’t provide much detail other than to say the update occurred.

The SEO community is typically notified about core updates via the same standard tweets from Google’s Search Liaison.

There’s one announcement from Google when the update begins rolling out, and one on its conclusion, with few additional details in between (if any).

This invariably leaves SEO professionals and site owners asking many questions with respect to how their rankings were impacted by the core update.

To gain insight into what may have caused a site’s rankings to go up, down, or stay the same, it helps to understand what a broad core update is and how it differs from other types of algorithm updates.

After reading this article you’ll have a better idea of what a core update is designed to do, and how to recover from one if your rankings were impacted.

So, What Exactly Is A Core Update?

First, let me get the obligatory “Google makes hundreds of algorithm changes per year, often more than one per day” boilerplate out of the way.

Many of the named updates we hear about (Penguin, Panda, Pigeon, Fred, etc.) are implemented to address specific faults or issues in Google’s algorithms.

In the case of Penguin, it was link spam; in the case of Pigeon, it was local SEO spam.

They all had a specific purpose.

In these cases, Google (sometimes reluctantly) informed us what they were trying to accomplish or prevent with the algorithm update, and we were able to go back and remedy our sites.

A core update is different.

The way I understand it, a core update is a tweak or change to the main search algorithm itself.

You know, the one that has between 200 and 500 ranking factors and signals (depending on which SEO blog you’re reading today).

What a core update means to me is that Google slightly tweaked the importance, order, weights, or values of these signals.

Because of that, they can’t come right out and tell us what changed without revealing the secret sauce.

The simplest way to visualize this would be to imagine 200 factors listed in order of importance.

Now imagine Google changing the order of 42 of those 200 factors.

Rankings would change, but it would be a combination of many things, not due to one specific factor or cause.

Obviously, it isn’t that simple, but that’s a good way to think about a core update.

Here’s a purely made up, slightly more complicated example of what Google wouldn’t tell us:

“In this core update, we increased the value of keywords in H1 tags by 2%, increased the value of HTTPS by 18%, decreased the value of keyword in title tag by 9%, changed the D value in our PageRank calculation from .85 to .70, and started using a TF-iDUF retrieval method for logged in users instead of the traditional TF-PDF method.”

(I swear these are real things. I just have no idea if they’re real things used by Google.)

For starters, many SEO pros wouldn’t understand it.

Basically, it means Google may have changed the way they calculate term importance on a page, or the weighing of links in PageRank, or both, or a whole bunch of other factors that they can’t talk about (without giving away the algorithm).

Put simply: Google changed the weight and importance of many ranking factors.

That’s the simple explanation.

In its most complex form, Google ran a new training set through their machine learning ranking model, quality raters picked this new set of results as more relevant than the previous set, and the engineers have no idea what weights changed or how they changed, because that’s just how machine learning works.

(We all know Google uses quality raters to rate search results. These ratings are how they choose one algorithm change over another – not how they rate your site. Whether they feed this into machine learning is anybody’s guess. But it’s one possibility.)

It’s likely some random combination of weighting delivered more relevant results for the quality raters, so they tested it more, the test results confirmed it, and they pushed it live.

How Can You Recover From A Core Update?

Unlike a major named update that targeted specific things, a core update may tweak the values of everything.

Because websites are weighted against other websites relevant to your query (engineers call this a corpus) the reason your site dropped could be entirely different than the reason somebody else’s increased or decreased in rankings.

To put it simply, Google isn’t telling you how to “recover” because it’s likely a different answer for every website and query.

It all depends on what everybody else trying to rank for your query is doing.

Does every one of them but you have their keyword in the H1 tag? If so, that could be a contributing factor.

Do you all do that already? Then that probably carries less weight for that corpus of results.

It’s very likely that this algorithm update didn’t “penalize” you for something at all. It most likely just rewarded another site more for something else.

Maybe you were killing it with internal anchor text and they were doing a great job of formatting content to match user intent – and Google shifted the weights so that content formatting was slightly higher and internal anchor text was slightly lower.

(Again, hypothetical examples here.)
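
To make that weight-shuffling idea concrete, here's a toy Python sketch – the signals, values, and weights are entirely made up, in the spirit of the hypotheticals above:

```python
# Two pages scored on two made-up signals (0-1 scale).
pages = {
    "your-site": {"internal_anchors": 0.9, "content_format": 0.4},
    "their-site": {"internal_anchors": 0.5, "content_format": 0.9},
}

def rank(weights):
    scores = {
        page: sum(weights[s] * value for s, value in signals.items())
        for page, signals in pages.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

before = {"internal_anchors": 0.6, "content_format": 0.4}
after = {"internal_anchors": 0.4, "content_format": 0.6}  # the "core update"

print(rank(before))  # ['your-site', 'their-site']
print(rank(after))   # ['their-site', 'your-site'] -- nothing you did "wrong"
```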

In reality, it was probably several minor tweaks that, when combined, tipped the scales slightly in favor of one site or another (think of our reordered list here).

Finding that “something else” that is helping your competitors isn’t easy – but it’s what keeps SEO professionals in the business.

Next Steps And Action Items

Rankings are down after a core update – now what?

Your next step is to gather intel on the pages that are ranking where your site used to be.

Conduct a SERP analysis to find positive correlations between pages that are ranking higher for queries where your site is now lower.

Try not to overanalyze the technical details, such as how fast each page loads or what their Core Web Vitals scores are.

Pay attention to the content itself. As you go through it, ask yourself questions like:

  • Does it provide a better answer to the query than your article?
  • Does the content contain more recent data and current stats than yours?
  • Are there pictures and videos that help bring the content to life for the reader?

Google aims to serve content that provides the best and most complete answers to searchers’ queries. Relevance is the one ranking factor that will always win out over all others.

Take an honest look at your content to see if it’s as relevant today as it was prior to the core algorithm update.

From there you’ll have an idea of what needs improvement.

The best advice for conquering core updates?

Keep focusing on:

  • User intent.
  • Quality content.
  • Clean architecture.
  • Google’s guidelines.

Finally, don’t stop improving your site once you reach Position 1, because the site in Position 2 isn’t going to stop.

Yeah, I know, it’s not the answer anybody wants and it sounds like Google propaganda. I swear it’s not.

It’s just the reality of what a core update is.

Nobody said SEO was easy.
