A Simple (But Complete) Guide
Most website owners have to deal with redirects at one point or another. Redirects help keep things accessible for users and search engines when you rebrand, merge multiple websites, delete a page, or simply move a page to a new location.

However, the world of redirects is a murky one, as different types of redirects exist for different scenarios. So it’s important to understand the differences between them.

What is a redirect?

Redirects are a way to forward users (and bots) to a URL other than the one they requested.

Why should you use redirects?

There are two reasons why you should use redirects when moving content:

  • Better user experience for visitors – You don’t want visitors to get hit with a “page not found” warning when they’re trying to access a page that’s moved. Redirects solve this problem by seamlessly sending visitors to the content’s new location.
  • Help search engines understand your site – Redirects tell search engines where content has moved and whether the move is permanent or temporary. This affects if and how the pages appear in their search results.

When should you use redirects?

You should use redirects when you move content from one URL to another and, occasionally, when you delete content. Let’s take a quick look at a few common scenarios where you’ll want to use them.

When moving domains

If you’re rebranding and moving from one domain to another, you’ll need to permanently redirect all the pages on the old domain to their locations on the new domain.

When merging websites

If you’re merging multiple websites into one, you’ll need to permanently redirect old URLs to new URLs.

When switching to HTTPS

If you’re switching from HTTP to HTTPS (strongly recommended), you’ll need to permanently redirect every unsecure (HTTP) page and resource to its secure (HTTPS) location.
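The URL mapping itself is mechanical. As a rough sketch (the function name is mine, not from any particular library), rewriting the scheme looks like this; the actual 301 would still be issued server-side:

```python
from urllib.parse import urlsplit, urlunsplit

def https_location(url):
    """Return the https:// equivalent of an http:// URL, leaving the rest intact."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        return url  # already secure, or not an http URL at all
    return urlunsplit(("https",) + tuple(parts[1:]))

https_location("http://example.com/page?x=1")  # "https://example.com/page?x=1"
```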

When running a promotion

If you’re running a temporary promotion and want to send visitors from, say, domain.com/laptops to domain.com/laptops-black-friday-deals, you’ll need to use a temporary redirect.

When deleting pages

If you’re removing content from your site, you should permanently redirect its URL to a relevant, similar page where possible. This helps to ensure that any backlinks to the old page still count for SEO purposes. It also ensures that any bookmarks or internal links still work.

Types of redirects

Redirects are split into two groups: server-side redirects and client-side redirects. Each group contains a number of redirects that search engines view as either temporary or permanent. So you’ll need to use the right redirect for the task at hand to avoid potential SEO issues.

Server-side redirects

A server-side redirect is one where the server decides where to redirect the user or search engine when a page is requested. It does this by returning a 3XX HTTP status code.
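For illustration, here’s a minimal sketch of a server-side redirect as a Python WSGI app; the path mapping is hypothetical, and a real site would configure this in the web server rather than application code:

```python
# Hypothetical mapping of moved pages to their new locations.
REDIRECTS = {"/old-page": "/new-page"}

def app(environ, start_response):
    """Minimal WSGI app: answer moved paths with a 301 and a Location header."""
    path = environ.get("PATH_INFO", "/")
    if path in REDIRECTS:
        start_response("301 Moved Permanently",
                       [("Location", REDIRECTS[path])])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]
```

The key detail is the response: a 3XX status line plus a `Location` header telling the client where to go next.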

If you’re doing SEO, you’ll be using server-side redirects most of the time, as client-side redirects (we’ll discuss those shortly) have a few drawbacks and tend to be more suitable for quite specific and rare use cases.

Here are the 3XX redirects every SEO should know:

301 redirect

A 301 redirect forwards users to the new URL and tells search engines that the resource has permanently moved. When confronted with a 301 redirect, search engines typically drop the old redirected URL from their index in favor of the new URL. They also transfer PageRank (authority) to the new URL.

302 redirect

A 302 redirect forwards users to the new URL and tells search engines that the resource has temporarily moved. When confronted with a 302 redirect, search engines keep the old URL indexed even though it’s redirected. However, if you leave the 302 redirect in place for a long time, search engines will likely start treating it like a 301 redirect and index the new URL instead.


Like 301s, 302s transfer PageRank. The difference is the transfer happens “backward.” In other words, the “new” URL’s PageRank transfers backward to the old URL (unless search engines are treating it like a 301).

303 redirect

A 303 redirect forwards the user to a resource similar to the one requested and is a temporary form of redirect. It’s typically used for things like preventing form resubmissions when a user hits the “back” button in their browser. You won’t typically use 303 redirects for SEO purposes. If you do, search engines may treat them as either a 301 or 302.

307 redirect

A 307 redirect is the same as a 302 redirect, except it retains the HTTP method (POST, GET) of the original request when performing the redirect.

308 redirect

A 308 redirect is the same as a 301 redirect, except it retains the HTTP method of the original request when performing the redirect. Google says it treats 308 redirects the same as 301 redirects, but most SEOs still use 301 redirects.
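To keep the family above straight, here’s a tiny helper (a sketch covering only the codes discussed here, not the full 3XX range):

```python
PERMANENT = {301, 308}       # search engines index the new URL; PageRank transfers
TEMPORARY = {302, 303, 307}  # search engines keep the old URL indexed (at least initially)

def is_permanent_redirect(status):
    return status in PERMANENT

def is_temporary_redirect(status):
    return status in TEMPORARY
```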

Client-side redirects

A client-side redirect is one where the browser decides where to redirect the user. You generally shouldn’t use client-side redirects unless you have no other option.

307 redirect

A 307 redirect commonly occurs client-side when a site uses HSTS. This is because HSTS tells the client’s browser that the server only accepts secure (HTTPS) connections and to perform an internal 307 redirect if asked to request unsecure (HTTP) resources from the site in the future.

Meta refresh redirect

A meta refresh redirect tells the browser to redirect the user after a set number of seconds. Google understands it and will typically treat it the same as a 301 redirect. However, when asked about meta redirects with delays on Twitter, Google’s John Mueller said, “If you want it treated like a redirect, it makes sense to have it act like a redirect.”

Either way, Google doesn’t recommend using them, as they can be confusing for the user and aren’t supported by all browsers. Google recommends using a server-side 301 redirect instead.
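For reference, a meta refresh redirect is just an HTML tag in the page itself. Here’s a sketch that extracts the delay and target from a hypothetical page (the regex is simplified and won’t cover every variant of the tag):

```python
import re

# A hypothetical page fragment containing a meta refresh redirect.
html = '<meta http-equiv="refresh" content="5; url=https://example.com/new-page">'

# Pull out the delay (in seconds) and the target URL.
match = re.search(
    r'http-equiv=["\']refresh["\'][^>]*content=["\'](\d+);\s*url=([^"\']+)',
    html, re.IGNORECASE)
delay, target = int(match.group(1)), match.group(2)
# delay == 5, target == "https://example.com/new-page"
```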

JavaScript redirect

A JavaScript redirect, as you probably guessed, uses JavaScript to instruct the browser to redirect the user to a different URL. Some people believe a JS redirect causes issues for search engines because they have to render the page to see the redirect. Although this is true, it’s not usually an issue for Google because it renders pages so fast these days. (Though, there could still be issues with other search engines.) All in all, it’s still better to use a 3XX redirect where possible, but a JS redirect is typically fine if that’s your only option.

Best practices for redirects

Redirects can get complicated. To help you along, here are a few best practices to keep in mind if you’re involved in SEO.

Redirect HTTP to HTTPS

Everyone should be using HTTPS at this stage. It gives your site an extra layer of security, and it’s a small Google ranking factor.


There are a couple of ways to check that your site is properly redirecting from HTTP to HTTPS. The first is to install and activate Ahrefs’ SEO Toolbar, then try to navigate to the HTTP version of your homepage. It should redirect, and you should see a 301 response code on the toolbar.

The problem with this method is you may see a 307 if your site uses HSTS. So here’s another method:

  1. Go to Ahrefs’ Site Audit
  2. Click + New Project
  3. Click Add manually
  4. Change the Scope to HTTP
  5. Enter your domain

You should see the “Not crawlable” error for both the www and non-www versions of your homepage, along with the “301 moved permanently” notification.

Checking for redirects in Ahrefs' Site Audit

If there isn’t a redirect in place or you’re using a type of redirect other than 301 or 308, it’s probably worth asking your developer to switch to 301.

TIP

Whichever method you use, it’s worth repeating it for a few pages so that you can be confident proper redirects are in place across your site.

Use HSTS (to create 307 redirects)

Implementing HSTS (HTTP Strict Transport Security) on your server stops people from accessing non-secure (HTTP) content on your site. It does this by telling browsers that your server only accepts secure connections and that they should do an internal 307 redirect to the HTTPS version of any HTTP resource they’re asked to access.

This isn’t a substitute for 301 or 302 redirects, and it’s not strictly necessary if those are properly set up on your site. However, we argue that it’s best practice these days—even if just to speed things up a bit for users.
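For reference, an HSTS response header typically looks like the following; the max-age shown (one year, in seconds) and the optional directives are common choices, not requirements:

```python
# One year in seconds; includeSubDomains and preload are optional directives.
HSTS_HEADER = ("Strict-Transport-Security",
               "max-age=31536000; includeSubDomains; preload")
```

Your server would attach this header to every HTTPS response; browsers then remember to upgrade future HTTP requests themselves.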

Learn more: Strict-Transport-Security — Mozilla

TIP

After implementing HSTS, consider submitting your site to the HSTS preload list. This enables HSTS for everyone trying to visit your website—even if they haven’t visited it before.

Avoid meta refresh redirects

Meta refresh redirects aren’t ideal, so it’s worth checking your site for these and replacing them with either a 301 or 302 redirect. You can do this easily enough with a free Ahrefs Webmaster Tools account. Just crawl your site with Site Audit and look for the “meta refresh redirect” error.

If you then click the error and hit “View affected URLs,” you’ll see the URLs with meta refresh redirects.

Redirect deleted pages to relevant working alternatives (where possible)

Redirecting URLs makes sense when you move content, but it also often makes sense to redirect when you delete content. This is because seeing a “404 not found” error isn’t ideal when a user tries to access a deleted page. It’s often more user friendly to redirect them to a relevant working alternative.

For example, we recently revamped our blog category pages. During the process, we deleted a few categories, including “Outreach & Content Promotion.” Rather than leave this as a 404, we redirected it to our “Link Building” category, as it’s a closely related working alternative.

You can’t do this every time, as there isn’t always a relevant alternative. But if there is, doing so also has the benefit of preserving and transferring PageRank (authority) from the redirected page to the alternative resource.

Most sites will already have some dead or deleted pages that return a 404 status code. To find these, sign up for a free Ahrefs Webmaster Tools account, crawl your site with Site Audit, go to the Internal pages report, then look for the “4XX page” error:


TIP

Enable “backlinks” as a source when setting up your crawl. This will allow Site Audit to find deleted pages with backlinks, even if there are no internal links to the pages on your site.

Crawl sources in Ahrefs' Site Audit

To see the affected pages, click the error and hit “View affected URLs.” If you see a lot of URLs, click the “Manage columns” button, add the “Referring domains” column, then sort by referring domains in descending order. You can then tackle the 404s with the most backlinks first.

404s with backlinks in Ahrefs' Site Audit

Avoid long redirect chains

Redirect chains are when multiple redirects take place between a requested resource and its final destination.

What a redirect chain looks like

Google’s official documentation says that it follows up to 10 redirect hops, so any redirect chains shorter than that aren’t really a problem for SEO.

Googlebot follows up to 10 redirect hops. If the crawler doesn’t receive content within 10 hops, Search Console will show a redirect error in the site’s Index Coverage report.

However, long chains still slow things down for users, so it’s best to avoid them if possible.
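The hop-counting behavior described above can be sketched in a few lines; the redirect map here is hypothetical:

```python
def follow_redirects(url, redirects, max_hops=10):
    """Follow a chain of redirects, giving up after max_hops (as Googlebot does)."""
    hops = 0
    while url in redirects:
        if hops >= max_hops:
            raise RuntimeError("redirect error: more than %d hops" % max_hops)
        url = redirects[url]
        hops += 1
    return url, hops

# Hypothetical chain: /a -> /b -> /c -> /final (3 hops, well within the limit).
chain = {"/a": "/b", "/b": "/c", "/c": "/final"}
final, hops = follow_redirects("/a", chain)  # ("/final", 3)
```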

You can find long redirect chains for free using Ahrefs Webmaster Tools:

  1. Crawl your site with Site Audit
  2. Go to the Redirects report
  3. Click the Issues tab
  4. Look for the “Redirect chain too long” error

Click the issue and hit “View affected URLs” to see URLs that begin a redirect chain and all the URLs in the chain.

Redirect chain URLs in Ahrefs' Site Audit

Avoid redirect loops

Redirect loops are infinite loops of redirects that occur when a URL redirects to itself or when a URL in a redirect chain redirects back to a URL earlier in the chain.
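Loop detection is the classic “have we seen this URL before?” check; here’s a sketch against a hypothetical redirect map:

```python
def find_redirect_loop(start, redirects):
    """Return the first URL revisited while following a redirect chain, or None."""
    seen = set()
    url = start
    while url in redirects:
        if url in seen:
            return url  # we've been here before: this is where the loop closes
        seen.add(url)
        url = redirects[url]
    return None

# Hypothetical loop: /a -> /b -> /c -> /a
looping = {"/a": "/b", "/b": "/c", "/c": "/a"}
find_redirect_loop("/a", looping)  # "/a"
```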

What a redirect loop looks like

They’re problematic for two reasons:

  • For users – They cut off access to an intended resource and trigger a “too many redirects” error in the browser.
  • For search engines – They “trap” crawlers and waste the crawl budget.

The simplest way to find redirect loops is to crawl your site with a tool like Ahrefs’ Site Audit. You can do this for free with an Ahrefs Webmaster Tools account.

  1. Crawl your site with Site Audit
  2. Go to the Redirects report
  3. Click the Issues tab
  4. Look for the “Redirect loop” error

If you then click the error and hit “View affected URLs,” you’ll see a list of URLs that redirect, as well as all URLs in the chain:

Redirect chain URLs in Ahrefs' Site Audit

The best way to fix a redirect loop depends on whether the last URL in the chain (before the loop) is the intended final destination.

If it is, remove the redirect from the final URL. Then make sure the resource is accessible and returns a 200 status code.

How to fix a redirect loop when the final URL is the intended final destination

If it isn’t, change the looping redirect to the intended final destination.

How to fix a redirect loop when the final URL isn't the intended final destination

In both cases, it’s good practice to swap out any internal links to remaining redirects for direct links to the final URL.

Final thoughts

Redirects for SEO are pretty straightforward. You’ll be using server-side 301 and 302 redirects most of the time, depending on whether the redirect is permanent or temporary. However, there are some nuances to the way Google treats 301s and 302s, so it’s worth reading up on how Google handles each if you’re facing issues.

Got questions? Ping me on Twitter.






Are Contextual Links A Google Ranking Factor?
Inbound links are a ranking signal that can vary greatly in terms of how they’re weighted by Google.

One of the key attributes that experts say can separate a high value link from a low value link is the context in which it appears.

When a link is placed within relevant content, it’s thought to have a greater impact on rankings than a link randomly inserted within unrelated text.

Is there any merit to that claim?

Let’s dive deeper into what has been said about contextual links as a ranking factor to see whether there’s any evidence to support those claims.

The Claim: Contextual Links Are A Ranking Factor

A “contextual link” refers to an inbound link pointing to a URL that’s relevant to the content in which the link appears.

When an article links to a source to provide additional context for the reader, for example, that’s a contextual link.

Contextual links add value rather than being a distraction.

They should flow naturally with the content, giving the reader some clues about the page they’re being directed to.

Not to be confused with anchor text, which refers to the clickable part of a link, a contextual link is defined by the surrounding text.

A link’s anchor text could be related to the webpage it’s pointing to, but if it’s surrounded by content that’s otherwise irrelevant then it doesn’t qualify as a contextual link.

Contextual links are said to be a Google ranking factor, with claims that they’re weighted higher by the search engine than other types of links.

One of the reasons why Google might care about context when it comes to links is because of the experience it creates for users.


When a user clicks a link and lands on a page related to what they were previously looking at, it’s a better experience than getting directed to a webpage they aren’t interested in.

Modern guides to link building all recommend getting links from relevant URLs, as opposed to going out and placing links anywhere that will take them.

There’s now a greater emphasis on quality over quantity when it comes to link building, and a link is considered higher quality when its placement makes sense in context.

One high quality contextual link can, in theory, be worth more than multiple lower quality links.

That’s why experts advise site owners to gain at least a few contextual links, as that will get them further than building dozens of random links.

If Google weights the quality of links higher or lower based on context, it would mean Google’s crawlers can understand webpages and assess how closely they relate to other URLs on the web.

Is there any evidence to support this?

The Evidence For Contextual Links As A Ranking Factor

Evidence in support of contextual links as a ranking factor can be traced back to 2012 with the launch of the Penguin algorithm update.

Google’s original algorithm, PageRank, was built entirely on links. The more links pointing to a website, the more authority it was considered to have.

Site owners could catapult their sites to the top of Google’s search results by building as many links as possible. It didn’t matter whether the links were contextual or arbitrary.

Google’s PageRank algorithm wasn’t as selective about which links it valued (or devalued) over others until it was augmented with the Penguin update.


Penguin brought a number of changes to Google’s algorithm that made it more difficult to manipulate search rankings through spammy link building practices.

In Google’s announcement of the launch of Penguin, former search engineer Matt Cutts highlighted a specific example of the link spam it was designed to target.

This example depicts the exact opposite of a contextual link, with Cutts saying:

“Here’s an example of a site with unusual linking patterns that is also affected by this change. Notice that if you try to read the text aloud you’ll discover that the outgoing links are completely unrelated to the actual content, and in fact, the page text has been “spun” beyond recognition.”

A contextual link, on the other hand, looks like the one a few paragraphs above linking to Google’s blog post.

Links with context share the following characteristics:

  • Placement fits in naturally with the content.
  • Linked URL is relevant to the article.
  • Reader knows where they’re going when they click on it.

All of the documentation Google has published about Penguin over the years is the strongest evidence available in support of contextual links as a ranking factor.

See: A Complete Guide to the Google Penguin Algorithm Update

Google will never outright say “contextual link building is a ranking factor,” however, because the company discourages any deliberate link building at all.

As Cutts adds at the end of his Penguin announcement, Google would prefer to see webpages acquire links organically:

“We want people doing white hat search engine optimization (or even no search engine optimization at all) to be free to focus on creating amazing, compelling web sites.”

Contextual Links Are A Ranking Factor: Our Verdict


Contextual links are probably a Google ranking factor.

A link is weighted higher when it’s used in context than if it’s randomly placed within unrelated content.

But that doesn’t necessarily mean links without context will negatively impact a site’s rankings.

External links are largely outside a site owner’s control.

If a website links to you out of context it’s not a cause for concern, because Google is capable of ignoring low value links.

On the other hand, if Google detects a pattern of unnatural links, then that could count against a site’s rankings.

If you have actively engaged in non-contextual link building in the past, it may be wise to consider using the disavow tool.


Featured Image: Paulo Bobita/Search Engine Journal






Latent Semantic Indexing (LSI): Is It A Google Ranking Factor?


Latent semantic indexing (LSI) is an indexing and information retrieval method used to identify patterns in the relationships between terms and concepts.

With LSI, a mathematical technique is used to find semantically related terms within a collection of text (an index) where those relationships might otherwise be hidden (or latent).

And in that context, this sounds like it could be super important for SEO.

Right?

After all, Google is a massive index of information, and we’re hearing all kinds of things about semantic search and the importance of relevance in the search ranking algorithm.

If you’ve heard rumblings about latent semantic indexing in SEO or been advised to use LSI keywords, you aren’t alone.

But will LSI actually help improve your search rankings? Let’s take a look.

The Claim: Latent Semantic Indexing As A Ranking Factor

The claim is simple: Optimizing web content using LSI keywords helps Google better understand it and you’ll be rewarded with higher rankings.

Backlinko defines LSI keywords in this way:

“LSI (Latent Semantic Indexing) Keywords are conceptually related terms that search engines use to deeply understand content on a webpage.”

By using contextually related terms, you can deepen Google’s understanding of your content. Or so the story goes.

That resource goes on to make some pretty compelling arguments for LSI keywords:

  • “Google relies on LSI keywords to understand content at such a deep level.”
  • “LSI Keywords are NOT synonyms. Instead, they’re terms that are closely tied to your target keyword.”
  • “Google doesn’t ONLY bold terms that exactly match what you just searched for (in search results). They also bold words and phrases that are similar. Needless to say, these are LSI keywords that you want to sprinkle into your content.”

Does this practice of “sprinkling” terms closely related to your target keyword help improve your rankings via LSI?

The Evidence For LSI As A Ranking Factor

Relevance is identified as one of five key factors that help Google determine which result is the best answer for any given query.


As Google explains in its How Search Works resource:

“To return relevant results for your query, we first need to establish what information you’re looking for – the intent behind your query.”

Once intent has been established:

“…algorithms analyze the content of webpages to assess whether the page contains information that might be relevant to what you are looking for.”

Google goes on to explain that the “most basic signal” of relevance is that the keywords used in the search query appear on the page. That makes sense – if you aren’t using the keywords the searcher is looking for, how could Google tell you’re the best answer?

Now, this is where some believe LSI comes into play.

If using keywords is a signal of relevance, using just the right keywords must be a stronger signal.

There are purpose-built tools dedicated to helping you find these LSI keywords, and believers in this tactic recommend using all kinds of other keyword research tactics to identify them as well.

The Evidence Against LSI As A Ranking Factor

Google’s John Mueller has been crystal clear on this one:

“…we have no concept of LSI keywords. So that’s something you can completely ignore.”

There’s a healthy skepticism in SEO that Google may say things to lead us astray in order to protect the integrity of the algorithm. So let’s dig in here.

First, it’s important to understand what LSI is and where it came from.

Latent semantic structure emerged as a methodology for retrieving textual objects from files stored in a computer system in the late 1980s. As such, it’s an example of one of the earlier information retrieval (IR) concepts available to programmers.

As computer storage capacity improved and electronically available sets of data grew in size, it became more difficult to locate exactly what one was looking for in that collection.

Researchers described the problem they were trying to solve in a patent application filed September 15, 1988:

“Most systems still require a user or provider of information to specify explicit relationships and links between data objects or text objects, thereby making the systems tedious to use or to apply to large, heterogeneous computer information files whose content may be unfamiliar to the user.”


Keyword matching was being used in IR at the time, but its limitations were evident long before Google came along.

Too often, the words a person used to search for the information they sought were not exact matches for the words used in the indexed information.

There are two reasons for this:

  • Synonymy: the diverse range of words used to describe a single object or idea results in relevant results being missed.
  • Polysemy: the different meanings of a single word result in irrelevant results being retrieved.

These are still issues today, and you can imagine what a massive headache it is for Google.

However, the methodologies and technology Google uses to solve for relevance long ago moved on from LSI.

What LSI did was automatically create a “semantic space” for information retrieval.

As the patent explains, LSI treated this unreliability of association data as a statistical problem.

Without getting too into the weeds, these researchers essentially believed that there was a hidden underlying latent semantic structure they could tease out of word usage data.

Doing so would reveal the latent meaning and enable the system to bring back more relevant results – and only the most relevant results – even if there’s no exact keyword match.

Here’s what that LSI process actually looks like:

Image created by author, January 2022

And here’s the most important thing you should note about the above illustration of this methodology from the patent application: there are two separate processes happening.

First, the collection or index undergoes Latent Semantic Analysis.

Second, the query is analyzed and the already-processed index is then searched for similarities.

And that’s where the fundamental problem with LSI as a Google search ranking signal lies.

Google’s index is massive at hundreds of billions of pages, and it’s growing constantly.


Each time a user inputs a query, Google is sorting through its index in a fraction of a second to find the best answer.

Using the above methodology in the algorithm would require that Google:

  1. Recreate that semantic space using LSA across its entire index.
  2. Analyze the semantic meaning of the query.
  3. Find all similarities between the semantic meaning of the query and documents in the semantic space created from analyzing the entire index.
  4. Sort and rank those results.

That’s a gross oversimplification, but the point is that this isn’t a scalable process.

This would be super useful for small collections of information. It was helpful for surfacing relevant reports inside a company’s computerized archive of technical documentation, for example.

The patent application illustrates how LSI works using a collection of nine documents. That’s what it was designed to do. LSI is primitive in terms of computerized information retrieval.
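Under the hood, LSA/LSI builds that semantic space with a truncated singular value decomposition (SVD) of a term-document matrix. A toy sketch (the matrix, counts, and term labels are made up for illustration, and this assumes NumPy is available):

```python
import numpy as np

# Toy term-document matrix: rows are terms, columns are documents.
A = np.array([
    [2.0, 0.0, 1.0, 0.0],  # "laptop"
    [1.0, 1.0, 0.0, 0.0],  # "deal"
    [0.0, 2.0, 0.0, 3.0],  # "redirect"
])

# Latent Semantic Analysis: factor A, then keep only the top-k dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k "semantic space"

# Each document is now a k-dimensional vector; a query would be folded into
# the same space and compared against these (e.g., by cosine similarity).
doc_vectors = (np.diag(s[:k]) @ Vt[:k, :]).T
```

Note the two separate steps the article describes: the index is analyzed once (the SVD), and each query is then mapped into the already-built space. Redoing the first step over hundreds of billions of pages is exactly the scalability problem discussed below.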

Latent Semantic Indexing As A Ranking Factor: Our Verdict

Latent Semantic Indexing (LSI): Is It A Google Ranking Factor?

While the underlying principles of eliminating noise by determining semantic relevance have surely informed developments in search ranking since LSA/LSI was patented, LSI itself has no useful application in SEO today.

It hasn’t been ruled out completely, but there is no evidence that Google has ever used LSI to rank results. And Google definitely isn’t using LSI or LSI keywords today to rank search results.

Those who recommend using LSI keywords are latching on to a concept they don’t quite understand in an effort to explain why the ways in which words are related (or not) is important in SEO.

Relevance and intent are foundational considerations in Google’s search ranking algorithm.

Those are two of the big questions they’re trying to solve for in surfacing the best answer for any query.

Synonymy and polysemy are still major challenges.

Semantics – that is, our understanding of the various meanings of words and how they’re related – is essential in producing more relevant search results.

But LSI has nothing to do with that.


Featured Image: Paulo Bobita/Search Engine Journal






What Is A Google Broad Core Algorithm Update?


When Google announces a broad core algorithm update, many SEO professionals find themselves asking what exactly changed (besides their rankings).

Google’s acknowledgment of core updates is always vague and doesn’t provide much detail other than to say the update occurred.

The SEO community is typically notified about core updates via the same standard tweets from Google’s Search Liaison.

There’s one announcement from Google when the update begins rolling out, and one on its conclusion, with few additional details in between (if any).

This invariably leaves SEO professionals and site owners asking many questions with respect to how their rankings were impacted by the core update.

To gain insight into what may have caused a site’s rankings to go up, down, or stay the same, it helps to understand what a broad core update is and how it differs from other types of algorithm updates.

After reading this article you’ll have a better idea of what a core update is designed to do, and how to recover from one if your rankings were impacted.

So, What Exactly Is A Core Update?

First, let me get the obligatory “Google makes hundreds of algorithm changes per year, often more than one per day” boilerplate out of the way.

Many of the named updates we hear about (Penguin, Panda, Pigeon, Fred, etc.) are implemented to address specific faults or issues in Google’s algorithms.

In the case of Penguin, it was link spam; in the case of Pigeon, it was local SEO spam.

They all had a specific purpose.

In these cases, Google (sometimes reluctantly) informed us what they were trying to accomplish or prevent with the algorithm update, and we were able to go back and remedy our sites.

A core update is different.

The way I understand it, a core update is a tweak or change to the main search algorithm itself.

You know, the one that has between 200 and 500 ranking factors and signals (depending on which SEO blog you’re reading today).


What a core update means to me is that Google slightly tweaked the importance, order, weights, or values of these signals.

Because of that, they can’t come right out and tell us what changed without revealing the secret sauce.

The simplest way to visualize this would be to imagine 200 factors listed in order of importance.

Now imagine Google changing the order of 42 of those 200 factors.

Rankings would change, but it would be a combination of many things, not due to one specific factor or cause.

Obviously, it isn’t that simple, but that’s a good way to think about a core update.

Here’s a purely made up, slightly more complicated example of what Google wouldn’t tell us:

“In this core update, we increased the value of keywords in H1 tags by 2%, increased the value of HTTPS by 18%, decreased the value of keyword in title tag by 9%, changed the D value in our PageRank calculation from .85 to .70, and started using a TF-iDUF retrieval method for logged in users instead of the traditional TF-PDF method.”

(I swear these are real things. I just have no idea if they’re real things used by Google.)
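For context, the “D value” in that made-up example is PageRank’s damping factor: the probability that a random surfer follows a link rather than jumping to a random page. A toy power-iteration sketch on a hypothetical three-page graph shows how changing it shifts scores:

```python
def pagerank(links, d=0.85, iters=50):
    """Toy PageRank by power iteration. links maps each page to the pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        rank = {
            p: (1 - d) / n + d * sum(
                rank[q] / len(links[q]) for q in pages if p in links[q]
            )
            for p in pages
        }
    return rank

# Hypothetical graph: a links to b and c, b links to c, c links to a.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
scores = pagerank(graph)           # c ends up with the highest score
lower_d = pagerank(graph, d=0.70)  # a smaller damping factor flattens the spread
```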

For starters, many SEO pros wouldn’t understand it.

Basically, it means Google may have changed the way they calculate term importance on a page, or the weighting of links in PageRank, or both, or a whole bunch of other factors they can’t talk about (without giving away the algorithm).
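To make "term importance" concrete, here is classic TF-IDF, the textbook scheme those retrieval acronyms riff on. This is the standard formula, not a claim about what Google actually runs:

```python
import math

def tf_idf(term, doc, corpus):
    # Term frequency: how often the term appears in this document.
    tf = doc.count(term) / len(doc)
    # Inverse document frequency: rarer terms across the corpus score higher.
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf

# A tiny made-up corpus of tokenized documents.
corpus = [
    "seo guide for core updates".split(),
    "black friday laptop deals".split(),
    "core web vitals guide".split(),
]
doc = corpus[0]
print(tf_idf("core", doc, corpus))     # appears in 2 of 3 docs: modest weight
print(tf_idf("updates", doc, corpus))  # appears in 1 of 3 docs: higher weight
```

Quietly adjusting a formula like this would shift which pages look most "about" a query, and no individual site owner would be able to see why.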

Put simply: Google changed the weight and importance of many ranking factors.

That’s the simple explanation.

At its most complex, Google ran a new training set through their machine learning ranking model, quality raters picked the new set of results as more relevant than the previous one, and the engineers have no idea which weights changed or how they changed, because that’s just how machine learning works.

(We all know Google uses quality raters to rate search results. These ratings are how they choose one algorithm change over another – not how they rate your site. Whether they feed this into machine learning is anybody’s guess. But it’s one possibility.)

It’s likely some random combination of weighting delivered more relevant results for the quality raters, so they tested it more, the test results confirmed it, and they pushed it live.

How Can You Recover From A Core Update?

Unlike a major named update that targeted specific things, a core update may tweak the values of everything.

Because websites are weighted against the other websites relevant to your query (engineers call this set a corpus), the reason your site dropped could be entirely different from the reason somebody else’s site rose or fell in the rankings.

To put it simply, Google isn’t telling you how to “recover” because it’s likely a different answer for every website and query.

It all depends on what everybody else trying to rank for your query is doing.

Does every one of them but you have their keyword in the H1 tag? If so, that could be a contributing factor.

Do you all do that already? Then that probably carries less weight for that corpus of results.

It’s very likely that this algorithm update didn’t “penalize” you for something at all. It most likely just rewarded another site more for something else.

Maybe you were killing it with internal anchor text and they were doing a great job of formatting content to match user intent – and Google shifted the weights so that content formatting was slightly higher and internal anchor text was slightly lower.

(Again, hypothetical examples here.)

In reality, it was probably several minor tweaks that, when combined, tipped the scales slightly in favor of one site or another (think of our reordered list here).

Finding that “something else” that is helping your competitors isn’t easy – but it’s what keeps SEO professionals in the business.

Next Steps And Action Items

Rankings are down after a core update – now what?

Your next step is to gather intel on the pages that are ranking where your site used to be.

Conduct a SERP analysis to find positive correlations between pages that are ranking higher for queries where your site is now lower.

Try not to overanalyze the technical details, such as how fast each page loads or what their core web vitals scores are.

Pay attention to the content itself. As you go through it, ask yourself questions like:

  • Does it provide a better answer to the query than your article?
  • Does the content contain more recent data and current stats than yours?
  • Are there pictures and videos that help bring the content to life for the reader?

Google aims to serve content that provides the best and most complete answers to searchers’ queries. Relevance is the one ranking factor that will always win out over all others.

Take an honest look at your content to see if it’s as relevant today as it was prior to the core algorithm update.

From there you’ll have an idea of what needs improvement.

The best advice for conquering core updates?

Keep focusing on:

  • User intent.
  • Quality content.
  • Clean architecture.
  • Google’s guidelines.

Finally, don’t stop improving your site once you reach Position 1, because the site in Position 2 isn’t going to stop.

Yeah, I know, it’s not the answer anybody wants and it sounds like Google propaganda. I swear it’s not.

It’s just the reality of what a core update is.

Nobody said SEO was easy.

Featured Image: Ulvur/Shutterstock