
Redirects for SEO: A Simple (But Complete) Guide


Most website owners have to deal with redirects at one point or another. Redirects help keep things accessible for users and search engines when you rebrand, merge multiple websites, delete a page, or simply move a page to a new location.

However, the world of redirects is a murky one, as different types of redirects exist for different scenarios. So it’s important to understand the differences between them.

What are redirects?

Redirects are a way to forward users (and bots) to a URL other than the one they requested.

Why should you use redirects?

There are two reasons why you should use redirects when moving content:

  • Better user experience for visitors – You don’t want visitors to get hit with a “page not found” warning when they’re trying to access a page that’s moved. Redirects solve this problem by seamlessly sending visitors to the content’s new location.
  • Help search engines understand your site – Redirects tell search engines where content has moved and whether the move is permanent or temporary. This affects if and how the pages appear in their search results.

When should you use redirects?

You should use redirects when you move content from one URL to another and, occasionally, when you delete content. Let’s take a quick look at a few common scenarios where you’ll want to use them.

When moving domains

If you’re rebranding and moving from one domain to another, you’ll need to permanently redirect all the pages on the old domain to their locations on the new domain.

When merging websites

If you’re merging multiple websites into one, you’ll need to permanently redirect old URLs to new URLs.

When switching to HTTPS

If you’re switching from HTTP to HTTPS (strongly recommended), you’ll need to permanently redirect every insecure (HTTP) page and resource to its secure (HTTPS) location.

When running a promotion

If you’re running a temporary promotion and want to send visitors from, say, domain.com/laptops to domain.com/laptops-black-friday-deals, you’ll need to use a temporary redirect.

When deleting pages

If you’re removing content from your site, you should permanently redirect its URL to a relevant, similar page where possible. This helps to ensure that any backlinks to the old page still count for SEO purposes. It also ensures that any bookmarks or internal links still work.

Redirects are split into two groups: server-side redirects and client-side redirects. Each group contains a number of redirects that search engines view as either temporary or permanent. So you’ll need to use the right redirect for the task at hand to avoid potential SEO issues.

Server-side redirects

A server-side redirect is one where the server decides where to redirect the user or search engine when a page is requested. It does this by returning a 3XX HTTP status code.

If you’re doing SEO, you’ll be using server-side redirects most of the time, as client-side redirects (we’ll discuss those shortly) have a few drawbacks and tend to be more suitable for quite specific and rare use cases.

Here are the 3XX redirects every SEO should know:

301 redirect

A 301 redirect forwards users to the new URL and tells search engines that the resource has permanently moved. When confronted with a 301 redirect, search engines typically drop the old redirected URL from their index in favor of the new URL. They also transfer PageRank (authority) to the new URL.
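To make this concrete, here’s a minimal sketch of a server-side 301 using Python’s standard library. It’s for illustration only (real sites would normally configure redirects in the web server or CMS rather than hand-roll them), and the paths are hypothetical. Swapping the status code to 302 would make the same move temporary:

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of moved URLs, purely for illustration.
REDIRECTS = {"/old-page": "/new-page"}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REDIRECTS:
            self.send_response(301)  # permanent move; use 302 for a temporary one
            self.send_header("Location", REDIRECTS[self.path])
            self.end_headers()
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"OK")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RedirectHandler).serve_forever()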

302 redirect

A 302 redirect forwards users to the new URL and tells search engines that the resource has temporarily moved. When confronted with a 302 redirect, search engines keep the old URL indexed even though it’s redirected. However, if you leave the 302 redirect in place for a long time, search engines will likely start treating it like a 301 redirect and index the new URL instead.

Like 301s, 302s transfer PageRank. The difference is the transfer happens “backward.” In other words, the “new” URL’s PageRank transfers backward to the old URL (unless search engines are treating it like a 301).

303 redirect

A 303 redirect forwards the user to a resource similar to the one requested and is a temporary form of redirect. It’s typically used for things like preventing form resubmissions when a user hits the “back” button in their browser. You won’t typically use 303 redirects for SEO purposes. If you do, search engines may treat them as either a 301 or 302.
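As an illustration of the form-resubmission use case, here’s a minimal sketch using Python’s standard library. The /thanks path is hypothetical, and a real application would of course do something with the submitted data:

from http.server import BaseHTTPRequestHandler, HTTPServer

class FormHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read (and here, simply discard) the submitted form body.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        # 303 sends the browser on to a GET of the confirmation page,
        # so refreshing or going back won't resubmit the form.
        self.send_response(303)
        self.send_header("Location", "/thanks")
        self.end_headers()

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Thanks! Your form was submitted.")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), FormHandler).serve_forever()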

307 redirect

A 307 redirect is the same as a 302 redirect, except it retains the HTTP method (POST, GET) of the original request when performing the redirect.

308 redirect

A 308 redirect is the same as a 301 redirect, except it retains the HTTP method of the original request when performing the redirect. Google says it treats 308 redirects the same as 301 redirects, but most SEOs still use 301 redirects.

Client-side redirects

A client-side redirect is one where the browser, rather than the server, performs the redirect based on instructions in the page it receives. You generally shouldn’t use client-side redirects unless you have no other option.

307 redirect

A 307 redirect commonly occurs client-side when a site uses HSTS. This is because HSTS tells the client’s browser that the server only accepts secure (HTTPS) connections and to perform an internal 307 redirect if asked to request insecure (HTTP) resources from the site in the future.

Meta refresh redirect

A meta refresh redirect tells the browser to redirect the user after a set number of seconds. Google understands it and will typically treat it the same as a 301 redirect. However, when asked about meta redirects with delays on Twitter, Google’s John Mueller said, “If you want it treated like a redirect, it makes sense to have it act like a redirect.”

Either way, Google doesn’t recommend using them, as they can be confusing for the user and aren’t supported by all browsers. Google recommends using a server-side 301 redirect instead.
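For reference, a meta refresh redirect is just an HTML tag in the page’s head. Here’s a rough sketch (Python standard library only, with made-up sample HTML) that shows what the tag looks like and flags it:

import re

# Sample page containing a meta refresh redirect with a 5-second delay.
html = """
<html><head>
  <meta http-equiv="refresh" content="5; url=https://example.com/new-page/">
</head><body>Redirecting...</body></html>
"""

META_REFRESH = re.compile(
    r'<meta[^>]+http-equiv=["\']?refresh["\']?[^>]*>', re.IGNORECASE
)

for tag in META_REFRESH.findall(html):
    print("Found meta refresh:", tag)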

JavaScript redirect

A JavaScript redirect, as you probably guessed, uses JavaScript to instruct the browser to redirect the user to a different URL. Some people believe a JS redirect causes issues for search engines because they have to render the page to see the redirect. Although this is true, it’s not usually an issue for Google because it renders pages quickly these days (though there could still be issues with other search engines). All in all, it’s still better to use a 3XX redirect where possible, but a JS redirect is typically fine if that’s your only option.
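For illustration, a JavaScript redirect is usually just a line of script that sets the location. Here’s a rough sketch in Python (the sample HTML is made up) that spots the most common pattern in a page’s source; real-world detection is messier than this:

import re

# Sample HTML containing a simple JavaScript redirect, for illustration only.
html = '<script>window.location.href = "https://example.com/new-page/";</script>'

# Matches location = "...", location.href = "...", window.location.href = "..."
pattern = re.compile(r'(?:window\.)?location(?:\.href)?\s*=\s*["\']([^"\']+)["\']')

match = pattern.search(html)
if match:
    print("JavaScript redirect to:", match.group(1))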

Best practices for redirects

Redirects can get complicated. To help you along, here are a few best practices to keep in mind if you’re involved in SEO.

Redirect HTTP to HTTPS

Everyone should be using HTTPS at this stage. It gives your site an extra layer of security, and it’s a small Google ranking factor.

There are a couple of ways to check that your site is properly redirecting from HTTP to HTTPS. The first is to install and activate Ahrefs’ SEO Toolbar, then try to navigate to the HTTP version of your homepage. It should redirect, and you should see a 301 response code on the toolbar.

The problem with this method is you may see a 307 if your site uses HSTS. So here’s another method:

  1. Go to Ahrefs’ Site Audit
  2. Click + New Project
  3. Click Add manually
  4. Change the Scope to HTTP
  5. Enter your domain

You should see the “Not crawlable” error for both the www and non-www versions of your homepage, along with the “301 moved permanently” notification.

Checking for redirects in Ahrefs' Site Audit

If there isn’t a redirect in place or you’re using a type of redirect other than 301 or 308, it’s probably worth asking your developer to switch to 301.

TIP

Whichever method you use, it’s worth repeating it for a few pages so that you can be confident proper redirects are in place across your site.
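If you’d rather spot-check a few URLs from the command line, here’s a minimal sketch using Python and the third-party requests library (the URLs below are placeholders for pages on your own site):

import requests

# Hypothetical pages to spot-check; swap in URLs from your own site.
pages = ["http://example.com/", "http://example.com/blog/"]

for url in pages:
    # Don't follow the redirect, so we can see the status code the server returns.
    response = requests.get(url, allow_redirects=False, timeout=10)
    location = response.headers.get("Location", "-")
    print(url, "->", response.status_code, location)
    # Expect a 301 (or 308) pointing at the https:// version of the same URL.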

Use HSTS (to create 307 redirects)

Implementing HSTS (HTTP Strict Transport Security) on your server stops people from accessing non-secure (HTTP) content on your site. It does this by telling browsers that your server only accepts secure connections and that they should do an internal 307 redirect to the HTTPS version of any HTTP resource they’re asked to access.

This isn’t a substitute for 301 or 302 redirects, and it’s not strictly necessary if those are properly set up on your site. However, we argue that it’s best practice these days—even if just to speed things up a bit for users.
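For illustration, HSTS is just a response header (Strict-Transport-Security). The sketch below shows it being set with Python’s standard library; in practice you’d configure it in your web server or CDN, and the max-age value here (one year) is only an example:

from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Tells browsers to use HTTPS for this host for the next year,
        # including subdomains. Browsers only honor this header over HTTPS.
        self.send_header(
            "Strict-Transport-Security", "max-age=31536000; includeSubDomains"
        )
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"Hello over HTTPS")

if __name__ == "__main__":
    HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()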

Learn more: Strict-Transport-Security — Mozilla

TIP

After implementing HSTS, consider submitting your site to the HSTS preload list. This enables HSTS for everyone trying to visit your website—even if they haven’t visited it before.

Avoid meta refresh redirects

Meta refresh redirects aren’t ideal, so it’s worth checking your site for these and replacing them with either a 301 or 302 redirect. You can do this easily enough with a free Ahrefs Webmaster Tools account. Just crawl your site with Site Audit and look for the “meta refresh redirect” error.

If you then click the error and hit “View affected URLs,” you’ll see the URLs with meta refresh redirects.

Redirect deleted pages to relevant working alternatives (where possible)

Redirecting URLs makes sense when you move content, but it also often makes sense to redirect when you delete content. This is because seeing a “404 not found” error isn’t ideal when a user tries to access a deleted page. It’s often more user friendly to redirect them to a relevant working alternative.

For example, we recently revamped our blog category pages. During the process, we deleted a few categories, including “Outreach & Content Promotion.” Rather than leave this as a 404, we redirected it to our “Link Building” category, as it’s a closely related working alternative.

You can’t do this every time, as there isn’t always a relevant alternative. But if there is, doing so also has the benefit of preserving and transferring PageRank (authority) from the redirected page to the alternative resource.

Most sites will already have some dead or deleted pages that return a 404 status code. To find these, sign up for a free Ahrefs Webmaster Tools account, crawl your site with Site Audit, go to the Internal pages report, then look for the “4XX page” error:

TIP

Enable “backlinks” as a source when setting up your crawl. This will allow Site Audit to find deleted pages with backlinks, even if there are no internal links to the pages on your site.

Crawl sources in Ahrefs' Site Audit

To see the affected pages, click the error and hit “View affected URLs.” If you see a lot of URLs, click the “Manage columns” button, add the “Referring domains” column, then sort by referring domains in descending order. You can then tackle the 404s with the most backlinks first.

404s with backlinks in Ahrefs' Site Audit
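If you already keep a list of old URLs (from a migration spreadsheet or server logs), you can also spot-check which ones still return a 404 with a short script. Here’s a minimal sketch using the requests library, with placeholder URLs:

import requests

# Hypothetical list of old URLs; in practice, export these from a crawl or your logs.
old_urls = [
    "https://example.com/outreach-content-promotion/",
    "https://example.com/some-deleted-page/",
]

for url in old_urls:
    status = requests.head(url, allow_redirects=True, timeout=10).status_code
    if status == 404:
        print("Still 404, needs a redirect:", url)
    else:
        print(status, url)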

Avoid long redirect chains

Redirect chains are when multiple redirects take place between a requested resource and its final destination.

What a redirect chain looks like

Google’s official documentation says that it follows up to 10 redirect hops, so any redirect chains shorter than that aren’t really a problem for SEO.

Googlebot follows up to 10 redirect hops. If the crawler doesn’t receive content within 10 hops, Search Console will show a redirect error in the site’s Index Coverage report.

However, long chains still slow things down for users, so it’s best to avoid them if possible.

You can find long redirect chains for free using Ahrefs Webmaster Tools:

  1. Crawl your site with Site Audit
  2. Go to the Redirects report
  3. Click the Issues tab
  4. Look for the “Redirect chain too long” error

Click the issue and hit “View affected URLs” to see URLs that begin a redirect chain and all the URLs in the chain.

Redirect chain URLs in Ahrefs' Site Audit
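To spot-check a single URL without a crawler, here’s a minimal sketch using the requests library that prints every hop in the chain (the starting URL is a placeholder):

import requests

# Hypothetical starting URL; replace with one of your own redirected pages.
start = "http://example.com/old-page/"

response = requests.get(start, timeout=10)  # redirects are followed by default

# response.history holds each intermediate redirect response, in order.
for hop in response.history:
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))

print(response.status_code, response.url, "(final destination)")
print("Hops in chain:", len(response.history))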

Avoid redirect loops

Redirect loops are infinite loops of redirects that occur when a URL redirects to itself or when a URL in a redirect chain redirects back to a URL earlier in the chain.

What a redirect loop looks like

They’re problematic for two reasons:

  • For users – They cut off access to an intended resource and trigger a “too many redirects” error in the browser.
  • For search engines – They “trap” crawlers and waste the crawl budget.

The simplest way to find redirect loops is to crawl your site with a tool like Ahrefs’ Site Audit. You can do this for free with an Ahrefs Webmaster Tools account.

  1. Crawl your site with Site Audit
  2. Go to the Redirects report
  3. Click the Issues tab
  4. Look for the “Redirect loop” error

If you then click the error and click “View affected URLs,” you’ll see a list of URLs that redirect, as well as all URLs in the chain:

Redirect chain URLs in Ahrefs' Site Audit

The best way to fix a redirect loop depends on whether the last URL in the chain (before the loop) is the intended final destination.

If it is, remove the redirect from the final URL. Then make sure the resource is accessible and returns a 200 status code.

How to fix a redirect loop when the final URL is the intended final destination

If it isn’t, change the looping redirect to the intended final destination.

How to fix a redirect loop when the final URL isn't the intended final destination

In both cases, it’s good practice to swap out any internal links to remaining redirects for direct links to the final URL.
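Once you’ve made a fix, a quick way to confirm a URL now resolves instead of looping is a check like this minimal sketch (requests gives up after 30 redirects by default; the URL is a placeholder):

import requests

# Hypothetical URL that previously looped; replace with one from your crawl.
url = "http://example.com/fixed-page/"

try:
    response = requests.get(url, timeout=10)
    print("Resolved to", response.url, "with status", response.status_code)
except requests.exceptions.TooManyRedirects:
    print("Still stuck in a redirect loop (or a very long chain):", url)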

Final thoughts

Redirects for SEO are pretty straightforward. You’ll be using server-side 301 and 302 redirects most of the time, depending on whether the redirect is permanent or temporary. However, there are some nuances to the way Google treats 301s and 302s, so it’s worth reading up on each if you’re facing issues.

Got questions? Ping me on Twitter.






Google CEO Confirms AI Features Coming To Search “Soon”


Google announced today that it will soon be rolling out AI-powered features in its search results, providing users with a new, more intuitive way to navigate and understand the web.

These new AI features will help users quickly understand the big picture and learn more about a topic by distilling complex information into easy-to-digest formats.

Google has a long history of using AI to improve its search results for billions of people.

The company’s latest AI technologies, such as LaMDA, PaLM, Imagen, and MusicLM, provide users with entirely new ways to engage with information.

Google is working to bring these latest advancements into its products, starting with search.

Statement From Google CEO Sundar Pichai

Sundar Pichai, CEO of Google and Alphabet, released a statement on Twitter about a conversational AI service that will be available in the coming weeks.

Bard is Google’s new conversational AI service, powered by LaMDA (Language Model for Dialogue Applications).

According to Pichai, Bard, which leverages Google’s vast intelligence and knowledge base, can deliver accurate and high-quality answers:

“In 2021, we shared next-gen language + conversation capabilities powered by our Language Model for Dialogue Applications (LaMDA). Coming soon: Bard, a new experimental conversational #GoogleAI service powered by LaMDA.

Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses. Today we’re opening Bard up to trusted external testers.

We’ll combine their feedback with our own internal testing to make sure Bard’s responses meet our high bar for quality, safety, and groundedness and we will make it more widely available in coming weeks. It’s early, we will launch, iterate and make it better.”

In Summary

Increasingly, people are turning to Google for deeper insights and understanding.

With the help of AI, Google can consolidate insights for questions where there is no one correct answer, making it easier for people to get to the core of what they are searching for.

In addition to the AI features being rolled out in search, Google is also introducing a new experimental conversational AI service called Bard. Powered by LaMDA, Bard will use Google’s vast intelligence and knowledge base to deliver accurate and high-quality answers to users.

Google continues demonstrating its commitment to making search more intuitive and effective for users. As Pichai said in his statement, the company will continue to launch, iterate, and improve these new offerings in the coming weeks and months.

Source: Google





Google Updates Structured Data Guidance To Clarify Supported Formats


Google updated its structured data guidance to emphasize that all three supported structured data formats are acceptable to Google and to explain why JSON-LD is recommended.

The update is to the Supported Formats section of the Introduction to structured data markup in Google Search page on Search Central.

The most important changes were to add a new section title (Supported Formats), and to expand that section with an explanation of supported structured data formats.

Three Structured Data Formats

Google supports three structured data formats.

  1. JSON-LD
  2. Microdata
  3. RDFa

But only one of the above formats, JSON-LD, is recommended.

According to the documentation, the other two formats (Microdata and RDFa) are still fine to use. The update to the documentation explains why JSON-LD is recommended.

Google also made a minor change to the title of a preceding section to reflect that the section addresses structured data vocabulary.

The original section title, Structured data format, is now Structured data vocabulary and format.

Google added a title to the section that offers guidance on Google’s preferred structured data format.

This is also the section with the most additional text added to it.

New Supported Formats Section Title

The updated content explains why Google prefers the JSON-LD structured data format, while confirming that the other two formats are acceptable.

Previously this section contained just two sentences:

“Google Search supports structured data in the following formats, unless documented otherwise:

Google recommends using JSON-LD for structured data whenever possible.”

The updated section now has the following content:

“Google Search supports structured data in the following formats, unless documented otherwise.

In general, we recommend using a format that’s easiest for you to implement and maintain (in most cases, that’s JSON-LD); all 3 formats are equally fine for Google, as long as the markup is valid and properly implemented per the feature’s documentation.

In general, Google recommends using JSON-LD for structured data if your site’s setup allows it, as it’s the easiest solution for website owners to implement and maintain at scale (in other words, less prone to user errors).”

Structured Data Formats

JSON-LD is arguably the easiest structured data format to implement, the easiest to scale, and the most straightforward to edit.
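As a rough illustration of why JSON-LD is easy to generate and maintain, the markup is simply a data structure serialized into one script tag. Here’s a minimal sketch in Python, with made-up article values rather than Google’s own example:

import json

# Hypothetical article metadata, for illustration only.
data = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Google Updates Structured Data Guidance To Clarify Supported Formats",
    "datePublished": "2023-02-06",
}

# JSON-LD lives in one script block, separate from the visible HTML,
# which is why it's straightforward to template and edit at scale.
print('<script type="application/ld+json">')
print(json.dumps(data, indent=2))
print("</script>")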

Most, if not all, WordPress SEO and structured data plugins output JSON-LD structured data.

Nevertheless, it’s a useful update to Google’s structured data guidance in order to make it clear that all three formats are still supported.

Google’s documentation on the change can be found on the Search Central page referenced above.

Featured image by Shutterstock/Olena Zaskochenko





Ranking Factors & The Myths We Found


Yandex is the search engine with the majority of market share in Russia and the fourth-largest search engine in the world.

On January 27, 2023, it suffered what is arguably one of the largest data leaks a modern tech company has endured in many years – though it is the company’s second leak in less than a decade.

In 2015, a former Yandex employee attempted to sell Yandex’s search engine code on the black market for around $30,000.

The initial leak in January this year revealed 1,922 ranking factors, of which more than 64% were listed as unused or deprecated (superseded and best avoided).

This leak was just the file labeled kernel, but as the SEO community and I delved deeper, more files were found that combined contain approximately 17,800 ranking factors.

When it comes to practicing SEO for Yandex, the guide I wrote two years ago, for the most part, still applies.

Yandex, like Google, has always been public with its algorithm updates and changes, and in recent years, how it has adopted machine learning.

Notable updates from the past two to three years include:

  • Vega (which doubled the size of the index).
  • Mimicry (penalizing fake websites impersonating brands).
  • Y1 update (introducing YATI).
  • Y2 update (late 2022).
  • Adoption of IndexNow.
  • A fresh rollout and assumed update of the PF filter.

On a personal note, this data leak is like a second Christmas.

Since January 2020, I’ve run an SEO news website as a hobby dedicated to covering Yandex SEO and search news in Russia with 600+ articles, so this is probably the peak event of the hobby site.

I’ve also spoken twice at the Optimization conference – the largest SEO conference in Russia.

This is also a good test to see how closely Yandex’s public statements match the codebase secrets.

In 2019, working with Yandex’s PR team, I was able to interview engineers in their Search team and ask a number of questions sourced from the wider Western SEO community.

You can read the interview with the Yandex Search team here.

Whilst Yandex is primarily known for its presence in Russia, the search engine also has a presence in Turkey, Kazakhstan, and Georgia.

The data leak was believed to be politically motivated and the actions of a rogue employee, and contains a number of code fragments from Yandex’s monolithic repository, Arcadia.

Within the 44GB of leaked data, there’s information relating to a number of Yandex products including Search, Maps, Mail, Metrika, Disc, and Cloud.

What Yandex Has Had To Say

As I write this post (January 31st, 2023), Yandex has publicly stated that:

the contents of the archive (leaked code base) correspond to the outdated version of the repository – it differs from the current version used by our services

And:

It is important to note that the published code fragments also contain test algorithms that were used only within Yandex to verify the correct operation of the services.

So, how much of this code base is actively used is questionable.

Yandex has also revealed that during its investigation and audit, it found a number of errors that violate its own internal principles, so it is likely that portions of this leaked code (that are in current use) may be changing in the near future.

Factor Classification

Yandex classifies its ranking factors into three categories.

This has been outlined in Yandex’s public documentation for some time, but I feel is worth including here, as it better helps us understand the ranking factor leak.

  • Static factors – Factors that are related directly to the website (e.g. inbound backlinks, inbound internal links, headers, and ads ratio).
  • Dynamic factors – Factors that are related to both the website and the search query (e.g. text relevance, keyword inclusions, TF*IDF).
  • User search-related factors – Factors relating to the user query (e.g. where is the user located, query language, and intent modifiers).

The ranking factors in the document are tagged to match the corresponding category, with TG_STATIC and TG_DYNAMIC, and then TG_QUERY_ONLY, TG_QUERY, TG_USER_SEARCH, and TG_USER_SEARCH_ONLY.

Yandex Leak Learnings So Far

From the data thus far, below are some of the affirmations and learnings we’ve been able to make.

There is so much data in this leak, it is very likely that we will be finding new things and making new connections in the next few weeks.

These include:

  • PageRank (a form of).
  • At some point Yandex utilized TF*IDF.
  • Yandex still uses meta keywords, which are also highlighted in its documentation.
  • Yandex has specific factors for medical, legal, and financial topics (YMYL).
  • It also uses a form of page quality scoring, but this is known (ICS score).
  • Links from high-authority websites have an impact on rankings.
  • There’s nothing new to suggest Yandex can crawl JavaScript yet outside of already publicly documented processes.
  • Server errors and excessive 4xx errors can impact ranking.
  • The time of day is taken into consideration as a ranking factor.

Below, I’ve expanded on some other affirmations and learnings from the leak.

Where possible, I’ve also tied these leaked ranking factors to the algorithm updates and announcements that relate to them, or where we were told about them being impactful.

MatrixNet

MatrixNet is mentioned in a few of the ranking factors and was announced in 2009, and then superseded in 2017 by Catboost, which was rolled out across the Yandex product sphere.

This further adds validity to comments from Yandex, and from one of the factor authors, DenPlusPlus (Den Raskovalov), that this is, in fact, an outdated code repository.

MatrixNet was originally introduced as a new, core algorithm that took into consideration thousands of ranking factors and assigned weights based on the user location, the actual search query, and perceived search intent.

It is typically seen as an early version of Google’s RankBrain, though they are in fact two very different systems. MatrixNet was launched six years before RankBrain was announced.

MatrixNet has also been built upon, which isn’t surprising, given it is now 14 years old.

In 2016, Yandex introduced the Palekh algorithm that used deep neural networks to better match documents (webpages) and queries, even if they didn’t contain the right “levels” of common keywords, but satisfied the user intents.

Palekh was capable of processing 150 pages at a time, and in 2017 was updated with the Korolyov update, which took into account more depth of page content, and could work off 200,000 pages at once.

URL & Page-Level Factors

From the leak, we have learned that Yandex takes into consideration URL construction, specifically:

  • The presence of numbers in the URL.
  • The number of trailing slashes in the URL (and if they are excessive).
  • The number of capital letters in the URL.

Screenshot from author, January 2023

The age of a page (document age) and the last updated date are also important, and this makes sense.

As well as document age and last update, a number of factors in the data relate to freshness – particularly for news-related queries.

Yandex formerly used timestamps, not for ranking purposes but for “reordering” purposes; however, this factor is now classified as unused.

Also in the deprecated column is the use of keywords in the URL. Yandex has previously measured that three keywords from the search query in the URL would be an “optimal” result.

Internal Links & Crawl Depth

Whilst Google has gone on the record to say that for its purposes, crawl depth isn’t explicitly a ranking factor, Yandex appears to have an active piece of code that dictates that URLs that are reachable from the homepage have a “higher” level of importance.

Yandex factors – Screenshot from author, January 2023

This mirrors John Mueller’s 2018 statement that Google gives “a little more weight” to pages linked directly from the homepage.

The ranking factors also highlight a specific token weighting for webpages that are “orphans” within the website linking structure.

Clicks & CTR

In 2011, Yandex released a blog post talking about how the search engine uses clicks as part of its rankings and also addresses the desires of the SEO pros to manipulate the metric for ranking gain.

Specific click factors in the leak look at things like:

  • The ratio of the number of clicks on the URL, relative to all clicks on the search.
  • The same as above, but broken down by region.
  • How often users click on the URL for a given search.

Manipulating Clicks

Manipulating user behavior, specifically “click-jacking”, is a known tactic within Yandex.

Yandex has a filter, known as the PF filter, that actively seeks out and penalizes websites that engage in this activity using scripts that monitor IP similarities and then the “user actions” of those clicks – and the impact can be significant.

The below screenshot shows the impact on organic sessions (сессии) after being penalized for imitating user clicks.

Image from Russian Search News, January 2023

User Behavior

The user behavior takeaways from the leak are some of the more interesting findings.

User behavior manipulation is a common SEO violation that Yandex has been combating for years. At the 2020 Optimization conference, then Head of Yandex Webmaster Tools Mikhail Slevinsky said the company is making good progress in detecting and penalizing this type of behavior.

Yandex penalizes user behavior manipulation with the same PF filter used to combat CTR manipulation.

Dwell Time

102 of the ranking factors contain the tag TG_USERFEAT_SEARCH_DWELL_TIME, and reference the device, user duration, and average page dwell time.

All but 39 of these factors are deprecated.

Yandex factors – Screenshot from author, January 2023

Bing first used the term “dwell time” in a 2011 blog post, and in recent years Google has made it clear that it doesn’t use dwell time (or similar user interaction signals) as ranking factors.

YMYL

YMYL (Your Money, Your Life) is a concept well-known within Google and is not a new concept to Yandex.

Within the data leak, there are specific ranking factors for medical, legal, and financial content – but this was notably revealed in 2019 at the Yandex Webmaster conference when it announced the Proxima Search Quality Metric.

Metrika Data Usage

Six of the ranking factors relate to the usage of Metrika data for the purposes of ranking. However, one of them is tagged as deprecated:

  • The number of similar visitors from the YandexBar (YaBar/Ябар).
  • The average time spent on URLs from those same similar visitors.
  • The “core audience” of pages on which there is a Metrika counter [deprecated].
  • The average time a user spends on a host when accessed externally (from another non-search site) from a specific URL.
  • Average ‘depth’ (number of hits within the host) of a user’s stay on the host when accessed externally (from another non-search site) from a particular URL.
  • Whether or not the domain has Metrika installed.

In Metrika, user data is handled differently.

Unlike Google Analytics, there are a number of reports focused on user “loyalty” combining site engagement metrics with return frequency, duration between visits, and source of the visit.

For example, I can see a report in one click to see a breakdown of individual site visitors:

Screenshot from Metrika, January 2023

Metrika also comes “out of the box” with heatmap tools and user session recording, and in recent years the Metrika team has made good progress in being able to identify and filter bot traffic.

With Google Analytics, there is an argument that Google doesn’t use UA/GA4 data for ranking purposes because of how easy it is to modify or break the tracking code – but with Metrika counters, they are a lot more linear, and a lot of the reports are unchangeable in terms of how the data is collected.

Impact Of Traffic On Rankings

Following on from Metrika data as a ranking factor, these factors effectively confirm that direct traffic and paid traffic (buying ads via Yandex Direct) can impact organic search performance:

  • Share of direct visits among all incoming traffic.
  • Green traffic share (aka direct visits) – Desktop.
  • Green traffic share (aka direct visits) – Mobile.
  • Search traffic – transitions from search engines to the site.
  • Share of visits to the site not by links (set by hand or from bookmarks).
  • The number of unique visitors.
  • Share of traffic from search engines.

News Factors

There are a number of factors relating to “News”, including two that mention Yandex.News directly.

Yandex.News was an equivalent of Google News, but was sold to the Russian social network VKontakte in August 2022, along with another Yandex product “Zen”.

So, it’s not clear if these factors related to a product no longer owned or operated by Yandex, or to how news websites are ranked in “regular” search.

Backlink Importance

Yandex has had algorithms to combat link manipulation, similar to Google’s, since the Nepot filter in 2005.

From reviewing the backlink ranking factors and some of the specifics in the descriptions, we can assume that the best practices for building links for Yandex SEO would be to:

  • Build links with a more natural frequency and varying amounts.
  • Build links with branded anchor texts as well as use commercial keywords.
  • If buying links, avoid buying links from websites that have mixed topics.

Below is a list of link-related factors that can be considered affirmations of best practices:

  • The age of the backlink is a factor.
  • Link relevance based on topics.
  • Backlinks built from homepages carry more weight than internal pages.
  • Links from the top 100 websites by PageRank (PR) can impact rankings.
  • Link relevance based on the quality of each link.
  • Link relevance, taking into account the quality of each link, and the topic of each link.
  • Link relevance, taking into account the non-commercial nature of each link.
  • Percentage of inbound links with query words.
  • Percentage of query words in links (up to a synonym).
  • The links contain all the words of the query (up to a synonym).
  • Dispersion of the number of query words in links.

However, there are some link-related factors that are additional considerations when planning, monitoring, and analyzing backlinks:

  • The ratio of “good” versus “bad” backlinks to a website.
  • The frequency of links to the site.
  • The number of incoming SEO trash links between hosts.

The data leak also revealed that the link spam calculator has around 80 active factors that are taken into consideration, with a number of deprecated factors.

This raises the question of how well Yandex is able to recognize negative SEO attacks, given that it looks at the ratio of good versus bad links, and how it determines what a bad link is.

A negative SEO attack is also likely to be a short burst (high frequency) link event in which a site will unwittingly gain a high number of poor quality, non-topical, and potentially over-optimized links.

Yandex uses machine learning models to identify Private Blog Networks (PBNs) and paid links, and it makes assumptions based on link velocity and the time period over which links are acquired.

Typically, paid-for links are generated over a longer period of time, and these patterns (including link origin site analysis) are what the Minusinsk update (2015) was introduced to combat.

Yandex Penalties

There are two ranking factors, both deprecated, named SpamKarma and Pessimization.

Pessimization refers to reducing PageRank to zero and aligns with the expectations of severe Yandex penalties.

SpamKarma also aligns with assumptions made around Yandex penalizing hosts and individuals, as well as individual domains.

Onpage Advertising

There are a number of factors relating to advertising on the page, some of them deprecated (like the screenshot example below).

Yandex factors – Screenshot from author, January 2023

It’s not known from the description exactly what the thought process with this factor was, but it could be assumed that a high ratio of adverts to visible screen was a negative factor – much like how Google takes umbrage if adverts obfuscate the page’s main content, or are obtrusive.

Tying this back to known Yandex mechanisms, the Proxima update also took into consideration the ratio of useful and advertising content on a page.

Can We Apply Any Yandex Learnings To Google?

Yandex and Google are disparate search engines, with a number of differences, despite the tens of engineers who have worked for both companies.

Because of this overlap in talent, we can infer that some of these engineers will have built things in a similar fashion (though not direct copies) and applied learnings from previous iterations of their builds with their new employers.

What Russian SEO Pros Are Saying About The Leak

Much like the Western world, SEO professionals in Russia have been having their say on the leak across the various Runet forums.

The reaction in these forums has been different from that on SEO Twitter and Mastodon, with more of a focus on Yandex’s filters and on other Yandex products that are optimized as part of wider Yandex optimization campaigns.

It is also worth noting that a number of conclusions and findings from the data match what the Western SEO world is also finding.

Common themes in the Russian search forums:

  • Webmasters asking for insights into recent filters, such as Mimicry and the updated PF filter.
  • The age and relevance of some of the factors, due to author names no longer being at Yandex, and mentions of long-retired Yandex products.
  • The main interesting learnings are around the use of Metrika data, and information relating to the Crawler & Indexer.
  • A number of factors outline the usage of DSSM, which in theory was superseded by the release of Palekh, the machine learning search algorithm Yandex announced in 2016.
  • A debate around ICS scoring in Yandex, and whether or not Yandex may provide more traffic to a site and influence its own factors by doing so.

The leaked factors, particularly around how Yandex evaluates site quality, have also come under scrutiny.

There is a long-standing sentiment in the Russian SEO community that Yandex oftentimes favors its own products and services in search results ahead of other websites, and webmasters are asking questions like:

Why does it bother going to all this trouble, when it just nails its services to the top of the page anyway?

In loosely translated documents, these are referred to as the Sorcerers or Yandex Sorcerers. In Google, we’d call these search engine results page (SERP) features – like Google Hotels, etc.

In October 2022, Kassir (a Russian ticket portal) claimed ₽328m compensation from Yandex due to lost revenue, caused by the “discriminatory conditions” in which Yandex Sorcerers took the customer base away from the private company.

This is off the back of a 2020 class action in which multiple companies raised a case with the Federal Antimonopoly Service (FAS) for anticompetitive promotion of its own services.


Featured Image: FGC/Shutterstock


