

Everything You Need To Know About The X-Robots-Tag HTTP Header


Search engine optimization, in its most basic sense, relies upon one thing above all others: Search engine spiders crawling and indexing your site.

But nearly every website is going to have pages that you don’t want to include in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case scenario, they could be diverting traffic from more important pages.

Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being using a robots.txt file or the meta robots tag.


We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it’s a plain text file that lives in your website’s root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags include directions for specific pages.

Some meta robots tags you might employ include:

  • index – tells search engines to add the page to their index.
  • noindex – tells them not to add the page to the index or include it in search results.
  • follow – instructs search engines to follow the links on a page.
  • nofollow – tells them not to follow the links on a page.

There is a whole host of others as well.
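For reference, here is a minimal sketch of how two of these directives look when implemented as a meta robots tag in a page’s HTML (the page itself is hypothetical):

<!-- Placed in the <head> of a page you want kept out of the index -->
<meta name="robots" content="noindex, nofollow">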

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. As part of the HTTP header response to a URL, it controls indexing for an entire page, as well as the specific elements on that page.
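For example, the HTTP response for a page that should stay out of the index might include a header line like this (a hypothetical response, shown only to illustrate where the directive lives):

HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
X-Robots-Tag: noindex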


And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”

While robots directives can be implemented either in a page’s HTML via the meta robots tag or in the HTTP response header via the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag – the two most common being when:

  • You want to control how your non-HTML files are being crawled and indexed.
  • You want to serve directives site-wide instead of on a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple directives within a single HTTP response, specified as a comma-separated list.

Maybe you don’t want a certain page to be cached and want it to be unavailable after a certain date. You can use a combination of “noarchive” and “unavailable_after” tags to instruct search engine bots to follow these instructions.
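As a minimal sketch of that combination – assuming an Apache server with mod_headers enabled, plus a hypothetical file name and cutoff date – the configuration might look like this:

<Files "old-promo.html">
  # Hypothetical example: don't cache this page, and drop it from results after the date below
  Header set X-Robots-Tag "noarchive, unavailable_after: 2025-07-01"
</Files>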


Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply directives on a larger, global level.

To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a handy cheat sheet to explain:

Crawler Directives:

  • Robots.txt – uses the user agent, allow, disallow, and sitemap directives to specify where on a site search engine bots are and are not allowed to crawl.

Indexer Directives:

  • Meta robots tag – allows you to specify and prevent search engines from showing particular pages on a site in search results.
  • Nofollow – allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag – allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to your server configuration, which adds the header to the site’s HTTP responses – on Apache, that means the main configuration file or a .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let’s take a look.

Let’s say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
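For completeness, a minimal sketch of the equivalent rule on an Nginx server might look like this:

location ~* \.(png|jpe?g|gif)$ {
  add_header X-Robots-Tag "noindex";
}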

Please note that understanding how these directives work and the impact they have on one another is crucial.

For example, what happens if crawlers encounter both an X-Robots-Tag and a meta robots tag when they discover a URL?

If that URL is blocked by robots.txt, then its indexing and serving directives cannot be discovered and will not be followed.

So, for directives to be followed, the URLs that contain them cannot be disallowed from crawling.
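As a concrete illustration of that pitfall, suppose your robots.txt contains a rule like the hypothetical one below. Googlebot will never fetch anything under that path, so any noindex you set via an X-Robots-Tag on those files will never be seen:

User-agent: *
Disallow: /downloads/

If those files are already indexed, they can stay indexed; crawling has to be allowed so the noindex directive can be read and applied.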

Check For An X-Robots-Tag

There are a few different methods that can be used to check for an X-Robots-Tag on the site.


The easiest way to check is to install a browser extension that will tell you X-Robots-Tag information about the URL.

One such extension is Robots Exclusion Checker.

Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.


Another method that scales well – for pinpointing issues on websites with millions of pages – is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.

This will show you which sections of the site are using the tag, along with the specific directives applied.


Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It’s not without its dangers. It is very easy to make a mistake and deindex your entire site.


That said, if you’re reading this piece, you’re probably not an SEO beginner. As long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your arsenal.




Big Update To Google’s Ranking Drop Documentation


Google updated its guidance with five changes on how to debug ranking drops. The new version contains over 400 more words addressing small and large ranking drops. There’s room to quibble about some of the changes, but overall the revised version is a step up from what it replaced.

Change #1: Downplays Fixing Traffic Drops

The opening sentence was changed so that it offers less hope for bouncing back from an algorithmic traffic drop. Google also joined two sentences into one sentence in the revised version of the documentation.

The documentation previously said that most traffic drops can be reversed and that identifying the reasons for a drop isn’t straightforward. The part about most drops being reversible was completely removed.

Here are the original two sentences:

“A drop in organic Search traffic can happen for several reasons, and most of them can be reversed. It may not be straightforward to understand what exactly happened to your site”

Now there’s no hope offered that “most of them can be reversed,” and more emphasis on the fact that understanding what happened is not straightforward.


This is the new guidance:

“A drop in organic Search traffic can happen for several reasons, and it may not be straightforward to understand what exactly happened to your site.”

Change #2: Security Or Spam Issues

Google updated the traffic graph illustrations so that they precisely align with the causes for each kind of traffic decline.

The previous version of the graph was labeled:

“Site-level technical issue (Manual Action, strong algorithmic changes)”

The problem with the previous label is that manual actions and strong algorithmic changes are not technical issues; the new version fixes that.

The updated version now reads:

“Large drop from an algorithmic update, site-wide security or spam issue”

Change #3: Technical Issues

There’s one more change to a graph label, also to make it more accurate.


This is how the previous graph was labeled:

“Page-level technical issue (algorithmic changes, market disruption)”

The updated graph is now labeled:

“Technical issue across your site, changing interests”

Now the graph and label are more specific, referring to a sitewide issue, and “changing interests” is more general, covering a wider range of changes than market disruption. Changing interests includes market disruption (where a new product makes a previous one obsolete or less desirable), but it also includes products that go out of style or lose their trendiness.


Change #4: Google Adds New Guidance For Algorithmic Changes

The biggest change by far is the brand-new section for algorithmic changes, which replaces two smaller sections: one about policy violations and manual actions, and a second about algorithm changes.

The old version of this one section had 108 words. The updated version contains 443 words.

A section that’s particularly helpful is where the guidance splits algorithmic update damage into two categories.


Two New Categories:

  • Small drop in position? For example, dropping from position 2 to 4.
  • Large drop in position? For example, dropping from position 4 to 29.

The two new categories are perfect and align with what I’ve seen in the search results for sites that have lost rankings. The reasons for moving up and down within the top ten are different from the reasons why a site drops completely out of the top ten.

I don’t agree with the guidance for large drops, however. Google recommends reviewing your site, which is good advice for some sites that have lost rankings. But in other cases there’s nothing wrong with the site, and this is where less experienced SEOs tend to get stuck: recommendations for improving EEAT, adding author bios, or filing link disavows don’t solve anything, because the problem lies elsewhere.

Here is the new guidance for debugging search position drops:

“Algorithmic update
Google is always improving how it assesses content and updating its search ranking and serving algorithms accordingly; core updates and other smaller updates may change how some pages perform in Google Search results. We post about notable improvements to our systems on our list of ranking updates page; check it to see if there’s anything that’s applicable to your site.

If you suspect a drop in traffic is due to an algorithmic update, it’s important to understand that there might not be anything fundamentally wrong with your content. To determine whether you need to make a change, review your top pages in Search Console and assess how they were ranking:

Small drop in position? For example, dropping from position 2 to 4.
Large drop in position? For example, dropping from position 4 to 29.

Keep in mind that positions aren’t static or fixed in place. Google’s search results are dynamic in nature because the open web itself is constantly changing with new and updated content. This constant change can cause both gains and drops in organic Search traffic.

Small drop in position
A small drop in position is when there’s a small shift in position in the top results (for example, dropping from position 2 to 4 for a search query). In Search Console, you might see a noticeable drop in traffic without a big change in impressions.


Small fluctuations in position can happen at any time (including moving back up in position, without you needing to do anything). In fact, we recommend avoiding making radical changes if your page is already performing well.

Large drop in position
A large drop in position is when you see a notable drop out of the top results for a wide range of terms (for example, dropping from the top 10 results to position 29).

In cases like this, self-assess your whole website overall (not just individual pages) to make sure it’s helpful, reliable and people-first. If you’ve made changes to your site, it may take time to see an effect: some changes can take effect in a few days, while others could take several months. For example, it may take months before our systems determine that a site is now producing helpful content in the long term. In general, you’ll likely want to wait a few weeks to analyze your site in Search Console again to see if your efforts had a beneficial effect on ranking position.

Keep in mind that there’s no guarantee that changes you make to your website will result in noticeable impact in search results. If there’s more deserving content, it will continue to rank well with our systems.”

Change #5: Trivial Changes

The rest of the changes are relatively trivial but nonetheless make the documentation more precise.

For example, one of the headings was changed from this:


You recently moved your site

To this new heading:

Site moves and migrations

Google’s Updated Ranking Drops Documentation

Google’s updated documentation is well thought out, but I think the recommendations for large algorithmic drops are helpful in some cases and not in others. I have 25 years of SEO experience and have experienced every single Google algorithm update. There are certain updates where the problem is not solved by trying to fix things, and Google’s guidance used to be that sometimes there’s nothing to fix. The documentation is better, but in my opinion it can be improved even further.

Read the new documentation here:

Debugging drops in Google Search traffic

Review the previous documentation:

Internet Archive Wayback Machine: Debugging drops in Google Search traffic



Google March 2024 Core Update Officially Completed A Week Ago


Google has officially completed its March 2024 Core Update, ending over a month of ranking volatility across the web.

However, Google didn’t confirm the rollout’s conclusion on its data anomaly page until April 26—a whole week after the update was completed on April 19.

Many in the SEO community had been speculating for days about whether the turbulent update had wrapped up.

The delayed transparency exemplifies Google’s communication issues with publishers and the need for clarity during core updates.

Google March 2024 Core Update Timeline & Status

First announced on March 5, the core algorithm update finished rolling out on April 19, taking 45 days to complete.


Unlike more routine core refreshes, Google warned this one was more complex.

Google’s documentation reads:

“As this is a complex update, the rollout may take up to a month. It’s likely there will be more fluctuations in rankings than with a regular core update, as different systems get fully updated and reinforce each other.”

The aftershocks were tangible, with some websites reporting losses of over 60% of their organic search traffic, according to data from industry observers.

The ripple effects also led to the deindexing of hundreds of sites that were allegedly violating Google’s guidelines.

Addressing Manipulation Attempts

In its official guidance, Google highlighted the criteria it looks for when targeting link spam and manipulation attempts:

  • Creating “low-value content” purely to garner manipulative links and inflate rankings.
  • Links intended to boost sites’ rankings artificially, including manipulative outgoing links.
  • The “repurposing” of expired domains with radically different content to game search visibility.

The updated guidelines warn:

“Any links that are intended to manipulate rankings in Google Search results may be considered link spam. This includes any behavior that manipulates links to your site or outgoing links from your site.”

John Mueller, a Search Advocate at Google, responded to the turbulence by advising publishers not to make rash changes while the core update was ongoing.


However, he suggested sites could proactively fix issues like unnatural paid links.

Mueller stated on Reddit:

“If you have noticed things that are worth improving on your site, I’d go ahead and get things done. The idea is not to make changes just for search engines, right? Your users will be happy if you can make things better even if search engines haven’t updated their view of your site yet.”

Emphasizing Quality Over Links

The core update made notable changes to how Google ranks websites.

Most significantly, Google reduced the importance of links in determining a website’s ranking.

In contrast to the description of links as “an important factor in determining relevancy,” Google’s updated spam policies stripped away the “important” designation, simply calling links “a factor.”

This change aligns with statements from Google’s Gary Illyes that links aren’t among the top three most influential ranking signals.


Instead, Google is giving more weight to quality, credibility, and substantive content.

Consequently, sites that relied on long-running campaigns of low-quality link acquisition and keyword optimization have been demoted.

With the update complete, SEOs and publishers are left to audit their strategies and websites to ensure alignment with Google’s new perspective on ranking.

Core Update Feedback

Google has opened a ranking feedback form related to this core update.

You can use this form until May 31 to provide feedback to Google’s Search team about any issues noticed after the core update.

While the feedback provided won’t be used to make changes for specific queries or websites, Google says it may help inform general improvements to its search ranking systems for future updates.


Google also updated its help documentation on “Debugging drops in Google Search traffic” to help people understand ranking changes after a core update.



FAQ

After the update, what steps should websites take to align with Google’s new ranking criteria?

After Google’s March 2024 Core Update, websites should:

  • Improve the quality, trustworthiness, and depth of their website content.
  • Stop heavily focusing on getting as many links as possible and prioritize relevant, high-quality links instead.
  • Fix any shady or spam-like SEO tactics on their sites.
  • Carefully review their SEO strategies to ensure they follow Google’s new guidelines.



Google Declares It The “Gemini Era” As Revenue Grows 15%


Alphabet Inc., Google’s parent company, announced its first quarter 2024 financial results today.

While Google reported double-digit growth in key revenue areas, the focus was on its AI developments, dubbed the “Gemini era” by CEO Sundar Pichai.

The Numbers: 15% Revenue Growth, Operating Margins Expand

Alphabet reported Q1 revenues of $80.5 billion, a 15% increase year-over-year, exceeding Wall Street’s projections.

Net income was $23.7 billion, with diluted earnings per share of $1.89. Operating margins expanded to 32%, up from 25% in the prior year.

Ruth Porat, Alphabet’s President and CFO, stated:


“Our strong financial results reflect revenue strength across the company and ongoing efforts to durably reengineer our cost base.”

Google’s core advertising units, such as Search and YouTube, drove growth. Google advertising revenues hit $61.7 billion for the quarter.

The Cloud division also maintained momentum, with revenues of $9.6 billion, up 28% year-over-year.

Pichai highlighted that YouTube and Cloud are expected to exit 2024 at a combined $100 billion annual revenue run rate.

Generative AI Integration in Search

Google experimented with AI-powered features in Search Labs before recently introducing AI overviews into the main search results page.

Regarding the gradual rollout, Pichai states:

“We are being measured in how we do this, focusing on areas where gen AI can improve the Search experience, while also prioritizing traffic to websites and merchants.”

Pichai reports that Google’s generative AI features have already served billions of queries:


“We’ve already served billions of queries with our generative AI features. It’s enabling people to access new information, to ask questions in new ways, and to ask more complex questions.”

Google reports increased Search usage and user satisfaction among those interacting with the new AI overview results.

The company also highlighted its “Circle to Search” feature on Android, which allows users to circle objects on their screen or in videos to get instant AI-powered answers via Google Lens.

Reorganizing For The “Gemini Era”

As part of the AI roadmap, Alphabet is consolidating all teams building AI models under the Google DeepMind umbrella.

Pichai revealed that, through hardware and software improvements, the company has reduced machine costs associated with its generative AI search results by 80% over the past year.

He states:

“Our data centers are some of the most high-performing, secure, reliable and efficient in the world. We’ve developed new AI models and algorithms that are more than one hundred times more efficient than they were 18 months ago.”

How Will Google Make Money With AI?

Alphabet sees opportunities to monetize AI through its advertising products, Cloud offerings, and subscription services.


Google is integrating Gemini into ad products like Performance Max. The company’s Cloud division is bringing “the best of Google AI” to enterprise customers worldwide.

Google One, the company’s subscription service, surpassed 100 million paid subscribers in Q1 and introduced a new premium plan featuring advanced generative AI capabilities powered by Gemini models.

Future Outlook

Pichai outlined six key advantages positioning Alphabet to lead the “next wave of AI innovation”:

  1. Research leadership in AI breakthroughs like the multimodal Gemini model
  2. Robust AI infrastructure and custom TPU chips
  3. Integrating generative AI into Search to enhance the user experience
  4. A global product footprint reaching billions
  5. Streamlined teams and improved execution velocity
  6. Multiple revenue streams to monetize AI through advertising and cloud

With upcoming events like Google I/O and Google Marketing Live, the company is expected to share further updates on its AI initiatives and product roadmap.


