Everything You Need To Know About The X-Robots-Tag HTTP Header


SEO, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your website.

But almost every website has pages that you don't want to include in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these do nothing to actively drive traffic to your site, and in a worst case, they could be diverting traffic from more important pages.

Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it's a plain text file that lives in your site's root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain instructions for specific pages.

Some meta robots tags you might use include: index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
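As a quick illustration, a meta robots tag combining two of these directives is a single line in a page's &lt;head&gt; (the combination shown here is just one example):

```html
<!-- Tells search engines: don't index this page, and don't follow its links -->
<meta name="robots" content="noindex, nofollow">
```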

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

However this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."

While you can implement the same directives with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:

  • You want to control how your non-HTML files are being crawled and indexed.
  • You want to serve directives site-wide instead of on a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify instructions.

Maybe you don't want a certain page to be cached and want it to be unavailable after a certain date. You can use a combination of the "noarchive" and "unavailable_after" tags to instruct search engine bots to follow these instructions.
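In an Apache configuration, that combination might be sketched like this (the file name and date here are hypothetical, purely for illustration):

```apache
# Serve both directives in one X-Robots-Tag header for a single page
<Files "promo.html">
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST"
</Files>
```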

Essentially, the power of the X-Robots-Tag is that it is far more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply directives on a larger, global level.

To help you understand the difference between these directives, it's helpful to classify them by type. That is, are they crawler directives or indexer directives?

Here's a useful cheat sheet to explain:

Crawler Directives

  • Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where search engine bots are and are not allowed to crawl.

Indexer Directives

  • Meta robots tag – allows you to specify and prevent search engines from showing particular pages of a site in search results.
  • Nofollow – allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag – allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or .htaccess file.

The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let's take a look.

Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that looks like the below:

<FilesMatch "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>

Please note that understanding how these directives work and the impact they have on one another is crucial.

For example, what happens if both the X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?

If that URL is blocked from robots.txt, then certain indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.
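For instance, a robots.txt rule like the following (the /private/ directory is hypothetical) would stop compliant crawlers from ever fetching those URLs, which means any noindex served in an X-Robots-Tag on them would never be seen:

```
User-agent: *
Disallow: /private/
```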

Check For An X-Robots-Tag

There are a few different methods you can use to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information for the URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used, for example, is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
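You can also check programmatically. As a minimal sketch (not tied to any particular plugin), this Python snippet pulls the X-Robots-Tag directives out of a set of response headers; the header values shown are hypothetical:

```python
def x_robots_directives(headers):
    """Return the X-Robots-Tag directives as a list, matching the
    header name case-insensitively (HTTP header names are not case-sensitive)."""
    for name, value in headers.items():
        if name.lower() == "x-robots-tag":
            return [d.strip() for d in value.split(",")]
    return []

# Hypothetical response headers for a PDF blocked from indexing.
headers = {
    "Content-Type": "application/pdf",
    "X-Robots-Tag": "noindex, nofollow",
}
print(x_robots_directives(headers))  # → ['noindex', 'nofollow']
```

In practice you would feed this the headers from an actual HTTP response rather than a hard-coded dictionary.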

Another method that scales well, in order to pinpoint issues on websites with millions of pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog report, X-Robots-Tag column, December 2022

Using X-Robots-Tags On Your Website

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: It's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your toolbox.

Featured Image: Song_about_summer