Everything You Need To Know About The X-Robots-Tag HTTP Header

Search engine optimization, in its most basic sense, relies on one thing above all others: search engine spiders crawling and indexing your site.

However, almost every website has pages that you don’t want included in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these do nothing to actively drive traffic to your site, and in a worst-case, they can divert traffic away from more important pages.

Fortunately, Google allows webmasters to tell search engine bots which pages and content to crawl and which to ignore. There are several ways to do this, the most common being the use of a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it’s a plain text file that lives in your site’s root and follows the Robots Exclusion Protocol (REP).

Robots.txt gives crawlers instructions about the site as a whole, while meta robots tags contain instructions for specific pages.

Some meta robots tags you might use include:

  • index – tells search engines to add the page to their index.
  • noindex – tells them not to add a page to the index or include it in search results.
  • follow – instructs search engines to follow the links on a page.
  • nofollow – tells them not to follow the links on a page.

There are a whole host of others as well.
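For illustration, these directives go in a meta tag in the page’s head. The pairing of directives below is just an example:

```html
<!-- Example only: tells crawlers not to index this page or follow its links -->
<head>
  <meta name="robots" content="noindex, nofollow">
</head>
```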

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there’s also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
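As a sketch, here is what an HTTP response carrying the header might look like. The URL, date, and directive values are illustrative:

```
HTTP/1.1 200 OK
Date: Tue, 25 Jan 2022 21:42:43 GMT
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```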

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, “Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag.”

While you can set directives in the headers of an HTTP response with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:

  • You want to control how your non-HTML files are crawled and indexed.
  • You want to serve directives site-wide instead of at a page level.

For example, if you want to block a specific image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify instructions.

Maybe you don’t want a certain page to be cached and want it to be unavailable after a specific date. You can use a combination of the “noarchive” and “unavailable_after” directives to instruct search engine bots to follow these instructions.
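As a sketch, in an Apache configuration that combination might look like the following. The file pattern and date are examples, not a recommendation:

```
<FilesMatch "\.html$">
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2025 15:00:00 GMT"
</FilesMatch>
```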

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The benefit of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML content, as well as apply directives on a larger, global scale.

To help you understand the difference between these directives, it’s helpful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a handy cheat sheet to explain:

Crawler Directives

  • Robots.txt – uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed to crawl and not allowed to crawl.

Indexer Directives

  • Meta robots tag – allows you to specify which pages of a site search engines should show (or not show) in search results.

  • Nofollow – allows you to specify links that should not pass on authority or PageRank.

  • X-Robots-Tag – allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let’s say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to the Apache server configuration or a .htaccess file, either of which lets you set the header on a site’s HTTP responses.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let’s take a look.

Let’s say we want search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let’s look at a different scenario. Let’s say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

<FilesMatch "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>

Please note that understanding how these directives work, and the impact they have on one another, is crucial.

For example, what happens if both an X-Robots-Tag and a meta robots tag are present when crawler bots discover a URL?

If that URL is blocked by robots.txt, crawlers never fetch it, so any indexing and serving directives on it cannot be discovered and will not be followed.

In other words, if directives are to be followed, the URLs containing them cannot be disallowed from crawling.
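To illustrate this pitfall with a sketch (the directory name is hypothetical), a robots.txt rule like the one below stops crawlers from ever fetching those URLs, so a noindex X-Robots-Tag set on them would never be seen:

```
# robots.txt
User-agent: *
# Crawlers never request anything under /pdfs/, so an X-Robots-Tag
# "noindex" served on those URLs is invisible to them.
Disallow: /pdfs/
```

If the goal is to keep those files out of the index, allow them to be crawled and rely on the header instead.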

Check For An X-Robots-Tag

There are a few different methods you can use to check for an X-Robots-Tag on a site.

The simplest way to check is to install a browser extension that shows you X-Robots-Tag information for a URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.

By clicking on the plugin in your browser and navigating to “View Response Headers,” you can see the various HTTP headers being used.
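If you want to script this check yourself, the same lookup is a simple header read. The sketch below stands up a throwaway local server so it is self-contained and runnable; the URL path, port handling, and directive values are illustrative, not tied to any of the tools above:

```python
# Minimal sketch: serve a response carrying an X-Robots-Tag header,
# then read the header back from the client side, which is the same
# check a browser extension or crawler performs.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class TaggedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # The directive a server rule (e.g., a FilesMatch block) would attach.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        # Keep the example quiet; BaseHTTPRequestHandler logs by default.
        pass

server = HTTPServer(("127.0.0.1", 0), TaggedHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/report.pdf"
with urllib.request.urlopen(url) as resp:
    tag = resp.headers.get("X-Robots-Tag")

print(tag)  # -> noindex, nofollow
server.shutdown()
```

The same client-side read works against any live URL by swapping in its address.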

Another method, which scales well enough to pinpoint issues on sites with millions of pages, is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the “X-Robots-Tag” column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog Report. X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do exactly that.

Just be aware: It’s not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you’re reading this piece, you’re probably not an SEO novice. So long as you use it wisely, take your time, and check your work, you’ll find the X-Robots-Tag to be a useful addition to your toolbox.

Featured Image: Song_about_summer/