Keep Up To Date With DubSEO’s Blogs on SEO & Digital Marketing Trends

Check out our updates on Search Engine Optimisation, Social Media Marketing,
PPC Management, Content Marketing, Facebook Advertising, Email Marketing and more.

Important Step for SEO Audits

Indexing is an Important Step for SEO Audits and Improving your Site Ranking

Posted on: May 13, 2018

Indexing is the first step of any SEO audit. If your website is not indexed, it is essentially invisible to Google and Bing. If search engines cannot find and “read” your pages, you cannot improve their ranking. Your site has to be indexed before it can rank higher in search engines.

So the question is, is your site being indexed?

Several tools can help you determine whether a website is being indexed.

Professionals who provide SEO services in London point out that indexing is a page-level process: search engines read web pages and treat them individually.

A quick way to find out whether a web page has been indexed by Google is to use the site: operator in a Google search. Enter the domain after site: and you will see all the pages that Google has indexed for that domain. You can also enter a specific web page URL to check whether that particular page has been indexed.
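For instance, with a hypothetical domain, the two forms of the query look like this:

```text
site:example.com               (lists every page Google has indexed for the domain)
site:example.com/blog/post-1   (checks whether that one specific URL is indexed)
```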

What happens when a web page is not indexed?

If your web page or site is not being indexed, the most common cause is a meta robots tag on the page or an inappropriate use of Disallow in the robots.txt file.

Both the robots.txt file and the robots meta tag give search engine indexing robots instructions on how the content of your website or page should be treated.

The difference is that the robots.txt file provides instructions for the whole website, while the robots meta tag sits on an individual web page. In the robots.txt file you can single out directories or pages and specify how robots should treat those areas during indexing. Let us look at how to use each of them.


Experts from a professional SEO services company note that if you do not know whether your website uses a robots.txt file, there is an easy way to check: enter the domain in a web browser followed by /robots.txt.
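As a sketch, the same kind of check can be scripted with Python's standard library. The rules below are a hypothetical robots.txt, not any real site's file:

```python
# Parse a sample robots.txt and ask whether given paths may be crawled,
# using Python's built-in robots.txt parser.
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt contents for illustration only.
sample_rules = """\
User-agent: *
Disallow: /lp/
Disallow: /checkout/
"""

parser = RobotFileParser()
parser.parse(sample_rules.splitlines())

# Pages under /lp/ are disallowed for all crawlers; the homepage is not.
print(parser.can_fetch("Googlebot", "/lp/offer-1"))  # False
print(parser.can_fetch("Googlebot", "/"))            # True
```

To check a live site instead of a sample string, you would call `parser.set_url("https://example.com/robots.txt")` followed by `parser.read()`.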

Here is an example with Amazon's robots.txt file.

The list of “disallows” for Amazon goes on for quite a while!

Google Search Console includes a handy robots.txt Tester tool that lets you detect errors in your robots.txt file. You can also test a specific web page on the site with the bar at the bottom to see whether the robots.txt file, in its current form, is actually blocking Googlebot.

If a directory or web page on the site is not permitted, it will appear after Disallow: in the robots.txt file. In the example above, I have excluded my landing page folder (/lp/) from indexing with the robots.txt file. This prevents any web page in that directory from being indexed by the search engines.
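A robots.txt file that blocks a landing page folder in this way could look like the following sketch (the /lp/ path mirrors the example above):

```text
User-agent: *
Disallow: /lp/
```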

There are various options, from simple to sophisticated, for using the robots.txt file. Google's Developers site maintains documentation of the ways the robots.txt file can be used. Check out a few of them.

Robots meta tag

The robots meta tag is found in the header of a web page. There is no need to use both the robots.txt file and a robots meta tag to forbid indexing of a particular web page.

In the Search Console image above, I did not need to add a robots meta tag to every landing page in the landing page folder (/lp/) to prevent Google from indexing them, because I had already excluded the folder from indexing with the robots.txt file.

However, the robots meta tag can perform other functions too.

For example, you can tell search engines that the links on a web page should not be followed for search engine optimisation purposes.

The two directive pairs most often used with this tag for SEO purposes are noindex/index and nofollow/follow:

  • index, follow. This is implied by default: search engine indexing robots should index the information on the page and follow the links on it.
  • noindex, nofollow. Search engine indexing robots should not index the information on the page and should not follow the links on it.
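For instance, a page that should be kept out of the index entirely would carry a tag like this in its header (a standard robots meta tag, shown here as a minimal sketch):

```html
<head>
  <!-- Tell all crawlers: do not index this page, do not follow its links -->
  <meta name="robots" content="noindex, nofollow">
</head>
```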

The Google Developers website provides a thorough explanation of how to use the robots meta tag.

XML sitemaps

Experts at the best SEO company in London say that when you add a new web page to your site, you naturally want search engines to find and index it quickly. One way to do this is to use an Extensible Markup Language (XML) sitemap and register it with the search engines.

XML sitemaps provide search engines with a listing of the web pages on your site. This is especially useful when you have new content without many inbound links pointing to it, which makes it difficult for search engine robots to follow a link to find that content. Many content management systems have XML sitemap capability built in or available through a plugin, such as the Yoast SEO Plugin for WordPress.
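A minimal XML sitemap listing two hypothetical URLs looks like this (the domain and lastmod dates are purely illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2018-05-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/new-page/</loc>
    <lastmod>2018-05-10</lastmod>
  </url>
</urlset>
```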

Check that there is an XML sitemap and that it is registered with both Google Search Console and Bing Webmaster Tools. This ensures that Google and Bing know where the sitemap is located and can return to it continually for indexing.

It is worth seeing how quickly new content can be indexed with this method. I once performed a test and found that my new content was indexed by Google within only eight seconds, which was the time it took me to switch browser tabs and run the site: operator command.


In 2011, Google announced that it can execute JavaScript and index some dynamic elements. However, Google cannot execute and index all JavaScript all the time. In Google Search Console, the Fetch and Render tool can help you determine whether Googlebot is actually seeing the content in your JavaScript.

In the above-mentioned example, the university website uses Asynchronous JavaScript and XML (AJAX), a form of JavaScript that generates a course subject menu and links it to particular areas of study.

The Fetch and Render tool shows that Googlebot cannot see the content and links the same way people do, which means Googlebot cannot follow those links in the JavaScript to the course pages on the website.

You need to make sure your website is being indexed in order to improve its ranking in search engines. If search engines cannot find or read your content, how can they assess and rank it? Make sure you prioritise checking the indexability of your website when performing an SEO audit. You may contact DubSEO, a reliable SEO company whose experts can help you get your website indexed for SEO purposes and thus improve its ranking in search results.
