Alexa Robots.txt


A robots.txt file can be used to restrict crawler access to your entire website or to specific areas of it. When properly used, the robots.txt file prevents search engines from indexing contact forms, duplicate content, and other pages of your site that shouldn't appear in search results. However, a misconfigured robots.txt file can prevent search engines from indexing valuable content. If you are not familiar with ways to use a robots.txt file to your advantage, you can find information about this tool on Wikipedia or The Web Robots Pages.
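As a minimal illustration of the syntax documented on The Web Robots Pages, the sketch below blocks all crawlers from two hypothetical sections of a site while leaving everything else crawlable (the paths shown are examples, not recommendations for any particular site):

```
# Applies to every crawler (the * wildcard matches all user agents)
User-agent: *
# Hypothetical paths a site owner might not want indexed
Disallow: /contact-form/
Disallow: /duplicate-archive/

# An empty Disallow for a specific crawler grants it full access
User-agent: googlebot
Disallow:
```

The file must be served as plain text at the root of the site (e.g. example.com/robots.txt); compliant crawlers read the most specific User-agent group that matches them.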



Your robots.txt file does not block the major search engine crawlers (baiduspider, googlebot, ia_archiver, msnbot, slurp, or teoma) from reaching any of the pages that the Alexa Site Audit examined. No action needed.


About The Author: Rosendo Cuyasen Jr. is the head of Eyewebmaster, a web development and SEO firm in the country. You can follow him on Twitter at @Eyewebmaster, or connect with him on his Facebook and Google+ accounts.
