The Robots Exclusion Protocol (REP), commonly known as robots.txt, is a text file webmasters create to instruct robots (typically search engine robots) how to crawl and index pages on their website. A robots.txt file is publicly available, meaning anyone can see which sections a webmaster has blocked from search engines. Essentially, robots.txt tells Googlebot and other crawlers what is and is not allowed to be crawled, while the noindex tag tells Google Search what is and is not allowed to be indexed and displayed in search results.
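To make the distinction concrete, here is a minimal robots.txt sketch (the directory names are hypothetical examples, not a recommendation for any particular site):

```
# Applies to all crawlers
User-agent: *

# Block crawling of one directory; everything else stays crawlable
Disallow: /private/
```

The noindex counterpart is not a robots.txt directive at all: it is an HTML meta tag, such as `<meta name="robots" content="noindex">`, placed in a page's head section. Note that a page blocked in robots.txt can still appear in search results if other sites link to it, because Google never gets to crawl the page and see its noindex tag.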
The Basics of robots.txt
Now that you have a better understanding of robots.txt, you may be motivated to get your website in shape. The next question is: how? You can either hire a webmaster to do this for you, or you can use Inbound Brew's free robots.txt management system to help you along the way.
Ready to begin? Download Inbound Brew