As a website operator, search engine optimization is an important way to make your website or blog more visible and to attract more readers. But not every page benefits from SEO: for some pages it simply makes no sense to let Googlebot index them. For example, you may not want your contact page to be discoverable via Google.
There are several ways to block Googlebot, depending on the type of page or link:
- robots.txt
Save this file in the root directory of your server (not in a subdirectory) to deny web crawlers access to certain files and directories. According to Google's official documentation, neither Googlebot nor other reputable web crawlers read pages blocked via robots.txt (even though it would be technically possible). However, robots.txt is impractical for websites with many dynamic pages, for example online stores with many filter functions.
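A minimal robots.txt might look like this (the domain and paths are placeholders for illustration):

```
# robots.txt, placed in the site root, e.g. https://example.com/robots.txt
User-agent: Googlebot
Disallow: /contact/        # block one directory for Googlebot only
Disallow: /internal.html   # block a single file

User-agent: *
Disallow: /private/        # block this directory for all crawlers
```

Rules are grouped per `User-agent`; an empty `Disallow:` line would mean "allow everything".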
- “Noindex” meta tag
This tag causes the entire page on which you place it to be excluded from the index. Of course, you can also block your complete website from crawling by placing the "noindex" meta tag on all pages.
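The tag goes into the `<head>` of each page that should stay out of the index, for example:

```html
<!-- keeps the page out of the index -->
<meta name="robots" content="noindex">

<!-- alternative: also tells crawlers not to follow links on the page -->
<meta name="robots" content="noindex, nofollow">
```

Use only one of the two variants per page.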
- password protected directories
If you have files that should not be indexed and therefore not be discoverable on Google, you can put them on your server in a password-protected directory. Note, however, that this password protection also applies to your users! It is therefore only suitable for blogs and websites created purely for private purposes, where only certain people should have access (for example, a blog about your planned wedding).
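On an Apache server (this sketch assumes Apache with basic authentication enabled; the file path is a placeholder), a directory can be protected with an .htaccess file:

```
# .htaccess inside the directory to protect
AuthType Basic
AuthName "Private area"
AuthUserFile /var/www/.htpasswd
Require valid-user
```

The password file is created once with `htpasswd -c /var/www/.htpasswd username`; Googlebot, like any visitor without credentials, then only sees a 401 response.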
- Deliver 404
A not very clean, but conceivable option is to deliver a 404 page (Page Not Found error) to Googlebot while showing users the usual content. However, this should only be a last resort and should soon be replaced by a cleaner method such as nofollow or robots.txt.
- Empty body
A related option is to serve Googlebot a document whose body contains no content at all: an empty page gives the crawler nothing to index.
- use iFrames
Again something for hobbyists: Googlebot cannot read iFrames, which often leads to problems with content placed in them. This can, of course, be exploited deliberately by putting your entire content into iFrames. But do not link to the source of your iFrames anywhere! If the link to the source appears on another page, the content behind that link can, of course, still be indexed.
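A wrapper page of this kind could look as follows (the filenames are placeholders; content.html holds the actual text and must not be linked anywhere else):

```html
<!DOCTYPE html>
<html>
  <body>
    <!-- the visible text lives in content.html, loaded via iFrame -->
    <iframe src="content.html" width="100%" height="800"></iframe>
  </body>
</html>
```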
- User Agent Query
If you query the user agent server-side, you can arrange for a 404 page or an empty document to be delivered only when the user agent identifies itself as Googlebot.
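A minimal sketch of this server-side check (the function name and page content are invented for illustration; a real site would plug this into its web framework's request handling):

```python
def response_for(user_agent: str):
    """Return (status, body) depending on the requesting user agent.

    Googlebot receives a 404 with an empty body; every other visitor
    gets the normal page content.
    """
    if "Googlebot" in user_agent:
        return 404, ""  # empty document for the crawler
    return 200, "<html><body>Regular page content</body></html>"

# Example:
print(response_for("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # (404, '')
```

Note that Google considers this kind of cloaking a violation of its guidelines, which is another reason to treat it as a last resort.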