How Google sees your site


Note: Google updated this guideline on October 27, 2014. It now states...

"To help Google fully understand your site's contents, allow all of your site's assets, such as CSS and JavaScript files, to be crawled. The Google indexing system renders webpages using the HTML of a page as well as its assets such as images, CSS, and Javascript files."

If Google cannot understand your page, it cannot rank you

Google needs a complete picture of your webpages in order to understand them fully.

Googlebot

Google uses a web crawler named Googlebot to gather information about your website.

Every webmaster should know that a search engine crawler like Googlebot must be able to "crawl" your site in order for it to be included in search engine results.

Which pages and files search engine crawlers may visit is determined by a file called robots.txt, found at the root of your site.
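For illustration, a minimal robots.txt might look like this (the blocked path here is hypothetical):

```text
# Applies to all crawlers, including Googlebot
User-agent: *
# Crawlers may not visit anything under /admin/
Disallow: /admin/
# Everything not disallowed may be crawled
```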

Page Resources

Most webpages use CSS and/or JavaScript. These are often external files that are linked to from your HTML.

Google must have access to these resources in order to fully understand your webpage, but often these files are blocked by the robots.txt file.
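As a sketch (the directory names are made up for the example), a robots.txt like the one below would block Googlebot from the CSS and JavaScript it needs to render the page. Deleting the Disallow lines, or explicitly allowing the asset directories, follows Google's guideline:

```text
User-agent: *
# These lines prevent Googlebot from fetching page assets:
Disallow: /css/
Disallow: /js/

# To comply with the guideline, remove the lines above,
# or explicitly allow the asset directories instead:
# Allow: /css/
# Allow: /js/
```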

How to check if your site is following this guideline

Our Google guideline tool can tell you what files (if any) are blocked from Googlebot.
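If you would rather check by hand, Python's standard library can parse a robots.txt and report whether a given URL is blocked for Googlebot. This is a rough sketch; the rules and URLs below are invented for the example:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks an asset directory
robots_txt = """\
User-agent: *
Disallow: /assets/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check whether Googlebot may fetch each resource
for url in ["https://example.com/assets/site.css",
            "https://example.com/index.html"]:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "allowed" if allowed else "BLOCKED")
```

Running this prints that the stylesheet under /assets/ is blocked while the HTML page is allowed, which is exactly the situation the guideline warns against.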

Key Concepts:

Make sure that search engine spiders are able to see your site correctly. This is vital: a page Google cannot fully render is a page Google cannot fully understand.


by Patrick Sexton
