Should I block Googlebot from crawling JavaScript and CSS?

I noticed that Googlebot is regularly crawling JavaScript and CSS from my WordPress blog. Here are some entries from my Apache log:

66.249.75.66 - - [18/Mar/2013:08:07:28 +0000] "GET /wp-content/themes/shell-master/media-queries.css?ver=0.1.1 HTTP/1.1" 200 1541 "http://infoheap.com/" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "V:infoheap.com t:20130318080728 D:875 -"
66.249.76.66 - - [18/Mar/2013:18:45:08 +0000] "GET /wp-content/plugins/contact-form-7/includes/js/scripts.js?ver=3.3.3 HTTP/1.1" 301 286 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "V:infoheap.com t:20130318184508 D:323 -"
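If you want to see how often this happens on your own site, a quick way is to filter the access log for Googlebot requests to JS/CSS paths. A minimal sketch, using the two sample lines above as stand-in log data (in practice you would read lines from your real log file, whose path depends on your server setup):

```python
# Sketch: pull Googlebot requests for JS/CSS out of Apache access log lines.
# log_lines holds the two sample entries from above; substitute your own log.
import re

log_lines = [
    '66.249.75.66 - - [18/Mar/2013:08:07:28 +0000] "GET /wp-content/themes/shell-master/media-queries.css?ver=0.1.1 HTTP/1.1" 200 1541 "http://infoheap.com/" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '66.249.76.66 - - [18/Mar/2013:18:45:08 +0000] "GET /wp-content/plugins/contact-form-7/includes/js/scripts.js?ver=3.3.3 HTTP/1.1" 301 286 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
]

# Capture the request path when it points at a .js or .css resource
# (query strings like ?ver=... are allowed after the extension).
pattern = re.compile(r'"GET (\S+\.(?:js|css)\S*) HTTP')

for line in log_lines:
    if "Googlebot" in line:
        m = pattern.search(line)
        if m:
            print(m.group(1))
```

Running this over a full day of logs gives a rough sense of how much of Googlebot's crawl budget goes to static assets.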

Earlier I never paid much attention to it. But recently I did some research, thinking that maybe I could block crawling of JavaScript and CSS so that Googlebot could spend its time crawling other content on the site.

I found this official video on the topic, titled Don’t block Googlebot from crawling JavaScript and CSS, by Matt Cutts (published on the Google Webmasters channel).

This is pretty interesting and makes a lot of sense. Matt Cutts clearly says Google is getting better at processing JavaScript and CSS, and it makes sense from the user's perspective as well. Here are my thoughts on it.

  1. Presentation is becoming more and more important in addition to the content itself. So it is important for any search engine to crawl JavaScript and CSS.
  2. There may be things hidden in the HTML, so plain text analysis alone may not be a good idea. A responsible search engine should crawl and interpret everything.
  3. There may be sites with malicious JavaScript that have one thing in the HTML but show something else to the user. So it makes sense to crawl everything on such pages.
  4. I have even seen Flash content shown in search results. It's a good thing for Flash content discovery.
  5. From a site performance perspective, it is also a good idea to remove unused JavaScript and CSS from pages. That way Google does not have to crawl dead JavaScript and CSS.

So what should be in robots.txt? I think a good robots.txt (at least as a starting point) for a WordPress site is:

User-agent: *
Disallow: /wp-admin/
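As a sanity check, these rules can be exercised with Python's built-in robots.txt parser. A small sketch (the example.com URLs are made up for illustration):

```python
# Sketch: verify the robots.txt rules above with Python's stdlib parser.
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Feed the parser the same two lines proposed above.
rp.parse([
    "User-agent: *",
    "Disallow: /wp-admin/",
])

# CSS/JS under wp-content remain crawlable by Googlebot...
print(rp.can_fetch("Googlebot", "http://example.com/wp-content/themes/x/style.css"))  # True
# ...while the admin area is blocked.
print(rp.can_fetch("Googlebot", "http://example.com/wp-admin/options.php"))  # False
```

This confirms that the starting-point robots.txt keeps JavaScript and CSS open to Googlebot while still hiding the admin area.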

Also see: Online robots.txt sandbox

