Another helpful Webmaster Help video from Google’s Matt Cutts, in which he answers a submitted question.
John Mueller from Zurich asks:
“Googlebot keeps crawling my JavaScript and thinking that text in my scripts refers to URLs. Can I just disallow crawling of my JavaScript files to fix that?”
Basically, Matt would strongly discourage anyone from blocking Google from ALL JavaScript. If one individual JavaScript file is the source of the problem, you could disallow just that file in robots.txt, or add a nofollow attribute to the specific links involved. The same also applies to CSS.
He says, “It turns out, as we’re executing JavaScript, we do look at the attributes, so you can actually use JavaScript, and put like a nofollow attribute on individual URLs, and so it is the sort of thing where you can do link level granularity there. And you can block, for example, an individual JavaScript file, but in general, I would not recommend blocking all of your JavaScript files.”
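As a rough sketch of the narrow blocking Matt describes (the file and directory names here are hypothetical), a robots.txt rule can disallow one problematic script without touching the rest of your JavaScript:

```
# robots.txt — block crawling of one problematic script only,
# NOT a blanket rule like "Disallow: /js/"
User-agent: Googlebot
Disallow: /js/problem-script.js
```

For link-level granularity, the other option he mentions is putting the nofollow attribute on the individual URL itself, e.g. `<a href="/some-page" rel="nofollow">`, so only that link is affected while the script remains crawlable.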