Been getting some questions about my robots.txt file and what certain things do.
Thankfully, some regular-expression-style wildcards are supported in robots.txt (but not many).
$ means the end of the URL. So if you put .php$ in your robots.txt, it will match anything that ends in .php.
This is really handy when you want to block all .exe, .php, or other file types. For example:
Disallow: /*.PDF$
Disallow: /*.jpeg$
Disallow: /*.exe$
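To make the matching concrete, here is a hedged sketch of how one of those rules behaves (the example URLs are made up):
# /*.exe$ matches /downloads/setup.exe
# but not /downloads/setup.exe?ver=2, because $ anchors the rule to the very end of the URL
Disallow: /*.exe$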
Specifically, these are some of the things I use in my robots.txt:
Disallow: /*? – this blocks all URLs with a ? in them. A good way to avoid duplicate content issues with WordPress blogs. Obviously you only want to use this if you have changed your permalink structure away from the default query-string (?p=) URLs.
Disallow: /*.php$ – this blocks all .php files. Another good way to avoid duplicate content with a WordPress blog.
Disallow: /*.inc$ – you should not be showing .inc or include files to bots (Google Code Search will eat you alive).
Disallow: /*.css$ – there is no reason to have CSS files indexed; it seems silly. The wildcard is used here so it catches CSS files wherever they live.
Disallow: */feed/ – feeds being indexed dilute your site equity. The wildcard * is used in case there are preceding characters.
Disallow: */trackback/ – no reason a trackback URL should be indexed. The wildcard * is used in case there are preceding characters.
Disallow: /page/ – assloads of duplicate content in pages for WordPress.
Disallow: /tag/ – more duplicate content.
Disallow: /category/ – even more duplicate content.
So what if you want to ALLOW a page? For instance, my SERPs tool is serps.php, and under the above rules that would not fly.
Allow: /serps.php – this does the trick!
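Putting it all together, a minimal sketch of what a robots.txt built on these rules could look like (this assumes Google-style wildcard support, and with Google's longest-match rule the more specific Allow line should win out over /*.php$ for serps.php; tweak the paths for your own setup):
User-agent: *
Disallow: /*?
Disallow: /*.php$
Disallow: /*.inc$
Disallow: /*.css$
Disallow: */feed/
Disallow: */trackback/
Disallow: /page/
Disallow: /tag/
Disallow: /category/
Allow: /serps.php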
Keep in mind I am not an SEO, but I have picked up a few tricks along the way.
I never mess around with this stuff, but does duplicate content reduce how well your site ranks overall?
This is an old article but the same rules still apply even now in 2010. The fact is that over time Google has gotten much better at handling duplicate content.
So having a good robots.txt is still worthwhile, especially if you want to block something on your blog or site.
Some really good info. If you are not an SEO, you are pretty darn close.
I’m about to implement this (how does it look?):
User-agent: *
Disallow: /cgi-bin
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
Disallow: /trackback
Disallow: /comments
Disallow: /category/*/*
Disallow: */trackback
Disallow: */comments
Disallow: /*?*
Disallow: /*?
Allow: /wp-content/uploads
Bob, not sure why you need the extra /*/* after category. Just /category/ should get that and all subdirectories of category.
Thanks for the tips shoe. A lot of people don’t realize how much duplicate content on your site can really hurt you.
Thanks, I copied that from another blog so I’ll fix that.
Big Help! Thanks. This should help a bunch.
I am not an SEO, but I play one on the internet…
If Shoe isn’t an expert…he is the closest thing that will talk to us!
Very nice post! We all know how many sites leave this simple step out (like the ebook sales people, where you do a simple site: search and find the members’ download area). You put it out plain and simple!!! Don’t you find it funny how people who are non-SEO people like you make more money than the SEO people? LOL. Have a fantastic week Shoe and everyone else! Make that $$$$$
Thanks for the great tips for robots.txt and WordPress blogs.
Thanks for the excellent tips Shoe. One of my blogs had been performing amazingly until Google decided to hate it last week. These tips are just what I need to try and work out if it’s a duplicate content issue..
I disagree with this assessment on some level. Sure, you don’t want duplicate content and it will negatively impact your site, but using the robots.txt file to fix the problem wouldn’t be my way to go.
The robots file tells Google not to even crawl the page. A better approach would be to use a meta noindex with follow. That tells Google not to index the page, but the page can and will still accumulate link juice and pass it on (unless the page is a dead end, in which case it’s pointless).
See this interview with Matt from a few months ago for a little more in-depth conversation.
I use the robots.txt file because I use a CMS with URL rewriting. I can’t (I don’t think) use meta tags because I have the appearance of duplicate content, not actual duplicate content. For example, the same page might appear under the URL ./Default.aspx?tabid=1 or ./tabid/1/Default.aspx, depending on how the page is accessed. If I add meta tags, then none of the pages will get indexed.
I have the all-in-one seo pack which applies noindex, nofollow meta tags on the actual archive/category/tag pages. I wonder if this is still worth doing but I guess it can’t hurt.
it can.
Thanks a lot for the tips Jeremy
I agree there totally with the above person.
Thanks for that, really need to get to grips with this robots stuff, I am sure it helps with SEO although don’t quite understand how. 🙂
thanks, very helpful
lol at all the spammy comments. “I totally agree with everyone” lol
This is very helpful! Thanks for sharing.
Disallow /category/ is a good one to add. Just make *extra* sure your permalink structure isn’t set up to include “category” – otherwise nothing will be indexed.
To help reduce duplicate content, I also recommend blocking the archives (just add a new line for each year your blog has been online):
# Block Duplicate Content from Archives
Disallow: /2006/
Disallow: /2007/
Disallow: /2008/
I also have this:
Disallow: /*?*
instead of this:
Disallow: /*?
Thanks Shoe! I appreciate all you have done.
🙂
Great tutorial – more of this please! No matter what you say, it’s pretty good SEO stuff.
Thank you for the tips.
Excellent post Jeremy! Every single day I’m learning something new from you, it seems. Just the other day with the link rel= and now today with the robots.txt file.
I’ve just read the basics about the robots.txt file and never really thought much more into it. It’s good we have people like you helping us out along the way.
Thanks!
~Terry
Thanks for the tip. I guess I have the same question as someone above. How does duplicate content hurt your ranking? Is it a consequence of PR being spread across multiple pages – or is it just a case of being penalized for duplication? I’ll have to do more research. Thanks again.
Yes ok, thanks, I’ll use this on 28 blogs made in Brazil.
Interesting that the question mark doesn’t have to be escaped. Normally a question mark would be a regex metacharacter, but I just looked it up in the Google guidelines: a question mark is treated as a regular character.
An important note: Not every crawler understands RegEx in the robots.txt. So you are “protecting” your sites against the major search engines, but not from normal bots. This is ok to avoid duplicate content, I guess.
I wonder if Google isn’t already good at detecting a wordpress installation and can therefore react on the duplicate content accordingly (like ignoring part of the sites, indexing after a schema normal wp blogs will follow)… Just a thought 🙂
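One hedged way to handle the crawler-support point above (the user-agent tokens and paths here are just illustrative) is to keep the wildcard rules in a group aimed at crawlers known to support them, and give everything else plain prefix rules:
User-agent: Googlebot
Disallow: /*?
Disallow: /*.php$

User-agent: *
Disallow: /trackback/
Disallow: /feed/
Keep in mind a bot only follows the most specific group that matches it, so the Googlebot group needs to repeat any plain rules you still want applied to Google.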
OMG. Will this keep spammers from doing that obnoxious thing where they copy a whole journal entry (or the majority of one) into their fake blogs, making it look like they are quoting it (“Someone said something great over at blahblahblah dot com, ‘entire post here,'”) with no other content? Just to get on google and steal my links? I’m sure they’re using robots at some stage….
Well.. I just had a conversation with Mr. Cutts about this and many other things 3 days ago.
You are getting the Disallow and noindex tags confused in the robots.txt. Disallow will still let the bots visit and index them but not take in the content.
Well, it’s not really true regex… it’s just a loose adaptation.
I doubt it’s going to keep spammers out 😉
Agreed that disallow will allow the bots to visit but not take the content. Maybe I said this wrong.
Say for example you’ve got links coming into a page you’ve disallowed in robots.txt. This wastes any link juice that (linking) page is giving you. Using “meta noindex” will allow the bots to follow the links on the “meta noindexed” page and pass on the link juice, and also alleviate any dup issues.
So has he changed his stance on whether a “meta noindexed” page accumulates and passes PageRank? On a robots-disallowed page the bots won’t take the content, so there will be nowhere to pass PageRank to.
The way I understand it is this:
meta noindex – don’t index but follow and pass pr
meta nofollow – index but don’t follow links or pass any pr on entire page
href nofollow – don’t pass pr on that link
robots disallow – don’t index or follow or pass pr (they can reference the url still, just without content there is nowhere to pass any link juice).
Great list of tips Shoe … I can bet this helps a lot.
yeah nothing keeps them out
[…] WordPress robots.txt tips against duplicate content – ShoeMoney® Some useful tips on updating your robots.txt file to avoid duplicate content problems with WordPress. (tags: seo wordpress) […]
Matt Cutts says it does.
For smaller blogs this might not be the best thing to do when it comes to SEO. If implementing everything this way, you are relying on Google to find older posts (if they don’t have links to them) by going directly through the homepage. Requiring Google to go back 20 pages to find an article is a good way to end up in the supplemental index (which, of course they claim doesn’t exist anymore, but IMO it does).
Thanks for the list and explaining it. I need to add a robots.txt file to my blog.
Thanks for these tips – I hadn’t even thought of leveraging the robots file against duplicate content (much easier than disabling those features!). Thanks!
I shall have to take another look at my robots.txt!
Typo in the title? Or am I seeing things?
Thank you for posting these tips for WordPress on the robots.txt file.
Bob,
Why do you need to block /comments/? I thought having comments indexed would be a good thing. This is new to me so any pointers would be great.
Thanks.
That’s great. But don’t you think you are getting off topic?
It does. Duplicate content ruins your site.
You can also use
Disallow: /wp
instead of all those others like
Disallow: /wp-admin
Disallow: /wp-includes
Disallow: /wp-content/plugins
Disallow: /wp-content/cache
Disallow: /wp-content/themes
I was wondering that as well. They can find and react to all sorts of things, I think they would know about WP installs and the issues it has.
[…] Read more of this article at ShoeMoney.com […]
This is only applicable if your permalink structure doesn’t include the year. Otherwise this will result in a mess..
Hello to all, I just want to share my post regarding robots.txt that really helped my site.
Thanks! Very useful again. Avoiding duplicate content really helped me rank well.
[…] WordPress robots.txt tips against duplicate content […]
Well, this seems to be a better version than all the noindex plugins going around. Definitely will give it a try!
[…] a few days ago I read an article by Jeremy Shoemaker on this topic. He proposed using a robots.txt file, which is […]
Very helpful post for me as I have been looking how to use the robots.txt file in this way for some time.
[…] WordPress robots.txt tips against duplicate content Disallow: /*? – this blocks all urls with a ? in them. A good way to avoid duplicate content issues with wordpress blogs. Obviously you only want to use this if you have changed your url structure to not be 100% ?=. […]
I didn’t initially include a robots.txt in my blog and never had any issues with dupe content. It wasn’t until just recently I decided to add one, more for experimental purposes. So far, search engine traffic hasn’t improved or declined either way. WordPress out of the box isn’t great for SEO purposes, but with minor tweaks, I find that a robots.txt isn’t really necessary.
-Guy
http://www.nullamatix.com
Uzair,
How is this off topic? If Shoe thinks a robots.txt will help in SERPs, your site will get more traffic, and ultimately earn more cash. Isn’t that one of the focuses of this blog? “Skills to Pay the Bills” right?
-Guy
http://www.nullamatix.com
Um, no. The only way to prevent those types of attacks would involve IP based content delivery.
Do you have some before/after stats you can share? I understand the penalty, but just want to understand how it improves.
[…] is. To avoid this effectively I found a neat and, above all, quick solution at Shoemoney.com. He uses this […]
Ya, I don’t use Disfollows..
Why can’t my post be displayed?
I agree, disallowing category and page is not the smartest move if you want to let Google find old content.
Shoe is making an “SEO Linking Gotcha”
All the pages blocked with robots.txt will still gather juice and can still rank
Simple proof is that my WordPress SEO Masterclass page is still ranking after being blocked by robots.txt for a couple of weeks, as it was written as a paid post – actually it is ranking higher than Joost’s similar page.
This article explains why so many people have got this wrong for years
http://andybeard.eu/2007/11/seo-linking-gotchas-even-the-pros-make.html
It gets worse when people start mixing this kind of advice with their “All in One SEO”, because the noindex statements it adds never get seen by Googlebot.
[…] A few days ago Shoemoney wrote about how to avoid such duplicates with a couple of entries in the robots.txt. In this case the list of directives is tailored to WordPress, but it can also be used for other systems (adjustments may be needed). […]
Thanks for this Jeremy. I have been looking for a good robots.txt file. I have no idea what to put in, so this will help.
[…] WordPress Robots.txt Tips Against Duplicate Content from Shoemoney. […]
[…] So should you use the plugins mentioned above or the robots.txt method? To understand the difference you have to dive a little deeper into the SEO world: while the plugins described insert Google’s noindex or nofollow syntax into the header of the affected pages, the robots.txt variant ensures that the pages in question are never accessed at all. Whether the two variants make any difference in practice is currently debated among SEO experts – see also the discussion on the relevant post at Shoemoney. […]
That’s good that they added it; duplication is bad.
Great post with some great descriptions of what these certain words will “do.” Thanks for the post!
[…] are less likely to suffer from the penalties of duplicate content. WordPress users should see the article on Shoemoney about robots.txt […]
[…] to basics” type post when he shared his tips on how to use the robots.txt in WordPress to prevent duplicate content. This is a great reference to use when editing your robots.txt to tweak your site and ensure you […]
wow!!! Never knew that..??
Great tips, I’ll enhance my robots.txt file ASAP
[…] Save it as robots.txt. This has all been optimized for WordPress. […]
[…] talks about his robots.txt file and how it guards against duplicate content in search engine results. Most of the strategies he’s using can be replicated for Movable Type and TypePad users. […]
Shoe, I just checked your actual robots.txt. Why do you have:
Disallow: /sitemap.xml
That seems like trouble?
[…] more tips on optimizing robots.txt for WordPress, check out Shoemoney’s suggestions. And keep in mind that like Shoemoney, I am not an SEO. I’ve just been using this method for […]
[…] Line: We looked at Josh’s robots.txt post, as well as at ShoeMoney’s robot.txt post to figure out what we want our robots.txt file to look […]
[…] Sort out your Robots.txt file to make sure Google doesn’t index that RSS or other content in WordPress that can cause dupe content horror for you and your blog. Shoemoney said it better than I can here. […]
[…] to reading a post by Jeremy at ShoeMoney.com and reading about different User-agents, I was able to create a robots.txt file that will help […]
Very nice article, thanks!
I think this article has been translated into other languages in the converter to use a car very enjoyable
Thank you for this article and all the comments.
[…] Robots.txt, you can find relevant instructions at Shoemoney or use a generator like the one that […]
[…] http://www.shoemoney.com/2008/03/03/wordpress-robotstxt-tips-against-duplicate-content/ […]
I also think the author, date, comment, and in some cases RSS archives should be noindexed if you want silo SEO. Even adding the .html extension is a good idea, and stripping out the top-level category.
There is a post about it at WordPress Robots.txt for Silo SEO.
Hi, I hope it’s good for my writing assignments, if I get a note if I do not repeat here
I visited some of these blogs. Some are good but some were not interesting to me. But good list.
Nice trick, thanks for that, I’ll be using that in my sites to lop off spurious content from the feeds…
great great post