Been getting some questions about my robots.txt file and what certain things do.

Thankfully, the major search engines support some pattern matching in robots.txt – not full regular expressions, but enough to be useful.

In these patterns, $ means the end of the URL. So if you put .php$ in a rule in your robots.txt, it will match anything that ends in .php.

This is really handy when you want to block all .exe, .php, or other file types (keep in mind the matching is case-sensitive, so /*.PDF$ will not catch lowercase .pdf URLs). For example:

Disallow: /*.PDF$
Disallow: /*.jpeg$
Disallow: /*.exe$
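
To make the $ anchor concrete, here is roughly how the anchored and unanchored forms differ (the example URLs are made up, and the trailing # notes are just robots.txt comments):

Disallow: /*.php$ # matches /index.php but not /index.php?page=2
Disallow: /*.php # matches any URL containing .php, including /index.php?page=2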

Specifically, these are some of the things I use in my robots.txt:

Disallow: /*? – this blocks all URLs with a ? in them. A good way to avoid duplicate content issues with WordPress blogs. Obviously you only want to use this if you have changed your permalink structure so it isn't 100% query-string (?) based.

Disallow: /*.php$ – this blocks all URLs ending in .php. Another good way to avoid duplicate content with a WordPress blog.

Disallow: /*.inc$ – you should not be showing .inc or other include files to bots (Google Code Search will eat you alive).

Disallow: /*.css$ – there is no reason to have CSS files indexed; it just seems silly. The wildcard is used here so it catches CSS files in any directory.

Disallow: */feed/ – feeds being indexed dilutes your site equity. The wildcard * is used in case there are preceding characters in the path.

Disallow: */trackback/ – no reason a trackback URL should be indexed. Again, the wildcard * covers any preceding characters in the path.

Disallow: /page/ – WordPress paginated archives (/page/2/ and so on) are assloads of duplicate content.

Disallow: /tag/ – more duplicate content.

Disallow: /category/ – even more duplicate content.

So what if you want to ALLOW a page? For instance, my SERPs tool is serps.php, and under the rules above it would be blocked.

Allow: /serps.php – this does the trick!
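
Putting the pieces together, a stripped-down robots.txt along the lines described above might look something like this (treat it as a sketch to adapt, not a drop-in file; the Allow line assumes your page lives at /serps.php like mine does):

User-agent: *
Disallow: /*?
Disallow: /*.php$
Disallow: /*.inc$
Disallow: /*.css$
Disallow: */feed/
Disallow: */trackback/
Disallow: /page/
Disallow: /tag/
Disallow: /category/
Allow: /serps.php

As I understand Google's handling, the longer (more specific) matching pattern wins, which is why the Allow line for /serps.php overrides the shorter Disallow: /*.php$ rule.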

Keep in mind I am not an SEO, but I have picked up a few tricks along the way.

By Jeremy Schoemaker

Jeremy "ShoeMoney" Schoemaker is the founder & CEO of ShoeMoney Media Group, and to date has sold 6 companies and done over $10 million in affiliate revenue. In 2013 Jeremy released his #1 international best-selling autobiography, "Nothing's Changed But My Change" - The ShoeMoney Story. You can read more about Jeremy on his Wikipedia page here.

106 thoughts on “WordPress robots.txt tips against duplicate content”
  1. I never mess around with this stuff, but does duplicate content reduce how well your site ranks overall?

    1. This is an old article, but the same rules still apply even now in 2010. The fact is that over time Google has gotten better at figuring out how to handle duplicate content.

      So having a good robots.txt is still worthwhile if you want to block something on your blog or site.

  2. I’m about to implement this (how does it look?):
    User-agent: *
    Disallow: /cgi-bin
    Disallow: /wp-admin
    Disallow: /wp-includes
    Disallow: /wp-content/plugins
    Disallow: /wp-content/cache
    Disallow: /wp-content/themes
    Disallow: /trackback
    Disallow: /comments
    Disallow: /category/*/*
    Disallow: */trackback
    Disallow: */comments
    Disallow: /*?*
    Disallow: /*?
    Allow: /wp-content/uploads

  3. Bob, not sure why you need the extra /*/* after category. Just /category/ should catch that and all subdirectories of category.

  4. Thanks for the tips shoe. A lot of people don’t realize how much duplicate content on your site can really hurt you.

  5. I am not an SEO, but I play one on the internet…

    If Shoe isn’t an expert…he is the closest thing that will talk to us!

  6. Very nice post! We all know how many sites leave this simple step out (like the ebook sales people, where you do a simple site: search and find the members’ download area). You put it out plain and simple!!! Don’t you find it funny how people who are non-SEO people like you make more money than the SEO people? LOL. Have a fantastic week Shoe and everyone else! Make that $$$$$

  7. Thanks for the excellent tips Shoe. One of my blogs had been performing amazingly until Google decided to hate it last week. These tips are just what I need to try and work out whether it’s a duplicate content issue.

  8. I disagree with this assessment on some level. Sure, you don’t want duplicate content and it will negatively impact your site, but using the robots.txt file to fix the problem wouldn’t be my way to go.

    The robots file tells Google not to even crawl the page. A better scenario would be to use the meta noindex and follow. This tells Google not to index the page, but it can and will still accumulate link juice to pass it on (unless this page is a dead end, then it’s pointless).

    See this interview with Matt from a few months ago for a little more in-depth conversation.

    1. I use the robots.txt file, because I use a CMS with URL rewriting. I can’t (I don’t think) use meta tags because I have the appearance of duplicate content–not actual duplicate content. For example, the same page might appear under the URL ./Default.aspx?tabid=1 or ./tabid/1/Default.aspx, depending on how the page is accessed. If I add meta tags, then none of the pages will get indexed.

  9. I have the all-in-one seo pack which applies noindex, nofollow meta tags on the actual archive/category/tag pages. I wonder if this is still worth doing but I guess it can’t hurt.

  10. Thanks for that, I really need to get to grips with this robots stuff. I am sure it helps with SEO, although I don’t quite understand how. 🙂

  11. Disallow /category/ is a good one to add. Just make *extra* sure your Permalink structure isn’t set up to include “category” – otherwise nothing will be indexed.

    To help reduce duplicate content, I also recommend blocking the archives (just add a new line for each year your blog has been online)

    # Block Duplicate Content from Archives
    Disallow: /2006/
    Disallow: /2007/
    Disallow: /2008/

    I also have this
    Disallow: /*?*

    instead of this:
    Disallow: /*?

  12. Blocking /category/ is a good one. Just need to be careful that your Permalink structure isn’t set up to include “category” — otherwise nothing will get indexed.

    I also use the following to block the archives. Just add a new line for each year your blog has been online.

    # Block Duplicate Content From Archives
    Disallow: /2006/
    Disallow: /2007/
    Disallow: /2008/

    One more is that I use:
    Disallow: /*?*

    instead of:
    Disallow: /*?

  13. Great tutorial – more of this please! No matter what you say, it’s pretty good SEO stuff.

  14. Excellent post Jeremy! Every single day I’m learning something new from you, it seems. Just the other day with the link rel= and now today with the robots.txt file.

    I’ve just read the basics about the robots.txt file and never really thought much more into it. It’s good we have people like you helping us out along the way.

    Thanks!
    ~Terry

  15. Thanks for the tip. I guess I have the same question as someone above. How does duplicate content hurt your ranking? Is it a consequence of PR being spread across multiple pages – or is it just a case of being penalized for duplication? I’ll have to do more research. Thanks again.

  16. Interesting that the question mark doesn’t have to be escaped. Normally a question mark would be a regex metacharacter, but I just looked it up in the Google guidelines: a question mark is treated as a regular character.
    An important note: not every crawler understands this pattern matching in robots.txt. So you are “protecting” your site against the major search engines, but not from other bots. That is OK for avoiding duplicate content, I guess.

  17. I wonder if Google isn’t already good at detecting a WordPress installation and can therefore react to the duplicate content accordingly (like ignoring parts of the site, or indexing according to the structure normal WP blogs follow)… Just a thought 🙂

  18. OMG. Will this keep spammers from doing that obnoxious thing where they copy a whole journal entry (or the majority of one) into their fake blogs, making it look like they are quoting it (“Someone said something great over at blahblahblah dot com, ‘entire post here,'”) with no other content? Just to get on google and steal my links? I’m sure they’re using robots at some stage….

  19. Well… I just had a conversation with Mr. Cutts about this and many other things 3 days ago.

    You are getting Disallow in robots.txt and the noindex tag confused. Disallow will still let the bots visit and index the URLs, but not take in the content.

  20. Agreed that disallow will allow the bots to visit but not take the content. Maybe I said this wrong.

    Say for example you’ve got links coming into a page you’ve disallowed in robots.txt. This wastes any link juice that (linking) page is giving you. Using “meta noindex” will allow the bots to follow the links on the “meta noindexed” page and pass on the link juice, and also alleviate any dup issues.

    So has he changed his stance on whether a “meta noindexed” page can accumulate and pass PageRank? On a robots-disallowed page the bots won’t take the content, so there will be nowhere to pass PageRank to.

    The way I understand it is this:

    meta noindex – don’t index, but follow and pass PR
    meta nofollow – index, but don’t follow links or pass any PR from the entire page
    href nofollow – don’t pass PR on that link
    robots disallow – don’t index, follow, or pass PR (they can still reference the URL, but without content there is nowhere to pass any link juice)
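
    (For reference, the meta robots tags being contrasted above would look something like this in a page’s head section – a sketch of the standard syntax, not something taken from the post itself:)

    <meta name="robots" content="noindex,follow">
    <meta name="robots" content="index,nofollow">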

  21. […] WordPress robots.txt tips against duplicate content – ShoeMoney® Some useful tips on updating your robots.txt file to avoid duplicate content problems with WordPress. (tags: seo wordpress) […]

  22. For smaller blogs this might not be the best thing to do when it comes to SEO. If implementing everything this way, you are relying on Google to find older posts (if they don’t have links to them) by going directly through the homepage. Requiring Google to go back 20 pages to find an article is a good way to end up in the supplemental index (which, of course they claim doesn’t exist anymore, but IMO it does).

  23. Thanks for the list and explaining it. I need to add a robots.txt file to my blog.

  24. Thanks for these tips – I hadn’t even thought of leveraging the robots file against duplicate content (much easier than disabling those features!). Thanks!

  25. Bob,
    Why do you need to block /comments/? I thought having comments indexed would be a good thing. This is new to me, so any pointers would be great.

    Thanks.

  26. You can also use
    Disallow: /wp
    instead of all those others like
    Disallow: /wp-admin
    Disallow: /wp-includes
    Disallow: /wp-content/plugins
    Disallow: /wp-content/cache
    Disallow: /wp-content/themes

  27. I was wondering that as well. They can find and react to all sorts of things, I think they would know about WP installs and the issues it has.

  28. […] Read more of this article at ShoeMoney.com […]

  29. This is only applicable if your permalink structure is not set up to include the year. Otherwise this will result in a mess.

  30. Thanks! Very useful again. Avoiding duplicate content has really helped me rank well.

  31. […] WordPress robots.txt tips against duplicate content […]

  32. Well, this seems to be a better version than all the noindex plugins going around. Def. will give it a try!

  33. […] a few days ago I read an article by Jeremy Shoemaker on this topic. He proposed using a robots.txt file, which is […]

  34. Very helpful post for me, as I have been looking into how to use the robots.txt file this way for some time.

  35. […] WordPress robots.txt tips against duplicate content Disallow: /*? – this blocks all URLs with a ? in them. A good way to avoid duplicate content issues with WordPress blogs. Obviously you only want to use this if you have changed your permalink structure so it isn’t 100% query-string (?) based. […]

  36. I didn’t initially include a robots.txt in my blog and never had any issues with dupe content. It wasn’t until just recently that I decided to add one, more for experimental purposes. So far, search engine traffic hasn’t improved or declined either way. WordPress out of the box isn’t great for SEO purposes, but with minor tweaks, I find that a robots.txt isn’t really necessary.

    -Guy
    http://www.nullamatix.com

  37. Uzair,

    How is this off topic? If Shoe thinks a robots.txt will help in SERPs, your site will get more traffic, and ultimately earn more cash. Isn’t that one of the focuses of this blog? “Skills to Pay the Bills” right?

    -Guy
    http://www.nullamatix.com

  38. Um, no. The only way to prevent those types of attacks would involve IP based content delivery.

  39. Do you have some before/after stats you can share? I understand the penalty, but just want to understand how it improves.

  40. […] is. To avoid this effectively, I found a neat and above all quick solution at Shoemoney.com. He uses these […]

  41. I agree, disallowing category and page is not the smartest move if you want Google to find old content.

  42. Shoe is making an “SEO Linking Gotcha”

    All the pages blocked with robots.txt will still gather juice and can still rank

    Simple proof is that my WordPress SEO Masterclass page is still ranking after being blocked by robots.txt for a couple of weeks (it was written as a paid post) – actually it is ranking higher than Joost’s similar page.

    This article explains why so many people have got this wrong for years
    http://andybeard.eu/2007/11/seo-linking-gotchas-even-the-pros-make.html

    It gets worse when people start mixing this kind of advice with their “All in One SEO” plugin, because the noindex statements it adds never get seen by Googlebot on pages that robots.txt already blocks.

  43. […] A few days ago Shoemoney wrote about how to avoid such duplicates with a few entries in robots.txt. In this case the list of directives is tailored to WordPress, but it can also be used for other systems (some adjustments may be needed). […]

  44. Thanks for this Jeremy. I have been looking for a good robots.txt file. I have no idea what to put in it, so this will help.

  45. […] WordPress Robots.txt Tips Against Duplicate Content from Shoemoney. […]

  46. […] So should you use the plugins mentioned above or the robots.txt method? To understand the difference, you have to dive a little deeper into the SEO world: while the plugins described insert Google’s noindex or nofollow syntax into the header of the affected pages, the robots.txt variant ensures that the pages in question are never accessed at all. Whether the two approaches make a difference in practice is something the SEO experts are currently debating – see also the discussion on the relevant entry at Shoemoney. […]

  47. Great post with some great descriptions of what these certain words will “do.” Thanks for the post!

  48. […] are less likely to suffer from the penalties of duplicate content. WordPress users should see the article on Shoemoney about robots.txt […]

  49. […] to basics” type post when he shared his tips on how to use the robots.txt in WordPress to prevent duplicate content. This is a great reference to use when editing your robots.txt to tweak your site and ensure you […]

  50. […] Save it as robots.txt. This has all been optimized for WordPress. […]

  51. […] talks about his robots.txt file and how it guards against duplicate content in search engine results. Most of the strategies he’s using can be replicated for Movable Type and TypePad users. […]

  52. […] likely to suffer from the penalties of duplicate content. WordPress users should see the article on Shoemoney about robots.txt […]

  53. Shoe, I just checked your actual robots.txt. Why do you have:

    Disallow: /sitemap.xml

    That seems like trouble?

  54. […] more tips on optimizing robots.txt for WordPress, check out Shoemoney’s suggestions. And keep in mind that like Shoemoney, I am not an SEO. I’ve just been using this method for […]

  55. Using Robots.txt to Improve your PageRank — WebDiggin.com: An Adventure to Make Money Online says:

    […] Line: We looked at Josh’s robots.txt post, as well as at ShoeMoney’s robots.txt post to figure out what we want our robots.txt file to look […]

  56. Wordpress Duplicate Content Penalty Fix : Duplicate Content Filter Wordpress » UK Affiliate Marketing Blog - Kirsty's Affiliate Marketing Guide - Affiliate Stuff UK says:

    […] Sort out your Robots.txt file to make sure Google doesn’t index that RSS or other content in WordPress that can cause dupe content horror for you and your blog. Shoemoney said it better than I can here. […]

  57. […] WordPress robots.txt tips against duplicate content […]

  58. […] WordPress robots.txt tips against duplicate content […]

  59. […] to reading a post by Jeremy at ShoeMoney.com and reading about different User-agents, I was able to create a robots.txt file that will help […]

  60. […] are less likely to suffer from the penalties of duplicate content. WordPress users should see the article on Shoemoney about robots.txt […]

  61. I think this article has been translated into other languages in the converter to use a car very enjoyable

  62. 8 tips to better secure your Wordpress installation | Web Design Blog x2interactive. A blog about the Internet and Web Design says:

    […] Robots.txt, you can find relevant instructions at Shoemoney or use a generator like the one that […]

  63. I also think the author, date, comment, and in some cases RSS archives should be noindexed if you want silo SEO. Even adding the .html extension is a good idea, as is stripping out the top-level category.

    There is a post about it at WordPress Robots.txt for Silo SEO.

  65. Hi, I hope it’s good for my writing assignments, if I get a note if I do not repeat here

  66. I visited some of these blogs. Some are good, but some were not interesting to me. But a good list.

  67. Nice trick, thanks for that, I’ll be using that in my sites to lop off spurious content from the feeds…

Comments are closed.