WordPress robots.txt tips against duplicate content

Been getting some questions about my robots.txt file and what certain things in it do.

Thankfully, a few regular-expression-style wildcards are supported in robots.txt by the major search engines (but not many).

In regex, $ means the end of the string – here, the end of the URL. So if you use .php$ in your robots.txt, it will match any URL that ends in .php.

This is really handy when you want to block all .exe, .php, or other file types. For example:

Disallow: /*.pdf$
Disallow: /*.jpeg$
Disallow: /*.exe$
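
If you want to sanity-check which URLs a wildcard rule actually catches, here is a rough Python sketch of the matching logic described above – an approximation for illustration only, not the real parser any search engine uses:

import re

def rule_to_regex(rule):
    # Escape the rule, then restore the two supported wildcards:
    # '*' matches any run of characters, a trailing '$' anchors the end of the URL.
    pattern = re.escape(rule).replace(r'\*', '.*')
    if pattern.endswith(r'\$'):
        pattern = pattern[:-2] + '$'
    return re.compile('^' + pattern)

def blocked(path, rules):
    # A path is blocked if any Disallow pattern matches it from the start.
    return any(rule_to_regex(r).match(path) for r in rules)

rules = ['/*?', '/*.php$', '/page/']
print(blocked('/index.php?p=123', rules))    # True  - caught by /*?
print(blocked('/archives/my-post/', rules))  # False - no rule matches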

Specifically, here are some of the things I use in my robots.txt:

Disallow: /*? – this blocks all URLs with a ? in them. A good way to avoid duplicate content issues with WordPress blogs. Obviously you only want to use this if you have changed your permalink structure so your URLs aren’t all ?p= query strings.

Disallow: /*.php$ – This blocks all .php files. Another good way to avoid duplicate content with a WordPress blog.

Disallow: /*.inc$ – you should not be showing .inc or other include files to bots (Google Code Search will eat you alive).

Disallow: /*.css$ – there’s no reason to have CSS files indexed; it seems silly. The wildcard is used here in case there are many CSS files.

Disallow: */feed/ – having your feeds indexed dilutes your site equity. The wildcard * is used in case there are preceding characters.

Disallow: */trackback/ – no reason a trackback URL should be indexed. The wildcard * is used in case there are preceding characters.

Disallow: /page/ – assloads of duplicate content in WordPress’s paginated archive pages.

Disallow: /tag/ – more duplicate content.

Disallow: /category/ – even more duplicate content.

So what if you want to ALLOW a page? For instance, my SERPS tool is serps.php, and under the above rules it would be blocked.

Allow: /serps.php – this does the trick!
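
Putting it together, a robots.txt pulling in the rules above might look something like this (the Allow line is specific to my site, so swap in your own; I list it first because some older parsers stop at the first matching rule, while Google just uses the most specific one):

User-agent: *
Allow: /serps.php
Disallow: /*?
Disallow: /*.php$
Disallow: /*.inc$
Disallow: /*.css$
Disallow: */feed/
Disallow: */trackback/
Disallow: /page/
Disallow: /tag/
Disallow: /category/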

Keep in mind I am not an SEO, but I have picked up a few tricks along the way.

106 thoughts on “WordPress robots.txt tips against duplicate content”

  1. bob

    I never mess around with this stuff, but does duplicate content reduce how well your site ranks overall?

    1. Kamal Hasa

      This is an old article, but the same rules still apply even now in 2010. The fact is that over time Google has gotten better at handling duplicate content.

      So having a good robots.txt is still worthwhile if you want to block something on your blog or site.

  2. bob c

    I’m about to implement this (how does it look?):
    User-agent: *
    Disallow: /cgi-bin
    Disallow: /wp-admin
    Disallow: /wp-includes
    Disallow: /wp-content/plugins
    Disallow: /wp-content/cache
    Disallow: /wp-content/themes
    Disallow: /trackback
    Disallow: /comments
    Disallow: /category/*/*
    Disallow: */trackback
    Disallow: */comments
    Disallow: /*?*
    Disallow: /*?
    Allow: /wp-content/uploads

  3. Ian

    Thanks for the tips shoe. A lot of people don’t realize how much duplicate content on your site can really hurt you.

  4. RacerX

    I am not an SEO, but I play one on the internet…

    If Shoe isn’t an expert…he is the closest thing that will talk to us!

  5. Arejay

    Very nice post! We all know how many sites leave this simple step out (like the ebook sales people, where you do a simple site: search and find the members’ download area). You put it out plain and simple!!! Don’t you find it funny how people who are non-SEO people like you make more money than the SEO people. LOL. Have a fantastic week Shoe and everyone else! Make that $$$$$

  6. Michelle

    Thanks for the excellent tips Shoe. One of my blogs had been performing amazingly until Google decided to hate it last week. These tips are just what I need to try and work out if it’s a duplicate content issue.

  7. TheMadHat

    I disagree with this assessment on some level. Sure, you don’t want duplicate content and it will negatively impact your site, but using the robots.txt file to fix the problem wouldn’t be my way to go.

    The robots file tells Google not to even crawl the page. A better scenario would be to use the meta noindex and follow. This tells Google not to index the page, but it can and will still accumulate link juice to pass it on (unless this page is a dead end, then it’s pointless).
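
    For reference, that tag just goes in the page’s head section and looks like this:

    <meta name="robots" content="noindex,follow" />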

    See this interview with Matt from a few months ago for a little more in-depth conversation.

    1. Unpublished Guy

      I use the robots.txt file, because I use a CMS with URL rewriting. I can’t (I don’t think) use meta tags because I have the appearance of duplicate content–not actual duplicate content. For example, the same page might appear under the URL ./Default.aspx?tabid=1 or ./tabid/1/Default.aspx, depending on how the page is accessed. If I add meta tags, then none of the pages will get indexed.

  8. Solo Programmer

    I have the All in One SEO Pack, which applies noindex, nofollow meta tags on the actual archive/category/tag pages. I wonder if this is still worth doing, but I guess it can’t hurt.

  9. Guy

    Disallow: /category/ is a good one to add. Just make *extra* sure your permalink structure isn’t set up to include “category” – otherwise nothing will be indexed.

    To help reduce DC, I also recommend blocking the archives (just add a new line for each year your blog has been online)

    # Block Duplicate Content from Archives
    Disallow: /2006/
    Disallow: /2007/
    Disallow: /2008/

    I also have this
    Disallow: /*?*

    instead of this:
    Disallow: /*?

  11. Terry Tay

    Excellent post Jeremy! Every single day I’m learning something new from you, it seems. Just the other day with the link rel= and now today with the robots.txt file.

    I’ve just read the basics about the robots.txt file and never really looked much further into it. It’s good we have people like you helping us out along the way.

    Thanks!
    ~Terry

  12. jtGraphic

    Thanks for the tip. I guess I have the same question as someone above. How does duplicate content hurt your ranking? Is it a consequence of PR being spread across multiple pages – or is it just a case of being penalized for duplication? I’ll have to do more research. Thanks again.

  13. anty

    Interesting that the question mark doesn’t have to be escaped. Normally a question mark would be a RegEx meta character, but I just looked it up in the Google guidelines: a question mark is treated as a regular character.
    An important note: Not every crawler understands RegEx in the robots.txt. So you are “protecting” your sites against the major search engines, but not from normal bots. This is ok to avoid duplicate content, I guess.

  14. anty

    I wonder if Google isn’t already good at detecting a WordPress installation and can therefore handle the duplicate content accordingly (like ignoring parts of the site, or indexing according to a schema normal WP blogs follow)… Just a thought :)

  15. oakling

    OMG. Will this keep spammers from doing that obnoxious thing where they copy a whole journal entry (or the majority of one) into their fake blogs, making it look like they are quoting it (“Someone said something great over at blahblahblah dot com, ‘entire post here,’”) with no other content? Just to get on google and steal my links? I’m sure they’re using robots at some stage….

  16. ShoeMoney

    Well, I just had a conversation with Mr. Cutts about this and many other things 3 days ago.

    You are getting the Disallow and noindex tags confused in the robots.txt. Disallow will still let the bots visit and index the URLs, but not take in the content.

  17. TheMadHat

    Agreed that disallow will allow the bots to visit but not take the content. Maybe I said this wrong.

    Say for example you’ve got links coming into a page you’ve disallowed in robots.txt. This wastes any link juice that (linking) page is giving you. Using “meta noindex” will allow the bots to follow the links on the “meta noindexed” page and pass on the link juice, and also alleviate any dup issues.

    So has he changed his stance on a “meta noindexed” page accumulating and passing PageRank? On a robots-disallowed page the bots won’t take the content, so there will be nowhere to pass PageRank to.

    The way I understand it is this:

    meta noindex – don’t index, but follow and pass PR
    meta nofollow – index, but don’t follow links or pass any PR from the entire page
    href nofollow – don’t pass PR on that one link (example below)
    robots disallow – don’t index, follow, or pass PR (they can still reference the URL, but without the content there is nowhere to pass any link juice).
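
    To make the href version concrete, that’s just the rel attribute on an individual link (example.com here is only a placeholder):

    <a href="http://example.com/some-page/" rel="nofollow">anchor text</a>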

  18. Pingback: links for 2008-03-04

  19. Gary R. Hess

    For smaller blogs this might not be the best thing to do when it comes to SEO. If implementing everything this way, you are relying on Google to find older posts (if they don’t have links to them) by going directly through the homepage. Requiring Google to go back 20 pages to find an article is a good way to end up in the supplemental index (which, of course they claim doesn’t exist anymore, but IMO it does).

  20. Douglas Karr

    Thanks for these tips – I hadn’t even thought of leveraging the robots file against duplicate content (much easier than disabling those features!). Thanks!

  21. Charlie

    Bob,
    Why do you need to block /comments/? I thought having comments indexed would be a good thing. This is new to me, so any pointers would be great.

    Thanks.

  22. Uzair

    You can also use
    Disallow: /wp
    instead of all those others like
    Disallow: /wp-admin
    Disallow: /wp-includes
    Disallow: /wp-content/plugins
    Disallow: /wp-content/cache
    Disallow: /wp-content/themes

  23. Pingback: Wordpress robots.txt tips against duplicate content : Blogazine

  24. Pingback: Avoiding duplicate content: Wordpress and robots.txt | About

  25. Pingback: Solutia mea la continutul dublicat din blog | Milionarul Mioritic ®

  26. Pingback: Wordpress robots.txt tips against duplicate content

  27. Nullamatix

    I didn’t initially include a robots.txt in my blog and never had any issues with dupe content. It wasn’t until just recently I decided to add one, more for experimental purposes. So far, search engine traffic hasn’t improved or declined either way. WordPress out of the box isn’t great for SEO purposes, but with minor tweaks, I find that a robots.txt isn’t really necessary.

    -Guy
    http://www.nullamatix.com

  28. RacerX

    Do you have some before/after stats you can share? I understand the penalty, but just want to understand how it improves.

  29. Pingback: ein-uwe.de » getunte Wordpress robots.txt

  30. Andy Beard

    Shoe is making an “SEO Linking Gotcha”

    All the pages blocked with robots.txt will still gather juice and can still rank

    Simple proof is that my WordPress SEO Masterclass page is still ranking after being blocked by robots.txt for a couple of weeks as it was written as a paid post – actually it is ranking higher than Joost’s similar page.

    This article explains why so many people have got this wrong for years
    http://andybeard.eu/2007/11/seo-linking-gotchas-even-the-pros-make.html

    It gets worse when people start mixing this kind of advice with their “All in One SEO” because the noindex statements added don’t get seen by Googlebot.

  31. Pingback: WP: Doppelten Inhalt vermeiden - im Designpicks Blog

  32. Pingback: Weekly Links - March 7th | Vandelay Website Design

  33. Pingback: Robots.txt Tricks für WordPress-Blogs auf datenschmutz.net

  34. Pingback: 99 Ways to Improve Your Blog | PureBlogging

  35. Pingback: Speedlinking - Back To The Basics » Derek Semmler dot com

  36. Pingback: meckator » Doppelten Content vermeiden

  37. Pingback: Tweaking Your robots.txt File

  38. Pingback: 7 Ways To Improve SEO Optimization | Digital Tips

  39. Pingback: Optimize WordPress for Search Engines with robots.txt

  40. Pingback: Using Robots.txt to Improve your PageRank — WebDiggin.com: An Adventure to Make Money Online

  41. Pingback: Wordpress Duplicate Content Penalty Fix : Duplicate Content Filter Wordpress » UK Affiliate Marketing Blog - Kirsty's Affiliate Marketing Guide - Affiliate Stuff UK

  42. Pingback: How to Setup a WordPress Blog | Niche Store Strategies

  43. Pingback: How to Add a WordPress Blog to Your Site | Niche Store Strategies

  44. Pingback: MoneyM8.com » Post Topic » robots.txt tips for WordPress to prevent Duplicate Content

  45. Pingback: 99 Ways to Improve Your Blog « Raheel’s Blog

  46. Pingback: 8 tips to better secure your Wordpress installation | Web Design Blog x2interactive. Ένα blog για το Internet και το Web Design

  47. Pingback: What is a Robots.txt file and how to write one | Web Design Blog x2interactive. Ένα blog για το Internet και το Web Design

  48. olay

    Very helpful post for me, as I have been looking into how to use the robots.txt file in this way for some time.
