Questions
Site: inurl: Search
Thumbed up for being a great response! Wish I had thought of that.
Intermediate & Advanced SEO | SamuelScott
How should I handle URLs created by an internal search engine?
Basic cleanup: From a procedural standpoint, you want to first add the noindex meta tag to the search results pages. Google has to see that tag before it can act on it and remove the URLs. You can also enter some of the URLs into the Webmaster Tools removal tool. Once you see all the pages dropping out of the index, add /catalogsearch/ to robots.txt.

Advanced cleanup: If any of these search result URLs are ranking and serving as landing pages in Google, you may want to consider 301 redirecting those pages to the properly related category pages.

My 2 cents: I only use the GWT parameter handler on parameters that I have to show to the search engines. Otherwise I try to hide all those URLs from Google to help with crawl efficiency. Note that it is really important to do the work to find out which pages/URLs Google has cataloged, to make sure you don't delete a page that is actually generating some traffic for you. A landing page report from GA would help with this. Cheers!
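A minimal sketch of the steps described above. The /catalogsearch/ path comes from the answer; all other URLs and paths here are hypothetical examples, not taken from the thread.

```html
<!-- Step 1: add to the <head> of every internal search results page.
     Leave the pages crawlable until Google has seen the tag and dropped them. -->
<meta name="robots" content="noindex">
```

```text
# Step 2: only after the pages are out of the index, block crawling in robots.txt
User-agent: *
Disallow: /catalogsearch/
```

For the advanced cleanup, a 301 for a ranking search URL might look like this (hypothetical query and category path; assumes Apache with mod_rewrite enabled):

```apache
RewriteEngine On
# Send the ranking search result for "blue widgets" to its category page,
# dropping the query string with the trailing "?"
RewriteCond %{QUERY_STRING} (^|&)q=blue\+widgets(&|$)
RewriteRule ^catalogsearch/result/?$ /category/blue-widgets? [R=301,L]
```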
Intermediate & Advanced SEO | CleverPhD
Canonical and Rel=next/prev Implementation
Looks good to me as well. Just as a tip: don't forget to submit the parameters you're using in Google Webmaster Tools. Under the menu item URL Parameters you can configure whether content changes with a certain parameter. It helps Google understand your URL structure better.
Intermediate & Advanced SEO | Martijn_Scheijbeler
Canonical and Rel=next/prev Implementation
Hi, The good news is I've worked on very similar projects before, and looking at your examples, you're configuring it almost by the book, except you shouldn't use rel=next/prev and a canonical together. It's either/or, so you're probably going to need to ditch your canonical. See: http://googlewebmastercentral.blogspot.co.uk/2011/09/pagination-with-relnext-and-relprev.html

However, I've configured sites almost exactly as you have and found that Google just randomly chose different (and multiple) combinations of page and sort order to rank in different sections. Once they get added to the index, it's a real chore to get them removed.

I've learned that if you genuinely don't want your sorted pages to appear in SERPs, you should use AJAX (without AJAX crawling turned on), e.g. "/?pg=1#dir=desc&order=price". Everything after the hash won't get crawled by Google. If you can't do AJAX, then you can add noindex to sorted pages and (at your own peril) use nofollow / robots.txt to stop some pages being crawled. Using nofollow / robots starts to move into sculpting PageRank, though, and IMO is to be avoided.

Another approach to avoid pagination and the performance impact of very long lists is to create more subcategories to break the inventory up. It might not be possible for your inventory, but it's worth considering as a complete sidestep. George
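A sketch of the pagination markup discussed above, for a middle page of a paginated category. The domain, path, and parameter names are illustrative, not taken from the question:

```html
<!-- On https://www.example.com/category/?pg=2 (a middle page in the series) -->
<link rel="prev" href="https://www.example.com/category/?pg=1">
<link rel="next" href="https://www.example.com/category/?pg=3">
<!-- Per the answer above: no <link rel="canonical"> alongside rel=next/prev -->
<!-- Sort order kept behind the hash so it isn't crawled:
     https://www.example.com/category/?pg=2#dir=desc&order=price -->
```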
Technical SEO Issues | webmethod