Questions
Pagination Issues on E-commerce Site: Duplicate Page Title and Content on Moz Crawl
Unfortunately, Moz Analytics/PRO don't process rel=prev/next properly at this time, so we may give false alarms on those pages even if the tags are properly implemented. It can be tricky, but Google recommends a combination of rel=canonical and rel=prev/next: use the canonical tag to keep sorts from getting indexed, and then use rel=prev/next for the pagination itself. Your third example (page=2...) should point rel=prev/next at the URLs before and after it, but canonical to the page=2 variation with no sort parameter. It can get complicated fast, unfortunately, but typically rel=canonical can be implemented in the template, so once you've got it figured out, it'll work for the entire site.
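Not from the thread itself, but here is a minimal sketch of how those head tags might be generated for a paginated, sortable category page; the /widgets path, the page and sort parameter names, and the page count are all hypothetical.

```python
from urllib.parse import urlencode

# Hypothetical category URL pattern: https://www.example.com/widgets?page=2&sort=price
BASE = "https://www.example.com/widgets"

def pagination_tags(page, sort=None, last_page=10):
    """Build the <link> tags for one paginated listing page.

    - rel=canonical points at the same page number with the sort
      parameter stripped, so sort variations don't get indexed.
    - rel=prev/next point at the neighbouring pages in the series.
    """
    def url(p):
        params = {"page": p}
        if sort:
            params["sort"] = sort
        return f"{BASE}?{urlencode(params)}"

    tags = [f'<link rel="canonical" href="{BASE}?{urlencode({"page": page})}">']
    if page > 1:
        tags.append(f'<link rel="prev" href="{url(page - 1)}">')
    if page < last_page:
        tags.append(f'<link rel="next" href="{url(page + 1)}">')
    return "\n".join(tags)

print(pagination_tags(page=2, sort="price"))
# <link rel="canonical" href="https://www.example.com/widgets?page=2">
# <link rel="prev" href="https://www.example.com/widgets?page=1&sort=price">
# <link rel="next" href="https://www.example.com/widgets?page=3&sort=price">
```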
Moz Pro | Dr-Pete
How to avoid plagiarism and theft of content on article directory sites?
Does this apply to reputable article sites as well? We were under the impression that posting on some of the more well-known ones would be a good way to start our link building campaign. Obviously we're having second thoughts about this now. So should we steer completely clear of link building even through the more reputable article submission sites?
Technical SEO Issues | suchde
'Not Found' Errors in Google Webmaster
Both. Sites generally do have errors, and you don't have to fix every one of them down to zero. If I were you, I'd use a crawler to find out what is linking to that page and remove the link. If it's not being linked to, check your sitemap to see whether the URL is listed there. If none of the above resolves the issue, I'd suggest ignoring it unless it's causing a real usability problem.
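As a rough illustration of the "use a crawler" step, here is a small sketch that walks a site and lists the pages still linking to a URL that 404s; example.com, the broken path, and the crawl limit are all hypothetical, and it assumes the requests and beautifulsoup4 packages are installed.

```python
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://www.example.com/"
BROKEN = "https://www.example.com/old-page/"

def pages_linking_to(start, broken, limit=500):
    """Breadth-first crawl of same-host pages, collecting any page that
    still links to the broken URL."""
    seen, queue, sources = {start}, deque([start]), []
    while queue and len(seen) < limit:
        page = queue.popleft()
        try:
            resp = requests.get(page, timeout=10)
        except requests.RequestException:
            continue
        if "text/html" not in resp.headers.get("Content-Type", ""):
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            href = urljoin(page, a["href"]).split("#")[0]
            if href == broken:
                sources.append(page)  # this page still links to the 404
            elif urlparse(href).netloc == urlparse(start).netloc and href not in seen:
                seen.add(href)
                queue.append(href)
    return sources

print(pages_linking_to(START, BROKEN))
```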
Technical SEO Issues | William.Lau
Missing meta descriptions from Google SERPs
Hi, yes, Google does not care about the keywords meta tag, but abusing it is also not recommended, as some other search engines still give this tag a little weight. Moreover, abusing or over-optimizing the keywords meta tag says a lot about a webmaster's intent, which is even more dangerous. Best, Devanur Rafi.
Technical SEO Issues | Devanur-Rafi
Duplicate Page Content Issues
It's a bit tricky - we actually count something like 90%+ duplicated content as "duplicate", so we may be giving you false alarms on this particular example. My gut reaction is that they're thin pages - they look fine on the surface, but there's very little text content for Google to parse, and I think the overall content is very light even from a usability perspective.

From a pure SEO perspective, if you had a lot of these very similar pages, you could run into some trouble. Honestly, at this point, you just don't have the authority (link profile, etc.) to support 10,000 products and the roughly 18,000 pages Google has indexed on your site. In the extreme case, you could run into a Panda penalty, but overall it's just an issue of dilution. Basically, you don't have the ranking power to support that many products, especially if Google perceives the content as thin.

It's a balancing act, but I'd consider potentially NOINDEX'ing some of your thinner product offerings while you build up unique content for at least your top sellers. This doesn't have to be all-or-none. It may be that a couple hundred or even a few dozen products account for 90% of your sales, so start with those. Meanwhile, de-index some of your weakest content, and let the rest build up over time. Of course, there may be other issues at play, like actual URL-based duplicates, that could be tackled before you start removing products from the Google index.

Again, it's a balancing act. You could also use rel=canonical, but my gut reaction is that it's borderline for the cases you're showing here. These aren't true duplicates, and they are separate products. It's just a matter of whether you want Google to see them yet or not.
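To make the NOINDEX idea concrete, here is a simplified sketch of the kind of template logic that could emit a robots noindex tag for thin product pages while leaving stronger pages indexable; the word-count threshold, sales-rank cutoff, and field names are all hypothetical, not something from the thread.

```python
def robots_meta(product, min_words=150, top_seller_rank=200):
    """Return the robots meta tag for a product detail page."""
    thin = len(product.get("description", "").split()) < min_words
    top_seller = product.get("sales_rank", 10**6) <= top_seller_rank
    if thin and not top_seller:
        # Let crawlers keep following links, but drop the page from the
        # index until it has unique content worth ranking.
        return '<meta name="robots" content="noindex,follow">'
    return '<meta name="robots" content="index,follow">'

print(robots_meta({"description": "Short blurb.", "sales_rank": 5000}))
# <meta name="robots" content="noindex,follow">
```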
Technical SEO Issues | Dr-Pete