False Negative Warnings with Crawl Diagnostic Test
-
Hello again.
Thanks for sharing the information on #1 and #2. I have heard of Volusion before but have no experience with them. Based on what you have shared, it seems they may not be a great solution from an SEO perspective.
For #4, you are correct. The "META titles over 70 characters" warning means long titles will be truncated. The other main consequence is that a title's weight is divided amongst the words in the title: the longer the title, the less weight is applied to each term. If you know and understand these factors, you can choose to ignore the warning.
For #3, you definitely do not want thousands of links on a page. You need to figure out a way to significantly lower the number of links. Search engines will follow a percentage of the links on a page and then stop. So yes, I would say this is bad for SEO.
Somehow you need to categorize the links. Many blog sites will group links by month for the current year, and by year for past years. You could group by categories. Do something to get your number of links under control. You don't have to be under 100, but for now I would say you should be under 250 links.
-
Hello Ryan,
Thanks again for the reply... your time is appreciated. We are currently working on creating a sitemap to 'categorize' the links in both our product and category indexes. This should take care of the two highest on-page link counts across our site. The majority of these warnings are under 250 links, so we should be good. Or let's hope so, because there really isn't anything else we can do about it at this point. Also, by chance do you know of, or can you refer, a company or independent designer who builds sitemaps? We have the XML sitemap file generated from Google; we just need someone to make it look nice.
Oh yeah, regarding all those duplicate title and duplicate content errors... they should be taken care of with a disallow rule in robots.txt. That said, on our last SEOmoz crawl the errors still came up on those same "email a friend" and "email when back in-stock" pages. Now, I did submit the robots.txt file during the past scan, so that may be the reason. So before I start to wonder any further, I am going to wait until the next crawl is complete. Maybe you might know: going forward, will SEOmoz still pick up those duplicate page and title errors in the crawl with the disallow rules in robots.txt?
Also, our Webmaster Tools account is showing 180 "restricted by robots.txt" crawl errors... all from the "email a friend" and "email when back in-stock" pages on which the robots.txt disallow was just placed. I understand that even with the disallow in robots.txt, Google can still crawl whatever it chooses. Is this anything we should be concerned about? Also, please note that we have thousands of these pages and Webmaster Tools is only showing 180 of them.
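(In case it helps anyone following along: while waiting for the next crawl, you can sanity-check disallow rules yourself with Python's standard-library robots.txt parser. The paths below are hypothetical placeholders, not Volusion's actual URLs, so substitute the real lines from your own robots.txt.)

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules mirroring the kind of disallow lines discussed above;
# the real paths depend on how your store structures these URLs.
rules = """User-agent: *
Disallow: /EmailFriend.asp
Disallow: /EmailBackInStock.asp
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A blocked "email a friend" URL (query strings don't matter, the path does)
print(parser.can_fetch("*", "http://example.com/EmailFriend.asp?ProductCode=ABC"))  # False

# A regular product page is still crawlable
print(parser.can_fetch("*", "http://example.com/ProductDetails.asp"))  # True
```

Note this only tells you whether a compliant crawler would fetch the page; as discussed below, blocked pages can still appear in crawl-error reports, which is why the noindex approach is often cleaner.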
Thanks again for your help
-
Anthony,
You can begin a crawl of your site anytime. Click on Research Tools from the menu bar and scroll down to On-Page Optimization Tools > Crawl Test. This will allow you to confirm your robots.txt settings are set correctly.
For sitemaps, http://xml-sitemaps.com/ seems to be quite popular. I would suggest checking them out first. They offer a free test for up to 500 pages, and it is $20 USD to buy their product if you like it.
For Google WMT, the "restricted by robots.txt" errors can be disregarded if you are confident the pages should be blocked. That said, I would recommend allowing Google to crawl your site whenever possible and using the noindex meta tag to prevent the pages from being indexed. That approach would eliminate those errors entirely.
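For reference, the tag is a single line in each page's head section. This is a generic sketch; where exactly you can place it in a Volusion template is up to their system:

```html
<!-- Allow crawling, keep the page out of the index,
     but still let link equity flow through its links -->
<meta name="robots" content="noindex, follow">
```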
-
Thanks a lot, man. I'm going to check out that sitemap site. Also, I'm going to look into applying those "noindex, follow" tags on the pages instead. Thanks again!

-
Hey Ryan,
So I just confirmed with Volusion that certain pages such as these can have the "noindex, follow" tag and certain pages cannot. It's just the way their system is set up. So for the pages that can, I will for sure apply the "noindex, follow" tag, and for the pages that cannot, I will go ahead and apply a robots.txt disallow. Also, if you wouldn't mind confirming... it's the "noindex, follow" meta tag that I should apply, not the "noindex, nofollow" tag?
Thanks for all of your assistance and guidance through all of this troubleshooting!
-
As a rule, don't use "nofollow" on internal links.
-
Thanks mate, I have been searching for a couple days on how to fix that warning.
-
I just set up the footers that appear on every page to nofollow the sites I care about, because otherwise those sites get thousands of links, all from the same domain - this can't be good for the target site.
I then made a single followable link to each of the sites I care about. I am hoping this is a good strategy. Sorry to digress from the original interesting topic.
-
If you trust the target site, follow the link. If you don't trust the target site, nofollow all the links.
If you feel the footer links will actually be seen and used, keep them. If they are not likely to be seen or used, I would suggest removing them.
-
To the OP,
We are also on Volusion and have found that adding the meta robots tag for "noindex, follow" in the meta override area for categories has worked for us. However, we haven't found a way to add it to the SearchResults page at this time.