Hey Mark, Rand had a Whiteboard Friday that may be worth watching regarding this -- "10 Myths That Scare SEOs But Shouldn't," on May 24th.
Posts made by VanadiumInteractive
-
RE: Should I worry about spam domains linking to me?
-
RE: How to recover my traffic? How to make a boring subject interesting?
Hey Mark; Content and traffic are great things, but as a business owner I'm guessing the end goal is to convert that traffic into leads. How is your conversion? What would it be worth to you to increase it by 10-20 percent? A picture can say a thousand words: your site is very textual and informative, but has little focus on visual cues for segmenting your audience and clearly presenting your offering. What action do you want your audience to take, and how well do you present that on all your pages? Once I find your content, how do you take me from reading it to interacting with you? Unless you have that dialed in, spending your energy and money on attracting traffic will result in just that: traffic. My initial thought would be to enhance the home page and back pages with a conversion strategy, and once that is in place, to continue your traffic efforts.

-
RE: Crawl Diagnostics Error Spike
One last thing;
It seems that I have a game plan for addressing this issue, but as I think about it, one thing has me concerned about the way Roger crawled the site.
The site has maybe 100 articles in total, which would account for ?Page=10, but what I'm seeing is errors on ?Page=104. When you look at that page, it's blank. Where is Roger coming up with that parameter?
Do you think this is a Roger issue or something else?
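To make my concern concrete, here's a toy sketch (article count and page size assumed) of how a pagination template that emits a "next" link unconditionally could march a crawler all the way to ?Page=104 even though the real content ends around page 10:

```python
# Hypothetical sketch, NOT the site's actual code: if every page -- even an
# empty one -- renders a link to ?Page=N+1, a crawler following those links
# will keep walking blank pages well past the last article.

TOTAL_ARTICLES = 100
PER_PAGE = 10  # assumed page size

def render_page(page):
    """Return (articles_on_page, next_link) the way a naive template might."""
    start = (page - 1) * PER_PAGE
    articles = max(0, min(PER_PAGE, TOTAL_ARTICLES - start))
    next_link = f"/Blog/?Page={page + 1}"  # emitted unconditionally: the bug
    return articles, next_link

# A crawler starting at page 1 and following each next link hits
# blank pages from page 11 onward:
page, blanks = 1, 0
while page <= 104:
    articles, _ = render_page(page)
    if articles == 0:
        blanks += 1
    page += 1
print(blanks)  # -> 94 blank pages (pages 11 through 104)
```

If that's what is happening, the parameter isn't coming from Roger at all; the site is advertising the URLs itself.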
-
RE: Crawl Diagnostics Error Spike
Hey Jake;
What's your opinion on using "nofollow" vs "follow" on the pages I'm blocking from indexing? Is there a reason to prevent crawlers from following the links on these pages?
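For context, the two variants I'm weighing would look something like this (a sketch, not necessarily the site's exact markup):

```html
<!-- Keep the page out of the index, but let crawlers follow its links
     so the pages it points at still get crawled: -->
<meta name="robots" content="noindex, follow" />

<!-- Keep the page out of the index AND stop crawlers from following
     its links: -->
<meta name="robots" content="noindex, nofollow" />
```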
-
RE: Crawl Diagnostics Error Spike
Thank you again for the input. The goal here is to provide accurate reporting and ensure that the site conforms to the search engines' requirements.
Currently the "?page=" parameter is not blocked through the noindex tag; it sounds like this may be the issue.
I will update the code to address that and see what kind of results we get with the next update. I think this is best addressed at the code level, rather than in robots.txt.
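For completeness, the robots.txt route I'm deciding against would be something like this (the wildcard pattern syntax for query strings is an extension supported by Google rather than part of the original standard, which is part of why I prefer handling it in code):

```
User-agent: *
Disallow: /*?page=
```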
Thanks
-
RE: Crawl Diagnostics Error Spike
Hey Jake;
Thanks for your feedback. I did make some changes to the code (posted in the reply to Jamie). I'll take a closer look at Webmaster Tools to make sure things are OK on that end.
FYI: The "rel=prev / rel=next tags" are implemented
I added code to insert a robots noindex tag on pages that are accessed through:
- /Blog/?tag=
- /Blog/category/
- /Blog/archive.aspx
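For reference, the rel=prev / rel=next pagination tags mentioned above sit in the page head along these lines (URLs illustrative, not the site's exact markup):

```html
<!-- e.g. on /Blog/?page=3 -->
<link rel="prev" href="/Blog/?page=2" />
<link rel="next" href="/Blog/?page=4" />
```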
As a secondary concern: with Roger now reporting all these issues in SEOmoz, I provide these reports to my clients, and having 16k errors is not a good PR thing. How do I tell Roger not to crawl these blank pages?
-
RE: Crawl Diagnostics Error Spike
Hey Jamie;
In an effort to block crawling of pages on the blog that essentially duplicate content, I added code (on 4/16) to insert a robots noindex tag on pages that are accessed through:
/Blog/?tag=
/Blog/category/
/Blog/archive.aspx
I did not do this for
/Blog/?page=
There were no changes to the robots.txt
There were no updates to the canonical tags
There were no updates to pagination
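For clarity, the tag that code inserts into those templates is along these lines (assuming a standard robots noindex meta directive, which is what we've been discussing):

```html
<meta name="robots" content="noindex" />
```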
Thanks for your prompt reply

-
Crawl Diagnostics Error Spike
With the last crawl update to one of my sites there was a huge spike in errors reported. The errors jumped by 16,659, the majority of which fall under the duplicate title and duplicate content categories.
When I look at the specific issues, it seems that the crawler is crawling a ton of blank pages on the site's blog through pagination.
The odd thing is that the site has not been updated in a while and prior to this crawl on Jun 4th there were no reports of these blank pages.
Is this something that can be an error on the crawler side of things?
Any suggestions on next steps would be greatly appreciated. I'm adding an image of the error spike.