Questions
Have My SEO Cake & Eat It - Mix Viral Content & Business Content?
Hi Takeshi, Yes, that's actually pretty good: the hashed link means everything stays on one URL, so I retain the link juice. I'm also thinking that if the hash could be 'caught' by some jQuery code that presents that specific piece of link bait content in a dynamic block, that would be great. Then, whenever you run another link bait campaign with different content, the same code can display whichever piece is relevant to the click. So if I click a 'Lingerie in Films' link on Bob's Blog that points to www.{....}.php#lingerie-films, the function catches the hash and presents that piece of bait content at the top of the page. Any thoughts, guys? Any other solutions, perhaps? I do think this one has potential.
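A minimal sketch of what that hash-catching code could look like, assuming each campaign's markup is a hidden block whose id matches the hash fragment (the .bait-content class and the #lingerie-films id are hypothetical names for illustration, not anything from the thread):

```javascript
// Requires jQuery. Shows the link bait block whose id matches the URL hash,
// e.g. a #lingerie-films hash surfaces <div class="bait-content" id="lingerie-films" hidden>.
$(function () {
  function showBaitForHash() {
    var id = window.location.hash.slice(1); // "lingerie-films"
    if (!id) return;

    var $block = $(document.getElementById(id)).filter('.bait-content');
    if (!$block.length) return; // no campaign block for this hash

    $('.bait-content').attr('hidden', true);       // hide all campaign blocks
    $block.removeAttr('hidden').prependTo('body'); // surface the matching one first
  }

  showBaitForHash();                           // run on page load (the landing click)
  $(window).on('hashchange', showBaitForHash); // and whenever the hash changes
});
```

Because the fragment never reaches the server, every campaign really does resolve to the one URL; the script only decides which block to surface.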
Conversion Rate Optimization | RocketZando
Ajax Crawling | Blocked URLs Spike
If the blocked URLs are all the /recommendation/?c=catalog... pages, it looks like Google is reading and following the URL in your Ajax code; since the response is not a full page, Google probably discards them. Have you tried disallowing the /recommendation/ folder in your robots.txt? Also, why do you use Ajax to call related products? To speed up the page load?
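A minimal sketch of that robots.txt rule, assuming the Ajax endpoints all sit under /recommendation/ as the question suggests (adjust the path to match the actual folder):

```
# robots.txt at the site root
User-agent: *
Disallow: /recommendation/
```

Note that this only stops those URLs from being crawled; any that are already indexed can linger until they drop out on their own.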
Technical SEO Issues | smarties954
Best Way To Clean Up Unruly SubDomain?
PS: Do NOT do "2. Remove all URLs from the index via the Removal Tool in WMT". This is my opinion, but I believe it is shared by many others in the search engine optimization community: by using Google's disavow links (or revoke links) tool you are essentially admitting to doing something wrong, so don't do it. Likewise, don't unblock 100% of your robots.txt; only allow the URLs that you wish Google to see. Think of the subdomain the way you would a parameter on the end of a URL: it will be indexed and sitemapped accordingly. Sincerely, Thomas
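A minimal robots.txt sketch of that 'only allow what you want seen' approach, served from the subdomain's root; the subdomain name and the /blog/ and /products/ paths are hypothetical placeholders for whatever you actually want crawled:

```
# robots.txt at sub.example.com/robots.txt (hypothetical subdomain)
User-agent: *
Allow: /blog/
Allow: /products/
Disallow: /
```

For Googlebot, the more specific (longer) matching rule wins, so the Allow lines carve exceptions out of the blanket Disallow.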
Technical SEO Issues | BlueprintMarketing
Empty Meta Robots Directive - Harmful?
Can't answer the API question, I'm afraid. However, on the other bits: if you don't specify a robots directive, search engines are likely to behave in the default manner, i.e. index, follow, unless you're blocking them another way (e.g. robots.txt). A good test of this would be a page you've launched since the 17th that you know has been crawled but isn't in Google's index. Check your crawl data in GWT, and don't worry about the cache, because your users will always be taken to the current version of your site; it's only a concern if you're no longer being crawled. If it's an ecommerce site, it should just be one site-wide tweak to put index,follow back in. Re-create and re-submit your sitemap.xml to GWT and Google will go after all your new content as well, i.e. it hurries up re-crawling. Hope something in there helped you.
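For reference, the two states being discussed look like this in the page template; the empty content="" variant is the 'empty directive' from the question, and the second line is the explicit site-wide fix suggested above:

```html
<!-- Empty directive: carries no instruction, so crawlers fall back to index, follow -->
<meta name="robots" content="">

<!-- Explicit default, restated site-wide as suggested -->
<meta name="robots" content="index, follow">
```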
Technical SEO Issues | Nobody1560986989723