Questions
Single URL not indexed
Hi Nick, first of all, thanks for your responses. I already did the "fetch as Googlebot" thing 5 days ago. The page was successfully crawled and sent to the index, according to Google Webmaster Tools. But in these 5 days, nothing changed. I like your suggestion about the extra text; we will add some, do the "fetch as Googlebot" again, and see what happens. And you are absolutely right when it comes to the "value" of this page. It didn't send that much traffic, just a little. It is no big deal for us if this page doesn't get back into the index, but as someone doing SEO I want to figure out the problem Google seems to have with this page, just to test and learn for future problems.
Technical SEO Issues | | accessKellyOCG0 -
Photo Gallery marked as spam???
Sometimes galleries are viewed as thin content, but you stated you have captions under each one of the images, consisting of 1-3 sentences each, so I don't think you have anything to worry about. As an example, one of the sites I work on used to have gallery pages that were flagged as duplicate content in SEOmoz's crawler, because they contained nothing but images and navigation. I added a little bit of text under each image, and they're no longer flagged that way. Example: http://www.stadriemblems.com/galleries/fire-department-patches.html
Intermediate & Advanced SEO | | UnderRugSwept0 -
Page Rank gone - technical difficulty?
Hi Moosa, I guess you were right. The PR came back Friday evening. It seemed to be an update problem...
Technical SEO Issues | | accessKellyOCG0 -
Can't find mistake in robots.txt
Hi, just wondering: did you save the txt file in ANSI format? Sometimes people mistakenly save it in a different format, and this is where the problem creeps in.
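One quick way to check the encoding issue is to look at the first bytes of the file: an ANSI (plain ASCII) robots.txt starts directly with the directives, while UTF-8 or UTF-16 files saved by some editors start with a byte-order mark (BOM) that can confuse parsers. A small Python sketch (the local filename is hypothetical):

```python
def has_bom(path):
    """Return True if the file starts with a UTF-8 or UTF-16 byte-order mark."""
    with open(path, "rb") as f:
        head = f.read(3)
    # UTF-8 BOM is EF BB BF; UTF-16 BOMs are FF FE / FE FF
    return head.startswith(b"\xef\xbb\xbf") or head[:2] in (b"\xff\xfe", b"\xfe\xff")

# Example: has_bom("robots.txt") — if True, re-save the file as plain ANSI/ASCII.
```

If this returns True for your robots.txt, re-saving the file as plain ANSI/ASCII in your editor should fix it.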
Technical SEO Issues | | Debdulal0 -
Firefox Add-On for crawl frequency??
Hi, Your best bet is Webmaster Tools or another server-side tool, like the ones available for WordPress that you've mentioned. Depending on your server (Unix or Microsoft based) and/or your backend platform, you can install additional features to get more info on this: Googlebot visits, crawl rates, and formats. There are no Firefox add-ons for this, since the browser doesn't have access to your site or server, so it can't see when Googlebot is visiting which pages and how it is performing; that's a behind-the-scenes process. If you have access to your server, you might try newrelic.com (I am not affiliated with them in any way; I just love the tool). However, there are several other tools in the same spectrum that will give you more than the Googlebot stats and data. Hope it helps.
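If you do have server access, one server-side approach is simply counting Googlebot entries in your access logs. A minimal Python sketch, assuming common/combined log format (field positions and the log format are assumptions; adjust to your server):

```python
import re
from collections import Counter

# Match the date portion of a common-format log timestamp, e.g. [10/Oct/2012:13:55:36 +0000]
DATE = re.compile(r'\[(\d{2}/\w{3}/\d{4})')

def googlebot_hits_per_day(lines):
    """Count log lines whose user-agent mentions Googlebot, grouped by day."""
    days = Counter()
    for line in lines:
        if "Googlebot" in line:
            m = DATE.search(line)
            if m:
                days[m.group(1)] += 1
    return days

# Usage: googlebot_hits_per_day(open("/var/log/apache2/access.log"))
```

Note that a serious version should also verify the visitor's IP via reverse DNS, since anyone can fake the Googlebot user-agent string.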
Technical SEO Issues | | eyepaq0 -
Job/Blog Pages and rel=canonical
Hi, First off, even with a canonical, I'd suggest you use a unique title tag, if for no other reason than for users. Changing the title, even slightly, can help. For my clients, I usually suggest something simple like adding ", Page #2" after the main title. Google may or may not index the page, but that way, if a user bookmarks the page (or shares it), the title is different. Second, you need more than a canonical link to correct this problem. You are dealing with a sequence, which means you need to use rel prev/next as well as the canonical. (For example: on page 2 of your jobs, the canonical would be /jobs/2, the rel prev would be /jobs, and the rel next would be /jobs/3.) Treating these pages like a sequence explains this group of pages more effectively. And, that means... Finally, rel prev/next also keeps those second, third, fourth, etc. pages from falling out of Google's index and allows Google to find the jobs listed on those subsequent pages. Instead of telling Google that the subsequent pages are duplicates (which is what you are saying by putting a canonical referencing the main page on each subsequent page), you would be saying that these pages are grouped together as a sequence, making it acceptable for Google to crawl through those pages. I hope that helps. Also, I'm not sure how SEOmoz handles the canonical in regards to duplicate content. Thanks.
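As a sketch, the head of page 2 in the jobs sequence described above would combine all three tags (example.com and the exact URL scheme are assumptions):

```html
<!-- Head of /jobs/2: self-referencing canonical plus sequence links -->
<link rel="canonical" href="http://www.example.com/jobs/2">
<link rel="prev" href="http://www.example.com/jobs">
<link rel="next" href="http://www.example.com/jobs/3">
```

The first page of the sequence (/jobs) would carry only a rel next, and the last page only a rel prev.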
Technical SEO Issues | | Matthew_Edgar0 -
SEO basics for Q&A tool
Hi Russ, thanks for your help; it made things clearer to me! One last question concerning the linking of the content: if we noindex all pages with fewer than x words, can we still leave the links as dofollow, or doesn't that make sense? Because if we create an overview page with the latest questions on it, it should be good for G and our users. So if we link to all the latest threads (indexed and noindexed), the users and G will find all the relevant content, G can follow the links on the noindexed pages and find other content, and so on... Or doesn't it make sense to you? I am just worried that we will have many pages with little content and the Q&A will look empty to both G and our users...
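For reference, a thin Q&A page that stays out of the index but still lets crawlers follow its links would carry a meta robots tag like this (a generic sketch, not specific to any CMS):

```html
<!-- Keep this page out of the index, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```

"follow" is the default, so "noindex" alone behaves the same way, but spelling it out makes the intent explicit.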
Technical SEO Issues | | accessKellyOCG0 -
Https-pages still in the SERP's
Hi Irving, yes, you are right. The https login page is the "problem"; other pages that I visit afterwards stay on https, as all the links on these pages are https links. So you could surf all the pages on the domain in https mode if you visited the login page first. I spoke to our IT department about this problem, and they told me it would take time to program our CMS differently. My boss then told me to find another, cheaper solution, so I came up with the noindex, nofollow. So, do you see another solution without having to ask our IT department again? They are always very busy and almost never have time for anything.
Technical SEO Issues | | accessKellyOCG0 -
Https-pages still in the SERP's
Hi Stefan, If Google is finding those https pages, instead of a noindex, nofollow tag, I'd try one of the following:

- Redirect https pages to http via 301s (preferred)
- Add a canonical tag pointing to the http version (as Malcolm suggested)

By using these methods, you have the best chance of preserving your rankings for any of the https pages that appear in the SERPs, and you also preserve any link equity that is flowing through them. If Google is finding https pages of your site, then there is the possibility that some link juice is currently flowing through them. This also solves the problem of any visitors accidentally landing on https pages where you don't want them. Although in reality, there is nothing wrong with this: today, entire sites are https and rank quite well. It can take a long, long time for Google to remove URLs from their results. Before you can request removal, the URL either has to return a 404 or a 410 status code, or be blocked by robots.txt. Since neither of these is a good option for you, I'd stick with the 301 or the canonical solution. Best of luck with your SEO!
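A minimal sketch of the preferred 301 option, assuming an Apache server with mod_rewrite enabled (the rule syntax will differ on nginx or IIS):

```apache
# .htaccess sketch: 301-redirect any https request to its http equivalent.
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```

You would typically exclude the login page (and any other page that genuinely needs https) from this rule with an extra RewriteCond.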
Technical SEO Issues | | Cyrus-Shepard0 -
Multiple URLs and Dup Content
One additional comment, and it's tricky. You need to find the crawl path creating these, BUT you don't necessarily want to block it yet. Add the canonical, and let Google keep crawling these pages. Otherwise, the canonical can't do its job properly. Then, once they've cleared out, fix the crawl path. Are you seeing this in our (SEOmoz) tools or in Google? I'm not actually seeing these variants indexed, so it could potentially be a glitch. It looks a bit like some kind of session variable.
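If it does turn out to be a session variable creating the variants, the canonical on each variant simply points back at the clean URL; a minimal sketch with a hypothetical parameter name and domain:

```html
<!-- Served on http://www.example.com/page?sessionid=abc123 (and every other
     sessionid variant): point all of them at the clean version -->
<link rel="canonical" href="http://www.example.com/page">
```

Once Google has consolidated the variants, the crawl path (e.g. links that append the session parameter) can be fixed without losing the canonical's effect.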
Technical SEO Issues | | Dr-Pete0