Posts made by john4math
-
RE: Domains for regional websites
If you're having issues with Bing as well, the directions for geotargeting are here: http://www.bing.com/community/site_blogs/b/webmaster/archive/2011/03/01/how-to-tell-bing-your-website-s-country-and-language.aspx
RE: Submitting multiple sitemaps
I'm not sure about your HTML sitemap; I don't think HTML sitemaps are a supported format for you to submit to Google (I don't see them on sitemaps.org). You just need Google to crawl this page, and all the pages it links to? There is a plain text format (see here) that is allowed for sitemaps. You could probably change your HTML sitemap pretty easily to that format.
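To give you an idea of the plain text format: it's just one fully-qualified URL per line in a UTF-8 file, nothing else. A sketch with placeholder URLs (not your actual pages):

```text
http://www.example.com/
http://www.example.com/about/
http://www.example.com/products/widget-a/
```

You'd upload that as something like sitemap.txt and submit it the same way as an XML sitemap.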
I'm pretty sure you're allowed to submit multiple sitemaps, but I can't find anything concrete saying you can or can't. The Google Webmaster Tools UI seems to support it, so my guess is that it would be fine. Try it and see if it works? You could also create a sitemap index file that references both these sitemaps.
You can read more about sitemaps on sitemaps.org. According to the Google help doc here, they adhere to these standards.
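If you do go the sitemap index route, the format from sitemaps.org looks like this (the file names here are placeholders for your two sitemaps; lastmod is optional):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://www.example.com/sitemap-pages.xml</loc>
    <lastmod>2011-07-01</lastmod>
  </sitemap>
  <sitemap>
    <loc>http://www.example.com/sitemap-plain.txt</loc>
  </sitemap>
</sitemapindex>
```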
-
RE: Too many pages indexed in SEOMoz
Have you looked at the errors in the campaign? I would suspect you have some duplicate content issues. For example, does your client serve the same page for the following instead of redirecting to the same URL?
- example.com
- example.com/
- example.com/index.php (or .html, or just /index)
- www.example.com
- www.example.com/
- www.example.com/index.php (or .html, or just /index)
There are many more variations it could be finding. Drill into this campaign and click the Crawl Diagnostics subtab. Then find the Duplicate Page Content error and drill into that. It should give you an idea of what's going on.
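If that does turn out to be the problem and the site runs on Apache, 301 redirects in .htaccess are the usual fix. A sketch only, with example.com as a placeholder; the exact rules depend on the server setup, so test before deploying:

```apache
RewriteEngine On

# Send the non-www host to the www version with a 301
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

# Collapse /index.php onto the root URL
RewriteRule ^index\.php$ http://www.example.com/ [R=301,L]
```

That way every variant resolves to a single canonical URL instead of serving duplicate content.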
-
RE: How to push down outdated images in Google image search
I think your plan is sound. Having quality content pages with the newer images, with appropriate alt and title text, is a good start. Then you'll just have to outrank the other images, like you said.
You could also try contacting some of the webmasters of other sites whose images are ranking well, and ask them to update their image with the new third generation image, if that image is just as appropriate to their content as the first generation one. For example, if their post or article is about the product in general, they could update their image of the product for you. If their post or article is about the first generation product specifically, it wouldn't make sense to update it. This is the same as when you're link building and you ask the author of a post or article to change the anchor text of the link they're using to your site.
Also, if your client is very interested in having people see the new images when searching in Google Image search right now, you can run ads in Google image search. These ads use a slightly different format than either display or search ads, in that they include an image with text alongside it. It's still pretty new, and I think a lot of advertisers haven't tried it yet, so the clicks are relatively cheap for search advertising.
-
RE: Is 404'ing a page enough to remove it from Google's index?
Setting pages to 404 should be enough to remove them after Google has crawled them enough times. Google has to be careful about this: when many sites crash or undergo maintenance, they return 404 instead of 503, so Google doesn't want to remove pages from its index until it's sure a page is really gone.
Google talks about removing pages from their index here. The Google Webmaster Tools URL removal tool is only intended for pages that urgently need to be removed, so I wouldn't recommend it for this. Google recommends:
- If the page no longer exists, make sure that the server returns a 404 (Not Found) or 410 (Gone) HTTP status code. This will tell Google that the page is gone and that it should no longer appear in search results.
- If the page still exists but you don't want it to appear in search results, use robots.txt to prevent Google from crawling it. Note that in general, even if a URL is disallowed by robots.txt we may still index the page if we find its URL on another site. However, Google won't index the page if it's blocked in robots.txt and there's an active removal request for the page.
- Alternatively, you can use a noindex meta tag. When we see this tag on a page, Google will completely drop the page from our search results, even if other pages link to it. This is a good solution if you don't have direct access to the site server. (You will need to be able to edit the HTML source of the page).
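For reference, the noindex meta tag mentioned in that last point goes in the head of the page and looks like this:

```html
<meta name="robots" content="noindex">
```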
Is there a reason you are 404'ing these pages rather than redirecting them? If these pages have new pages with similar content, you should do a 301 redirect to keep the link juice flowing and to take advantage of these pages being linked to. If you do continue returning 404 for these pages (or even if you don't...), make sure your 404 page is a useful one, that helps users find the page they're looking for (Google help article).
Also, Ryan, I'd be interested in hearing the results of using the 410 status code. I would imagine that status code would do the trick! I'm surprised I haven't read more about it, and that it's not mentioned in the help file linked above.
-
RE: Alt tag using photoshop
Also, you might consider adding title text to your images as well. Here's a Search Engine Journal post about it. The title is less important for SEO, but can enhance the experience for users.
-
RE: Alt tag using photoshop
If you view the source of your page (hitting Ctrl+U should do this in most browsers) and find the image on the page, it should have an alt attribute on it. For example:
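An image tag with alt text looks like this (the file name and description are placeholders):

```html
<img src="blue-widget.jpg" alt="Blue widget, front view">
```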

-
RE: 301 Redirect With A Message And Delay
I believe that is commonly done with the meta refresh tag. From http://www.seomoz.org/learn-seo/redirection:
Meta Refresh
Meta refreshes are a type of redirect that is executed on the page level rather than the server level (They are usually slower and not a recommended SEO technique). They are most commonly associated with a 5 second count down with text "If you are not redirected in 5 seconds, click here". Meta refreshes do pass some link juice but are not recommended as an SEO tactic due to usability and the loss of link juice passed.
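To put that concretely: a five-second meta refresh goes in the head of the old page and looks like this (the destination URL is a placeholder). The visible countdown text and "click here" link go in the page body alongside it:

```html
<meta http-equiv="refresh" content="5; url=http://www.example.com/new-page/">
```

Just keep the SEOmoz caveat above in mind: a server-side 301 is still the preferred option whenever you don't actually need the delayed message.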
-
RE: Looking for a better dashboard than Google adwords. Please help!
I'd be interested to know why you think it's throwing money out the window. Of all the online advertising I do, Adwords converts the best, has the most options, and has the most reporting. It sounds like you might need to refine your campaigns if they're not converting well?
If you use Google Analytics, you can integrate your Adwords data directly into Analytics, which allows you to slice and dice the data up however you want. To integrate these, you just need to link your accounts together (see here), and then turn on auto-tagging. You can turn on auto-tagging in the My account > Preferences page.
If you just want to view clicks and conversions by hour of the day, you can do this through the online Adwords interface. If you click on the Dimensions tab, it'll let you view the data by Hour of day. If you don't see this tab, just click the little down arrow all the way on the right, and enable the tab.
-
RE: Tuesday July 12th = We suddenly lost all our top Google rankings. Traffic cut in half. Ideas?
Adam is asking the right question. On http://eartheasy.com/live_water_saving.htm, I grabbed some text "Many beautiful shrubs and plants thrive with far less watering than other species. Replace herbaceous perennial borders with native plants. Native plants will use less water and be more resistant to local plant diseases." That text appears on a bunch of sites. I tried this with several other phrases from different pages on your site, and almost every time several other sites shared identical text to yours.
Google is penalizing you because your content is identical to a bunch of other sites. The more unique, original content you have, the more you should see your rankings rise.
-
RE: Canonicals Url question
It does appear that some of the posts are duplicate content, so you should use the canonical tag on those pages. For example, you serve the same post at both http://callnerds.com/andrea/understanding-verizons-tiered-data-plans/ and http://www.callnerds.com/blog/att-and-verizon-tiered-data-plans-7132011/. I would suspect you want the main blog to get the link juice and appear in search results, so you'd place a canonical tag on the /andrea/ version pointing to the /blog/ version. I would imagine all of her posts duplicate posts on the main blog, so you should add the tag to all of them.
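The canonical tag goes in the head of the duplicate page and points at the version you want indexed. For the example above, the /andrea/ version of the post would carry:

```html
<link rel="canonical" href="http://www.callnerds.com/blog/att-and-verizon-tiered-data-plans-7132011/">
```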
Andrea's page, http://callnerds.com/andrea/, doesn't appear to be duplicate content of the main blog page http://www.callnerds.com/blog/, so I wouldn't add the canonical tag to that page. I don't think the content of the two pages is similar enough that the search engines would respect the tag anyway. That's just as well, because you want her page to rank when people are looking for her posts.
-
RE: Facebook Comments
If you can display the contents of the comments on a page you host, as shown in the blog post, there's nothing stopping you from displaying them on the page below your blog post when the GoogleBot is viewing it.
The implementation should result in a setup where the page for your regular visitors has the blog post and the iframe with Facebook comments, while the page you serve to the GoogleBot (or any search crawler) has the blog post followed by plain HTML with the comments, as generated by the script mentioned in the SEOMoz post I linked to earlier.
The con he mentioned is what the post was solving. I believe it's true that content in an iframe will not be indexed with the page it resides on, but in this case, you can get the content out of the iframe, and load it onto your own page.
-
RE: Canonical Tag Pointing To The Same URL
Matt Cutts posted a video regarding this question: http://www.youtube.com/watch?v=U8eQgx-njk4. The answer is that it's ok to do.
-
RE: Facebook Comments
I think the point of the article was that you can serve the actual Facebook comments you get from this script on the page when GoogleBot crawls it, so the actual comments will be on the page and you can get the SEO benefits from the comments. Excerpt:
I was amazed by the simplicity of the code and that no authentication is needed. If you'll examine the code you'll also see that the comments can be styled easily to fit any site's layout. So basically you can now use Facebook Comments Box on your site and serve GoogleBot (or any other crawler/browser agent) with the comments to have them crawled & indexed. Obviously this won't be considered as cloaking as you're serving Google exactly what the users see (just like creating an HTML version for a Flash website).
-
RE: Facebook Comments
You can get Facebook comments to be indexable by search engines! There was an SEOMoz blog post about it not that long ago at http://www.seomoz.org/blog/make-facebook-comments-box-indexable-by-search-engines. If that's the only thing holding you back, it sounds like you don't have much to lose in trying it out!
-
RE: Is use of javascript to simplify information architecture considered cloaking?
Ryan is right... you shouldn't do this. If you want to help the crawlers find their way through your site, you could submit a sitemap?
-
RE: Backtracking from verification meta tag to the correct Google account is difficult
I'd just delete that verification meta tag and, from a new Google account, generate a new verification meta tag to add (if that's how you want to verify). If you've already verified the site with an account you have access to, just delete the old meta tag. Who knows whom it's giving Google Webmaster Tools access to... if you hear from someone after you remove it, I guess you'll find out.

-
RE: Using a Social Media Infrastructure
This sounds awesome! It looks like a great topic for a YOUmoz post (hint hint).
-
RE: Is there a development solution for AJAX-based sites and indexing in Bing/Yahoo?
I do! If you log into Bing Webmaster Tools, and go to the Crawl Settings, you'll see a new checkbox at the bottom, with the option "Configure your site to have bingbot crawl escaped fragmented URLs containing #!." According to the Search Engine Land post here, "It appears as though this means Bing will crawl #! URLs according to the Google standard. The help information hasn’t been updated, so it’s hard to say for sure."
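For context, under Google's AJAX crawling scheme a #! URL maps onto an _escaped_fragment_ URL that your server answers with a static HTML snapshot. A placeholder example of the mapping:

```text
Pretty URL the user sees:  http://www.example.com/page#!key=value
URL the crawler fetches:   http://www.example.com/page?_escaped_fragment_=key=value
```

So if your site already serves snapshots for Google, that Bing checkbox should let bingbot use the same mechanism.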
It sounds like this is the option you're looking for?
-
RE: No follow syntax
ref=nofollow is incorrect (according to this); the attribute should be rel, not ref. Since the invalid attribute will be ignored, the link will pass all of its link juice.
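The correct syntax looks like this (the URL and anchor text are placeholders):

```html
<a href="http://www.example.com/" rel="nofollow">example anchor text</a>
```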