Questions
-
New .TLD domains - SEO Value?
There's no difference, SEO-wise, between domain.com and domain.___ (insert your flavor here: .xxx, .info, .ws, .travel, etc.). No search engine considers any TLD to be better than another (not even the coveted .edu and .gov). Some things I do expect to see, though: a phishing explosion ("Hey, you have a complaint listed against you on ebay.web. Hurry and get it fixed!"), a massive spike in exact-match domains and generic terms, and branding getting a LOT harder. Previously, it wasn't too hard to police things. If you owned a .com you could kinda fend off the pretenders. There weren't too many TLDs to choose from, and people aren't as likely to be fooled by a .ws as they are by a .net. But now? You can be moz.web and host your own Whiteboard Fridays! Sure, Moz has an army of lawyers and will probably legally pile-drive you into the ground, but if 100 people did that... yikes to the legal bills. How close can you get to someone using the same domain in a different TLD before it infringes on your identity? Something tells me it's going to be left up to the courts to sort that one out.
Search Engine Trends | | Highland0 -
Viewing Page Authority and Domain Authority History Graph??
Unfortunately there is no way to view historical data for Page Authority; perhaps this is something Moz will consider for the future (maybe the top 10 or 20 pages per site according to PA?).
Getting Started | | Travis-W0 -
Fetch as Google in GWT - Functionality
BJS1976 The Fetch as Google tool allows you to see what Google sees. Yes, it can help you get the page indexed more quickly, but first take a look in WMT and see how the indexing is now. If your site is being indexed regularly, look at crawl errors and see whether there is a problem there. When you fix a problem, mark it as fixed and it will be removed from the list. The next time Google crawls the page, if the problem still exists it will reappear, but you will know it was crawled. This allows you to dig deeper. NOTE: Fetch as Google will not follow a redirect. If you feel you are still not getting the page reindexed, I would resubmit the sitemap. I hope this helps a bit. Robert
Search Engine Trends | | RobertFisher0 -
How often does "HTML Improvements" refresh?
It can even take months to update, if you're looking at a page that has low authority.
Search Engine Trends | | Chris.Menke0 -
Content Writing for Ecommerce Products
Hi, check whether you like the product descriptions at http://www.shirts4geek.com/ - I had them written through https://scripted.com/. All the best...
On-Page / Site Optimization | | Felip30 -
Why not buy +1's?
Everyone who says "Google will know" is obviously not an expert with proxies. To answer your question, yes, there is a way to add +1s without Google knowing. Would I do it? No, because it's a waste of time and resources; I agree with everyone else when they say it doesn't do much for rankings.
Social Media | | jesse13410 -
Keyword Self Cannibalization and E-Commerce
I used to have separate pages for "each color" and other product variations. However, I changed to one larger page for the entire product group and it has worked better in my opinion.
Intermediate & Advanced SEO | | EGOL0 -
What is the better of 2 evils? Duplicate Product Descriptions or Thin Content?
Very good answer - and yes, two bad choices, but limited resources mean I must choose one. Either that, or meta NOINDEX the dupes for the moment until they are rewritten.
Intermediate & Advanced SEO | | bjs20101 -
Moz Campaign Duplicate Content Problem
Hi There, I responded to the ticket you submitted to the Help Team, but I am going to post the answer here as well. There are a few reasons why Roger might find different pages during different crawls. The way Roger crawls is by going to the homepage of the campaign and grabbing up to about 400 links on the same domain (or all of the links if there are fewer than 400) from the source code of that page, and then we crawl through to those pages to diagnose them and to grab more links. The process continues on like that until we reach the campaign limit or until we no longer find any new pages. If there have been any changes to the links on a page, or if we get any different server responses from any of the pages, it can change the architecture of the crawl, potentially leading the crawler to new pages. It is difficult to say more specifically why the crawler wouldn't find every page of the most recent crawl on the first crawl, since we don't keep the full crawl data after a new crawl is generated, so I can't see the changes from crawl to crawl directly. If you have an old crawl report CSV where the current duplicate content pages are not shown as duplicate content, you can email that to me in the support request and I will look into the issue further for you. Chiaryn Help Team Ninja
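For illustration, here is a rough sketch of that kind of breadth-first crawl, assuming a hypothetical fetchLinks() helper that returns the same-domain links found in a page's source (the ~400-link cap and the campaign limit come from the description above; the numbers are placeholders):

```typescript
const LINKS_PER_PAGE = 400;     // links collected from each page's source
const CAMPAIGN_LIMIT = 10000;   // stop once this many URLs have been discovered

// Breadth-first crawl: start at the homepage, queue newly discovered same-domain
// URLs, and keep going until the limit is hit or no new pages turn up.
async function crawl(
  homepage: string,
  fetchLinks: (url: string) => Promise<string[]>
): Promise<Set<string>> {
  const seen = new Set<string>([homepage]);
  const queue: string[] = [homepage];

  while (queue.length > 0 && seen.size < CAMPAIGN_LIMIT) {
    const page = queue.shift()!;
    const links = (await fetchLinks(page)).slice(0, LINKS_PER_PAGE);
    for (const link of links) {
      if (!seen.has(link) && seen.size < CAMPAIGN_LIMIT) {
        seen.add(link);   // record the newly discovered URL
        queue.push(link); // and crawl through to it later
      }
    }
  }
  return seen;
}
```

Because discovery depends entirely on which links each page exposes at crawl time, a changed link or a different server response early in the queue can shift which pages later passes ever reach, which is why two crawls of the same site can surface different pages.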
Other Questions | | ChiarynMiranda0 -
Website "A Record" in DNS - Geotargetting
From the mouth of Google: "setting a geographic target won't impact your appearance in search results unless a user limits the scope of the search to a certain country". So if your users in the UK say "return UK sites only" then you likely wouldn't be shown unless you change your behaviour. And that depends on how users in your target country search, which I don't know enough about for the UK.
White Hat / Black Hat SEO | | DiTomaso0 -
404 Crawl Diagnostics Report MOZ
Hey Ben! Thanks for the question. The best way to do this is to head over to your Crawl Diagnostics page and export the CSV. From there, you can sort the 404 column to group the errors and scroll over to the referrer URL column (all the way at the end). That'll give you the path Roger took to get to the 4xx/5xx error. This will be made a bit easier in Analytics because we'll list the referrer URL in the UI for these situations, but for now, the CSV is your best bet. Let me know if you have any other questions. Thanks! Best, Sam Moz Helpster
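If you would rather do that grouping with a script than in a spreadsheet, a rough sketch is below. The column names ("URL", "HTTP Status Code", "Referrer") are assumptions and may not match the actual export headers, so adjust them to your CSV:

```typescript
import { readFileSync } from "fs";

// Group 4xx/5xx rows from a crawl-diagnostics CSV export by the referring URL,
// so you can see which pages link out to the broken ones.
function groupErrorsByReferrer(csvPath: string): Map<string, string[]> {
  const [headerLine, ...rows] = readFileSync(csvPath, "utf8").trim().split("\n");
  const headers = headerLine.split(",");
  const urlCol = headers.indexOf("URL");
  const statusCol = headers.indexOf("HTTP Status Code");
  const referrerCol = headers.indexOf("Referrer");

  const byReferrer = new Map<string, string[]>();
  for (const row of rows) {
    const cols = row.split(","); // naive split; a real CSV parser handles quoted commas
    const status = parseInt(cols[statusCol], 10);
    if (status >= 400) {
      const referrer = cols[referrerCol] || "(no referrer)";
      const list = byReferrer.get(referrer) ?? [];
      list.push(cols[urlCol]);
      byReferrer.set(referrer, list);
    }
  }
  return byReferrer;
}
```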
Other Research Tools | | SamWeber0 -
Benefit of using 410 gone over 404 ??
I had the (mis)fortune of trying to deindex nearly 2 million URLs across a couple of domains recently, so had plenty of time to play with this. Like CleverPhD I was not able to measure any real difference in the time it took to remove a page that had been 410'd vs one that had been 404'd. The biggest factor governing the removal of the URLs was getting all the pages recrawled. Don't underestimate how long that can take. We ended up creating crawlable routes back to that content to help Google keep visiting those pages and updating the results.
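For reference, here is a minimal sketch of what explicitly serving 410 Gone for retired URL patterns can look like. Express is used purely as an illustration and the path prefixes are hypothetical; any server or CMS configuration can return the same status:

```typescript
import express from "express";

const app = express();

// URL prefixes for content that has been permanently removed (placeholders).
const retiredPrefixes = ["/old-catalog/", "/discontinued/"];

// Answer retired URLs with 410 Gone instead of 404 Not Found, so crawlers get an
// explicit "intentionally removed" signal when they revisit these pages.
app.use((req, res, next) => {
  if (retiredPrefixes.some((prefix) => req.path.startsWith(prefix))) {
    res.status(410).send("Gone");
    return;
  }
  next();
});

app.listen(3000);
```

As the answer above notes, though, the status code matters less than making sure the URLs actually get recrawled.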
White Hat / Black Hat SEO | | matbennett0 -
Correct way to block search bots momentarily... HTTP 503?
You can do that, but it is less specific about what you are actually doing with your server. The 503 plus a Retry-After header lets the spiders know exactly what you are doing (no confusion). Thank you for the clever remark below.
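A minimal sketch of that approach, again with Express purely as an illustration: while the maintenance flag is on, every request gets a 503 with a Retry-After header, so crawlers know to come back later rather than treating the pages as gone.

```typescript
import express from "express";

const app = express();
const MAINTENANCE = process.env.MAINTENANCE === "1"; // toggle however suits your setup

app.use((req, res, next) => {
  if (MAINTENANCE) {
    // 503 = temporarily unavailable; Retry-After tells bots when to check back (seconds).
    res.set("Retry-After", "3600");
    res.status(503).send("Down for maintenance, please retry later.");
    return;
  }
  next();
});

app.listen(3000);
```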
White Hat / Black Hat SEO | | CleverPhD0 -
302 Redirect based on Language Detection
Yes - forget the 302; I get nothing but headaches from them. I would not use a 301 either, as a permanent redirect would not be accurate in how you are relating one page to the other. I think you need to consider whether this needs to be automatic or not (regardless of how you forward people). I would suggest using a JavaScript-based approach - here are the details on why. Here is an article from Bill Hunt http://searchengineland.com/understanding-the-seo-challenges-of-language-detection-47524 where he mentions how you can use an automatic IP-address or browser-language based approach to send people to the proper page. There is a problem with this for the spiders: "Both of these methods are problematic for search engines, because spiders often crawl from a specific location and don't signal language preference in their server request. For example, if Googlebot, crawling from Mountain View in California, requested a German language page on a site using IP detection, the web server would detect a request from an IP in the U.S. and the crawler would be routed to the U.S. version and potentially never see the German language version. The same scenario on a site with browser detection would not detect any language preference and thus route the spider to the default version of the site, which is typically English for US companies and the local language version for many country installations of scripts and web servers." Matt Cutts mentions this in http://www.youtube.com/watch?v=7paVYBgH0Hw regarding IP detection, as (in 2011) Google was only crawling from US IPs. Bill Hunt in his article mentions a solution: don't redirect the spiders. "The easiest search workaround for either of these detection methods is to simply determine if the requester is a search engine and exempt them from any redirection, giving them the page they want. Note I did not say redirect them or any other action that could be misconstrued by conspiracy theorists as cloaking but simply let the spider have the page it requested. This will ensure spiders can index your local language content." If that gives you concern about how to handle the automatic redirect, see the bottom of http://moz.com/community/q/what-countries-does-google-crawl-from-is-it-only-us-or-do-they-crawl-from-europe-and-asia-etc where Hannah Smith from Distilled suggests using a JavaScript overlay or some other "chooser page" to direct users to the right version. If you link up everything correctly, Google can crawl both versions of the site, but only the user is shown something dynamic to direct them where they want to go. This also ensures that if there are any errors in either the IP address detection or the default language in the browser, you have a way to fail gracefully and allow the user to select where they want to go on the site. Good luck!
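As a rough sketch of that client-side "chooser" idea (all paths and language codes here are hypothetical): the check runs only in the visitor's browser, so a spider that requests a specific language version directly still receives exactly the page it asked for, and a visitor whose browser language was guessed wrong can simply ignore the suggestion.

```typescript
// Client-side language suggestion: runs in the visitor's browser only, so
// crawlers fetching /de/ or /en/ directly are never redirected away from them.
const localizedPaths: Record<string, string> = {
  de: "/de/",
  fr: "/fr/",
  en: "/en/",
};

function suggestLocalizedVersion(): void {
  const lang = (navigator.language || "en").slice(0, 2).toLowerCase();
  const target = localizedPaths[lang];

  // Only suggest a switch if a localized version exists and we're not already on it.
  if (target && !window.location.pathname.startsWith(target)) {
    // A banner/overlay is friendlier than a hard redirect and lets the user
    // correct a wrong guess instead of being forced onto the other version.
    const banner = document.createElement("div");
    banner.textContent = "This page is also available in your language. ";
    const link = document.createElement("a");
    link.href = target; // link to the localized home page; mapping deep paths is site-specific
    link.textContent = "Switch";
    banner.appendChild(link);
    document.body.prepend(banner);
  }
}

document.addEventListener("DOMContentLoaded", suggestLocalizedVersion);
```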
International Issues | | CleverPhD0 -
Cooking Recipes Blog Links
Yes and no - you need to approach it conservatively. Adding useful links throughout the website is fine. To be on the safe side, I would avoid using exact match anchor text frequently unless it's actually the page title. For example: A link on their site that says "best kitchen knives" pointing to your homepage at mikescookware.com is going to be bad. A link on their site that says "best kitchen knives" pointing to mikescookware.com/best-kitchen-knives/ is going to be better, but still pushing the line too far in my opinion. A link on their site that says "check out Mike's list of best kitchen knives available" pointing to mikescookware.com/best-kitchen-knives/ would be fine in my opinion. After a certain point, each new link from her site isn't going to be quite as valuable as the first links were, purely from a link equity standpoint. However - each additional link could be providing value in terms of click-through traffic and actual visits to your site, which is just as important as any link equity that may or may not be passed. As long as you're not dropping a link to your site from a significant percentage of the pages on her site, and you're mostly linking to internal pages of your site with resources or products rather than directly linking to your homepage, I think you're OK. I would go heavier on the resource/blog links than I would on product/category page links. My end-of-the-day rule of thumb is that if it feels spammy to a stranger or looks like advertisements all through the site, you probably need to scale it back.
Intermediate & Advanced SEO | | KaneJamison0 -
Does IP Blacklist cause SEO issues?
It's not related to Gmail. The server itself was sending out email spam (Joomla is a CMS program used to manage websites). I bet he means he got listed on Spamhaus. First off, web spam and email spam are two entirely separate things. So you can be blacklisted with anyone in the email realm and not have it affect your SEO. Second, I've heard the "blacklisted IP" theory regurgitated for nearly 10 years now and nobody has ever proven that a specific IP was the reason for a site losing ranking. So you could, in theory, share an IP with an entire link farm and not lose any ranking (consider how many blogs share an IP under Wordpress.com or Blogspot). Google surfs the web just like everyone else (using DNS lookups) and they rank domains, not IPs (which are subject to change). The only way I could see an IP getting you in trouble is if your server got hacked and the hacker was using it to proxy attacks against Google (as in DDoS attacks, not spam). Then you might have some issues with SEO but your server being hacked would be a far more serious problem at that point.
White Hat / Black Hat SEO | | Highland0 -
What language to use for URL's for Russian language?
Hi, Technically, if your CMS and server can handle incoming URLs with UTF-8 characters in them, then you should be OK. I have seen some instances where the setup does not like them and produces 404 errors when you try to include these characters in the URL, but most times it is fine. Google will display UTF-8 URLs in the search results without problem (check out the Wikipedia result here). There is a further consideration, though, which is one I face a lot with Greek (similar to Russian in some ways): how these URLs are shared in mails, social media etc. A lot of the time these URLs end up getting automatically URL-encoded, and the URL shown in the mail or Facebook etc. is a long, long string of URL-encoded characters which is impossible to read, gives no indication of what the page is about and generally looks bad (try putting that link above into Facebook... nasty). For this reason I usually choose to do 'greeklish' URLs, which is a latin-character representation of the Greek characters. There are usually some common practices regarding how the local language is 'recreated' in latin characters; there are for Greek, and I would assume there are for Russian also. So with that in mind, if you have a Russian speaker who is familiar with that kind of thing, I would be inclined to make the URLs themselves 'russianlish'. My two cents!
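To make the trade-off concrete, here is a small sketch contrasting the two options for a Cyrillic slug; the transliteration map is a tiny illustrative subset, not a complete or standard scheme:

```typescript
// Option 1: native Cyrillic slug. Browsers display it fine, but when the URL is
// copied into email or social media it often turns into the percent-encoded form.
const cyrillicSlug = "новости-компании";
console.log(encodeURIComponent(cyrillicSlug));
// -> "%D0%BD%D0%BE%D0%B2%D0%BE%D1%81%D1%82%D0%B8-%D0%BA%D0%BE%D0%BC%D0%BF%D0%B0%D0%BD%D0%B8%D0%B8"

// Option 2: transliterate to a latin-character ("russianlish") slug at publish time.
// Tiny illustrative map only; a real implementation needs the full alphabet.
const translitMap: Record<string, string> = {
  "а": "a", "б": "b", "в": "v", "д": "d", "и": "i", "к": "k", "м": "m",
  "н": "n", "о": "o", "п": "p", "с": "s", "т": "t",
};

function toSlug(text: string): string {
  return text
    .toLowerCase()
    .split("")
    .map((ch) => translitMap[ch] ?? ch)
    .join("")
    .replace(/[^a-z0-9-]+/g, "-");
}

console.log(toSlug("новости-компании")); // -> "novosti-kompanii"
```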
International Issues | | LynnPatchett0 -
Can you have too many NOINDEX meta tags?
No, not if you're not indexing it. You're doing what they want: cleaning up the index, while still letting the crawler find links to other pages.
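If you would rather serve the directive from the server than in each page's markup, here is a sketch along the same lines (Express purely as an illustration, and the path prefixes are hypothetical; the X-Robots-Tag response header carries the same directives as the meta robots tag):

```typescript
import express from "express";

const app = express();

// Paths to keep out of the index while still letting the crawler follow the
// links on them (the "noindex, follow" behaviour described above).
const noindexPrefixes = ["/print/", "/filters/"];

app.use((req, res, next) => {
  if (noindexPrefixes.some((prefix) => req.path.startsWith(prefix))) {
    // Equivalent to <meta name="robots" content="noindex, follow"> in the page head.
    res.set("X-Robots-Tag", "noindex, follow");
  }
  next();
});

app.listen(3000);
```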
White Hat / Black Hat SEO | | AlanMosley0