Yes! If you've got a high-level Moz Pro account, you'll get 15,000 queries per month. Go nuts! 
Best posts made by randfish
-
RE: Keyword Explorer is Now Live; Ask Me Anything About It!
-
RE: External Sitewide Links and SEO
My general stance would be to worry only if the links are obviously manipulative or non-editorial. If you've just happened to receive links from partners, clients, fans or bloggers that appear in footers sitewide, that's not a big issue. Look at networks like Conde Nast or Techmeme - sitewide links back and forth between all the related properties is a common thing.
They almost certainly don't pass the same value as separate, individually created links from each of those pages - the diversity of your link profile matters a lot, but unless there's other reason they look spammy, I wouldn't sweat it.
BTW - I also wouldn't intentionally target or pursue these kinds of links. SEOs used to do that when PageRank was the big dog in the algo, and every bit of PR juice meant more value, but nowadays, PR is just a small component of rankings.
-
RE: Keyword Explorer is Now Live; Ask Me Anything About It!
Yeah - some important things to keep in mind about volume in KWE:
-
It uses United States volume only right now (if you search by default in AdWords, you may be getting global search volume unless you choose "US")
-
It's modifying that data with actual clickstream numbers to give greater accuracy
-
It normalizes trend volume - if Google's data has a big spike or dip, AdWords will show that one number, whereas KWE tries to estimate the monthly average.
-
RE: Duplicate content and http and https
Thanks dude! If I make it to Vermont, I might look you up

-
RE: Should we add a link from every page with main keyword back to homepage
IMO, those links in the footer do not help with rankings in Google. This is exactly what the "first link counts" anchor text test is about: http://www.seomoz.org/blog/results-of-google-experimentation-only-the-first-anchor-text-counts
You almost certainly are already linking to those pages higher up in the HTML, and Google is counting those anchors, and probably discounting the ones in the footer.
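To make the "first link counts" point concrete, here's a minimal, hypothetical page layout (URLs and anchor text made up for illustration) showing why a keyword-rich footer link likely gets discounted:

```html
<!-- Two links to the same URL on one page. Per the test linked above,
     only the first anchor in the HTML ("Widgets") is counted. -->
<nav>
  <a href="/widgets/">Widgets</a>  <!-- first link: this anchor text counts -->
</nav>

<footer>
  <!-- same destination, keyword-stuffed anchor: likely ignored -->
  <a href="/widgets/">buy cheap widgets online</a>
</footer>
```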
-
RE: Help fixing the traffic drop that started on 4 September 2012.
I think 404'ing is a worse "suicide" than 301s. And as I noted above, you can start by redirecting a few bunches, then see if things improve.
-
RE: Facebook Page vs. regular website
It really depends on the situation. If you've got a Facebook page that can get an extremely high number of likes/shares, but the webpage/domain version of that same page couldn't earn similar quantities of links/social mentions, it's possible the Facebook page would outperform. That said, due to the limitations and restrictions of the Facebook platform, and because it's a property someone else owns and controls (Facebook), I'd never go that route by default unless I knew it was a very specific, short term campaign.
-
RE: Keyword Difficulty Tool
Hi Danny - sorry for my delay. Been a very busy week. You asked:
"Can you please confirm for me the exact reason why the Google ranking data and the Google search volume data in KDT has always been so unreliable."
Yes, I can. What we do is adversarial data analytics. The sources of our data (the search engines) are not stable providers of information with SLAs, promises, and deals. In fact, they are constantly changing and in some cases working actively to prevent the collection of this data. Hence, our attempts to gather this information are sometimes thwarted, as they were this past week. You can see rankings-data instability in virtually every other product/software/tool that tries to collect this type of data as well (a continuing problem for all of us in this field over the past 9 years - since Google stopped providing a rankings API).
We currently have a primary system, an early warning alert system, and a backup system. Last week, we lost our primary system, our early warning system alerted us, and we moved to the backup, which had some instability and some trouble catching up to the volumes needed, but eventually did so. I believe 95% of campaign rankings were caught up as of yesterday, and it should be 100% today.
I certainly apologize for the downtime - it sucks and I want to do better. We're building a third and fourth backup system (they were actually already in the sprint for the production engineering team this month, but had to be pushed back to deal with this emergency), and we hope that will help make things more stable in the future. However, until and unless there's an API with a true guarantee, this data's reliability will always be in question.
You also asked about refunds. We always want to be very generous with our refund policy - if your account and usage has been severely impacted by this outage, please write to our help team and we will issue a refund. However, we treat refunds individually, not on the overall level, for temporary outages of individual features.
For many folks, rankings data was on time (as I noted, only a percentage of rankings collections fell behind thanks to the backup). And ~25% of our users will use the keyword difficulty tool in a month (that tool was back up and operational earlier this week, and had less downtime than some of the campaign rankings).
If PRO went down entirely, or if a tool like Open Site Explorer (used by 70%+ of members monthly) was down for a long period, I think we'd need to re-examine the individual-based refund policy, but I'm hoping it never comes to that. The worst we've had so far was a period in September where several services were throwing frequent errors and issues.
I can tell you that over the next 3 months, uptime and reliability are a HUGE focus. The production engineering team has 4 full-time engineers, 2 contractors, and 2 open full-time positions. These folks do nothing but worry about our backups and how to make sure customers get data.
Thanks for sticking by us in tough times, and if you'd like a refund, please do email help@seomoz.org.
-
RE: Tools to Discover the Keywords Competitors are Buying in Adwords
There are a few popular ones:
They all use large-scale results scraping to get data (except Compete, which uses ISP/clickstream info), so make sure to check the quality of data sources.
-
RE: Why seomoz.org still in Google index?
Part of me thinks this is just because so many live pages still have links to these URLs, and while Google knows they 301, some part of its index maintains the old URLs due to the strength of their presence in the link graph. It might be a technical limitation too, where Google uses synthetic links/URLs in their indices to represent redirected stuff.
In any case, not a big concern for Moz, and shouldn't be for you either if you're redirecting a site. So long as the traffic to the pages from search goes back up and indexation looks solid, you should be fine.
-
RE: I have 4,100 302 redirects; How can I change so many to 301s
Dejan's solution is solid, but you may also want to try writing a redirect rule, using mod_rewrite for Apache (or ISAPI_rewrite in IIS). More info here - http://www.seomoz.org/learn-seo/redirection and here - http://www.seomoz.org/blog/url-rewrites-and-301-redirects-how-does-it-all-work
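As a sketch of what that rule-based approach looks like (the paths here are hypothetical - adapt the pattern to your own URL structure), a single mod_rewrite rule in an Apache .htaccess file can replace thousands of individual redirect lines:

```apache
# Hypothetical .htaccess sketch (Apache with mod_rewrite enabled).
# One pattern-based 301 rule instead of 4,100 individual redirects:
RewriteEngine On
RewriteRule ^old-catalog/(.*)$ /catalog/$1 [R=301,L]

# For one-off URLs, the simpler Redirect directive (mod_alias) also works:
# Redirect 301 /old-page.html /new-page.html
```

The win is that any URL matching the pattern gets a 301 automatically, so you fix the whole batch in one place rather than maintaining a 4,100-line list.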
-
RE: Old URLs Appearing in SERPs
Oh gosh - it's my pleasure! Thanks for being part of the Moz community
I'm honored to help out. As for the URLs - looks like everything's fine. Google often maintains old URLs in a searchable index form long after they've been 301'd, but for every query I tried, they're clearly pulling up the correct/new version of the page, so those redirects seem to be working just great. You're simply seeing the vestigial remnants of them still in Google (which isn't unusual - we had URLs from seomoz.org findable via site: queries for many months after moving to Moz, but the right, new pages were all ranking for normal queries and traffic wasn't being hurt).
Some examples:
- https://www.google.com/search?q=Enter+the+World+of+Eichler+Design
- https://www.google.com/search?q=Eichler+History+flashbacks
- https://www.google.com/search?q=eichler+resources+on+the+web+books
Unless you're also seeing a loss in search traffic/rankings, I wouldn't sweat it much. They'll disappear eventually from the site: query, too. It just takes a while.
-
RE: Why should your title and H1 tag be different?
Just want to point out that personally, I disagree with that assessment and haven't seen anything data-wise to suggest it's an issue. It's hard to believe that Google/Bing would want to penalize so many millions of sites that do this by default (news sites, Wordpress, Joomla, Drupal, etc. all have it in default settings either in base or plugins).
That said, Todd usually has good reasons for his recommendations, so would be interesting to probe more deeply.
-
RE: Flush Open Site Explorer data
Open Site Explorer is powered by the Mozscape index. As we re-crawl the site and update the index, you should see any pages which aren't there move out and the new ones get in. This might take 30-60 days from when the pages were removed/changed (or longer if the timing's bad - our index updates every ~15 days with crawl data from the past 60 days, so if there's an unlucky overlap, data could be as old as 75 days or as fresh as 15 days).
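The age bounds above are just arithmetic on the crawl window and update interval; here's a toy calculation (using the figures from this answer, not Moz's actual pipeline) showing where the 15-75 day range comes from:

```python
# Toy calculation of Mozscape data-age bounds, using the figures above.
CRAWL_WINDOW_DAYS = 60     # each index is built from the prior ~60 days of crawl
UPDATE_INTERVAL_DAYS = 15  # a new index ships roughly every 15 days

# Best case: a page crawled just before the index cutoff is ~0 days old
# at launch, and at most one update interval old before the next index.
freshest_at_end_of_cycle = UPDATE_INTERVAL_DAYS

# Worst case: a page crawled at the very start of the crawl window,
# checked just before the next index replaces this one.
oldest = CRAWL_WINDOW_DAYS + UPDATE_INTERVAL_DAYS

print(freshest_at_end_of_cycle, oldest)  # 15 75
```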
-
RE: Image Links Vs. Text Links, Questions About PR & Anchor Text Value
Hi Andrew - it's been a while since I ran this particular test (almost 2 years!) but, back then, we saw that a straight HTML text link with anchor text appeared to pass slightly more ranking ability/value to a page than the same link with an image + alt text. Now, that said, I'm still a huge fan of image links - they're natural, they make sense in a backlink profile, etc. - and it certainly could have changed since then.
Honestly, from a practical standpoint, I usually wouldn't sweat image vs. text (get what you can if it's a great link source!), but from a technical level, my guess is that anchor text in HTML text is still slightly more influential than an image + alt attribute.
-
RE: Discrepancy between number of links from web master tools and open site explorer.
Hi Vivo - this is great feedback and definitely an issue we're aware of and actively working on. A few things to keep in mind re: Google vs. Moz:
-
Moz tends to crawl the most important pages on most of the visited web. Google will crawl even the deep, dark, duplicate, spammy corners of the web. We've gotten the feedback many times that SEOs want to see these dark corners, too, and our big data team has been working on an infrastructure that will allow us to scale closer to Google's size (we'll probably get to 60-80% of their size/freshness, but never quite 100%).
-
Moz's index can be older - we update every 15-30 days with pages found and crawled in the prior 60 days. Thus, if you've gotten lots of new links, I'd suggest checking out the "just discovered links" tab in OSE (which often shows way more new/fresh links), and Fresh Web Explorer (https://freshwebexplorer.moz.com/) which will surface almost anything connected to an RSS feed.
-
Two of our competitors - MajesticSEO and Ahrefs - often have good crawl/link data as well, and when I need to see every link pointing to a site, I, too, use multiple sources.
Thanks for letting us know, and rest assured, we're working tirelessly to improve both size and freshness (my personal estimate would be you'll start to see those results in 5-6 months).
-
RE: Is ok to add 'no follow' to every outbound link?
You're throwing out the baby with the bathwater (to use a colloquialism). External-pointing, followed links are not only good for the web as a whole, they're good for YOUR site, too. We've seen numerous examples of sites that began opening their external linking policies and received greater search traffic and rankings as a result. The most famous of these is the NYTimes, which Marshall Simmonds talked about in his Whiteboard Friday here: http://moz.com/blog/convincing-upper-management-aka-justifying-your-existence-whiteboard-friday
I'd also suggest watching Cyrus' video on the topic of linking externally here: http://moz.com/blog/external-linking-good-for-seo-whiteboard-friday
And finally, I'd point out that sites that never link out with followed links create the perception that they are not generous and thus not deserving of links of their own. You might point out that only a fraction of web users know what a nofollow link is, and my response would be that those are the same people who control most of the websites and links.
All in all, I'd strongly advise against this (and Google does, too!).
-
RE: Why does Open site explorer only show a fraction of the linked Domains that Google does?
Hi George - this is a complex problem/issue, but I'll do my best to explain.
First off, Oleg & Andy are correct - Google has 100s of times the infrastructure (and thousands of times the financial and human resources) that Moz does, and their index is quite a bit larger. For example, the latest Mozscape index is ~180 Billion pages, while Google's main index likely contains something between 4-10X that amount at any given time.
However, there's a lot more complexity involved, because sometimes you'll actually see Moz report more links than Google Webmaster Tools. This is because Google Webmaster Tools samples (they might know about far more than 120 domains linking to you) and they're not specific about how they treat pages that are, for example, canonicalized by a redirect or a rel canonical. They also don't specify the frequency of their updates, and this appears to be inconsistent as well - we've seen counts go up, down, and all over the place without much indication of why. Hence, lots of site owners become suspicious about the link counts shown in GWMT. In your case, because you have a limited number, you can likely validate that all of these are real and so the problem isn't as big.
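If you want to see exactly where the two tools disagree, a quick way is to diff the linking-root-domain exports from each. A minimal sketch (the domain lists here are made up for illustration):

```python
# Hypothetical linking-root-domain exports from two tools
# (e.g. a GWMT download and an OSE CSV), loaded as sets.
gwmt_domains = {"example.com", "blog.example.net", "news.example.org"}
ose_domains = {"example.com", "news.example.org", "forum.example.io"}

only_in_gwmt = gwmt_domains - ose_domains  # domains Moz hasn't crawled/indexed yet
only_in_ose = ose_domains - gwmt_domains   # domains Google's sample left out
in_both = gwmt_domains & ose_domains       # agreement between the two tools

print(sorted(only_in_gwmt), sorted(only_in_ose), len(in_both))
```

With a small total like 120 domains, spot-checking the two difference sets by hand is very doable and tells you which tool's gaps you're actually looking at.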
As far as Moz's index goes, while we're not as big, we are always growing (the last index, for example, was ~140 Billion pages - you can see all the updates here: http://moz.com/products/api/updates), we're consistent in what we'll tell you (the exact count and every link we see), and you can track this over time compared to index size (and compared to competitors). Next year, we are aiming to 3X the current size, so you should see something much closer to Google's scale (it's been an ongoing challenge for some time, but we're nearly there).
That said, totally understand if our index isn't right for your needs. Majestic SEO and Ahrefs may both be good options (though neither of them are quite Google's size/freshness/scale either, and they have their own weaknesses vs. Moz's index on other features).
I'll also bring back your suggestion of a page that talks more directly about the comparison between different indices and why Moz might have more/fewer links than other tools (and than its own prior indices) over time.
Wish you all the best!
p.s. George - in your particular case, you may also want to check the "Just Discovered" tab in OSE, which often shows a lot of new links we've found that haven't yet made it into the index (but will make it into the next one). If you've received lots of new links in the last 30-60 days, it will take a month or so for those to make it into our main index, but they'll show up within hours in JDL.
-
RE: Unexplained Drop In Ranking and Traffic-HELP!
Hi Kingalan - definitely a frustrating experience. Let me see if I can provide some thoughts on each of your questions:
#1 - Yes, it's certainly possible that links from bad sources could have been propping up your rankings, and by disavowing these, you've lost rankings/traffic in the short term. However, I'd agree with your SEO consultants that pain now from this action is better than the potential penalty/banning you might experience in the future. Google has been very aggressive with penalties, but they haven't been wholly consistent. This makes bad links something that can provide short-term opportunity and long-term cataclysm. If removing these links is what hurt you, I'd argue it was the right choice to make, and getting some new, editorial, high-quality link sources is the next step.
#2 - My guess would be that extra indexation of a few hundred pages has nothing to do with the rankings/traffic changes. I've seen Google index thousands or even tens of thousands of extra pages without much problem - a few hundred are very unlikely to be the cause of the problem. That said, I'm not sure removal would be my first step - I might think about how to canonicalize these back to pages you do want indexed (if you do want that content discoverable). If you really don't want the content findable in Google, then meta robots noindex might be worthwhile.
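For reference, the two options in #2 look like this in the page's head section (the canonical URL here is made up for illustration - point it at whichever page you actually want indexed):

```html
<!-- Option A: canonicalize a near-duplicate back to the page you DO
     want indexed (hypothetical URL): -->
<link rel="canonical" href="https://www.example.com/listings/main-page/" />

<!-- Option B: keep the page out of Google's index entirely, while still
     letting crawlers follow its links: -->
<meta name="robots" content="noindex, follow" />
```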
#3 - It is possible that thin content is to blame here. I agree it's hard to scale quality content, but keeping a few hundred pages up to date and incredibly useful for visitors/searchers is exactly what Google wants to see. I'd be constantly asking the question - is my page the most valuable one in the search results? Does it provide a better, more useful experience than anything else in the top 10? If the answer is no, then you don't really deserve to rank (don't worry, many sites don't), and extra effort here may go a long way. One way to do this might be to ask those who submit listings to give you more content (or to get agents/interns/writers/contractors to bolster each listing).
p.s. You may wish to check out http://moz.com/blog/why-you-might-be-losing-rankings-to-pages-with-fewer-links-worse-targeting-and-poor-content
Wish you all the best,
Rand
-
RE: What's the Story on Mozscape Updates?
Yes - that should be possible and I will endeavor to make sure we're keeping the updates honest and correct for the future. Thanks Donna.