I should also add that the Keyword Explorer tool is awesome and one of the best things about Moz Pro, so kudos on that tool. Incorporating the rank tracker into Keyword Explorer would make sense to me from a UX point of view (more than just the first page, change over time, etc.). Just a thought
Posts made by prima-253509
-
RE: UPDATE: Rank Tracker is NOT being retired!
I haven't used Rank Tracker very much in the last year, but it has historically been useful for looking up keywords outside of the core keywords we are tracking in our campaigns. It is not just that the tool is going away; the quota for what you can track is also being reduced. We recently upgraded our subscription so that we could track more keywords, but now, in order to mimic the functionality of the Rank Tracker tool, I would have to keep some keywords free and in reserve so that campaigns could be created on an ad-hoc basis. In other words, our 750-keyword limit on campaigns is now essentially 700 if I want to keep open spots for the ad-hoc keyword research that Rank Tracker had provided (tracked over time), or 550 if I wanted to keep open the 200 rankings available under the daily cap.
Campaign limits are also going to be hit with regard to tracking domains for a keyword phrase, as you can only add three competitor sites per campaign. It just isn't as functional for ad-hoc research as Rank Tracker was.
Are quotas going to be increased on campaigns to compensate for this (keywords available / campaign spots available)?
This is disappointing, as it seems like a lot of features are disappearing / being sunset while costs are staying the same. If I am missing something about quotas, let me know. Thanks!
-
RE: Wordpress Woocomerce Recommended SEO URL structure
Glad it was helpful!
If you are going to have a true blog, then that is enough to segment it out. Having the date in there can be helpful for comparing the hits you are getting on older posts vs. newer ones (i.e., how long your content stays relevant).
If you are going to have other types of content, such as shopping guides, product comparisons, and other more "timeless" pieces, then you might want to think about the kinds of articles you are going to write and create prefixes that match those types of articles.
You could definitely do product guides and product comparisons in a blog, but they can be harder to segment out if the prefix is just "blog".
Hope that helps.
Cheers!
-
RE: Wordpress Woocomerce Recommended SEO URL structure
One thing to keep in mind with the URLs is how you can segment them in analytics for easy data analysis. You want them to be semantic and pretty, but also easily segmented. I would encourage you to think about how you will be able to segment your URLs in analytics so that you can easily see patterns in how people browse the site and which types of pages are successful.
For instance we have the following url structures for brands, equipment, replacement parts, and a learning center.
- brand/[brand-name]
- equipment/type/[category] - for the categorization of equipment
- equipment/brand/[product] - for easy segmentation of products
- part/type/[category]
- part/brand/[part]
- learn/[cat]
- learn/article/[article-title]
This gives us a lot of flexibility to move products around in the menu system without messing up URLs, while still being semantic and allowing for easy segmentation in analytics. For instance, with this setup we can see whether people prefer navigating by equipment catalog or by brand. It also allows us to easily pull out the learning center articles and all the visits we get to them, to see how eCommerce-only visits are doing.
One thing I would suggest with your blog is to have some kind of prefix that allows you to easily exclude those pages (or include only those pages) in analytics. If you simply go by year without a prefix, it will be harder to segment out the data.
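To make the idea concrete, here is a minimal sketch of prefix-based bucketing in Python. The paths follow the example structure above; the bucket labels are just names I made up, and in practice this logic would live in your analytics tool's segment or filter definitions rather than in code:

```python
import re

# Map each URL prefix from the structure above to an analytics "bucket".
# Labels are illustrative only.
SEGMENTS = [
    (re.compile(r"^/equipment/type/"),  "equipment-by-category"),
    (re.compile(r"^/equipment/brand/"), "equipment-by-brand"),
    (re.compile(r"^/part/"),            "parts"),
    (re.compile(r"^/learn/"),           "learning-center"),
    (re.compile(r"^/brand/"),           "brand-pages"),
]

def bucket(path):
    """Return the segment label for a URL path, or 'other'."""
    for pattern, label in SEGMENTS:
        if pattern.match(path):
            return label
    return "other"
```

Because every learning center page shares the /learn/ prefix, a single rule pulls all of those visits out of the eCommerce totals, which is exactly the kind of analysis that gets painful when URLs have no prefixes.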
You should check out a mozinar that Moz did with Everett Sizemore that deals with a lot of these issues (he specifically talks about SEO and URL structure).
Also, you have probably already seen this, but Yoast's plugin for WordPress will allow you to remedy much of the duplicate content that WordPress can create.
Cheers!
-
RE: What is the full User Agent of Rogerbot?
I know this is an insanely old question, but as I was looking this up as well and stumbled on this page, I thought I would provide some updated info in case anyone else is looking.
The user agent can't be found on the page that is listed anymore; however, it is now documented at https://moz.com/help/guides/moz-procedures/what-is-rogerbot
Here is how our server reported Rogerbot in its access logs (taken from May 2013). Notice the difference in the crawler-[number] part:
rogerbot/1.0 (http://www.seomoz.org/dp/rogerbot, rogerbot-crawler+pr1-crawler-02@seomoz.org)
rogerbot/1.0 (http://www.seomoz.org/dp/rogerbot, rogerbot-crawler+pr1-crawler-16@seomoz.org)
-
RE: Considering Switch to old Domain - Any Bad Karma?
Hi Mememax,
Thanks for the feedback; that is what I was hoping for, but I just thought I would get some thoughts from the great community here. Thanks for weighing in!
Josh
-
Considering Switch to old Domain - Any Bad Karma?
So here is the issue. I am working with a company that used to have a branded domain. They then split the domain into two separate keyword-rich domains and tried to change their branding to match the keyword-rich domains.
This made for a really long brand name that is difficult to actually rank for, as it is mostly high-traffic key terms, and it also created brand confusion because all of the social accounts still operate under the old brand name.
We are considering a new brand initiative: going back to the original brand name (it better meets our business objectives, and they still get traffic from branded searches under the old brand) and to the old branded web domain.
My question is whether there is any added risk in going back to an old domain that has been forwarded to the new domain for the past two years.
I know the risks and problems of a domain name change, but I am not as certain about the added complication of moving back to an old domain and essentially reversing the flow of 301s. Any thoughts?
Cheers!
-
RE: Tool for tracking actions taken on problem urls
Maybe I don't fully appreciate the power of Excel, but what I am envisioning seems to require more than Excel can provide. Thanks for the suggestion though. I will think about it some more.
-
Tool for tracking actions taken on problem urls
I am looking for tool suggestions that assist in keeping track of problem URLs and the actions taken on them, and that help with tracking and testing a large number of errors gathered from many sources.
So, what I want is to be able to export lists of URLs and their problems from my current set of tools (SEOmoz campaigns, Google WM, Bing WM, Screaming Frog) and import them into a centralized DB that will show all of the actions that need to be taken on each URL, while at the same time removing duplicates, since each tool finds a significant number of the same issues.
Example Case:
SEOmoz and Google identify URLs with duplicate title tags (example.com/url1 & example.com/url2), while Screaming Frog sees that example.com/url1 contains a link that is no longer valid (so it terminates in a 404).
When I import the three reports into the tool, I would like to see that example.com/url1 has two issues pending (a duplicate title and a broken link), without duplicating the entry that both SEOmoz and Google found.
I would also like to see historical information on the url, so if I have written redirects to it (to fix a previous problem), or if it used to be a broken page (i.e. 4XX or 5XX error) and is now fixed.
Finally, I would like not to be bothered with the same issue twice. As Google is incredibly slow at updating their issues summary, I would like not to import duplicate issues (the tool should recognize that the URL is already in the DB and that the issue has been resolved).
Bonus for any tool that uses Google and SEOmoz API to gather this info for me
Bonus bonus for any tool smart enough to check issues as they come in and mark them as resolved (for instance, if a URL is reported with a 403 error, on import the tool would check whether it still resolves as a 403; if it does, it would be added to the issue queue, and if not, it would be marked as fixed).
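To be clear about the import behavior I have in mind, here is a minimal sketch in Python (all table and function names are hypothetical; a real tool would add the re-checking and history pieces on top of this):

```python
import sqlite3

def open_issue_db(path=":memory:"):
    # One row per (url, issue) pair; the UNIQUE constraint is what
    # de-duplicates the same finding reported by multiple tools.
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS issues (
                      url    TEXT NOT NULL,
                      issue  TEXT NOT NULL,
                      source TEXT,
                      status TEXT NOT NULL DEFAULT 'open',
                      UNIQUE (url, issue))""")
    return db

def import_report(db, source, rows):
    """Import (url, issue) pairs from one tool's export.
    Pairs already in the DB (from any source) are silently skipped."""
    for url, issue in rows:
        db.execute(
            "INSERT OR IGNORE INTO issues (url, issue, source) VALUES (?, ?, ?)",
            (url, issue, source))
    db.commit()

def open_issues(db, url):
    cur = db.execute(
        "SELECT issue FROM issues WHERE url = ? AND status = 'open'", (url,))
    return [row[0] for row in cur]
```

With this, importing the SEOmoz and Google duplicate-title reports plus the Screaming Frog 404 report would leave example.com/url1 with exactly two open issues, which is the behavior from the example case above.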
Does anything like this exist? How do you deal with tracking and fixing thousands of URLs and their problems, and the duplicates created by using multiple tools?
Thanks!
-
RE: Google Hiding Indexed Pages from SERPS?
Thanks Alan,
We will see what we can do. One way or the other, it has to be addressed.
-
RE: Google Hiding Indexed Pages from SERPS?
Hi Alan,
Thanks for the response. I guess it is good to know that someone else has seen this issue before.

As for canonical tags, I do have them on all pages, but because there is no way to set them absolutely (our CMS only allows relative paths, so it takes the base path of whatever domain the page is served on), I can't get them to link only to the domain they are supposed to be published on.
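To spell out why a relative canonical can't pin the content to one domain (the domain names here are made up for illustration): the same relative href resolves against whichever host served the page, so each copy of the site ends up pointing at itself.

```python
from urllib.parse import urljoin

# The CMS emits the same relative canonical on every copy of the site,
# so each domain "canonicalizes" to itself instead of to one master domain.
relative_canonical = "/some-page"
for served_from in ("https://site-a.example/some-page",
                    "https://site-b.example/some-page"):
    print(urljoin(served_from, relative_canonical))
# prints https://site-a.example/some-page
# then   https://site-b.example/some-page
```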
Cheers!
-
RE: How to add a disclaimer to a site but keep the content accessible to search robots?
That is rough,
maybe a legitimate situation for user-agent sniffing (albeit fraught with danger)? If you can't rely on JavaScript, then it would seem that any option will have significant downsides.
This may be a hare-brained suggestion, but what about appending a server parameter to all links for visitors who do not have a cookie set? If the user agent is Google or Bing (or any other search bot), the server could ignore that parameter and send them on their way to the correct page; if the user agent is not a search engine, they would be forced to the disclaimer page.
This would allow a user to see the initial content (which may not be allowed?) but not navigate the site, and it would also let you present the same info to both user and agent while making the user accept the terms.
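A rough sketch of that gate, in plain Python rather than any particular framework (all names are hypothetical, and again: user-agent sniffing like this is the risky part, since serving bots differently can be treated as cloaking):

```python
# Hypothetical routing decision for the cookie/disclaimer idea above.
# Not a real framework API; illustrative only.
KNOWN_BOTS = ("googlebot", "bingbot", "rogerbot")

def destination(user_agent, cookies, requested_path):
    """Decide what to serve: bots and users who have accepted the terms
    get the requested page; everyone else is routed to the disclaimer."""
    ua = (user_agent or "").lower()
    if any(bot in ua for bot in KNOWN_BOTS):
        return requested_path                      # let crawlers through
    if cookies.get("terms_accepted") == "yes":
        return requested_path                      # terms already accepted
    return "/disclaimer?next=" + requested_path    # force the disclaimer
```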
Alternatively, serve non-cookied visitors a version of the page where the div containing the disclaimer form expands to fill the whole viewport, with its style set to position: fixed, which should keep the visitor from scrolling past the div while still rendering the content below the viewport. Cookied visitors don't see a form, while non-cookied visitors get the same page content but can't scroll to it until they accept the terms (mobile does weird things with position: fixed, so this might not work either, and a savvy user could get around it).
Edit: I just found this article, which looks promising. It is a Google doc on how to allow crawls on a cookied domain: https://developers.google.com/search-appliance/documentation/50/help_gsa/crawl_cookies It might solve the problem in a more elegant, safe way.
Would be interested to hear what you come up with. If you could rely on javascript then there are many ways to do it.
Cheers!
-
Google Hiding Indexed Pages from SERPS?
Trying to troubleshoot an issue with one of our websites, I noticed a weird discrepancy. Our site should only have 3 pages in the index (the main landing page with a contact form and two policy pages), yet Google reports over 1,100 pages. That part is not a mystery; I know where they are coming from: multi-site installations of popular CMSs leave much to be desired in actually separating websites.
Here is a screen shot showing the results of the site command:
http://www.diigo.com/item/image/2jing/oseh
I have set my search settings to show 100 results per page (the maximum). Everything is fine until I get to page three, where I get the standard "In order to show you the most relevant results, we have omitted some entries very similar to the 122 already displayed." But wait a second: I clicked on page three, yet now there are only two pages of results, and the number of results reported has dropped to 122.
http://www.diigo.com/item/image/2jing/r8c9
When I click on "show omitted results" I do get some more results, and the reported count jumps back up to 1,100. However, I only get three pages of results, and when I click on the last page the number of results returned changes to 205.
http://www.diigo.com/item/image/2jing/jd4h
Is this a difference between indexes? (The same thing happens when I turn instant search back on: it shows over 1,100 results, but when I get to the last page of results it changes to 205.)
Is there any other way of getting this info? I am trying to identify how these pages are being generated, but I have to know which ones are showing up in the index for that to happen. Only being able to access 1/5th of the pages indexed is not cool. Does anyone have any idea about this, or experience with it?
For reference, I was going through the results with SEOmoz's excellent toolbar and exporting them to CSV (using the Mozilla plugin). I guess Google doesn't like people doing that, so maybe this is a way to protect against scraping by only showing limited results for the site: command.
Thanks!
-
RE: Disqus integration and cloaking
Thanks John,
That link was helpful; it is a similar concept, but we are not using AJAX. I appreciate your response.
-
RE: Channel Conversion Rates
Hi Kyle,
I hope this will be helpful in gauging your site's performance, but I have a feeling it will be hard to compare, because conversion rates change so much depending on the target audience and types of users. Anyway, here it is for what it's worth.
I am currently involved with three sites in the eCommerce realm: two are mostly B2B, and one is both B2B and B2C.
Our lowest-performing CPC conversion rate is 0.39%, while our highest is 3.53% (it varies wildly depending on the site and the referrer: Google, Bing, etc.).
Our lowest-performing organic rate is 0.89%, while our highest is 4.55% (same stipulations as above).
Direct: 1.6% to 5.5%, depending on the site.
From what I have seen (and I know we can improve), your organic numbers look really good (maybe high?), while CPC might be a little low. Your direct looks really good as well, although I find it interesting that it is below your organic.
Hope that gives some gauge for you.
-
Disqus integration and cloaking
Hey everyone,
I have a fairly specific question about cloaking and whether our integration with Disqus might be viewed as cloaking.
Here is the setup. We have a site that runs on Drupal, and we would like to convert comment handling to Disqus for the sake of our users. However, when JavaScript is disabled, the nice comment system and all of the comments from Disqus disappear. This obviously isn't good for SEO, but the user experience with Disqus is way better than with the native comment system. So here is how we are addressing the problem. Drupal can sync comments between the native comment system and Disqus. When a user has JavaScript enabled, the containing div for the native comment system is set to display: none, hiding the submission form and all of its content, and the comments are displayed through the Disqus interface instead. When JavaScript is not enabled, the native comment form and the comments are available to the user.
Could this be considered cloaking by Google? I know they do not like hidden divs, but it should be almost exactly the same content being displayed to the user (depending on when the last sync was run).
Thanks for your thoughts, and if anyone is familiar with a better way to integrate Drupal and Disqus, I am all ears.
Josh
-
RE: Homepage outranked by sub pages - reason for concern?
Thanks for the response. It is nice to hear from someone else who has the same type of site and sees the same thing. Appreciate the tip and the response.
-
RE: Homepage outranked by sub pages - reason for concern?
Thanks Alan,
that helps, and you might have pointed out something there. Our site has lots of links on each page, and each page basically links to the same pages, which would keep everything pretty even. Structure is something we are working on; I wonder if that is part of the problem.
-
Homepage outranked by sub pages - reason for concern?
Hey All,
trying to figure out how concerned I should be about this. So here is the scoop; I would appreciate your thoughts.
We have several eCommerce websites that have been affected by Panda, due to content from manufacturers and a lack of original content. We have been working hard to write our own descriptions and are seeing an increase in traffic again. We have also been writing blogs since February and are getting a lot of visits to them.
Here is the problem: our blog pages now outrank our homepage when you type in site:domain-name.
Is this a problem? Our home page does not show up until three pages in. However, when you type just our domain name into Google as a search, it does show up in position one with sitelinks under it.
This is happening across both of our sites. Is this a cause for concern, or just natural due to our blogs being more popular than our homepage?
Thanks!
Josh
-
RE: Facebook Comments
If you mean viewing the source of the page and the actual HTML elements, that is what I did. With JavaScript turned on, all of the HTML elements show up. With it turned off, they don't; thus much of it is being written via JavaScript.
Instant preview of that page from the Google SERPs does not show all of the comments, just the likes. The cached version of the page does show all of the comments, but it must be some sort of screen capture, because the majority of the comments do not show up when viewing the source of the cached page.
So I am not sure that really confirms anything. I guess to find out you might have to do a controlled test.