Posts made by Martijn_Scheijbeler
-
RE: Reducing Alexa Ranking
I definitely agree with Federico; we have many times your traffic on a daily basis and we still have an Alexa global ranking of around 2,000. That said, this is definitely not a metric I/we care about, as it makes no sense and doesn't drive us any revenue or visitors in any way (shout-out to RCS from Wil Reynolds).
-
RE: How to schedule the on page reports myself
No, unfortunately that's not an option for now.
-
RE: Goals and funnels in Google Analytics
I would recommend using regular expressions within your funnel steps to make sure every category, as well as every product, is captured by them. I think you could filter on just the category part of your URL, /category/, to make sure it appears in the URL as part of your funnel step, and then do the same for your product pages.
So a funnel step would become something like: ^/category/(category-name1|category-name2)
Hope this helps!
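As a quick sanity check for a pattern like that (a rough sketch; the category names are placeholders, not the asker's real ones), you can test it against a few example URLs:

```python
import re

# Hypothetical funnel-step pattern matching two category landing pages.
step_pattern = re.compile(r"^/category/(category-name1|category-name2)")

urls = [
    "/category/category-name1",            # category page: should match
    "/category/category-name2/product-x",  # product under a category: should match
    "/other/category-name1",               # outside /category/: should not match
]

matches = [bool(step_pattern.match(u)) for u in urls]
```

The `^` anchor is what keeps /other/... paths out of the funnel step.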
-
RE: Ranking Report showing point at two place
Did you redirect the old link to the new one? That could be a reason for Google to still show both pages in the index. If you did redirect the page, could you give some insight into when you did this?
-
RE: Difference between SEOmoz Pro and Mozscape API
Hi Shinya,
Let me try to help you out; hopefully the Moz staff will correct me if I say something wrong. The difference between SEOmoz Pro and the Mozscape API is that the API only covers the data you see in Open Site Explorer. As it's an API, it will provide you with raw data.
The SEOmoz Pro campaigns will give you, via dashboards, insight into whether your pages are marked up with the right code for search engines.
You're right about the rate limit of one request every 2 seconds. Besides the difference in rate limits for paying customers, there's another difference: on the free tier you won't get access to some parts of the API for retrieving metrics. You can find a complete overview of the methods you can request over here.
Hope this helps!
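A minimal sketch of staying under that one-request-per-2-seconds limit; `fetch_metrics` here is a hypothetical stand-in for the actual Mozscape request, not the real API call:

```python
import time

def fetch_metrics(url):
    # Placeholder for a real Mozscape API request; returns a dummy record here.
    return {"url": url}

def fetch_all(urls, delay=2.0):
    """Fetch metrics for each URL, sleeping `delay` seconds between calls
    so we never exceed one request per rate-limit window."""
    results = []
    for i, u in enumerate(urls):
        if i:  # no need to wait before the very first request
            time.sleep(delay)
        results.append(fetch_metrics(u))
    return results
```

Batching your URLs through a small wrapper like this is easier than sprinkling sleeps around individual calls.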
-
RE: Does this robots.txt file look right?
Hi Sean,
As you already said, I wouldn't recommend your current robots.txt, as it would indeed block all files ending in .html. So I would go with your own robots.txt file with only the User-agent line.
Good luck!
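For reference, a "block nothing" robots.txt, which is what keeping only the User-agent line amounts to, looks like this (an empty Disallow value means nothing is disallowed):

```
User-agent: *
Disallow:
```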
-
RE: Proper way of handling wordpress urls and redirects?
Hi Bussara,
Yes, you should redirect the old URLs to the new ones, since, as I understand it, the content from the old pages can now be found at the other (new) URLs.
Hope this helps!
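If the site runs on Apache, a single old-to-new 301 redirect can be added in .htaccess like this, for example (the slugs and domain are placeholders, not the asker's real URLs):

```
Redirect 301 /old-post-slug/ http://www.example.com/new-post-slug/
```

A 301 tells search engines the move is permanent, so the old URL's value is passed on to the new one.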
-
RE: Setting a campaign in SEOmoz
If you enter your campaign domain as www.domain-name.com, only the www subdomain is used to gather campaign details. If you enter domain-name.com, all subdomains, and of course your root domain, will be used to gather details.
So to answer your question: if you enter domain-name.com, it will also run on www.
Hope this helps!
-
RE: Positioning one site in two languages
I don't know all the features of both plugins, but it seems to me that you want to use the hreflang tag to differentiate the pages for the two languages/Google sites. With it you can declare the language of your pages, and Google will (in most cases) respect these tags and show the right pages for the right language.
Hope this helps
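For reference, a hypothetical pair of hreflang annotations, placed in the head of both language versions, would look like this (the domain and language codes are placeholders):

```
<link rel="alternate" hreflang="en" href="http://www.example.com/en/" />
<link rel="alternate" hreflang="es" href="http://www.example.com/es/" />
```

Each language version should list itself and its alternates, so the annotations are reciprocal.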
-
RE: Facebook ad matching
Hi Tauri,
These people could get some hints that you're using their email addresses for your Facebook ads, as it otherwise seems highly unlikely that you'd be able to target them so directly (besides the fact that you could use FBX partners for retargeting).
I would suggest including in the terms of service or privacy agreement you have with your customers that you're able to target them on Facebook using their email addresses.
Hope this helps!
-
RE: Social data for links
In what sense do you mean not accurate? Both tools get their data via the APIs of the social networks. However, as these reports are not generated on the fly, URLs crawled at the start of a crawl could be inaccurate by the time the crawl finishes.
-
RE: Google caching the "cookie law message"
Hi Cyto,
The problem could be found in the use of the enclosing tags: Google's bots don't use them in any way to skip crawling the content within them. The tags you're using are only understood by the Google Search Appliance and don't have any value for the normal Google bots you're trying to target with this.
Hope this helps!
-
RE: I need Black Hat Examples
I have to state up front that I've never dealt with blackhat SEO myself, but I most certainly enjoy reading posts on blackhat SEO around holidays and special events like Halloween. I would recommend reading such articles.
-
RE: Social data for links
Hi Chris,
Have you tried SocialCrawlytics? It will crawl your site and check the number of shares/likes/tweets for all the pages found on the domain. Very helpful!
-
RE: Indexing/Sitemap - I must be wrong
I see your frustration. How long ago did you submit these sitemaps? Are we talking a couple of weeks, or a day or two? From what I've seen myself, Google is not that fast at calculating the number of pages indexed (definitely not within GWT). Mostly within a couple of days, or within a week, Google has largely increased the number of pages indexed.
-
RE: I want to check which pages have been crawled
From the Crawl Diagnostics page of your campaign you can export all the data to CSV. This will provide you with an overview of all URLs SEOmoz crawled during its last crawl.
-
RE: Problem crawling a website with age verification page.
Well, that's a small side note to your problem ;-). Are you able to just set up a crawl for a subfolder? Or do you have to pass the verification at all times?
-
RE: Problem crawling a website with age verification page.
Hi Catalin,
The best way to do this is of course to include a link to the rest of the website (you could remove the link once Roger has come by). But what you could also do, if linking isn't an option, is redirect the user based on the user agent.
Hope this helps!
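A rough sketch of the user-agent approach (my own illustration, not the thread's code; the token list is an assumption, though rogerbot is Moz's crawler). Keep in mind that serving crawlers different content than users can be seen as cloaking, so use this carefully:

```python
# Let known crawler user agents bypass the age-verification interstitial.
CRAWLER_TOKENS = ("rogerbot", "googlebot", "bingbot")  # rogerbot = Moz's crawler

def skip_age_gate(user_agent):
    """Return True when the request's user agent looks like a known crawler."""
    ua = (user_agent or "").lower()
    return any(token in ua for token in CRAWLER_TOKENS)
```

On the server you would call this with the request's User-Agent header and skip the redirect to the verification page when it returns True.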
-
RE: How to optimize for different google seach center (google.de, google.ch) ?
Hi Yiqing,
You're right about getting links from domains like .de and .ch; they will increase your authority within the German market. As I would imagine, though we never know for sure, Google will give more weight to the Zeitung than to the BBC when you're a German site, as the Zeitung is a far more popular news source in Germany than the BBC is.
As I'm not sure which site we're talking about, I would also suggest taking a look at the hreflang tag. As this help article from Google, support.google.com/webmasters/bin/answer.py?hl=en&answer=189077, already suggests, it's used by Google to determine which language is intended for which countries.
Lately a couple of great blog posts have been written around the topic of internationalization; I would recommend reading those as well.
Hope this helps a bit!
-
RE: Affiliate & canonicals
Hi Richard,
Absolutely. In the case mentioned in the article it was a duplicate of their normal pro page, so adding a canonical tag with the URL of the original page was by far the best way to make clear to Google that the original version of the page could be found elsewhere.
Hope this helps!
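For reference, such a canonical tag in the head of the duplicate page would look like this (the URL is a placeholder for the original page's address):

```
<link rel="canonical" href="http://www.example.com/original-page/" />
```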