Pretty simple answer, yep!
Posts made by Martijn_Scheijbeler
-
RE: Is it possible to add more than 1 author for a post?
Hi Eliran,
It's possible to add multiple authors to a post via the rel=author tag. However, Google will only show the first author (if they're verified) in the search results.
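For illustration, two rel=author links in one post could look like this; the profile URLs and names below are hypothetical placeholders:

```html
<!-- Hypothetical example: two rel=author links on one post.
     Google would only surface the first verified author in results. -->
<a href="https://plus.google.com/111111111111111111111" rel="author">First Author</a>
<a href="https://plus.google.com/222222222222222222222" rel="author">Second Author</a>
```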
Hope this helps!
-
RE: Can I see when SEO Moz has crawled my website?
Hi Simone,
As far as I know, you're not able to see when Rogerbot crawled your site from within SEOMoz. You should be able to see when Googlebot visits your site from within Google Webmaster Tools, which provides crawl stats showing how many pages Googlebot visited.
Hope this helps!
-
RE: How do you de-index and prevent indexation of a whole domain?
Hi TCE,
There are two options that in my opinion work best (there are several others, including META robots, etc.):
- Robots.txt: Exclude your portal via your robots.txt file.
- HTTP Auth: Have your developers add HTTP Authentication so you have to log in to see the portal. Google won't be able to see the pages and so will de-index them.
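As a sketch of the first option, a robots.txt entry blocking a portal could look like this (the /portal/ path is a hypothetical placeholder for wherever your portal lives):

```text
# Hypothetical example: block all crawlers from the portal directory
User-agent: *
Disallow: /portal/
```

Note that robots.txt only asks crawlers not to crawl; HTTP Authentication actually prevents them from seeing the pages at all.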
Hope this helps.
-
RE: Help: my WordPress Blog generates too many onpage links and duplicate content
Hi Holger,
I would recommend viewing the latest webinar by Nick Herinckx on Advanced WordPress SEO. He provides some crucial tips, including on your problems. The webinar can be found here: http://www.seomoz.org/webinars#past .
-
RE: 301 a PDF?
Hi Nigel,
Your question was asked before in the Q&A, and it looks like it was answered successfully. Maybe these answers can help you out: http://www.seomoz.org/q/301-redirect-on-a-pdf-docx-files.
Hope this helps!
-
RE: Can I prevent some pages from being crawled from SEOMoz spider and still not affect Google Spider?
Hi Matt,
Absolutely, you can do this by adding a section to your robots.txt file targeted at the user agent rogerbot, which SEOMoz uses for its spider. For example:
User-agent: rogerbot
Disallow: */anythingyouwanttoexcludeforroger/*
Hope this helps!
-
RE: Last Linkscape index update
Hi Dan,
A couple of months ago Linkscape was renamed Mozscape, and it is the data basis for tools like Open Site Explorer (OSE) and the Pro campaigns. The last index update for Mozscape was on the 28th of November and included more than 76 billion URLs. This means that your link analysis was also using data from this update. Hopefully this answers your question.
-
RE: What user agent is used by SEOMOZ crawler?
Hi Eric,
The user agent of the SEOMoz crawler is: rogerbot.
-
RE: How do I disallow a subdirectory in my reports?
Hi Dan,
I would suggest adding this to the robots.txt file in your root; after the next crawl by Roger, this shouldn't appear in your reports anymore.
User-agent: rogerbot
Disallow: /localmarket*
-
RE: My warning report says I have too many on page links - 517! I can't find 50% of them but my q is about no follow
Hi Sarah,
You're right about the 517 links: the SEOMoz toolbar shows around 517 internal links. But no more than ±130 links can be found on the page itself. Hopefully this answers your questions.
-
No, if the page is still linked from other pages without the nofollow, this won't help.
-
Yes. The link won't be followed by Google.
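For illustration, a nofollowed link looks like this (the URL and anchor text are hypothetical placeholders):

```html
<!-- Hypothetical example: Google won't follow this link or pass value through it -->
<a href="https://example.com/page" rel="nofollow">Example link</a>
```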
-
RE: Robots.txt - What is the correct syntax?
Jarno,
The $ means the pattern only matches at the end of a URL, but in Henrik's example the parameter appears somewhere in the middle of the URL.
-
RE: Robots.txt - What is the correct syntax?
Hi Henrik,
I would suggest trying: Disallow: &view=send_friend
Optionally, you could try this without the &, as I'm not sure the & always comes directly before this parameter. Hope this helps!
-
RE: Examples of sites using hreflang
Hi,
Hopefully you already read John Doherty's post about hreflang, as he successfully tested it.
His post can be found here. Hope this helps!
-
RE: Does opening a page with target=_blank cause bounce?
Hi,
A quick question to be sure we're able to answer your question: are the blog and the store on the same root domain? Or do they have their own subdomain?
Thanks.
-
RE: Our sitemap is not indexed well
I would say yes, if you also delete the sitemap from the Sitemaps section in Google Webmaster Tools. Then Google knows which sitemap(s) to use.
-
RE: Our sitemap is not indexed well
Hi Eduardo,
Yep, as expected. The first sitemap you mentioned is a sitemap index file, but http://cursos.workea.org/sitemap.xml is also a sitemap index. Google's policies don't allow a sitemap index to reference another sitemap index. You should include the sitemaps listed in your http://cursos.workea.org/sitemap.xml file directly in your first and only sitemap index file.
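As a sketch, the single top-level sitemap index would then reference all child sitemaps directly; the file names below are hypothetical placeholders for your actual sitemaps:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Child sitemaps are listed directly; no nested sitemap index files -->
  <sitemap>
    <loc>http://www.workea.org/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://cursos.workea.org/sitemap-cursos.xml</loc>
  </sitemap>
</sitemapindex>
```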
-
RE: Our sitemap is not indexed well
Hi Eduardo,
This topic/question might help you out; it was asked a couple of weeks ago, and I think it could provide you with an answer: http://www.seomoz.org/q/sitemap-nested-indexing-this-sitemap-index-is-referenced-by-another-sitemap-index
Hope this helps,
Martijn. -
RE: Has there been a change in the Bing/Yahoo Algorithm?
Hi Dave,
The best way to track such things is by using tools like SERPs Volatility and SERPMetrics Flux. Both tools give information about recent ranking trends on the top 3 search engines for a couple of thousand keywords. You can best judge for yourself whether this is what caused the hit your website took.
Hope this helps!
-
RE: Analytics: Goal Tracking
Hi,
Could you share a screenshot of how you've set up the goals at the moment? This would give us some insight into where the problem could be. I've got a hunch, but it makes no sense to send you into the woods looking for something that might not be there. So a screenshot of your goal setup/funnel would be appreciated.
Thanks!