My pleasure. SEOmoz has a lot of great tools, and each of them is valuable in its own way. I would take the time to go through all of them (whether you use them or not) just to get an idea of what each does specifically. The Help Hub is a great section with short videos on each product and feature. If we don't have a tool you love, you can always look through the PRO Change Log to see if any of the tools you want are in development, or you can suggest new ones.
Best posts made by DarinPirkey
-
RE: Importance of WMT change of address and problem doing it
Yes, that is one way.
Here is a link to a number of methods for verifying your site:
https://sites.google.com/site/webmasterhelpforum/en/verification-specifics
If you have access to the registrar (or know who does), then it's much easier to verify that way.
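One of the simplest methods from that list is dropping Google's verification meta tag into the head of your home page (the content value below is just a placeholder; Webmaster Tools generates the real one for your account):

```html
<head>
  <!-- Paste the exact tag Webmaster Tools generates for your account -->
  <meta name="google-site-verification" content="YOUR-TOKEN-HERE" />
  <title>Your Site</title>
</head>
```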
-
RE: Wrong titles in site links
Sitelinks are auto-generated by Google's algorithm, and Google has been trying to make them more "useful" to the users who see them. I've seen that Google will often pull either from my meta description or from the first paragraph of the page. This isn't always accurate, but it's a general guideline.
Webmaster Help: https://support.google.com/webmasters/answer/47334?hl=en
Matt Cutts on Sitelinks: https://www.youtube.com/watch?v=lpR34uwujeY
-
RE: Google Authorship Problems
It looks like your rel="author" is set up to point to the /posts page of your Google+ profile and not your main Google+ page.
In the plugin, use this:
https://plus.google.com/104817850948530040093
Instead of this:
https://plus.google.com/u/0/104817850948530040093/posts
I'm not sure that's it, but I would definitely make sure that is changed.
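For reference, the authorship link the plugin outputs should end up looking something like this in your page's head (the profile ID is the one from above; the markup itself is the standard rel="author" pattern):

```html
<!-- Points to the main Google+ profile, not the /posts sub-page -->
<link rel="author" href="https://plus.google.com/104817850948530040093" />
```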
Also, on your Google+ profile you can add "Contributor to": go to Account > Me on the Web > Review Your Google+ Profile (click the bottom link named "open your profile"), then under Links in the right-hand sidebar you'll see a section called "Contributor to". Add a link to that section.
Lastly, I would also go through Google's Authorship Verification if you haven't done so: https://plus.google.com/authorship
I also really like the AuthorSure plugin for this. It shows your profile on each particular page. http://wordpress.org/extend/plugins/authorsure/
Hope this helps.
-
RE: Only One page crawled..Need help
If I'm not mistaken, that is showing the pages that were crawled with errors on them. The 1 here indicates a client error (probably a 404).
It also looks like this could have been your initial crawl from Rogerbot; you'll get a full crawl within 7 days. But just in case, check your robots.txt file to make sure everything is normal in there.
-
RE: I'm receiving this message...
Is this an initial crawl, or has the campaign been running for a while?
Can you provide a link to your site so we can look into it?
-
RE: Wrong titles in site links
That's interesting. Don't forget you always have the ability to "demote" a sitelink; I would just use caution. If the sitelink goes to the right page but the text isn't right, then I WOULD NOT demote that page. Instead, I would try to add the text you want to the first paragraph of the page and to your meta description, and also check your page titles to make sure they are the way you want them too.
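For the title and description piece, the head of a page might look something like this (the wording here is purely a hypothetical illustration):

```html
<head>
  <!-- A descriptive, page-specific title Google can pull sitelink text from -->
  <title>Drawing Your Garden on Canvas | Example Site</title>
  <meta name="description" content="A short, accurate summary of what this specific page is about." />
</head>
```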
I like to do a site: search on my site to see which URLs and page titles Google is pulling. Not sure this will help you, but it's a best practice in my book. Just remember that Google doesn't have to use your meta data for its SERPs.
I did a quick Google search to see if there was anything on comparing Google sitelinks and found an interesting article on clickz.com. It doesn't really answer your question but it's pretty insightful.
On a closing note, I would try to add meta descriptions and more descriptive title tags to your pages. For the sitelink "Met tekenen" (which links to http://www.tekenjetuin.nl/canvas/bewerken), the title tag of that page is <title>Tekenjetuin</title> and I didn't see a meta description. I would definitely try to expand the title of that page to help Google understand it better.
-
RE: "Issue: Duplicate Page Content " in Crawl Diagnostics - but these pages are noindex
Don't forget that Rogerbot (Moz's crawler) is a robot, not an index like Google. Google uses robots to gather the data, but the results we see are an index. Rogerbot will crawl the pages regardless of noindex or nofollow.
Here is more info on RogerBot http://moz.com/help/pro/rogerbot-crawler
-
RE: Huge spike in crawl errors today - mozbot ignoring noindex tag?
Don't forget that Rogerbot is a crawler, not an index. Google will crawl those pages too, but will (generally) follow your instructions and not index them.
Here is a little information on Rogerbot: http://moz.com/help/pro/rogerbot-crawler
And here are the frequently asked questions for the crawl diagnostics report: http://moz.com/help/pro/crawl-diagnostics
Hope this helps.
Darin.
-
RE: I'm receiving this message...
www.nextlevelmobiledetailing.com/robots.txt looks fine...
If you've just run the report a few minutes ago, I would recommend waiting until the "full" crawl happens. In the meantime, check out this post from the Help Hub here on SEOmoz to see if any of these issues apply.
-
RE: Do self referencing links have any SEO importance?
I personally always like to ask "Is this link useful?" before answering a question like this. If a link in the copy of a page helps the user jump to the top of the article, it may be beneficial. And in theory, internal links (which, in a way, this is) are useful for helping define the structure of a site.
A page linking to itself, though, generally won't be useful to either the reader or the search engine. That isn't to say it won't provide some ranking value; I haven't done much testing with pointing a link to the page it's on versus a different page on the domain. My guess is that if it has any effect, it's incredibly small.
A link should be useful to the reader in that it helps them find more information about what they are reading. It should also be useful to the search engine in that it helps define the structure of the overall website. Those are pretty much my go-to rules for links on my site: if it doesn't help, it doesn't go on my page.
-
RE: "Issue: Duplicate Page Content " in Crawl Diagnostics - but these pages are noindex
Technically that could be done in your robots.txt file, but I wouldn't recommend it if you want Google to crawl those pages too. I'm not sure if Rogerbot can do that. Sorry I couldn't be of more help.
If you don't get one of the staffers on here in the next few days, I would send a ticket to them for clarification.
If you decide to go with robots.txt, here is a resource from Google on implementing and testing it: https://support.google.com/webmasters/answer/156449?hl=en
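If you do go the robots.txt route, a per-crawler block would look something like this (the path is just a placeholder; the idea is to keep Rogerbot out of the duplicated section while leaving Google and everyone else untouched):

```
# Applies only to Moz's crawler
User-agent: rogerbot
Disallow: /duplicated-section/

# All other crawlers are unaffected
User-agent: *
Disallow:
```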
-
RE: Are robots.txt wildcards still valid? If so, what is the proper syntax for setting this up?
Great job. I just wanted to add this from Google Webmasters
http://googlewebmastercentral.blogspot.com/2008/06/improving-on-robots-exclusion-protocol.html
and this from Google Developers
https://developers.google.com/webmasters/control-crawl-index/docs/robots_txt
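To tie those two resources together, the wildcard syntax they describe looks like this (* matches any string of characters, $ anchors the end of the URL; the paths are placeholders):

```
User-agent: *
# Block any URL containing a session ID parameter
Disallow: /*?sessionid=
# Block all PDFs anywhere on the site
Disallow: /*.pdf$
```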
-
RE: Duplicate Content Indentification Tools
I use CopyScape, but it's more of a plagiarism tool than an actual duplicate content identifier. I say that because a few lines of matching text on a page don't mean Google will remove it from the SERPs; generally, duplicated text has to make up a substantial portion of a webpage to be considered duplicate content.
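To illustrate why a few shared lines don't add up to duplicate content, here is a toy sketch of overlap scoring using word "shingles". This is just the general idea behind measuring what portion of two pages match; it is not how Google or CopyScape actually work:

```python
def shingles(text, k=5):
    """Split text into the set of overlapping k-word phrases ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap_ratio(a, b, k=5):
    """Jaccard similarity of two pages' shingle sets, from 0.0 to 1.0."""
    sa, sb = shingles(a, k), shingles(b, k)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

Two pages that share only a sentence or two score near zero, while near-identical pages score near 1.0, which is roughly the "substantial portion" distinction.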
I would first dig into Moz Analytics and see WHY you are generating duplicate content before I would worry about what part of the page is duplicate.
- Have you set canonicals on your pages?
- Does your site produce session IDs?
- Do you have pagination?
- Are you copying and pasting text from page to page to fill up your site?
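If missing canonicals turn out to be the cause, the fix is a single link element in the head of each duplicate variant pointing at the preferred URL (the URL below is a placeholder):

```html
<!-- On every variant (session-ID URL, paginated copy, etc.) of the page -->
<link rel="canonical" href="http://www.example.com/preferred-page/" />
```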
Google has said time and time again that duplicate content is rarely a penalty. It is more about Google knowing which page it should rank and which it should not. Take a look at why you are getting the duplicate content issue, and then we can help you resolve it or give advice on what to do next.
-
RE: Can you add a user to your MOz account or to a single campaign?
The answer to your question is NO, but it is in the pipeline.
In the meantime, what you can do is build reports and add recipients to them; these reports can be emailed automatically. That is what I do for my clients and my team, so that everyone has the same data at the same time.
Campaigns>Settings(for the particular campaign)>Reports>Add New Report>
If you expand the section under "Build Report", you can click on the reports you want each person to have. If you want to make sure each section of the report is exactly what you're looking for, there is a little blue link that shows an example of what that report will look like.
The only issue is that these are generally summaries and not deep dives into things like duplicate pages. I get around this each week: when I get the email from Moz, I go into the campaign, EXPORT all of the detailed data to a CSV, and share it in a Google Docs folder that everyone involved uses.
I think having the ability to add someone to a particular campaign would be a great idea though, especially when multiple people are working on a project. Here is a thread that was started some time ago asking Moz for the ability to add additional users to a campaign; Karen from Moz talks about it in there too, so check it out.
-
RE: Should I get a unique IP?
Jeff's answer is very good.
I just wanted to add that if your sites link to one another, I would highly recommend separate IPs and even separate hosting accounts if possible. Part of Google's patent on rankings covers referring domains and the IPs those domains are on. I've personally seen moving a domain I own to a different host with a different IP dramatically increase the site's rankings. (This may be just correlation and not necessarily causation, but it's worked for other people I've worked with too.)
-
RE: Good references/studies on mark up?
Came across this last night too. http://searchengineland.com/from-microdata-schema-to-rich-snippets-markup-for-the-advanced-seo-162902
It talks about microdata schema and rich snippets.
-
RE: Content suggestion
I wouldn't call it a "secret sauce", but what's worked for me is the following.
First, know that Google is a question-and-answer service. People type in questions, and Google tries to find the most reliable answer based on a ton of metrics. Here's what I do:
1. Write answers to questions that people are actually looking for, based on the intent of the question being asked.
2. Answer those questions in a way that doesn't just answer the question, but also provides an education on the topic and goes deeper into the answer.
3. Try to answer other questions the user will have surrounding the question they found you through, such as what's next for them to do once they have the answer you provided, or what other ways there are of getting to the question they asked.
Obviously there is more to it than this but these sort of guidelines have helped me drive tons of traffic.
-
RE: Deindexed site - is it best to start over?
The REASON you got deindexed is important. If it's because of linking, then I would totally start over. The only trump card would be strong brand signals. YOU CAN RECOVER, but you'll have to determine whether your metrics are strong or weak. If you had very strong metrics behind your site, then you can recover within a year and be back to where you were. If your signals are weak, then the effort to correct your issues will cost the same time and money it would take to start over and build up domain metrics. Your brand will matter: if you have good brand mentions, then maybe staying on that domain is the best option.