Posts made by ShaMenz
-
RE: What would be a really good reason to pay for SEOmoz Pro service?
Hi Z,
There are basically five things that make PRO worthwhile for me:
- Outstanding Tools that save me time and money, allowing me to give better service to my clients
- Online Webinars from industry experts who can quite literally rock my world with a single idea at times
- This Q&A, which provides access to other members, but also the ability to ask a private question and have SEOmoz staff and Associates help me nail a problem.
- Top notch customer service if I ever need help with something.
- The opportunity to be included in the development of new tools, analysis of new hypotheses on search etc combined with the knowledge that the people pursuing them are authentic thought leaders in this industry.
That about says it I guess.

Hope that helps,
Sha
-
RE: Link count
Hi,
The accepted best practice is no more than 100 links on a page; however, there are some things to consider.
For a more detailed explanation, you can take a look at my answer in this thread
Hope that helps,
Sha
-
RE: "Link_count" in the SEOMOZ crawl report
Hi NYC,
I am guessing you are referring to the "Too many on-page Links" warning in the crawl report?
The accepted best practice is no more than 100 links on a page, but keep in mind that what you are seeing is a best practice recommendation, not an "error". So, if there is no reasonable way to reduce the number of links without lessening the user experience, then you may need to just accept that you can't comply with best practice in that case.
If, on the other hand, you see that there are unnecessary links on the page, you can reduce the number and this may even improve the experience for the user.
In general, if the links are predominantly from menus and necessary elements like buy buttons etc, then I would not be too worried.
If the page is full of internal text links trying to pass link juice with anchor text, then this would be something to fix. Two reasons:
- It is not a good experience for the user as it makes the whole page look and feel really "spammy".
- The amount of link juice passed by a link that is one of more than a hundred on the page is so minuscule that it is pointless anyway.

Hope that helps,
Sha
-
RE: Anyone else have trouble with Open Site Explorer
Hi Eric,
The best thing to do would be to email the SEOmoz Help Team direct so that they can take a look at the issue. The email is help [at] seomoz.org
Make sure that you give them as much information as possible, including the username that you log in with and the specific OSE search that you are trying to retrieve a report for.
Hope that helps,
Sha
-
RE: Site not indexing correctly
I agree with Keri that this appears to be a case where Google is extracting text from within the body content and creating its own Title for the page.
The most important thing to remember in this situation is the signal Google is sending you by doing this.
The message is: Your chosen Title does not accurately represent the content on the page.
There are two ways to fix this:
- If your content is well optimized for the keywords you are targeting, rewrite the Title.
- Check whether the content on the page is adequately optimized and rewrite it so that it better addresses the content in the Title. (Don't do this one without careful thought.)
Hope that helps,
Sha
-
RE: Dynamic image serving SEO
Hi Chris,
Do you own the site which generates the images?
In order to verify a site you need to create a separate Webmaster Tools account for it and then go through the normal WMT verification process. This requires you to have sufficient control of the site to upload a file or add a meta tag to the index page.
If you cannot do this then you will not be able to verify the site.
Sha
-
RE: Hundreds of thousands of 404's on expired listings - issue.
Wow! Thanks Ryan.
I'm sure it won't surprise you to know that I'm always reading eagerly when I see you respond to a question as well.

-
RE: Hundreds of thousands of 404's on expired listings - issue.
Hi Croozie,
Awesome work once again from Ryan!
Since your question feels like a request for suggestions on "how" to create a solution, just wanted to add the following.
When you say "classified listings" I hear "once off, here for a while, gone in 45 days content".
If that is the case, then no individual expired listing will ever be matched identically with another (unless it happens to be a complete duplicate of the original listing).
This would mean that it would certainly be relevant to send any expired listing to a higher-order category page. If your site structure is such that you have a clear hierarchy, then this is very easy to do.
For example:
If your listing URL were something like http://www.mysite.com/listings/home/furniture/couches/couch-i-hate.php, then you could use URL rewrites to strip out the file name and 301 the listing to http://www.mysite.com/listings/home/furniture/couches/, which in most cases will offer a perfectly suitable alternative for the user.
There is another alternative you could consider if you have a search program built in - you could send the traffic to a relevant search. In the above example, mysite.com/search.php?s=couch.
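As a rough sketch of the rewrite idea, assuming an Apache server with mod_rewrite and the hypothetical /listings/ structure from the example above (a database-driven site would more likely decide "expired" in application code, so the file-exists check below is just a stand-in for that test):

```apache
RewriteEngine On
# Only redirect when the requested listing file no longer exists
RewriteCond %{REQUEST_FILENAME} !-f
# /listings/home/furniture/couches/couch-i-hate.php
#   -> 301 -> /listings/home/furniture/couches/
RewriteRule ^(listings/.+)/[^/]+\.php$ /$1/ [R=301,L]
```

The single pattern handles every expired listing by sending it one level up to its category page, so no per-listing rules are needed.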
Hope that helps,
Sha
-
RE: .htaccess - error404 redirect within a directory?
Hi Ade,
My apologies on this one. My brain was a little addled after a long day of driving I think!
You are correct in that the "set and forget" solution we gave you to start with will be overwritten by the Joomla error handling after the .htaccess file has been read.
It is only possible to make standard 301's work using the .htaccess file if you manually create an individual rule for every deleted file, using its specific Joomla-generated URL, like this:
RewriteEngine On
RewriteRule ^courses/highways/6-NRSWA/27-nrswa-operative-sept-11.html$ /courses/highways/6-NRSWA?course=not-available [R=301,L]
The problem with this is that if your courses are regularly deleted, it will not be long before you have a very large .htaccess file, which could foreseeably lead to processing issues.
So in fact, the only "set and forget" solution is the one that you have already put in place and from a practical standpoint, it is the best solution.
Well done!
Sha
-
RE: How to Stop SEOMOZ from Crawling a Sub-domain without redoing the whole campaign?
Hi John,
Since your campaign is set up using the root domain, all subdomains will automatically be included. The only possible way that you may be able to remove the subdomain from the crawl would be to use robots.txt to specifically block mozbot from crawling it. However, if you do this, then it will not be possible to access the information for the subdomain separately.
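If you did want to try the robots.txt route, a minimal sketch (I believe the SEOmoz crawler identifies itself as rogerbot, but double-check the current user-agent string before relying on this; note also that the file has to sit at the root of the subdomain itself, e.g. http://sub.example.com/robots.txt, since each host has its own robots.txt):

```
User-agent: rogerbot
Disallow: /
```

Other crawlers, including googlebot, are unaffected unless you add separate rules for them.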
So, what you are really asking is whether the subdomain can be split into a separate campaign without you having to start from scratch. I suspect the answer is no (certain things, like keywords may not be relevant anyway).
The only way to get a definitive answer on whether it can be done would be to email the SEOmoz help team direct - help [at] seomoz.org.
Hope that helps,
Sha
-
RE: .htaccess - error404 redirect within a directory?
Hi Ade,
We have used .htaccess to create 301 redirects for Joomla sites in the past.
Can you email me your .htaccess file and the URL of your site and I will get our Chief Programmer to take a look for you.
My direct email is on my profile page (or you can private message me from your profile).
Sha
-
RE: .htaccess - error404 redirect within a directory?
Hi Ade,
So sorry I wasn't around to follow this up for you. I have been away for the day and had wireless connection issues, so could not check Q&A until now.
Oops! Yes. Joomla does have its own error handling, which does make a big difference, but it should be simple to fix once you understand what happens when you put the .htaccess file in place.
When a request is received by the server, the .htaccess file is read from top to bottom and each rule in the file is checked for a match. Once a match is found, the specific action assigned to that rule is executed, which means that no rules after it are read.
So, if you ensure that your code appears at the beginning of the .htaccess file, then whenever the conditions described by the rule are matched, the redirect will occur. However, if no other rule in the .htaccess is matched, then Joomla error handling will come into play should any other error be present.
This of course means that any specific rule you wish to add in the future should also appear before the Joomla code. As long as you always make sure it is last to be read, everything should work just as you intended.
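To make the ordering concrete, here is a sketch of how the top of the .htaccess file might look. The course rule is only an example of a specific redirect, and the Joomla SEF lines are shown commented purely to mark where the stock block sits; your actual Joomla block stays exactly as it is:

```apache
RewriteEngine On

# Your specific redirects go first, so they are matched
# and executed before Joomla's own handling is reached.
RewriteRule ^old-course\.html$ /courses/replacement [R=301,L]

# ... the standard Joomla SEF block follows, unchanged, e.g.:
# RewriteCond %{REQUEST_FILENAME} !-f
# RewriteCond %{REQUEST_FILENAME} !-d
# RewriteRule .* index.php [L]
```

Any future rule you add simply gets slotted in above the Joomla block, newest rules anywhere in that top section.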
Hope this helps,
Sha
-
RE: .htaccess - error404 redirect within a directory?
Hi Ade,
Since learning this one thing will be something that you are likely to use over and over, I figure it is much better if you see how it actually works. So, we wrote a little resource to show you how to do a basic 301 redirect as well as one that goes back one level to your category page.
If you take a look at this simple 301 Redirect course for managing 404 errors, you can see three working pages and also download the code.
Let me know if you have any questions.
Hope that helps,
Sha
-
RE: Www and non www how to check it.......for sure. No, really, for absolutely sure!!
Hi again Robert,
The God of All Things Code is away from the office for a while today, so we will need to wait a little longer for his input.
A couple of things that happened since my last post though:
Those twitching antennae just wouldn't stop nudging me to look a little further, as everything I see with this site is saying "template" to me. Add to that the URL rewrites which hide the actual URL's and the broken pdf files... so I went digging a little further and ... Aha!
Not a template, but a "Theme". The entire site is built in WordPress!
Now, I am pretty sure that the broken pdf's are the result of the WordPress URL rewrites changing the directory name in combination with the hard-coded links. If this is the case, then it ought to be just a matter of adding a rule to the .htaccess file to deal specifically with the pdf's. The order in which the rules appear will determine whether the issue is resolved or not.
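Purely as a hypothetical sketch (at this point the real directory names are unknown, so /pdfs and /documents below are placeholders only), the kind of rule I have in mind would sit above the standard WordPress rewrite block so that it is matched first:

```apache
# Placeholder names only: /pdfs is the old hard-coded path,
# /documents stands in for wherever the files actually live now.
RewriteRule ^pdfs/(.+\.pdf)$ /documents/$1 [R=301,L]
```

Because it appears before the WordPress block, the pdf requests never reach WordPress's own handling.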
I'll let you know as soon as I've confirmed the specifics with my Boss.
Hope that helps,
Sha
-
RE: Www and non www how to check it.......for sure. No, really, for absolutely sure!!
OK Robert,
First I'm going to tip my hat to Ryan, who has perfectly explained the fact that some of what you see in your site: search can be because the 301's have not yet been recognized by the search engine.
Second, an apology to Alan, as I went right to the LAMP solution because of prior knowledge from a previous thread or two that you were going to be talking about .htaccess.
Now...I will spell out a couple of things because I have a feeling that you are likely to come across them again in the future and quick recognition can often mean a lot of time saved.
So here goes.
When I first read your question, my little web developer antennae suddenly started twitching! When I hear that there are multiple versions of a file with different file names deployed on a server I generally suspect one of two things:
- The site has been developed from a standard Template package, or
- There has just been a little "untidiness" taking place in the development process.
In your example, the /contact.php was the original file deployed live to the server, then the /contact-us.php file was created to replace it (presumably for SEO purposes - debatable, but that is a whole other conversation). As I'm sure you can imagine, /contact is pretty common in template packages, although the biggest template producer out there is much easier to spot, as the pages in their templates are always in the format /index-1.htm etc. It may just be that the developer creates their own standard template from an original design and rather than pre-planning and creating the file names to maximize SEO, they create standard page names and change them later.
While there is nothing really wrong with either of these things (unless you are charging the client for an original design and buying a pre-designed template at a fraction of the cost), both methods do open up the way for mistakes and errors to occur. As a result, there are a few things to keep in mind if you are working this way:
- It is a much better idea to build on a development server so that none of the files that will become obsolete during the process will be indexed by search engines in the meantime. Tidy architecture, remove the obsolete files, test, then push to production.
- When changing file names it is ALWAYS better to re-name the existing file and do a global update of links rather than create a duplicate with a different name. As soon as you create two files, you open up the possibility of accidentally linking both files within the site. You could have /contact.php linked from the home page and contact-us.php linked from the footer for example. There is a danger here that should you decide to delete the unwanted file, you create broken links without knowing it, or you have duplicate content. Either way, you have to recognize the problem and either fix it, or put a 301 in place to catch it.
- NEVER hard code your links, because as soon as you change the name of the directory you placed your files in, you create a broken link! If you use relative links, the change of directory name will not matter.
I can see from Screaming Frog that some of the URL's for the pdf files have 301's in place, but it appears that the Redirect URL may also be hard coded to the /pdfs directory. The fact that they all return a 404 when the directory name is changed to match that section makes it purely a guess as to what is happening here. It seems both www and non www pdf's are returning 404's in the browser.
The picture is muddied a little by the fact that there appear to be internal URL rewrites in the mix as well (to produce those pretty URL's with trailing slashes). So, there are a few options as to why the pdf's are not accessible:
- They are not actually on the server at all (unlikely)
- The names of the pdf's themselves have been changed, so even if the URL rewrite is sending the request to the new directory, the file requested does not exist.
- The /pdfs directory has been named something completely different and the hard coding is the problem
- The /pdfs directory has been moved to another location within the site architecture
I tried guessing a couple dozen of the obvious options, but no luck, I'm afraid.

There is one other possibility, in that the internal URL rewrites and 301 redirects could be creating a problem for each other. I am not clever enough to identify whether this is the case without a hint from the code, but will ask the God of All Things Code (my Boss) if he can answer that for me when daytime arrives 8D
OK....this is now so long that I really need to read the whole thread back to see if I have forgotten anything! If I find something I have missed, or can find anything else when help arrives, I'll be back!
Hope it makes some sort of sense and ultimately helps,
Sha
-
RE: Www and non www how to check it.......for sure. No, really, for absolutely sure!!
Hi Robert,
OK, just to clarify...
- You want to check for sure that newclient.com is 301 redirected to www.newclient.com?
- You want to check for sure that ALL URL's which have been individually 301'd are redirecting to www.newclient.com/filename?
- You want to understand why the non-www version of the pdf files works and the other doesn't?
Right off the top, the definitive way to check whether there is a properly functioning redirect in place is to type the URL into a browser and see whether it resolves to the redirect target :). You can also run Screaming Frog and see what status the pages return, but be aware that this does not always reflect the real situation in the browser (pages can return status that does not match what you see).
On the other questions, I think perhaps what you really want is to first determine what is happening and then, WHY?
So, first things first:
- do you have access to the .htaccess file?
- Can you provide the URL (and .htaccess if you have it)? You can PM this info if you don't want to share it publicly.
Sha
-
RE: YouTube or Hosting Outfit to Boost Pages?
Hi,
I would host on site using a service that will create & submit video XML sitemaps and give you access to awesome statistics, while passing link credit back to your own site, no matter where it gets shared.
Wistia would be my preferred option as I know that they fit the bill on all counts, but there are a few good services out there, so you could check out what is on offer.
This post from the SEOmoz Blog will give you an idea of the kind of data you have access to with the Wistia service, so you can make clear comparisons.
Of course, you can also add content to YouTube, but I believe it isn't possible to create video XML sitemaps for YouTube content.
Hope that helps,
Sha
-
RE: Crawl Diagnostics and missing meta tags on noindex blog pages
Hi,
The reason the Crawl Diagnostics report is divided into three different sections is that the impact of the things highlighted in each is completely different.
Warnings are there because it is a known fact that "Pages with crawl warnings are often penalized by search engines." It is up to you whether you take notice of them, and if you have specific conditions on those pages, such as the presence of a noindex meta tag, then obviously you may choose not to bother adding the descriptions.
That doesn't actually make the function spammy. It just means that the warning isn't relevant for your particular situation. Requiring the app to check pages that have missing meta descriptions against pages carrying the noindex meta tag and remove them from the reporting would add a whole other layer of complexity to the tool.
In short, the tool is meant to identify pages with missing meta descriptions. It does that very well. If you don't need to use the information then you are free to ignore it.
Sha
-
RE: Why does my crawl report show just one page result?
Hi William,
As indicated by the help page that Keri provided, the problem is that the page is entirely rendered in JavaScript, and SEOmoz crawlers do not follow JavaScript links or redirects.
The reason the SEOmoz crawlers do not do this is most likely that Google's (and the other search engines') stated position is only that they are "getting better" at handling JavaScript, so the likelihood of trouble-free crawling by googlebot is low, or at the very least unknown.

Bing now has an option in its Webmaster Central that lets you indicate that JavaScript crawling is required for a site. I have not seen any information on its effectiveness as yet, but you could investigate that by hitting their help forum.
Even if search engines manage to crawl the JavaScript without issue, there are other significant problems with the content on the site. It appears that the site is a multi-affiliate white label? All of the text is actually being pulled in from an external page, and that page contains content that is duplicated across many other websites. This is the case with every "page".
Unfortunately, all of these things add up to a fairly bad SEO situation. Your best option for generating traffic would be to become massively popular through social channels and use them to feed traffic to the site. That is assuming that this whitelabel platform does not give you the option to create your own content (which would be much better).
Another alternative would be to create a site on a new domain with awesome, unique, shareable content with links to feed traffic to this site, but if you are going that route, making people take an extra click through a second domain on the way to the retailer's site would not be optimal for conversions. So it would be better to add direct affiliate links within the pages.
So, on the whole, I would say that ramping up your social activity is your best approach.
Hope this helps,
Sha