Category: Technical SEO Issues
Discuss site health, structure, and other technical SEO issues.
-
Nofollow internal links
I agree with Ryan, but I question the usability of 260 links on your page. Have you done a usability study to check how easily your end users can find the information they are after? It can be daunting for a robot, let alone a human, to sift through all the sub-menus on your categories. It brings to mind the Telstra and Optus sites, where it takes a significant time to find the information you are after because of the huge number of options. I also notice that when you change currency, a prompt is displayed saying 'all items in the shopping cart will be deleted', even when the cart is empty. Should you not check whether the cart is empty before displaying the message? Otherwise the prompt is redundant. If you want to still display the sub-menus but not have robots index all those links, lazy-load them with a jQuery async call on demand as the user hovers over a menu item. You would then need to ensure they are linked somewhere else, or listed in a sitemap, so the search engines can still find them.
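A minimal sketch of that lazy-loading idea; the /menu endpoint, the data-category attribute, and the .submenu container are all hypothetical placeholders:

```javascript
// Fetch a submenu the first time the user hovers over its parent item.
// Assumes a hypothetical /menu endpoint that returns submenu HTML for a
// given category, a data-category attribute on each .menu-item, and an
// empty .submenu container inside each item.
$('.menu-item').one('mouseenter', function () {
  var $item = $(this);
  $.get('/menu', { category: $item.data('category') })
    .done(function (html) {
      $item.find('.submenu').html(html);
    });
});
```

Since the links would no longer be in the initial HTML, the sitemap becomes the way crawlers discover those category pages, as noted above.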
| oznappies1 -
IIS Workaround: 301 Redirects
My recollection is that you can do this redirect page by page under IIS Admin (inetmgr) as follows:

1. Browse to the website you want to set up the redirect for.
2. In the right pane, right-click the file you want to redirect and click "Properties".
3. Under the "File" tab, select the radio button "A redirection to a URL".
4. Put the target in the "Redirect to" field.
5. Make sure "The exact URL entered above" and "A permanent redirection for this resource" are both checked.

I don't have IIS installed locally, otherwise I would test this for you, but this method is referenced in a few places as above. Let me know if this helps you! I have attached a YouTube video below as a visual walkthrough. I am not the creator, just the, ermm, locator? ;o) https://www.youtube.com/watch?v=FdU_bBp6KX0
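For what it's worth, if you're on IIS 7 or later rather than IIS 6, the same per-file permanent redirect can be expressed in web.config instead of through the GUI. A minimal sketch; old-page.html and /new-page are placeholder paths:

```xml
<!-- IIS 7+ only: permanently (301) redirect a single file.
     "old-page.html" and "/new-page" are placeholder paths. -->
<configuration>
  <location path="old-page.html">
    <system.webServer>
      <httpRedirect enabled="true"
                    destination="/new-page"
                    httpResponseStatus="Permanent"
                    exactDestination="true" />
    </system.webServer>
  </location>
</configuration>
```

On IIS 6 the dialog steps above write the equivalent settings to the metabase, so there is no web.config option there.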
| Hurf0 -
How to Add 503 status to IIS 6.0
This might help: "If you have an ASP.NET web application site, and you place a text file named "app_offline.htm" in the root of the site, all requests to that website will redirect to that app_offline.htm file. Basically, if you need to take an entire ASP.NET site offline, you can place some nice message in that file. Then, any new requests to a URL, any URL, in that website will redirect to that file, allowing you to do maintenance to the site, upgrades, or whatever. It is not really a redirect, though. ASP.NET essentially shuts down the site, unloads it from the server, and stops processing any requests to that site. That is, until you delete the app_offline.htm file; then things will continue as normal and your ASP.NET site will load up and start serving requests again. A super-cool side effect of this is that any files that are locked by the site, such as a database or other resources, are freed, since the application domain has been unloaded from the server. This allows you to remove the locks from those files and replace them, without the need to do a full IISRESET, taking down other sites on the server. One thing to keep in mind with this file, however: make sure you put enough content in it so it is larger than 512 bytes, or IE will consider it a 404 and will display the 404 instead of the contents of your app_offline.htm file." Source
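A minimal app_offline.htm along these lines; the padding comment exists only to push the file past the 512-byte threshold mentioned in the quote:

```html
<!DOCTYPE html>
<html>
<head><title>Down for maintenance</title></head>
<body>
  <h1>We'll be right back</h1>
  <p>This site is down briefly for maintenance. Please check back shortly.</p>
  <!-- Padding so the response exceeds 512 bytes, which keeps IE's
       "friendly" error page from replacing it:
       xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
       xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
       xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx -->
</body>
</html>
```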
| Hurf0 -
Restricted by robots.txt and soft bounce issues (related).
**These are duplicate URLs that we can't figure out how they are getting created.**

I want to be sure we are talking about the same thing here. When I hear "duplicate URL" I am thinking of multiple URLs which point to the same web page. Depending on how your site is set up, it is possible to have many different URLs point to the same web page. Possible examples:

- www.mydomain.com/tennis-rackets
- www.mydomain.com/tennis-rackets/
- mydomain.com/tennis-rackets?sort=asc

Those are three examples of URLs which can all lead to the same page. You can have dozens of URLs all leading to a page with identical content. How these issues get resolved depends upon how they were created.

The best tool to help you figure this out is your crawl report. Use the SEOmoz crawl tool, then examine the crawl report. It can be a bit overwhelming at first, but you can narrow things down quickly if you use Excel. Select the header row for your data (it begins with the URL field), then select Data > Filter > AutoFilter from the menu. Then start by looking at fields such as "Duplicate Page Content", "URLs with duplicate content", etc., and simply choose YES in the drop-down menu to filter for that particular data. This will help you uncover the source of these issues.

The URLs in my example should all be 301'd or canonicalized to the primary page to resolve the duplication issue.
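As a sketch of the canonical option, each variant would carry a link tag in its head pointing at the preferred URL; mydomain.com is just the placeholder domain from the examples above:

```html
<!-- In the <head> of every variant (trailing slash, ?sort=asc, etc.);
     mydomain.com is the placeholder domain from the examples above. -->
<link rel="canonical" href="http://www.mydomain.com/tennis-rackets" />
```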
| RyanKent0 -
.CA site same as .com site - are both necessary?
Ryan, Your suggested use of Webmaster tools doesn't mesh with what Google says. "Sites with country-coded top-level domains (such as .ie) are already associated with a geographic region, in this case Ireland. In this case, you won't be able to specify a geographic location." In this case, the .ca site would already be recognized as "country specific" through its TLD & IP address at a Canadian host. While we could perform the same task for the .com site, that would equate to 'cutting off our nose to spite our face.' As I mentioned, we have .com rankings on Google.ca - not sure we want to suddenly cut that off by making the .com focused on US only. You may want to read up on it yourself: http://www.google.com/support/webmasters/bin/answer.py?answer=62399 Thanks anyways!
| lunavista-comm0 -
Our Twitter App
Yes, rel=canonical seems perfect for this job, and I highly recommend doing it, as so many pages might be seen as low-quality content by Google post-Panda and thereby hurt your entire site.
| ThomasHøgenhaven0 -
Is SEOmoz only good for "ideas"?
Exactly. And so, would it not be greatly beneficial for all of us to know if and when a limit is reached where this strategy is no longer effective? For example, there are many PR8 sites with literally hundreds of PR6 pages that allow dofollow commenting. We can alter the anchor text and the deep link to gain links from these PR6 pages. The question is: when does this strategy become ineffective? Let's say our site has 100k pages. Should we spend our time getting a link from every available PR6 page on the same domain, or is there diminishing value? Having a tried and proven study showing whether a persistent benefit exists, and when it wears off, would be invaluable to practical SEO, and the results of a study like that are highly unlikely to change within a year. Surely you'd like to see something like this too?

I do understand the need to keep SEO in line with Matt Cutts's objectives; however, the reality is that Matt Cutts's objectives and what works are two different things. There would be no such thing as off-site SEO at all if Google worked the way it was meant to. The thing is, it doesn't, and that is why off-site SEO exists. Instead of people giving hogwash answers, we should be demanding these sorts of useful studies. That is just my opinion, anyway.
| stevenheron0 -
Ranking on French search engines
If you need to check a few phrases, you can run them by my wife; she has worked as a translator in France with manuscripts and documents, and she lived there for 5+ years. Send me an email at sales@oznappies.com and I will give you her email address.
| oznappies0 -
Keyword rich domains
Hi guys, I just registered in the www.videoconferencing.com.au directory: http://www.videoconferencing.com.au/beingthere-video-conferencing-australia/ Hopefully this will assist with ranking for 'video conferencing'. Thanks for all your help.
| dantmurphy0 -
Canonical for stupid _GET parameters or not? [deep technical details]
Also, here's a blog post from SEOmoz discussing the idea of Google, internal search results pages, and thin content: http://www.seomoz.org/blog/fat-pandas-and-thin-content "Google has often taken a dim view of internal search results (sometimes called “search within search”, although that term has also been applied to Google’s direct internal search boxes). Essentially, they don’t want people to jump from their search results to yours – they want search users to reach specific, actionable information. While Google certainly has their own self-interest in mind in some of these cases, it’s true that internal search can create tons of near duplicates, once you tie in filters, sorts, and pagination. It’s also arguable that these pages create a poor search experience for Google users. The Solution This can be a tricky situation. On the one hand, if you have clear conceptual duplicates, like search sorts, you should consider blocking or NOINDEXing them. Having the ascending and descending version of a search page in the Google index is almost always low value. Likewise, filters and tags can often create low-value paths to near duplicates. Search pagination is a difficult issue and beyond the scope of this post, although I’m often in favor of NOINDEXing pages 2+ of search results. They tend to convert poorly and often look like duplicates."
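As a sketch of the NOINDEX approach for sort/filter variants, each variant page would carry a robots meta tag, typically paired with follow so link equity can still flow through:

```html
<!-- In the <head> of sorted/filtered search-result variants.
     noindex keeps the page out of the index; follow lets crawlers
     still pass through its links. -->
<meta name="robots" content="noindex, follow" />
```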
| RyanPurkey0 -
Domain with or without "www"
It definitely does not influence search engine results in any way whether you use www or not. It is purely a matter of preference. One consideration you may wish to factor into your decision is links. A shorter URL allows for easier link sharing: if you do not use www, you save four characters (www.), which makes your links smaller. But most software will recognize any word beginning with "www." as a link and convert it into a hyperlink. So www.mydomain.com would become a hyperlink, but mydomain.com would not; you would need to add http:// in front of mydomain.com to make it a hyperlink. Then again, some software still doesn't convert based on the "www.", so you would need to enter the full http://www.mydomain.com. I probably crossed the line into giving too much information, but we live in a world of tweets and friendly URLs, so I thought it was worth mentioning.
| RyanKent0 -
Should I create mini-sites with keyword rich domain names pointing to my main site?
I had previously learned that the deeper a keyword was in the URL, the better it was for ranking in SERPs. I am uncomfortable now because I can't remember where I learned that information or from whom. I spent about 90 minutes today watching all of Matt Cutts's videos discussing the topic, and I also read numerous articles. In short, I could not locate any conclusive information on this topic. Everyone agrees it is good to have keywords in the URL, but no one shares whether there is any higher value at various positions within the URL. I began a Q&A topic to seek more information: http://www.seomoz.org/q/keywords-in-urls-looking-for-consensus
| RyanKent1 -
If I point a domain name to a new faster server, will I lose some keyword ranking?
As for a small loss of link juice, Joe's response is one I would agree with. But in terms of keyword ranking, EGOL is spot on: there are many things that can be done right or wrong in a move like this, and each could affect ranking in either direction. The sort of changeover you're talking about deserves an experienced hand to get it right.
| Doc_Sheldon0 -
Re-direct issues
As Loudogg says, with a 301 redirect you'll be fine, although you will experience a slight loss of link juice through the 301. Further losses can be avoided by ensuring (as much as is possible) that links are directed to your destination page, so they don't go through the redirect. I'd definitely make sure to implement a rel="canonical" directive, and focus on being consistent.
| Doc_Sheldon0