Category: Technical SEO Issues
Discuss site health, structure, and other technical SEO issues.
-
What's the best format for an e-commerce product page URL?
I can't give a definitive answer because I don't know the amount of traffic generated by your categories, locations, references, activities, etc., so I would start by doing lots of keyword research. Also, I would use hyphens instead of underscores in the URLs.
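As an illustration of the hyphen convention, here is a minimal slug helper (a sketch only; the function name and rules are illustrative, not from any particular platform):

```python
import re

def product_slug(name: str) -> str:
    """Convert a product name into a lowercase, hyphen-separated URL slug."""
    slug = name.lower()
    # Replace any run of characters that aren't lowercase letters or digits
    # with a single hyphen (spaces, underscores, punctuation all collapse).
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    # Trim leading/trailing hyphens left over from surrounding punctuation.
    return slug.strip("-")
```

So `product_slug("Blue Widget (Large)")` yields `blue-widget-large`, the hyphenated form search engines parse as separate words.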
| EGOL0 -
Unknown "/" added causing 404 error
Dan, thanks for the helpful answer. Sorting through that many pages looking for a stray "/" will take time. Are there any shortcut ways to find it? I don't get the error in Google's crawl checks; I only find it in my deeper SEOmoz advanced CSV files. Do I need to fix it in that case? You are right about the canonical tag. For whatever reason, SEOmoz gives me an error that says "Appropriate Use of Rel Canonical (moderate fix)": Canonical URL: "http://www.homedestinantion.com/calculator-mortgage-resources.html". Explanation: If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL. Recommendation: We check to make sure that IF you use canonical URL tags, the tag points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.
| jessential0 -
Correct Indexing problem
Okay, I may have understood your original post differently than what you meant. So the case is: you have HTTPS enabled, but Google is indexing both the HTTP and HTTPS pages, and you want it to index only the HTTP version. You are also running a cart or checkout which is HTTPS-only and likely not relevant to Google, so I would recommend blocking those pages with robots.txt. I would recommend coding an IF statement to deal with the duplicate indexing (HTTPS and HTTP) and setting up a robots.txt file to prevent crawling of pages that have no search value and are there for customer use only. Something like this would work in PHP, outputting a canonical tag that points at the HTTP version of the page:

```php
<?php
// On HTTPS pages, point the canonical at the HTTP version so the
// secure copies are not indexed as duplicates of the HTTP pages.
if ( isset($_SERVER['HTTPS']) && strtolower($_SERVER['HTTPS']) == 'on' ) {
    echo '<link rel="canonical" href="http://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'] . '" />' . "\n";
}
?>
```

I'm not sure of the code in ASP since I rarely ever use Windows servers, but you should be able to find that with Google. Then set up your robots.txt to block all URLs that are specific to personal data, like this (example):

```
Disallow: /catalog/account.php
Disallow: /catalog/account_edit.php
Disallow: /catalog/account_history.php
Disallow: /catalog/account_history_info.php
Disallow: /catalog/account_password.php
Disallow: /catalog/add_checkout_success.php
Disallow: /catalog/address_book.php
Disallow: /catalog/address_book_process.php
Disallow: /catalog/checkout_confirmation.php
Disallow: /catalog/checkout_payment.php
Disallow: /catalog/checkout_process.php
Disallow: /catalog/checkout_shipping.php
Disallow: /catalog/checkout_shipping_address.php
Disallow: /catalog/checkout_success.php
Disallow: /catalog/cookie_usage.php
Disallow: /catalog/create_account.php
```

I hope that helps, Don
| donford0 -
Will training videos available in the "members only" section of a site contribute to the site's ranking?
That advice is great thank you - answered my question perfectly, really appreciate it thanks!
| CharlotteWaller0 -
How to handle this specific duplicate title issue
Casey, thanks for the answer; this is a very good idea... but all listings already have a full street address and phone number. Since the company name is similar, it still appears as duplicate content. Apparently two extra lines of address and phone don't do the trick.
| Boxes0 -
Canonicalisation - Best Approach?
If what you are trying to achieve is a canonical www version, you are better served doing two things. First, go to Google Webmaster Tools and select a preferred domain. From GWMT, to specify your preferred domain:

1. On the Webmaster Tools home page, click the site you want.
2. Under Site configuration, click Settings.
3. In the Preferred domain section, select the option you want.

You may need to reverify ownership of your sites. Because setting a preferred domain impacts both crawling and indexing, Google needs to ensure that you own both versions. Typically, both versions point to the same physical location, but this is not always the case. Generally, once you have verified one version of the domain, they can easily verify the other using the original verification method. However, if you've removed the file, meta tag, or DNS record, you'll need to repeat the verification steps. Note: once you've set your preferred domain, you may want to use a 301 redirect to send traffic from your non-preferred domain, so that other search engines and visitors know which version you prefer.

To 301 redirect the non-www to the www, here is the script from Scriptalicious (for an Apache .htaccess file):

```
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} ^yourdomain\.com [NC]
RewriteRule ^(.*)$ http://www.yourdomain.com/$1 [L,R=301]
```

Hope this helps you.
| RobertFisher0 -
Best Practice to Remove a Blog
Subdomains are treated slightly differently by Google. Essentially, they are seen as less connected to the rest of your content than a subfolder. Take wordpress.com as an example:

- surferdude.wordpress.com has little relation to www.wordpress.com
- surferdude.wordpress.com has little relation to skaterguy.wordpress.com
- surferdude.wordpress.com has lots in common with surferdude.wordpress.com/surfboards/

In the same regard, www.yourdomain.com/blog is more correlated with www.yourdomain.com than blog.yourdomain.com would be. By using www.yourdomain.com/blog instead of a subdomain, you build more value for your www subdomain every time you post blog content or get links to your blog. This has more value to the rest of the www content on your site.
| KaneJamison0 -
Basic SEO HTML
Hi Bill! A couple of your questions that haven't been answered yet: How do you do this and how do you know if it's already being done? When you look at the code for a page, if you see a lot of code within <style> tags, that means the CSS is on the page rather than in an external file. The same goes for JavaScript: if there is a lot of code within <script> tags in the source code, there is JavaScript on the page instead of in an external file. If it has not been done on a site, is it hard to go back and do? It is very simple to take this code, put it into an external file, and then link to it, as is done in this particular page's code: <link type="text/css" rel="stylesheet" href="/q/moz_nav_assets/stylesheets/production/all.css?0.4.25" /> That is what it looks like when you link to an external CSS file. If possible, though, I'd definitely have someone who knows and understands the code help you out at first. There's nothing more infuriating than making what you think is a small change to a JavaScript file and having it break the whole site. Good luck! And I hope this helps a bit.
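As a minimal before/after sketch (the file name is hypothetical): inline CSS like this on every page,

```html
<head>
  <style>
    .nav { color: #333; }
  </style>
</head>
```

can be moved into an external file (say, styles/all.css) containing just the rules, with each page referencing it instead:

```html
<head>
  <link type="text/css" rel="stylesheet" href="styles/all.css" />
</head>
```

This keeps the page markup lean and lets browsers cache the one stylesheet across every page of the site.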
| jennita0 -
Problem with 1 Domain but not 60 Others
OK, a template is essentially a design architecture, so I am assuming you are in the same vertical but in different local markets? In that case, it is going to be an issue of competition, your on-page optimization, your rankings, etc. versus the others, IMHO.
| RobertFisher0 -
Tracking Links Tool
Thanks for the link Jassy! Do you use ALM? If so, can you share any feedback based on your experience? Would you recommend the tool?
| RyanKent0 -
Best Practice to Obsolete a Blog
I am almost positive that using a 301 to the main page would be considered a soft 404 error.
| Bucky0 -
How to de-index the server location of my website
Yes, that's correct: check that HTTP_HOST is the correct domain and, if it isn't, redirect to the correct one. Then all domains will redirect.
| AlanMosley0 -
One Page - Targeting Multiple Low Searched Keywords.
Yes, that would be better, or just try going for "Builder" without a location.
| AlanMosley0 -
Google places address missing
Sorry Webfeatseo, I totally missed your last question there. If you decide to delete the Places listing and start over, here is a great blog post on that very issue: "The Right Way to Remove a Google Places Listing" by 540SEO. Sometimes the only way to ultimately get it right is to delete and start over. Remember, you will be better served by following the guidelines than by not. Here is another how-to, with a lot of links at the bottom to good local experts: "Optimizing Your Google Places Page" by Geoff Kenyon. Miriam also gave you a good list of don'ts above. Good luck.
| RobertFisher0 -
SEOMoz Crawl Diagnostic indicates duplicate page content for home page?
I contacted the help desk as instructed and was told: "I took a look at the campaign, and it looks like our crawler can't parse the 301 redirect you have in place on the main page. The reason is that the redirect adds two 'https' prefixes when rogerbot tries to crawl through it. Roger can't parse the redirect as-is, but it can identify it (as it did in your notices report on the crawl diagnostics page). This isn't a problem for browsers, since they are made to ignore redirects of this nature. Crawlers, on the other hand, have a strict code to follow and can't follow redirects like that. When I load up your site [mywebsite.com] right now, it redirects to www.[mywebsite].com. Try creating a new campaign under the domain you are redirecting to; this should clear any issues up." And so I did that, and it worked after the new crawl. However, I then set up another campaign for another website I manage, being sure to use the "www" in front of the domain, and got the same problem again: the home page appears twice as duplicate content. So I'm back to my primary question: what is the definitive redirect code to use to convert a non-"www" request to a "www" request? The same redirect code mentioned in my first post is being used on all of my sites.
| Linesides0 -
Do you get credit for an external link that points to a page that's being blocked by robots.txt?
Hi Dave, I believe there are two answers to your question.

1. If Googlebot finds the page via the external link, then YES: the link will pass PageRank, Googlebot will crawl the page, and the domain will get juice, because Googlebot hasn't seen the robots.txt.

2. If Googlebot comes to the site via the root (assuming it obeys the command to block), then NO: none of the above would happen, because the page would never be seen by Googlebot, so the incoming link would never be seen either.

If, on the other hand, Googlebot comes to the page via the root and ignores the command to block, it would be reasonable to assume the page would be crawled and the links attributed as though there were no robots.txt, but that is only an assumption, so I guess your question remains open. Don't suppose that helped much. Sha
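To see how a well-behaved crawler evaluates the block, here is an illustration using Python's standard urllib.robotparser (the URLs and rules are made up for the example):

```python
from urllib.robotparser import RobotFileParser

# A robots.txt that blocks the linked-to section for all crawlers.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# An obedient crawler never fetches the blocked page, so it never sees
# its content -- only the URL it discovered via the external link.
print(parser.can_fetch("Googlebot", "http://example.com/private/page.html"))  # False
print(parser.can_fetch("Googlebot", "http://example.com/public/page.html"))   # True
```

The check happens before the fetch, which is why scenario 2 above hinges entirely on whether the crawler reads and obeys robots.txt before following the external link.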
| ShaMenz0 -
Link Volume - calculate what you need?
Links can be a tricky thing; however, SEOmoz has some really cool tools to figure this out, such as Linkscape and others, to determine where your domain currently stands compared to your competitors. That being said, I would not focus only on the links. There are so many other aspects to getting good traffic and rankings, such as good content and a good product or service. Without those, it is hard to get the good-quality links that matter. The number of links used to matter, and links still have their value; however, if you focus on your core business and develop a quality site that keeps visitors engaged, you will succeed much more than by just looking for links. Watch your bounce rates and your content; find out what is working and what is not. Chasing that perfect number will make you pull your hair out and will waste time you could spend doing better things for your site and company. Wil Reynolds offered up some really good ideas down at Affiliate Summit, and it's worth taking the time to watch his seminar on YouTube: http://www.youtube.com/watch?feature=player_detailpage&v=hSQ0DZdSDMI#t=368s I hope this helps.
| Ben-HPB0