Category: Technical SEO Issues
Discuss site health, structure, and other technical SEO issues.
-
Sudden drop in Google with our top performing keywords
Hi Naomi - You will probably get a variety of thoughts on this, but it's probably best to set aside the messages Google is sending you and focus your time on earning quality links from topical sites. That will easily replace the value lost from your previous linking efforts. Without knowing all of the activities you've done for the site, it's hard to pinpoint what caused the drop in rankings. Be sure to do a thorough audit of the site from a technical perspective and take a deep look at your backlinks to see where the issues might be.
| RankSurge0 -
How to write a robots.txt file to point to your sitemap
Thank you so much for all your replies [CASE CLOSED]
| Nightwing0 -
Google Rewriting PDF Titles
Sure Wayne. While there are differences between a web page and a PDF, from the standpoint of how Google handles the data there is little difference. A crawler reads text and processes the data, which is then ranked and appears in search results. The same basic rules apply. Here is an example:

1. Go to the following URL: http://centerforhealthysex.com/wp-content/uploads/. You can see this site allows the contents of this folder to be displayed (not a recommended practice). Notice the first PDF file in the list: "alexandra-katehakis-biography.pdf".
2. Go to Google.com and search for the following without quotes: ".pdf site:centerforhealthysex.com". Notice the title shows as "download bio pdf - Center for Healthy Sex".
3. Return to Google.com and search for "alexandra katehakis biography". You will see the same file now has a title of "Alexandra Katehakis is a licensed Marriage, Family Therapist ..."

In this case, Google grabbed the first line of text and used it as the title. You can repeat this type of testing with almost any PDF or web page.
| RyanKent0 -
A site is not being indexed by Google, Yahoo, or Bing
Therein lies the problem. It's best to manually add the admin folder, and any other folders you don't want spiders accessing, in there. This should help: http://www.robotstxt.org/robotstxt.html
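To answer the original question directly: a minimal robots.txt along these lines both blocks the admin area and points crawlers at the sitemap (the folder name and sitemap URL are placeholders for your own):

```text
# Keep all spiders out of the admin area (path is an example)
User-agent: *
Disallow: /admin/

# Point crawlers at the XML sitemap (must be an absolute URL)
Sitemap: http://www.example.com/sitemap.xml
```

The Sitemap directive can appear anywhere in the file and is read by Google, Bing, and Yahoo.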
| Saijo.George0 -
Omitting URLs from XML Sitemap - Bad??
Some good answers here, so I'll just throw in my own 2 cents. The purpose of a sitemap is to help search engines find pages they might not otherwise find during a regular crawl. Sometimes sitemaps can help pages get indexed faster. Other sitemaps serve special purposes, such as News or Video sitemaps, which can add extra information and help rank particular types of content. In reality, many, many sitemaps are incomplete, missing, or flat out wrong. To my knowledge, no search engine will penalize you for this, as they would be penalizing half the web. The danger of an inaccurate sitemap is that the search engines may choose to ignore it completely. Duane Forrester of Bing has stated that if they find a 1% error rate in your sitemap file, then they will disregard the file. However, no such action is known to exist for incomplete sitemaps. So I'd say there is little risk in submitting a sitemap of only your truly important pages. Unfortunately, this won't stop Google from discovering or crawling your duplicate content issues. The faster you get these fixed, the better.
| Cyrus-Shepard0 -
SEO Audit - Panda
Ryan Kent is #1 on the users board, and his answers that I've read in the pro Q&A are always right on. He's the director at Vitopian, and it sounds like they've been helping out sites with Panda and Penguin issues (he wrote a great Penguin-related post here). He'd be the first person I'd look to for advice.
| john4math0 -
Open Site Explorer - possibility to select a time period?
Hi Teun, Thanks for writing in. I'm afraid that there isn't a way for us to index when a link was created, since a link could have been added to a site for several months before our crawler found the link and we have no way of knowing when it was created. The most recent index was processed between the beginning of May through the end of May, so we are just showing links that we found within that index time period. The indexes that were released in May are much larger than the index that was released in February, so it is likely that we are showing a lot more links from the most recent indexes than we showed from February's index because we were able to crawl more links through those indexes. I hope this clears things up. If you have any other questions about this, you can write into us at help@seomoz.org Thanks, Chiaryn
| ChiarynMiranda0 -
Nofollow and dofollow on WordPress
Hi Robb, that is the plugin I am working with, and while I see the general settings, I cannot seem to find a way to differentiate between links on the same page. Perhaps I am missing something?
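For what it's worth, per-link control doesn't need a plugin setting at all: rel="nofollow" sits on the individual anchor tag, so two links on the same page can be treated differently (the URLs below are placeholders):

```html
<!-- This link passes no link equity -->
<a href="http://example.com/sponsor" rel="nofollow">Sponsored link</a>
<!-- This one is followed as normal -->
<a href="/services/">Followed internal link</a>
```

If the plugin's editor won't expose this, switching the post editor to HTML/text view lets you add the attribute by hand.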
| casper4340 -
Javascript to manipulate Google's bounce rate and time on site?
Stephen, Thanks for the explanation - I just had a client ask me about this script. Based on your explanation, this script will change your bounce rate. This is because once the event is triggered, the visit will no longer be considered a bounce, even if the user only visits one page. So it's an artificial/false decrease in bounce rate, not a "fix" as others claim. I wrote a short blog post on this (and referenced your description)! ~Adam
| AdamThompson1 -
Should my URLs be uppercase or lowercase?
Are you serving the same page for both /MBA and /mba? You should set up a 301 redirect from one to the other. In Analytics, you can set a custom filter to make your URLs case insensitive, but I don't believe that'll fix the data currently in your account; it'll only fix data going forward. That process is outlined here: http://support.google.com/googleanalytics/bin/answer.py?hl=en&answer=90397. My URLs are all lowercase so I can't actually find an example in my account to test, but when I do an advanced filter and select Include Page with the match type of "Matching RegExp" and try URLs with uppercase characters, Analytics appears to be making the query case insensitive. So you can try that as well. If the prior paragraph didn't work for you, you can do this on a URL-by-URL basis by doing an advanced search by regular expression and substituting in "[M|m][B|b][A|a]" for "mba".
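The character-class trick at the end can be sketched in a few lines. The URLs and pageview counts below are invented purely to illustrate how a pattern like [Mm][Bb][Aa] rolls mixed-case variants of the same page together:

```python
import re

# Hypothetical report rows: one logical page tracked under
# mixed-case URLs (all names and counts are made up).
pageviews = {
    "/programs/MBA": 120,
    "/programs/mba": 340,
    "/programs/Mba/apply": 15,
}

# The character-class trick from the answer: [Mm][Bb][Aa] matches
# "mba" in any casing without needing a case-insensitive flag.
pattern = re.compile(r"/[Mm][Bb][Aa](/|$)")

# Roll the casing variants up into one total, the way a
# case-insensitive Analytics filter would.
combined = sum(count for url, count in pageviews.items()
               if pattern.search(url))
print(combined)  # prints 475
```

The same regex works in the Analytics advanced-search box, where character classes are supported but inline flags generally are not.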
| john4math0 -
Matching C Block
No. It wouldn't make sense for Google to penalize a website based on an IP address's C-block. Can you post a screenshot of your Analytics? Over what time span do these correlated drops occur? Could it be coincidence?
| deltasystems0 -
Indexed pages and current pages - Big difference?
Hi Nathan, The delta between the number of pages returned by the site: operator and the number of pages in your sitemap could be down to a number of issues:

1. Your XML sitemap may represent only a percentage of the total number of valid content URLs that your site is capable of generating.
   a) Often sites will only generate XML sitemaps for URLs that someone has decided are "important", when the total number of URLs is much larger.
2. Your XML sitemap contains ALL the valid content URLs that your site is capable of generating, but search engines are somehow finding more URLs.
   a) Look in Google Webmaster Tools under Optimization >> HTML Improvements >> Duplicate title tags.
      i) Do the pages with duplicate titles have duplicate page content? If so, your publishing platform is allowing multiple URLs to render the same content, which is a bug that needs to be fixed.
   b) Run a crawler like Xenu Link Sleuth or Screaming Frog against your site and see how many URLs they discover. Export the results to Excel and look for weird URLs.
      i) Usual culprits for duplicate content include incorrect canonicalization (www vs. non-www, URLs ending in /index.html vs. just /, etc.)
      ii) Look for URLs ending with strange query strings (affiliate tracking, session IDs, etc.)
   c) Use the site: operator in other engines (Bing, blekko, etc.) and compare the numbers they return. Especially if a number is larger than the number Google is returning, start looking for weird URL patterns.

Also, I'm not sure what you mean by "the domain canonical has been set correctly". If you're referring to use of the canonical link element on every URL, there are plenty of ways that can go wrong. E.g., if your CMS requires that each published URL have rel="canonical" but allows URLs to be published both with and without the trailing /index.html, you can end up with a canonical link element on the non-canonical version of the URL, further confusing engines. Something to look into.
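As a rough sketch of step 2b above, here is one way to diff a sitemap against a crawler export. All URLs are made up for illustration; a real run would parse your live sitemap and load the crawler's CSV export instead of hard-coded values:

```python
import xml.etree.ElementTree as ET

# A toy sitemap (URLs invented) alongside the URL set a crawler
# like Screaming Frog might discover on the same site.
sitemap_xml = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/</loc></url>
  <url><loc>http://example.com/about</loc></url>
</urlset>"""

crawled = {
    "http://example.com/",
    "http://example.com/about",
    "http://example.com/index.html",       # duplicate of /
    "http://example.com/about?sessid=42",  # session-ID duplicate
}

# Pull every <loc> out of the sitemap, honoring its XML namespace.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(sitemap_xml)
in_sitemap = {loc.text for loc in root.findall("sm:url/sm:loc", ns)}

# URLs the crawler found that the sitemap never declared -- these
# are the "weird URL patterns" worth investigating for duplicates.
extras = sorted(crawled - in_sitemap)
for url in extras:
    print(url)
```

The two leftover URLs here are exactly the canonicalization and query-string culprits described in 2b(i) and 2b(ii).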
| grasshopper0 -
Is Go Daddy a bad domain?
Choosing a host would depend entirely on your requirements. I seriously doubt having your site hosted with them would cause it not to be crawled (unless, of course, the server you have with them has explicitly prevented bots from crawling the pages there, which would make no sense for them to do).
| Saijo.George0 -
Canonical solution for query strings?
Short answer: yes, as long as you rel=canonical them back to the original page. Google will drop the other pages over time.

Things you can do here:
- Make sure your sitemap is not listing these extra URLs.

Things I recommend you DON'T do:
- noindex the dynamic pages. Adding a noindex could tell Google not to index those pages, but someone could link back to that page with P_SOURCE=WBFQ and the main page would get no benefit from that link.
- Ask for manual removal. Google does not like it when we ask them for removals just to get the right "version" of a site indexed; see http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1269119

Hope that answers your questions.
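For reference, the canonical link element mentioned above goes in the head of each query-string variant and looks like this (example.com stands in for your own domain):

```html
<!-- Served on http://example.com/page?P_SOURCE=WBFQ and any other
     parameter variant, pointing engines back at the clean URL -->
<link rel="canonical" href="http://example.com/page" />
```

Because the href is absolute and identical across every variant, all the link equity from tagged URLs consolidates onto the one clean page.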
| Saijo.George0