Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO Issues

Discuss site health, structure, and other technical SEO issues.


  • My pleasure! Hope it works out for you.

    | MiriamEllis
    0

  • I don't get why you would have tracking on the image. Takeshi may be correct, but I would not do it myself, as it may be a low-quality signal. I know that the Bing API will detect it as a duplicate file; what they will do about it, I don't know.

    | AlanMosley
    0

  • You'll want to add a 301 redirect to your .htaccess file redirecting non-www URLs to www. This will pass the link value to www, as well as prevent potential duplicate content issues. Here is the code to use:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{HTTP_HOST} ^yourdomain\.com [NC]
        RewriteRule ^(.*)$ http://www.yourdomain.com/$1 [L,R=301]

    | TakeshiYoung
    0

  • Thank you so much Chris Menke, my home page is indexed by Google again. You are the best!

    | Godad
    0

  • Hi Carolina, has your question been answered?

    | Christy-Correll
    0

  • DNS servers are just like any other server, Marie - they can have outages, downtime, and configuration problems. If Googlebot visited while your DNS server was burping, it might have received no response, hence the error warning. By the time you checked, the server may have settled down.

    There are a number of best practices for good DNS hygiene, but my primary one is to monitor the uptime of your DNS the same way you do the uptime of your website. I use my paid subscription to Pingdom Tools to do this as one of my checks, but I'm sure many other uptime monitoring tools can do it as well (see the sketch below this reply for a scripted version). The reason I monitor is that it can be a really helpful early warning system for potential severe problems, and it can help explain otherwise unexplained site outages. With one client, we saw a steadily increasing number of errors over a few days (over 40 outages on the last day), leading us to change DNS hosting before things could fail completely and leave us in the lurch.

    In addition, I always recommend against having the DNS hosted on the same server as the website, as would happen with cPanel DNS hosting, for example. The reason: if you have severe, prolonged server issues, you can't get at your DNS to change it quickly to point somewhere else temporarily (even if just to host an explanatory error message). I also like to ensure the DNS is hosted somewhere with good geographic redundancy, so even if one nameserver goes out, there are still multiple backups to keep things rolling. No matter how good your website's uptime is, if your DNS dies, you're still offline.

    My guess is the DNS server was having temporary issues that resolved by the time you checked it. I'd want to be sure that isn't happening on a regular basis (relying on Google to report issues isn't nearly accurate or timely enough).

    As far as the robots.txt - do you have uptime monitoring on that site? I can't count the number of new clients who thought things were fine with their website, when in fact they were having constant short outages that went unnoticed because they weren't on their own site constantly enough to catch them. I always recommend a system that checks at 1-minute intervals for just this reason. If you don't have independent verification that the site was fully up, you can't really discount the WMT warnings safely.

    Lemme know if you want more info on uptime monitoring services & methods.

    Paul

    | ThompsonPaul
    1
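
    A minimal sketch of the kind of scripted DNS check described above, assuming Perl with the Net::DNS CPAN module - the nameserver and hostname are placeholders, not anyone's actual setup:

        #!/usr/bin/perl
        # Minimal DNS uptime check - run from cron every minute and review
        # the log for patterns of failures. Hostnames are placeholders.
        use strict;
        use warnings;
        use Net::DNS;

        my $resolver = Net::DNS::Resolver->new(
            nameservers => ['ns1.example-dns-host.com'],  # your authoritative nameserver
            udp_timeout => 5,                             # treat a slow answer as a failure
        );

        # Ask the nameserver directly for the site's A record.
        my $reply = $resolver->query('www.example.com', 'A');

        if ($reply) {
            print scalar(localtime), " DNS OK\n";
        } else {
            print scalar(localtime), " DNS FAILED: ", $resolver->errorstring, "\n";
        }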

  • Branagan's suggestion is a good one. You might also want to consider blogging in a manner that highlights geographic terms associated with your target audience.

    | MiriamEllis
    1

  • Matt, Our web designer just made a change within Expression Engine, our CMS, and it worked! Here's the code:

        {if segment_2 == "" && segment_3 == ""}{redirect="sub1/sub2/sub3"}{/if}

    He put the code at the very top of the main /sub1/ template. It means "if segments 2 and 3 are empty, forward to /sub1/sub2/sub3." Thanks again for your help!

    | nyc-seo
    0

  • Hey Tom, I took a look at your campaign and it looks like it is crawling pages just fine. Let me know if you're still having any troubles. Cheers, Joel.

    | JoelDay
    0

  • Thanks - I'm hoping more people will agree with you on this one. As well as confusing humans, it must surely make Google suspicious too. Like you say, I would 301 them, and my dev also agrees, but it would be nice to hear it from the wider SEO community.

    | Webrevolve
    0

  • Hi Kyle, thanks for your input. I will absolutely implement 301s as in your example, that was my plan. Cheers, Florian

    | fkupfer
    0

  • This isn't an exact answer, but I might be able to point you in the right direction. You might be able to do this with a mod_rewrite-style redirect based on rewrite conditions. For example, we once used the following to add www to the host (while preserving the http/https protocol) whenever users entered on a non-www URL. FYI, we use Helicon's ISAPI_Rewrite from www.isapirewrite.com/

        RewriteCond %{HTTPS} (on)?
        RewriteCond %{HTTP:Host} ^(?!www.)(.+)$ [NC]
        RewriteCond %{REQUEST_URI} (.+)
        RewriteRule .? http(?%1s)://www.%2%3 [R=301,L]

    | danrawk
    0

  • I use a Perl program to scrape my category feeds and save each category as a small HTML file with hyperlinked blog post titles. These small files are then used as server-side includes on lots of category-relevant pages of my site. That way, when I publish a new blog post, the Perl program (which executes hourly as a cron job) automatically updates the includes - see the sketch below this reply. That gets links into new blog posts from thousands of pages - huge linkjuice hits them immediately. Because we put up several dozen new posts per week, these includes have fresh content cycling through them daily. It's also a good way to market your content to people reading similar topics on your site.

    | EGOL
    0
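
    A rough sketch of the kind of script described above, assuming Perl with the LWP::Simple and XML::RSS CPAN modules - the feed URL and output path are placeholders, not the actual setup:

        #!/usr/bin/perl
        # Fetch one category feed and write a small HTML fragment of linked
        # post titles for use as a server-side include. A real version would
        # loop over every category feed.
        use strict;
        use warnings;
        use LWP::Simple qw(get);
        use XML::RSS;

        my $feed_url = 'http://www.example.com/category/widgets/feed/';  # placeholder
        my $out_file = '/var/www/includes/widgets.html';                 # placeholder

        my $xml = get($feed_url) or die "Could not fetch $feed_url\n";

        my $rss = XML::RSS->new;
        $rss->parse($xml);

        open my $fh, '>', $out_file or die "Cannot write $out_file: $!\n";
        print {$fh} "<ul>\n";
        for my $item (@{ $rss->{items} }) {
            printf {$fh} qq{  <li><a href="%s">%s</a></li>\n},
                $item->{link}, $item->{title};
        }
        print {$fh} "</ul>\n";
        close $fh;

    Scheduled hourly with a crontab line such as 0 * * * * /usr/bin/perl /path/to/build_includes.pl, and pulled into pages with an SSI directive like <!--#include virtual="/includes/widgets.html" -->.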

  • Thanks Brad!

    | PGD2011
    0

  • Glad that helped, Lewis. Unfortunately, there's really no way to determine how long the 301-redirect process will take to get the URLs out of the SERPs. That's entirely up to the search engines, and I've never seen much consistency in how long it takes for different cases.

    One other thing you could do to try to speed the process is to add an XML sitemap to the dev site, and verify it in both Google and Bing Webmaster Tools. (Only do this AFTER you have added the metarobots no-index tag to the remaining pages' headers - the tag itself is shown below this reply!) This will help remind the crawlers of the dev pages, and hopefully get them to visit sooner, thereby noticing the redirects and individual no-indexes, and taking action on them sooner.

    Personally, I'd let the process run for 2 or 3 weeks after the dev pages get re-indexed without the robots.txt block. If the pages are gone, job done. If not, at that point I'd re-evaluate how much damage is being done by still having the dev site in the SERPs. If the damage is heavy, I'd be seriously tempted to use the URL Removal Tool in Bing & Google Webmaster Tools to get them out of the results so I could move on with building the authority of the primary domain (even though that would throw away the value the dev pages have built up).

    REMEMBER! Once you've removed the robots.txt block, the metatitles and especially metadescriptions of the DEV site are what will, at least temporarily, be showing in the SERPs once the pages get re-indexed. So make certain they have been fully optimised as if they were the real site. That way, at least in the near term, you'll still be attracting good traffic while waiting for the pages to hopefully drop out. This may allow even the dev pages to do well enough at bringing traffic that you can afford to wait until they drop out naturally.

    As far as seeing the additional 70 or so pages that are indexed: as Dan says, at the bottom of the search page is this paragraph and link: "In order to show you the most relevant results, we have omitted some entries very similar to the 3 already displayed. If you like, you can repeat the search with the omitted results included." When you click on that link, you'll see the additional pages. These filtered results (often called the supplemental index) usually mean the pages aren't showing up very well in the results anyway. Which means that for most of them, it will be sufficient to make sure you've added the metarobots no-index tag to their page headers to get them removed from the index and avoid future problems.

    Does all that make sense?

    Paul

    | ThompsonPaul
    0
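
    For reference, the metarobots no-index tag mentioned above is a standard HTML element placed in each page's <head> section:

        <!-- Tells search engines not to include this page in their index -->
        <meta name="robots" content="noindex">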

  • Hey Emory - if that's the default .htaccess file your software created (I assume this is a Joomla-based site?), it looks like the redirect code you need is already there, but it is disabled by default. The following code should do what you want:

        # Remove index.php or index.htm/html from URL requests
        #RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /(([^/]+/)*)index.(php|html?)\ HTTP/
        #RewriteCond %{REQUEST_URI} !^/administrator
        #RewriteRule ^([^/]+/)*index.(html?|php)$ http://www.mysite.com/$1 [R=301,L]

    The reason it's not currently doing anything is that it has been commented out. The "#" symbol at the beginning of each line tells the server NOT to run the code in that line. Try removing the "#" symbol in front of the last three lines of that code (the result is shown below this reply), save the file & then thoroughly test your site. (It's not the way I would write it, but there may be specific requirements for your site/system.) The first line is just a descriptive header, so the "#" symbol needs to be left on it. If for any reason it causes problems, you can simply re-add the "#" symbols and re-save to return the site to its original state.

    Give that a shot and let us know if it accomplishes what you want to do.

    Paul

    P.S. In particular when testing - ensure that client logins work correctly, and that the search function and all plugins also still work.

    | ThompsonPaul
    1
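
    For clarity, here is that same block with the last three lines uncommented, exactly as described above (www.mysite.com stands in for the actual domain):

        # Remove index.php or index.htm/html from URL requests
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /(([^/]+/)*)index.(php|html?)\ HTTP/
        RewriteCond %{REQUEST_URI} !^/administrator
        RewriteRule ^([^/]+/)*index.(html?|php)$ http://www.mysite.com/$1 [R=301,L]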

  • They're the same site but not the same URL. Notice one of those URLs begins with www and the other does not. It's just a quirk of the way web servers are set up, and having problems with which one of them should be the one you use is called a canonicalization issue. Most webmasters choose to use the www version and redirect the non-www version to it via settings on the web host (see also the example below this reply). Here's some more reading on canonicalization.

    | Chris.Menke
    0
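
    As a complement to the host-level redirect described above, a site can also declare its preferred version with a canonical link tag in each page's <head> section (example.com is a placeholder):

        <link rel="canonical" href="http://www.example.com/">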

  • Hi There! If you wouldn't mind, could we define "moving" a little more specifically? Which of these are true for what you need to do?

    - You're moving to a different host - Y/N
    - Your domain name is changing - Y/N
    - You are changing your URLs on purpose - Y/N
    - You're changing to a new theme - Y/N

    Hope you don't mind the little Y/N game. If you can answer those, then I can help guide you as far as how to do it safely.

    Thanks! -Dan

    | evolvingSEO
    1

  • I'm confused. When a book goes out of print, does the URL change to this long OOP html page? Or does that book's URL then redirect to this page? Or (shudders) do you re-title the OOP page to whatever the OOP book's page was? If it were me, I'd go with the first scenario. It's essentially the same concept as a 404.

    | jesse-landry
    0

  • That makes sense!  Thanks for the clarification!

    | TopFloor
    0