Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO Issues

Discuss site health, structure, and other technical SEO issues.


  • Hi there, it's still a highly debated question whether search engines like Google use WHOIS info when ranking a website. Even if you make your WHOIS info private, Google, being an ICANN-accredited registrar, can still access it.

    Now, coming back to the question of whether search engines use WHOIS info in ranking: there was an instance in my experience where a client's websites were penalized because they had multiple domains that targeted the same set of keywords/phrases and were ranking on the first page. The client took every care to make those websites look different in all possible respects: the design, look and feel, navigation, content, and even the link profiles of the domains were kept separate, with separate Webmaster Tools accounts and no interlinking or cross-linking between them. The one obvious thing that was not changed was the WHOIS info behind these domains, which was the same for all of them.

    This incident in particular made me believe that search engines like Google leave no stone unturned in their effort to keep a check on the quality of their search results, and to minimize the chance of any one individual or company grabbing multiple positions on the first page of the results. That was my experience, but I would suggest being honest and complete when filling in your WHOIS info. Best regards, Devanur Rafi

    | Devanur-Rafi
    0

  • Thanks for the answer, David. I didn't expect to get an answer from you; everything I know in SEO I have learned from you and the SEOmoz team, so I'm really excited that you replied. In order to claim a Google+ Local page I need to have a Google+ profile. I wanted to know if I could create the 250 places for each client with their emails (places@client-domain.com) and then claim all 250 Google+ Local pages of the different companies and manage them all from one Google+ account. The idea is like on Facebook, where you can manage a bunch of pages with one account. P.S. If this is possible, can I afterwards authorize the client to manage their own Google+ Local page, in case they don't want to extend the contract with our agency?

    | cvissi
    0

  • Vikash, thanks for the follow-up! It's a little tough to say without more info, but I'll assume some of your duplicate titles are coming from paginated pages: /page/2/, /page/3/, etc. If this is the case, see if the platform allows you to add the page number to the title tags of archives. For example, page one might be "A Really Awesome Category Archive Page 1 - Dan's Blog" and page two would be "A Really Awesome Category Archive Page 2 - Dan's Blog", and so on. -Dan
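
    In tag form, the pattern would look something like this (the archive name, blog name, and URLs are placeholders):

      <!-- /category/awesome/        -->  <title>A Really Awesome Category Archive Page 1 - Dan's Blog</title>
      <!-- /category/awesome/page/2/ -->  <title>A Really Awesome Category Archive Page 2 - Dan's Blog</title>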

    | evolvingSEO
    0

  • You're welcome, Geraldine, I hope this clears up your doubts. If you still have some doubts after speaking with the devs, you know where we are, and if you can't find us, there's a huge community of great professionals here! Cheers!

    | mememax
    0

  • Howdy! For starters, please make sure you are using the rel="alternate" and rel="canonical" annotations. I also highly suggest that you have completely unique content on both pages; do not recycle anything unless you are going to block the page carrying the duplicate content in robots.txt. We also prefer the practice of responsive design. Since that does not seem to be an option for you, I suggest you do some further research into optimizing a mobile site. Hope my tips help you out! Ashley @ ScriptiLabs
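
    For reference, the usual bidirectional setup looks something like this (the example.com URLs are placeholders): the desktop page points to its mobile counterpart, and the mobile page points back.

      <!-- On the desktop page (www.example.com/page) -->
      <link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/page">

      <!-- On the mobile page (m.example.com/page) -->
      <link rel="canonical" href="http://www.example.com/page">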

    | ScriptiLabs
    0

  • Yeah, it was a major pain, and I went round and round since it was happening on all machines. Once I got hold of a machine that didn't do it, it got me thinking... Anyway, I hope it helps others in the future.

    | TroyW
    0

  • Awesome, thanks so much. I know what I need to do now.

    | isret_efront
    0

  • If you want total control over what goes into the title on each and every page, then, as your developer mentioned, that would definitely require some extra custom programming; I can't begin to give a ballpark figure without talking more specifics. If you're okay with something at least semi-standard across the board that just includes the actual property address in the title, I can't see that taking more than a couple of hours of programming (if that).

    Without looking at the actual source, I'm 99% sure that your property details are already being stored in a database. With that said, I imagine they can just duplicate the existing header into a new file (i.e. header2.cfm), insert CFML code into the title tag that pulls the property address, and then replace the header file currently included in the rentaldetail.cfm page with the new one. Should you make any global changes to the header in the future, you would need to update both header files. There are probably a variety of ways to handle this, but that's my 2 cents.

    For whatever it's worth, I've worked on several real estate websites myself. They can get rather complicated. There's definitely a great deal of power in developing your own solution, but it can be costly and time-consuming; you can easily spend $20K for something good. If you see yourself continuing to need other programming adjustments, it may be easier to jump over to one of the more customizable commercial solutions, such as http://www.diversesolutions.com or http://www.realestatewebmasters.com . Please note that I'm not trying to bump your developer out of work; they can certainly assist with either of the above solutions, and will likely be necessary for the first one. Hope that helps -
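
    A minimal sketch of the header2.cfm idea, assuming the property record is fetched into a query named "property" with an "address" column (both names are hypothetical, as is the site name):

      <!--- header2.cfm sketch: pull the property address into the title --->
      <head>
        <title><cfoutput>#property.address# | Your Site Name</cfoutput></title>
      </head>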

    | Bromtec
    0

  • Google Analytics and a noindex tag are completely separate things. So by applying a noindex tag to your search pages (which is a great idea, imo, since it reduces duplicate content), you still have the Google Analytics code on the page, which allows it to report to the server so you can view your data. So yes, you will still get accurate info and metrics.
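
    A sketch of what the head of such a page might contain, with the two sitting side by side (the GA snippet is abbreviated and UA-XXXXX-Y is a placeholder property ID):

      <meta name="robots" content="noindex">  <!-- keeps the page out of the index -->
      <script>
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-Y']);  // placeholder property ID
        _gaq.push(['_trackPageview']);             // tracking still fires as normal
        // ...the usual async ga.js loader follows...
      </script>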

    | Inductive_Automation
    0

  • It's not going to get them deindexed, and it's unlikely to be the root cause of why they don't rank higher. There's honestly not much content on each page, nor is it easy to tell what keywords you are targeting, since the pages kind of cannibalize each other, and that makes it harder to rank well.

    | josh-riley
    0

  • Ollan, thanks for the reply. That www.crisisprevention.co.uk/specialties content is actually on the mothership site. I don't say the US site because, for redirection purposes, we actually did map individual URLs from the old UK site to a UK "culture" on our mothership site (with /en-uk/ as the directory). The redirects are going to the correct content. The problem is that the old domain is hanging around.

    | spackle
    0

  • I just thought I'd delete the pages and redirect, because that could "transfer" some PR to a new page (a new machine to sell) and keep the server clean (with fewer pages and pictures). Thanks for your thoughts.
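
    For what it's worth, a 301 redirect in Apache's .htaccess is one common way to do that (both URLs below are placeholders):

      # Permanently redirect a retired product page to its replacement
      Redirect 301 /machines/old-model.html http://www.example.com/machines/new-model.html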

    | SeoMartin1
    0

  • Another possible way is to add your site in Google Webmaster Tools and submit your sitemap.xml in the Optimization -> Sitemaps section. You will then see the count of submitted URLs and indexed URLs.
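
    If you don't have a sitemap yet, a minimal sitemap.xml looks like this (the example.com URL and the date are placeholders):

      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
          <loc>http://www.example.com/</loc>
          <lastmod>2012-11-01</lastmod>  <!-- optional -->
        </url>
      </urlset>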

    | nurzhyk
    2

  • Hi Tom, yes that makes sense. I think the robots meta tag with content="noindex,nofollow" would be the best solution.
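
    That tag goes in the head of the page in question:

      <meta name="robots" content="noindex,nofollow">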

    | Cocoonfxmedia
    0

  • Hi Jill, every so often I hear from someone who gets this sort of ambiguous reply. It's not that helpful, but here's what I can tell you from past experience.

    If there was a manual penalty, the Google reviewer has not mentioned that it is still present. To me this suggests that any manual penalty you may briefly have had, if any at all, has now been lifted. Usually, when you get a reconsideration response that explicitly confirms this, it says that you may need to wait a while until your status is restored. Because this is also omitted, it leads me to suggest that an algorithmic penalty may be present on your site. We haven't seen a Penguin algorithm refresh for a while, so your site may be suffering from duplicate content and/or Panda issues.

    If you scroll to near the end of this article, http://www.seowizz.net/2012/10/the-disavow-tool-works-real-sites-real-recoveries.html , it features your specific reply and also draws the conclusion that there is no manual penalty, but there is an algorithmic one. I think it may be a Panda issue. You can cross-reference the time of your penalty with the Panguin tool (which overlays algorithm update dates on your analytics), which is really helpful. If one of the last Panda refreshes matches up with your dates, it at least gives you a starting point. Hope this helps

    | TomRayner
    0

  • Thanks for reaching out to us! This is Abe Schmidt from the SEOmoz Help Team. After viewing your campaign, it does look like you were updated today. If you ever experience a strange delay, feel free to message us for a quick private response: help@seomoz.org.

    | Abe_Schmidt
    0

  • Thanks for the backup, Cyrus! Totally agree with you on the preference for using a blank Disallow; as I mentioned, it's more standards-compliant. Paul
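
    For anyone following along, the blank-Disallow form of robots.txt (which permits crawling of everything) looks like this:

      User-agent: *
      Disallow: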

    | ThompsonPaul
    0

  • Thanks, guys. We definitely mark up entities that have a chance of showing rich snippets; so far, though, I haven't seen any rich snippets from pure Article markup: http://schema.org/Article I guess that answers my question, then; it's probably not worth the implementation cost at this time.
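
    For reference, basic Article markup in microdata looks something like this (the headline, author, and body text are placeholders):

      <article itemscope itemtype="http://schema.org/Article">
        <h1 itemprop="headline">Example Headline</h1>
        <span itemprop="author">Jane Doe</span>
        <div itemprop="articleBody">Article text goes here...</div>
      </article>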

    | MarloSchneider
    0

  • Hi Pauline, I'd agree with Takeshi; it's likely not a big deal at all. As long as the links are not spam (people trying to link to their products or services), you should be totally fine, and I don't think you need to do anything else. Moz will report those things, but in some outlying situations like this, they are not a concern. -Dan

    | evolvingSEO
    0