Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • I should add - I have probably read more Whiteboard Fridays than I have actually watched, just because I find it easier to speed-read through them and home in on the stuff I care about. Something to consider, since not all of your visitors want to watch a video - some people don't have speakers/headphones, don't want to use mobile data, etc.

    | KaneJamison
    0

  • Hi Batchbook, I am going to give you a slightly different answer than seowoody. The issue you are looking at is geography-based, not language-based, from what I can tell right now. That means you may not need hreflang, though this could be different for the actual situation. Your customers' needs determine which route you should go down for international, and I can't tell you what to do without knowing more (like SEOWoody said). With all of that said, use this tool (it's something I built) and report back with your end result, and then I can help you figure out how to deal with this: http://outspokenmedia.com/international-seo-strategy/

    | katemorris
    0
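For reference, if the situation did turn out to be language-based rather than purely geographic, hreflang annotations look roughly like this (the domain and locale codes here are placeholders, not the poster's actual setup):

```html
<!-- Hypothetical sketch: annotating language/region variants of the same page -->
<link rel="alternate" hreflang="es-es" href="https://example.com/es-es/" />
<link rel="alternate" hreflang="es-mx" href="https://example.com/es-mx/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/" />
```

Each variant should carry the full set of annotations, including a self-referencing one.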

  • Hello, While I can't confirm this, my understanding is that you can't have a penalty on http and not on https (or vice versa). The penalty would apply at the domain or subdomain level (to my understanding - I could be wrong). Although Google sees http://www.domain.com/blog/ and https://www.domain.com/blog/ as two different URLs, I believe a penalty for thin content would be cast on domain.com, thus affecting both URLs. I'd suggest moving all pages to https and ensuring your blog content is not thin. If you're struggling to find time to create great blog content, cut down on your volume and post higher-quality content less often. That'd be the safest strategy IMO. Good luck!

    | seowoody
    0
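If you do consolidate everything onto https, a site-wide 301 in Apache might look like the following (a minimal sketch assuming an .htaccess setup with mod_rewrite enabled, not the poster's actual configuration):

```apache
# Hypothetical .htaccess sketch: 301 every http request to its https equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```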

  • Hi Ronnell, Great things to think about! We are absolutely adding original content in other areas of the site. The ideal audience here is me, lol. I am a homeschooling mom, and there are no other sites that list out events the way I do. I know, because I've searched for it myself. It doesn't exist and would be super-helpful if it did.

    Since I am my ideal visitor, I ask myself: would it annoy me to have to navigate through a menu to find my state of Georgia when I could just put in my zip code on the top-level page and save myself a click? Yes, that would annoy me. But from an SEO standpoint, the Georgia page is probably better, so I'm not sure which is best to implement.

    As far as event pages on my site vs. going to the third-party website, I again ask myself which I would prefer, since I am my ideal visitor. I would prefer to stay on this website. It annoys me to keep leaving a website to see what the event is, then having to close that separate tab, return to the site, and continue checking the list of events. However, when I check other homeschool sites, they all link directly to the third-party website.

    As for copy/pasting content from the third-party sites: again, I don't like going to the third-party site for information. When I'm on a homeschool site and their 'original' description is "this looks like fun, I've never been there, but heard it's great", or some other three- or four-sentence description, I hate those descriptions because they tell me absolutely nothing. So personally I like to see the meat-and-potatoes description that's on the third-party website without having to go there. But since it's simply duplicate content from the third party, it's probably not best for my SEO, which is why no one else does it. So I really don't know which way to go! -J

    | fatcreat
    0

  • Thanks, Paul. We started resubmitting the cleaned pages yesterday. I passed your comments about the Apache install and the old version of PHP to the devs as well. At the very least, this is a great learning experience for us. It's great to have such a helpful community.

    | Liggins
    0

  • Hi, I now have a robots.txt for the old site, and I created a sitemap by replacing the current domain with the old one and uploaded it. Weirdly, when I search for the non-www version of the old domain, the number of pages indexed has increased! According to WMT, the crawl was postponed because robots.txt was inaccessible; however, I've checked and it returns status 200, and the robots.txt Tester says it's successful, even though it never updates the timestamp.

    | Ham1979
    0
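For context, a minimal robots.txt that allows crawling and advertises the sitemap would look something like this (the domain and sitemap path are placeholders):

```txt
# Hypothetical minimal robots.txt for the old domain
User-agent: *
Disallow:

Sitemap: https://www.old-domain.example/sitemap.xml
```

If WMT reports robots.txt as inaccessible despite a 200 response, it can be worth confirming that the file is served at the exact root of the host being crawled (www and non-www are checked separately).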

  • Premium SEO Pack looks like it has more features?

    | welcomecure
    0

  • It seems like the issue is a bug in the way Google handles data from your site ('null' being computer-speak for 'empty', and often appearing after buggy handling of data). However, it seems that Umar's indication is correct, and that this buggy data handling is likely prompted by a crawling issue, so that is the best place to start.

    | Tom-Anthony
    0

  • My friend, there likely is no answer to this issue, and the responses you've received pretty well cover the possibilities. They've come from folks known to be experts, as well. To get more specific would require testing against the site in question, and for that you'll need to hire a consultant. Please be courteous and respect the community members who have generously given of their time and knowledge to answer your question. If you have questions about our community etiquette guidelines or thoughts you'd like to share, please email us at community@moz.com.

    | MattRoney
    0

  • Thanks - I'm not terribly worried about the test site, as we use a password-protected and IP-blocked development domain that is completely different from the root domain. It's not even a subdomain, e.g. www.realsite.com vs. www.testdomain.com. My dev team is trying to get me to wait and just do a massive 301 redirect, moving the URLs with the query strings (old site) to the new pages (i.e. many:1), rather than doing the canonical. The new site won't create the query-string issue. The challenge I see is that the 150,000+ indexed URLs really should be around 7,000, so the organic value of the real 7,000 pages (other than possibly the root domain) is probably getting punished, even though the site is doing decently well.

    | ExploreConsulting
    0
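As a sketch of the redirect option being weighed here (paths and parameter behavior are invented for illustration, not taken from the actual site), collapsing query-string variants onto their clean URLs in Apache could look like:

```apache
# Hypothetical sketch: 301 any /products/<slug>?<anything> to the clean /products/<slug>
RewriteEngine On
RewriteCond %{QUERY_STRING} .
RewriteRule ^products/([^/]+)$ /products/$1? [R=301,L]
```

The trailing `?` in the substitution strips the query string. The canonical-tag alternative would instead leave the parameterized URLs in place and point each at its clean version via `rel="canonical"`.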

  • Good answers! If you 301 redirect to all https pages, would this cause issues with previous rel canonical tags which point to the http version of the page? E.g. this page: http://www.theupsidesport.com/sale/women/hoodies/recovery-hoodie-coral has a rel canonical pointing to (which is correct): http://www.theupsidesport.com/recovery-hoodie-coral. Then if I implement a 301 redirect to the https version, the correct version would be: https://www.theupsidesport.com/recovery-hoodie-coral, but the rel canonical would point to the http page unless I change it. Would this cause issues if I don't change the rel canonical tags to the https version? - Chris

    | jayoliverwright
    0
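To make the question above concrete: after a 301 to https, the consistent end state is for the canonical tag on the https page to reference the https URL as well, e.g. (using the URL from the question):

```html
<!-- On https://www.theupsidesport.com/recovery-hoodie-coral after the migration -->
<link rel="canonical" href="https://www.theupsidesport.com/recovery-hoodie-coral" />
```

Leaving the canonical pointing at the http URL sends Google a signal that conflicts with the 301, so updating the tags alongside the redirect is the safer route.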

  • Your Spain site is most likely getting Latin American traffic because it's in Spanish. Do your offerings change in Spain vs. Mexico? If so, you should have a Mexico-focused site. If your offerings don't change based on the location of the customer, then you probably just need one site in different languages.

    | katemorris
    1

  • Great input by Ken; I wanted to add to it. 301 redirects are key (talk with your developer or host about the best way to implement them with your setup). Google Search Console actually has a "change of address" option to help you migrate. There is also a step-by-step tutorial: https://support.google.com/webmasters/answer/6033049?hl=en Good luck!

    | CleverPhD
    0
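The 301 step mentioned above could be sketched in Apache like this (both domains are placeholders; the actual rule depends on the server setup):

```apache
# Hypothetical .htaccess on the old domain: 301 every URL to its counterpart on the new domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.example$ [NC]
RewriteRule ^(.*)$ https://www.new-domain.example/$1 [R=301,L]
```

Redirecting each old URL to its matching new URL (rather than everything to the new homepage) preserves the most link equity.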

  • Hi Erica, It's not on the page too many times; it's just relentlessly exact-matched between the H1, URL, title tag, and img alt, but I guess that's okay. Thanks

    | 94501
    0

  • Awesome thank you for your help.  Definitely clears things up.

    | danstern
    0

  • Hi Ruth, Thank you very much for your input. I think our changes to the brand and website will not be that significant, as we are not changing anything regarding our brand - it will remain the same, as will the design and content. I like the idea of collecting email addresses, but I am not sure what to tell subscribers when we switch domains, as there will be nothing new apart from the domain URL. The major issue that was bothering us was the fact that our main domain was country-specific (.at) while we are an internationally operating company. That is why we planned to switch the main domain to a .com. I will definitely follow your suggestions about doing some PR before the domain switch and getting search engines to crawl the site before the switch.

    | comicron
    0

  • Hey Jawahar, Thanks for posting your question. Guest posts will never harm your blog if you follow these guidelines:

    - Content is fresh, detailed, unique, and optimized.
    - It's well-written (in terms of grammar) and meant to educate users.
    - There is nothing wrong with giving do-follow links if the site really deserves it, so make sure to double-check every outbound link in the post.
    - Make sure to use images with credits or with permission.
    - It's up to you how many links you want to give away from each post, but avoid any commercial anchor-text links.
    - It's better to accept authors who have previously contributed in that niche, but if the author is new and the pitched topics are interesting, give them a chance; new bloggers can sometimes be beneficial.

    Hope this helps! Umar

    | UmarKhan
    0

  • Thank you, Rebecca, for your reply. I agree there are many websites that do not have an m. variation and are responsive. In such cases it's fine to have Android links from the desktop version, as it's the only available version. I am not sure what to do when there are both desktop and m. versions available for a website. Should I be placing deep links on both versions or just one? Thanks for your help.

    | Vsood
    2
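For what it's worth, the Android deep-link annotation being discussed looks like this on a web page (the package name, host, and path are invented placeholders):

```html
<!-- Hypothetical app deep link annotation; package name and URL are invented -->
<link rel="alternate" href="android-app://com.example.app/https/www.example.com/page" />
```

When separate desktop and m. versions exist, they are typically already tied together via rel=canonical (m. to desktop) and rel=alternate (desktop to m.), so annotating the canonical desktop page is a reasonable starting point.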

  • Typically, no. You can see a discussion on this on another Moz Q&A here. There are exceptions to this rule, like when you're using AJAX and have a #!, but in your case, I'm pretty sure Google would ignore the filtered version of the page.

    | KristinaKledzik
    0

  • Hey Paul, Great answer; for some reason it totally slipped my mind that robots.txt is a crawling directive, not an indexing one. Yes, the pages return a 404 in the headers. I've grabbed a copy of the complete SERPs and will now manually disallow them. Thanks! Jon

    | EvansHunt
    0