Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO Issues

Discuss site health, structure, and other technical SEO issues.


  • I knew that sounded like a Google A/B test protocol! A good rule of thumb is to avoid changing URLs unless it's absolutely necessary. There's a lot going on with that URL in the background that Google knows about: internal and external links as I mentioned above, but also XML sitemaps and usage metrics. You don't want to point them elsewhere and have Google re-learn a new URL structure and step through a redirect just to get there. Google has put more emphasis on UX in the last couple of years, so improving the usability of this page, as you've done by A/B testing, is likely to benefit you in the long run.

    | LoganRay
    0

  • Hi Jamie, For the reverse proxy method, the search engine will perceive that the blog is on the main domain (domain y). As far as any user or Googlebot is concerned, the blog is on domain y in a subfolder – they never see domain x. If you do this, your main challenges will be:

    - Making sure that domain x can't be accessed directly (otherwise, your entire blog will exist in two places). Users should only be able to access the blog by visiting the correct subfolder on domain y. Domain x can be configured to only accept connections directly from domain y.
    - Making sure to configure the proxy correctly. Proxies can be tricky, and may take up some of your team's time.
    - Making sure everything remains fast. Although Googlebot has nothing against proxies, if the proxy introduces a big delay to the page load time, that's going to have a negative impact.
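As a rough illustration, assuming nginx sits in front of domain y (all hostnames and the `/blog/` path here are placeholders), the proxy rule might look like:

```nginx
# On domain y: serve /blog/ by proxying to the blog host (domain x)
location /blog/ {
    proxy_pass https://blog.domain-x.example/;      # trailing slash strips /blog/ from the upstream path
    proxy_set_header Host blog.domain-x.example;    # the blog host expects its own hostname
    proxy_set_header X-Forwarded-For $remote_addr;
}
```

This is only a sketch of the idea, not a drop-in config – the equivalent in Apache would use mod_proxy's ProxyPass directive.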

    | StephanSolomonidis
    0

  • It will still be seen as the same page.

    | EricaMcGillivray
    0

  • Hi Ikkie, Thank you so much for your prompt reply. We had already done a website audit of well-known sites in the target country and language, and as you mentioned, most of them go down your suggested route. However, our concern is that well-known sites don't always adopt on-page SEO best practices – or in some cases even need to, given the brand equity they enjoy – so we're not sure whether we should follow this benchmark or whether there's a better structure.

    | nickspiteri
    0

  • Hey Paul, I would definitely try to push the client to purchase a new domain and create a new brand around the new business. That seems like a much easier and more efficient process than pivoting the current business.

    | JordanLowry
    0

  • It's a valid concern, but if you make sure that Google is perfectly able to crawl the pages again in the new scenario, then you should be able to point it to the new pages via the canonical URLs.
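For reference, pointing Google at the new page is done with a canonical tag in the head of the old page – a minimal sketch with a placeholder URL:

```html
<!-- On the old/duplicate page; the href is a placeholder -->
<link rel="canonical" href="https://www.example.com/new-page/" />
```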

    | Martijn_Scheijbeler
    0

  • I think the issue comes from the way you handle the pagination and/or the way you render archived pages.

    Example: the first archive page of Aktuale is http://zeri.info/arkiva/?formkey=7301c1be1634ffedb1c3780e5063819b6ec19157&acid=aktuale – clicking on page 2 adds the date: http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16&acid=aktuale&formkey=cc0a40ca389eb511b1369a9aa9da915826d6ca44&faqe=2#archive-results => I assume that you're only listing the articles published from June 1st till today. If I check all the different sections and the number of articles listed in each archive, I get approx. 1,200 pages – add some additional pages linked on these pages and you get to the 2K pages you mentioned.

    There seems to be no way to reach the previously published content without executing a search – which Screaming Frog can't do. It's quite possible that this is causing issues for Googlebot as well, so I would try to fix this.

    If you really want to crawl the full site in the meantime, add another rule in URL rewriting – this time selecting 'regex replace' – with regex: from=2016-06-01 and replace regex: from=2010-01-01 (replace with the earliest date of publishing). This way the system will call the URL http://zeri.info/arkiva/?from=2010-01-01&until=2016-06-16&acid=kultura&formkey=5932742bd5dd77799524ba31b94928810908fc07&faqe=2 rather than the original one – listing all the articles instead of only the June articles. Hope this helps. Dirk
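To illustrate what that 'regex replace' rule does to the archive URL, here is the same substitution sketched in Python (the actual rewriting happens inside Screaming Frog's URL Rewriting feature; the formkey is omitted for readability):

```python
import re

# Hypothetical page-2 archive URL (formkey parameter omitted)
url = ("http://zeri.info/arkiva/?from=2016-06-01&until=2016-06-16"
       "&acid=aktuale&faqe=2")

# Same substitution as the Screaming Frog rule: widen the date window
# so every article is listed, not just June's
rewritten = re.sub(r"from=2016-06-01", "from=2010-01-01", url)
print(rewritten)
# http://zeri.info/arkiva/?from=2010-01-01&until=2016-06-16&acid=aktuale&faqe=2
```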

    | DirkC
    0

  • Your robots.txt file is used to give instructions to bots visiting your site – which parts can/cannot be visited. If your page is in the cache, you are probably allowing the bots to visit the page (otherwise it wouldn't be there). The reason why you are redirected to the homepage must have another cause. Are you using a meta-refresh? JavaScript or .htaccess redirects? You could try to check what's happening by copying the cache URL into httpstatus.io – the URL to use is http://webcache.googleusercontent.com/search?q=cache%3A<your encoded URL here> – to encode your URL you can use http://meyerweb.com/eric/tools/dencoder/ Dirk
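If you prefer, you can also build that cache URL yourself – a quick sketch in Python (the page URL is a placeholder):

```python
from urllib.parse import quote

page_url = "https://www.example.com/my-page/"  # placeholder: your own URL

# safe="" percent-encodes every reserved character, including ':' and '/'
cache_url = ("http://webcache.googleusercontent.com/search?q=cache%3A"
             + quote(page_url, safe=""))
print(cache_url)
# http://webcache.googleusercontent.com/search?q=cache%3Ahttps%3A%2F%2Fwww.example.com%2Fmy-page%2F
```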

    | DirkC
    0

  • Just make certain your http > https redirect is a 301, not a 302 (a lot of the examples on the web for this redirect are actually wrong in this regard). A 302 will make it difficult to get the old URLs to drop out of the index.
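On Apache, for example, the .htaccess rule might look like this – a sketch assuming mod_rewrite, with the R=301 flag that many examples leave out:

```apache
RewriteEngine On
RewriteCond %{HTTPS} off
# Without an explicit R=301, a bare [R] flag defaults to a temporary 302
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```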

    | ThompsonPaul
    0

  • Fully agree with Dirk – this is the proverbial Very Bad Idea. In addition to the significant issues this can cause with duplicate content, it's also going to bork your Analytics. It could also be siphoning link equity if folks are linking to his URLs instead of yours – especially if he later leaves and decides to point his domain name to another (possibly competing?!) site. There's no reason (or justification) for the employee to be using URL masking here. If he isn't willing to put the work into making a site of his own, he should be required to set up a proper 301 redirect of his domain name to the company site. Paul

    | ThompsonPaul
    0

  • Thank you, Kathiravan. For jewelry and gold coins, I think the image and "Read more" is the best option for both user experience and SEO, assuming "Read more" goes to the product page. My advice would be to link the product title to the product page as well, and for it to be the first link to that page in the code ("Read more" being the second link), since the first link determines the anchor text. Does that answer your question?
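A rough sketch of that listing markup (product names and URLs are hypothetical), with the title link first in the source so it sets the anchor text:

```html
<!-- Title link comes first in the code, so its anchor text counts -->
<h3><a href="/shop/gold-coins/liberty-coin/">Liberty Gold Coin</a></h3>
<img src="/images/liberty-coin.jpg" alt="Liberty Gold Coin">
<!-- Second link to the same product page -->
<a href="/shop/gold-coins/liberty-coin/">Read more</a>
```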

    | Everett
    0

  • Try putting that .htaccess on the old site. It will rewrite the requested URL before it can be processed by the server, so that the request contains the new domain. On the new domain, you already have the code to process these requests. Try that and report back – it will probably work after that.
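A sketch of what that rule could look like in the old site's .htaccess (domain names here are placeholders – adjust to your setup):

```apache
RewriteEngine On
# Match any request arriving on the old domain...
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.example$ [NC]
# ...and send it, path intact, to the same path on the new domain
RewriteRule ^(.*)$ https://new-domain.example/$1 [R=301,L]
```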

    | BasKierkels
    0

  • Hey Damon, your "guess" about linking to the specific page with specific target anchor text isn't wrong. However, with link building, the best approach is always to "distribute the weight" across your entire site. Your home page is going to naturally accumulate a large percentage of brand-based/commercial links, and in doing so, accumulate authority that will flow naturally through your site anyway. But when we talk about a "content-rich site," it's always a site that has a very high percentage of deep links – 70% or more is a great target. That's why having a blog embedded on your site as a subdirectory is a GREAT way to accumulate both deep links and inbound linking authority.

    But doing this effectively means understanding what "quality content" looks like. The best way to accumulate good, deep links to your site (and move the dial algorithmically in Google) is to put up GREAT content that people will want to link to. I know we hear "all you need is great content and you'll move the dial" all the time, but most people don't know what good, evergreen content looks like. Over 2 million blog posts are published each day. Most of them SUCK. But if you can publish high-quality pieces of content that generate shares, freshness signals, and incoming links, that will lead to great individual Page Authority scores that reinforce overall Domain Authority.

    If I could offer you a tip in this regard, I highly recommend a tool like Answer the Public. Go over there, type in target keyword phrases, and look at the questions returned. Then write detailed pieces of content that ANSWER those questions on your site. Not only is this type of content like CATNIP to your visitors, but it will also work to generate those fantastic Featured Snippets (Answers) you are seeing more and more at Position Zero in Google (like this one for "How to be a Food Blogger"). Hope that's helpful. Good luck with your link building!

    | mediawyse
    1

  • I did this on my wordpress site.  Here are my instructions: https://moz.com/community/q/switched-from-wix-to-wordpress-dreaded-hashtag-url

    | lcallander
    0

  • Hi Allie! Did you work this out? We'd love an update.

    | MattRoney
    0

  • If you've got that path anywhere in your navigation or other internal linking, you'd want to remove that or update it to /shop/necklaces/. The next step would be to 301 redirect /shop/necklaces/necklace/ to /shop/necklaces/ just in case you've got any links pointing to it - this will get your users where they want to go and also let search engines know you've relocated the page.
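On Apache, for instance, that redirect can be a one-liner in .htaccess (a sketch assuming mod_alias is available; adjust for your server):

```apache
# Permanently move the old path to the category page
Redirect 301 /shop/necklaces/necklace/ /shop/necklaces/
```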

    | LoganRay
    0

  • Nice update, Kevin! And a good use case for sites that could really benefit from AMP: heavy mobile traffic/users, content-heavy pages, and the ability to mitigate technical constraints. Call-in based service businesses in competitive markets would also be prime candidates for AMP.

    | RyanPurkey
    1