Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • Hi Jay, Set your login/account pages to not be indexed. Both ways are equally effective; it depends on which is easier for you. Personally I prefer robots.txt: just one file, and no messing with page code. Best luck. GR.
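
    For example (a sketch - the paths here are placeholders for your own login/account URLs), the robots.txt approach would be:

        User-agent: *
        Disallow: /login/
        Disallow: /account/

    and the on-page alternative is a meta tag in the head of each page:

        <meta name="robots" content="noindex">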

    | GastonRiera
    0

  • Yes, it is relatively easy to do; there are specialized tools that will make it extremely clear what content has been changed or added. Moz Content is outstanding for doing this. https://moz.com/content/ You can run DeepCrawl, and it will keep track of all URLs and how much content was added or changed. https://www.deepcrawl.com/ You can also keep track of added URLs using https://www.screamingfrog.co.uk/seo-spider/ Kapost is a good way of grabbing all the content and seeing the differences. https://app.kapost.com/auditor You can also keep tabs on how many pages or posts are in Google's index by typing site:http://www.example.com into Google's search; this is not necessarily the best way of showing that content was created, but it will tell you whether it's in Google's index. I hope this helps, Tom

    | BlueprintMarketing
    0

  • For the original poster - what did you end up doing, and did it make a difference? (And) a similar question, but different... If suddenly some 90% of a 5,000-page blog is changed to have the blog pages noindexed, will the link juice now be more concentrated on the remaining 10%? In the old days we sort of called this "PageRank sculpting", and the idea was to focus the link juice on certain pages and defocus it on other pages. Does this make a difference these days? Keep in mind that the 4,500 noindexed pages are still followed, and all backlinks remain in place. Will the site start ranking better for the keywords on the 500 indexed pages? TIA!

    | seo_plus
    1

  • I almost forgot: about Internal Links, please, have a look at this discussion I am having here below: https://moz.com/community/q/rankings-drop-and-google-search-console-internal-links-panel-correlation I'd love to know your thoughts about it. Thank you again!

    | fablau
    0

  • Hi there Google offers great resources for migrating a site with URL changes; I would review those to get a high-level idea of what you need to do from a search engine standpoint. URLs with keywords are usually the way to go, so in this case it would be the product names. Here are two great resources on Moz for URL best practices (1 / 2). I have seen the move from parameter-based to keyword-based URLs work substantially better from a search engine perspective because search engines (and USERS) value the keywords in URLs. They are easier to read, crawl, link to, and so on. You really can't go wrong with it. I would first categorize the parameters by patterns. This could be anything from a category to a product to a color, etc. That way, when you're setting up redirects, you're not scrambling to figure out what the parameters are; you already know that you're focused on categories or products and can go from there. It also helps you set up your redirect file in a more organized fashion, so should you need to change anything, you can quickly find what you're looking for. I would review the resources above and discuss your capabilities with your IT team. From there, get a game plan of schedules, responsibilities, testing your redirects, and benchmarking traffic / keyword rankings so you can gauge success post-migration. Let me know if this helps or if you have any questions or comments - good luck! Patrick
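
    As a rough illustration of the pattern-based approach (Apache mod_rewrite syntax; the parameter and path names here are hypothetical):

        # Redirect an old parameter URL to its keyword-based equivalent,
        # e.g. /product.php?id=123  ->  /products/blue-widget/
        RewriteEngine On
        RewriteCond %{QUERY_STRING} ^id=123$
        RewriteRule ^product\.php$ /products/blue-widget/? [R=301,L]

    Grouping rules like this by category or product keeps the redirect file organized and easy to change later.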

    | PatrickDelehanty
    0

  • The simple answer may be that users enter through organic traffic, add something to their cart, then go to some other tabs or do some comparison shopping; the session times out (30 minutes is standard), and so when they come back to the cart page directly they start a new session but are still tagged as coming from organic. David Sottimano has a really good write-up in this Deepcrawl post: "If a user enters a site via Google / Organic, allows the session to time out, and then navigates to a non-indexable page, that non-indexable page will be credited as Google / Organic Landing page. I had been skeptical about this for a long time, even though experts like Mike Sullivan answered me on Quora, I had to test this many times to make sure this was actually the case – I also had a former colleague double check my test results (thanks Tom Capper)." Another post on Jfet.io discusses the same idea: "My guess?  People are adding items to their cart, then doing comparison shopping or coupon hunting.  This process lasts over 30 minutes (maybe they wander off to lunch or for a coffee break), and when they return to the browser with your site’s cart open, a new session starts.  In that new session, the landing page is the cart and it’s direct traffic.  Because GA rolls with the last non-direct source, it’s attributed to search/email/whatever." I'd recommend testing out all the action items in both of those posts and seeing if their solutions work. My take is that it's probably people revisiting their cart after a session timeout! Let me know what you find out.

    | Joe.Robison
    0

  • Great questions... If you feel the calls to action you are adding are relevant to the content and add value to the page, then there should not be any problem adding them in.  Do you currently have the sign-up forms in other locations on the page? This is definitely something you could easily test by making the change on 1-2 posts and not others, and seeing if there appears to be any negative impact.   Considering this would be "supplemental" content to the page, I would definitely make a point to try and give them a non-intrusive call to action. Cheers, Jake

    | HiveDigitalInc
    0

  • Hi, It might be a little counter-intuitive to enable ping-backs and trackbacks to find the spammy sites only to have to disavow them, when disabling this function means that they are unable to do a ping-back or trackback in the first place. I personally wouldn't go looking for spammy backlinks unless you feel they are causing you a problem. Keep an eye on your Search Console for the links that are appearing and assess each one to see if you think it should be disavowed. -Andy

    | Andy.Drinkwater
    0

  • Hi there, What I get from your question is whether to use blog posts or pages. Strictly speaking, there is no difference. In my opinion, I'd stay with the option that suits you best; the SEO can be just as good for blog posts as for pages. Best luck. GR.

    | GastonRiera
    0

  • Thanks for answering all of my questions! It's interesting that when I do a simple site: search in Google, none of the main pages of your website are appearing. Most of the search results are either archives or comments. Typically, I've seen this kind of thing happen when something goes wrong in the redirects or a site is penalized. It looks like the big dip in indexation didn't occur until about August. I would think that if you pulled the trigger in June, pages would start dropping out of the index much sooner. In this case, your theory about a possible penalization might be right. I'd be interested to see what happens once Google considers the disavow file (unfortunately, that will take some time). Does anyone else have any input or possible reasons why pages on this site have dropped out of the index so quickly?

    | sergeystefoglo
    0

  • Hi, This is great - thank you for responding. Some really good examples! Becky

    | BeckyKey
    1

  • Adding a meta noindex tag can mean it takes a few weeks for a page to fall out of the index. These pages probably aren't doing you much harm, so if you wanted to just wait for them to fall out, that's probably fine (although I would update the tag content to "noindex, follow" to help Google crawl to the other noindexed pages). If you really want them out of the index faster, you could use the "Remove URLs" function under Google Index in Google Search Console, which will temporarily remove them from the index while Google is registering the noindex tags, or you can use the Fetch + Render tool and then Submit URLs in Google Search Console, which will cause Google to come back and crawl your pages and find the noindex tag.
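
    For reference, the updated tag in the head of each of those pages would look like this:

        <meta name="robots" content="noindex, follow">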

    | RuthBurrReedy
    0

  • Agreed with Chris - when you have a lot of pages and your code is a little more complex than basic markup, Google Search Console has a habit of sending these. What I have also seen in the past is that it picks up parts of your tracking code and tries to find URL structures within the code that don't really exist but are part of it. Nothing to really worry about; if you make sure you run a monthly or quarterly crawl to check for weird URL structures on your site, and these URLs don't pop up there, you should be fine. As mentioned, just mark them as fixed so the real issues will move up again.
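
    As a hypothetical illustration, a tracking snippet like this can produce those phantom URLs - the crawler can read the string literal as a path and report a URL (e.g. /virtual/123) that doesn't actually exist on the site:

        <script>
          // Google Analytics virtual pageview; "productId" is a placeholder
          ga('send', 'pageview', '/virtual/' + productId);
        </script>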

    | Martijn_Scheijbeler
    0

  • Hi Dan, I had missed that reply; cheers for the heads-up (my email notification never came through). I'll talk to the devs about implementing it to target all bots. Thanks!

    | KateWaite
    0

  • A canonical is kind of like a bots-only 301 redirect. So, from a purely mechanical perspective, using a canonical can pass link equity to your other page without redirecting users off the forum thread. Now, this would be a deceptive use of the rel=canonical tag, and the bots would stop respecting it on those pages. Since a canonical is a suggestion, not a directive, if the bots think that your canonical is improper, deceptive, incorrect, etc., then they can just stop following it. Ultimately, using a canonical tag in the manner you're thinking wouldn't work out the way you would want it to. You might be able to pass equity from the one page to the other for a time... but that would not be a proper, best-practices use of the tag, and it would not have long-term effects. You'd be better served by looking at updating/expanding your content, internal linking, and backlink profile. And take a look at the article that Andy linked to in his response.
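
    For reference, the tag sits in the head of the page and points at the version you want credited (the URL here is a placeholder):

        <link rel="canonical" href="https://www.example.com/preferred-page/">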

    | MikeRoberts
    1

  • Huge corporation with lots of sites and large number of people on disparate teams (marketing, technical, management) involved in maintaining/updating the sites -- easier for anyone to be able to take a quick look at the sitemap online to see what version it is.

    | ATT_SEO
    0

  • That part is probably configured by the Yoast plugin as it looks pretty much the same as the set-up that we have.

    | Martijn_Scheijbeler
    0

  • The HTTPS pages are inaccessible without logging in because it is paid content. The HTTP version is a preview of the actual documents and images. I would like the pages to be indexed so people will sign up to preview the actual docs and become a paying member. Thank you for the advice. Joey

    | JoeyGedgaud
    1

  • Hey Joey, They would just need to be able to crawl and access it somehow. So you'd want to link to that content from public pages, or maybe try putting it in an XML or HTML sitemap, and, as mentioned, try "submitting to index" in Search Console. But yes, they'll index it as long as they can crawl it from somewhere else. Running a crawler on your site and seeing if the content gets picked up is a good way to check.
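
    A minimal XML sitemap entry for that content would look something like this (the URL is a placeholder):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>https://www.example.com/preview/document-name/</loc>
          </url>
        </urlset>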

    | evolvingSEO
    0