Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO Issues

Discuss site health, structure, and other technical SEO issues.


  • So here's one issue I have: I don't have the company name plastered all over the place. The only place it actually appears in text form is the footer. The site logo is only in the header, and the hours of operation are only on the contact page. http://grinbaum.moderninterface.net/ Currently I only have the contact page marked up, and I brought in a hidden div so I could mark up the info that appears only in the header and footer. What would you suggest?

    | MichaelGregory
    0
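
A common alternative to the hidden-div approach described above is a JSON-LD block, which lets you state business details for search engines without any visible (or hidden) text on the page. This is only a sketch; every name, URL, and opening time below is a placeholder, not taken from the site in the question:

```html
<!-- JSON-LD structured data: all values here are hypothetical examples -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Company",
  "url": "http://example.com/",
  "openingHours": "Mo-Fr 09:00-17:00",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example St",
    "addressLocality": "Example City"
  }
}
</script>
```

Because the data lives in a script block rather than a hidden div, there is no visible-versus-hidden content mismatch for a crawler to flag.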

  • First of all, thank you both for taking the time to answer my question.

    @Russ: I was also hesitant about displaying the text first and then collapsing it with some JS, but I read somewhere that Google is, or will be, analyzing JS, and of course this could lead to a penalty, if not now then somewhere in the future. So I think I will follow your advice and stick with your first suggestion. In that case the user has to click more, which is a slight usability limitation, but I guess to some extent I have to accept a compromise. Do you think it is a problem if content (in this case the headline and teaser) is repeated on the same page?

    @Dimitrii: Well, what Matt is saying is that they won't count it as spam and penalize the website. But he does not say anything about how click-to-expand content is weighted. The solution with separate pages will not work in my case, as I need all descriptions on one page for SEO, and it is also a slight usability limitation because the user has to keep switching between pages.

    | Benni
    0
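
One way to keep full descriptions in the HTML source while still collapsing them for users, without any JavaScript at all, is the native `<details>` element. A minimal sketch with placeholder copy:

```html
<!-- The full text is in the page source (crawlable) but collapsed
     until the user clicks the summary line. Copy is hypothetical. -->
<details>
  <summary>Product headline / teaser</summary>
  <p>The full description stays in the source and is readable by
     crawlers, but is hidden from the user until expanded.</p>
</details>
```

Since no script toggles the content, there is nothing for a JS-rendering crawler to interpret differently from what is already in the markup.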

  • Disallow sounds like a fair solution. Has this not worked? The "not found" errors in Webmaster Tools can be a bit wonky in my experience. For a long time after a domain change, I was getting "not found" errors for URLs that weren't linked from anywhere. Anything that was linked was properly redirected. I don't know if they have cached sitemaps or what was going on there. I'd like to know more about how that works myself, but the point is, that stuff may keep reappearing for some time before it finally goes through.

    | eglove
    0

  • Hi there. Yes, you should, especially if the summary contains parts of the full article content or is very close to it. Basically, if there is any chance the summary can rank for the same keyphrases as the full article, canonicalize. Hope this helps.

    | DmitriiK
    0
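
Canonicalizing a summary page to the full article comes down to a single link element in the summary page's `<head>`. The URL below is a placeholder:

```html
<!-- On the summary page, pointing at the full article (hypothetical URL) -->
<link rel="canonical" href="https://example.com/full-article/" />
```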

  • If I were in your situation, I would probably do exactly as you say you're going to do. EDIT: Using Google Analytics or Google Tag Manager to verify the account might let you skip reaching out to the IT guys, but if they work fast enough it comes to the same thing.

    | arielbortz
    0

  • Thanks for the input, Dmitrii, and sorry for not replying sooner. My experience is a bit different - maybe because it mainly stems from well-structured pages that were reindexed daily. Breadcrumbs generally showed up "almost instantly" (i.e. on the next index round) or not at all. The same goes for some other cases of schema - even being deindexed and reindexed within the same week (be it after a silly mistake or the realisation that Google, for some unclear reason, dislikes certain markup alongside certain other markup) has happened to me and is far from extraordinary. My prior experience with (old) schema.org breadcrumb markup was frustrating to the point of avoiding it, but next time I shall give it a try again. It's always better not to mix syntaxes.

    | netzkern_AG
    0

  • If the change was made after the crawl date, it's probably that. You could try a fetch & submit to index on both pages and then check whether the error persists. In my experience, the test from dejanseo does a good job. If it doesn't list an error, your implementation should be ok. Dirk

    | DirkC
    0

  • I came across this - not sure if you ever solved it. jQuery is how you reduce the links without removing them.

    | CHADHARRIS
    0

  • Many thanks to everyone for the help!

    | Mike.Bean
    0

  • Josh is right about the reason. Personally, I would just do it if the old structure is really, really bad.

    | _Heiko_
    0

  • I would go with rel=canonical straight away; robots.txt is a bit harsh for that sort of thing. You could end up delisting yourself.

    | seoman10
    0

  • Ok, now I got the questions... sorry. I didn't know the answers, but I started to Google and... I'll be back.

    | paints-n-design
    0

  • "Why not do a parent category by type of clothing - "snowsuits", "sweaters" and so on - and then have boy/girl filters inside?" "I acknowledge that implementing a filter for "boys" and "girls" would be the best way to solve this redundant categorization, but that would simply be too expensive for our client." That being said, canonicals will help direct the juice to the right (/outwear/) page and away from /outwear/girls and /outwear/boys. The only other option I can see is to have an overview category (/outwear/) and then deindex the subcategories in robots.txt:

    Disallow: /outwear/girls*
    Disallow: /outwear/boys*

    But that only helps Google with what you already have. If someone links directly to the /outwear/boys/ page, that will get lost. So canonicals would seem to be the way to go in the absence of filters.

    | MattAntonino
    0
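
The canonical option above is one tag per subcategory page, pointing at the parent. A sketch using the /outwear/ paths from the post, on a hypothetical domain:

```html
<!-- Placed in the <head> of /outwear/girls/ and /outwear/boys/
     (example.com is a placeholder domain) -->
<link rel="canonical" href="https://example.com/outwear/" />
```

Unlike the robots.txt route, any link equity flowing into the subcategory pages is consolidated onto the parent page rather than lost.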

  • Screaming Frog does do this. It's what we use to find missing alt attributes now. I think it would be a great thing for Moz to add to the crawl report, or to run a crawl for this specific reason. Lawrence - you can use the crawl report to see meta description info, titles, response codes and more.

    | franchisesolutions
    1

  • The solution completely depends on what your goals are. Are you investing in those tag pages? Is there a reason to keep them around? If you noindex as the others here suggested, you're not only not going to rank with that page; you probably won't with the other either. I'd decide what you want to support (it seems categories) and redirect the tag pages. The second-best option is a canonical, but then you still have to support the tag pages, the search engines might not respect it, and you're putting a bandaid on a bigger wound. Hope this helps you think through it!

    | dohertyjf
    0

  • Just to make sure I understand: can you clarify the sequence of the changes and how long each lasted? Do you know if one set of URLs has links to it or was ever indexed? Let me explain.

    It sounds like you had a site that was using http and was an ASP site, so you had URLs like http://www.website.com/file.asp (we will call this URL type A). You then converted to https, so the URLs were like https://www.website.com/file.asp (URL type B). You then updated to a PHP site, so now the URLs are like https://www.website.com/file.php (URL type C).

    You can set up 301s to go from A to B and then another set to go from B to C. Your question is: can you set up a 301 to go from A to C? The answer is yes, and you should do this. Any time you can reduce the number of hops, the better.

    What you need to think about is: what about the A to B and the B to C redirects? At a minimum, you need to eliminate the A to B 301s, as you have now decided to skip B and go right to C. What about the B to C 301s? It depends. If you had version B of the website out for a while, it was indexed by Google, and you have links built to B-version URLs, then yes, you need to leave the B to C redirects in place. You don't want to lose any of that equity.

    Likewise, let's say a version D of the site (https://www.website.com/file.html) comes out a year later, and you have lots of links into the C version of the site. You then need the A URLs to 301 to the D URLs (and get rid of the A to C 301s), the B URLs to 301 to the D URLs, and so on. In other words, go through another round of cleaning up the 301s and reducing the hops.

    Why do all this? Two reasons. First, there will still be links to the A, B, and C versions of the site; Google will still find and crawl them, and you want to get credit for those links. Second, Google keeps an internal log of URLs and will check them from time to time, even if no one is linking to them, and you want Google to find the right URL. In either case, if Google hits a version A URL, it would have to go to version B via a 301 and then on to version C. It can do it, but it would rather have one hop.

    Side note: try not to use global 301s, where you just 301 a bunch of pages to the home page. That does nothing for you as far as link equity. Try to keep the 301s a 1-to-1 relationship as much as possible.

    Take a look at this video, which backs up what I just said. The number of hops is discussed at about 3 minutes in, but the whole video is worth watching: https://www.youtube.com/watch?v=r1lVPrYoBkA

    | CleverPhD
    0
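
The single-hop A-to-C redirect described above might look like this in an Apache .htaccess file. This is a sketch using the hypothetical file names from the post:

```apache
RewriteEngine On
# Any request for file.asp (whether the old http "A" URL or the
# https "B" URL) is sent straight to the final https .php URL
# in one hop - no chained A -> B -> C redirects.
RewriteRule ^file\.asp$ https://www.website.com/file.php [R=301,L]
```

Because the same rule fires for both the http and https requests, it covers the A-to-C and B-to-C cases at once, which is exactly the hop reduction the post recommends.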

  • Awesome, Croy. That was extremely helpful. Thanks, Ruben

    | KempRugeLawGroup
    1

  • If you want to prevent member-only pages from showing up in Google's index (and I assume that is what you're after), then what you want is "noindex", or ideally just to block the pages in robots.txt. A quick look at your site indicates that there's members-only content in the 'resources' and 'presentations' folders (though of course there may be others you also want to hide). In your robots.txt file, you'd put these lines:

    User-agent: *
    Disallow: /resources/
    Disallow: /presentations/

    Now even if Googlebot discovers these pages somehow, it won't crawl them, and you shouldn't see them showing up in a Google search.

    | StephanSolomonidis
    0