Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO Issues

Discuss site health, structure, and other technical SEO issues.


  • Thanks for helping, Malika! To clarify for other readers: blocking pages in robots.txt after they have been indexed will actually prevent them from being removed from the index with a meta noindex tag, since Google won't be able to crawl the pages to see the noindex tag. If staging URLs have already been indexed (and assuming they still need to exist), here are the steps I would take:
    1. Add meta noindex tags to every staging URL.
    2. If urgent, also submit a URL removal request in Webmaster Tools (but this is usually not needed).
    3. Wait until the staging URLs are noindexed - you can check periodically by doing site: searches in Google.
    4. Only after they are noindexed, block search engines from crawling them with the robots.txt file.
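To sanity-check step 3 before flipping robots.txt, you can verify that a page's HTML actually carries the noindex directive. A minimal sketch using only the standard library (the sample HTML strings are made up for illustration):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tags."""
    def __init__(self):
        super().__init__()
        self.robots_directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            if attrs.get("name", "").lower() == "robots":
                self.robots_directives.append(attrs.get("content", "").lower())

def has_noindex(html):
    """True if the page declares a robots noindex directive."""
    parser = RobotsMetaParser()
    parser.feed(html)
    return any("noindex" in d for d in parser.robots_directives)

staging_page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
live_page = '<html><head><title>Home</title></head></html>'
print(has_noindex(staging_page))  # True
print(has_noindex(live_page))     # False
```

In practice you would fetch each staging URL and run its HTML through a check like this before adding the robots.txt block.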

    | evolvingSEO
    0

  • Agreed! If the copy is the same, it would be duplicate content. It might be worth looking at the URLs - versions with and without trailing slashes, www and non-www, and different protocols serving the same copy. They all look the same to human eyes, but search engines treat them as duplicates. More details on the URLs would help us answer your query better.
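To see how those variants collapse to one page, here is a rough sketch (the normalisation rules are illustrative, not a complete canonicalisation spec):

```python
from urllib.parse import urlsplit

def normalize(url):
    """Reduce protocol / www / trailing-slash variants to one canonical form."""
    parts = urlsplit(url.lower())
    host = parts.netloc
    if host.startswith("www."):
        host = host[4:]
    path = parts.path.rstrip("/") or "/"
    return host + path

# Four URLs a human reads as "the same page" - search engines see four URLs
variants = [
    "http://example.com/hotels/",
    "https://example.com/hotels",
    "http://www.example.com/hotels/",
    "https://WWW.example.com/hotels",
]
print({normalize(u) for u in variants})  # {'example.com/hotels'}
```

The fix on a real site is a 301 redirect (plus rel canonical) to one chosen variant, not normalisation after the fact.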

    | Malika1
    0

  • From an SEO perspective, the domain is regionfanilyholidays.co.uk, so I thought the category page would be /hotel/ etc.

    | neilhenderson
    0

  • Hi Sandi, I thought Screaming Frog would follow 301 rules and exclude them from the sitemap. To be sure, I just ran the domain, and it appears to be true. With the free Screaming Frog you can configure things like ignoring canonicalised pages, noindex pages, paginated URLs, and PDFs. It also allows you to set the change frequency and priority. Screaming Frog found 428 URLs and the sitemap has about 100 or so URLs, which is still in the range of being easily reviewed manually. Here is a link to the sitemap I generated for your site: https://drive.google.com/file/d/0B5yObTBvN3iVWFJZczJWTUlZU0U/view?usp=sharing Here is a link to the Screaming Frog official site: http://www.screamingfrog.co.uk/seo-spider/ When doing SEO it's always best to have a full toolbox! I hope this helps, Don
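If you'd rather script it than use a GUI tool, a sitemap with change frequency and priority is just simple XML per the sitemaps.org protocol. A minimal sketch (the URLs are placeholders):

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def build_sitemap(urls, changefreq="weekly", priority="0.5"):
    """Build a minimal <urlset> sitemap with changefreq and priority per URL."""
    urlset = Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = SubElement(urlset, "url")
        SubElement(entry, "loc").text = url
        SubElement(entry, "changefreq").text = changefreq
        SubElement(entry, "priority").text = priority
    return tostring(urlset, encoding="unicode")

xml = build_sitemap(["http://example.com/", "http://example.com/about"])
print(xml)
```

You would still want to feed it only canonical, indexable 200-status URLs, which is exactly the filtering Screaming Frog does for you.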

    | donford
    0

  • Wow - hadn't thought of that! Very creative use of resources! Thanks so much ~ Scott

    | measurableROI
    0

  • Well - charts rendered in JavaScript are OK for users but NOT OK for bots. Hiding text gets devalued, and that's also not OK for bots. Probably the best approach is to use a tool such as PhantomJS to make a static image of the charts. It can be a PNG or JPG with plenty of descriptive text (alt and title) explaining what the image shows. I know this wasn't the answer you were expecting, but bots can't see everything that JavaScript renders. One more thing: Moz had the same issue with their Whiteboard Friday posts - text and video only, which looked strange to bots. For the last two years they have added a text transcript of the video too, so the layout is now text-video-text-text-text-text. That seems to work for them. So my idea is: make the chart, and below it add a text representation of what's in the chart, with plenty of text. Bots love this (and humans too!), and you can keep the JS code.
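As an illustrative fragment (the file names and copy here are made up), the chart-plus-text layout could look like this:

```html
<!-- Static snapshot of the JS chart, with descriptive alt/title text -->
<img src="/charts/traffic-2015.png"
     alt="Organic traffic grew from 10,000 to 25,000 visits between Jan and Dec 2015"
     title="Organic traffic growth, 2015">

<!-- Text representation below the chart: crawlable by bots, useful to humans -->
<p>Organic traffic grew steadily through 2015, from roughly 10,000 visits in
January to 25,000 in December, with the largest jump following the May redesign.</p>
```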

    | Mobilio
    1

  • Thank you for your answers. So it doesn't matter how many 301s you have on a website, as long as there isn't a chain of more than 3? Regards

    | ReSEOlve
    0

  • Yeah, personally, ReSEOlve, I'd say to search Google and see what it chose, then stick with that. Unless, as Bryan notes, it makes a branding difference to you. But definitely make sure that you get the proper redirects in place once you've chosen one.
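On the chain question above, here is a toy sketch of counting 301 hops. The redirect map is made up; a real check would issue HTTP requests and follow Location headers instead:

```python
def chain_length(start, redirects, max_hops=10):
    """Count 301 hops from `start` until a URL that no longer redirects.

    `redirects` maps a URL to its 301 target; raises on loops or runaway chains.
    """
    hops = 0
    url = start
    seen = {url}
    while url in redirects:
        url = redirects[url]
        hops += 1
        if url in seen or hops > max_hops:
            raise ValueError("redirect loop or excessive chain at %s" % url)
        seen.add(url)
    return hops

redirects = {
    "http://site.com/old": "http://site.com/older",
    "http://site.com/older": "http://site.com/new",
}
print(chain_length("http://site.com/old", redirects))  # 2
```

Many individual one-hop 301s across a site are fine; it's long chains (and loops) that waste crawl budget and dilute signals.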

    | MattRoney
    0

  • Hello Craig, When I go to http://www.mangofurniture.co.uk I see a Rel Canonical tag referencing this URL version of the page: http://www.mangofurniture.co.uk/home. When I go to http://www.mangofurniture.co.uk/home I am redirected (301) to http://www.mangofurniture.co.uk. To Googlebot, this has the effect of an infinitely looping redirect. The solution is to change the Rel Canonical tag on your home page to http://www.mangofurniture.co.uk by removing the "/home" directory from the end.

    | Everett
    0

  • Hi Bas, The change is made. Do you know if (and what) I now have to do with my Moz campaign? Thanks in advance

    | Tymen
    1

  • Alexa uses a toolbar installed on users' computers to scan for links and send them to its cloud as "backlinks". There is no way to index someone else's inbox folder for links, as far as I know. I have Linkstant installed on my site, and here are similar "backlinks": http://mail2.daum.net/hanmailex/Top.daum (this is a Korean mail service) http://www.notprovided.eu/wp-admin/edit-comments.php (heh, it seems someone visited my website just to check whether a comment was legit) http://foreman2.cnet.com:9058/rb-foreman/flows.html?_flowId=edit-item-flow&id=75852664&versionCnt=0 (oops, I don't have access to this server) http://www.reddit.com/message/unread/ (Reddit inbox) http://mail.naver.com/?n=1415975274556 (another mailbox) http://1.1.1.1/login.html (probably an internal network) So Linkstant shows these as inbound links in its report, and I'm almost 99% sure that there ARE links somewhere. The same happens with Alexa - it detects links and uploads them to its reporting engine, and that engine counts the link as legit.
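Referrer-based "backlinks" like these can be filtered before reporting. A rough sketch - the host patterns are just guesses at what counts as private, not a vetted list:

```python
from urllib.parse import urlsplit
import ipaddress

# Heuristic markers of mailboxes, admin panels, and login pages (illustrative)
PRIVATE_HINTS = ("mail.", "mail2.", "wp-admin", "/message/", "login")

def looks_private(referrer):
    """Heuristic: mailboxes, admin panels, and private IPs aren't public links."""
    host = urlsplit(referrer).hostname or ""
    try:
        if ipaddress.ip_address(host).is_private:
            return True
    except ValueError:
        pass  # hostname is not a bare IP address
    return any(hint in referrer for hint in PRIVATE_HINTS)

print(looks_private("http://mail.naver.com/?n=1415975274556"))  # True
print(looks_private("http://example.com/blog/great-post"))      # False
```

A tool that applied a filter like this would report far fewer phantom inbound links from inboxes and intranets.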

    | Mobilio
    0

  • TL;DR - Yes, you should use both for better CTR in SERPs and on social networks. Open Graph is described here: http://ogp.me/ and OG is mostly used by Facebook and Google+. Twitter uses Twitter Cards: https://dev.twitter.com/cards/getting-started#opengraph (for Twitter only). Google+ uses schema.org or OG: https://developers.google.com/+/web/snippet/ https://developers.google.com/+/web/snippet/article-rendering Pinterest uses Rich Pins: https://developers.pinterest.com/docs/rich-pins/overview/ You should read the excellent article from Cyrus Shepard about all the meta tags: https://moz.com/blog/meta-data-templates-123 where everything is described with examples. And the answer is YES! It's 2015 now, and in a few days it will be 2016. Today, shares on social networks matter as much as backlinking. Of course, organic visits and visits from social networks are different, so you need to implement both sets of tags to get maximum performance.
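As an illustrative head fragment (the URLs and copy are placeholders), carrying both tag sets might look like this:

```html
<!-- Open Graph: read by Facebook and Google+ -->
<meta property="og:title" content="Example Article Title">
<meta property="og:type" content="article">
<meta property="og:url" content="http://example.com/article">
<meta property="og:image" content="http://example.com/images/article.jpg">

<!-- Twitter Cards: Twitter falls back to the OG tags above for missing values -->
<meta name="twitter:card" content="summary_large_image">
<meta name="twitter:title" content="Example Article Title">
<meta name="twitter:description" content="One-sentence summary of the article.">
```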

    | Mobilio
    0

  • RewriteCond %{QUERY_STRING} ^p=([0-9_]*)$
    RewriteRule ^(.*)$ /p/%1? [R=301,L]
    The above works on our test area but not on the client's, so it's a hosting issue.

    | Cocoonfxmedia
    0

  • Hello Mozzers, I have the same problem, but just the other way round, and was wondering about the effects as well: if I implement a redirect from a slash URL to a non-slash URL, will this affect my SEO visibility within Google? If so, is there another/better solution? Many thanks in advance!
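For reference, a common .htaccess pattern for 301-redirecting slash URLs to their non-slash versions looks like this. It's a sketch - test on a staging copy first; it deliberately skips real directories, which legitimately end in a slash:

```apache
RewriteEngine On
# Don't touch real directories, which legitimately end in a slash
RewriteCond %{REQUEST_FILENAME} !-d
# 301 any URL with a trailing slash to the same URL without it
RewriteRule ^(.+)/$ /$1 [R=301,L]
```

Done consistently (one 301, no chains), picking either variant is fine for SEO; what hurts is serving both.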

    | dexport
    0

  • Thanks - so I have to continue the search for where a tenfold increase in indexed pages (according to Search Console) might possibly come from. Sadly, the rest of your reply misses my problem; probably I have been unclear. The reason I was asking for a method to find out which pages ARE indexed is: I seem to have no problem getting stuff indexed (crystal-clear sitemap with dates, clear link structure, etc.), but Google seems over-eager and indexes more than there really is. If it is some technical problem, I'd like to fix it - but Google does not show anywhere which pages are actually indexed. There are lots of methods around, but none of those I found work as of now. I was already aware of JumpTo links, as I stated, and that works nicely. No problem at all with "not enough" indexed pages - really rather the opposite, with no idea what causes it. Regards, Nico

    | netzkern_AG
    0

  • Hi there, I agree with Peter. You can 301 redirect your old images to the new ones. Leaving the old images won't hurt anything, but it would be good to delete them to free up space on the server (why not).
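A sketch of such an image redirect in .htaccess (the paths are hypothetical - adjust to your old and new image directories):

```apache
# 301 old image paths to their new location, preserving the file name
RedirectMatch 301 ^/images/old/(.*)$ /images/new/$1
```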

    | evolvingSEO
    0

  • Hi Shauni! I agree with everyone here. There's nothing inherently spammy about having a lot of links, but if those links ARE spammy, then Google should catch on eventually. You might consider running a competitive link audit to see if you can take advantage of any of your competitors' links. There's a quick video on the practice here.

    | MattRoney
    2

  • I have this situation on my sites too. This might not be right, but to double-check that my 301s aren't being seen as duplicates of the 200s they redirect to, I'll filter my Moz crawl test report to show only the records where the HTTP status code isn't 200 and duplicate page content is yes. I have a couple of different crawlers that I use, and they each work a little differently. I think some might look at your situation and report page A as a 200 just because it resolves to page B, which is a 200, while other crawlers are more sensitive and see page A as a 301. I hope that helps!
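That filter is easy to script against a crawl export. A minimal sketch, assuming a CSV-like export with hypothetical column names:

```python
import csv
import io

# Hypothetical crawl export: URL, HTTP status, duplicate-content flag
report = io.StringIO("""url,status,duplicate
http://site.com/a,301,yes
http://site.com/b,200,yes
http://site.com/c,301,no
""")

# Keep only rows where status != 200 AND duplicate == yes:
# redirects the crawler is (possibly wrongly) also flagging as duplicates
suspects = [
    row["url"]
    for row in csv.DictReader(report)
    if row["status"] != "200" and row["duplicate"] == "yes"
]
print(suspects)  # ['http://site.com/a']
```

Any URL that surfaces here deserves a manual look: a true 301 has no content of its own to duplicate.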

    | matthew.dimmett
    0

  • Uhmm... are you suggesting that SEO comes down to "just a matter of mentioning it in a blog post"? Man, we are all in the wrong business. Of course not! To rank in another country, you need to do every single step of the SEO process with a focus on that country. If you are talking about making the same domain rank in several countries, then you have to do all the SEO steps plus a ton of technical work to make sure all of your content is properly targeted and not duplicated. Please read these: https://moz.com/beginners-guide-to-seo https://moz.com/learn/seo/international-seo
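One concrete piece of that technical work is hreflang annotations, which tell Google which country or language each version of a page targets. An illustrative fragment (the domains are placeholders):

```html
<!-- Point Google at the right regional version for each country's SERPs -->
<link rel="alternate" hreflang="en-gb" href="http://example.co.uk/">
<link rel="alternate" hreflang="en-us" href="http://example.com/">
<!-- Fallback for searchers who match none of the listed locales -->
<link rel="alternate" hreflang="x-default" href="http://example.com/">
```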

    | DmitriiK
    0