Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO Issues

Discuss site health, structure, and other technical SEO issues.


  • Hello Andrew, If you are on WordPress, here is an article explaining how to disable trackbacks/pings, which I advise doing for most sites. It also shares how to remove trackbacks from previous posts by running a SQL query on your database. Alternatively, if you want the pagerank to pass through those links (inadvisable, since 99% of them will be spammy in my experience), you can change the redirect to a 301. See this Yoast article for more info on how to do that. It was written in 2007, but Joost de Valk is ahead of his time and I think it still works. Every CMS is different, so if you're not on WP please let us know what you're using. Of course, you can always just block the /trackbacks/ directory from being indexed or crawled via the robots.txt file. I wouldn't do that with the /feeds/ directory though, as I find them useful.
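    A minimal sketch of that robots.txt rule, assuming the trackback URLs live under a /trackbacks/ path as described above (adjust the path to match your install):

    ```
    User-agent: *
    Disallow: /trackbacks/
    ```

    Note this only stops crawling; URLs that are already indexed may also need a noindex or a removal request.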

    | Everett
    0

  • I hate to bring up an old thread, but here's the answer if you're still looking for it: http://yoast.com/google-wordpress-and-trackback-urls/ Cheers.

    | Everett
    0

  • I think this is actually a really good question. The main reason most SEOs these days don't "sculpt" or "shape" with nofollow links anymore has to do with the fact that they will still take away from the total amount of pagerank available to be passed on to other links on the page. So the question I'm reading above seems to be: do <a data-href="..."> links still take a portion of pagerank away from the total PR available to be passed on to other links on the same page? My answer is "I don't know," but I'd like to see a test if anyone can think of a way to try it out. However, even if the test came back saying "No, these are treated differently and do not currently affect the total amount of PR available to other links on the page," I still would not use it for the purpose of pagerank sculpting. The reason is that how Google treats these links today can change tomorrow, making "tactics" like this a bad idea IMHO. It just leaves a mess for either you or some other poor SEO to clean up later. If I don't want pagerank to pass through a link on a page, I simply don't put the link on the page. In extreme circumstances where there is no other way around it, I might consider obfuscating the link with some JavaScript, for instance. However, even if you block the .js file that handles this "link" in the robots.txt file, Google still executes it (as you can see when viewing the cached version on Google for pages that do this).
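    For what it's worth, a hypothetical sketch of the JavaScript obfuscation mentioned above - the URL never appears in a crawlable href attribute and is only assembled into a real location on click (the data-link attribute and markup are illustrative, not a standard):

    ```
    <!-- no <a href>, so there is no link element for PageRank to flow through -->
    <span data-link="example.com/some-page"
          onclick="window.location.href = 'https://' + this.dataset.link">
      Read more
    </span>
    ```

    As noted above, Google does execute JavaScript, so treat this as a deterrent rather than a guarantee.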

    | Everett
    0

  • Disqus doesn't use iframes. The content is displayed on the page as HTML within a div tag. We use Disqus for comments on the TranslateMedia blog. http://www.translatemedia.com/400-million-chinese-cant-speak-mandarin.html/ If you view the source code you can see the comments in there.

    | TranslateMediaLtd
    0

  • Adding the following lines to the bottom of your robots.txt should do it:

    Sitemap: http://www.example.com/sitemap/uk/sitemap.xml
    Sitemap: http://www.example.com/sitemap/de/sitemap.xml

    If you wanted to update the file names to be different it wouldn't hurt, but I don't think you would have any problems with how they are currently set up. If you have submitted them to WMT and they are being picked up OK, I think you are fine.

    | Schwaab
    0

  • Check this out - https://support.google.com/webmasters/answer/139394?hl=en Basically, if the same page loads on several URLs, the canonical tag tells search engines which URL is the "real" location of the page.
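    In practice it's a single tag in the <head> of each duplicate URL, pointing at the preferred one (example.com is a placeholder):

    ```
    <link rel="canonical" href="http://www.example.com/real-page/" />
    ```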

    | OlegKorneitchouk
    0

  • Hi Anne, I would make sure the page is in fact accessible to the crawler. 1. First, check the page itself in something like URI Valet and make sure it's responding with a 200 OK code. Use Googlebot as the user agent. 2. You can also "Fetch as Googlebot" in Webmaster Tools and from there submit the URL. So do the fetch and, assuming it returns your 200 code, you can then re-submit to the index. 3. You can also try crawling the site with Screaming Frog SEO Spider (with Googlebot as the user agent) and see if those pages come up in the crawl. Lastly, I am curious how you know the "indexed date" of the page? I know if the page is cached you can see the cache date, but I'm not sure where an indexed date would come from. Sometimes Google may just not re-cache or update the index of a page for a while if it has lower PageRank and/or the content is not new and fresh - it will not see a reason to update the cache. Also, have these URLs ever been cached? -Dan
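    Step 1 can also be scripted; here's a minimal Python sketch (the user-agent string and function name are illustrative) that fetches a URL while identifying as Googlebot and reports the status code, similar to what URI Valet shows:

    ```python
    # Minimal sketch: request a page while identifying as Googlebot and
    # return the HTTP status code the server sends back.
    from urllib.request import Request, urlopen

    GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                    "+http://www.google.com/bot.html)")

    def status_as_googlebot(url):
        req = Request(url, headers={"User-Agent": GOOGLEBOT_UA})
        with urlopen(req) as resp:
            return resp.status  # 200 means the crawler can reach the page
    ```

    Anything other than 200 (a 404, 500, or an unexpected redirect) is worth investigating before re-submitting.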

    | evolvingSEO
    0

  • Hi Phil, Yeah, fair point re the publisher tag; like you say, there is a lot of debate about exactly how to implement it, but I'll definitely try and refine its use if I can. Cheers for the video advice, I'll keep working on it. Stu

    | stukerr
    0

  • Here's the page from Google about site links: https://support.google.com/webmasters/answer/47334?hl=en

    | KeriMorgret
    0

  • Thanks all! Yes, I was familiar with the "Text-only" version and the Fetch as Googlebot, so I wasn't overly concerned. It just seemed odd that this particular spider couldn't get to the content. I think it is a very unsophisticated spider!

    | danatanseo
    0

  • Alec, The first thing you need to do is take a deep breath and understand that you are not the only person this has happened to. I would suggest the following steps: Review Webmaster Tools and see if you have gotten any kind of link warning from Google. You might also want to do a backlink analysis of not just your own site but your competitors' as well. Have they done something recently to pass your website in the rankings? You said a lot of "Russian links" are killing your website. Are these on Russian domains? If so, you can try to contact the webmasters of these sites and see what you can do. If you want, you could also use Google's Disavow tool. While you're doing all of this, I would also suggest doubling down on social media. Reach out to your customers and engage them. You might also want to refocus on content creation, especially content that is shareable. If you can get new leads through channels other than organic, that might be what you want to do. Maybe a good email campaign? Those are some early thoughts, but the most important thing you can do is remain calm. Starting a new website is a possibility, but I'd suggest doing the research and trying to focus on your initial customer base before you do anything.

    | TheeDigital
    0

  • No problem my friend. You are most welcome. Once the redirection is in place, please double check that it's indeed a 301 and nothing else. You can use any of the HTTP header status checkers available online, like web-sniffer.net or Screaming Frog. Best, Devanur Rafi
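    If you'd rather script the check than use an online tool, here's a minimal Python sketch (the helper names are illustrative) that reports the raw status code without following the redirect, so a 301 can be told apart from a 302:

    ```python
    # Minimal sketch: fetch a URL without following redirects and return the
    # raw HTTP status code the server sends back.
    import urllib.request
    from urllib.error import HTTPError

    class _NoRedirect(urllib.request.HTTPRedirectHandler):
        def redirect_request(self, req, fp, code, msg, headers, newurl):
            return None  # returning None stops urllib from following redirects

    def raw_status(url):
        opener = urllib.request.build_opener(_NoRedirect)
        try:
            with opener.open(url) as resp:
                return resp.status
        except HTTPError as e:
            return e.code  # redirect statuses (301, 302, ...) surface here
    ```

    A result of 301 confirms the permanent redirect; a 302 would mean the redirect is temporary and should be changed.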

    | Devanur-Rafi
    0

  • We had a sitewide 301 redirect for the subdomain. This was tested and worked fine, but Moz considered every page that was redirected a duplicate page content error. So then we simply deleted the subdomain on the host server. Our error count went down dramatically. Broken backlinks - which we found through Webmaster Tools - are still going to the subdomain, commercial.vigilantinc.com, and are broken because it no longer exists on our host server. I guess my question is, should we re-establish the subdomain with 301 redirects (which Moz apparently doesn't recognize) or live with the broken links to our site? Thanks, everyone, for responding.

    | KristyFord
    0

  • Thanks Lynn, I'll do a test and will let you know the outcome.

    | TruvoDirectories
    0

  • Thanks for the help, but upon looking into the issue further we've decided to host the sites locally.

    | theLotter
    0

  • The Stonebridge Dental Plus Page that ZD created is here: https://plus.google.com/104280204877579676759/about?gl=US&hl=en-US Notice that it is not a verified listing. The Plus Page I claimed on behalf of the client, which is verified (and which I can log in to), is here: https://plus.google.com/b/115007218006913673410/115007218006913673410/about It looks like ZD purchased a domain which forwards to the ZD doctor page, and then created a new Google Plus profile with my client's address and phone number. I might have to ask them for their contract with ZD and see if there is something in there I'm missing.

    | Czubmeister
    0

  • Hi Tim, Thank you for using sh404sef! We are working on a YouMoz post for Joomla 3 SEO. Your input is welcome. We have a couple of references regarding duplicate content & Joomla (see below): http://anything-digital.com/sh404sef/news/canonical-urls-and-joomla-seo.html http://anything-digital.com/sh404sef/news/ranking-factor-dilution-and-joomla-seo.html Thank you, Jess

    | AnythingDigital
    0

  • Great, thanks. Can't be clearer than that!

    | Pete4
    0