Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • My advice is to go easy with robots.txt--it's a bit like dynamite: powerful, but it can take your leg (or your entire website) off. I like this checker: http://tool.motoricerca.info/robots-checker.phtml. If you look OK after running that checker, then use the built-in Google one. Note that the original robots.txt standard doesn't include wildcards, and support for them varies by crawler. That apparently doesn't stop a ton of people from using wildcards in their files without testing (often to no effect!). Another reason to avoid Disallow in robots.txt is that if you disallow the engines from looking at a page's contents, you're ALSO cutting off the link juice that might have flowed to the other pages it links to. So let's say you have 100 pages on your site that you're currently blocking with Disallow in robots.txt. If instead you put a meta robots "noindex,follow" on each of those pages, then every page linked to from those 100 pages (i.e. everything in your main menu) would get an extra 100 internal links' worth of link juice.
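    For example, the two approaches look roughly like this (the /old-promo/ path is just a placeholder, not your actual setup):

      # robots.txt approach: crawlers never fetch the blocked pages,
      # so any link juice those pages could pass goes nowhere
      User-agent: *
      Disallow: /old-promo/

      <!-- meta robots approach: put this in the <head> of each page instead; -->
      <!-- the page stays out of the index, but its links are still followed -->
      <meta name="robots" content="noindex,follow">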

    | MichaelC-15022
    0

  • It's quite possible that at one point there was a link there, because the page rendered for some reason. I would crawl the site yourself using a crawler (there are several available) to make sure the page isn't reachable from, say, a bad link on the site. Check archive.org to see whether the page existed at one time. I would also take another look at the page's server header to see if the site is returning a 404 error or a "200 OK" along with a "page not found" message; it's possible that the page doesn't exist but delivers a "200 OK" server header anyway. Another possibility is that it's in your sitemap.xml file. When in doubt, if the page doesn't exist, I would mark it as fixed in Google Webmaster Tools and watch whether it comes up again. If it doesn't come up again as an error, then I wouldn't worry too much about it.
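    If you want to check the server header yourself, here's a quick sketch in Python using the requests library (the URL is just a placeholder):

      import requests

      # A HEAD request shows the status code without downloading the page body
      resp = requests.head("https://www.example.com/some-missing-page", allow_redirects=False)
      print(resp.status_code)  # a true "page not found" should print 404, not 200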

    | GlobeRunner
    0

  • I wish I had a good answer. When Google takes liberties with snippets, it's not always easy to sort out why. The first stop is usually making sure the query is in the META description and the description isn't too long, and you've got both of those covered. Two long shots, but easy to try: (1) Add the NOODP meta tag (shown below). Sometimes it prevents not only descriptions from the Open Directory Project, but also Google taking liberties. Sometimes. (2) Consider pulling your domain name out of the META description. You're saying "Myrtle Beach Hotels" in both the title and the META description, and it's possible that looks very slightly spammy to Google. If you can, make the description slightly more natural - the length is about right. Again, these are in no way guaranteed, but they're easy to try. I don't think tweaking the structure of the page is going to help. It's not Google recognizing the description that's the problem - it's why they choose to rewrite it. Unfortunately, we don't have a good handle on the why, at least in some cases.
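    For reference, the two tags look something like this (the description text here is only an illustration, not a suggestion for your page):

      <meta name="robots" content="noodp">
      <meta name="description" content="Compare Myrtle Beach hotels, check rates and availability, and book online.">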

    | Dr-Pete
    0

  • Hi Claudio, you've received some great tips here. Did any responses help answer your question? If so, please mark them as "good answers". Either way, we'd love an update. Thanks! Christy

    | Christy-Correll
    0

  • Hey Samuel, I feel like this confirms some of the thoughts that I had. The different terms are each searched, so I think it will be best to leave them as individual terms. Thanks for your contribution!

    | evan89
    0

  • This was exactly what I was looking for. Thank you very much, you have really helped me out.

    | FireMountainGems
    0

  • I'm going to disagree a bit with the other commenters (respectfully) and say that - it depends. First off, you said it's an algorithmic penalty, and that can really follow a wide range of timelines. Let's say you got hit by a Penguin update - you'd have to wait until the next data update, even if you do everything right, which can take months. I think "weeks" is very situational and may be optimistic. The other big factor to consider - what does your link profile look like outside of this? If you have 1,500 toxic links from a handful of domains, and they're part of a profile with 150,000 natural links from thousands of decent-to-good domains, then definitely don't write this domain off. You've got a ton of assets to lose, and you can't just 301-redirect your way out of this (you'd have to start over). You'd also be losing social, direct traffic, and potentially a lot of other things. On the other hand, let's say you have 1,550 links, and 1,500 of them are toxic. At this point, Google's view of your site may be so dim that, at best, you'll take weeks to get the disavow processed and then effectively be left at zero (or possibly -1). If that's the case, then I think starting over is a much different equation and possibly even faster. It also depends a lot on the strength of your domain and your other branding efforts. Changing names isn't something to take lightly, if you've built a name. On the other hand, if you've slapped up a partial-match domain (no offense intended) and part of your problem is that you've built keyword-loaded anchor text around that PMD, then cutting the domain loose could actually help you. This isn't a decision any of us can or should make for you in Q&A, honestly.
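    If you do end up filing a disavow, for reference it's just a plain text file along these lines (the domains and URL here are placeholders):

      # Toxic links we requested removal for but got no response
      domain:spammy-directory-example.com
      domain:link-network-example.net
      http://forum-example.org/thread/12345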

    | Dr-Pete
    0

  • Great, thanks for that, donford. I've noticed a well-ranked competitor is using 302 redirects to do what I'm after. Would this benefit me at all?

    | Whittie
    0

  • Hi Andy, Thank you for your response. I was also thinking the same, but the business directions page is currently driving a lot of traffic to the site. I am worried that if I noindex or nofollow this page, I may lose a ton of traffic. I was also looking at Yelp's setup to see if they use either of these, but I cannot see that they do, unless of course they are blocking it using robots.txt. Would you say these are the only options? Thanks, Neil

    | zodiothailand
    0

  • This question is over three years old, and the person who made this reply is no longer a Pro member. If you have a question on a similar subject, it's probably better to start a new thread after all this time. Thanks!

    | KeriMorgret
    0

  • Hey Khi5 - I just took a closer look at your webpage, as well as the related questions that you've asked before. I think an even bigger problem than "duplicate content" is "thin content". The main body of your page is 56 words, when the general rule of thumb is 300+ words of content. To answer you more specifically: no, I don't believe that search engines have the ability to identify very similar content, because they go by keyword. But even if the search engines DON'T categorize the content as duplicate, the articles are all competing with each other for the same keywords, in the same space. If you're trying to focus on "Honolulu" vs. "Waikiki" vs. some other neighborhood, then your pages also need many more repeats of the keywords you're trying to win. If the bulk of each page is unique (b/c you're writing about Honolulu as a category vs. Waikiki as a specific neighborhood), then you don't have to worry about duplicate content; most of your content is unique. tl;dr: 300+ words, repeat the desired exact-match keywords several times on a page, and yes, create unique content to make the pages more unique and specific. Hope that helps more.

    | AndrewAtMGXCopy
    0

  • If it is just one thread on a forum that was flooded with bad links, you could delete that thread. Make sure that any traffic to the associated URLs gets a true 404 response. My understanding is that doing so shows that you did not want those incoming links and that you are not gaining any advantage from them. If you do another Reconsideration Request, you should probably tell them that the targeted page was deleted.

    | GregB123
    0

  • This is a strange thing for an SEO company to say, because it's still widely accepted that good links pass PageRank / authority and add to the likelihood that a site will rank well for its chosen keywords. "Link building" itself might not be a term that fills Google with joy, but it's still a valuable way to improve a site's rankings and Google knows it. The truth lies somewhere in the middle if you want to stay white-hat: you need links, but you should acquire them in a way that does not constitute buying or being manipulative (again, if you want to stay to the letter of Google's law). The activities they mention are fairly standard "link building" tactics though, and something like "social bookmarking" was a type of link building that lost popularity due to being ineffective nearly 10 years ago. I would have doubts about their SEO chops if they cite both link building as being out of bounds, and social bookmarking as being a good tactic!

    | JaneCopland
    1

  • There's no single answer here; however, the general consensus is that it really depends on several factors. Most pages can only truly be optimized for a couple of high-value phrases, so if you have too many phrases you want that single page to rank for, that's a tall order. If you go too divergent in a single page's topical focus, that makes more of a mess due to topical dilution, weakening the primary phrase focus for that page. And if you force users to scroll forever (not just due to HTML5 / fluid design), that can be frustrating for readers on several levels. That's made worse by the fact that most sites that use a one-page design tend to be one-hit-wonder, magic-product sales-pitch type sites, and thus reputation is an issue due to association with those for some visitors. Those are just a few of the reasons one-page design is not highly recommended, both from an SEO and a User Experience perspective. As for Google being able to figure out content referenced by hash fragments (#) - just like every other thing their algorithms attempt to figure out, my recommendations to clients always state "don't rely on that": algorithms are inherently flawed to one degree or another, so figuring out JavaScript, AJAX, or Flash is a crap-shoot. Google needs multiple signals to help it figure out topical focus. With only one URL, you lose the page Title, URL, H1, and other related signals, which work best when there's one of each for every separate main topic. Sure, with HTML5 you are "allowed" to have multiple H1 tags on a page, yet I've seen that confuse Google's algorithms. It's just not wise to tempt the "formulaic attempt" process.
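    To illustrate what you give up with a single URL (the site name, URLs, and section names here are made up):

      <!-- one-page design: every topic shares one URL, one title, one set of signals -->
      <!-- https://www.example.com/#services   https://www.example.com/#pricing -->
      <title>Acme Widgets</title>
      <section id="services"><h2>Services</h2> ... </section>
      <section id="pricing"><h2>Pricing</h2> ... </section>

      <!-- separate pages: each topic gets its own URL, title, and H1 to signal its focus -->
      <!-- https://www.example.com/services/ -->
      <title>Widget Services | Acme Widgets</title>
      <h1>Widget Services</h1>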

    | AlanBleiweiss
    0

  • Thanks Andy, makes sense.

    | NewspaperArchive
    0

  • I feel we are missing some information here. For example, on our site we have set a canonical on the pages with query parameters. We have also specified these parameters, with a representative URL, in Google Webmaster Tools (URL Parameters). Even after this we received the message "Googlebot found an extremely high number of URLs on your site". The surprising thing is that these parameters have existed on the site for a long time, and the total URL count has been going down, yet Google only started sending us this message in Feb 2014. It seems there has been some algorithmic change, so some additional conditions not highlighted in this thread may need to be taken care of. Not sure what.
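    To be clear, by "a canonical" I mean a tag like this on the parameterized versions (the URLs here are placeholders):

      <!-- on https://www.example.com/shoes?color=red&sort=price -->
      <link rel="canonical" href="https://www.example.com/shoes">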

    | Myntra
    0

  • I'd probably go this route for a simple solve. I believe you would want to remove or disable the current handicapped markup.

    | sparkeeey
    1

  • Hey Duncan, There are a few good ones, but generally when it comes to using a plugin for redirection, I go with the Redirection plugin (I know, clever name). It's the most popular one, with 1.5 million downloads, and for good reason: it's definitely a feature-filled plugin. Here are some neat things it does:
    - 404 error monitoring: captures a log of 404 errors and allows you to easily map these to 301 redirects
    - Custom "pass-through" redirections, allowing you to pass a URL through to another page, file, or website
    - Full logs for all redirected URLs
    - Redirection methods: redirect based upon login status, redirect to random pages, redirect based upon the referrer
    - Automatically adds a 301 redirection when a post's URL changes
    - Manually add 301, 302, and 307 redirections for a WordPress post, or for any other file
    - Full regular expression support
    - Apache .htaccess is not required: works entirely inside WordPress
    - Redirects index.php, index.html, and index.htm access
    - Redirection statistics telling you how many times a redirection has occurred, when it last happened, who tried to do it, and where they found your URL
    Those are some great features. One thing to note: a plugin with so many features can interfere with some of your other plugins, so test accordingly. Hope this helps!
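    Just so it's clear what those 301s amount to, the manual equivalent of one rule in Apache .htaccess would look something like this (the paths and domain are placeholders); the plugin simply manages rules like this for you from inside WordPress:

      Redirect 301 /old-page/ https://www.example.com/new-page/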

    | vmialik
    0