Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: On-Page / Site Optimization

Explore on-page optimization and its role in a larger SEO strategy.


  • I agree with the above response. Linking to the article directly is the best method. In addition, consider what keywords people might use to search for it and link to the article using those keywords (called the anchor text). You can link to it not only from your own site but also get other blogs to link to it. You can also create unique articles and submit them to article sites with your anchor text embedded in them, pointing back to your great blog post. Word on the street is that Google loves unique content, and that is true.
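    To make "anchor text" concrete (the URL and keyword phrase here are made-up examples): it's simply the visible, clickable text of a normal link.

    ```html
    <!-- Hypothetical example: the keyword phrase serves as the anchor text -->
    <a href="http://www.example.com/blog/great-post">best apple gift baskets</a>
    ```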

    | applesofgold
    0

  • I'm not sure if I can solve all of your issues, but here are a few thoughts: For #2, Google is probably just indexing the second page of the lists of posts on your site. You probably don't want these pages indexed (and, thus, the meta description becomes irrelevant). This might be helpful for eliminating that: http://www.johnfdoherty.com/noindex-organize-categories-tags-in-wordpress/ Same deal with archive pages and tag pages: just don't allow Google to index them. I'm not sure if All-in-One SEO does this, but I believe there are other WP plugins out there that will, too.
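    For reference, the plugins mentioned above generally work by emitting a robots meta tag on those pages; a hand-rolled version would look something like this (placement on category/tag/archive templates is the assumption here):

    ```html
    <!-- In the <head> of a paginated/category/tag/archive page you don't want indexed -->
    <meta name="robots" content="noindex, follow">
    ```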

    | jeffreytrull1
    0

  • Sorry, by "bigger problems" I just meant the potential link-farm. The nofollow will remove the SEO risk - you'll still lose a little link-juice to those links, but you won't get penalized down the road for having them. Of course, you won't gain any SEO value from the cross-linking either. At this point, though, I think that's inevitable. The risk is greater than the reward from cross-linking this many domains. Any other ways to block the links are going to look more suspicious to Google than nofollow (including iFrames). Any I can think of would be best avoided in this scenario. Any way you can contextually cross-link would create less SEO risk and potentially let you get some ranking value out of the connections. That's why I suggested links at the job listing level. I think that might benefit users a bit more, too. Even then, you don't want to go overboard.

    | Dr-Pete
    0

  • I'm not seeing that Google is currently indexing either of these pages, so they may be too deep or duplicated in other ways. Pagination is a tough issue, but in general pages 2+ have little or no search value (and, post-Panda, can actually harm you). I would strongly recommend NOT using a canonical tag pointing to page 1 - Google generally advises against this. You can use rel=prev/next, although it's a bit tough to implement and isn't honored by Bing. Generally, I'd advise one of two things: (1) META NOINDEX, FOLLOW pages 2, 3, etc. - they really have no SEO value. (2) If you have a View All page, link to it and rel-canonical to View All. This seems to be accepted by Google, but then the larger page will rank. Generally, I find (1) easier and pretty effective. Sorry - I just saw Nakul's comment and didn't realize you already have canonical tags in place. While it's not the preferred solution, since it's already there and seems to be keeping these pages out of the index, I'd probably leave it alone. It doesn't look like Google is indexing these pages at all right now, though, which you may need to explore in more depth.
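    To make the two options concrete (the URLs below are hypothetical), option (1) is a robots meta tag on pages 2+, while rel=prev/next declares the series explicitly:

    ```html
    <!-- Option (1): in the <head> of page 2, 3, etc. -->
    <meta name="robots" content="noindex, follow">

    <!-- rel=prev/next alternative: in the <head> of page 2 of a 3-page series -->
    <link rel="prev" href="http://www.example.com/category?page=1">
    <link rel="next" href="http://www.example.com/category?page=3">
    ```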

    | Dr-Pete
    0

  • Thanks Peter, I've added canonical tags pointing to the URLs that are actually being viewed by users, i.e. the simple URL from the rewrite rule mentioned above. So now the same URL is being used for both users and search engine bots.

    | shaz_lhr
    0

  • Thanks Cyrus, I didn't realise you could do that.

    | yours2share
    0

  • It's a complicated issue, but adding 50K variations to 27K product pages can definitely be dangerous, especially post-Panda. At best, you're diluting your index and your ranking ability. At worst, Google could actually start de-indexing or at least devaluing core pages. Personally, I don't think the long-tail gains are worth the risk - these kinds of pages were behind the "May Day" update in 2010, and Panda continued that core philosophy. Google considers it a low-value tactic in 2012 - of that, I have no doubt at all. Of course, it does depend on how you use them. To have custom landers for PPC and not index them is perfectly fine, for example. If you're tripling your indexed page count with thin content just to target SEO keywords, though, you're taking a very real risk, IMO.

    | Dr-Pete
    0

  • Ideally, you'd fix the crawl path, but that may be tricky (unless they've patched the CMS). You could add the canonical tag to just the "page=1" version, but admittedly that's a bit code-intensive. An alternate idea that's fairly Google-friendly: you could add a "View All" version and then point the canonical on all search pages to that version. Especially since "all" is only 2 pages, that could work well in your case, and you wouldn't have to worry about all the variants or search results not getting crawled.
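    As a sketch of the View All approach (the URLs are hypothetical), every paginated variant carries the same canonical tag pointing at the single combined page:

    ```html
    <!-- In the <head> of each paginated search page (?page=1, ?page=2, etc.) -->
    <link rel="canonical" href="http://www.example.com/search/widgets/view-all">
    ```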

    | Dr-Pete
    0

  • You can use an expert, or you can just use the tools that SEOmoz provides and you will see all of your warnings/errors.

    | SEODinosaur
    0

  • Yes, now it should work. If not, write to me and I can help you.

    | Naghirniac
    0

  • Hey Jason, PageRank does not get you customers, buddy, and toolbar PageRank is horribly out of date if that's what you're worrying about. You are not even moving the page to another domain, so this really should be a lightweight move that will be handled well by a 301 redirect. It may be worth doing an inventory of the links to the page in Open Site Explorer and updating any that you see as valuable, along with any others you can easily change, but you should not have a major problem with this kind of change. All the best, Marcus

    | Marcus_Miller
    0

  • Hi Samuel, Once in a while there's a rare miscommunication between the SEOmoz crawler, rogerbot, and your web server. This usually happens when roger crawls a bit faster than your site can handle, so your site serves up what are essentially empty pages. Roger reads these pages as blank and records no title tag, meta description, or links on the page. If the title tag is actually there, then it's nothing to worry about. One way to address this in the future is to use a crawl-delay directive in your robots.txt file, which will slow roger down. To do this, place the following lines in your robots.txt file: User-agent: rogerbot Crawl-delay: 5 This will make roger wait at least 5 seconds before crawling each page. Hope this helps. Best of luck with your SEO!
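    In robots.txt form, the two directives described above look like this:

    ```text
    User-agent: rogerbot
    Crawl-delay: 5
    ```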

    | Cyrus-Shepard
    0

  • Hi, thanks for your reply. I'm afraid the number of indexed URLs in the widget is also 0. I think the best thing will be for us to wait a few days and see what happens. What do you think? Thanks!

    | gerardoH
    0

  • If you have the canonical tag in your page (www.mysite.com/acategory/niceproduct.html), then you don't need to bother making the title unique, as Google will treat the set canonical page as the ruler over the duplicates and use all of that information.
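    Using the URL from the post above, the tag in question is a single line in the page's head:

    ```html
    <!-- In the <head> of each duplicated variant, pointing at the preferred URL -->
    <link rel="canonical" href="http://www.mysite.com/acategory/niceproduct.html">
    ```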

    | Lantec
    0

  • Joshua, in your head section you have some errors, and cleaning them up will get your code to validate. At the end of each of those meta and link elements you need the closing /, which is not there. So instead of ending them all with > you end them all with /> instead. That will fix a lot. Also, you have a div in your head section, which doesn't belong - you can't have a div in the head section, so just delete it. You also have a
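    To illustrate the self-closing fix (assuming an XHTML doctype, where void elements must be self-closed; the attribute values here are placeholders):

    ```html
    <!-- Fails XHTML validation: no closing slash -->
    <meta name="description" content="...">
    <link rel="stylesheet" href="style.css">

    <!-- Valid: note the /> at the end of each element -->
    <meta name="description" content="..." />
    <link rel="stylesheet" href="style.css" />
    ```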

    | DanDeceuster
    0

  • Bots tend to see the world as text-only (i.e. more like how Lynx renders the web). This means the formatting tags mostly get stripped out or ignored. As much as everyone promotes their design coding schemes (tables vs. divs vs. paragraphs), I've never seen a benefit shown for any particular one. Remember, CMS systems make up a good portion of the web. The only potential downside would be on long pages: bots tend to index the first 100k and ignore the rest. I don't know for sure whether that limit includes the HTML markup or not, but I would always err on the side of cleaner HTML.

    | Highland
    0