Category: Intermediate & Advanced SEO
Looking to level up your SEO techniques? Chat through more advanced approaches.
-
"Leeching" backlinks...yes or no?
Some website owners will question your motives and be suspicious if you ask them to replace an existing link to a high-authority site like Wikipedia. You might see better results if you suggest your link as an additional resource that supports what they already have.
| LauraSultan0 -
Quotery.com Suggestions?
I think either of those would be better. If you make the change, make sure you set up your 301s correctly.
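For reference, a minimal .htaccess sketch of what "doing your 301s correctly" usually means: one permanent redirect per moved URL, pointing at the closest new equivalent (the domain and paths below are placeholders, not the actual site's URLs):

```apache
# .htaccess (Apache mod_alias): one 301 per moved URL
# Map each old URL to its closest new equivalent rather than
# redirecting everything to the homepage.
Redirect 301 /old-page https://www.example.com/new-page
```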
| EcommerceSite0 -
Homepage Duplicates
Stephen, ahh I see. This is a bit of a different question. I think you are asking "why do my inner pages not rank, and why does only my homepage rank?" I think we should zoom out and look more at the concepts of keyword targeting and site architecture. This can get pretty involved and would be an entire blog post, but here are a few resources to check out first:

- http://moz.com/blog/visual-guide-to-keyword-targeting-onpage-optimization
- http://www.lunametrics.com/blog/2013/01/31/keyword-targeting-mistakes/
- https://blog.kissmetrics.com/site-structure-enhance-seo/

To rank for "film crew jobs los angeles" the site needs a really dedicated page about that specific topic, with information about that topic better than other websites out there. The site currently does not have one, it seems, so the homepage is ranking instead because Google does not have a better option for that keyword. This doesn't have much to do with _how_ the page is created (whether it's a CMS page, a category page, etc.); if the end result is a page that serves that search, you will have a much better chance of ranking it. Then you need to be sure you have an intuitive site architecture; this post is pretty good on that: http://www.seobook.com/getting-site-architecture-right
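As a rough sketch of what a "really dedicated page" looks like on-page (the URL, titles, and copy here are purely illustrative):

```html
<!-- Hypothetical dedicated landing page for "film crew jobs los angeles",
     e.g. served at /film-crew-jobs/los-angeles/ -->
<head>
  <title>Film Crew Jobs in Los Angeles | Example Site</title>
  <meta name="description" content="Browse current film crew job openings in Los Angeles, updated daily.">
</head>
<body>
  <h1>Film Crew Jobs in Los Angeles</h1>
  <!-- Substantive, unique content on the topic goes here -->
</body>
```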
| evolvingSEO0 -
Influence The Google Knowledge Box/Knowledge Graph
That is a pretty good example you have there of a slightly... unexpected... knowledge box! The knowledge box can draw data from a number of places: as Ron mentions, schema markup and Google Plus pages are used, as are other online sources like Wikipedia (which is the source in your example) and Freebase listings. Dr Pete has written a number of posts about this which might help (here, here and here, for example), and there is another good article on 'hacking' the knowledge graph here. The particular case you mention is a definition box driven from Wikipedia and might be pretty difficult to change; if you are looking to influence knowledge graph data about a company or an entity, the above links should help.
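For the company/entity case, the schema markup Ron mentioned is one of those inputs; a minimal JSON-LD sketch (the organization name and URLs are placeholders):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "https://www.example.com",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_Company",
    "https://plus.google.com/+ExampleCompany"
  ]
}
</script>
```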
| LynnPatchett0 -
Images Not Indexing? (Nudity Warning!) - Before & After Photos
Hi Britney Muller, what are you doing to get your images indexed well? I have the same problem... thanks
| juliodaraujo0 -
Disavow File Submission process?
I would go ahead and submit, as this can have an effect on the algorithmic part of the penalty. You should still try to have the links removed, especially if a manual penalty is in place, and record your steps; but if you know the links are bad, you might as well go ahead and disavow them.
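For reference, the disavow file is a plain UTF-8 text file with one entry per line: `domain:` lines disavow a whole domain, bare URLs disavow a single page, and `#` lines are comments. A sketch (the domains and dates below are made up):

```text
# Removal requested by email 2014-03-01 and 2014-03-15, no response
domain:spammy-directory.example.com
# Single page we could not get taken down
http://linkfarm.example.com/widgets/page1.html
```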
| TheeDigital0 -
Should We Remove Content Through Google Webmaster Tools?
As Donna pointed out, the delay between what you expect timeline-wise and what Google can 'do' is often longer than anyone would wish...
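Worth adding alongside the timing point: the URL removal tool in Webmaster Tools only hides pages temporarily, so permanent removal also needs a page-level signal such as a noindex tag (or a 404/410). A minimal sketch:

```html
<!-- On each page that should stay out of the index -->
<meta name="robots" content="noindex">
```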
| JVRudnick0 -
Couple questions: backlink bartering and getting backlinks in less developed markets.
Thanks Moosa. Great read.
| mack-ayache0 -
Unnecessary 301s?
I second what Ray says; having the rule in your .htaccess file is always good practice, especially to prevent the annoyance of seeing both versions in your Google Analytics. This question was answered in another Moz Q&A, and although it's from 2012 the responses are still good.
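A typical version of that .htaccess rule, forcing the www hostname with a single 301 (the domain is a placeholder):

```apache
# .htaccess (mod_rewrite): 301 non-www requests to the www hostname
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```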
| Joe.Robison0 -
Dealing with past events
Hello benseb, You mention that you have de-prioritized past events in the sitemap. You could go the nofollow route although this is a somewhat clumsy way to go about it. I think based on what you have described, your best bet is to leave it as is (after moving forward with the hint Matt Cutts dropped) rather than eliminating a load of content which is sending Google positive signals. My guess is that these positive signals overpower any negative signals that might be resulting from aging content. If everything has been properly indexed and current events are showing up, I wouldn't make any big alterations - why mess with a good thing? If you begin seeing drastic declines in traffic or user interaction, that might be the time to take a harder stance. For now though, let it be. Best of luck! Rob
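For context, "de-prioritized in the sitemap" typically looks something like this per-URL entry; the URL and values here are hypothetical:

```xml
<!-- sitemap.xml entry for a past event: low priority, stale lastmod -->
<url>
  <loc>http://www.example.com/events/2013-summer-gala/</loc>
  <lastmod>2013-07-15</lastmod>
  <changefreq>never</changefreq>
  <priority>0.1</priority>
</url>
```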
| Toddfoster0 -
Can I Use Multiple rel="alternate" Tags on Multiple Domains With the Same Language?
Very helpful, thanks for the advice Michael!
| LampsPlus0 -
Stub category pages (dupe warning)
I think there is a slightly bigger question here. Rather than "How can I stop Moz flagging these up as duplicates?" you might want to ask "Are these duplicate pages harming me?" Thin pages, particularly those ranking on desirable terms, are something I try hard to avoid. They send pretty poor quality signals to Google and create poor user experience signals as well. If there is a term I want to rank for, I ensure that pages are strong enough to deserve ranking before letting them get indexed. It can be painful to deindex a page that ranks, but if those pages are giving off bad signals, that could be your best chance of long-term ranking success.

A compromise might be to fill them out in the meantime. How effective this might be will really depend on your niche and your website. Lots of stores do this by just adding a load of low-value text to the page, but a better approach is to put something useful there until the products arrive. Do this right and you could even be building links into those pages before the products arrive.

One example of this that I have done in the past is to build out a great coming-soon page that featured a competition to win the item when it launched. As well as ensuring that there was a page worthy of ranking (particularly against the competition, who were using stub pages!), it brought some other key advantages:

- The competition was used to build links from related sites
- User experience was great. People hung about, watched the video and filled in the entry form
- It got shared (bonus prize-draw entries for sharing!)
- When the product hit the shelves we already had a mailing list of interested customers

That's fairly involved, so it won't work for everything, but the principles are sound. If you were Google, which would you want to rank? That, or an empty page?
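Keeping a stub out of the index until it deserves to rank is a one-line tag; a minimal sketch (remove it once the page is built out):

```html
<!-- On stub/coming-soon pages until they have real content -->
<meta name="robots" content="noindex,follow">
```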
| matbennett0 -
Dilemma about "images" folder in robots.txt
Yup, my images send me traffic from Google Images on most of my sites, and attractive images attract hotlinks as well. At the moment some people host their images on a different domain (a CDN) and are still being credited for the images, but I haven't tried that myself, i.e. I don't know if they've set some "ownership" somewhere and somehow.
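If Google Images traffic matters, the safe robots.txt is simply one that never blocks the images folder; a sketch (folder name assumed to be /images/):

```text
# robots.txt -- leave everything crawlable, including /images/
# Adding "Disallow: /images/" is what would cut off Google Images traffic.
User-agent: *
Disallow:
```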
| dancape1 -
Static looking URL - Best practices?
Really, I think people have gotten themselves all twisted up unnecessarily over dynamic URLs and hiding the fact that they're dynamic. If you're dealing with a URL that really is dynamic, I'd stick with the ? & = notation that's pretty standard for this sort of thing. In my experience, Google sees ANY of those characters as word separators, and I'm not really seeing any downside in terms of ranking when using those terms as traditional parameters, e.g.:

www.homes.com/listings/ca/san-francisco/?q=single-family-home&b=3-bedrooms&t=2-bathrooms&u=swimming-pool-garden-wood-exterior

I'd be careful with using a "+" sign if you go that route, as various conversions from text to URL-safe to HTML-encoded etc. will replace spaces with + signs... and if something is un-encoding that, you might end up with spaces there.

FYI, where this all came from was URLs like this:

www.homes.com/showproperty.asp?pid=115235423&region=ABX&type=723

In THAT case, those parameters (which tend to be database record identifiers) are NOT of use to Google in terms of relevance or ranking. But the English parameters in my example further up ARE useful, as they may match some of the query terms.
| MichaelC-150220 -
SEO Concerns From Moving Mobile M Dot site to Responsive Version?
Make the old m dot URLs 301 redirect to the responsive version (the new pages). That'll take care of users landing on the m dot pages until Google removes those from the index, and will transfer over any link juice the m dot pages have gathered up (although that should have already happened from rel=canonicals on your m dot pages pointing at the desktop versions...but, if you missed any...).
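A minimal .htaccess sketch of that redirect, placed where the m dot host is served (hostnames are placeholders); each m dot path 301s to the same path on the desktop host:

```apache
# .htaccess: 301 every m-dot URL to the same path on the responsive site
RewriteEngine On
RewriteCond %{HTTP_HOST} ^m\.example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```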
| MichaelC-150220 -
SEO Impact of High Volume Vertical and Horizontal Internal Linking
No, keep doing it the way you're doing it. That's perfectly good link juice flowing between those pages. Breadcrumbs are a nice way to communicate the hierarchy to Google--not because they're breadcrumbs, but simply because of their nature: all pages at each level contribute link juice back up to each of its ancestor pages. A child page has the least internal links; its parent has more; its grandparent even more; etc.
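A plain HTML sketch of such a breadcrumb trail; each level links back up the hierarchy, so every child page passes link juice to all of its ancestors (the category names are placeholders):

```html
<!-- Breadcrumb: child -> parent -> grandparent -> home -->
<nav>
  <a href="/">Home</a> &gt;
  <a href="/widgets/">Widgets</a> &gt;
  <a href="/widgets/blue/">Blue Widgets</a> &gt;
  Small Blue Widget
</nav>
```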
| MichaelC-150222 -
URL Re-Writes & HTTPS: Link juice loss from 301s?
Thanks all... much appreciated! Looking at the examples below, does anyone think this move could result in a negative effect?

**From:** http://www.xyzwidgets.com/widgets/commercial-widgets/small_blue_widget.htm
**To:** https://www.xyzwidgets.com/small-blue-widget

**From:** http://www.xyzwidgets.com/info/videos/general/what-are-widgets.htm
**To:** https://www.xyzwidgets.com/edu/what-are-widgets
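One thing to watch with a move like this is redirect chains: each old http URL should 301 straight to its final https URL in a single hop, rather than http-to-https and then old-path-to-new-path. A sketch using the example URLs above (rule details assumed, not taken from the actual site's config):

```apache
# .htaccess: send each old URL straight to its final https URL (one hop)
RewriteEngine On
RewriteRule ^widgets/commercial-widgets/small_blue_widget\.htm$ https://www.xyzwidgets.com/small-blue-widget [R=301,L]
RewriteRule ^info/videos/general/what-are-widgets\.htm$ https://www.xyzwidgets.com/edu/what-are-widgets [R=301,L]
```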
| TheDude0 -
Website gone from PR 2 to PR 0
Just wanted to add that Google announced that PageRank will not be updated (Source), and it's been theorized that the last update (Dec 2013) was a mistake. I agree with Andreas that using alternative metrics is the way to go. As for the why: it could be any number of things, from fake PR to Google dropping PR to lost links, etc. Either way it makes no difference, as the metric is superfluous now.
| GPainter0 -
Wordpress Comments Pagination
If you are hitting the 200-300 comments-per-page mark, it might be better to get into pagination from a sheer UX perspective: depending on their dedication to the subject matter, it is unlikely that the average user will want to sift through so many comments after reading your content. I don't know what audience you are targeting, but it seems to me that 25-50 comments per page would be sufficient to capture the essence of most commenting sections regardless of topic, and would also help with loading times. Ray makes a great point about the canonical tag and rel tags, though; as with everything, your strategy has to account for your specific situation and marketing approach.
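If you do paginate, one common way to handle the canonical tag Ray mentioned is to point each comment page back at the main post (the URLs here are placeholders):

```html
<!-- In the <head> of /sample-post/comment-page-2/ -->
<link rel="canonical" href="http://www.example.com/sample-post/">
```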
| Toddfoster0