Category: Intermediate & Advanced SEO
Looking to level up your SEO techniques? Chat through more advanced approaches.
Block in robots.txt instead of using canonical?
With this info, I would go with robots.txt because, as you say, the benefit outweighs any potential loss, given the use of the pages and the absence of links. Thanks
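For illustration, blocking that kind of section in robots.txt (the paths below are hypothetical, not from the original question) might look like:

```text
# robots.txt — hypothetical example: block thin faceted/parameter pages
# outright instead of relying on rel="canonical"
User-agent: *
Disallow: /search/
Disallow: /*?sort=
```

Remember that a robots.txt disallow prevents crawling, not necessarily indexing, so it suits pages with no inbound links, as described above.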
| RobertFisher0
Pages getting into the Google index despite being blocked by robots.txt?
Oh, ok. If that's the case, please don't worry about those in the index. You can get them removed using the Remove URLs feature in your Webmaster Tools account.
| Devanur-Rafi0
Is it okay to copy and paste on-page content into the meta description tag?
If you feel that you are explaining the page as best you can in the meta description, then go for it. I think this is one of the most vital tags on a website: it brings people into your site.
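As a minimal sketch (the content and site are made up), the tag in question is simply:

```html
<!-- hypothetical example: a meta description that mirrors the page's own copy -->
<meta name="description" content="Hand-stitched leather wallets made to order. Free shipping on orders over $50.">
```

If the on-page copy already summarizes the page well, reusing it here is what the answer above is endorsing.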
| benjaminmarcinc1
Switching from .co to .com?
I agree with WilliamKammer. Our team religiously uses the Moz website migration guide when dealing with site transfers. At best, you can "minimize" the risk. Unfortunately, there's no real way to prevent the dip in traffic or to know when it will level out.
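For reference, the core of such a migration is a site-wide 301. A rough sketch for an Apache `.htaccess` on the old .co domain (the domain names are placeholders, and this assumes mod_rewrite is enabled) might be:

```apache
# Hypothetical example: redirect every URL on example.co to the
# same path on example.com with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?example\.co$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

Pairing this with a change-of-address notification in Webmaster Tools is the usual complement to the migration guide mentioned above.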
| Bryan_Loconto0
Ranking way worse than competitor even with better SEO metrics
Your competitor's domain is targeted directly at France. Look at their domain name: https://www.doctolib.fr/medecin-generaliste/paris Even if your site is based around a directory, I would add some content to that page about what you are trying to rank for. The Moz tools are just that, TOOLS. Their purpose is to point out any shortcomings that could be dragging you down. Let's make a short checklist of items to check:

1. You state that you have links, but where do they point? Do they link to the home page, or are your inbound links aimed at the pages and keywords you want to rank?
2. Your description (translated) reads "Find your General Practitioner in Paris, check available slots and make an appointment online". This could be worded a bit more convincingly, since it is most likely what shows up next to your link in search engine results.
3. Google is most likely looking at how traffic interacts with the site, i.e. time on page, clickthrough rate, etc. These are the metrics that the Moz tools will not show you, but that you can see in your GWT report. Look at your analytics and GWT profile to see if there are any weak areas where people leave.
4. Having more links isn't enough on its own. Where do they go? Do they make it as easy as possible to find a doctor and, once found, easy to contact them?
| David-Kley0
Loading Content Asynchronously for Page Speed Purposes?
I'm with Eric on this; it seems highly unlikely that loading just one block of content asynchronously is weighing down your pages. I would start by teaching your developers what the best practices are for SEO so that they can come up with better alternatives for these kinds of situations. That might pay off best in the end.
| Martijn_Scheijbeler0
After Ranking Drop Continue SEO or Focus on Improving User Experience Instead?
Hi Alan, First, a few things to consider:

Did your site actually get hit with a penalty? You talk about removing unnatural links. Was that just to be safe, or something else? If your first site revision changed site URLs and generated a lot of 404s without proper redirection, you'd get the exact result you're describing. Did you get new copy written for your first site revision? If so, and you didn't write it, take a few sentences from different pages and search for them, in quotes, on Google. Make sure they weren't plagiarized, because that would explain a drop in traffic, too.

All of that said, here's what I'd do:

1. **Definitely** work to improve engagement. Engagement is good for your business anyway, so even if it had zero effect on visits, you'd make existing visitors happier. And Google at least rewards you for the secondary effects of a great UX: more attention and citation, more positive reviews, fewer bounce-backs, etc. And yes, there's some evidence Google rewards great UX, whether deliberately or as a side effect.
2. Look for lost links and repair them. Use Open Site Explorer and get the Top Pages report. Look for all pages that respond with a 404. Put those pages back (if you had that page before), build a page at that location (if you never had that page), or do a 301 redirect from the page URL to a 'real' page (the easiest fix).
3. Check site performance. Did site load speed take a huge hit with the new design?
4. Look at your log files. Go all the way back to before the first site relaunch. Compare Googlebot activity on your site from that period to now. What's changed? Is Googlebot getting trapped somewhere? Has crawl traffic dropped?
5. Finally, I wouldn't engage in old-fashioned link building. It's a terrible idea for a site with a low DA. If you want to acquire citations, you're going to have to do it by making customers really happy, offering great information and content, and generally offering a great experience.

I hope this helps. There's no easy answer here. You're going to need to take a very strategic approach, rather than focus on a single tactic, if you're going to make this work. Ian
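For the lost-link repair step, the bulk of the work is mapping each 404ing URL to its replacement. A rough Python sketch (the helper name and URLs are made up for illustration) that turns such a mapping into `.htaccess` 301 rules:

```python
# Hypothetical sketch: turn a mapping of lost URLs (e.g. 404s found via a
# Top Pages report) into Apache RedirectPermanent (301) rules.
from urllib.parse import urlparse

def redirect_rules(mapping):
    """mapping: {old_url: new_url} -> list of .htaccess 301 rule lines."""
    rules = []
    for old, new in mapping.items():
        old_path = urlparse(old).path or "/"   # RedirectPermanent takes a path
        rules.append(f"RedirectPermanent {old_path} {new}")
    return rules

rules = redirect_rules({
    "https://example.com/old-page": "https://example.com/new-page",
})
print(rules[0])  # RedirectPermanent /old-page https://example.com/new-page
```

The output lines can be pasted into an `.htaccess` file, assuming an Apache host with mod_alias available.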
| wrttnwrd1
Has My Site Been Hit by Panda 4.0?
Were the 300 listing pages ever receiving traffic? If the answer is no, or if a significant number of them were never receiving any traffic, it might change what I think you should do. Poor title tags will hurt your visibility in lots of ways, but I would not personally tie the title tag strategy to Panda. Panda is a content algorithm: it seems to look for duplicate, near-duplicate, thin, or poor-quality content and then make sure that sites with those problems are not ranking.

If you think more broadly about it, you might ask why Google would want to take a whole site down in the rankings for thin content on a few (or many) pages with potentially low or no traffic. I think the reason they penalize the whole site is that they don't want webmasters producing this type of content. If they can get content creators to think twice before creating another 20 URLs about a given topic, then over the long haul their job becomes much easier: they can fight off spam more easily because it won't work.

I was very angry when Panda 4 rolled out and some sites I own got hit. However, I feel empowered now to correct the issue. My suggestion for you is to compare the URLs with their links and traffic. There should be some clear-cut low-quality stuff that you can noindex. On the pages that drive traffic, make sure you are providing deep, helpful content. It's hard to discuss everything you may need to do over email, but I think you are probably getting the idea. PM me if you want to chat more.
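The "compare URLs with their links and traffic" triage could be sketched in Python like this; the field names and thresholds are assumptions for illustration, not part of the original advice:

```python
# Hypothetical sketch: flag listing pages as noindex candidates when they are
# both thin and draw little traffic. Thresholds are illustrative assumptions.
def noindex_candidates(pages, min_words=250, min_visits=10):
    """pages: list of dicts with 'url', 'word_count', 'monthly_visits'."""
    return [p["url"] for p in pages
            if p["word_count"] < min_words and p["monthly_visits"] < min_visits]

pages = [
    {"url": "/listing/1", "word_count": 80, "monthly_visits": 0},
    {"url": "/guide/deep-article", "word_count": 1400, "monthly_visits": 300},
]
print(noindex_candidates(pages))  # ['/listing/1']
```

Pages the function flags would get a noindex tag; pages that drive traffic would instead get the deeper content the answer recommends.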
| bradwayland0
Is it bad practice to create pages that 404?
Yair, See the infographic on this page regarding rel=nofollow tags in links, and when you may want to consider using them. Specifically, see the part about User Generated Content: http://searchengineland.com/infographic-nofollow-tag-172157 However, Google can decide to crawl whatever they want to crawl, whether it is a nofollowed link, a link on a page with a nofollow meta tag, or a JavaScript link. If you really want to keep Google out of those portions of the site, you should use a robots.txt disallow statement, as I mentioned in your other thread, or use the X-Robots-Tag as described here.
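The X-Robots-Tag mentioned above is an HTTP response header rather than an on-page meta tag. As a sketch, on an Apache server with mod_headers enabled (the PDF file match is just an illustrative target), it might look like:

```apache
# Hypothetical example: send a noindex X-Robots-Tag header for all PDFs,
# which cannot carry an on-page robots meta tag
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</FilesMatch>
```

Unlike a robots.txt disallow, this lets Google crawl the files but tells it not to index them.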
| Everett0
Internal nofollows?
Hello Yair, I think you're fine with NOINDEX, FOLLOW robots meta tags on those pages and either rel="nofollow" in the links to them or using JavaScript instead. Keep in mind that Google is getting pretty good at parsing JavaScript, so they'll still crawl those pages (which means they'll still be using crawl budget), making it necessary to have the noindex tag on those pages. Provided those pages aren't already in Google's index, I would consider adding robots.txt disallow rules similar to the ones below...

Disallow: /user/*/favorites
Disallow: /user/*/posts
Disallow: /user/*/questions
Disallow: /user/*/friends

The * is a wildcard that should apply this to every profile. You may or may not have a /user/ folder, but I put it in there as an example.
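If you want to sanity-check which paths such wildcard rules would catch, note that Python's standard `urllib.robotparser` does not handle Google-style `*` wildcards, so this rough sketch uses `fnmatch` instead (the patterns and paths are illustrative):

```python
# Hypothetical sketch: check profile-page paths against Google-style
# wildcard disallow patterns using shell-style matching.
from fnmatch import fnmatch

DISALLOW_PATTERNS = [
    "/user/*/favorites",
    "/user/*/posts",
    "/user/*/questions",
    "/user/*/friends",
]

def is_blocked(path):
    return any(fnmatch(path, pat) for pat in DISALLOW_PATTERNS)

print(is_blocked("/user/yair/favorites"))  # True
print(is_blocked("/user/yair/profile"))    # False
```

This only approximates Google's matching (e.g. it doesn't model the `$` end anchor), so treat it as a quick check, not a full robots.txt parser.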
| Everett0
Duplicating a site on 2 different ccTLDs and using canonical
Bizarrely, I just answered quite a similar question to this about five minutes ago... Have you looked into the rel="alternate" tag option? Sometimes this is also referred to as the "hreflang" tag. You can place these on both the UK site and the .com, indicating that the UK site is "the same" but is targeted at UK customers only. This is basically canonicalisation with a geo-targeting twist: it negates the issue of duplicate content whilst reinforcing that the .co.uk is for UK audiences. More information on the tag is here: https://support.google.com/webmasters/answer/189077?hl=en

The .co.uk replacing the .com in UK SERPs won't be immediate, but this is a fairly safe option for rankings. Can you also use a JavaScript lightbox when a UK IP is detected on the .com site, explaining that UK customers have to purchase on the .co.uk and providing a link? It isn't good to automatically redirect based on IP, but a JS pop-up / lightbox will be ignored by search engines and will allow any remaining UK traffic on the .com to make its way to the appropriate website. Does this help? Cheers, Jane
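As a sketch, the hreflang annotations would sit in the `<head>` of the matching page on *both* domains (the domains and path are placeholders):

```html
<!-- hypothetical example: placed on both example.com/widgets and
     example.co.uk/widgets, telling Google which version serves which market -->
<link rel="alternate" hreflang="en-gb" href="https://www.example.co.uk/widgets" />
<link rel="alternate" hreflang="en-us" href="https://www.example.com/widgets" />
```

Each page references itself and its alternate, which is what signals "the same content, targeted at different countries".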
| JaneCopland0
New Section On Site Worth It?
I don't have any problem linking out if I know the site that I am linking to really, really well. I don't see anything wrong with your list of "SEO and Marketing Resources" if you want to recommend them to your clients. I am assuming that you would like to have content on your website that educates your visitors and showcases your expertise. If that is the goal, I would be concerned about finding articles on other websites and using them as the basis of short articles on my site. All that does is turn your website into a signpost that promotes other people.

I would rather spend a little extra time per article and write a library of articles on my own site that explain mostly evergreen topics. Why? If these resources are being prepared as a knowledge base for current and potential clients, then I would want to keep them focused on information that I produce and control rather than send readers out to other websites where my branding and expertise are lost. Using articles on other websites might seem like a time-saving effort that spares you from explaining the nitty-gritty - you just link to it. But it doesn't have the same impact as explaining the nitty-gritty on your own site and keeping your voice and your branding in the visitor's mind.

What happens when the other website deletes that article, or goes out of business, or starts publishing stuff that you don't agree with? This is going to happen eventually. Then all of the work that you put into that article is gone. Also, this isn't going to earn you likes, links, tweets and mentions. It is going to earn those things for the other guy. You are simply turning your website and your labor into an advertising effort for other people in your industry. Why not build a resource for yourself? Write a weekly or monthly original, stand-alone, evergreen article series that people can subscribe to, tweet about, mention, link to and like. That is how I would approach this.
| EGOL0
How would you suggest finding content topics for this site?
I was referring more to the content. You can write great, linkbait-worthy content about a new method of waxing your car, but if you're selling diaper covers, it's not going to help you. It's an extreme example, but I'm trying to say: make sure that you write content your target audience wants to read, not just content that will get links.
| KeriMorgret0
Noindex search pages?
I think you're possibly trying to solve a problem that you don't have! As long as you've got good information architecture and are submitting a dynamically updated sitemap, I don't think you need to worry about this. If you've got a blog, then sharing posts on Google+ can be a good way to get them indexed quickly.
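A "dynamically updated sitemap" just means regenerating the XML whenever the URL list changes. A minimal Python sketch (the function name and URLs are made up for illustration):

```python
# Hypothetical sketch: build a minimal XML sitemap from a list of URLs,
# suitable for regenerating whenever content is added or removed.
from xml.sax.saxutils import escape

def build_sitemap(urls):
    entries = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in urls)
    return ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
            f"{entries}\n"
            "</urlset>")

print(build_sitemap(["https://example.com/post-1"]))
```

The output would be served at something like /sitemap.xml and submitted in Webmaster Tools; per the sitemaps protocol, `<lastmod>` and other optional tags could be added per URL.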
| DougRoberts0
Reviews and Other Content in Tabs and SEO
Hi there, This isn't a problem, and it's quite a common way of presenting content on the "same page" as far as the source code goes, without having it all appear on page load. Where this would get you into trouble is if you were presenting a lot of content behind different tabs, or if that content were vastly different from the topic of the page that appears on page load. If you are using CSS to tab between the content, none of it is uncrawlable.
| JaneCopland0
Are ALL duplicate title tags bad??
Thanks for those answers; that's really useful. It sounds like this is not something to worry about too much, but it is something that's not ideal for the site's appearance in the search results!
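If you want to audit how widespread the duplication is, a quick Python sketch (the function name and page data are made up for illustration) that groups URLs by shared title:

```python
# Hypothetical sketch: find title tags that are shared by more than one URL,
# the kind of duplication discussed in this thread.
from collections import defaultdict

def duplicate_titles(pages):
    """pages: {url: title} -> {title: [urls]} for titles used more than once."""
    by_title = defaultdict(list)
    for url, title in pages.items():
        by_title[title].append(url)
    return {t: urls for t, urls in by_title.items() if len(urls) > 1}

pages = {
    "/red-widgets": "Widgets | Example",
    "/blue-widgets": "Widgets | Example",
    "/about": "About Us | Example",
}
print(duplicate_titles(pages))
```

Feeding in a crawl export would show which groups of pages present identically in the search results.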
| RG_SEO0