Category: Intermediate & Advanced SEO
Looking to level up your SEO techniques? Chat through more advanced approaches.
-
Duplicate Meta Data from HTTP and HTTPS
I have moved my hosting (because of a lot of problems), but on the new host the HTTPS version is not working. Google is indexing my blog http://www.codefear.com under the HTTPS version too, and my blog traffic is continuously dropping, I think because of this duplicate content: one copy on the HTTP version and another on the HTTPS version. Please help me overcome this problem. Also, after the duplicate issue is solved, do I need to file a reconsideration request with Google? Thank you
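A common way to consolidate the two versions, assuming the blog runs on an Apache host, is a site-wide 301 in .htaccess that sends every HTTPS request to its HTTP counterpart (a sketch only, not tested against this site):

    # Sketch: force every HTTPS request back to the HTTP version (assumes Apache + mod_rewrite,
    # and that the HTTPS listener at least answers so the redirect can be served)
    RewriteEngine On
    RewriteCond %{HTTPS} on
    RewriteRule ^(.*)$ http://www.codefear.com/$1 [R=301,L]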
| RaviAhuja0 -
Can URLs blocked with robots.txt hurt your site?
90% no. First of all, check whether Google has indexed them; if not, your robots.txt should do the job. However, I would reinforce that by making sure those URLs are out of your sitemap file, and make sure your robots.txt disallows apply to ALL user agents (*), not just Googlebot, for example. Google's duplicate-content policies are tough, but Google will always respect simple directives such as robots.txt. I had a case in the past where a customer had a dedicated IP and Google somehow found it, so you could see both the domain's pages and the IP's pages, both identical. We simply added an .htaccess rule to point the IP requests to the domain, and even though the situation had been like that for a long time, it doesn't seem to have affected them. In theory Google penalizes duplication, but not in cases like this; it is a matter of behavior. Regards!
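To illustrate both points, a sketch (example.com, the /duplicate-section/ path, and the IP are placeholders; the rewrite assumes Apache with mod_rewrite):

    # robots.txt: disallow for ALL user agents, not just Googlebot
    User-agent: *
    Disallow: /duplicate-section/

    # .htaccess: 301 any request that arrives via the bare IP over to the domain
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^203\.0\.113\.10$
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]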
| workzentre0 -
I was hit bad by Penguin on 4-24-12.
A 301 redirect to a new domain likely wouldn't help; you'd be bringing the tainted link juice with you, assuming links are the issue. Most 'experts' think you won't recover from Penguin until the next Penguin update. It's also unclear to me, if this agency 'kept sending automated links' and that process is still ongoing, whether you would have been cleared by any update at all. Bear in mind Penguin isn't just about links. Changing domain might be a valid option, but it does depend on the site itself: its authority, the number of spammy links you still have, etc. Not that it's much comfort, but I'd also say, remember Google isn't the only source of traffic.
| AndyMacLean0 -
Malicious site pointed A-Record to my IP, Google Indexed
Yes, sorry, Fetch as Google: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=158587
| Cyrus-Shepard0 -
Question about putting high traffic keywords in my Primary navigation menu.
Good afternoon Jeremy, and it's nice to meet you! Thank you for your response to my question, but I feel I may have misled you with my screenshot, since I had not added the sidebar yet, so I may have sent the wrong signal with the partially illustrated intent. Sorry about that.

Jeremy, I agree with you that the existing navigation menu categories are very much to the point: they are all highly searched keywords in my business, and there would be no confusion for my potential customers in finding exactly what they need. As it currently stands, my site makes very clear what I do, both to the end user and to the search engines.

But back to my specific question about the existing "Computer Repair" category in the main nav menu versus using "Computer Service" in its place. I have a couple of legitimate concerns here. If I use "Computer Repair" as one of the main themes in the silo (main nav menu), I will be forced to use "Desktop Repair" to classify computer repair in my sidebar navigation, and that is a problem for a few reasons.

1st problem: The majority of people who need computer repair do indeed have a desktop computer. However, when people think of getting their desktop computer repaired, they view it as just "computer repair" and DO NOT search for "repair desktop computer", because they DO NOT classify their desktop the way they would classify laptop repair, tablet repair, or phone repair. Historically a desktop was just "a computer", and it was not until laptops, netbooks, tablets, etc. came along that there was any need for an actual classification system. Case in point: even today, 95% of the world thinks of and searches for "computer repair" when their desktop is broken, has a virus, or whatever the case may be. This is backed up by numbers taken directly from the Google Keyword Tool: "Computer Repair" = 823,000 monthly local searches; "Desktop Repair" = 12,100 monthly local searches.

2nd problem: "Desktop repair" gets only 12,100 monthly local searches, and its related derivative searches get about the same or, in most cases, significantly fewer queries than that very lame 12,100.

3rd problem: There is a potential semantic URL structure problem (it may look spammy), considering that I geo-target my business location in my URL structures.

So I am taking multiple things into account here, because as we all know, one decision will directly affect one or more other elements, which has the potential for disaster in instances such as this. As for the migration process, site design, content creation, page-level 301 redirects, and all that other jazz, I have it under wraps: I have done extensive research on migrating for weeks now, I am also a coder trained and educated on the server side of things, I have been doing webmastering, site development, and SEO for 5 1/2 years, and I have been working with computers for 32 years on top of that.

I would like a strategic viewpoint on this complex dilemma from other sets of eyes, so I can exchange ideas and have an intellectual, strategic discussion in terms of advanced SEO and site structure. Here is an updated screenshot with the sidebar nav in place. Please keep in mind the site is online but blocked to the outside world, with the exception of my IP address.

So, in short, we are looking at placeholders, and nothing is set in stone yet, as I am still working out the best way to go with the silo site structure, proper semantic URL mapping, etc. Please interpret this as an advanced SEO strategy planning question from someone with very advanced skill sets who has been developing exclusively with WordPress for 2 1/2 years. Get as technical and challenging as you like with your responses; I can handle it. :D

P.S. I know how to build out the silo site architecture, and I am also coding a specific sidebar nav menu for each silo theme (category) that will stay exactly on point with the information architecture of that silo theme. I have only one problem I need counsel on: would it be more advantageous to use "Computer Repair" or "Computer Service" in the silo masthead (main nav)? Please re-read my first post along with this one, and help me go in the direction that is most strategically correct, taking all the other factors into consideration. Cam
| MarshallThompson310 -
Redirect aspx files to a different path structure on a different domain using a different server-side language?
Thanks for the quick response, Nakul. The number of affected pages is in the dozens, and we are ranking moderately but are obviously looking at methods to rank higher. When you say "ensure each one of your pages redirects to your new .cfm" and "make sure each page is being 301 redirected," should that be done within the code of each page in question? Or can this be accomplished at the server level somehow, where we can list the files we need redirected and where they should now point? Regardless of the method, I will still need some assistance with the coding or server setup required. Thanks!
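If the old domain's server is Apache, the whole mapping can live at the server level, one rule per file, with no changes to the page code. A sketch (the paths and newdomain.com are placeholders; if the old .aspx site actually runs on IIS, the same one-to-one mapping would be done with URL Rewrite rules in web.config instead):

    # On the old domain: one permanent redirect per legacy page
    Redirect 301 /products/widgets.aspx http://www.newdomain.com/catalog/widgets.cfm
    Redirect 301 /about-us.aspx http://www.newdomain.com/company/about.cfm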
| hamackey0 -
Best way to remove duplicate content with categories?
I agree with Nathan on the canonical tag. You could also work with your developer on the back-end system, look into configuration issues, and see whether there's a way for the application to always generate one consistent product-level URL. It's the same product; it's just being exposed in multiple categories or at multiple category levels. Regardless, it's the same product, so there's no reason for it to have multiple unique URLs. See if you can work with them to get rid of the root cause. If not, the canonical tag is definitely the way to go. Unfortunately, there isn't another approach, such as robots.txt exclusions, that should be considered here.
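As a quick illustration, every category-specific URL for a product would carry the same canonical tag in its <head>, pointing at the one true product URL (example.com and the paths are placeholders):

    <!-- served identically on /category-a/widget and /category-b/widget -->
    <link rel="canonical" href="http://www.example.com/products/widget" />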
| NakulGoyal0 -
Advice on outranking Amazon and other big names in eCommerce
Thanks! Creating instructional content would be a great way to capture more of those long tail searches about their products. Thanks for the idea!
| TheOceanAgency0 -
No PageRank but good Moz stats?
There's a good chance of that; a drop like that would normally mean its PageRank has been penalised. If you can verify this by finding some suspicious-looking backlinks via Open Site Explorer, then I would steer clear.
| TomRayner0 -
Ever had a case where publication of products & descriptions on eBay or Amazon caused a Panda penalty?
Google is pretty good at determining the original source of content. Did you publish the content on your site first? If not, you will need to rewrite the content on your site to make it unique. Also consider how often your site is spidered by Google: if it's only every two weeks, and you published the content on Amazon a week after publishing it on your own site, Google may well have crawled Amazon before your own site and will probably view Amazon as the original source of your content, because it saw it there first.
| julianhearn0 -
WordPress and duplicate content
See http://yoast.com/wordpress/, or go to your WordPress admin and install the plugin from http://wordpress.org/extend/plugins/wordpress-seo/ (search for the Yoast SEO plugin).
| maestrosonrisas0 -
Access Denied
I also got this same message, and after it Google de-indexed all my top pages and my site dropped very significantly in the SERPs. I don't know why Google is sending these messages.
| sourabhrana390 -
Does link juice pass along the URL or the folders? 10yr old PR 6 site
This is very helpful information! I believe this is what the admin had proposed. I just wanted to double check with you guys. I will have to check into the cc info. I am not sure exactly what they have. Thanks!
| jasonsixtwo0 -
Best way to permanently remove URLs from the Google index?
I agree with Paul. Google is re-indexing the pages because you have a few links pointing back to these subdomains. The best approach is to add a noindex, nofollow meta tag to the pages and remove the disallow instruction from robots.txt, so Google can actually crawl the pages and see the tag. That way Google will neither index the pages nor follow the links on them, and they will be permanently removed from Google's index.
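For reference, the tag goes in the <head> of each page to be removed (a sketch; whether nofollow is also wanted depends on the situation):

    <!-- drop this page from the index and do not follow its links -->
    <meta name="robots" content="noindex, nofollow" />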
| MoosaHemani0 -
Meta NoIndex tag and Robots Disallow
There's no real way to estimate how long the re-crawl will take, Ben. You can get a bit of an idea by looking at the crawl rate reported in Google Webmaster Tools.

Yes, asking for a page fetch and then submitting it with linked pages for each of the main website sections can help speed up crawl discovery. In addition, make sure you've submitted a current sitemap and that it's being found correctly (also reported in GWT). You should do the same in Bing Webmaster Tools, too. Too many sites forget about optimizing for Bing; even if it's only 20% of Google's traffic, there's no point throwing it away.

Lastly, earning some new links to different sections of the site is another great signal. This can often be done effectively and quickly via social media, especially Google+, as it gets crawled very quickly.

As for your other question: yes, once you get the unwanted URLs out of the index, you can add the robots.txt disallow back in to optimize your crawl budget. I would strongly recommend you leave the meta robots noindex tag in place, though, as a "belt and suspenders" approach, so that pages linking into those unwanted pages can't trigger re-indexing. It's fine to have both in place as long as the de-indexing has already been accomplished, as we've discussed.

Hope that answers your questions!

Paul
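The end state described above would look something like this (a sketch; /unwanted-section/ is a placeholder path):

    # robots.txt: the disallow is re-added only AFTER the URLs have dropped out of the index,
    # while the meta robots noindex tag stays in each page's HTML as the backstop
    User-agent: *
    Disallow: /unwanted-section/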
| ThompsonPaul0 -
Why is my Crawl Report Showing Thousands of Pages that Do Not Exist?
Hi Jenna,

It's not so much the 404 pages themselves that are the problem for SEO; it's that your site is making it hard for the search engines to crawl it correctly and efficiently, because their crawlers are getting caught in an endless loop. A crawler stuck in that loop may simply give up and leave, which means the search engines may never reach the rest of the pages on your site, and that can have a negative impact on your rankings as a whole. One of the most important parts of SEO is making your website as "friendly" to the search engines as possible, and endless loops are definitely not that.

Hope that helps!

Patrick
| StreamlineMetrics0 -
Magento Duplicate Content Recovery
Hi there,

This is really hard to answer, but for the sake of the question, let's assume nothing else changed (which is, of course, completely impossible). All of the wrong pages, the ones that were not indexed before, would need to be de-indexed by Google. I would check whether they are still indexed, because de-indexing can take some time. If and when that happens, in our hypothetical vacuum-like situation, the rankings would return once those pages are completely de-indexed. In reality, of course, that's not how it works. So by all means fix the technical issues, as they are important, but keep a wide view of the situation: other factors may be affecting the rankings now as well, so don't hyper-focus or obsess too much about this one thing.

-Dan
| evolvingSEO0 -
Same Branding, Same Followers, New Domain After Penalty... Your Opinion Please
You have some great responses here. To summarize that advice and add a little of my own, this is what I would do:

1. Display a text warning at the top of the old site saying that the site has moved. I wouldn't worry about that text somehow contaminating the new domain.
2. Keep the old site running, and work on getting the penalties removed on the side.
3. Noindex (or delete, if it's not important to the user) all the content that you want to keep but that has few links, then move it to the new site.
4. If the penalty is lifted, redirect the old site over to its counterparts on the new site. Even then, don't 301 pages with low-quality content or spammy links; you can simply kill the pages that are "all bad" now.

The only question left is what to do with the content you want to keep that has clean external links. You could probably redirect it and cut the internal links without too much risk, which is what I'd do; a sketch of that selective redirect is below. The completely safe option would be to avoid linking it altogether and leave it out there to gather what traffic it can. Good luck!
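The selective redirect might look like this in the old site's .htaccess (a sketch, assuming Apache; the section name and newdomain.com are placeholders):

    RewriteEngine On
    # Leave the low-quality / spam-linked sections behind (no redirect)
    RewriteRule ^spammy-section/ - [L]
    # Send everything else to its counterpart on the new domain
    RewriteRule ^(.*)$ http://www.newdomain.com/$1 [R=301,L]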
| Carson-Ward0 -
Duplicate site (disaster recovery) being crawled and creating two indexed search results
It's a little tricky. Andrea is right that robots.txt isn't great for removal once pages/domains are already indexed, but you can block the sub-domain with robots.txt and then request its removal in Google Webmaster Tools (you'll need to create a separate account for the sub-domain itself). That's often the fastest way to remove something from the index, and if it has no search value, I might go that route. Just proceed with caution; it's a delicate procedure.

Doing 1-to-1 canonicalization or adding 301 redirects may be the next-strongest signal (NOINDEX is a bit weaker, IMO). However, Google will have to re-crawl the sub-domain to see those, so you'll need to keep the paths open.
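For the 301 flavor of the 1-to-1 option, a host-based rule on the disaster-recovery copy can send every request to the same path on the live domain (a sketch, assuming Apache; dr.example.com and www.example.com are placeholders):

    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^dr\.example\.com$ [NC]
    RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]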
| Dr-Pete0