Category: Technical SEO Issues
Discuss site health, structure, and other technical SEO issues.
-
Google indexing despite robots.txt block
It sounds like Martijn solved your problem, but I still wanted to add that robots.txt exclusions keep search bots from reading disallowed pages, but they do not stop those pages from being returned in search results. When those pages do appear, they will often have a description along the lines of "A description of this page is not available due to this site's robots.txt". If you want to ensure that pages are kept out of search engine results, you have to use the noindex meta tag on each page.
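For reference, a minimal sketch of that tag, placed in the head of each page you want kept out of the results (the tag itself is standard; any specific page is hypothetical):

    <!-- In the <head> of each page that should stay out of search results.
         Note: bots must be ALLOWED to crawl the page to see this tag, so
         don't also disallow the same page in robots.txt. -->
    <meta name="robots" content="noindex">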
| john4math0 -
Webmaster Tools keeps showing old 404 errors but doesn't show a "Linked From" URL. Why is that?
Hi Jane, thanks for the follow-up. Every time we see errors showing up in WMT (mainly 404s), we remove the URLs right away, and indeed we see the errors going down every 4-5 days (under HTML Improvements). I am just surprised at how long it would take Google to actually remove 404s from their index if we did not use the URL removal tool. I know the higher the PR, the more often they crawl and the faster they remove these 404s, I guess, but still.
| revimedia1 -
.com and .co.uk duplicate content
Just a quick question: the client in question, in their wisdom, decided to put the US website live without telling me, and our UK rankings have dropped significantly. Do you think the tag will start to fix this?
| KarlBantleman0 -
Different user experience with javascript on/off
Yes, Dan is correct. As long as the intent is not malicious, you should be good. Moreover, it is a common practice for a JS overlay to be displayed before the actual site is served. For example, adult sites and liquor-related websites show an age-gate page using a JS overlay so that human visitors confirm their age before they can access the website, but search engine bots (like Googlebot) do not see the JS overlay and can access the website directly. With this kind of setup in place, there is nothing to worry about regarding a different experience being served to visitors and bots. This is definitely not considered cloaking. Hope it helps. Best regards, Devanur Rafi
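A minimal sketch of that kind of age gate, assuming the overlay is built entirely client-side (the element IDs and cookie name are hypothetical); clients that do not execute JavaScript simply see the page without it:

    <script>
      // Only visitors whose browser runs this script ever see the overlay.
      if (!document.cookie.includes('age_verified=1')) {
        var overlay = document.createElement('div');
        overlay.id = 'age-gate';
        overlay.style.cssText = 'position:fixed;top:0;left:0;right:0;bottom:0;' +
          'background:#000;color:#fff;display:flex;align-items:center;' +
          'justify-content:center;z-index:9999;';
        overlay.innerHTML = '<div><p>Are you of legal age?</p>' +
          '<button id="age-yes">Yes</button></div>';
        document.body.appendChild(overlay);
        document.getElementById('age-yes').onclick = function () {
          // Remember the answer for a year so the gate only shows once
          document.cookie = 'age_verified=1; path=/; max-age=31536000';
          overlay.remove();
        };
      }
    </script>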
| Devanur-Rafi0 -
Website content has been scraped - recommended action
It's good to be aware of the scrapers to see what they are trying to do with your content, and it can't hurt to ask them to remove it. Don't ask for a link; you never want links from sites that rely on bad practices like that, as it can hurt you. This most likely won't affect you if left alone. If the scraper is grabbing from your source code, then implementing a canonical tag in your content will help Google know where the content came from (but they probably already know).
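For reference, a self-referencing canonical tag on the original article looks like this (the URL is a placeholder); if a scraper copies the raw HTML, the tag comes along and keeps pointing back at your page:

    <link rel="canonical" href="https://www.example.com/original-article/">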
| WilliamKammer0 -
Two companies merge: website A redirect 301 to website B. Problems?
Is it a good idea to use the GWT change of address to move from domain A to domain B? Just make sure you don't change to a subfolder or subdomain (box.domain.com or domain.com/box). Mike
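Alongside the Change of Address in GWT, the 301 itself is usually done at the server. A minimal sketch for the old domain, assuming Apache with mod_rewrite in .htaccess (domain names are placeholders):

    RewriteEngine On
    # Send every request on the old domain (A) to the same path on the new one (B)
    RewriteCond %{HTTP_HOST} ^(www\.)?website-a\.com$ [NC]
    RewriteRule ^(.*)$ http://www.website-b.com/$1 [R=301,L]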
| Mike_c0 -
Are we really at the mercy of anyone who wants to damage our SEO ranking?
If you agreed to pay him for services, I would satisfy that payment and then break all ties. It sounds like this person is very vindictive, so I think you are wise to get out now. To answer your question: yes, there are ways to hurt a site's ranking. We only practice white hat SEO at our company, but we have been approached by other SEO companies to do some pretty questionable things (we used to be an outsourcing destination for SEO). Some of the things we have seen include:
1. Using proxy servers to overwhelm a website's bandwidth, causing the server to crash
2. Setting up false citation sites with a similar or exact business name and creating false bad reviews
3. Buying up paid links from banned sites and pointing them at the domain
...and many more. It's generally best practice not to hire people who claim to be white hat (or otherwise) SEO experts. No one person is an expert at SEO; we are all using what we know to be best practices. If someone promises or guarantees you anything, run away. I agree with William on keeping an eye on your Webmaster Tools reports, and also change ALL your passwords: FTP, cPanel, hosting, CMS, email, etc. If you need any assistance, PM me. I'd be glad to help. I hate it when people get taken advantage of like this, by people who don't know what they are doing.
| David-Kley0 -
Schema markup
Thanks. Yes, I agree there's no point in trying to shoe-horn something into what it isn't. I was just wondering if I'd missed something and it was possible! Thanks for your input, Amelia
| CommT0 -
Is skimlinks-unlinked organic and valuable?
Google might recognise it as an affiliate link, and therefore unnatural, and so reduce or remove its SEO value.
| PaddyDisplays0 -
Easy Question: regarding the noindex meta tag vs robots.txt
Hello Santaur, I'm afraid this question isn't as easy as you may have thought at first. It really depends on what is on the pages in those two directories, what they're being used for, who visits them, etc. Certainly removing them altogether wouldn't be as terrible as some people might think IF those pages are of poor quality, have no external links, and very few - if any - visitors.

It sounds to me like you might need a "Content Audit", wherein the entire site is crawled using a tool like Screaming Frog, and then relevant metrics are pulled for those pages (e.g. Google Analytics visits, Moz Page Authority and external links) so you can look at them and make informed decisions about which pages to improve, remove or leave as-is.

Any page that gets "removed" leaves you with another choice: allow it to 404/410, or 301 redirect it. That decision should be easy to make on a page-by-page basis after the content audit, because you will be able to see which ones have external links and/or visitors within the time period specified (e.g. 90 days). Pages that you have decided to "remove" which have no external links and no visits in 90 days can probably just be deleted. The others can be 301 redirected to a more appropriate page, such as the blog home page, a top-level category page, a similar page or - if all else fails - the site home page.

Of course any page that gets removed, whether it redirects or 404s/410s, should have all internal links updated as soon as possible. The scan you did with Screaming Frog during the content audit will provide you with all internal links pointing to each URL, which should speed up that process for you considerably. Good luck!
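To make the two "remove" outcomes concrete, here is a hedged .htaccess sketch (assuming Apache with mod_alias; the paths are hypothetical):

    # Removed page with no external links and no recent visits: return 410 Gone
    Redirect gone /old-blog/thin-post/
    # Removed page that still has external links: 301 to the closest relevant page
    Redirect 301 /old-blog/popular-post/ /blog/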
| Everett0 -
Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
Update: Google has crawled this correctly and is returning the correct, redirected page. It seems to have understood that we don't want any of the parametered versions indexed ("return representative link") from our original page and all of its campaign-tracked brethren, and it is then redirecting from the representative link correctly. And finally there was peace in the universe...for now. ;> Tim
| Jen_Floyd0 -
Why would Google rank a highly irrelevant page in the top 15 especially for a seemingly important keyword?
Sometimes it's about what you can't see. Backlinks and other sources may exist that factor into a higher ranking.
| David-Kley0 -
301 with nofollow?
Google have said that if you have the same site at a different URL, they may apply the same penalty to the new site as they did to the old one; therefore, if you wish to redirect a site, I would recommend doing it at the same time as you make some other significant changes to it. I came up with a way of doing this which should work. On the old domain, redirect everything to the home page, and then on the home page create a noindex, nofollow page with the following line of code in it (shown below; example.com stands for your NEW domain name). This way you're using a redirect at the HTML level, but telling search engines not to index or follow the page. This should work!
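The line of code itself appears to have been stripped from the post; a plausible reconstruction, assuming it was a meta refresh placed in the head alongside the robots directive, would be:

    <meta name="robots" content="noindex, nofollow">
    <!-- HTML-level "redirect": a zero-second meta refresh to the NEW domain -->
    <meta http-equiv="refresh" content="0; url=http://www.example.com/">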
| indigoextra0 -
Parked former company's URL on top of my existing URL, and that URL is showing in SERPs for my top keywords
Thanks again. Will try these options today. It'll be nice going in more knowledgeable, so it's a very good thing you do, Mr. Kley.
| Joelabarre0 -
Using Web Applications for SEO
Why do you think that this will "hurt" you? I mean, you provide a very useful tool for other webmasters and their visitors. The only thing you have to keep in mind is that you might need a lot of resources to keep the system running. If your tool is embedded and needs to work with the original source, you need a powerful server to handle all those requests... nothing is more annoying than a helpful tool which doesn't work.
| dotfly0 -
Nofollow links from banned sites
I agree. Get rid of all the links that do not offer any benefit. Remove external links that don't point to good, relevant info. Disavow inbound links from any banned or questionable sources. Even if the links pointing at you are nofollow, they can still get visited. Remember, it's up to Google to respect the nofollow rule; in some cases they may ignore it if they think it's a relevant link.
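For the disavow step, the file is a plain UTF-8 text file uploaded through Google's Disavow Links tool; a minimal sketch (the domains and URL are placeholders):

    # Disavow everything from a banned/questionable domain
    domain:spammy-directory.example
    # Or disavow a single specific URL
    http://another-bad-site.example/page-linking-to-us.html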
| David-Kley0 -
Redirecting special characters in .htaccess and web.config
Just what I was looking for. Thank you, Aleyda, for the tip on checking that the URL with escaped characters is really on the site. That reduced the number of redirects needed by 75%.
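For anyone landing here later, one way to match an escaped character in .htaccess is to test against %{THE_REQUEST}, which still contains the raw, percent-encoded request line. A hedged sketch, assuming Apache with mod_rewrite (the URLs are hypothetical):

    RewriteEngine On
    # Match the still-encoded "é" (%C3%A9) in the raw request line
    RewriteCond %{THE_REQUEST} \s/caf%C3%A9-menu\s [NC]
    RewriteRule ^ /cafe-menu [R=301,L]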
| stephenfishman0 -
Changing the order of items on page against Google Terms & Conditions?
What you're referring to are parameters. You can handle this in a few ways:
1. Block the ones you don't want indexed in Webmaster Tools to prevent duplicate content.
2. Block the ones you don't want indexed in your robots.txt file using Disallow: /*? (see the sketch below).
3. Allow it all to be indexed and not worry about Google.
Option 3 isn't as good for your SEO, but if you're worried about "terms" then no, it's fine. It's just better for indexing if you do 1 or 2 above.
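A minimal robots.txt sketch of option 2, blocking any URL that contains a query string:

    User-agent: *
    Disallow: /*?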
| MattAntonino0 -
Product Descriptions for Localized eCommerce Store
I presume the other countries are using the same language. If that's the case, then the next question is: is anything on the page going to be different for countries #2, #3, etc.?

If the answer is "no", then you can use rel=canonical on the pages for countries #2, #3, etc., pointing back at the version on your site for country #1. If the answer is "yes" (e.g. the price is different, the currency is different, the shipping price shown is different, etc.), then set the rel=canonical to point to the URL for the current country's page.

In both cases, you'll want to use rel="alternate" hreflang= tags for ALL of the countries, on ALL of the pages. The idea here is that you're telling Google, on every page, what the other countries' versions of the pages are. Included in that is the page for the country you're looking at now.

So, let's say you had www.bananas.com and also a version of the site in the United Arab Emirates, www.bananas.com.ae. You've got pages www.bananas.com/peel.php and www.bananas.com.ae/peel.php. BOTH of these pages would have the hreflang tags shown below. For more info on this, check out Maile Ohye's video here.
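The tags themselves appear to have been stripped from the post; a plausible reconstruction, assuming English content with the .com site targeting the US and the .com.ae site targeting the UAE (swap the language codes if that assumption doesn't hold):

    <!-- Placed in the <head> of BOTH www.bananas.com/peel.php
         and www.bananas.com.ae/peel.php -->
    <link rel="alternate" hreflang="en-us" href="http://www.bananas.com/peel.php" />
    <link rel="alternate" hreflang="en-ae" href="http://www.bananas.com.ae/peel.php" />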
| MichaelC-150220 -
HTML Site for Speed
Moving to a purely static version of the site could certainly bring some performance improvements, but that alone is no guarantee of reduced latency for your customers. You're also likely to lose some of the flexibility that a platform like WordPress offers in terms of managing the site. My suggestion would be to start off by analyzing your site with Google's PageSpeed Insights and then see if there are options within WordPress to eliminate the worst performance offenders reported by the service. http://developers.google.com/speed/pagespeed/insights/
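Two of the most common offenders PageSpeed flags (no compression, no browser caching) can often be addressed without leaving WordPress; a hedged .htaccess sketch, assuming Apache with mod_deflate and mod_expires available:

    <IfModule mod_deflate.c>
      # Compress text-based responses
      AddOutputFilterByType DEFLATE text/html text/css application/javascript
    </IfModule>
    <IfModule mod_expires.c>
      # Let browsers cache static assets
      ExpiresActive On
      ExpiresByType image/png "access plus 1 month"
      ExpiresByType text/css "access plus 1 week"
    </IfModule>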
| dudleycarr0