Category: Technical SEO Issues
Discuss site health, structure, and other technical SEO issues.
-
Google is indexing proxy (mirror) site.
Upload a robots.txt on ermitage to disallow all robots from indexing all files?
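If a blanket block is really what you want on the mirror host, the robots.txt itself is just two lines; a minimal sketch, assuming the file ships only to the mirror and never to your main domain:

    # Block all compliant crawlers from every file on this host:
    User-agent: *
    Disallow: /

Keep in mind this only stops compliant robots from crawling; pages the mirror has already gotten indexed may also need a removal request.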
| YannickVeys1 -
404 errors and what to do
Yeah, I agree. Signing up for Google Webmaster Tools is a great way to pull up 404 errors that the SEOmoz Pro tool has not picked up.
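Once you have the list, a quick way to spot-check whether a reported URL really still returns a 404 (the URL below is a placeholder):

    # Print only the HTTP status code for a suspect URL:
    curl -s -o /dev/null -w "%{http_code}\n" http://www.example.com/missing-page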
| JamesNorquay0 -
Tools to Wireframe Site / KW Strategy
http://www.seomoz.org/q/search?utf8=✓&query=wireframe&x=0&y=0
| firstconversion0 -
Duplicate Page title - PHP Experts!
Both Ryan and Marie provided good answers, but let me elaborate further. This is not a PHP thing. You can verify this yourself by visiting these pages: does the page content change when you visit /?lang=en&limit=5&limitstart=20 and then switch the limitstart parameter to another value? If these look like the same page, then you have a duplicate content/title issue on your hands.

Google's take on the matter is simple: "Provide one version of a URL to reach a document. To prevent users from linking to one version of a URL and others linking to a different version (this could split the reputation of that content between the URLs), focus on using and referring to one URL in the structure and internal linking of your pages. If you do find that people are accessing the same content through multiple URLs, setting up a 301 redirect from non-preferred URLs to the dominant URL is a good solution for this. You may also use canonical URL or use the rel="canonical" link element if you cannot redirect."

You have several options to deal with this, depending on the content:

1. Use the canonical tag to point the different versions of the page toward a single, definitive URL. Best to use this option only if the pages are actually duplicates.
2. If the pages are not duplicate in content, but duplicate in title, then make sure your CMS writes unique title tags for every unique page of content, as Marie describes.
3. If these extra pages produce lists of only slightly varying entries, like a list of blog archives, you may consider adding a meta robots tag to NOINDEX, FOLLOW those superfluous pages.

If you're dealing with pagination, Adam Audette has an excellent article to help. This is a tricky area, and you must be careful when dealing with duplicate content, because it's easy to make a mistake and have Google de-index your content. That said, by correcting these errors, you just might see an improvement in your indexation and traffic stats.
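As a rough illustration of option 1, here is a minimal PHP sketch; the host is a placeholder, and it assumes the path without its query string is the version you want treated as definitive:

    <?php
    // Point every parameterized variant of this page
    // (?lang=...&limit=...&limitstart=...) at one canonical URL.
    // strtok() drops everything from the '?' onward.
    $canonical = 'http://www.example.com' . strtok($_SERVER['REQUEST_URI'], '?');

    // Emit this inside the page's <head>:
    echo '<link rel="canonical" href="' . htmlspecialchars($canonical) . '" />';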
| Cyrus-Shepard0 -
How to extract URLs from a site (without bringing the server down!)
Just a follow-up to my endorsement. It looks like Screaming Frog will let you control the number of pages crawled per second, but to do a full crawl you'll need to get the paid version (the free version only crawls 500 URLs): http://www.screamingfrog.co.uk/seo-spider/ It's a good tool, and nice to have around, IMO.
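If you'd rather script the same idea yourself, the core of a polite crawl is just a delay between requests; a minimal PHP sketch, where the start URL, page limit, and one-second delay are all arbitrary assumptions:

    <?php
    // Throttled URL extractor: breadth-first fetch with a pause between
    // requests so the crawl stays at roughly one page per second.
    // Requires allow_url_fopen to be enabled.
    $queue = array('http://www.example.com/'); // placeholder start URL
    $seen  = array();
    $limit = 100; // stop after this many pages

    while ($queue && count($seen) < $limit) {
        $url = array_shift($queue);
        if (isset($seen[$url])) {
            continue;
        }
        $seen[$url] = true;

        $html = @file_get_contents($url);
        if ($html !== false && preg_match_all('/href="(http[^"]+)"/i', $html, $m)) {
            foreach ($m[1] as $link) {
                // In practice you'd also filter out links to other hosts here.
                $queue[] = $link;
            }
        }
        sleep(1); // the polite part: never more than one request per second
    }

    echo implode("\n", array_keys($seen)), "\n";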
| Dr-Pete0 -
Default.aspx and domain name difference
Thank you for the prompt reply. I'll check into your suggestions.
| DMacy0 -
I have a WordPress site with 30+ categories and about 2k tags. I'd like to bring that number down for each taxonomy. What is the proper practice to do that?
Hi Alex, If you're still looking to remove tags or categories from a blog, there is no need to redirect the old tags and categories unless there are external links pointing to these tag/category pages. In the event that you have external links, make sure to 301 them. Most blog CMSs will take care of the internal links in the navigation/menus, but make sure to change any internal links to these pages that you added manually. You could see an impact on your rankings in the unlikely event that you are receiving visits from keyword-targeted tags or categories. You should probably check how often these pages function as organic landing pages in your analytics software; most blogs see almost no traffic or links to these pages. Let me know if you have any questions. -Carson
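If a handful of the removed tag pages do have external links, here's a hedged sketch of the 301s from within WordPress; it assumes the default /tag/ permalink base, the slugs and destination are placeholders, and an .htaccess rule or a redirect plugin would do the same job:

    <?php
    // 301-redirect the archives of deleted tags to the blog index.
    // Only list tags that actually had external links pointing at them.
    add_action('template_redirect', 'hwa_redirect_removed_tags');
    function hwa_redirect_removed_tags() {
        $removed = array('old-tag-one', 'old-tag-two'); // placeholder slugs
        foreach ($removed as $slug) {
            if (strpos($_SERVER['REQUEST_URI'], '/tag/' . $slug) === 0) {
                wp_redirect(home_url('/blog/'), 301);
                exit;
            }
        }
    }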
| Carson-Ward1 -
Getting subdomains unindexed
Subdomains can be verified as their own site in GWT. Verify the subdomain in GWT, then put a robots.txt on that subdomain excluding the entire subdomain, then request removal of the entire subdomain in GWT. I've had to remove staging and dev sites a couple of times myself. A couple of things I've found useful in this situation: make the robots.txt files for both the dev and live sites read-only, so you don't accidentally overwrite one with the other when pushing a site live. You can also sign up for a free tool like Pole Position's Code Monitor, which will look at the code of a page (including your robots.txt URL) once a day and email you if there are any changes, so you can fix the file and then go hunt down whoever changed it.
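The read-only trick is a one-liner per file on the server; the paths below are placeholders for wherever the dev and live docroots actually live:

    # Make both robots.txt files read-only so a deploy script can't
    # silently overwrite one with the other:
    chmod 444 /var/www/dev.example.com/robots.txt
    chmod 444 /var/www/www.example.com/robots.txt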
| KeriMorgret0 -
Google +1 not recognizing rel-canonical
For Facebook, if you use the proper Open Graph meta data, your Likes will be consolidated. With respect to Google +1, I can share my viewpoint and best guess. You are presently choosing to canonicalize these pages, but they are in fact separate web pages. You can choose to remove the canonical tag at any time, so the data needs to be tracked separately.
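For reference, a minimal sketch of the two tags in question, with example.com standing in for the real URLs:

    <!-- What search engines use to consolidate ranking signals: -->
    <link rel="canonical" href="http://www.example.com/definitive-page/" />
    <!-- What Facebook uses to consolidate Likes for this page: -->
    <meta property="og:url" content="http://www.example.com/definitive-page/" />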
| RyanKent0 -
In order to improve my google rank, I recently changed my url from http://www.helpwithassignment.com/nursing-assignment-help to http://www.helpwithassignment.com/Assignment-Help/nursing-assignment-help
FYI - I just reversed the changes I made to the website and, to my delight, I am back to where I was on Google. Thank you guys for your suggestions :) Richa
| RichaS1 -
What to do with Deleted Posts?
If they have links pointing to them, I agree with you; but if they don't, you should let them 404 or remove them through WMT.
| AlanMosley0 -
Wordpress speedup
Good deal - I'll have to check that one out. I've had some issues with caching plugins before, but it really just depends on the individual setup as to which one works best in each instance.
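For what it's worth, most of the popular caching plugins (WP Super Cache, W3 Total Cache, etc.) hinge on the same wp-config.php constant, so whichever one you test, expect to see this line appear:

    <?php
    // In wp-config.php: lets the plugin's advanced-cache.php drop-in load
    // early and serve cached pages before WordPress fully bootstraps.
    define('WP_CACHE', true);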
| sandlappercreative0 -
Domain name with separated/non-separated keywords
Thank you for the comments. I was about to go with the non-separated version, and now I'm sure about it. Thanks once more!
| joo0 -
On-Site Sitemaps - Guidance Required
Thanks. I agree with your thoughts on this, but we tend to chop and change inner pages, and having a good sitemap helps us make sure the navigation is solid, or, more to the point, helps us remember to include missing pages. I like the example you provided; it helps from a user perspective to see the pages summarised, from a branding perspective as much as an SEO one.
| tdsnet0 -
Old Blog
Is the content unique? That is the main question. I would not delete the blog; I would look at putting 301 redirects in place and mapping out the content and URLs on the new site. I would also look at moving the content across if it is unique; you could even add content to a subdomain if it is not that great.
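If the old blog sits on Apache, the URL mapping can live in a few mod_alias lines in its .htaccess; every URL below is a placeholder:

    # Map old blog posts one-to-one onto the new site:
    Redirect 301 /blog/old-post/ http://www.newsite.com/articles/old-post/
    Redirect 301 /blog/another-post/ http://www.newsite.com/articles/another-post/
    # Optional pattern-based fallback for anything not mapped explicitly:
    RedirectMatch 301 ^/blog/(.*)$ http://www.newsite.com/articles/$1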
| JamesNorquay0 -
Page MozRank and MozTrust 0 for Home Page, Makes No Sense?
That may be at least part of the problem. If you're not able to share the URL with us in Q&A, I'd drop a note to the help desk at help@seomoz.org and ask them.
| KeriMorgret0 -
MozTrust Scraping
Hi Alex, Currently the only way to do it from SEOmoz is to have access to the paid API. There may be tools out there that utilize the paid API and will return the results for you, though I haven't seen any. It may be worth asking another question to see if anyone knows of a tool like this. Casey
| caseyhen0