Try this:
User-agent: *
Disallow: /
User-agent: rogerbot
Allow: /
The main thing I consider is that the maximum length of a title is defined by its pixel width, not by the character count. For this reason, I prefer "|" as you can fit more in. But for individual services under a brand with a short name, "-" can be preferable.
Thanks for the reply. The problem is that it is a local business looking to rank well in local results.
1. Too late, the domains are pretty close to each other.
2. The content is unique on both websites, but some of the categories and product names are similar.
3. Both sites need to rank well for local searches. Following this advice means that the 2nd website will not be able to perform well locally. Am I right in saying that the websites cannot effectively co-exist with the same details?
As I say, both websites are on page 1 for their primary keywords but are struggling to get near the top of the page. Is there anything I can do or will Google always treat them as duplicates and hold them back?
I like to use Screaming Frog or Xenu. They basically crawl the links on your site and give you a report on the results. To put it simply, if pages do not show up in the results, then Google probably won't see them either. Be careful though: the free version of Screaming Frog only crawls up to 500 pages.
You can also look at individual pages by typing "cache:[url]" into the address bar in Chrome. This displays Google's latest cached copy of the page. If it shows a 404 error then Google has not indexed it - although this will often be the case with new products simply because Google hasn't crawled them yet.
One of our clients decided to launch a 2nd website to market specific products and services that they provide. The trouble is, the two sites have the same address and phone number, and a similar name. Whilst we have had some success and both websites are on page 1 for their primary keywords, I have a bad feeling that they may have hit a glass ceiling.
Does anyone have any suggestions on how to perform local SEO?
I just typed cache:www.hotvsnot.com and got a blank screen. When I looked at the code, I got information about www.hotvsnot.us and a small amount of content about adult themes. Definitely something fishy going on there.
From what I've seen, a lot more focus is being put on the onsite content these days. However, I have noticed that websites with keyword anchor text links still perform very well if done properly. General rules I would follow would include:
Keep keyword anchor text links below 10% of your overall link profile
Try to use unique anchor text for each of these links. Avoid using just the keyword; use a short phrase that includes the keyword or a variation of it.
Avoid sitewide links. I've seen examples of single websites using the same anchor text over 500 times, which destroyed search rankings.
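The 10% rule above is easy to sanity-check against a backlink export. A minimal sketch in Python, with an invented anchor list and keyword (real figures would come from a link tool like Majestic or Moz):

```python
def keyword_anchor_ratio(anchors, keyword):
    """Return the fraction of anchors that are an exact match for the keyword."""
    if not anchors:
        return 0.0
    exact = sum(1 for a in anchors if a.strip().lower() == keyword.lower())
    return exact / len(anchors)

# Hypothetical anchor text export -- not real data.
anchors = [
    "blue widgets",                # exact-match keyword anchor
    "click here",
    "example.com",
    "our range of blue widgets",   # phrase containing the keyword (not exact)
    "homepage",
]
ratio = keyword_anchor_ratio(anchors, "blue widgets")
print(f"Exact-match keyword anchors: {ratio:.0%}")  # 20% -- above the 10% guideline
```

Note that only the exact-match anchor counts here; the longer phrase containing the keyword follows the "short phrase" advice and is not treated as exact-match.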
%7E is the percent-encoded (URL-encoded) form of the ~ character. I'm not certain what's happening, but sometimes when copying and pasting rich text (e.g. from Word), the browser will see the encoded form rather than the actual ~ character. This makes internal linking a bit awkward and can result in duplicate content, although I'm not sure whether search engines would treat this example as a problem.
I've noticed that you do not have rel="canonical" tags set up, it's worth putting them in just to be on the safe side.
I would also have a word with your developers, as the URLs are very long and messy. Try to avoid upper case and non-alphanumeric characters (such as ~) other than hyphens. Something like http://www.arkwildlife.co.uk/straight-foods/sunflower-seeds/premium-sunflower-hearts would look a lot better and avoid problems like this occurring in the future. The only caveat is that changing a URL can temporarily affect search rankings, and you must be sure to set up 301 redirects properly.
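As a quick check, the %7E equivalence can be reproduced with Python's standard library. The path below is made up for illustration, not the client's actual URL:

```python
from urllib.parse import unquote

# %7E is the percent-encoded form of the tilde, so a link pasted as
# /%7Estraights/... refers to the same path as /~straights/...
# (hypothetical example path).
raw = "/%7Estraights/sunflower-hearts"
clean = unquote(raw)
print(clean)  # /~straights/sunflower-hearts
print(clean == "/~straights/sunflower-hearts")  # True
```

Decoding the pasted links back to the literal ~ form (or 301-redirecting one spelling to the other) keeps a single canonical version of each URL.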
It's important to remember that the Moz toolbar uses Moz's own crawlers, so it will not have as much information as Google's. I would imagine that a lot of the product pages are newer than other pages on the website, so there is a good chance they have no PA simply because the Moz crawlers have not reached them yet.
The PA of sub-pages and product pages will mostly come down to internal linking. If you are worried about the pages carrying weight for the website, there are a few things you can do. First of all, run the website through a crawler tool like Screaming Frog or Xenu. These will tell you whether the pages can be crawled by search engines (if the pages do not show up in the results, they can't be crawled) and also how many internal links point to each page. Also, have a look through the website yourself and see how easy it is to reach product pages without using search forms. As a general rule, no page on an ecommerce site should be more than 3 clicks away from the homepage, but this is open to discussion.
Remember though that it is perfectly normal for a large website to have a tiered internal linking structure. If all the pages on the website had the same authority then it would look unnatural. I'm fairly sure Rand has done some talks about best practice for internal linking.
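The "3 clicks" rule of thumb is easy to check once you have a crawl export. A minimal sketch, modelling the site as an invented dict of page -> internal links and breadth-first searching from the homepage (a real mapping would come from a Screaming Frog export):

```python
from collections import deque

# Hypothetical site structure for illustration only.
site = {
    "/": ["/category-a", "/category-b"],
    "/category-a": ["/product-1", "/product-2"],
    "/category-b": ["/product-3"],
    "/product-1": [],
    "/product-2": [],
    "/product-3": ["/product-4"],
    "/product-4": [],
}

def click_depths(graph, start="/"):
    """Return each reachable page's click distance from the start page (BFS)."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for link in graph.get(page, []):
            if link not in depths:
                depths[link] = depths[page] + 1
                queue.append(link)
    return depths

depths = click_depths(site)
too_deep = sorted(p for p, d in depths.items() if d > 3)
print(depths["/product-4"])  # 3 -- right at the limit
print(too_deep)              # [] -- nothing deeper than 3 clicks here
```

Pages missing from the result entirely would be orphans with no internal link path at all, which is usually a bigger problem than depth.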
When I'm doing keyword research, one of the first things I do is go on competitor websites and see if they have the meta keyword tag. If they do, then it instantly gives me access to all the keywords that they're targeting and allows me to work on beating them. Deleting your meta keyword tag prevents your competitors from doing this to you.
What about having a subtle call to action in there?
What you have written is fine, but it might be worth having a play around to see what works best. Maybe take 2 pages with similar search performance: keep one descriptive meta description like you already have, and give the other a call to action along the lines of "visit our website to find out about...". Then simply see which has the better click-through rate.
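Comparing the two pages then comes down to simple click-through-rate arithmetic. A tiny sketch with invented numbers (real figures would come from Google Search Console):

```python
def ctr(clicks, impressions):
    """Click-through rate as a fraction."""
    return clicks / impressions if impressions else 0.0

# Hypothetical results after running both styles for the same period.
descriptive = ctr(clicks=120, impressions=4000)      # current descriptive style
call_to_action = ctr(clicks=150, impressions=4100)   # "visit our website to..."
print(f"descriptive: {descriptive:.2%}")        # 3.00%
print(f"call to action: {call_to_action:.2%}")  # 3.66%
```

With differences this small you would want the test to run long enough to collect a decent number of impressions before trusting the comparison.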
Just starting work with a client providing care services in the local area. They have a number of local branches, but no Google+ page (which we will be sorting shortly).
One of the first things that struck me about the company is that the offsite citations are a mess. The addresses are all listed with different information, and there are several phone numbers for each branch. I've been trying to gather a full list so that I can go through them and either change them or ask for login details where necessary. However, this is time-consuming and there is no guarantee that I will find them all.
I know that Moz Local has a tool to do this quickly, but the grader seems to only work in the USA, and we're based in the UK. I'm also trying to use Whitespark to get a list, but this is difficult due to all the different phone numbers being used. Does anyone know of another tool that can speed up this process and ensure that I get all the citations?
Archive.org basically shows you how the website looked in the past. Therefore, you can make sure that the new content is consistent with what visitors from referral sites are expecting. Do not plagiarize the old content though, or you might get sued.
It's a risky tactic that you're using and definitely a darker shade of gray. I wouldn't do it more than once or twice, and make sure that the content is relevant.
You could try Majestic SEO and see if they show up there.
However, you need to be very careful when buying used domains with existing backlinks. What are you planning to use it for? If it's simply to 301 it to your main site, then you could be looking straight down the barrel of a manual action. If you are planning on using it as your primary site, be careful that it doesn't already have a penalty.
Instead of who the links were from, have a look at what niche the old website served. If it is relevant, then you will have links from relevant websites and you might also inherit some relevant and diverse anchor text which will help. Use archive.org to have a look.
You need to think about why the website was performing well before the penalty. If it was spammy artificial links that were boosting your site, then removing the penalty means that they no longer help you. Therefore, your website is receiving considerably less link juice than before.
You may need to face the fact that your link building efforts will probably have to start again from scratch.
The general rule of using HTML for everything is one that I would follow. If you're unsure whether something is crawlable, try downloading the Web Developer plugin for Chrome http://chrispederick.com/work/web-developer/. Then disable JavaScript and plugins and refresh the page. Any content you can't see then probably won't be seen by search engines either.
If you're a newbie, it might be worth using the on-page grader to make sure that all of your content is relevant. Make sure your content looks natural though, and do not over-optimize or do any keyword stuffing. Once the content is sorted, you can get started on more advanced stuff.
You mention mistakes that you have made; if you told us what they were, we could make recommendations for you.
However, there is a LOT to SEO and some of it takes a long time to explain. I would recommend reading the beginners guide to SEO http://moz.com/beginners-guide-to-seo
So whilst searching for link opportunities, I found a website that has scraped content from one of our websites. The website looks pretty low quality and doesn't link back. What would be the recommended course of action?
Email them and ask for a link back. I've got a feeling this might not be the best idea: the website does not have much authority (yet), and a link might look a bit dodgy considering the duplicate content.
Ask them to remove the content. It is duplicate content and could hurt our website.
Do nothing. I don't think our website will get penalised for it since our content was here first and is on the better quality website. Possibly report them to Google for scraping?
What do you guys think?
Thanks Michael, you raise some good points.
I just want to raise one point though. I was under the impression that the amount of link juice passed by a link depends on the total number of outbound links from that page (I'm sure I recall Matt Cutts saying something along these lines). Therefore, a link from a local newspaper would be good but watered down by the total number of outbound links. A site with lower authority but fewer outbound followed links (or 1 in this case) could still be of value. Is this correct or am I talking nonsense?
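To put some made-up numbers on the question (a deliberate simplification of PageRank-style models, not a claim about Google's actual formula):

```python
def equity_per_link(page_authority, outbound_followed_links):
    """Naive model: passable equity is split evenly across followed links."""
    return page_authority / outbound_followed_links

# Hypothetical pages: a strong newspaper page with many outbound links
# versus a weak page whose only followed link points at you.
newspaper = equity_per_link(page_authority=60, outbound_followed_links=200)
small_site = equity_per_link(page_authority=20, outbound_followed_links=1)
print(newspaper)   # 0.3 per link
print(small_site)  # 20.0 -- the weaker page passes more in this toy model
```

Under this even-split assumption the intuition holds: a lower-authority page with a single followed link can pass more per link than a strong page that links out hundreds of times, though real ranking systems weigh many other factors.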