Sorry for the confusion. By "search results" I thought you might have been specifically talking about putting keywords into a site search and getting the results page. I've noindexed that page.
What you've said makes sense.
Thanks Peter.
Yes, it's the latter instance that I was talking about.
Thanks Peter.
Just to clarify: I'm not talking about search results pages. I'm talking about paginated category pages. I've honestly had a number of cases where sites have linked to those 2nd or 3rd pages. Weird, I know.
Anyway, it's only a few links so I'm not too concerned about it.
Cheers.
Hi Alan, that wasn't my understanding of how it worked. I thought the "follow" part only permitted the bots to literally follow those links to other pages, with no link juice passing through. Maybe I misunderstood that?
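For anyone reading along, the tag combination I'm asking about is the meta robots noindex,follow directive (a generic example, not from any particular site):

```html
<!-- noindex,follow: keep this page out of the index,
     but let crawlers follow the links it contains -->
<meta name="robots" content="noindex,follow">
```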
Thanks Peter. One other advantage I can think of that rel=prev/next has: if someone is looking at products on a site and they are on the 2nd or 3rd page, they might decide to link to that page. This will pass the link juice to that page (or collection of pages), whereas if the page was noindexed, it would be a wasted link.
Cheers,
Thanks Peter. I hadn't seen Google's official advice on this. Having thought about it again, it does make more sense, as I think it would be quite messy trying to get the rel=next/prev tags pointing to the non-parameter URLs. It's good to know that the canonical tag works in conjunction with these tags to point to the correct URL.
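As a sketch of how I understand this fits together (URLs made up for illustration), page 2 of a paginated category would carry something like:

```html
<!-- Hypothetical <head> of page 2 in a paginated series -->
<!-- The canonical points at page 2 itself, with tracking parameters stripped -->
<link rel="canonical" href="http://www.example.com/widgets/page/2/">
<!-- prev/next tie the series together so link equity can be consolidated -->
<link rel="prev" href="http://www.example.com/widgets/">
<link rel="next" href="http://www.example.com/widgets/page/3/">
```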
I know it's easier to just noindex those pages, but doesn't that mean you leak the link juice that goes to those pages? Telling Google that they are part of a series, and having all that link juice combined into a single page, should mean a more powerful page?
Thanks Peter.
Thanks Dan. I use Yoast's WordPress SEO. It's a great plugin. I have the author archive disabled.
We don't actually link the author's name to an author page, so I think we're ok there. Thanks for the clarification.
Thanks Willny. Can you clarify what you mean by "Unless the links are going to the author's list of posts"?
Hi,
I have a number of WordPress posts that were written by different authors, and I want to merge them under a single author. If Google sees that the post was originally rel=author'd to person A and later we change the author reference to person B, will Google see this as suspicious in any way?
Or does it not matter, as long as it's only attributed to a single author at any one time?
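For context, the markup being changed is just the authorship link in each post (profile URLs made up here):

```html
<!-- Before the merge: post attributed to person A -->
<link rel="author" href="https://plus.google.com/111111111111111111111/">
<!-- After the merge: the same post attributed to person B -->
<link rel="author" href="https://plus.google.com/222222222222222222222/">
```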
Thanks,
Leigh
Thanks Miriam. In an ideal world, I agree with you, but there are many reasons why this system will work better for them, so it looks like they will be going with it.
The "Joe Bloggs" name was just an example name. They will, of course, be using a believable looking name.
Thanks,
Hi,
I have a client who's made some changes to their content strategy.
They want to use a single author for all content produced and published, to maintain a consistent identity across the web. This single author is a persona, e.g. "Joe Bloggs", but not a real person.
This works fine for creating and publishing content (for their blog and outside blog posts). It allows many people to work on creating and publishing content under the same name, which for a number of reasons makes good logistical sense.
The problem arises when it comes to social marketing. They have set up a Facebook and a Google+ profile, plus Facebook and Google business pages.
The main issue is that they are finding it difficult to friend other people because nobody knows this "Joe Bloggs" persona.
Can anybody offer advice on how to approach this kind of strategy?
Thanks,
Hi Cyrus,
I don't see any issues with the canonical tag.
I'll contact the help team.
Thanks,
Yes, but the non-www version 301 redirects to the www version.
Hi,
I'm not sure I want to list the domain here, but here's an example of what I mean. We create Google tracking links (Google URL Builder) for use in a newsletter. The homepage looks like this:
http://www.site.com/
and one of the links in the newsletter might look like this:
http://www.site.com/?utm_source=newsletter&utm_medium=email&utm_content=offer&utm_campaign=1
When you look at the source code for both URLs, they both have a canonical tag equal to:
http://www.site.com/
So, Google knows there's no duplicate content issue there. It would be good if the diagnostics tool could recognise that too.
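For anyone unfamiliar with the tag, both versions of the page contain the same line in their source (same made-up domain as above):

```html
<!-- Identical canonical on the clean URL and the UTM-tagged URL -->
<link rel="canonical" href="http://www.site.com/">
```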
Thanks,
Leigh
Hi,
In the Crawl Diagnostics reports, I'm getting lots of duplicate error warnings, e.g. duplicate page titles. In most cases these are tracking URLs, and the page has a canonical tag pointing to the original page.
It would be helpful if the crawl analysis reports could separate these out from ones that are of genuine concern.
It can also happen when there's a noindex tag on a page.
Thanks,
Leigh
Ok. Thanks for the advice, Ryan.
Thanks Ryan.
I've no direct contact with the developer, so I can't answer those questions. I'm afraid I just have to work with what my client is telling me.
From what you're saying, and if done correctly, the pages would look to Google as if they were in a folder on that domain, e.g. website.com/language-site, and we would geo-target that folder, not the sub-domain?
Then we'd need to find a way to stop the search engines crawling the sub-domain. Would this be done in the robots.txt file?
Do you think we'd be better off just using the sub-domain and forgetting about the rewrites? The main reason I'm advising him to go for a folder structure is the uncertainty over domain authority flowing to a sub-domain.
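In case it helps anyone else reading: my understanding is that robots.txt works per host, so blocking the sub-domain would mean serving a separate file at the sub-domain's own root (sub-domain name made up here):

```
# Served at http://language-site.website.com/robots.txt
# Blocks all well-behaved crawlers from the entire sub-domain
User-agent: *
Disallow: /
```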