Yeah, you don't have to log in in this case. I think it could benefit someone if the View Cart page came up in the Sitelinks when someone searches for that particular brand.
I think I'll go ahead and remove it from robots.txt.
Thanks
Is it weird to remove the View Cart page from being blocked in robots.txt if it has links pointing to it?
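For anyone following along, unblocking it just means deleting the relevant Disallow line from robots.txt. A hypothetical example (your cart path will differ):

User-agent: *
Disallow: /viewcart.asp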
Right, but have you noticed SEOmoz has nofollow all over the place?
I was wondering how many of you are still nofollowing the Home button, or anything else for that matter?
I thought nofollow was getting a little outdated, but I still see it in lots of places.
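For anyone unfamiliar, nofollowing a Home button just means the link carries the rel attribute, like this (a generic example, not from any particular site):

<a href="/" rel="nofollow">Home</a>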
Thoughts?
To follow up on this, the backlinks were in fact hurting rankings. We removed some paid backlinks and immediately saw large increases in rankings. I hope this helps someone else.
When I asked this question I really did not believe that removing backlinks would improve rankings, but it did.
Thanks!
I'd go into www.opensiteexplorer.org and see if you can find this backlink. If you can't, then it probably doesn't count.
Check it out, it's cached with your link - search this...
inurl:www.nyc.gov "cars for kids"
Yes, it was intentionally distributed. I would like to know whether Google sees the duplicate content on our site as copied, not original: scraped, pulled from another source because we're supposedly too lazy to come up with any material of our own.
If this is the case, I will be removing the content, as the quality of the content sucks and there is quite a bit of it. Please don't respond with "if the content sucks, then why have it on your site..."
Then how do you determine if Google is seeing content as scraped? As you know, Google has made it very clear recently how they feel about scraped content.
There is a site I work for whose content, when you search Google for a snippet of its text, does not come up as the top result. I believe what happened is that they wrote blogs and articles and added them to their own site and to article directories at the same time, and the article directories got cached first.
If we're not coming up first for our article, that means we are not believed to be the original author, correct?
Should I remove all content from our site where this is happening, even though we actually did create these articles?
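In case it helps anyone reproduce the test, the check is just a quoted Google search for an exact sentence from the article (placeholder text here) to see who ranks first...

"an exact sentence copied from the article"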
Don't think so. When I search Google for inurl:www.nyc.gov/cgi-bin/exit.pl, there aren't any cached results containing a link pointing to your site.
I agree, it's harder to get SEO value that way. Better to keep things as clean as possible; SEO is hard enough as it is...
So, our developer just created a player at the bottom of this site I work for. It's not really important what it is. The thing is, when you go to our home page now, the JavaScript changes the URL from www.site.com to www.site.com/home
It's not actually redirected or anything (no 301; it's just the JavaScript doing this), but I'm worried that if someone links back to our site they're surely going to grab that URL, which is the wrong one.
Also, when you go to a category, the URL changes from www.site.com/category to www.site.com/home#category. Again, it's not a redirect, but I'm still worried people will link back to this since it happens across the entire site now...
I'm suggesting that we turn off this new feature until we find a workaround. I just wanted to confirm with you guys that this is best.
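One possible interim workaround (just an assumption on my part, not something anyone here has confirmed for your setup): keep a canonical tag on the home page pointing at the clean URL, so any links to /home still consolidate there:

<link rel="canonical" href="http://www.site.com/" />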
Thanks
I have a couple of sites using 3dcart, the ecommerce platform. Their tech support recently told me that they do not list sub-categories in the XML sitemap, only products and top-tier categories.
Am I the only one that sees a problem with this?
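For anyone unfamiliar with what's being left out, a sub-category entry in an XML sitemap is just another url element like any other (the loc here is hypothetical):

<url>
  <loc>http://www.example.com/category/sub-category/</loc>
</url>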
Thanks
302s weren't what I was attempting, but yeah, they all point to the same URL.
As of a day ago, the SERPs in Google are showing our listing with NO meta description at all and the incorrect title. Plus, the title varies based on the keywords searched.
Info: Something I just had done was have the multiple versions of their home page (duplicate content, about 40 URLs or so) 301 redirected to the appropriate place. I think they accidentally did 302s.
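For reference, a proper 301 looks like this in an Apache .htaccess file (a sketch, assuming Apache; the paths are made up, and the important part is the 301 status rather than a 302):

Redirect 301 /index-old.html http://www.site.com/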
Anyone seen this before?
Thanks
I'm sorry, what I meant was you should make the pagination pages canonical themselves, like for Page 2...
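Something like this on the Page 2 URL, pointing at itself (the URL pattern is just an assumption about your setup):

<link rel="canonical" href="http://www.site.com/category?page=2" />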
200 links on a page isn't that bad. Once you get to 250+ I would rethink the architecture.
Yes, you should use rel canonical on your pagination pages.
A good way to pass ranking between deep pages like this is to have a section at the bottom that offers similar listings in the area. This way you are giving the bots multiple ways to find each listing, rather than just from one page/category. Do it like this - http://www.estatesgazette.com/propertylink/advert/kensingtonrooms_hotel-_131_137_cromwell_road_london_sw7_4du-3264453.htm. They have a "More Properties from this Advertiser" section.
Is it bad to NOINDEX, FOLLOW your pagination pages?
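For reference, that's this meta tag in the head of each pagination page:

<meta name="robots" content="noindex, follow">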
Ok, so I was just hired to do SEO for a company and so I did a backlink analysis. As it turns out, they've got a pretty dirty link profile - footer links, white on white links, unrelated links, tiny font links, links from penalized sites, hardly any branded links... you get it.
I was thinking about taking the worst of the worst links, removing them and leaving the rest, just to clean it up a little. I don't want them to get penalized.... but... I don't want their rankings to drop either.
Think I should leave them all and just start building relevant, branded links?
OR
Clean up the whole link profile?
OR
Just clean it up a little?
Does it accomplish anything to place the canonical tag on the unique page itself? I thought it was only to be used on the offending pages that are the copies.
Thanks