If you want Google etc. to treat it as www.site.com, then add <link rel="canonical" href="http://www.site.com" /> in the head of that page. Then backlinks to www.site.com and www.site.com/home will all give link juice to www.site.com. The category page is a different issue in that there is no category page anymore, just a reference on the home page. Your developer needs to rethink why he would do this sort of layout remapping, which makes it harder to get SEO value from the pages.
Posts made by oznappies
-
RE: Javascript changing URL - Thoughts?
-
RE: Site not being Indexed that fast anymore, Is something wrong with this Robots.txt
I am not sure why you are setting disallow rules for file types. Google would not index wmv or js etc. anyway, as it cannot parse those file types for data. If you want to coax Google into indexing your site, submit a sitemap in Webmaster Tools. You could also set NoFollow on the anchors for the pages you want to exclude, and keep robots.txt cleaner by just including top-level subdirectories such as admin etc. There just seem to be a lot of directories in there that do not relate to actual pages, and Google is only concerned with renderable pages.
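For example, a leaner robots.txt that only blocks the top-level directories you actually want excluded might look like this (the directory names here are placeholders for your own):

    User-agent: *
    Disallow: /admin/
    Disallow: /private/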
-
RE: Lost ranking once optimised a page
Any unsafe sites will be removed from the results pages. So if you rank 15 and there are 2 unsafe sites between positions 1 and 15, your page will move up 2 spots to replace the pages that cannot be shown, and so your rank will show as 13.
-
RE: Lost ranking once optimised a page
The canonicals do help, as they funnel the links to provide rank for the http://www.mybabyradio.com/experts-faq/conjunctivitis page. If you leave them out and follow Google Webmaster Tools, you will see duplicate content errors showing up with the 4 URLs Ryan mentioned highlighted.
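Concretely, each of those duplicate URLs would carry the same tag in its head, all pointing at the one page:

    <link rel="canonical" href="http://www.mybabyradio.com/experts-faq/conjunctivitis" />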
-
RE: How to relate two sites Domain Authority
Since you were looking to advertise, I was thinking you were looking for referral clients. In that case, is it worth paying for advertising? Most SEO companies can get you on good DA sites with good article submissions. It is mentioned on here a bit that links from multiple good C-block DAs help to improve your link juice. So good links on lots of well-ranked, relevant sites should help your ratings more than a single link on a DA 70 site that you paid $500 for. I know Australia is a smaller market than the UK, but we have moved all our main keyphrases to page 1 or 2 in a very short time by getting good links from on-topic related sites and getting A's for all of them in on-page optimization.
-
RE: How to relate two sites Domain Authority
Strangely, it is not that straightforward when it comes to conversions. If the traffic to site B is more in the market for your sunglasses (i.e. mostly from France) compared to site A, whose traffic could be from Scotland, then conversions could be better on site B than on site A.
I would say if you are only talking about $600 a year, do both.
Check with the Keyword tool and Rank Tracker to see where the two sites rank for the keywords you consider important for your site, so you can tell whether visitors would see site A or B and be able to follow your ad.
-
RE: Crawl report showing only 1 crawled page
Most of the menu system and site could work the same in jQuery. Flash is not indexable by the search engines and requires a sitemap to be generated to show the structure of the site. As Ryan says, there is only one link on the site. It is a tradeoff between the ease of creating a site in Flash and having a good SEO-friendly site that Google will rank. Even if you do sitemap your pages, if the content is contained in Flash, it will not be seen.
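A minimal sitemap.xml just lists each page URL so the engines can at least find the structure (www.example.com is a placeholder for your domain):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url><loc>http://www.example.com/</loc></url>
      <url><loc>http://www.example.com/products</loc></url>
    </urlset>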
-
RE: Duplicate Content within Website - problem?
David, the sub-pages, as far as Google was concerned, fed all the juice to the product page.
No, the subpages were not indexed, as we told Google in the canonical that they all came from the same page.
How do you describe a red widget1 differently to a blue widget1? The item is the same and there is only one word different in the content, so we decided to skip a physically different URL for the different colours and just use different anchors on the thumbnail images. The title and alt tags would contain specific information about the colour of the widget.
If someone searches for red widget1 and we have keyword strength in widget1, they will get to the widget1 page, where they will see the red widget1 and any other colours for that widget1.
The canonical allows you to specify the content origin. So if you have /category/widget1/red and /category/widget1/blue describing the same content, you could use /category/widget1 in the canonical ref, and both pages would give juice to the main page and get no duplicate content penalty.
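So both variant pages would carry the same tag (the domain is a placeholder):

    <!-- in the head of both /category/widget1/red and /category/widget1/blue -->
    <link rel="canonical" href="http://www.example.com/category/widget1" />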
This only works if you have a small number of variants on each widget as Ryan pointed out, such as size, colour variations etc. Otherwise it is too confusing for humans to follow.
With the amount of content you are looking at, it is probably worthwhile getting a usability study done.
-
RE: Duplicate Content within Website - problem?
We had a similar issue, but not on that scale. We had product A in Red, Blue, Green etc. In the first approach we used a URL /category/product?id=subproduct and set id as a parameter in the Google Webmaster Tools site config. This passed all the link juice to /category/product and ensured that all the subpages gave their juice to the appropriate page.
We then decided that all those page loads, just to basically show an image for each subproduct, were a pain for the customer, and so decided to show small images on the /category/product page and use a jQuery call to overlay a larger image when the customer clicked a particular product. This produced faster load times and a better customer experience.
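A rough sketch of that overlay behaviour (the selectors and data attribute are made up for illustration):

    // Show a larger image in an overlay when a thumbnail is clicked,
    // instead of loading a separate page for each colour variant.
    $('.product-thumb').click(function () {
      var largeSrc = $(this).data('large');     // full-size image path from a data-large attribute
      $('#overlay img').attr('src', largeSrc);  // swap in the large image
      $('#overlay').fadeIn();                   // reveal the overlay
    });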
-
RE: Nofollow internal links
I agree with Ryan, but question the usability factor of 260 links on your page. Have you done a usability study to check how easily your end users can find the information they are after? It can be daunting for a robot, let alone a human, to sift through all the sub-menus on your categories. It brings to mind the Telstra and Optus sites that take a significant time to find the information you are after because of the huge number of options.
I also notice that when you change currency, a prompt is displayed saying 'all items in the shopping cart will be deleted', even when the cart is empty. Should you not check if the cart is empty before displaying the message? Otherwise the prompt is redundant.
If you still want to display them but not have the robots index them, late-populate them from a jQuery async call on demand as the user hovers over a menu item, as in the sketch below. You would need to ensure they are linked somewhere, or on a sitemap, so the search engines can still find them.
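Something along these lines (the URL, selectors and data attribute are placeholders):

    // Fetch sub-menu markup only when the user first hovers,
    // so it is not in the initial HTML that the robots crawl.
    $('.menu-item').one('mouseenter', function () {
      var $item = $(this);
      $.get('/menu/' + $item.data('id'), function (html) {
        $item.find('.sub-menu').html(html); // populate the fly-out on demand
      });
    });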
-
RE: Help with Roger finding phantom links
Thanks again Ryan, you have been very helpful answering a lot of my questions.
-
RE: Anyone know where we can find a validator for schema.org?
And that is why so many developers prefer to work in Flash or Silverlight. HTML will never be a standard as long as major browsers only support parts of it. Those development languages include the validation in the compiler.
It is only asking for misuse of a schema to release a 'standard' with little documentation and no validation. Ah, the good old days of IE6 and every second line of HTML being a test for what the browser is.
-
RE: Help with Roger finding phantom links
I have been looking at the data that Roger is reporting for the duplicate content, and in ALL cases there is either a 301 or a NoIndex. So now I do not know why Roger is reporting them as duplicates; robots should not see the second entry.
-
RE: Help with Roger finding phantom links
I did not think of looking at the CSV report. I see it now, thanks Ryan. There should be a soft 404 handler in place to process the bad URLs; I will have to see why it is not working.
With Tumblr, I was looking for an easy way to add a blog to the site.
The RSS is coming from Tumblr, as is all the content.
When we specify tags in Tumblr, it creates URLs, e.g. mypage.com/article/tag1, mypage.com/article/tag2, mypage.com/article/tag3, which all contain the content of mypage.com/article without a canonical to the original. It is a really strange, non-SEO-friendly approach, and so I wondered if anyone had similar problems.
-
RE: Help with Roger finding phantom links
I removed the links and just left the text, so these will cut and paste now. It confuses me where Roger found the links.
Thanks for running the Xenu scan. I have tried other site scanners and come up blank.
-
RE: Ranking on french search engine
If you need to check a few phrases, you can run them by my wife, as she has worked as a translator in France with manuscripts and documents before. She lived there for 5+ years. Send me an email at sales@oznappies.com and I will give you her email.
-
RE: Our Twitter App
If you only have a few pages, and then lots of the same page with just a different id, you could mark it as a parameter in Google Webmaster Tools as Barry suggests and add the canonical as John suggests. The one thing you would need to do in that case is change your 302 to a 301 on the http://www.arenaflowers.com/flowers-fun/flowers/message page when it redirects to home, otherwise you will lose link juice.
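If the site runs on Apache, for example, the permanent redirect could be as simple as this (a sketch assuming mod_alias; adjust for whatever server you actually run):

    # 301 (permanent) instead of 302 (temporary), so link juice is passed
    Redirect 301 /flowers-fun/flowers/message http://www.arenaflowers.com/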
I would also suggest you run your site through http://gtmetrix.com/, as the image load times were up to 15 seconds and print.css is returning a 404 error. You should also run your home page through the On-Page reports on SEOmoz.
-
Help with Roger finding phantom links
It's Monday and Roger has done another crawl, and now I have a couple of issues:
- I have two pages showing 404->302 or 500 because these links do not exist. I have to fix the 500, but the 404 is trapped correctly:
http://www.oznappies.com/nappies.faq & http://www.oznappies.com/store/value-packs/\
The issue is that when I do a site scan, there is no anchor text that contains these links. So what I would like to find out is where Roger is finding them. I cannot see anywhere in the Crawl Report that tells me the origin of these links.
- I also created a blog on Tumblr, and now every tag and RSS feed entry is producing a duplicate content error in the crawl stats. I cannot see anywhere in Tumblr to fix this issue.
Any ideas?
-
RE: Anyone know where we can find a validator for schema.org?
I was in the process of adding videos to the site, and when I saw the Whiteboard Friday with the Bing interview, I thought this would be a good time to add the markup. Since the examples are very limited, I wanted to ensure my interpretation complied, but alas it looks like I will have to wait for Google to index the page and see what happens. It does surprise me that they release a 'standard' without having a validator available. I have also posted on Microsoft Connect to see if I can get a beta tool via MSDN.
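For reference, the sort of markup I am trying to validate looks like this (the names and values here are just illustrative):

    <div itemscope itemtype="http://schema.org/VideoObject">
      <h2 itemprop="name">Fitting a cloth nappy</h2>
      <meta itemprop="duration" content="PT2M30S" />
      <span itemprop="description">A short how-to video for new parents.</span>
    </div>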
Thanks for looking up those references for me Donnie.
-
RE: Anyone know where we can find a validator for schema.org?
Hi Donnie,
I know that one, but it does not test the schema.org markup as yet. Google makes note that although they support the new markup, the current rich snippets tool does not and they hope to have a new one out shortly.
http://googlewebmastercentral.blogspot.com/2011/06/introducing-schemaorg-search-engines.html
states the following: "4) Test your markup using the rich snippets testing tool. ... rich snippets previews are not yet shown for schema.org markup. We'll be adding this functionality soon." (6/5/11)