Posts made by AlanMosley
-
RE: XML Sitemap on another domain
I agree with Mike, but I am curious why you can't have an XML sitemap. Maybe you need to change hosting.
-
RE: Do we need to update our sitemaps each time our content changes?
I have to disagree. If you include a <lastmod> tag, e.g. <lastmod>2012-07-24T08:36:20Z</lastmod>, then yes, you should update it. Bing, for example, will ignore your sitemap if it is not accurate; they have a tolerance of about 2% errors.
If your site is only small and all your pages are linked, then I would not even bother with a sitemap. Bing also states that you should not list all your pages in a sitemap, only your key pages.
Sitemaps don't help rankings, only indexing. On large sites it is good to tell the search engine which pages have changed, or which pages are good places to find links to other pages.
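If you do use lastmod, the easiest way to keep it accurate is to generate the sitemap rather than hand-edit it. Here is a minimal sketch in Python, assuming your pages are static HTML files on disk; the folder and domain names are made up for illustration:

# generate_sitemap.py - illustrative sketch; paths and domain are assumptions.
# Writes one <url> entry per page, taking <lastmod> from the file's
# modification time so the dates stay accurate as content changes.
from datetime import datetime, timezone
from pathlib import Path

SITE_ROOT = Path("public_html")      # hypothetical document root
BASE_URL = "http://www.example.com"  # hypothetical domain

entries = []
for page in sorted(SITE_ROOT.glob("*.html")):
    mtime = datetime.fromtimestamp(page.stat().st_mtime, tz=timezone.utc)
    entries.append(
        "  <url>\n"
        f"    <loc>{BASE_URL}/{page.name}</loc>\n"
        f"    <lastmod>{mtime.strftime('%Y-%m-%dT%H:%M:%SZ')}</lastmod>\n"
        "  </url>"
    )

Path("sitemap.xml").write_text(
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    + "\n".join(entries) + "\n</urlset>\n",
    encoding="utf-8",
)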
-
RE: Duplicate Domain Listings Gone?
Maybe.
I have the #1 position for a corner of the market, but I could not get a second page onto the front page. Since I already had #1, I made a second site and then had #1 and #2, so I made another, and I now have three on the first page. Once you have #1, this seems the way to go.
-
RE: Duplicate Domain Listings Gone?
Some time ago Google made a change where they did just this: they tried to get more domains onto the front page rather than many pages from the same domain.
This was a few years back, so I am not sure what you are seeing today; it may be that the domains were penalized in some other way.
-
RE: Should we NOINDEX NOFOLLOW canonical pages?
Devanur is correct; the canonical will never be seen if you noindex.
Also, you should use noindex,follow, not noindex,nofollow. Using nofollow means all link juice pointing to those pages will be lost; using follow means the link juice can flow back out of the pages.
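As a rough sketch of how you might stamp that directive on a whole set of pages at once (this uses Flask and the X-Robots-Tag response header; the route is hypothetical, and a meta robots tag in the page head does the same job):

# A minimal sketch (Flask): send noindex,follow for pages that should
# drop out of the index but still let their outgoing links pass juice.
from flask import Flask

app = Flask(__name__)

@app.route("/filtered/<path:slug>")  # hypothetical duplicate/canonicalized pages
def filtered_view(slug):
    resp = app.make_response(f"Filtered view: {slug}")
    # noindex = drop from the index; follow = link juice still flows out.
    # nofollow here would throw away the juice pointing into these pages.
    resp.headers["X-Robots-Tag"] = "noindex, follow"
    return resp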
-
RE: Lazy loading images effect image seo?
When I wrote the post above, I hosted my sites on premises and used a CDN for images, but cloud hosting has got so cheap that I just host the lot in the cloud. With bandwidth these days I really don't worry about images any more: I optimize them and host them in the cloud, and given that most people have a good download speed these days, I don't see it as a problem.
Also, for user experience it is more important that the page starts loading soon; when it completes is less important.
I would look into Microsoft Azure; they host all sorts of sites, Windows and Linux, and are cheap.
-
RE: Blocking some countries and redirecting that traffic
You sound like you have an expensive CDN.
Try Microsoft Azure; they are cheap and will cost you pennies for a CDN, or use YouTube for free.
-
RE: Local subdomains for English speaking countries
Your sites sound like they are similar and may be seen as duplicates.
Using country TLDs (.com.au, .co.uk and so on) will solve this problem. You can have a duplicate site if each copy is on a different TLD.
As for splitting efforts, normally I would agree, but not when trying to rank in different countries. I believe it is easier to rank 4 sites in 4 different countries than to rank worldwide with one.
-
RE: An affiliate website uses datafeeds and around 65.000 products are deleted in the new feeds. What are the best practises to do with the product pages? 404 ALL pages, 301 Redirect to the upper catagory?
OK, I understand a bit better now. Yes, I agree with what you are doing; it all makes sense.
-
RE: How cloudflare might affect "rank juice" on numerous domains due to limited IP range?
For a small number of sites I would not be concerned, but if you are worried, try Microsoft Azure: you get a unique IP for each website, and they are very cheap with a great interface.
-
RE: An affiliate website uses datafeeds and around 65.000 products are deleted in the new feeds. What are the best practises to do with the product pages? 404 ALL pages, 301 Redirect to the upper catagory?
A few points. I would not put all these pages in the sitemap.xml. If you are talking about an HTML sitemap visibly showing on your site, then I would not have any links on my site that do not go directly to a page; you don't want redirects from internal links, and you definitely don't want 404s.
You say you have a customized 404 page. Does this page return a 404 status code? If not, you will be getting soft-404 problems. Also, a 404 page really should not have any image, JS or CSS files on it, especially if they are changing; if you get a broken link on your 404 page, such as a missing image, you will get an internal loop, and search engines really don't like that.
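As a sketch of the right behaviour (Flask, purely illustrative): the handler returns a real 404 status code, and the page is self-contained, so it can never have broken links of its own:

# A minimal sketch (Flask): a custom 404 page that returns a real 404
# status and references no external images, JS or CSS files.
from flask import Flask

app = Flask(__name__)

PAGE_NOT_FOUND = """<!DOCTYPE html>
<html><head><title>Page not found</title>
<style>body { font-family: sans-serif; }</style></head>
<body><h1>Sorry, that page does not exist.</h1>
<p><a href="/">Back to the home page</a></p></body></html>"""

@app.errorhandler(404)
def not_found(error):
    # The tuple sets the real 404 status; serving this page with a
    # 200 instead is exactly the soft-404 problem described above.
    return PAGE_NOT_FOUND, 404
-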
RE: Why are some pages now duplicate content?
Remus said it well: they are obviously thin-content pages, almost identical, and search engines don't want to list every variation of a similar page. You need to add more unique content to each page.
But when I visit, using IE 11 on Windows 8.1, I get a big warning about cookies. This can't be good for conversion; do you really need this warning?
-
RE: I want to put 65.000 productpages on NOINDEX, FOLLW at once! Would Google mind?
That should be fine. I am glad you said noindex,follow and not noindex,nofollow.
Remus posted a link to Matt Cutts, but that is talking about adding pages in great numbers.
-
RE: Does Moz Analytics need Google Analytics installed?
Yes and no. No, it's not needed, but there is some extra data that they get from GA, such as traffic data: the number of visits and the keywords sending visits.
-
RE: Experience/suggestions in redirecting old URLs (from an existing site) to new URLs under a new domain
If the site structure is the same, then a simple redirect from oldDomain.com to newDomain.com will do the trick.
If the site structure is different, then I would look at your external links. If a page has links, redirect it to a page with the same or similar content on the new site; for those that don't have external links I would not bother, as there is no value in redirecting them.
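If the structure really is identical, the whole job is one rule that keeps the path and sends a 301. A minimal sketch (Flask, with made-up domain names):

# A minimal sketch (Flask), running on the old domain: 301 every request
# to the same path on the new domain. The domain name is an assumption.
from flask import Flask, redirect

app = Flask(__name__)
NEW_DOMAIN = "http://www.newdomain.com"

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def send_to_new_domain(path):
    # 301 (permanent), not the default 302, so the move sticks and
    # link juice follows the page to its new home.
    return redirect(f"{NEW_DOMAIN}/{path}", code=301)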
-
RE: 301's & Link Juice
A 301 simply redirects a request to a new request with a new URL. If the page has no external links, then a 301 will do nothing for you. If you don't want the page, delete it, remove any internal links, and you're done.
Each request leaks link juice.
If you have links pointing to page A, and you 301 page A to page B, then the link juice will go to page B, but will lose a bit on the way; in fact you lose it twice, once for the link and once for the 301 redirect. If the only links are internal links, why not just link to page B in the first place? But I would not remove the page: all pages have PageRank to start with, and the more pages on your site the more PR, though also the more pages to share it with. With smart linking you can sculpt the PR so that more falls on the pages you want and less on the ones you don't.
Read this simple explanation: http://thatsit.com.au/seo/tutorials/a-simple-explanation-of-pagerank
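To put rough numbers on the "lose it twice" point, assume the classic PageRank damping factor of 0.85 (an illustration only, not a figure Google has confirmed for redirects):

# Toy arithmetic only: why a link that goes through a 301 passes less
# juice than a direct link. 0.85 is the classic PageRank damping
# factor, used here purely as an assumption.
DAMPING = 0.85

juice = 1.0
via_direct_link = juice * DAMPING    # linking straight to page B
via_301 = juice * DAMPING * DAMPING  # link to page A, then 301 to B

print(f"direct link to B:     {via_direct_link:.4f}")  # 0.8500
print(f"link to A, 301 to B:  {via_301:.4f}")          # 0.7225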
-
RE: Importance of 301 Redirects
You have a point. But if your page has no links, then it is likely search engines see it as poor quality (it has been around so long, yet has no links), so page age may be a bad signal, or at least not one worth keeping.
That is, if page age is even looked at.
It may also be that the age is not transferred; this link is all I know about age:
http://www.youtube.com/watch?v=-pnpg00FWJY&feature=player_embedded
It talks about domain age, not page age, but Matt talks about the first time they crawl a domain. If we adapt that to a page, I would suggest that they look at the first time they crawl a URL, so changing a URL will be enough to lose the age.
OK, I don't have inside knowledge of Google, but I do know how a 301 works and I can give you a pretty good guess. A 301 tells a request to make a new request to a new URL. I think that Google follows it just like that: here is a new page; it has no relevance to the old page, it's just a new URL. That they keep a record of every 301, of where a page used to be, forever, I find hard to believe. When they crawl an external link, they get returned a new URL, and the link juice is redirected through a second request (hence the second loss of link juice) and lands on the new page.
Think of a page that has many 301s from many old pages: it would then have many ages. I would suggest Google keeps it simple; they just follow the 301 to a new URL and treat it as a newly found page, distributing the link juice as they would on any other page they land on.
For a bunch of pages that don't have links, I can't see any difference, but what I would worry about is page speed. Reading a long set of 301 rules and then trying them can slow a site down (remember, we are talking about a large number here). If you can do all your 301s with a few lines of code, then that's not a problem, but if you have a long list, then it's going to be a problem for every page on the site.
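For example (a sketch, with made-up URL patterns): one pattern-based rule covers every moved page in a couple of lines, whereas an enumerated list, like thousands of individual Redirect lines in a server config that get checked in order on every request, grows with the site:

# A sketch of the difference; the URL patterns are hypothetical.
import re

# Cheap: one rule covers every old product URL, however many there are.
PATTERN = re.compile(r"^/products/(\d+)\.html$")

def pattern_redirect(path):
    m = PATTERN.match(path)
    return f"/shop/item/{m.group(1)}" if m else None

# The alternative: one entry per moved page, times 65,000, which has to
# be maintained and (in a config file) scanned for every request.
URL_MAP = {
    "/products/1001.html": "/shop/item/1001",
    # ...
}

def list_redirect(path):
    return URL_MAP.get(path)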
Back to your point: I have also thought about this, but for a slightly different reason, and that is original content. If your page was from 2001 and you were scraped in 2003, will the 2003 scrape now get the original-content credit? But then that may be the case anyhow, even with a 301, and again, I don't think it is a big problem: sites that scrape are not likely to be given credit, as they will have lots of scraped content, be known as scraping sites, and not be awarded credit.
-
RE: What are we doing wrong with Rich Snippets?
A bit late here.
But as I understand it, the price cannot be in a meta tag; it must be visible to the user, as it is in the schema.org example:
<span itemprop="price">$55.00</span>
http://schema.org/Product
-
RE: Will using 301 redirects to reduce duplicate content on a massive scale within a domain hurt the site?
Yes, that's correct.
The pages will be found just as quickly through their new links as they would have been through their old links with 301s.
-
RE: CDN image links passing SEO benefit?
No, I would have thought they were outgoing links.