Thanks Everett - Just popping across to Stack Overflow now!
Posts made by TomVolpe
-
RE: Google My Business: Multiple businesses operating from same address
Hi Ria,
The place falls under the multiple practitioners, single location scenario.
Multi-Practitioner Practices: "For practices with multiple public-facing doctors, it is acceptable to create local pages for each doctor, in addition to the practice's local page. If this is the case, do not include your business name in the name of the practitioners' pages. And try to differentiate between these pages with either a different phone number or suite number for each doctor, when possible." - See more at: http://www.searchinfluence.com/2016/04/google-my-business-for-doctors-visibility-authority-seo/
According to Mike Blumenthal's site, the Google instructions for this have vanished, so you can get his take on the same thing here:
"
Individual practitioners (e.g. doctors, lawyers, real estate agents)
An individual practitioner is a public-facing professional, typically with his or her own customer base. Doctors, dentists, lawyers, financial planners, and insurance or real estate agents are all individual practitioners. Pages for practitioners may include title or degree certification (e.g. Dr., MD, JD, Esq., CFA).
An individual practitioner should create his or her own dedicated page if:
- He or she operates in a public-facing role. Support staff should not create their own pages.
- He or she is directly contactable at the verified location during stated hours.
A practitioner should not have multiple pages to cover all of his or her specializations."
http://blumenthals.com/blog/2016/04/10/google-my-business-guidelines-mia/
Hopefully that'll help you deal with your client more confidently.

Ray pp Tom.
-
Selective 301 redirections of pages within folders
Redirection Puzzle - it's got me puzzled anyhow!
The finished website has just been converted from an old .aspx affair to a WordPress site. Some directory structures have changed significantly, and there appears to be a load of older medical articles that have not been added back in - it sounds unlikely that they will be. Therefore the unmatched old news articles need to be pointed at the top-level news page, to keep hold of any link value they may have accrued.
The .htaccess file starts with iThemes Security's code, followed by the main WordPress block, and I have added the user redirects to the final section of the file. I have been through the redirects and rewrites line by line to verify them, and the following sections are giving me problems. This is probably just my aging brain failing to grasp basic logic.
If I can tap into anybody's wisdom for a bit of help I would appreciate it. My eyes and brain have gone to jelly.
I have used htaccesscheck.com to check out the underlying syntax and ironed out the basic errors that I had previously missed. The bulk of the redirects are working correctly.
# Here (lines 408-410 of the file) there are some very long media URLs which are absent on the new site, and I am simply redirecting visiting spiders to the page that will hold media in the future. The media items refuse to redirect:
redirect 301 /Professionals/Biomedicalforum/Recordedfora/Rich%20Media%20http:/kplayer.kcl.ac.uk/ess/echo/presentation/15885525-ff02-4ab2-b0b9-9ba9d97ca266 http://www.SITENAME.ac.uk/biomedical-forum/recorded-fora/
redirect 301 /Professionals/Biomedicalforum/Recordedfora/Quicktime%20http:/kplayer.kcl.ac.uk/ess/echo/presentation/15885525-ff02-4ab2-b0b9-9ba9d97ca266/media.m4v http://www.SITENAME.ac.uk/biomedical-forum/recorded-fora/
redirect 301 /Professionals/Biomedicalforum/Recordedfora/Mp3%20http:/kplayer.kcl.ac.uk/ess/echo/presentation/15885525-ff02-4ab2-b0b9-9ba9d97ca266/media.mp3 http://www.SITENAME.ac.uk/biomedical-forum/recorded-fora/
# Old site pagination URLs redirected to the new top-level "news" page. Here I am simply pointing all the pagination URLs for the news section that were indexed to the main news page. These work, but append the pagination code onto the new visible URL. Have I got the syntax correct in this version of the line to suppress the appended garbage?
RewriteRule ^/LatestNews.aspx(?:.*) http://www.SITENAME.ac.uk/news-events/latest-news/? [R=301,L]
# On the old site many news directories (effectively a blog) contained articles that are unmatched on the new site; these have been redirected to the new top-level news (blog) page. In this section I became confused about whether to use RedirectMatch or RewriteRule to point the articles in each year directory back to the top-level news page. Whenever I have added a RedirectMatch command it has been disabling the whole site, despite my syntax check telling me it is syntactically correct. Currently I'm getting a 404 for any of the old URLs in these year-by-year directories, instead of a successful redirect. I suspect the regex lingo is not clicking for me.
My logic here was to rewrite any .aspx file in the directory to the latest news page at the top. This is my latest attempt to rectify the fault - am I nearer with my syntax or my logic? (The actual URLs and paths have been substituted, but the structure is the same.) So what I believe I have set up is: in an earlier section, news posts that have been recreated on the new site are redirected one to one, and they are working successfully. If a matching URL is not found, then when the parsing of the file reaches the line for the 1934 directory it should read any remaining .aspx URL request, rewrite it to the latest news page as a 301, and stop processing this block of commands. The subsequent commands in this block repeat the process for the other year groups of posts. Clearly I am failing to comprehend something, and illumination would be gratefully received.
RewriteRule ^/Blab/Blabbitall/1934/(.*).aspx http://www.SITENAME.ac.uk/news-events/latest-news/ [R=301,L]
#------Old site 1933 unmatched articles redirected to new news top level page
RewriteRule ^/Blab/Blabbitall/1933/(.*).aspx http://www.SITENAME.ac.uk/news-events/latest-news/ [R=301,L]
#------Old site 1932 unmatched articles redirected to new news top level page
RewriteRule ^/Blab/Blabbitall/1932/(.*)/.aspx http://www.SITENAME.ac.uk/news-events/latest-news/ [R=301,L]
#------Old site 1931 unmatched articles redirected to new news top level page
RewriteRule ^/Blab/Blabbitall/1931/(.*)/.aspx http://www.SITENAME.ac.uk/news-events/latest-news/ [R=301,L]
#------Old site 1930 unmatched articles redirected to new news top level page
RewriteRule ^/Blab/Blabbitall/1930/(.*)/.aspx http://www.SITENAME.ac.uk/news-events/latest-news/ [R=301,L]
Many thanks if anyone can help me understand the logic at work here.
-
RE: & And + symbols - How does Google read these?
Hi,
This is an interesting question and I was just looking it up a few days ago.
Answers to your questions:
1. Yes, Google can and does read ampersands and pluses, and shows slightly different results depending on which you use.
2. Maybe. Check the SERPs: are the results for 'black & white football' and 'black + white football' different? When I took a quick look they were, so once your 'black & white football' page starts ranking, check the SERPs for 'black + white football' - you may be in the same place for that keyword, or you may be much lower. If you're at the same position there's no need to optimise another page; if you're lower then maybe you should create one. Be sure to check search volumes first, though - there's no reason to spend time creating a unique, optimised page for the keyword using a plus instead of an ampersand if nobody is searching for it.
3. Yes, they notice and treat each one slightly differently. Take these three example searches for 'design & branding', 'design and branding' and 'design + branding':
https://www.google.co.uk/search?q=design+%26+branding&ie=UTF-8&oe=UTF-8&ip=0.0.0.0&pws=0&uule=w+CAIQICIA&gws_rd=ssl
https://www.google.co.uk/search?q=design+and+branding&ie=UTF-8&oe=UTF-8&ip=0.0.0.0&pws=0&uule=w+CAIQICIA&gws_rd=ssl
https://www.google.co.uk/search?q=design+%2B+branding&ie=UTF-8&oe=UTF-8&ip=0.0.0.0&pws=0&uule=w+CAIQICIA&gws_rd=ssl
We're seeing a lot of the same domains showing up - with a lot of the same pages - but in different positions, as well as some sites sneaking onto page one for one term and landing halfway down page two for another. Take www.steve-edge.com - currently 7th for 'design & branding', 16th for 'design and branding' and 20th for 'design + branding'.
So there's the answer: yes, Google can understand plus signs and ampersands, and yes, it treats each query slightly differently. You may be found at the same position for all variations, or you may see fluctuations between each SERP, but what's most important is checking whether people are actually searching those terms with plus signs or ampersands before making the page - there's no point creating and optimising a page that nobody is looking for when the page they are looking for is already being found fine.
Hope that helps!
Tom
-
RE: 2.3 million 404s in GWT - learn to live with 'em?
Hi,
Sounds like you’ve taken on a massive job with 12.5 million pages, but I think you can implement a simple fix to get things started.
You're right to think about that sitemap; make sure it's being dynamically updated as the data refreshes, otherwise it will be responsible for a lot of your 404s.
I understand you don't want to add 2.3 million separate redirects to your .htaccess, so what about a simple rule: if the request starts with /listing/ (one of your directory pages), is not a file and is not a directory, then redirect back to the homepage. Something like this:
# does the request start with /listing/ (or whatever structure you are using)?
RewriteCond %{REQUEST_URI} ^/listing/ [NC]
# is it NOT a file and NOT a directory?
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# all true? Redirect
RewriteRule .* / [L,R=301]

This way you can specify a certain URL structure for the pages which tend to turn into 404s. Any 404s outside of your rule will still serve a 404 code and show your 404 page, so you can fix those manually, but the pages which tend to disappear can all be redirected back to the homepage if they're not found.
You could still implement your 301s for important pages, or simply recreate a page if it's worth doing so, but you will have dealt with a large chunk of your non-existent pages.
I think it's a big job and those missing pages are only part of it, but it should help you to sift through all of the data to get to the important bits - you can mark a lot of URLs as fixed and start giving your attention to the important pages which need some work.
Hope that helps,
Tom
-
RE: Sitemap international websites
Hi there,
You can use separate sitemaps along with a sitemap index, but when you use hreflang annotations you must specify all alternates for each URL, or they may not be understood correctly. You're fine to use one sitemap for all of the content which you don't wish to add hreflang tags to, and another for the URLs with hreflang tags.
Just remember to specify every version of each page you mention in your hreflang sitemap, along with a <loc> entry, all wrapped in a <url> tag:

<url>
<loc>http://example.com/</loc>
<xhtml:link rel="alternate" hreflang="x-default" href="http://example.com/" /> <!-- users with no version specified -->
<xhtml:link rel="alternate" hreflang="en" href="http://example.com/" /> <!-- English users in any country -->
<xhtml:link rel="alternate" hreflang="en-us" href="http://example.com/" /> <!-- US English -->
<xhtml:link rel="alternate" hreflang="en-gb" href="http://example.co.uk/" /> <!-- UK English -->
<xhtml:link rel="alternate" hreflang="it-it" href="http://example.it/" /> <!-- Italian users in Italy -->
<xhtml:link rel="alternate" hreflang="it" href="http://it.example.com/" /> <!-- Italian users anywhere -->
</url>

You cannot have one sitemap for hreflang="en" and another for hreflang="it", but you can use a separate sitemap on example.it specifying the static pages on that domain:

<url><loc>http://example.it/</loc></url>
<url><loc>http://example.it/page2</loc></url>
Your hreflang sitemap on example.it would have the same hreflang tags as the .com, but with the Italian domain specified in <loc>:

<url>
<loc>http://example.it/</loc>
<xhtml:link rel="alternate" hreflang="x-default" href="http://example.com/" /> <!-- users with no version specified -->
<xhtml:link rel="alternate" hreflang="en" href="http://example.com/" /> <!-- English users in any country -->
<xhtml:link rel="alternate" hreflang="en-us" href="http://example.com/" /> <!-- US English -->
<xhtml:link rel="alternate" hreflang="en-gb" href="http://example.co.uk/" /> <!-- UK English -->
<xhtml:link rel="alternate" hreflang="it-it" href="http://example.it/" /> <!-- Italian users in Italy -->
<xhtml:link rel="alternate" hreflang="it" href="http://it.example.com/" /> <!-- Italian users anywhere else -->
</url>

So, each domain would need its own 'sitemap 1' (the hreflang sitemap), its own 'sitemap 2' specifying the pages which weren't in the hreflang sitemap, and its own sitemap index pointing to both sitemaps. Unless you verify both properties under the same WMT account - then you could use one sitemap containing every <loc> from all the different sites, along with all their international variations, and reference that one international sitemap in the sitemap index for every site. This post explains multiple domains: https://support.google.com/webmasters/answer/75712
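For reference, a sitemap index along those lines might look like this (the file names are hypothetical):

<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap><loc>http://example.com/sitemap-hreflang.xml</loc></sitemap>
<sitemap><loc>http://example.com/sitemap-pages.xml</loc></sitemap>
</sitemapindex>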
This Webmaster help page explains sitemap hreflang implementation: https://support.google.com/webmasters/answer/2620865?hl=en
Hope that helps,
Tom
-
RE: Migrating domains from a domain that will have new content.
Hi,
Yes, this will work whether you're on a new domain, a subdomain, or even just in a folder on the existing domain.
As long as the URLs you were using aren't being used for the parent company's content, you can redirect them all back to your subdomain with the method above.
Hope that helps,
Tom
-
RE: Migrating domains from a domain that will have new content.
Hi there,
You'd redirect just the same as redirecting an entire site, except you only create rules for the pages you used to own. Mirror your old content on your new site (if you can use the same URIs, that will make things easier) and then write a series of rules to redirect only your content.
If your URIs are staying the same you could do something like:
RewriteCond %{REQUEST_URI} ^/your-old-content/$ [NC,OR]
RewriteCond %{REQUEST_URI} ^/folder/your-other-content$ [NC,OR]
RewriteCond %{REQUEST_URI} ^/mynews/.* [NC]
RewriteRule (.*) http://www.newsite.com/$1 [R=301,L]

You could use regex to match lots of your URLs at once, but you'd need to be careful not to redirect the new owner's pages too. When I redirect an entire site I always create a final rule which says "anything else? Send it to the homepage", like this:
RewriteRule .* http://www.newsite.com/ [R=301,L]
But this time you would leave that off, as any requests not caught by your rewrite conditions will belong to the new owner and should go where they're intended on the old site.
Hope that helps explain things,
Tom
-
RE: Rogerbot problems
Hi there,
If you wanted rogerbot to only crawl once a month then you could put this in your robots.txt:
User-agent: Rogerbot
Disallow: /

Then you'd need to remove the block for the week when you plan on checking your Moz dashboard, but this would make your Moz Analytics crawl reports fluctuate between 'Rogerbot cannot access your site' and the actual crawl errors. And you don't know exactly when Rogerbot will crawl, so you may end up blocking the crawls you wanted to allow. I would recommend against this method, but as you asked about it specifically I've included it as an option.
I think your best course of action would be to add a crawl delay for Rogerbot:
User-agent: Rogerbot
Crawl-delay: 2

Where the number is the time in seconds to wait before crawling the next page.
You could also use:
User-agent: *
Crawl-delay: 5

To limit all crawlers except Googlebot, which does not respect Crawl-delay and instead follows the crawl rate set in Webmaster Tools.
I don't believe you can set a lower crawl limit than that, but you could use wildcards in your robots.txt to disallow certain pages by quite relaxed rules. Say your store has a size query string which doesn't actually change the content of the page; you could use:
Disallow: /*?size=
This would stop all of those pages being crawled. On a site with 1000 products and 3 sizes for each, you're disallowing 3000 pages from being crawled, so be careful with wildcards and use the Webmaster Tools robots.txt tester to make sure you're not about to disallow access to a lot of important pages.
Hope that helps,
Tom
-
RE: Is it convinient to use No-Index, Follow to my Paginated Pages?
Hi there,
If you don't want these pages to appear in the index then yes, noindex,follow would be the best directive to ensure any link juice still flows through those pages into other indexed pages, such as the blog posts listed on them.
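For reference, that directive is a single tag in the <head> of each paginated page:

<meta name="robots" content="noindex,follow" />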
The harm of using noindex comes when those pages are actually bringing in organic traffic, so have a look in analytics before you start noindexing. Take a look at organic traffic where your paginated pages are the landing page - you could use a filter for something like page/ or page/[0-9]+ (or however your URLs are structured for pagination) to look at all of these pages.
If those pages are bringing in organic traffic, why not optimise your metas and encourage even more users onto them? If they aren't getting any entrances from search, you're safe to do whichever you prefer - you could noindex,follow them to drop them from the index and keep the PR flowing.
Those pages aren’t harming you so you’re safe to leave them if you’re unsure, but always check entrances from search before you drop ANY page from the index. That way you can be sure you won’t lose any of your traffic.
Hope that helps,
Tom
-
RE: Do quotation marks in content effect SERPs?
Hi there,
You’re fine to have your product description quoting the text around the side of the product, but if you were to change it to something like this without quotes:
The words around the edge of the lazy susan read: Explore nature. Dream big. Take time to smell the flowers. Enjoy the changing seasons. Seize the day. Relish the night. Live life to the fullest.
…that would have exactly the same SEO value as the existing description. Quotation marks only act as exact-match operators when searching in Google (and most other search engines); they don't affect the way the page itself is seen by Google. The same way that using bold and italics to emphasise your keywords would not directly influence rank (but may make your content more easily digestible, earning it more links and indirectly affecting rank), your quotes are used to enhance human readability - either way is fine.
Take a real world example: I pulled a page from my history which included a quote, “favor composition over inheritance” - (http://programmers.stackexchange.com/questions/65179/where-does-this-concept-of-favor-composition-over-inheritance-come-from)
Take a look at the screenshot I took below (from an unclean browser, sorry) – or you can run a search yourself – and we still see Wikipedia at the top, with its DA 100 (and no quotes); we see stackoverflow rising above stackexchange, with a higher DA; one result has more links than the stackexchange page, one has fewer. But they still perform better.
The stackexchange page with 5 counts of "favor composition over inheritance" (with quotes) is still outranked by the others.
- The 3rd result uses the keyword 6 times, twice in quotes.
- The 2nd result uses the keyword once, without quotes.
- The 1st result (Wikipedia) uses the term once, without quotes, and still ranks #1 due to its other (better) metrics.
There are a number of factors which could affect the position of these pages for this keyword, such as anchor text for links to those pages, partial match keywords in the text and other ranking factors which I did not look into – but hopefully it will give you a real example of quotation marks not directly affecting the value of a keyword in Google’s eyes.
Write the descriptions the way that sounds best to you - and optimise them for human readability, as quotes versus no quotes doesn't make much of a difference.
Hope that helps,
Tom
-
RE: Hreflang link is always going to the homepage
Hi,
You are correct in thinking the hreflang tag should be different on each page, pointing to the different versions of that page, not the homepage.
Unless you feel like manually coding each page, you could use PHP and $_SERVER['REQUEST_URI'], or JS and window.location.pathname, to get the current page path and append it to your domain in the hreflang tags in your <head>.
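For illustration, a minimal sketch of the PHP route - the domains, and the assumption that every language version shares the same paths, are hypothetical:

<?php
// Current page path, e.g. "/products/red-widgets/"
$path = $_SERVER['REQUEST_URI'];

// Hypothetical map of hreflang values to the domain serving each version
$versions = array(
    'x-default' => 'http://www.example.com',
    'en'        => 'http://www.example.com',
    'it'        => 'http://www.example.it',
);

// Output one alternate link per version inside the <head>
foreach ($versions as $lang => $domain) {
    echo '<link rel="alternate" hreflang="' . $lang . '" href="'
        . $domain . htmlspecialchars($path) . '" />' . "\n";
}
?>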
Or if that seems like a lot of work you can specify your alternate URLs for each language through a sitemap like this:
<url>
<loc>http://www.example.com/</loc>
<xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/" />
<xhtml:link rel="alternate" hreflang="x-default" href="http://www.example.com/" />
</url>
There's a good post on the three different ways to implement your hreflang tags - per page, in a sitemap, and in the HTTP header - here: http://www.branded3.com/blogs/implementing-hreflang-tag/
Hope that helps,
Tom
-
RE: What is the proper way to setup hreflang tags on my English and Spanish site?
Hi,
To answer your first question: using hreflang tags in your sitemaps is a perfectly fine implementation; they will work whether they're coded into the <head> of each page, set in the sitemap, or set in HTTP headers. This page will be useful for you as it explains all three methods quite well: http://www.branded3.com/blogs/implementing-hreflang-tag/
But when you add them to your sitemap you should include all variations of the page, along with a default - so if a French or German searcher accesses your site, you can define whether they'll be served the Spanish or English page. Using a hypothetical /es/ path for the Spanish version, that looks like this:

<url>
<loc>http://www.example.com/</loc>
<xhtml:link rel="alternate" hreflang="en" href="http://www.example.com/" />
<xhtml:link rel="alternate" hreflang="es" href="http://www.example.com/es/" />
<xhtml:link rel="alternate" hreflang="x-default" href="http://www.example.com/" />
</url>
To answer your second question about countries, you are fine to use hreflang=”es” to define all Spanish traffic, but using country codes can be useful in some circumstances. For instance if you have a site talking about football, you could use hreflang=”en-us” for a page which refers to the game as ‘soccer’ and use hreflang=”en-gb” for the page calling it ‘football’.
This Google Webmaster support post explains using both quite well under ‘Supported language values’ which I recommend you take a look at as well: https://support.google.com/webmasters/answer/189077?hl=en
Hope that helps,
Tom
-
RE: Moz can't crawl domain due to IP Geo redirect loop
Hi,
If you have manually set up your geo redirect in your htaccess then you could modify your rules to redirect only if not Moz’s crawler (rogerbot) like this:
# UK redirect
RewriteCond %{HTTP_USER_AGENT} !=rogerbot
RewriteCond %{ENV:GEOIP_COUNTRY_CODE} ^GB$
RewriteRule ^(.*)$ http://uk.abcd.com/$1 [L]

This means both conditions must be satisfied before the redirect happens: the user agent must not be rogerbot, and then the country code is checked. You may have to adjust it a bit depending on your setup, but it's just the same as adding an exception based on IP - if you could already do that, you can set up a user-agent condition just as easily.
If you're using PHP you could use $_SERVER['HTTP_USER_AGENT'] and wrap your geoip function with something like:
if ($_SERVER['HTTP_USER_AGENT'] != 'rogerbot') {
You will have to check that it is not empty before you use it (or work that into your code), as some servers don't set $_SERVER['HTTP_USER_AGENT'] at all.
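Putting those two together, a minimal sketch - geoip_country_redirect() is a hypothetical function standing in for your existing geo-IP redirect logic:

<?php
// Some servers don't set HTTP_USER_AGENT at all, so default to an empty string
$ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';

// Skip the geo redirect for Moz's crawler
if ($ua != 'rogerbot') {
    geoip_country_redirect(); // hypothetical: your existing redirect logic
}
?>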
Thanks, hope that gives you a few ideas to try!
Tom
-
RE: How to fix Medium Priority Issues by mozpro crawled report??
Hi,
If you want to add a sort of meta description template then you'll have to use a plugin, or use some PHP to create the meta description in the <head> - for example, built from your page title and category: "We've got all the news on [category], read [post title] on [website name]".
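As a rough sketch of that PHP route - assuming a WordPress theme, something like this could sit in header.php (the template wording and fallback are just examples):

<?php
if (is_single()) {
    // Build a templated description from the post's first category and title
    $cats = get_the_category();
    $cat_name = !empty($cats) ? $cats[0]->name : 'our blog';
    $description = "We've got all the news on " . $cat_name . " - read "
        . get_the_title() . " on " . get_bloginfo('name');
    echo '<meta name="description" content="' . esc_attr($description) . '" />';
}
?>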
But what actually happens in SERPs when you have no description is that the section of your page containing the keyword is used as the description, giving you another eye-catching bolded keyword. Will those uniform templated descriptions help your click-through rate as much as a bolded keyword which you know the searcher has typed into the search bar? Probably not.
You should focus your efforts on your highest performing pages and add unique meta descriptions enticing searchers to click through to your site, working your way through to pages which perform less well. Try using a spider such as Screaming Frog to crawl your site and show you the titles and descriptions of all your pages together. It has a nice little tool built in to preview what your page would look like in Google SERPs, and you can get a feel for what a searcher will see.
Hope that helps,
Tom
-
RE: .edu backlinks.. where to point them for a scholarship
Hi there,
Because an anchor is a location on the same page, Google treats this as being the same page. This means all that link juice will be attributed to the original URL.
Your orphan page idea does sound like the best way to direct the flow of link juice to where you want it to be, or you could use your landing page as you said and let the PageRank flow naturally through your site via your menus and internal linking.
But if your landing page is performing well you may want to leave it as is, so the scholarship information at the bottom of the page isn't stealing attention away from your call to action. That could reduce conversions from your best page.
Each situation is different, but you're probably better off using a page specifically for this information. If you don't need it in the future, you could just 301 those lovely .edu links to any page you liked.
Hope this helps,
Tom
-
RE: How to fix Medium Priority Issues by mozpro crawled report??
Hi there,
Missing meta descriptions mean your pages do not include the meta description tag, or it's empty. Depending on which CMS you're using, you could find a plugin to help you quickly add titles and descriptions to pages. If you're using WordPress, as many people are, then Yoast will be your best bet, as its bulk title and description editor will let you quickly fix these issues: https://yoast.com/wordpress/plugins/seo/
You will be able to find plugins for different content management systems which do the same thing with a quick Google.
If your site is made up of static pages you'll need to add a unique meta description to each page, remembering it should be below 160 characters to fit neatly underneath your page title in SERPs. These guides are worth a read:
http://moz.com/beginners-guide-to-seo
http://searchengineland.com/nine-best-practices-for-optimized-title-tags-111979
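The tag itself is just a single line in the <head> of each page; for example:

<meta name="description" content="A unique, enticing summary of this page in under 160 characters." />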
Hope that helps,
Tom
-
RE: Page for page 301 redirects from old server to new server
Hi Cindyt,
When I try to access that example URL I get a 404 on rock-n-roll-action-figures.com, which leads me to believe you still haven’t fixed your redirection issue. If you’re using an Apache server you can redirect page for page with these few lines in your .htaccess:
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !newdomain.com$ [NC]
RewriteRule ^(.*)$ http://www.newdomain.com/$1 [L,R=301]
The only difference between this and what Ray described on Tuesday is that it captures the entire path being requested with ^(.*)$ and appends it to the end of the new domain with $1 - the reference to our first captured group (the brackets). Very simple to implement.
Remember the rules are executed top to bottom in your .htaccess, so if some page URLs have changed and need to be redirected individually you should add them before your ‘everything’ rule.
Hope this helps,
Tom
-
RE: Google text-only vs rendered (index and ranking)
Hi,
Google is quite clever at distinguishing what your code does, and since you can search for the sentence inside the hidden element and find the page, it is being indexed.
What you're seeing in the Google cache is what a user without javascript enabled would see, so it's a personal choice whether you think this is a problem for your site or not. But if Google thinks your site has poor usability for non-JS browsers, your rankings may be impacted.
There are a few things you could do if you wanted to fix this:
1. Remove the hide class from your code and have javascript add it, so only users with javascript enabled will have the content hidden from them, leaving it visible to crawlers and in your text-only cache.
2. Google recommends using <noscript> tags to display content that is dynamically added by javascript. I know your JS is not adding the content, just displaying it, but it will have the same effect in the text-only cache - your content will be visible both with and without javascript enabled.
Hope this helps,
Tom
-
RE: Fading in content above the fold on window load
Hi,
For starters you could use the 'Fetch as Google' option in Webmaster Tools to see what your page looks like to search engines, or use a tool like browseo.net to do the same thing. Or you could make sure the page is indexable, link to it from somewhere, and do a search for "one of your hidden text strings" (in quotes) to see if that content has been crawled and indexed.
If you can't see your content then you may have a problem, and as crawlers can distinguish between hidden and not-hidden text, it may be more than just your content being blocked from helping you rank - it might actually look like you're trying to stuff keywords into your content without showing them to the user.
I think the easiest and simplest fix would be to remove the class which makes these elements invisible, and dynamically add that class with a little bit of jQuery, just for users with scripts enabled:
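Something along these lines - a sketch, assuming a hypothetical 'hidden' class (display: none) and a 'fade-in' selector for the affected elements, both of which you'd adapt to your own markup:

<script>
// The content is visible in the raw HTML, so crawlers and no-JS visitors
// see it; we only hide it for visitors actually running javascript...
jQuery(function ($) {
    $('.fade-in').addClass('hidden');
});

// ...then fade it back in once the window has loaded
jQuery(window).on('load', function () {
    jQuery('.fade-in').fadeIn(400); // fadeIn overrides the class's display: none
});
</script>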
This way when a crawler (or a user with javascript disabled) visits your site they will be served the page with the content visible, with it only being hidden if the visitor is accessing the site with javascript enabled.
Hope this helps,
Tom