Thanks for sharing your views Moosa.
Posts made by Saijo.George
-
RE: GWT Malware notification for meta noindex'ed pages ?
-
RE: Sub-domains or sub-directories for country-specific versions of the site?
Not trying to throw a spanner in the works, but have you considered top-level country-specific domains (ccTLDs)? .co.uk, .au, etc.
In theory you can use subdirectories and assign geo-targeting for each folder in Google Webmaster Tools. ( Keep in mind this only works for Google; you will have to check how the other search engines handle this: Bing, Yandex if you target Russia, Baidu for China, etc. )
-
RE: Wordpress and Redirects?
Try this plugin : http://wordpress.org/plugins/change-permalink-helper/
I have had good success with it. As with any plugin, test it out after you deploy; there might be conflicts with other plugins or with the theme you are using.
-
RE: 404 or 503 Malware Content ?
Thanks Peter, and apologies for the delay; I was tied down with some other things. Your help is much appreciated.
-
GWT Malware notification for meta noindex'ed pages ?
I was wondering whether GWT will send me a malware notification for pages that are tagged with meta noindex?
EG: I have a site with pages like
example.com/indexed/content-1.html
example.com/indexed/content-2.html
example.com/indexed/content-3.html
....
example.com/not-indexed/content-1.html
example.com/not-indexed/content-2.html
example.com/not-indexed/content-3.html
....
Here, all the pages like the ones below are tagged with meta noindex and do not show up in search:
example.com/not-indexed/content-1.html
example.com/not-indexed/content-2.html
example.com/not-indexed/content-3.html
Now one fine day example.com/not-indexed/content-2.html gets hacked and starts to serve malware; none of the other pages are affected. Will GWT send me a warning for this?
What if the pages are blocked by robots.txt instead of meta noindex?
Regards
Saijo
UPDATE: hope this helps someone else: https://plus.google.com/u/0/109548904802332365989/posts/4m17sUtPyUS
-
RE: When You Add a Robots.txt file to a website to block certain URLs, do they disappear from Google's index?
Hi William
If the pages in question are:
1) Already indexed by Google: if you block them via robots.txt, they will still show up in search results, but the meta description will say something along the lines of
"A description for this result is not available because of this site's robots.txt – learn more."
2) Not indexed by Google ( for example, on a new site ): Google won't crawl them and the pages do not come up in search directly, BUT if some external sites link to those pages, they can still come up in the SERPs some time down the track.
Your best bet to keep the page out of the public SERP index is the meta robots tag : http://www.robotstxt.org/meta.html
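To illustrate, the meta robots tag goes in the head of each page you want kept out of the index ( a generic example; note the page must remain crawlable, i.e. NOT blocked in robots.txt, or Google will never see the tag ):

```html
<head>
  <!-- Tells search engines not to index this page;
       the page must stay crawlable so the tag can be seen -->
  <meta name="robots" content="noindex">
</head>
```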
-
RE: 404 or 503 Malware Content ?
Come to think of it, we don't get a lot of malware warnings in GWT anymore; I am guessing that is because the framed pages are no longer indexed. ( We could potentially have gotten the warnings while they were still being de-indexed? )
I am worried about that, since GWT used to warn us about these; if the pages are no longer indexed and Google no longer sends us notifications, we might miss pages serving malware. I will have to look into some way of tracking this ( if you have come across any solution, I would love to hear more about it ).
Thanks a lot for your help Peter.
Serving up malware content to users was never an option. In our case, I think it makes sense to go the 503 route. If anyone is wondering how we plan to handle it:
When we see a malware notification on the iframed pages:
- We plan to disable the iframe and serve a general page to visitors saying the content is temporarily disabled.
- We will send a 503 header response with this page, to tell search engines that this is a temporary issue.
- We will ask the site owner to fix the issue.
- Once the issue is resolved, we remove the 503 and make the framed content live again.
This helped me make this decision : https://support.google.com/webmasters/answer/2600719?hl=en&ref_topic=2600715&rd=1
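A rough sketch of the logic described above ( hypothetical code, not our actual implementation; the flag, the demo URL, and the Retry-After value are made up for illustration ):

```python
# Sketch of the plan above: serve a 503 placeholder while the framed
# content is flagged for malware, and the normal page otherwise.

def build_response(malware_flagged):
    """Return (status, headers, body) for a template demo page."""
    if malware_flagged:
        # 503 tells search engines the outage is temporary;
        # Retry-After (in seconds) hints when they should come back.
        return (503,
                {"Retry-After": "86400", "Content-Type": "text/html"},
                "<p>This demo is temporarily disabled.</p>")
    # Normal case: frame the author-hosted demo (hypothetical URL).
    return (200,
            {"Content-Type": "text/html"},
            '<iframe src="https://author.example.com/demo.html"></iframe>')
```

Once the author fixes the issue, clearing the flag restores the normal 200 response with the iframe.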
-
RE: 404 or 503 Malware Content ?
Hi Peter
The content used to be indexed, but I have added the noindex tag ( since I felt the same way as you did about them being indexed ), and we still get the GWT malware warnings from time to time. My initial concern was whether to 503 or 404 the page till we fix the malware issue. I think 503 is the best way to go about it.
-
RE: Why Google did not index exactly these 2 pages? Any ideas?
I think you might just need to wait it out. But you can always help them along by using "Fetch as Google" in Google Webmaster Tools.
-
RE: 404 or 503 Malware Content ?
Hi Peter
Thanks for looking in to this.
We sell templates and themes for various CMSs, and we find it's great if we can demo the content to users before they purchase. Our content is created by the community, and most authors often add updates to existing content. We find it's best to let our authors host their own files, and we link to that content through an iframe.
At times some of the authors might get hacked, or some of their advertisements get flagged as malware. We get notified by WMT when Google sees malware on these iframed pages.
-
RE: Tutorial Creation Tools
Try Camtasia Studio. They also had a free tool ( looks like it's no longer free ) called Snagit or something like that, which will create swf files with a watermark.
-
RE: ECommerce Problem with canonicol , rel next , rel prev
Thanks for your input.
IMHO... if I exclude "?", then paginated pages like ?page=xx won't be crawled, and thus the rel=next / rel=prev tags on the page are rendered useless.
-
RE: Are Social Links on Home Page Good for SEO
If you engage with your followers and users on any of those social media accounts, I would advise linking to them from your site. Perhaps you can move the links to a less prominent location and open them in a new tab when users click on them.
If you don't engage with your followers, then you might as well get rid of them.
-
RE: Varying Internal Link Anchor Text with Each New Page Load
I would say test it out and see what happens. I would love to know the result. ( A YouMoz post, perhaps? )
What I assume would happen:
The new link only counts when G-bot crawls the page ( and obviously not on each page load ), and each time G-bot crawls the page it will see that an old link has been dropped and a new one added. So whatever value you gain from the new link, you will lose from the old one that is no longer there. So I really don't see any value to be had from an SEO point of view. But repeat visitors to your page may click through to those pages. ( Again, testing it will give you solid proof. )
-
ECommerce Problem with canonicol , rel next , rel prev
Hi
I was wondering if anyone is willing to share their experience implementing pagination and canonical tags when there are multiple sort options. Let's look at an example.
I have a site example.com ( I share ownership of that one with the rest of the world ) and I sell stuff on the site. I allow users to sort it by date_added, price, a-z, z-a, umph-value, and so on. So now we have:
- example.com/for-sale/stuff1?sortby=date_added
- example.com/for-sale/stuff1?sortby=price
- example.com/for-sale/stuff1?sortby=a-z
- example.com/for-sale/stuff1?sortby=z-a
- example.com/for-sale/stuff1?sortby=umph-value
- etc
example.com/for-sale/stuff1 **has the same result as** example.com/for-sale/stuff1?sortby=date_added ( that is the default sort option )
Similarly for stuff2, stuff3, and so on. I can't 301 these, because they are relevant for users who come to buy from the site. I could add a view-all page and rel canonical to that, but let's assume that's not technically possible for this site and there are tens of thousands of items on each of the for-sale pages. So I split them up into pages of x items each, and let's assume we have 50 pages to sort through.
- example.com/for-sale/stuff1?sortby=date_added&page=2 to ...page=50
- example.com/for-sale/stuff1?sortby=price&page=2 to ...page=50
- example.com/for-sale/stuff1?sortby=a-z&page=2 to ...page=50
- example.com/for-sale/stuff1?sortby=z-a&page=2 to ...page=50
- example.com/for-sale/stuff1?sortby=umph-value&page=2 to ...page=50
- etc
This is where the shit hits the fan. So now, to avoid duplicate-content issues, when it comes to page 30 of stuff1 sorted by date, do I add:
- rel canonical = example.com/for-sale/stuff1
- rel next = example.com/for-sale/stuff1?sortby=date_added&page=31
- rel prev = example.com/for-sale/stuff1?sortby=date_added&page=29
or
- rel canonical = example.com/for-sale/stuff1?sortby=date_added
- rel next = example.com/for-sale/stuff1?sortby=date_added&page=31
- rel prev = example.com/for-sale/stuff1?sortby=date_added&page=29
or
- rel canonical = example.com/for-sale/stuff1
- rel next = example.com/for-sale/stuff1?page=31
- rel prev = example.com/for-sale/stuff1?page=29
or
- rel canonical = example.com/for-sale/stuff1?page=30
- rel next = example.com/for-sale/stuff1?sortby=date_added&page=31
- rel prev = example.com/for-sale/stuff1?sortby=date_added&page=29
or
- rel canonical = example.com/for-sale/stuff1?page=30
- rel next = example.com/for-sale/stuff1?page=31
- rel prev = example.com/for-sale/stuff1?page=29
None of this feels right to me. I am thinking of using GWT to ask G-bot not to crawl any of the sort parameters ( date_added, price, a-z, z-a, umph-value, and so on ) and use:
- rel canonical = example.com/for-sale/stuff1?sortby=date_added&page=30
- rel next = example.com/for-sale/stuff1?sortby=date_added&page=31
- rel prev = example.com/for-sale/stuff1?sortby=date_added&page=29
My doubt about this is: will the link value that goes into the pages with parameters be consolidated when I choose to ignore them via URL Parameters in GWT? What do you guys think?
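For what it's worth, the head tags for page 30 under that last combination would look like this ( just illustrating the markup of the option above, not an endorsement of it ):

```html
<!-- Page 30 of the date_added series, per the option above -->
<link rel="canonical" href="http://example.com/for-sale/stuff1?sortby=date_added&amp;page=30">
<link rel="prev" href="http://example.com/for-sale/stuff1?sortby=date_added&amp;page=29">
<link rel="next" href="http://example.com/for-sale/stuff1?sortby=date_added&amp;page=31">
```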
-
RE: Simple Link Question
Site 1 ---nofollow link--> PDF Doc(on Site 2 ) ---link in PDF ---> Your Site
Assuming Site 1 is a high-profile site, the PDF will get some benefit out of that nofollow link, and the link from the PDF will help your site. So in theory there is a small amount of indirect SEO value to be had there.
-
RE: Skip indexing the search pages
To block search pages from the index, you can try adding a meta noindex tag in the head section of the search pages.
-
RE: What is the best approach to handling 404 errors?
Hi Dave
404 errors will happen on a website, and you usually don't have to worry about them ( unless they occur in alarmingly high numbers ). You only need to worry about 301ing 404 pages when you are losing link juice through them.
I would use these 3 methods to find 404s on the site:
- Like Chris mentioned, use Screaming Frog.
- Use your analytics package and look for traffic landing on the 404 page.
- Use Google / Bing Webmaster Tools and check the 404 warnings ( in the Crawl Stats area ).
From here you would want to 301 all valid 404 error pages to the closest-resembling pages ( that visitors will find useful ).
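On an Apache server, those 301s can go in .htaccess, for example ( hypothetical paths; map each dead URL to its closest useful live page ):

```apacheconf
# 301 a single dead URL to the closest useful live page (example paths)
Redirect 301 /old-page.html /new-page.html

# Pattern-based version using mod_rewrite, if many dead URLs share a structure
RewriteEngine On
RewriteRule ^old-blog/(.*)$ /blog/$1 [R=301,L]
```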
-
RE: /index.php/ What is its purpose and does it hurt SEO?
I have seen IIS servers throw index.php into the URLs ( while working on WordPress sites ). It's best to talk to your hosting company about it; they will have a good idea why this is the case. Usually you will have to edit the web.config file to rewrite the URLs without index.php in them.
Again, this depends a lot on your server configuration and the CMS you are using; best to ask your host.
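For example, on IIS with the URL Rewrite module installed, the usual WordPress rule in web.config looks roughly like this ( a generic sketch, not your exact config; your host may need something different ):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Route requests for non-existent files/folders through index.php,
             so URLs no longer need /index.php/ in them -->
        <rule name="WordPress" patternSyntax="Wildcard">
          <match url="*" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
          </conditions>
          <action type="Rewrite" url="index.php" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```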