Questions
Some Web 2.0 sites blocked by robots.txt
Hi,

Pages blocked by the robots.txt file will not be considered for ranking. You still see those pages in Google's index because of mentions of them on other websites: Google discovers links from sites other than your Web 2.0 properties. To stop Google from indexing these pages completely, use a page-level robots meta tag instead. Note that the page must not remain blocked in robots.txt for this to work, because Googlebot has to be able to crawl the page in order to see the meta tag. Here you go for more: https://support.google.com/webmasters/answer/93710?hl=en

Once you implement the page-level robots meta tag on those pages, Google will remove them from the index after the next crawl. You can expedite this process by using the URL removal procedure in your Webmaster Tools account. Here you go for more: https://support.google.com/websearch/troubleshooter/3111061?rd=1

Good luck to you, my friend.

Best regards,
Devanur Rafi
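The page-level directive mentioned above is a single meta tag in the page's `<head>`. A minimal sketch (the title and body are placeholders, not from the original post):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tell crawlers not to index this page or follow its links -->
  <meta name="robots" content="noindex, nofollow">
  <title>Example Web 2.0 page</title>
</head>
<body>
  ...
</body>
</html>
```

For non-HTML resources (PDFs, images), the same directive can be delivered as an `X-Robots-Tag: noindex` HTTP response header instead.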
Link Building | Devanur-Rafi
How much Domain Authority is required?
A couple of notes here first. Google has really clamped down on private blog networks, so your effort might be better spent in other areas than developing a network.

Domain Authority is a Moz metric, and the search engines don't use our metrics for rankings. We approximate, to a degree, how we think the search engines might value sites.

How the search engines treat an expired domain may depend on what you do with the site itself. If it used to have information about Ford Explorer tires and now has information about hot air balloons, they can tell that something big has happened.
Link Explorer | KeriMorgret
Is a .ME domain effective for SEO?
Personally, I think older people don't know much about them. For plain organic search it should be fine. At the same time, I would poll people you know, including people who could be visitors to your site, and ask whether they are familiar with .me domains.
Intermediate & Advanced SEO | LesleyPaone