Posts made by mytouchoftech
-
RE: Should you target non-plural if you are ranking highly for plural kw?
If the non-plural version has potential, then it's obviously relevant to your site, so I don't see any reason not to target it.
-
RE: How do I start trying to get links from blogs
Reciprocal links don't hold as much power as one-way links from their site to yours, so Moosa's suggestions are a better route. If you find blogs in your niche, you could also build your reputation by commenting on them. It could provide traffic as readers realize you are another good resource on the topic.
-
RE: Best links to gain?
It is a do-follow link, and the page has good PR. That's a nice link to have in the portfolio. Not only for the juice, but because you may also get good traffic from it.
-
RE: Client needs a basic page analysis tool
SEOquake has a free keyword density and SEO analysis add-on for Firefox you could try. Their tool takes a well-rounded approach that's easy to understand.
-
RE: Linux Server recognizing ASP Pages (301)
I'm not sure I fully understand why you need a spoof. If you put the 301 rewrite condition in the .htaccess file, Apache will read that before doing anything else with the page; it doesn't matter whether the .asp extension will execute on Linux, because the request is redirected before anything happens with the actual file.
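As a minimal sketch (the .asp-to-.html mapping here is an assumption, not from your question — adjust the target pattern to your actual new URLs), an .htaccess rule like this redirects before Apache ever tries to serve the file:

RewriteEngine On
# 301 any request for an old .asp page to the matching .html page
RewriteRule ^(.*)\.asp$ /$1.html [R=301,L]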
-
RE: How can I tell which website pages are hosted on the root domain vs the www subdomain?
Yes, I would do both. If your goal is to standardize on either www or non-www, then redirect to one or the other. This will ensure new backlinks go to the desired version. The canonicalization should consolidate the link power on Google's end.
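For the redirect half, a typical .htaccess sketch looks like this (example.com is a placeholder for your domain; this version standardizes on www):

RewriteEngine On
# Send any non-www request to the www version with a 301
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]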
-
RE: Dropped 12 after following SEOMOZ Tips
Did your changes coincide with anything else? Like a push in backlink generation?
-
RE: Should I remove all rel=nofollow links?
If the links are pointed at your own sites, there's no reason not to let the juice flow to your own pages.
-
RE: Forum Profile Links
Building links on highly relevant sites would be a good idea if you're willing to put in the time to make a useful post. If the forum allows do-follow links, that will help you a little. As part of a balanced link building strategy, they could be beneficial.
Any site that sells you thousands of links will be using an automated tool. This is considered very spammy because the links have no value: they'll be on trashy sites, alongside thousands of other links just like yours. Whatever value such a service provided wouldn't be worth the money.
-
RE: Moz Crawl Reporting Duplicate content on "template" styled pages
Adding canonical links certainly wouldn't hurt, but I don't think it'll help you here. The crawler may see them as duplicates because so much of each page is the same. Is it possible for you to throw in a paragraph description about the scholarship on each one? There isn't much for a robot to compare when 80% of the page is the same and just a few fields are changed.
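If you do add canonicals, each scholarship page would carry a self-referencing tag in its head section (the URL below is a hypothetical placeholder for one of your pages):

<link rel="canonical" href="http://www.example.com/scholarships/example-scholarship/" />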
-
RE: Dropped 12 after following SEOMOZ Tips
Did you remove any actual content, or change the url of the page, or use rel='canonical' links?
-
RE: How can I tell which website pages are hosted on the root domain vs the www subdomain?
Chances are they aren't separate pages; canonical tags are what you want to use. They essentially combine different ways of getting to your pages into one indexable page. So http://example.com/, http://www.example.com/, and http://www.example.com/index.html all point to the same page, but if you put a tag like
<link rel="canonical" href="http://www.example.com/" />
in the head section, then no matter how the crawler got there, it should consider the page www.example.com, which means you won't have the competing-pages issue.
-
RE: Number of characters to duplicate content
As stated, the titles have to be exactly the same; one character of difference and they're not duplicates. Although 70 characters is what Google displays in the searches, they index more than that. You can make your title 140 characters long to make it unique; it will just be cut off in the search display.
-
RE: Finding page authority for a list of sites
If you don't mind paying for a tool, Scrapebox will do that for you. You import a list of URLs, then it can check PA or DA, and you can export the results.
-
RE: Extracting contact info from a website
Scrapebox apparently has the capability to find contact details off a page, but I've never tested it.
-
RE: SEOMoz Crawling Errors
Where you installed the blog shouldn't matter, as any content in the blog will be structured under the directory you installed it in. I've run a few blogs out of subdirectories with no issues.
If you're comparing your page speed from pre-blog to post-blog, chances are the blog pages will be slower because of the nature of the blog. Every widget/plugin slows down your loading time, as may a poorly designed theme. Add in the fact that you're using a database to serve your content and it slows down even more, especially if you're on a shared hosting platform with other sites doing the same thing.
As for your 404 error: is products.html a page that lives outside of the blog directory? If so, you have to hardcode http://www.mysite.com in the href, or the blog will auto-append its base directory to it.
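Concretely, the difference looks like this (products.html and www.mysite.com are from your question; /blog/ is an assumed install directory):

<!-- relative link: the blog prepends its base directory, e.g. /blog/products.html, and you get a 404 -->
<a href="products.html">Products</a>
<!-- absolute link: resolves correctly no matter where the blog lives -->
<a href="http://www.mysite.com/products.html">Products</a>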
-
RE: Should I invite people to guest post?
Guest posts should be unique content, so you may not get a huge response if your site will provide little value to the writer of the content. That being said, I agree with Anthony that the benefits are there for you. If you get a really great writer who creates a viral piece of content, it will be associated with your site, and you'll get all the links for it.
-
RE: Blog Comments
I have to say that a well-written comment with some meaning has driven traffic to my personal blog. Every once in a while I'll see a strange referrer, and it's due to a random blog comment I posted. Has it made a huge impact? Not really. Could it be outsourced? Probably, if done by a professional who will take the time to make the comment meaningful. You won't find a cheap way to outsource a true comment, though.
-
RE: Getting Rid of Duplicate Page Titles After URL Structure Change
It's known that when you 301 a page, you lose some of the power it once had. That being said, if you're basically 301'ing your whole site, Google will have to re-evaluate everything.
For your category pages, add a noindex robots meta tag and Google will not index them.
If you're using WordPress, here's a quick snippet:
<?php
if (is_category()) {
    // Do not index category URLs, but follow the links on them
    echo '<meta name="robots" content="noindex,follow">' . "\n";
} elseif (is_404() || is_search() || is_author() || is_archive()) {
    // Do not allow indexation or crawling of 404, search, author, and archived pages
    echo '<meta name="robots" content="noindex,nofollow">' . "\n";
} else {
    // For other URLs not mentioned above, such as post URLs and page URLs, allow indexation and crawling
    echo '<meta name="robots" content="index,follow">' . "\n";
}
?>
-
RE: Correct Canonical Reference
Can you check whether those pages are still indexed by Google? If pages that were indexed are no longer indexed, then your canonical links may have interfered with the ranking.
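One quick way to check is Google's site: operator (the URL below is a placeholder for one of your pages); if the page doesn't come back, it has dropped out of the index:

site:www.example.com/your-page/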