Posts made by ntcma
-
What are some powerful review websites for online-only businesses?
Looking for a small handful of places that I can direct customers to, following a transaction with my dot-com (i.e., no brick-and-mortar presence) business, so that they can leave reviews
Chiefly interested in the sites that Google is most likely to notice
Thanks!

-
RE: What sources do you use to keep on top of SEO news?
Thanks everybody! Nice list

-
What sources do you use to keep on top of SEO news?
I want to try building an RSS feed of SEO news... but I don't want to find myself drowning in material
As such, I'm looking for a short list of recommendations for keeping on top of SEO developments – the impetus is that I'm still discovering changes that happened 2, 3, even 5 years ago, and I want to try to catch these things as they happen.
I'm thinking something official from Google may belong on the list, but some third-party sources are pretty on top of things too!
Seroundtable.com also comes to mind.
But what do you use to keep informed?
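On the RSS route: pulling item titles out of a feed needs nothing beyond the standard library. A sketch, with a made-up sample document standing in for a real feed download (actual feed URLs and fields vary per site):

```python
import xml.etree.ElementTree as ET

def latest_titles(rss_xml, limit=5):
    """Collect up to `limit` item titles from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title", default="") for item in root.iter("item")][:limit]

# Made-up sample standing in for a downloaded feed.
SAMPLE = """<rss version="2.0"><channel>
<item><title>Core update rolling out</title></item>
<item><title>New Search Console report</title></item>
</channel></rss>"""

print(latest_titles(SAMPLE))
```

Capping the number of items per source is one way to avoid the "drowning in material" problem.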
Thanks

-
RE: Should I use meta noindex and robots.txt disallow?
Hi,
Thanks, I will do some testing to confirm that this behaves how I would like it to
-
RE: Should I use meta noindex and robots.txt disallow?
When you say:
nofollow will tell the crawlers to not crawl the page
I believe you mean that this tells the crawlers not to follow the links on the page; the page itself is still "crawled", is it not?
But yes, you are right that once the robots.txt disallow is in place, the meta tag will not be seen and is thus moot (at which point I may as well take it off).
It would be nice to be able to say "don't crawl this and don't put it in the index"... but is there a way?
-
Should I use meta noindex and robots.txt disallow?
Hi, we have an alternate "list view" version of every one of our search results pages
The list view has its own URL, indicated by a URL parameter
I'm concerned about wasting our crawl budget on all these list view pages, which effectively doubles the number of pages that need crawling
When they were first launched, I had the noindex meta tag placed on all list view pages, but I'm concerned that they are still being crawled
Should I therefore go ahead and also apply a robots.txt disallow on that parameter to ensure that no crawling occurs? Or, will Googlebot/Bingbot also stop crawling that page over time? I assume that noindex still means "crawl"...
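For what it's worth, a rule like this can be sanity-checked offline with Python's stdlib parser. A sketch, assuming the list view is flagged by a hypothetical view=list parameter on /search URLs:

```python
from urllib import robotparser

# Hypothetical rule: list-view pages carry a "view=list" URL parameter.
# Googlebot supports "*" wildcards (e.g. "Disallow: /*?*view=list"), but
# Python's stdlib parser only does prefix matching, so the rule below
# only covers URLs where that parameter comes first.
RULES = """\
User-agent: *
Disallow: /search?view=
"""

parser = robotparser.RobotFileParser()
parser.parse(RULES.splitlines())

# The list view is blocked from crawling; the plain search URL is not.
print(parser.can_fetch("*", "https://example.com/search?view=list"))
print(parser.can_fetch("*", "https://example.com/search?q=shoes"))
```

One caveat: once the disallow is live, crawlers can no longer see the noindex meta tag on those pages, so URLs that were already indexed may linger in the index as URL-only entries.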
Thanks

-
RE: Any success stories after removing excessive cross domain linking?
Hi, and thanks for the response.
1. Yes they are on separate domains
2. Do you have any references or experiences to share, as per the question title? I.e., what actually causes you to say that this will be "ok", whereas that will "cost authority", etc.? This kind of information by itself isn't much to base a strategy on...

-
RE: Can 302 chains (affiliate links) from "toxic" sources hurt you? Or are you "shielded"?
Thanks; I had not heard that a 302 can, over time, come to be treated as a 301 – would you be able to share any information about this?
I took your advice, and so far haven't noticed anything in WMT, but that only shows the top 1,000 links; I'm also looking through ahrefs.com... nothing confirmed yet.
-
Any success stories after removing excessive cross domain linking?
Hi,
I found some excessive cross domain linking from a separate blog to the main company website.
It sounds like best practice is to cut back on this, but I don't have any proof of this.
I'm cautious about cutting off existing links; we removed two redundant domains that had a huge number of links pointing to the main site almost 1 year ago, but didn't see any correlated improvement in rankings or traffic per se.
Hoping some people can share a success story after pruning off excessive cross linking either for their own website or for a client's.
Thanks

-
Can 302 chains (affiliate links) from "toxic" sources hurt you? Or are you "shielded"?
Hi,
I'm going through some affiliate links, which send visitors to our website via a chain of several 302 redirects
Some of them are relevant links, others perhaps not so much.
I know that Google doesn't pass PageRank on 302s...
But are they still considered valid links that pass, let's say, "reputation", "relevance", "link neighbourhood" kind of signals?
Otherwise put, is a 302 similar to adding the "nofollow" attribute on a link? Sort of? Not at all?
More succinctly put, should I be worried about "toxic" sources separated from us by 302 redirect chains?
By the way, yes, I recognize that (Google's 302 redirect chain handling aside) associating our brand with what some might consider spammy websites is not a good move in general; here, however, I'm concerned with the technical SEO implications.
In fact, this technical information may very well help drive decisions/policies on where we allow our affiliate advertising to appear.
Thanks

PS - The affiliate company by the way is cj.com if that helps
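PPS - For anyone auditing similar chains: auto-following redirects hides the per-hop statuses, so a small script that walks the chain one hop at a time is handy. A sketch using only the standard library; any URL you pass it is a stand-in, not a real cj.com link:

```python
import urllib.error
import urllib.parse
import urllib.request

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from auto-following, so each hop's raw status is visible."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def walk_redirects(url, max_hops=10):
    """Return the [(status, url), ...] hops behind a redirecting link."""
    opener = urllib.request.build_opener(_NoRedirect)
    hops = []
    for _ in range(max_hops):
        request = urllib.request.Request(url, method="HEAD")
        try:
            response = opener.open(request)
            status = response.getcode()
        except urllib.error.HTTPError as err:
            response, status = err, err.code  # 3xx/4xx surface as HTTPError here
        hops.append((status, url))
        location = response.headers.get("Location")
        if status in (301, 302, 303, 307, 308) and location:
            url = urllib.parse.urljoin(url, location)  # resolve the next hop
        else:
            break
    return hops

# e.g. walk_redirects("https://example.com/some-affiliate-link")
```

If one hop in a chain is a 301 while the rest are 302s, it shows up here immediately.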
-
RE: Looking at acquiring a competitor with a high organic ranking (WordPress Plugin)
First thought is, this may be more of a "knock them out of the picture" move rather than a way to directly take over their organic traffic per se (i.e., via something automated such as a 301 redirect).
I.e., if they no longer were to exist, would your product then rise to the top of organic? What could the organic "shuffle" end up looking like?
Now, if you want to reuse some of the organic "juice" that they have earned over the years (and why not?), my best thought is to simply identify the external backlinks which point to their plugin page, and then ask if people will update their pages to link to your URL/plugin page. This would only support the first thought (the shuffle).
Now you can actually start getting new links; the acquisition of a popular plugin is fairly interesting news in some spaces – you might be able to get some PRs out regarding the takeover and start picking up new links. A bonus is that you'll get your plugin name co-cited alongside the acquired one, which may work to your benefit.
Next thought: You would of course have the opportunity to segue their plugin users into yours; handled properly you could possibly retain the majority of them! This could actually be done right within people's WP admin panels to notify them of the change, given that you would have access to their actual plugin code (which I of course assume you would).
You would have license presumably to use their plugin name for as long as you like – this would likely help. For example, if someone searches for their plugin name, you could create a web page which, over time, may end up ranking quite well for the term; I could imagine the landing page saying something like, "X is now Y" in the title tag.
Lastly, you could talk to the team at WP.org to see if they can do some sort of redirect for you. I suspect that this won't work, which is why I think it might just be best to take the page down (in time). In the meantime you can request that the plugin page be updated to mention that ownership has changed, and you may be able to include a link to your plugin (another SEO win).
-
RE: Is having an identical title, h1 and url considered "over optimization"? Is it better to vary?
PS - We are indexed, just not ranking as well as we'd like
-
RE: Is having an identical title, h1 and url considered "over optimization"? Is it better to vary?
Hi,
Thanks for response

I get that, except that our top competitors are doing a-ok with their SRPs...
Maybe our SRPs look somehow more SERP-y than theirs do?

-
RE: Need help technically with dealing with "no results" pages on our internal search engine.
Hi, ok, turns out that was a mysterious example!
But in general, I suppose that one day we might have results, which yes are reachable through a link for Google to find.
The next day the link may be gone if we have no live results, but Google will not realize that it's a 404 until they come back to the URL themselves (since the link is gone)
In the meantime, though, it is a 404
-
RE: Need help technically with dealing with "no results" pages on our internal search engine.
Hi,
- One might find this URL in the Google cache, for example, but on arriving at the site find that the page points to no inventory, which at that point we treat as a page-not-found error.
- Yes, because from what I can see it looks like a 302, which then issues the 404... that seems like a less-than-ideal way to issue a 404 for that URL (if that's what we should be doing at all)
- An alternative is to actually issue a 200 and print a message like, "No results today, but come back soon or broaden your search". The big problem I see with this is distinguishing legitimate pages from nonsense pages (in other words, we should still have some legit 404s)
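The trade-off in that last point can at least be made explicit in code. A sketch of the decision, assuming some notion of a "recognized" query; the function and its inputs are illustrative, not our actual stack:

```python
def respond_to_search(results, is_recognized_query):
    """Pick the HTTP status for an internal search results page.

    - results present              -> 200 with the listing
    - empty but a recognized query -> 200 with a "no results today" message,
                                      keeping the URL alive for inventory
                                      that may come back
    - empty and unrecognized       -> a genuine 404
    """
    if results:
        return 200, "results page"
    if is_recognized_query:
        return 200, "No results today, but come back soon or broaden your search"
    return 404, "page not found"
```

The hard part, of course, is the `is_recognized_query` check – e.g., whether the query maps to a known category rather than arbitrary URL parameters.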
Thanks for the response

-
RE: Need help technically with dealing with "no results" pages on our internal search engine.
We need to compete with these aggregate pages; our competitors are ranking excellently with their aggregate results (i.e., search results) pages.
Our individual listing pages rank well and we receive a lot of inbound traffic to them; it's these SRPs that we're having trouble with.
We have taken steps to try and ensure responsible rel="canonical" usage. If you notice a case where things don't look right, please share!

Thanks for the reminder about the URL Parameters tool in GWMT; we do use it, but it's been a while since I reviewed it

-
RE: Need help technically with dealing with "no results" pages on our internal search engine.
Thanks for the detailed consideration!
Again though, I'm looking for some advice on the technical workings of our 404 rules; I'm concerned that we're not getting things quite right.
For example, when using a header code checking tool on the above page, I'm getting some mixed results (depending on the tool)
-
RE: Need help technically with dealing with "no results" pages on our internal search engine.
Hi, I appreciate the overview
I am looking more for help with the technical behaviour of our current setup; I suspect it is suboptimal (but perhaps Google can figure it out just fine).
I edited my original post to try and make my request more clear.
