Questions
Technical SEO Question: Why is our new platform showing a small decline in traffic?
I would also look at how you are currently managing the sitemaps between the two platforms to make sure that the new pages have not been left out of the sitemap generation.
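A quick way to check this is to diff the two platforms' sitemaps directly. Here's a minimal stdlib-only Python sketch; the URLs and sitemap contents are made up for illustration:

```python
# Hypothetical sketch: diff the <loc> entries of two XML sitemaps to spot
# pages missing from the new platform's sitemap. URLs below are placeholders.
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text: str) -> set:
    """Return the set of <loc> URLs in a urlset sitemap."""
    root = ET.fromstring(xml_text)
    return {loc.text.strip() for loc in root.findall(".//sm:loc", NS)}

old_sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/page-1</loc></url>
  <url><loc>https://example.com/page-2</loc></url>
</urlset>"""

new_sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/page-1</loc></url>
</urlset>"""

# Pages present before the migration but dropped from the new sitemap:
missing = sitemap_urls(old_sitemap) - sitemap_urls(new_sitemap)
print(sorted(missing))  # -> ['https://example.com/page-2']
```

In practice you'd fetch both live sitemap files instead of using inline strings, but the set difference is the whole trick.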
Intermediate & Advanced SEO | | Packaging-Group0 -
Case study re-directing one site into another?
Unfortunately, most of the largest examples are not shared publicly. You could go looking for external data (e.g. SEMrush / Searchmetrics / SimilarWeb) around the time of big, well-known mergers. There have been a few over the years, but I don't know off-hand what historical data is available. The biggest I remember being interested in was when Adobe bought Macromedia back in the day (but there won't be historical data going back that far, and everything has changed since then anyway). Aside from "outside looking in" approaches, the biggest public case study I could find is this one. Hope something there helps.
Technical SEO Issues | | willcritchlow0 -
Comparison of affiliate marketing programs
You'll probably have to Google two at a time; most reviews are straight-up comparisons. I would suggest starting with: Max Bounty vs. Neverblue, Max Bounty vs. Clickbooth, and Neverblue vs. Clickbooth. To be honest, I don't mind CJ, but I know people have had their ups & downs with them. Peerfly got a LOT of support in 2012-2013, but everything since has been negative, so I'd avoid them. MB, NB & CB are the go-tos for the pros I know.
Affiliate Marketing | | MattAntonino0 -
Will Google recognize that a canonical to a redirected URL works?
I would update the canonical tag on your end to reflect that Page A (which is being redirected to Page B) is no longer the canonical/preferred URL. Add <link rel="canonical" href="http://domain.com/page-b" /> to both the old and the new page. I would also send the new tag to the 3rd party with something like 'Hi there - I know you're all super busy, so we thought sharing the new canonical tag with you might help get things updated more quickly', or something to that effect.
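To verify the tag actually made it onto both pages, you can extract the canonical from the rendered HTML. A minimal stdlib-only sketch (the HTML below is illustrative):

```python
# Sketch: pull the rel="canonical" href out of a page's HTML so you can
# confirm both Page A and Page B point at the preferred URL.
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

page_a_html = """<html><head>
<link rel="canonical" href="http://domain.com/page-b" />
</head><body>Old page, now 301-redirected to Page B.</body></html>"""

finder = CanonicalFinder()
finder.feed(page_a_html)
print(finder.canonical)  # -> http://domain.com/page-b
```

In practice you'd feed it the HTML fetched from each live URL rather than a hard-coded string.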
Intermediate & Advanced SEO | | Sheena_Schleicher0 -
Importing Keyword Planner Data into Excel?
So, there isn't any way to pull search volume directly into Excel? I'm looking at a data set of 400K keywords.
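I'm not aware of a direct Keyword Planner-to-Excel pipe either; the usual workaround is exporting the planner data to CSV and processing it in chunks so a 400K-row file stays manageable. A stdlib-only sketch (the column names here are assumptions about the export format):

```python
# Sketch: stream a large keyword CSV export in fixed-size chunks, so each
# chunk can be written to its own .csv file or Excel sheet.
import csv
import io

def chunked_rows(csv_text: str, chunk_size: int):
    """Yield lists of dict rows, chunk_size rows at a time."""
    reader = csv.DictReader(io.StringIO(csv_text))
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) == chunk_size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk  # final partial chunk

sample = (
    "Keyword,Avg. monthly searches\n"
    "blue widgets,1200\n"
    "red widgets,900\n"
    "green widgets,40\n"
)

for i, chunk in enumerate(chunked_rows(sample, 2)):
    print(i, [r["Keyword"] for r in chunk])
```

With 400K rows you're still under Excel's ~1,048,576-row sheet limit, but splitting keeps the files responsive.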
Intermediate & Advanced SEO | | nicole.healthline0 -
Jump to Navigation in SERPs?
It's just a naming convention, so for strictly navigational purposes it doesn't matter what you call them; anything will work.
Intermediate & Advanced SEO | | TheeDigital0 -
Schema markup for video playlists?
If you don't want to adjust the technical implementation of the videos, I'd recommend just using an XML video sitemap, rather than Schema.org markup, to provide metadata about your videos. You can use the <video:gallery_loc> tag to account for playlists.
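For reference, here's a sketch of what one video sitemap entry with a gallery_loc looks like, generated with Python's stdlib XML tools; the URLs and titles are placeholders:

```python
# Sketch: build a single video sitemap <url> entry, using
# <video:gallery_loc> to point at the playlist page the video belongs to.
import xml.etree.ElementTree as ET

SM = "http://www.sitemaps.org/schemas/sitemap/0.9"
VID = "http://www.google.com/schemas/sitemap-video/1.1"
ET.register_namespace("", SM)
ET.register_namespace("video", VID)

urlset = ET.Element(f"{{{SM}}}urlset")
url = ET.SubElement(urlset, f"{{{SM}}}url")
ET.SubElement(url, f"{{{SM}}}loc").text = "https://example.com/videos/intro"

video = ET.SubElement(url, f"{{{VID}}}video")
ET.SubElement(video, f"{{{VID}}}title").text = "Intro to the series"
ET.SubElement(video, f"{{{VID}}}content_loc").text = "https://example.com/media/intro.mp4"
ET.SubElement(video, f"{{{VID}}}gallery_loc").text = "https://example.com/playlists/getting-started"

xml_out = ET.tostring(urlset, encoding="unicode")
print(xml_out)
```

A real sitemap would loop over your video catalogue and include the other required video tags (description, thumbnail, etc.) per Google's video sitemap documentation.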
Intermediate & Advanced SEO | | PhilNottingham0 -
Does the position of an author byline on a page affect authorship?
In my experience, it doesn't matter where the byline is. I've dealt with sites that have it almost everywhere, and recently had a site with it in the footer with no issue. Maybe not the ideal place to have it, but still an option, it seems.
Intermediate & Advanced SEO | | WilliamKammer1 -
Why is "Noindex" better than a "Canonical" for Pagination?
I guess the short answer is that Google frowns on this practice, since the pages aren't really duplicates. Since they frown on it, they may choose to simply ignore the canonical, and you'll be left with the problem. I think the general problem is that this requires a lot of extra crawling/processing on their part, so it's not that it's "black hat"; it's just a pain for them. I've typically found putting a NOINDEX on pages 2+ is more effective, even in 2014. That said, I do think rel=prev/next has become a viable option, especially if your site isn't high-risk for duplicates. Rel=prev/next can, in theory, allow Google to rank any page in the series, without the negative effects of the near-duplicates. Keep in mind that you can combine rel=prev/next and rel=canonical if you're using sorts/filters/etc.; Google does support the use of rel=canonical for variants of the same search page. It gets pretty confusing, and the simple truth is that they've made some mixed statements that seem to change over time.
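To make the options concrete, here's a hypothetical sketch of the head tags for page N of a paginated series, combining rel=prev/next with a self-referencing canonical, plus the NOINDEX-on-pages-2+ option discussed above. The ?page= URL scheme is made up:

```python
# Sketch: generate pagination <head> tags for page `page` of `last_page`.
# Combining rel=prev/next with NOINDEX on pages 2+ is one of the patterns
# discussed above, not the only valid configuration.
def pagination_tags(base_url: str, page: int, last_page: int) -> list:
    def page_url(n: int) -> str:
        return base_url if n == 1 else f"{base_url}?page={n}"

    tags = [f'<link rel="canonical" href="{page_url(page)}" />']
    if page > 1:
        tags.append(f'<link rel="prev" href="{page_url(page - 1)}" />')
    if page < last_page:
        tags.append(f'<link rel="next" href="{page_url(page + 1)}" />')
    if page > 1:
        # The NOINDEX option for pages 2+ (follow keeps link equity flowing):
        tags.append('<meta name="robots" content="noindex, follow" />')
    return tags

for tag in pagination_tags("https://example.com/widgets", 2, 5):
    print(tag)
```

Note the canonical is self-referencing (page 2 canonicals to page 2, not to page 1), which is exactly the point of the answer above: canonicalizing pages 2+ to page 1 is what Google frowns on.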
Intermediate & Advanced SEO | | Dr-Pete0 -
Why isn't open graph working for this URL?
Let's start with the basics. There's a tag there that does not exist: "fb:ids". I guess what you wanted to use here is "fb:admins". Now, the debugger is actually using the content in your meta tags properly. I also tested posting it to Facebook, and it shows the appropriate data as you have specified in the tags; see the screenshot.
Social Media | | FedeEinhorn0 -
Should we include a canonical or noindex on our m. (mobile) pages?
Thanks! What about rel=next and rel=prev - if we added them to the desktop version, should we also add them to the mobile version?
Intermediate & Advanced SEO | | nicole.healthline0 -
If you remove a 301 redirect, will there be a corresponding drop in traffic?
If you are seeing a large increase in traffic as the result of the redirects, then removing them will reverse that shift in traffic. If it's search engine referrals from long-tail keywords going through the redirect, then you're definitely going to feel the hit from moving to 404s. Sure, your newly created pages may replace that traffic eventually, but the 301s are helping them do that now. And traffic is just one reason; the 301s are also passing any (and dang near all) of the link equity through to the new pages in the process.
Intermediate & Advanced SEO | | BrightHealth0 -
Why are these m. results showing as blocked?
Yeah, I was testing exactly the same thing when you posted the response. I even tried crawling as Googlebot-Mobile and still got the 301 redirect. Which, from everything I'm seeing, is correct: no matter what user agent I use (desktop, mobile, spider), I always get a 301 to the www. version. @michelleh, are you sure there's a mobile version that isn't being redirected to the www. one?
Intermediate & Advanced SEO | | FedeEinhorn0 -
Google showing high volume of URLs blocked by robots.txt in the index - should we be concerned?
I think it's worth it. I'm not sure what CMS you're using, but it shouldn't take much time to add a noindex,follow meta tag to the head of all those pages, and then remove the robots.txt directive that's preventing them from being crawled.
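The order matters here: while robots.txt disallows the URLs, the crawler never fetches them and so never sees the noindex tag. A quick stdlib sketch demonstrating the before/after (paths and rules are illustrative):

```python
# Sketch: show that a robots.txt Disallow blocks crawling of the URL
# (so an on-page noindex would be invisible), and that removing the
# directive makes the URL crawlable again.
from urllib.robotparser import RobotFileParser

def can_crawl(robots_txt: str, url: str) -> bool:
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch("*", url)

before = "User-agent: *\nDisallow: /archive/"
after = "User-agent: *\nDisallow:"

url = "https://example.com/archive/old-post"
print(can_crawl(before, url))  # False: the noindex tag is never seen
print(can_crawl(after, url))   # True: crawler can now see the noindex tag
```

Once Google has re-crawled the pages and dropped them from the index, the noindex tags have done their job.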
Intermediate & Advanced SEO | | TakeshiYoung0 -
To index search results or to not index search results?
No. See: http://www.rimmkaufman.com/blog/site-search-dynamic-content-and-seo/01032013/ and http://www.mattcutts.com/blog/search-results-in-search-results/ My feeling is that if that site search info captures long-tail traffic, why not find a way to make an indexable page targeting the words that have received traffic through that search page? Cutts writes in the post I linked to: "Google does reserve the right to take action to reduce search results (and proxied copies of websites) in our own search results"
Intermediate & Advanced SEO | | IOSC0 -
Was anyone hit by BOTH the 'Phantom' update as well as Penguin 2.0?
So, I've probably dug into "Phantom" as much as anyone, and I'm afraid the picture isn't very clear. I've heard of people who got hit by Phantom and then recovered at Penguin 2.0, people who got hit and then hit again, and I've seen multiple situations where people got hit by Phantom only. My gut feeling so far is that this was an independent update and not a Penguin 2.0 test. They might both be related to link factors or seem conceptually similar, but I don't think Phantom was an early or partial release. It happened across data centers and appears to follow a pretty typical algo change pattern.
Intermediate & Advanced SEO | | Dr-Pete0 -
Received "Googlebot found an extremely high number of URLs on your site:" but most of the example URLs are noindexed.
Thanks Takeshi. Why does Google send out a warning if it isn't a huge deal?
Intermediate & Advanced SEO | | nicole.healthline0 -
What is a "good" dwell time?
I have not seen any studies indicating such a thing (my guess is that dwell time is such a strong signal of relevance that Google would never release that info; I could be totally wrong, though). An idea to improve UX: if you have a page with 2 paragraphs of text, take the average time it takes for 10 people in your office to read it, and set the 'bounce' threshold accordingly. Then you'll know if people are reading it. If you have a page with 2,000 words, average that time, etc. If visitors bounce too soon, edit the text until your visitor average meets your office average. That would equal relevance, right?
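The office-baseline idea above can be sketched as a simple expected-read-time check. This is a hypothetical heuristic, not anything Google has published; the 238 words-per-minute default is a commonly cited average adult silent-reading speed, and the 60% threshold is an arbitrary cut-off:

```python
# Sketch: estimate how long a page "should" take to read and compare that
# with the average time visitors actually spend on it.
def expected_read_seconds(word_count: int, wpm: int = 238) -> float:
    """Seconds needed to read word_count words at wpm words per minute."""
    return word_count / wpm * 60

def likely_read(word_count: int, avg_time_on_page_s: float,
                threshold: float = 0.6) -> bool:
    """Treat the page as 'read' if visitors stay at least `threshold`
    (60% by default, an arbitrary choice) of the expected read time."""
    return avg_time_on_page_s >= threshold * expected_read_seconds(word_count)

print(round(expected_read_seconds(2000)))              # ~504 s for 2000 words
print(likely_read(2000, avg_time_on_page_s=120))       # False: bouncing too soon
print(likely_read(2000, avg_time_on_page_s=400))       # True: probably reading it
```

Swapping in your own office average for the wpm default gets you back to the exact approach described above.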
Intermediate & Advanced SEO | | IOSC0 -
Can URLs blocked with robots.txt hurt your site?
90% chance not. First of all, check whether Google has indexed them; if not, your robots.txt is doing its job. However, I would reinforce that by making sure those URLs are out of your sitemap file, and make sure your robots.txt disallows are set for ALL user agents (*), not just Google, for example. Google's duplicate content policies are tough, but they will always respect simple directives such as robots.txt. I had a case in the past where a customer had a dedicated IP, and Google somehow found it, so you could see both the domain's pages and the IP's pages, both the same. We simply added an .htaccess rule to point the IP requests to the domain, and even though the situation had been like that for a long time, it doesn't seem to have affected them. In theory Google penalizes duplicate content, but not in cases like this; it's a matter of behavior. Regards!
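The "set disallows for ALL agents" advice is easy to sanity-check with Python's built-in robots.txt parser; the rules and URL below are illustrative:

```python
# Sketch: a robots.txt that only names Googlebot leaves every other
# crawler free to fetch the "blocked" URLs.
from urllib.robotparser import RobotFileParser

robots_google_only = "User-agent: Googlebot\nDisallow: /private/"

rp = RobotFileParser()
rp.parse(robots_google_only.splitlines())

url = "https://example.com/private/report"
print(rp.can_fetch("Googlebot", url))  # False: blocked for Googlebot
print(rp.can_fetch("bingbot", url))    # True: no rule applies to other bots
```

Adding a matching `User-agent: *` group closes that gap for every well-behaved crawler.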
Intermediate & Advanced SEO | | workzentre0