Hi!
By default, a WordPress blog will not show author information if you are the only author on the blog.
Either adding the code manually, as FedeEinhorn suggests, or adding a plugin that provides an author box could be a solution.
Good luck! 
Hi
I wouldn't be surprised if it takes some time before the knowledge graph is updated. Have you tried looking at http://www.blindfiveyearold.com/knowledge-graph-optimization?
Looking at http://schema.org/Organization, it looks as if there are a couple of options for adding structured data about the founding date and founder. Perhaps adding that, plus structured data about the CEO with attributes validating against http://schema.org/Person, on the company's website could do the trick?
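For illustration, a JSON-LD sketch of that kind of markup could look something like this (the company name, date, and person below are placeholders, not real data):

```html
<!-- Hypothetical example: all names and dates are placeholders -->
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Example Company",
  "url": "http://www.example.com",
  "foundingDate": "1999-04-01",
  "founder": {
    "@type": "Person",
    "name": "Jane Doe"
  }
}
</script>
```

Google's structured data testing tool can be used to check that markup like this validates.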
Best of luck!
Anders
Hi Donna!
I know this is really a long shot, but I found this: http://www.zdnet.com/blog/apple/icloud-bug-creates-thousands-of-duplicate-bookmarks/12084. The IP is supposedly registered to Apple, and it does not appear to have a hostname. Could it be that this is an iCloud server that holds a ton of links/bookmarks to your site? I know it's a stretch, but still...
If you click on the entry in GWT, are you able to see which subpages are linking your way? And are the links to your site direct links, or do they show as "via intermediate links"? If they are intermediate links, it could be that someone has scraped your site. Or could it be that your client (or your client's hosting provider) has some sort of backup server or test/staging environment that resides on this IP and that for some reason was accessible to GoogleBot? See http://webmasters.stackexchange.com/questions/54100/gwmt-show-non-existent-backlinks-via-intermediate-link.
This is just me guessing, so please don't take it as a definitive answer/solution...
I hope you'll figure it out. Please share if you find the answer to this.
Hi Donna!
One of the websites I used confirms that the IP range is owned by AT&T Global Network Services Nederland B.V., as you said.
Using http://www.iplocationtools.com/194.196.0.36.html I found that this IP is reported to belong to apple.com. The same goes when looking at http://ipaddress.is/194.196.0.36#.U85nh7HzlyI
Does that make any sense?
Best regards,
Anders
Hi!
From what I could tell, it wasn't that many pages already in the index, so it could be worth trying to lift the block, at least for a short while, to see if it will have an impact.
In addition, how about configuring how GoogleBot should treat your URLs via the URL parameter tool in Google Webmaster Tools? Here's what Google has to say about it: https://support.google.com/webmasters/answer/1235687
Best regards,
Anders
Hi Devanur.
What I'm guessing is the problem here is that, as of now, GoogleBot is restricted from accessing the pages (because of robots.txt), so it never revisits them and never updates its index with regard to the "noindex, follow" declaration that seems to be in place.
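To illustrate the conflict (the path below is hypothetical): as long as robots.txt contains a rule like the first snippet, GoogleBot never fetches the page, so it never sees the meta tag in the second snippet:

```
# robots.txt - keeps crawlers away from the pages entirely
User-agent: *
Disallow: /facets/
```

```html
<!-- In the page's <head>; only seen if the crawler is allowed in -->
<meta name="robots" content="noindex, follow">
```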
One other thing that could be considered, is to add "rel=nofollow" to all the faceted navigation links on the left.
Fully agreeing with you on the "crawl budget" part 
Anders
Hi!
It could be that those pages were already indexed before you added the directives to robots.txt.
I see that you have added rel=canonical for the pages and that you now have noindex,follow. Was that added recently? If so, it could be wise to actually let GoogleBot access and crawl the pages again; they'll then drop out of the index after a while, and you could add the directive again later. See https://support.google.com/webmasters/answer/93710?hl=en&ref_topic=4598466 for more about this.
Hope this helps!
Anders
Hi!
The reason these pages keep popping up in WMT is that they have already been indexed. You could try to remove them from Google's index by using the removal tool in WMT (https://www.google.com/webmasters/tools/url-removal) or by setting up 301 redirects from them to more ideal pages.
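On an Apache server, for instance, such a redirect could be set up in .htaccess roughly like this (both paths are made up for illustration):

```apache
# Hypothetical paths: send the old indexed URL to a more ideal page
Redirect 301 /old-page.html http://www.example.com/better-page/
```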
Hope this helps 
Anders
Hi!
Have you considered blocking Googlebot from the mobile site, only allowing Googlebot-Mobile to crawl it, and vice versa for the desktop site? See https://support.google.com/webmasters/answer/1061943?hl=en for a list of Google's crawlers...
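As a rough sketch, the robots.txt on the mobile site could then look something like this (with the user agents swapped on the desktop site):

```
# robots.txt on the mobile site (sketch only)
User-agent: Googlebot-Mobile
Disallow:

User-agent: Googlebot
Disallow: /
```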
Best regards,
Anders
Hi Philip!
If these URLs are already indexed, you should 301 redirect them to the right URL (in case they happen to have some inbound links). You could also try Google's URL removal tool (see https://support.google.com/webmasters/answer/1663416) if all you want is to get rid of them.
Good luck, hope this helps.
//Anders
Hi Ruben!
I would say that this seems a bit "too much". How about adding a "locations" area to the site and linking to it from the main menu instead? Or perhaps one location area for every law section?
Best regards,
Anders
I agree!
In addition (more long term), you should look into getting the external links that point towards Sites 2 and 3 changed to point at Site 1 (ask the linking sites to update their links), so that link value is not lost in the redirects.
Good luck!
Anders
Perhaps you could submit (or resubmit) an XML sitemap, or alternatively use "Fetch as Googlebot" on the "old" URL. That way, Googlebot should see the redirect.
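A minimal sitemap containing the new URL could look like this (the URL is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/new-url/</loc>
  </url>
</urlset>
```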
Good luck. 
Anders
Hi Oscar.
Is the redirect rule fairly new? Perhaps /Product.asp?PID=1236 was indexed before, and GoogleBot has not yet revisited the page?
Anders
It appears to redirect to a URL within the product folder, adding the product name as a subfolder and the SKU# as the last part of the URL.
The two URLs should not cause any duplicate content issues, as http://www.repsole.com/Product.asp?PID=1236 does a proper 301 redirect to the user-friendly URL.
Best regards,
Anders
Hi!
The URL Product.asp?PID=1236 redirects to a more semantic ("user-friendly") URL. Or did you perhaps have something else in mind?
Hope this helps
Hi!
One final suggestion: add nofollow to the links at http://www.thelaw.com/partners/
Perhaps they are considered problematic.
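Adding the attribute is just a matter of changing each partner link along these lines (the URL and anchor text here are placeholders):

```html
<a href="http://partner.example.com/" rel="nofollow">Partner name</a>
```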
Good luck in getting traffic etc back to normal.
Anders
Hi!
Regarding the "directory" question, I think what Todd wondered was whether some specific part of the site had received a penalty or been reported as problematic.
It appears that you have invisible links in the social sharing plugin on the left side (at least in the article section) that are not nofollowed. Search the source code for wpsr-linkback.
I found this:
<div class="wpsr-linkback"><a target="_blank" href="http://www.aakashweb.com/wordpress-plugins/wp-socializer/">WP Socializer</a><a class="wpsr_linkaw" target="_blank" href="http://www.aakashweb.com">Aakash Web</a></div>
Hope this helps!
Anders
Hi Michael!
Have you tried spidering your site (with Screaming Frog or similar tools) to see if any of the profile pages on the subdomain contain suspicious outbound links?
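If you'd rather script it than use a crawler, here is a rough Python sketch of the check itself; the sample HTML and domain below are made up, and in practice you would fetch each profile page and feed its source through the parser:

```python
# Hypothetical sketch: flag outbound links that are not nofollowed.
from html.parser import HTMLParser

class OutboundLinkChecker(HTMLParser):
    """Collects external <a href> links that lack rel="nofollow"."""
    def __init__(self, own_domain):
        super().__init__()
        self.own_domain = own_domain
        self.followed_external = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href", "")
        rel = attrs.get("rel", "")
        # External link without nofollow -> worth a manual look
        if href.startswith("http") and self.own_domain not in href \
                and "nofollow" not in rel:
            self.followed_external.append(href)

# Made-up page source standing in for a fetched profile page
sample = (
    '<a href="http://example.com/page">internal</a>'
    '<a href="http://spammy.example.net" rel="nofollow">ok</a>'
    '<a href="http://suspicious.example.org">followed!</a>'
)
checker = OutboundLinkChecker("example.com")
checker.feed(sample)
print(checker.followed_external)  # links worth reviewing by hand
```

Anything the script prints would then be reviewed manually; a tool like Screaming Frog does the same job with a UI on top.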
Is there a penalty for the entire domain, or just the subdomain?
Best regards,
Anders
Hi Shailendra!
This sounds like a good idea if you expect additional traffic to this page. If not, you could always do a 301 redirect to an events overview page or something like that.
Best regards,
Anders