Appears I broke the site... sorry
Posts made by sprynewmedia
-
RE: HTML5 Nav Tag Issue - Be Aware
-
RE: HTML5 Nav Tag Issue - Be Aware
Here is how one could test this to be sure:
- Create a site on a throwaway domain that includes:
- home page
- sub page (containing unique text in title and body)
- orphaned sub page
- Place the nav tag on all pages with links to only the first two pages.
- Add some dummy content but don't create any other links.
- Link to the orphaned page from a decently trusted and ranked page on another site.
- Wait 2-4 weeks.
- Search for the unique string and write a YouMOZ post about your findings.
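The steps above could be sketched in markup like this (filenames and the unique string are hypothetical placeholders):

```html
<!-- home.html and sub.html: both carry the same nav block -->
<nav>
  <a href="/">Home</a>
  <a href="/sub.html">Sub page</a>
  <!-- note: no link to orphan.html anywhere on this site -->
</nav>

<!-- sub.html body contains a unique, searchable string -->
<p>zxq-nav-test-string-001</p>

<!-- orphan.html is linked only from the trusted external site -->
```

If the unique string on sub.html ranks but the orphaned page never gets crawled or indexed, that would isolate what the nav links contribute.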
-
RE: HTML5 Nav Tag Issue - Be Aware
While I have found it does, you could always use a logo link to accomplish this.
-
RE: HTML5 Nav Tag Issue - Be Aware
To be sure I understand: you have a site-wide header <nav> section, but you are not seeing the backlinks from all the pages in the GWT internal links report?
(Incidentally, my experience has shown these links do count.)
Could we see the site?
How long ago did you add the nav element?
-
RE: HTML5 Nav Tag Issue - Be Aware
This seems reasonable and a good way to ensure the link is allocated correctly.
I presume your issue is that you have external links inside a <nav> container?
Follow-up: it appears the spec does suggest the nav element is for internal links - the element is "primarily intended for sections that consist of major navigation blocks." External links are generally not considered major navigation, no?
http://www.whatwg.org/specs/web-apps/current-work/multipage/sections.html
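On that reading of the spec, one approach is simply to keep external links outside the nav container (markup below is a hypothetical sketch):

```html
<nav>
  <!-- major site navigation: internal links only -->
  <a href="/">Home</a>
  <a href="/products.html">Products</a>
  <a href="/contact.html">Contact</a>
</nav>

<!-- external links live outside the nav element entirely -->
<aside>
  <a href="http://example.com/partner">Partner site</a>
</aside>
```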
-
RE: Html text versus a graphic of a word
Short answer: it matters a fair amount, and/or there's no point in using a graphic.
Alt attributes have been abused, so they'd be on the short list of places to watch for keyword stuffing. H1s are also on that list but fine within normal use (one or two per page, highly visible, title-length content).
With modern web fonts, the only reason to replace words with an image is a really fancy treatment of the type. Even then, use an image-replacement technique so that Google has an easier time understanding the content.
http://www.zeldman.com/2012/03/01/replacing-the-9999px-hack-new-image-replacement/
Save the alt text for describing actual content images.
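The technique in that link (Scott Kellum's replacement for the old -9999px hack) boils down to roughly this; the class name and image dimensions are placeholders:

```css
/* The real text stays in the markup for search engines,
   but the element visually shows a background image instead. */
h1.logo {
  width: 200px;          /* size of the replacement image */
  height: 60px;
  background: url(logo.png) no-repeat;
  text-indent: 100%;     /* push the text out of the box...  */
  white-space: nowrap;   /* ...keep it on a single line...   */
  overflow: hidden;      /* ...and clip it from view         */
}
```

Unlike the -9999px version, this doesn't force the browser to draw a huge off-screen box.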
-
RE: Website Design Structure
Responsive.
But... I prefer a server AND client approach. Pure media-query techniques result in really heavy pages that force too many design compromises. I use server-side client-capability tables to narrow the range of assets different clients get. However, they all use the same URL.
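A minimal sketch of that hybrid idea, assuming the server's capability lookup stamps a coarse device bucket on the body (class names here are hypothetical):

```html
<!-- Server side: a user-agent / capability-table lookup picks
     one coarse bucket and writes it into the page. Same URL
     for every client; only this class differs. -->
<body class="device-tablet">
  <style>
    /* Client side: media queries only fine-tune within the bucket */
    .device-tablet .sidebar { width: 30%; }
    @media (max-width: 600px) {
      .device-tablet .sidebar { width: 100%; }
    }
  </style>
  <!-- page content -->
</body>
```

The server decides what gets sent (keeping pages light); the media queries handle orientation and window-size changes the server can't see.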
-
RE: Robots review
Thanks Aaron.
I will add the rules back, as I want Roger to have nearly the same experience as Google and Bing.
Is it best to add one at a time? That could take over a month to figure out what's happening. Is there an easier way to test? Perhaps something like the Google Webmaster Tools Crawler Access tool?
-
RE: Robots review
All URLs are rewritten to default.aspx?Tabid=123&Key=Var. None of these are publicly visible once the rewriter is active. I added the rule just to make sure the page is never accidentally exposed and indexed.
-
RE: Robots review
Actually, I asked the help desk this question (essentially) first, and the lady said she wasn't a web developer and that I should ask the community. I was a little taken aback, frankly.
-
RE: Robots review
Can't. Default.aspx is the root of the CMS, and a redirect would take down the entire website. The rule exists only because, for a short period, Google indexed the page incorrectly.
-
Robots review
Anything in this that would have caused Rogerbot to stop crawling my site? It only saw 34 of 5000+ pages on the last pass; it had no problems seeing the whole site before.
User-agent: Rogerbot
Disallow: /default.aspx?*
// Keep from crawling the CMS urls (default.aspx?Tabid=234). Real home page is home.aspx
Disallow: /ctl/
// Keep from indexing the admin controls
Disallow: ArticleAdmin
// Keep from indexing the article admin page
Disallow: articleadmin
// Same in lower case
Disallow: /images/
// Keep from indexing CMS images
Disallow: captcha
// Keep from indexing the captcha image, which appears to crawlers as a page

General rules, lacking wildcards:
User-agent: *
Disallow: /default.aspx
Disallow: /images/
Disallow: /DesktopModules/DnnForge - NewsArticles/Controls/ImageChallenge.captcha.aspx
-
RE: Markup reference data using Schema.org
Near as I can tell, reference material has not yet been addressed.
This means I have to extend the schema myself as outlined here: http://www.schema.org/docs/extension.html
Not 100% sure how to go about that in a way that will actually register properly. Good chance I will get it wrong.
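For what it's worth, the mechanism on that extension page amounts to appending a slash and your own term to an existing type URL. A hypothetical glossary entry might look like this - "Glossary" is a self-made extension of CreativeWork, the term and definition are invented, and whether crawlers actually register it is exactly the open question:

```html
<div itemscope itemtype="http://schema.org/CreativeWork/Glossary">
  <!-- one defined term within the glossary page -->
  <h2 itemprop="name">Estoppel</h2>
  <p itemprop="description">A rule preventing a party from denying
     a fact it previously asserted as true.</p>
</div>
```

Per the extension docs, tools that don't understand the extension should fall back to treating it as a plain CreativeWork.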
-
RE: Markup reference data using Schema.org
Thanks, but that's not really related to schema.org.
-
RE: Markup reference data using Schema.org
Sorry, but that is literally no help. I've read the site a fair bit - a link to its sitemap doesn't get me any closer.
-
RE: Do drop caps impact the search value of your content?
Can't see it mattering, especially if the span is added with JS. But if you are worried, try pure CSS with the :first-letter pseudo-element: http://css-tricks.com/snippets/css/drop-caps/
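The pure-CSS version from that snippet comes down to roughly this (the :first-child scoping assumes the drop cap goes on the opening paragraph of your markup):

```css
/* First letter of the first paragraph becomes the drop cap -
   no extra span in the markup, so nothing for a crawler to see. */
p:first-child:first-letter {
  float: left;
  font-size: 3em;
  line-height: 0.9;
  padding-right: 0.1em;
}
```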
-
Markup reference data using Schema.org
Can anyone point me to a page showing how to mark up reference data according to schema.org? I.e., a glossary or dictionary page.
-
RE: Block an entire subdomain with robots.txt?
Fact is, the robots file alone will never work (the link has a good explanation why - short form: all it does is stop bots from crawling the pages again; it doesn't remove what's already indexed).
Best to request removal then wait a few days.
-
RE: Block an entire subdomain with robots.txt?
You should file a removal request in Google Webmaster Tools. You have to verify the sub-domain first, then request the removal.
See this post on why the robots file alone won't work...
http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
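Assuming the subdomain serves its own robots.txt, the two pieces together might look like this (subdomain name is a placeholder):

```
# robots.txt served at http://sub.example.com/robots.txt
# Blocks all future crawling of the entire subdomain...
User-agent: *
Disallow: /

# ...but URLs already in the index only drop out after the
# removal request in Webmaster Tools (verify sub.example.com first).
```

The crawl block and the removal request do different jobs, which is why you need both.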
-
RE: Advice regarding Panda
Wow, you have really highlighted how far the copying of the site has gone. The site was created in '96, and many of the definitions are over 10 years old now. I can guarantee the author didn't copy the content unless it was cited. But I can see how definitions of the same thing would come out very similar.
Take http://www.poole.it/cassino/ARCHIVE/TEXTS_legal/duhaime%20online%20legal%20dictionary.htm - it shows that, even with a single (wrong) backlink, the copy would be competing for the original-content title, right?
Looks like we have to get busy with DMCA requests.
The next step for the citations is a separate domain, which is a shame - Google really needs to catch up with reference sites and stop treating all pages on the web the same. Sure, the citation pages don't have to rank at the top, but they shouldn't be hurting the rest of the content either.