Posts made by danatanseo
-
RE: Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
Thanks William. Good suggestion. I am on it! I'll post back here once I know more.
-
RE: Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
Excellent thoughts! Yes, they are consistently the same IP addresses every time. There are several IPs producing the same phenomenon, so I looked at this one: 66.249.79.174.
According to what I can find online, this is definitely Google, and the data center is located in Mountain View, California. We are a USA company, so it seems unlikely that it is a country issue. It could be that this IP (and the others like it) is inadvertently being blocked by a spam filter.
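As a side note, the standard way to confirm that an IP really belongs to Googlebot is a reverse DNS lookup followed by a forward lookup that resolves back to the same IP. A minimal Python sketch of that check (the googlebot.com/google.com suffixes are the documented ones; the network lookups naturally require DNS to be reachable):

```python
import socket

def hostname_is_google(hostname: str) -> bool:
    """A genuine Googlebot reverse-DNS name ends in googlebot.com or google.com."""
    return hostname.rstrip(".").endswith((".googlebot.com", ".google.com"))

def is_verified_googlebot(ip: str) -> bool:
    """Reverse-resolve the IP, check the domain, then confirm the
    hostname resolves back to the same IP (forward confirmation)."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)
    except OSError:
        return False
    if not hostname_is_google(hostname):
        return False
    try:
        return ip in socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False
```

A spoofed "Googlebot" user-agent fails this check because its reverse DNS won't land in Google's domains.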
It doesn't matter the day or time, every time Googlebot attempts to crawl from this IP address our server returns 301 status codes for every request, with no exceptions.
I am thinking I need to request a list of IP addresses being blocked by the server's spam filter. I am not a server administrator...would this be something reasonable for me to ask the people who set it up?
Is returning a 301 status code the best way to handle a bot attempting to disguise itself as Googlebot? I would think setting the server up to respond with a 304 would be better? (Sorry, that's kind of a follow-up "side" question.)
Let me know your thoughts and I'm going to go see if I can find out more about the spam filter.
-
Why would our server return a 301 status code when Googlebot visits from one IP, but a 200 from a different IP?
I have begun a daily process of analyzing a site's Web server log files and have noticed something that seems odd. There are several IP addresses from which Googlebot crawls for which our server returns a 301 status code for every request, consistently, day after day. In nearly all cases, these are not URLs that should 301. When Googlebot visits from other IP addresses, the exact same pages are returned with a 200 status code.
Is this normal? If so, why? If not, why not?
I am concerned that our server returning an inaccurate status code is interfering with the site being effectively crawled as quickly and as often as it might be if this weren't happening.
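For what it's worth, a per-IP status tally like the one described above is easy to script. A minimal Python sketch, assuming the common Apache/Nginx combined log format (the regex, and treating 66.249.x.x as the Googlebot range, are my assumptions):

```python
import re
from collections import Counter, defaultdict

# Assumed combined log format:
# ip ident user [timestamp] "request" status bytes ...
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" (\d{3})')

def status_counts_by_ip(lines, ip_prefix="66.249."):
    """Tally response status codes per client IP, keeping only IPs
    in the (assumed) Googlebot 66.249.x.x range."""
    counts = defaultdict(Counter)
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group(1).startswith(ip_prefix):
            counts[m.group(1)][m.group(2)] += 1
    return counts
```

Run over a day's access log, any IP whose counter is all 301s stands out immediately.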
Thanks guys!
-
RE: Googlebot soon to be executing javascript - Should I change my robots.txt?
Excellent answer. Thanks so much Doug. I really appreciate it! Adding a "nofollow" attribute to the Checkout button is a good suggestion and should be fairly easy to implement. I realize that internal nofollows are not normally recommended, but in this instance, may not be a bad idea.
-
Googlebot soon to be executing javascript - Should I change my robots.txt?
This question came to mind as I was pursuing an unrelated issue and reviewing a site's robots.txt file.
Currently this is a line item in the file:
Disallow: https://*

According to a recent post on the Google Webmaster Central Blog, [Understanding Web Pages Better](http://googlewebmastercentral.blogspot.com/2014/05/understanding-web-pages-better.html), Googlebot is getting much closer to being able to properly render JavaScript. Pardon some ignorance on my part because I am not a developer, but wouldn't this require Googlebot to be able to execute JavaScript?

If so, I am concerned that disallowing Googlebot from the https:// versions of our pages could interfere with crawling and indexation, because as soon as an end-user clicks the "checkout" button on our view-cart page, everything on the site flips to https://. If this were disallowed, would Googlebot stop crawling at that point and simply leave because all pages were now https://? Or am I just waaayyyy overthinking it? ...wouldn't be the first time! Thanks all!
-
RE: Looking for a specific article by Rand or others
You might also try this one recently published on the Moz blog by @CyrusShephard : http://moz.com/blog/google-plus-correlations
-
RE: Is this site structure going to kill link juice?
I think this could be happening because of the way Google interprets the quotation marks.
- I tried it with the opening quote, but not the closing quote, and it worked. Notice too that the text that's highlighted (or bolded) in the search result is everything up to the closing quote.
-
RE: Hiding body copy with a 'read more' button
You are welcome Dan!
-
RE: Website Hierarchy Question / Discussion
I agree with David. There are really arguments for going either way. I would give one edge to this method:
www.site.com/category-page/product-page
The advantage of using this instead of the super-simple URLs comes when you have a really large, complex site and you need to move it to another platform. From an organizational standpoint, and just for knowing from your URLs what "lives" where, it's much easier if your URLs echo the structure of your site. Still, there are probably some ways to cope with that too, so depending on your CMS, this might not really be a problem.
-
RE: Hiding body copy with a 'read more' button
Hi Dan,
Yes, if you accomplish this with CSS and collapsible/expandable elements, it's totally fine. It's understandable why, from a design standpoint, it might be much more attractive to have a page with fewer words on it. Justin Taylor (@justingraphitas) actually did a bang-up job in a Mozinar on designing for SEO that discusses this exact topic: http://moz.com/webinars/designing-for-seo
Hope that helps!
Dana
-
RE: Lost Links in Google Webmaster Tools
I checked your site in OSE and looked at both versions of your URL: http://brownboxbranding.com and http://www.brownboxbranding.com
Has something recently changed with the way your domain is redirected? I ask because it looks like the lion's share of authority and links is on http://www.brownboxbranding.com, which redirects to the "non-www" version.
It seems to me like the redirect should be the other way around. i.e. the "non-www" version of your domain should redirect to the "www" version. I would also make sure that Google Webmaster Tools is set to reflect the correct "preferred" domain.
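If the site happens to run on Apache with mod_rewrite (an assumption on my part; the equivalent exists for other servers), flipping the redirect direction is usually just a few lines. A sketch with a placeholder domain:

```apache
# Hedged sketch, assuming Apache + mod_rewrite (e.g. in .htaccess);
# example.com is a placeholder for the real domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```

The single 301 consolidates the non-www authority onto the www host rather than the other way around.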
Anyone else see this as the possible problem?
-
RE: How do you feel when Moz marks one of your questions as "answered?"
I don't think you came across as defensive at all. I totally get the house-keeping issue. I know the "Bounty" section is something quasi-new...what about the possibility of just moving unanswered questions over there after they've gone unanswered for a set period of time, provided the person who posted responds to admin emails and indicates the question is still unanswered?
Perhaps another option would be for the original poster to reverse the "Answered" status?
I don't think Moz's intent at marking questions as "answered" was to effectively shut-down a topic, but, unfortunately, I do think that's what happens.
I agree with EGOL; I am not looking to see whether someone marked my answer as a "good answer," although I am always thankful if they do. What I do is go back to questions I've answered to see if the person responded with another question or needs clarification on something, and I try to help them if I can. Because people who are newer to Q & A often mark a question as "answered" when they read a response they "like" (but that may not be a complete answer), I'll often encourage them to continue to solicit answers from more people so they can get more input from the community.
It would be interesting to see data on how many threads completely stop getting new comments once they are marked as "answered." I bet it's more than 90%... which, from a UGC viewpoint, could mean Moz is losing out on content it would be getting by leaving more threads marked as "unanswered." Hmmm.
-
RE: How do you feel when Moz marks one of your questions as "answered?"
Amen! Side note: I originally posted this discussion topic a week ago, and it took me this long to come back and respond. I was really excited to see 13 new comments!
I totally agree with EGOL and Donna about the default view being changed to "Active." If this post hadn't been one of mine, I probably wouldn't have ever found it.
-
RE: How do you feel when Moz marks one of your questions as "answered?"
Excellent response! You know, I am here a lot...and I had no idea there was an "Active" view, so I am a perfect example of exactly what you described.
I really like your idea. It looks like Jenn has already picked up the ball and started running with it. That's very cool.
I agree with you EGOL that most often things get marked as "answered" when something is liked, but not necessarily answered. I have seen the thumbs down for answers that aren't necessarily what someone wanted to hear too, but less often lately.
I guess the whole reason I brought it up was that a few times I wanted a wider variety of opinions on a question I had asked, but because it got marked as "answered," people stopped looking at it. It sounds like Moz might consider making some changes to the Q & A that could make it better. It's already really good, but I'm sure with some good feedback they can make it even better. Thanks again for chiming in!
-
How do you feel when Moz marks one of your questions as "answered?"
Hi everyone,
This is not meant to be snarky at all, so I just want to preface my question with that.
So, since the new re-branded Moz rolled out last year, I'm sure many of you have noticed that if you ask a question and it is answered by a Moz associate, your question is marked as "answered."
I'm sorry, but I don't like this. Here's why:
I'm the one who asked the question. I should be the one who determines if the answer was adequate for me, or if it didn't sufficiently answer my question. This is particularly true when my question doesn't have to do with a customer service issue or a Moz tool question.
If I ask a question about SEO, Content, CRO, marketing or any other subject, I feel like it should be me and only me who determines whether or not I feel like my question is answered.
In addition to this, Moz is actually depriving itself of useful UGC by shutting down questions in this way. How? Because when the rest of us who frequent the Q & A see a question that's already been marked as "answered," we tend not to open it, read it, and respond, because we think that person has already gotten what they needed... when in fact it could be that a Moz associate has jumped in and marked the question as answered when it really wasn't. Consequently, we all miss out.
I propose/move that Moz associates can only mark questions as "answered" when they pertain directly to Q & A about Moz tools, service and support. All other questions must be marked as "answered" only by the asker or closed as "answered" after they have been dormant for 6 months or more.
Can I get a second (motion) ?
-
RE: We're currently not using schemas on our website. How important is it? And are websites across the globe using it?
Hi Pawan,
You're welcome
Yes, I believe you are correct in saying that the data highlighter really only translates to Google right now. However, it seems Bing and Yahoo! really are doing very little with structured data at the moment. I think your industry determines your timeline for adding the markup. If you are in the restaurant, food, or travel industry, I think you really have to start now just to stay competitive. If you're in a niche, maybe it's not so crucial. One thing's for sure: what's true about structured data now will probably be different in six months, so whatever you do now will need to be reviewed over time, just like most anything else related to SEO.
There's always something new and always something changing. That's why we love it, right?
Dana
-
RE: We're currently not using schemas on our website. How important is it? And are websites across the globe using it?
I totally agree with Lesley. You asked why so few sites might be using them. I think it's a question of knowledge and implementation. Unless you are extremely comfortable with HTML and XML, schema.org markup can be very intimidating. It also doesn't help that Google is choosing to display only certain elements of structured data right now, and even then, it's sporadic. In fact, Google recently went from displaying a lot of authorship information to displaying less. This is all still in the experimental stages. That being said, will it go away? i.e., is it just a search fad?
My answer is "no." Structured data (also referred to as "schema," "microdata," "rich snippets," and "microformats") will only become more and more important until search engine bots get better at understanding the different elements of a Web page, for example, understanding that there might be an MSRP price, an "our price," and a "regular price" simply by crawling the data. Right now, bots aren't very good at that: if they crawl three prices, all they understand is a very basic "$10.00" - "$8.00" - "$7.00," but they won't have any idea how those three prices relate to each other without schema.org markup. Or, as another example, especially for e-commerce: a product page might have many images on it. How does a bot know which image on the page is the main product image? Bots aren't quite smart enough to know this because they can't "see" a page like a human sees a page... they can only crawl code.
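To make that concrete, here is a minimal schema.org Product/Offer sketch in microdata (the product name and price are invented purely for illustration) that tells a bot which number is the selling price:

```html
<!-- Illustrative only: the product and price are made up -->
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Example 8-Channel Mixer</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <meta itemprop="priceCurrency" content="USD">
    Our price: $<span itemprop="price">8.00</span>
  </div>
</div>
```

Without the itemprop labels, "8.00" is just a number on the page; with them, a crawler can tell it's the offer price in USD.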
But, fear not! There is help! Google introduced a Data Highlighter in Google Webmaster Tools sometime last year. If you have a smaller, simpler site, you can use this tool to mark up your pages with schema without knowing a lick of code. Here's how to do it: http://www.danatanseo.com/2013/08/google-finally-demystifies-structured.html
Hope this is helpful!
-
RE: I need help!
I liked #2 because it was the only one that really communicated to me that the site was about design services.
-
RE: GWT shows 38 external links from 8 domains to this PDF - But it shows no links and no authority in OSE
Very interesting Travis. I hadn't even thought to take a look at some competitor's pdfs to see what they are looking like in some of the same tools. Yes, this is something we need to keep testing to see if we can figure out if going through the trouble of inserting links back to our domain is a worthwhile project.
-
GWT shows 38 external links from 8 domains to this PDF - But it shows no links and no authority in OSE
Hi All,
I found one other discussion about the subject of PDFs and the passing of PageRank here: http://moz.com/community/q/will-a-pdf-pass-pagerank - but that thread didn't answer my question, so I am posting it here.
This PDF: http://www.ccisolutions.com/jsp/pdf/YAM-EMX_SERIES.PDF is reported by GWT to have 38 links coming from 8 unique domains. I checked the domains and some of them are high-quality relevant sites. Here's the list:
Domains and Number of Links
prodiscjockeyequipment.com 9
decaturilmetalbuildings.com 9
timberlinesteelbuildings.com 6
jaymixer.com 4
panelsteelbuilding.com 4
steelbuildingsguide.net 3
freedocumentsearch.com 2
freedocument.net 1

However, when I plug the URL for this PDF into OSE, it reports no links and a Page Authority of only "1." This is not a new page. This is a really old page.
In addition to that, when I check the PageRank of this URL, the PageRank is "nil," not even "0."

I'm currently working on adding links back to our main site from within our PDFs, but I'm not sure how worthwhile this is if the PDFs aren't being allocated any authority from the pages already linking to them. Thoughts? Comments? Suggestions? Thanks all!