You're welcome Emily.
Moz has a list of recommended companies here: http://moz.com/article/recommended
I suggest you download your "latest links" from Google Webmaster Tools to verify whether the report is or isn't an OSE issue. If Google also shows all those new links, you might want to take a much closer look, and if those are spammy backlinks, start disavowing before getting any penalty.
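If it does come to disavowing, the file Google accepts is a plain text list, one entry per line. A small sketch, with made-up domains:

```text
# Comments start with "#"
# "domain:" lines disavow every link from that domain
domain:spammy-links-example.com
# bare URLs disavow a single linking page
http://another-example.com/spammy-page.html
```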
206 means partial content, which is what your website/server is delivering in response to Facebook's request. Have you tested "Fetch as Googlebot" under Webmaster Tools to see if Google can get the files? https://www.google.com/webmasters/tools/googlebot-fetch
If you get an error there, then it must be something IP-related with your server. My test returned a 200, and a test using Googlebot as the user agent also returned a 200, which means the IP wasn't blocked (nor the user agent excluded). Basically, if Googlebot (or Facebook) is unable to access your site, it must be something IP-related.
Hope that helps!
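To make that kind of user-agent test repeatable, here is a rough, self-contained Python sketch. It spins up a toy local server that blocks a made-up user agent ("BlockedBot"), purely to simulate how a server can answer different clients differently; it does not reflect your server's actual configuration:

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.request import Request, urlopen
from urllib.error import HTTPError

class Handler(BaseHTTPRequestHandler):
    """Toy server: returns 403 to one (made-up) user agent, 200 to all others."""
    def do_GET(self):
        if "BlockedBot" in self.headers.get("User-Agent", ""):
            self.send_response(403)
        else:
            self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):  # keep the demo output clean
        pass

def status_for(user_agent, port):
    """Request the local test server with a given User-Agent, return the status."""
    req = Request(f"http://127.0.0.1:{port}/", headers={"User-Agent": user_agent})
    try:
        return urlopen(req).status
    except HTTPError as err:
        return err.code

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

ok = status_for("Googlebot/2.1", port)      # allowed user agent
blocked = status_for("BlockedBot/1.0", port)  # "blocked" user agent
server.shutdown()
print(ok, blocked)  # 200 403
```

The same pattern of sending different User-Agent headers is what you would do (with curl or similar) against your real server to see whether it treats Facebook's or Google's crawler differently.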
The "page" you see only the image is the image file itself, there's no page there, just the file.
Wordpress does that by default but you can simply change that default to other options they offer and it is "saved" as the default, like no link, link to another page, etc.
The only benefit of linking to the image file is that images are usually scaled down to fit into posts, so someone may want to see the image at its full size; hence the link to the image file. There are also other ways to deal with that, like lightboxes to display images.
You could redirect the image to the page where it appears, but that requires some coding (detecting where your image is being requested from, etc.). Doing that may also carry a recently announced Google penalty called "Image mismatch".
There's no "best practice" here; the best is whatever you consider best for each image. Take the image-scaling example I mentioned: say you post an infographic, and the image is much larger than the space you have available. It makes sense to link to the image file so the user can see the infographic at its full size.
Hope that helps!
Your site in fact lacks meta descriptions on many pages. On the page you posted, you have the OG:DESCRIPTION tag but not the description tag. If you are using some kind of CMS, make sure the description tags are active; the content could be the same as the one in OG:DESCRIPTION.
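For illustration, a page's head can carry both tags with the same content (the summary text below is just a placeholder):

```html
<head>
  <title>Example page</title>
  <!-- Open Graph description: used by Facebook and other social networks -->
  <meta property="og:description" content="A short summary of this page." />
  <!-- Standard meta description: used by search engines for snippets -->
  <meta name="description" content="A short summary of this page." />
</head>
```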
Unfortunately, at this point, I don't think Moz is capable of doing that, although it would be a nice feature. Moz would still need to crawl the entire site to find the URLs that match your wildcard settings, ending up crawling the entire site anyway.
I am guessing you use some kind of CMS for those pages, maybe WordPress. If so, you can change the URL structure to something like domain.com/blog/* and then create a campaign for domain.com/blog.
Anyway, you can contact Moz at help@moz.com and ask if they are planning to add such a feature.
(I run a test prior to this response to see if that worked, it didn't.)
Agreed.
Issue: the quotation marks!
FIX: set them right: " . I think what you used is called a "double prime".
Hope that helps!
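If it helps telling the characters apart, this small Python sketch prints the code points of the straight quote versus its look-alikes:

```python
import unicodedata

# A straight ASCII quote vs. look-alike characters that break HTML attributes
marks = ['"', '\u2033', '\u201c', '\u201d']
for ch in marks:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# The double prime is a different character from the straight quote
print('\u2033' == '"')  # False
```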
I'm with Peter. As I wrote to you yesterday in the question you posted: http://moz.com/community/q/all-keywords-increasing-rank-except-url-keyword-whats-going-on
I think they do.
Our scenario:
http://domain.com and http://domain.com/index both load with the same content, with the proper canonical tag, and we are not getting any duplicate content warnings (or duplicate titles and descriptions).
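For reference, the canonical tag in that scenario would look like this (domain.com standing in for the real domain, as in the question):

```html
<!-- placed in the <head> of both http://domain.com/ and http://domain.com/index -->
<link rel="canonical" href="http://domain.com/" />
```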
Implementing the redirect at the server level will be much better, as you can redirect all pages to their corresponding page on the .com version.
You will need the URL-rewrite extension: http://www.iis.net/downloads/microsoft/url-rewrite
Then create a "Canonical Hostnames" rule to redirect the other domains.
Hope that helps!
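As a rough sketch of what that rule looks like in web.config (the hostnames here are placeholders; adjust them to your domains):

```xml
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Canonical Hostnames" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <!-- redirect any host that is NOT the canonical hostname -->
          <add input="{HTTP_HOST}" pattern="^www\.example\.com$" negate="true" />
        </conditions>
        <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```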
EDIT:
You should check your entire HTML code; there seem to be lots of issues there, for example:
After going back to your campaign dashboard, does the campaign show up there?
If not, you can do 2 things:
Set the new campaign up in PRO (in Analytics you can switch to PRO), and then contact help@moz.com to look into it.
Hope that helps!
What do you mean by "search engine simulators"?
I tested crawling your site with googlebot as the user agent and it worked just fine.
Google and other engines are capable of running JavaScript and AJAX just fine, so that shouldn't be an issue.
What I would suggest is looking at your page speed. Your homepage loads a TON of external files, about 50 requests for JS and CSS files. You should really consider combining all that code into a single JS file and a single CSS file; making over 50 calls (plus the extra AJAX calls) is WAY too many, not to mention the hundreds of lines of inline JS and styles you have...
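A build tool normally handles the combining, but the core idea is just concatenation. A toy Python sketch (the file names are invented for illustration):

```python
import tempfile
from pathlib import Path

def bundle(files, out):
    """Concatenate several source files into one, so the page makes a
    single HTTP request instead of one per file."""
    parts = [f"/* {f.name} */\n{f.read_text()}" for f in files]
    out.write_text("\n".join(parts))

# demo with throwaway files in a temp directory
tmp = Path(tempfile.mkdtemp())
(tmp / "menu.js").write_text("var menu = 1;")
(tmp / "slider.js").write_text("var slider = 2;")
out = tmp / "bundle.js"
bundle([tmp / "menu.js", tmp / "slider.js"], out)
print(out.read_text())
```

In practice you would also minify the result, and the same approach applies to the CSS files.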
Unfortunately, no. Moz gets your traffic numbers because you allowed access to your Analytics. There's no way to know other sites' numbers if you don't have access to their Analytics.
There are sites that provide some open metrics (Alexa, Compete, etc.), but almost all are inaccurate unless you installed their tracking code.
You can return a Last-Modified header, include a lastmod entry in your sitemap, or use Fetch as Googlebot in WMT and then submit the page to the index.
Even with any of these in place, Google automatically re-crawls your site every few days/weeks or even hours...
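For the sitemap route, a lastmod entry looks like this (the URL and date are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/updated-page</loc>
    <lastmod>2013-11-20</lastmod>
  </url>
</urlset>
```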
They almost never tell the "why" and "how", because spammers would eventually find a way to overcome it. Matt Cutts at Pubcon simply said something like "the roll out of Google authorship opened the door to anyone that can implement a Google author tag. In the coming months Google will finish up a plan to look at social factors around google authorship and award higher priority to author’s that are truly authoritative on a topic and that have published social conversations on that topic."
Moz uses servers, probably in the location offered (or the nearest, I guess), with a clean browser: no previous cookies or history on Google. Those are the rankings you see in the rank tracker.
Instead of taking the results from Moz as "exact", I suggest you take them as a "trend". Most users have their Google search personalized with who knows how much information Google has gathered; most likely, you could use 2 browsers on the same computer and get different results.
Using rank tracker as a trend helps you understand what is happening with your site ranking-wise (increasing/decreasing).
If you MUST serve all pages over HTTP, then a simple rewrite rule will do (if using Apache).
RewriteEngine On
RewriteCond %{HTTPS} =on
RewriteRule ^/?(.*) http://%{SERVER_NAME}/$1 [R=301,L]
Users won't get warnings, as those only happen when you are POSTing data from an HTTPS page to an HTTP one. Just make sure any sensitive info is transmitted over HTTPS to prevent eavesdropping.
Hi Rob,
I personally wouldn't go the way you are heading... that could be seen by Google as a technique to manipulate search engine results (which, as you stated, it is).
But to answer your question: why don't you use the "definitive" version of the page as the canonical? If the one including "near downtown" is the most accurate (and complete, as I guess the hotel IS near downtown), then you should go with that and noindex the alternatives. I know that's not your intention, but that is the way it should be done.
Hope that helps!