Questions
Domain Migration Question
The question that isn't being asked here is whether rebranding to establish a brand is the better play. As you are finding out, when you register domains and build out sites named after your current offering/product, your site is limited by that choice. So why not reconsider the rebrand with a new domain that doesn't put you in this situation again? Then you'll of course want to migrate the appropriate content, add the 301 redirects, etc. Good luck!
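As a minimal sketch of the redirect side of such a migration, assuming Apache with mod_rewrite (old-brand.com and new-brand.com are placeholder domains, not from the original thread):

```
# .htaccess on the old domain: send every URL to the same path
# on the new domain with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-brand\.com$ [NC]
RewriteRule ^(.*)$ https://www.new-brand.com/$1 [R=301,L]
```

Page-to-page mappings like this preserve most of the link equity that a blanket redirect to the new homepage would throw away.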
Search Engine Trends | NickLeRoy
Issues with Crawl Test and SSL Certificate
Hi Chris! Kristina from Moz's Help Team here. It looks like you have quite a few Crawl Test results, so it is difficult for me to look into what may be the issue here without a bit more detail. If you don't want to share which test you are referring to here, can you please email us at help@moz.com or click the blue message icon in the bottom right corner of the product to send us the details of which test you had this issue on so that we can investigate further? I look forward to hearing from you soon.
Other Research Tools | KristinaKeyser
Robots.txt Disallowed Pages and Still Indexed
And don't forget to remove the Disallow rule from robots.txt first if you want the page removed from the index. If you add a meta noindex while the page is disallowed, it won't go anywhere: the crawler can't fetch the page, so it never sees the tag, and the page stays indexed. The order is: Allow > add meta noindex > wait for the page to be deindexed > Disallow.
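A sketch of the two pieces involved, with placeholder paths (not from the original question):

```
# robots.txt during the removal window: no Disallow for the page,
# so the crawler can fetch it and see the noindex tag
User-agent: *
Disallow:
```

```html
<!-- in the <head> of each page to be removed from the index -->
<meta name="robots" content="noindex">
```

Only once the pages have dropped out of the index should the Disallow rule be reinstated.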
Intermediate & Advanced SEO | Igor.Go
Styling in my SERP, why?
There are a couple of spots, such as the link title attribute, where your tags are being translated into <em>. For example: <a href="/store/stanley/en_US/pd/productID.325299600" title="Classic One Hand Vacuum Mug <em>20oz</em>">. Not sure what's going on, but it might be an oddity in the CMS. I suspect Google is processing one of these instead of the actual text, but I'm not sure why.
Intermediate & Advanced SEO | Dr-Pete
Question about Syntax in Robots.txt
That's excellent, Chris. Use the Remove Page function as well - it might help speed things up for you. -Andy
Intermediate & Advanced SEO | Andy.Drinkwater
What Transcription Product should I use for Videos?
We use Speechpad here at Moz. It's quite accurate; we really just go over the transcript after the fact for formatting.
Online Marketing Tools | MattRoney
Migrating YouTube Channels
The official YouTube take on it is here: https://support.google.com/youtube/answer/2404846 "Unfortunately, you can't merge or link separate YouTube channels. Similarly, you can't transfer data from one channel to another (this includes videos). However, you can download your videos from your own channel. Once you've downloaded your video, you can re-upload it to a different channel. View count and other statistics will start over for the new upload." That's the official version.
Branding / Brand Awareness | RyanPurkey
DNS Prefetching for Tracking Applications
Hey Chris, one of the two bigger factors in page load speed is the number of HTTP requests (meaning the various files that have to be loaded), but each new domain slows things down even further. Essentially, every time the browser sees a new subdomain or domain, it has to go look up DNS information for that host. Because of that, prefetching DNS info for those 3rd party sites is valuable, since it tells the browser "here is the full list of domains you need to get info on" right upfront. As Ryan mentioned, it's just one more thing that can get checked off the list. I think the value of implementing this is going to be strongest for high-traffic publishers which have tons of 3rd party data collection on their site from ad networks and other providers.
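A short sketch of what that looks like in practice; the hostnames are hypothetical stand-ins for whatever third-party trackers a site actually loads:

```html
<!-- in the <head>, before any scripts from these hosts are requested -->
<link rel="dns-prefetch" href="//stats.example-analytics.com">
<link rel="dns-prefetch" href="//ads.example-network.com">
```

Each hint lets the browser resolve the hostname in parallel, so the lookup is already done by the time a script or pixel from that host is requested.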
Online Marketing Tools | KaneJamison
XML and Disallow
Hi Thomas, I don't think there is technically a problem with adding URLs to a sitemap and then blocking some of them with robots.txt. I wouldn't do it, however, and I would give the same advice as you did: regenerate the sitemap without this content.

The main reason is that it goes against the main goals of a sitemap: helping bots crawl your site and providing valuable metadata (https://support.google.com/webmasters/answer/156184?hl=en). Another consideration is that Google indicates the % of URLs of each sitemap which are indexed; from that perspective, URLs which are blocked from crawling have no use in a sitemap. Normally Webmaster Tools will generate errors to let you know that there are issues with the sitemap.

If you take it one step further, Google could consider you a bit of a lousy webmaster if you keep these URLs in the sitemap. Not sure if this is the case, but for something which can easily be corrected, I wouldn't take this risk (even if it's a very minor one). There are crawlers (like Screaming Frog) which can generate sitemaps while respecting the directives of robots.txt; in my opinion that would be a better option.

rgds, Dirk
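To make the mismatch concrete, a small hypothetical example of the inconsistent state Dirk describes (domain and paths are placeholders):

```
# robots.txt
User-agent: *
Disallow: /private/
```

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://www.example.com/products/</loc></url>
  <!-- inconsistent: this URL is blocked by the robots.txt above -->
  <url><loc>https://www.example.com/private/report.html</loc></url>
</urlset>
```

Removing the blocked entry clears that kind of sitemap warning and keeps the sitemap an honest crawl hint.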
Intermediate & Advanced SEO | DirkC
Adjusting Display of Sitelinks
Thanks! This is pretty much what I thought, but I figured I would check. Google is using all caps for some words in other sitelinks, so my guess is it comes from anchor text somewhere on the site. Much appreciated!
Intermediate & Advanced SEO | DRSearchEngOpt
Question about Multi-Locale/Lang Sitemaps
Nope, you can just have a regular sitemap with all of the URLs. Then, on the actual pages, declare the locales via the hreflang tag. Overall, you should declare the locale associations on either the pages, the sitemap, or ideally both. The more direction you can give crawlers, the easier it is for them to understand.
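A minimal sketch of the on-page version, using placeholder URLs and an en-US/en-GB pair as the example:

```html
<!-- in the <head> of BOTH locale versions; each page lists every
     variant, including itself, plus a fallback for other visitors -->
<link rel="alternate" hreflang="en-us" href="https://www.example.com/en-us/" />
<link rel="alternate" hreflang="en-gb" href="https://www.example.com/en-gb/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />
```

The annotations must be reciprocal: if the en-US page points at the en-GB page, the en-GB page has to point back, or the pair may be ignored.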
Local Strategy | OlegKorneitchouk
Robots.txt Syntax
My advice is to go easy with robots.txt: it's a bit like dynamite, powerful, but it can take your leg (or your entire website) off. I like this checker: http://tool.motoricerca.info/robots-checker.phtml If you look OK after running that checker, then use the built-in Google one.

Note that the original robots.txt specification does not include wildcards. Google and Bing support * and $ as extensions, but many other crawlers do not, which apparently doesn't stop a ton of people from using wildcard patterns they never bothered to test.

Another reason to avoid Disallow in robots.txt is that if you disallow the engines from looking at a page's contents, then you're ALSO stopping the link juice that might have flowed to the other pages it links to. So let's say you have 100 pages on your site that you're currently blocking with Disallow in robots.txt. If instead you put a meta robots "noindex,follow" on each of those pages, then every page linked to from those 100 pages (i.e. everything in your main menu) would get an extra 100 internal links' worth of link juice.
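For reference, the swap the answer describes, with a placeholder path (the noindex,follow directive is standard, not specific to this thread):

```html
<!-- instead of "Disallow: /archive/" in robots.txt, each page
     under /archive/ carries this tag: stay out of the index,
     but keep crawling and passing equity through the links -->
<meta name="robots" content="noindex,follow">
```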
Intermediate & Advanced SEO | MichaelC-15022
Best way to indicate multiple Lang/Locales for a site in the sitemap
Yes, you are doing the right thing. You may also want to look at including the tags in the <head> of your pages as well.
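For the sitemap side, Google's protocol uses xhtml:link entries; a minimal sketch with placeholder URLs:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <url>
    <loc>https://www.example.com/en-us/</loc>
    <!-- every variant is listed, including the URL itself -->
    <xhtml:link rel="alternate" hreflang="en-us"
                href="https://www.example.com/en-us/"/>
    <xhtml:link rel="alternate" hreflang="en-gb"
                href="https://www.example.com/en-gb/"/>
  </url>
  <!-- a mirror-image <url> block for /en-gb/ would follow -->
</urlset>
```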
Web Design | DRSearchEngOpt
How to optimize for a product by two names
Hi Thomas, this is a really nice question. I found a nice post on Is Google's Synonym Matching Increasing?. There is also a good post, More info about synonyms at Google, by Matt Cutts from Jan 2010.
On-Page / Site Optimization | SanketPatel
Crawl Test has taken over 5 days and still has yet to complete
Hey Thomas, Unfortunately, the issue is that the crawl tests are actually completing but we aren't able to access the reports at this time. Our engineers are working to fix the issue as soon as possible, but I'm afraid we don't have an ETA for when these reports will become available. I'm really sorry about that! I have sent the information for your crawl tests to our engineers so they can look into your account specifically and I am going to create a support ticket for the issue so that I can update you once we have a resolution for this issue. You will receive an email when the ticket is created and you should be able to access the ticket through a link in the email. So sorry for the inconvenience! -Chiaryn
Moz Tools | ChiarynMiranda
.com Outranking my ccTLDs and cannot figure out why
rel=alternate hreflang works for GB vs. US, for example. I don't see it as a problem!
Intermediate & Advanced SEO | DiTomaso
Virtual Domains and Duplicate Content
Sadly, I can't say for sure what the outcome will be, given the mixed signals. There's honestly no way to know in advance what would happen, either short term or over time, as Google attempts to sort it out via other signals and cross-referencing factors.
Intermediate & Advanced SEO | AlanBleiweiss
Non-Canonical Pages still Indexed. Is this normal?
It can take a while. I disagree very slightly with Alan and EGOL on one point: while 301s are traditionally more appropriate here, I often find that canonicals are pretty strong (and more than a hint). Both suffer the same problem, though: the signal has to be crawled and processed, and that doesn't always happen right away. I haven't seen any reports on it taking 2, 3, etc. times to happen, but I've definitely seen a page re-cached without the indexation signals being honored. Are these true duplicates, or did something change in the interim? If the duplicates don't seem like true duplicates, or you put 1000s of them out there all at once, Google could choose to ignore the canonicals. If these really seem stuck, though, switching to 301s is harmless, and for a permanent URL change it is probably the better way to go. I wouldn't expect that to kick in instantly either, though.
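For reference, the canonical option is a single tag on the duplicate page; the URLs here are placeholders:

```html
<!-- in the <head> of the duplicate, pointing at the preferred URL -->
<link rel="canonical" href="https://www.example.com/preferred-page/" />
```

The 301 alternative removes the duplicate URL outright by redirecting it to the preferred one at the server level, which is why it's the stronger signal for a permanent move.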
Technical SEO Issues | Dr-Pete