Latest Questions
Have an SEO question? Search our Q&A forum for an answer; if not found, use your Moz Pro subscription to ask our incredible community of SEOs for help!
-
Google Fetch and Render - Partial result (resources temporarily unavailable)
Can anyone suggest any answers, or has anyone had similar issues? I continue to monitor the site via Fetch and Render and the issues remain the same: lots of images, CSS and JS files show as 'Temporarily Unreachable', yet they do exist and the links can be clicked on. The website functions fine otherwise. As I say, I have changed website hosts and it is still the same. This is really affecting my rankings, and if anyone has any clues I would be most grateful. Many thanks, Dan
Alternative Search Sources | | dan_550 -
Massive Spam attack against my domain - automate disvow of tld?
Annoyingly, the disavow tool does not support complex matching. If you're after wildcard or regex matching for your disavow uploads, that's something you won't be able to get your hands on. It's a shame, because coordinated network link-bombardment really has no simple one-click solution for webmasters right now (that's pretty poor!). You'd have to build something more complex which connects to the API of the tool that detects all of these links. It would have to have its own database and be programmatically capable of updating that database. You'd need it to filter out all of the domains which don't match your pattern (and come up with regex / SQL queries for matching that exact pattern in a robust, reliable manner). It would have to de-dupe existing and new entries and then generate a text file for you. It would also have to be capable of comparing the file it generates against your existing file, so it doesn't lose your manual-mode disavows. To me, it sounds like a lot of trouble to go to. I'd make a post about it on Google's forum here: https://productforums.google.com/forum/#!forum/webmasters - try to attract the attention of someone from Google and let them know that these kinds of attacks do happen and that you want the Disavow Tool (as a Google product) to properly allow people to defend themselves.
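The filter-dedupe-export part of the workflow described above can be sketched in a few lines. This is only an illustration: the domain names and the `.xyz` pattern are hypothetical, and a real tool would pull `detected_domains` from the link-detection API rather than a hard-coded list.

```python
import re

def build_disavow(detected_domains, pattern, existing_lines):
    """Filter detected spam domains by a regex pattern, merge them with
    the lines of an existing disavow file, and return a de-duplicated,
    sorted list ready to write out as a new disavow upload."""
    rx = re.compile(pattern)
    # Keep only domains matching the spam-network pattern.
    matched = {d.lower() for d in detected_domains if rx.search(d)}
    new_entries = {f"domain:{d}" for d in matched}
    # Preserve manual entries already in the uploaded file, then de-dupe.
    return sorted(set(existing_lines) | new_entries)

# Hypothetical inputs for illustration only.
detected = ["spam-seo.xyz", "cheap-links.xyz", "legit-partner.com"]
existing = ["domain:old-spam.info"]
lines = build_disavow(detected, r"\.xyz$", existing)
print("\n".join(lines))
```

Note that the legitimate domain is filtered out because it does not match the pattern, and the pre-existing manual disavow line survives the merge.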
White Hat / Black Hat SEO | | effectdigital0 -
Buying the MozCon Local 2018 video bundle
Hi Jonathan, I think there wasn't a MozCon Local this year - only the main event, MozCon 2018. I believe you have already found that, but anyway: The MozCon 2018 Video Bundle. Hope it helps. Best of luck. GR
Other Research Tools | | GastonRiera0 -
Changing URLs During a Site Redesign
Hi Jennifer, Assuming that you tackle everything needed in a migration (such as proper 301 redirects, new sitemaps and all that), here is what you should expect from an organic traffic perspective: rankings will vary for some days while Google is making sense of the new site (hopefully it won't take longer than a month); visits could be down during the time that rankings are fluctuating; and Google will show both sites in the SERPs for a while. Keep in mind that this is a migration like any other, so these resources could come in handy: The Website Migration Guide: SEO Strategy, Process, & Checklist - Moz Blog; The Ultimate SEO Guide for Successful Web Migrations at #DigitalOlympus - AleydaSolis; Migration Best Practices - SMX London 2018 <- backed up by JohnMu in this tweet. Hope it helps. Best of luck. GR
Intermediate & Advanced SEO | | GastonRiera0 -
Importance of Spam Score
Hey there! We'd recommend checking out this resource page to learn about Spam Score and its applications: https://moz.com/help/guides/link-explorer/spam-score As for why you might not have a Spam Score yet: it is determined by a separate crawler from the one that powers Link Explorer, and it updates roughly once a month. Because of this, it is entirely possible to see backlinks and Domain Authority but still have no calculated Spam Score. As we continue to crawl and calculate sites, this will become less of an issue, but unfortunately at this time we will need to wait for the Spam Score user-agent to find your domain. Hope that helps!
Other Research Tools | | moz_support1 -
How to handle no ad pages or no search result pages for a classifieds website?
Pages that do not have ads (and on a classifieds site, the ads effectively are the content) are considered low-value (thin content) pages, and those pages also consume part of your crawl budget. So let me explain my point. So what is thin content? Thin content is content that has little or no value to the user. Google considers doorway pages, low-quality affiliate pages, or simply pages with very little or no content as thin content pages. But don't fall into the trap of just producing loads of very similar content: non-original pages, and pages with scraped or duplicate content, are considered thin content pages too. On top of that, Google doesn't like pages that are stuffed with keywords either. Google has gotten smarter and has learned to distinguish between valuable and low-quality content, especially since Google Panda. So in your case, the pages with no ads meet the criteria of **content that has little or no value to the user**. First, let's assume one thing: there is no manual for how to deal with your case. If you have a problem with thin content on your site, you can take steps to beef up its quality and meet your target audience's needs. Consider some of the most commonly recommended SEO best practices for improving thin content: create content for those pages inviting visitors to register on your directory website; keep those pages away from Google until they have content; remove unnecessary pages (instead of letting a poorly performing page impact your SEO goals, take it down); and eliminate all duplicate and irrelevant pages. Scaling back and refocusing on quality can help brands identify and invest in more value-added topics. Thin content is a barrier to SEO success, but you can easily remedy this incredibly common problem with a focus on quality over quantity. Before you publish any new content to your site, ask these questions: Does the content align with the chosen topic? Have we presented the information in an original way?
Will it add value to our users? Second, focus on your taxonomies. Implementing categories and tags on your website is an important way to add structure to it. These taxonomies group content on a certain topic, and when used properly, Google will understand the structure of your site better. Categories have a hierarchical structure: there can be subcategories within categories. Tags do not have a hierarchical structure. Think of it like this: categories are the table of contents of your website, and tags are the index. Your category archives are more important than individual pages and posts. If you sell boxers and you optimize every product page, all those pages will compete for the term 'boxers'. You should optimize them for their specific brand and model, and link them all to the 'boxers' category page. That way the category page can rank for 'boxers', while the product pages can rank for more specific terms. This way, the category page prevents the individual pages from competing with each other. In your case, if you have a directory website and you promote dentists in Austin, you should focus on that category page rather than on every single ad for each dentist.
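The "keep those pages away from Google until they have content" step above can be sketched as a simple rule that chooses a robots meta value per category page. The thresholds here are purely illustrative assumptions, not official Google numbers.

```python
def page_robots_directive(listing_count, word_count,
                          min_listings=1, min_words=150):
    """Return a robots meta value for a classifieds category page.

    Pages with no live ads and little supporting copy are treated as
    thin content and kept out of the index until they have substance.
    Thresholds are illustrative placeholders, not official numbers.
    """
    if listing_count >= min_listings or word_count >= min_words:
        return "index,follow"
    # Thin page: keep it crawlable for link discovery but out of the index.
    return "noindex,follow"

print(page_robots_directive(listing_count=0, word_count=40))    # empty category
print(page_robots_directive(listing_count=12, word_count=300))  # healthy category
```

The value returned would be rendered into the page's `<meta name="robots" content="...">` tag by your templates, so a category flips back to indexable automatically as soon as it gains listings or editorial copy.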
Local Listings | | Roman-Delcarmen0 -
No Control Over Subdomains - What Will the Effect Be?
Thank you for that reply. As you mentioned, my struggle is more with that task, and with building a unified mission across all subdomains in an industry where only a handful of institutions can make that happen. There was lots of good information to take from this. Thank you.
Intermediate & Advanced SEO | | Jeff_Bender0 -
Mobile first - what about content that you don't want to display on mobile?
Roman has covered most of the bases with his answer, so I won't retread old ground! But one thing I will note - my understanding is that with mobile-first indexing, content which is default-collapsed (to minimize clutter) won't be discounted. So if there is content you want to have on the site but the long-form nature is making the mobile experience feel cluttered, consider including it in expandable accordion style sections or similar. I would not recommend leaving it out altogether as Googlebot may no longer crawl your desktop site at all and all that content you add to the desktop site only won't give you any benefit.
Technical SEO Issues | | bridget.randolph0 -
How to get readers to engage with content
Hi, this is something that we have actively been tackling recently. It's hard to tell what kind of content you're making from your post, but as a wholesale product company, we sell items to shops. A lot of our wholesale content centres around shop layouts or displays. It's something that independent shopkeepers have told us they like to see, so we've spent a lot of time on it. We've found that the best way to encourage readers to become contributors is to start by approaching some of our more outgoing clients and asking them to provide content. We then **heavily** credit them, link to them, thank them and quote them throughout the piece. Before you know it, the shopkeepers that read that content want the same thing for themselves and are much more likely to engage with the contact form. This content works well for lead generation because it means that we can show shop owners contextual examples of our products looking good. Just a small example with no quantitative data, but I hope it helps in some way. Ross
Content & Blogging | | MSGroup2 -
Is it best practice to have a canonical tag on all pages
ColesNathan, Have you seen what Google has to say about canonicals? https://support.google.com/webmasters/answer/139066?hl=en You might find it helpful. They list reasons why you might want to use a canonical tag, including those identified above and a few others, for example, letting Google know your priorities when it comes to crawl budget and SERP display. Canonicals can also help counter plagiarism: if scrapers leave your self-referencing canonical intact, it will tell Google you are the originator of that content and consolidate link signals into your URL.
Intermediate & Advanced SEO | | DonnaDuncan0 -
The Globe - Spam Link Network. Should action be taken to remove these links?
We don't believe that these links are harmful, and we don't do anything about them. We believe that Google knows these domains are crap and ignores them when determining rankings - although some of them might still appear in Search Console.
Online Marketing Tools | | EGOL1 -
Same URL names in one domain
This is a technical question that needs to be tackled on the database side. It can be implemented, but it needs a few extra development hours, depending on the complexity of your website architecture, the CMS used, etc. Anyway, since you are changing the URLs, don't forget about the best practices for them. Good luck!
Technical SEO Issues | | Keszi0 -
Canonical tag on a large site
Hi Cristiana, Answering your question: a canonical tag is not optional for a site; it is practically a mandatory requirement. The canonical tag is directly related to duplicate content issues. From a technical standpoint, you'll need to understand how duplicate content can unintentionally be added to a site. Many times, it's simply a canonicalization issue. For example, homepage canonicalization causes most duplicate content issues on sites: search crawlers might be able to reach your homepage in all of the following ways: https://yoursite.com, https://www.yoursite.com, http://yoursite.com, http://www.yoursite.com, and http://www.yoursite.com/index.php. Just to give you an example, in Google Search Console you need to verify these versions of a single domain for a single property: http://www.yoursite.com, https://www.yoursite.com, http://yoursite.com and https://yoursite.com. Google will see each URL as a different page - and it won't know which one you prefer to send users to. The problem can get exponentially worse if it exists on every page of your site. The easiest way to solve the problem is with a server-side redirect that sets one of those URLs as the "official" version of the page and only serves that version, regardless of which URL was requested. You can also use the rel canonical tag - a directive that's inserted in the header of the page. It looks something like this: <link rel="canonical" href="https://www.yoursite.com/" />. When you're starting SEO on a new site, you'll want to check out all of the canonicalization that's been declared, so that you have a solid understanding of what's going on with the site content.
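The consolidation described above - mapping http/https, www/non-www and /index.php variants onto one preferred URL - can be sketched as a small normalization function. The host name is a placeholder; in practice this logic would live in your server's 301 redirect rules rather than application code.

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url, host="www.yoursite.com"):
    """Normalize the common homepage variants to one canonical URL.

    Maps http/https, www/non-www and /index.php onto a single
    https://www version - the same consolidation a server-side 301
    redirect or a rel=canonical tag would declare. The host is a
    placeholder for illustration."""
    parts = urlsplit(url)
    path = parts.path
    # Treat the bare host and the index script as the homepage.
    if path in ("", "/index.php"):
        path = "/"
    return urlunsplit(("https", host, path, parts.query, ""))

variants = [
    "http://yoursite.com",
    "http://www.yoursite.com/index.php",
    "https://yoursite.com/",
]
for u in variants:
    print(canonicalize(u))  # every variant maps to https://www.yoursite.com/
```

All five duplicate homepage URLs from the answer above collapse to a single address, which is exactly what the "official version" redirect is meant to achieve.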
Intermediate & Advanced SEO | | Roman-Delcarmen0 -
Preventing multiple market domains from appearing in the local search results
Hi Cristiana, It sounds like you need to implement hreflang on your sites. Hreflang can be complicated but you can start reading about it and how to implement it here: https://moz.com/learn/seo/hreflang-tag Cheers, David
Local Website Optimization | | davebuts0 -
How should one approach pagination on a website
Hi TheTelescope, Your best bet is to mark up your category/news pages with rel="next" and rel="prev" pagination tags. This will indicate to Google that your pages are part of a series and help you avoid duplicate content. On /news page 1, only use the rel="next" link; on /news page 2, use both rel="next" and rel="prev"; on the last /news page, only use rel="prev" to indicate the end of the series. The links will look something like this (with your own URLs): <link rel="prev" href="https://www.yoursite.com/news"> and <link rel="next" href="https://www.yoursite.com/news?page=3">. You can find more info direct from Google's documentation. Hope this helps, Tim
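The first/middle/last-page rules above are easy to get wrong by hand, so here is a minimal sketch that generates the correct set of tags for any page in a series. The base URL and `?page=` parameter are illustrative assumptions; adapt them to your own URL scheme.

```python
def pagination_links(base, page, last_page):
    """Build the rel=prev/next link tags for page `page` of a series.

    First page gets only rel=next, middle pages get both tags, and
    the last page gets only rel=prev. `base` is a placeholder URL and
    the `?page=` parameter is an assumed URL scheme."""
    tags = []
    if page > 1:
        # Page 1 is the bare /news URL, so page 2's prev points there.
        prev_url = base if page == 2 else f"{base}?page={page - 1}"
        tags.append(f'<link rel="prev" href="{prev_url}">')
    if page < last_page:
        tags.append(f'<link rel="next" href="{base}?page={page + 1}">')
    return tags

for tag in pagination_links("https://www.yoursite.com/news", 2, 5):
    print(tag)
```

Page 2 of 5 emits both tags, matching the middle-page rule in the answer; calling it with `page=1` or `page=last_page` emits only the single appropriate tag.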
On-Page / Site Optimization | | TimHolmes1