Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • Thanks Martijn, we'll look into the blog post and implement it. Dave

    | Daaveey
    0

  • Hi there, regarding the X-Robots-Tag: we have had a couple of sites that were disallowed in robots.txt still have their PDF, DOC, etc. files indexed. I understand the reasoning for this. I would like to remove the disallow in robots.txt and use the X-Robots-Tag to noindex all pages as well as the PDF, DOC, etc. files. This is for an nginx configuration. Does anyone know what the written X-Robots-Tag directive would look like in this case?
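
    For context, a minimal sketch of the kind of directive in question (domain and file extensions are hypothetical; it assumes a standard nginx server block and that the robots.txt disallow has been removed so crawlers can actually fetch the URLs and see the header):

        server {
            listen 80;
            server_name example.com;   # hypothetical domain

            # Option 1: send a noindex header with every response this server block serves
            add_header X-Robots-Tag "noindex, nofollow" always;

            # Option 2: target only document files (note that an add_header inside
            # a location block replaces any add_header inherited from the server level)
            location ~* \.(pdf|docx?|xlsx?)$ {
                add_header X-Robots-Tag "noindex, nofollow" always;
            }
        }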

    | Bobbi_Tschumper
    1

  • Great news, glad you could solve it. Thanks for the update, David

    | R0bin_L0rd
    0

  • OK, that's good to know. We do inadvertently have a lot of our pics on GI, so I was obviously doing something right all these years. Thanks

    | robandsarahgillespie
    0

  • Thank you!!!! I've printed out your responses and am applying your suggestions to my 2018 strategy. I've been using similar methods, but this really breaks it down and gives me what I need to make an actual organized game plan going forward.

    | LindsayE
    1

  • Hi, assuming that each page you are referring to has unique content, meta tags in place, etc. - in short, complies with SEO best practices - the answer to that difference might lie in the "ecosystem of its content and queries". If one page answers a query and is indexed quickly, that is no guarantee that another page with different content will have the same result. We might say it depends on the topic: if the content's value/answer has already been fully addressed by other webpages, the algorithms might not take an interest soon. RankBrain plays a role in this. Good luck! Mª Verónica

    | VeroBrain
    1

  • Thank you, that confirms my thinking.

    | seoanalytics
    0

  • The regex in your RedirectMatch doesn't say what you think it says, Jes. Note the (.*) at the end of the expression /category/Sample-Category(.*): it doesn't actually say "match the URL that is specifically /category/Sample-Category". That .* is a wildcard that means "and any other additional characters that might occur here". So what it's really saying is "match the URL /category/Sample-Category as well as any URLs that have additional characters after the letter 'y' in Category", which is what is catching your -1 variation of the URL (or the -size-30 in your second example). In addition, that wildcard has been captured as a variable (the fact it's in brackets), which you are then attempting to append to the end of the new URL (with the $1), which I don't think is your intent. Instead, try: RedirectMatch 301 /category/Sample-Category https://OurDomain.com.au/New-Page/ You should get the redirect you're looking for, and it won't interfere with the other ones you wish to write. Let me know if that solves the issue, or if I've misunderstood why you were trying to include the wildcard variable. Paul P.S. You'll need to be very specific about whether the origin and target URLs use trailing slashes - I just replicated the examples you provided.
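
    As a rough sketch of the difference (Apache mod_alias assumed, URLs taken from the example; anchoring the pattern with ^ and $ makes the exact-match intent explicit):

        # Original rule - the trailing (.*) wildcard also matches
        # /category/Sample-Category-1, /category/Sample-Category-size-30, etc.,
        # and $1 appends whatever it captured to the target URL:
        # RedirectMatch 301 /category/Sample-Category(.*) https://OurDomain.com.au/New-Page/$1

        # Exact-match version (optional trailing slash), sketch only:
        RedirectMatch 301 ^/category/Sample-Category/?$ https://OurDomain.com.au/New-Page/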

    | ThompsonPaul
    0

  • Fortunately, I do have full control of the server and backups. But I should never have agreed to allowing the modification of plugins. At the time I did not understand the implications. Would it help a new coder if the previous developer provided a detailed description of the modified plugins? What if I agreed to pay the old developer to act as a consultant to a new developer? My developer believes he is in the driver's seat and is charging me 3-5x what is reasonable and fair. The result is that I can't afford to make any meaningful improvements to the site.

    | Kingalan1
    0

  • Hi Brooks, Thanks for the input. It is great to know that it is also working in your ecosystem.

    | VeroBrain
    2

  • Hey Natty, Not sure how much organic traffic you consider to be "not much", but either way, there are a few things you may want to consider: 1. Create content for each category page that is unique to it (e.g. create a category description for "Body Moisturizers"). This will help create relevancy and reduce your overall "thin" and "duplicate" content throughout the site. 2. While it appears that the page is in Google's main index (as found here), I'd recommend adding links on the home page that deep link to the categories you're wanting to gain more traction with. Doing this will help reduce the crawl depth of the page and help distribute some of the authority that the home page has directly to those sub-pages. You could do this in a "categorical module" (e.g. create a visual box for a category such as "Skin Care") which has a sentence or two of description with links to the sub-categories you really want to bring further up - max 5 links with a "view all skin care products" link at the bottom of it. 3. You'll notice in the site-search link from above that you have many pages indexed for the single page we searched for. This represents a duplicate content issue. Checking the paginated results pages (which is what those extra results are), you'll notice the URL canonicals to itself (which is fine if the content is truly unique, but in this case, it doesn't appear to be). You'll also notice that the canonical references conflict with the hreflang references. In any event, just some thoughts. Hope this helps! Cheers.
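
    To illustrate the canonical/hreflang point, a hypothetical head snippet for a paginated category URL where the two sets of tags reference the same URL instead of contradicting each other (all URLs are made up):

        <!-- On https://example.com/skin-care/?page=2 -->
        <link rel="canonical" href="https://example.com/skin-care/?page=2" />
        <!-- hreflang entries should point at the page itself and its language alternates,
             not at a different URL than the canonical declares -->
        <link rel="alternate" hreflang="en-us" href="https://example.com/skin-care/?page=2" />
        <link rel="alternate" hreflang="en-gb" href="https://example.co.uk/skin-care/?page=2" />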

    | regal_kyle
    0

  • Dear Roman, thank you. Can we discuss a number of things? You will find my email ID on the Contact Us page.

    | sureshworks
    0

  • For suggestion 1, I should clarify that you already are using Microdata. Your Microdata is repeating what is already in the page, rather than "tagging" your existing content inline. Microdata is a good tool to use if you are able to tag pieces of content as you are communicating them to a human reader; it should follow the natural flow of what you are writing to be read by humans. This guide walks you through how Microdata can be implemented inline with your content, and it's worth reading through to see what's available and how to step forward with manual implementation of Schema.org with confidence. Will these solutions remove the duplicate H1 tag? Whatever CMS or system you are using to produce the hidden Microdata markup needs to be changed to remove its attempt entirely. The markup of the content itself is good, but it needs to be combined with existing content or implemented with JSON-LD so that it is not duplicating the HTML you are showing the user. Are these options relatively simple for an experienced developer? Both should be, but it depends on your strategy. Are you hand-rolling your Schema.org markup? Is somebody going into your content and wrapping the appropriate content with the correct Microdata? This can be a pain in the butt and time-consuming, especially if they're not tightly embedded with your content production team. I downloaded the HTML and reviewed the Microdata implementation. I don't mean to sound unkind, but it looks like computer-generated HTML, and it's pretty difficult to read and manipulate without getting the tag matching wrong. Is one option superior to the other? Google can read either without issue; they recommend JSON-LD (source). In your case, I'd also recommend JSON-LD because: your investment in Microdata is not very heavy and appears easy enough to unwind; the content you want to show users isn't exactly inline with the content you want read by crawlers anyway (for example, your address isn't on the page and visible to readers); and it's simple enough to write by hand, and there exist myriad options to embed programmatically generated Schema.org content in JSON-LD format. Please review this snippet comparing a Microdata solution and a JSON-LD solution side by side. PLEASE DO NOT COPY AND PASTE THIS INTO YOUR SITE. It is meant for educational and demonstrative purposes only. There are comments inline that should explain what's going on: https://gist.github.com/TheDahv/dc38b0c310db7f27571c73110340e4ef
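
    As a tiny illustration of the JSON-LD form (business details here are invented; the linked gist shows a fuller side-by-side comparison with Microdata):

        <script type="application/ld+json">
        {
          "@context": "https://schema.org",
          "@type": "LocalBusiness",
          "name": "Example Business",
          "url": "https://example.com/",
          "telephone": "+1-555-555-0100",
          "address": {
            "@type": "PostalAddress",
            "streetAddress": "123 Example Street",
            "addressLocality": "Example City",
            "addressRegion": "WA",
            "postalCode": "98101"
          }
        }
        </script>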

    | TheDahv
    0

  • Hi Charles, I strongly recommend following Miriam Ellis' article. Written for SEO practitioners, it highlights the most common mistakes that prevent success. Also, I suggest including your properties (all the variations of your domain) in Google Search Console, if you have not done so already. Bullet points for a local SEO business: NAP (name, address, phone - copy and paste it to avoid mistakes), Google Search Console, Google Analytics, Google My Business, Google Maps. Persistence is the key to SEO success, so do not be dismayed. Good luck! Mª Verónica not-actually-the-best-local-seo-practices

    | VeroBrain
    1

  • The WP migration plugins I'm referring to do a rewrite of the URLs in the database. And yes, this is critical to a solid migration, instead of using redirects. There are a number of WP tools for this. My preferred tool is BackupBuddy (paid - 40% off this month) as it does an excellent job of the migration and is then a top-notch tool for managing the ongoing backing up of the site, as well as helping create a staging version of the site for future dev and maintenance purposes. I've also used the free Duplicator plugin for one-off migrations, and have used UpdraftPlus on occasion as well. The majority of the work is in tuning up the site after migration, and yes, making sure all the related functionality and tools have been updated as well. My timeline would look something like this: create addon domain in hosting cPanel for new domain and enable AutoSSL certificate - 15 mins; use migration plugin to move site to new domain - 1 to 1.5 hours depending on experience; run quality assurance testing to ensure all of the site and its functionality is running properly under the new domain and HTTPS, including updating the CDN and testing forms - 1-2 hours; review and update 3rd-party tools and off-site profiles - 2 hrs; implement final DNS changes and redirection of old domain to new, add change of address in Google Search Console - 0.5 hr; miscellaneous, including setting up backup protocol for new domain - 1 hr. (And don't forget 3-4 hours of careful monitoring and follow-up for any errors over the following 4-6 weeks after migration, plus earning new links to the new domain and getting existing links replaced with new ones to the new domain where possible.) For a total of about 6 or 7 hours for the migration work itself. You're right, a clearly laid out and well-prioritised project plan for this kind of migration is absolutely essential. You need to know exactly what's going to be done, and in what order, so you can ensure all necessary steps are taken. To be blunt, many devs (even really good ones) don't take into account the extra details necessary in migrations like these that an experienced SEO pays attention to. Having all the images on the Amazon CDN actually simplifies the migration somewhat, as those images will not have to be moved during the changeover - just have the CDN adjusted instead. The SSL should absolutely be installed on the new domain before migration - otherwise, you are going to add a lot of wasted time and complexity rewriting the database URLs a second time after the domain name change to update them to HTTPS. Paul
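
    Not part of the plugin workflow described above, but if command-line access is available, WP-CLI's search-replace command performs the same database URL rewrite (domains below are hypothetical):

        # Run from the WordPress root on the new host; preview first with --dry-run
        wp search-replace 'https://old-domain.com' 'https://new-domain.com' --all-tables --dry-run

        # Apply the replacement (handles serialized data, unlike a raw SQL find/replace)
        wp search-replace 'https://old-domain.com' 'https://new-domain.com' --all-tables

        # Refresh permalinks afterwards
        wp rewrite flush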

    | ThompsonPaul
    0

  • That's not going to solve your problem, vikasnwu. Your immediate issue is that you have URLs in the index that are HTTPS and will cause searchers who click on them not to reach your site due to the security error warnings. The only way to fix that quickly is to get the SSL certificate and redirect to HTTP in place. You've sent the search engines a number of very conflicting signals. Waiting while they try to work out what URLs they're supposed to use and then waiting while they reindex them is likely to cause significant traffic issues and ongoing ranking harm before the SEs figure it out for themselves. The whole point of what I recommended is it doesn't depend on the SEs figuring anything out - you will have provided directives that force them to do what you need. Paul
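
    A rough sketch of that redirect for an Apache/.htaccess setup (the server type and domain are assumptions; the SSL certificate still needs to be installed first so the HTTPS request can complete without a browser warning before the redirect fires):

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]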

    | ThompsonPaul
    1

  • Hi, After 2 months my question still stands. In Google Webmaster Tools the number of indexed pages is already down to 0. However, the pages keep showing up after 2 months when using the "site:www." search. Is this normal? Hope you can help.

    | conversal
    0

  • Hi Kingalan, This tutorial might help you. It worked smoothly for me. Hope it works for you too. Good luck! Mª Verónica PS: In case you are using the Yoast plugin, there is an option in the setup to display the dates or not. watch?v=NQ4ygSA4aP0

    | VeroBrain
    1

  • Hello! @ThompsonPaul is correct, this is a kind of rich snippet. Schema.org has some documented microdata markup around nutrition information. The item type is NutritionInformation, and you can utilize properties like "calories," "fatContent," or "caffeineContent."
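
    A minimal sketch of that markup in microdata form (the recipe and the nutrition values are invented):

        <div itemscope itemtype="https://schema.org/Recipe">
          <h2 itemprop="name">Example Cold Brew Recipe</h2>
          <div itemprop="nutrition" itemscope itemtype="https://schema.org/NutritionInformation">
            <span itemprop="calories">15 calories</span>,
            <span itemprop="fatContent">0 grams fat</span>,
            <span itemprop="caffeineContent">200 mg caffeine</span>
          </div>
        </div>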

    | brooksmanley
    1