Latest Questions
Have an SEO question? Search our Q&A forum for an answer; if not found, use your Moz Pro subscription to ask our incredible community of SEOs for help!
-
Htaccess - Redirecting TAG or Category pages
The regex in your RedirectMatch doesn't say what you think it says, Jes. This part of the expression, the (.*) in /category/Sample-Category(.*), doesn't actually say "match the URL that is specifically /category/Sample-Category". That (.*) is a wildcard that means "plus any other additional characters that might occur here". So what the rule is saying is "match the URL /category/Sample-Category as well as any URLs that have additional characters after the letter 'y' in Category", which is what is catching your -1 variation of the URL (or the -size-30 in your second example). In addition, that wildcard has been captured as a variable (the fact it's in brackets), which you are then attempting to insert at the end of the new URL (with the $1), which I don't think is your intent. Instead, try: RedirectMatch 301 ^/category/Sample-Category$ https://OurDomain.com.au/New-Page/ (the ^ and $ anchor the pattern so it matches only that exact path). You should get the redirect you're looking for, and it won't interfere with the other redirects you wish to write. Let me know if that solves the issue, or if I've misunderstood why you were trying to include the wildcard variable? Paul P.S. You'll need to be very specific about whether the origin or target URLs use trailing slashes; I just replicated the examples you provided.
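If you want to see the difference between the wildcard pattern and an anchored one outside of Apache, here's a minimal Python sketch of the two regexes. The example URLs come from the question; the pattern names are mine:

```python
import re

# Hypothetical patterns mirroring the two .htaccess rules discussed above.
loose = re.compile(r"/category/Sample-Category(.*)")   # original rule, with the wildcard
exact = re.compile(r"^/category/Sample-Category$")     # anchored replacement

for url in ["/category/Sample-Category",
            "/category/Sample-Category-1",
            "/category/Sample-Category-size-30"]:
    print(url, "loose:", bool(loose.match(url)), "exact:", bool(exact.match(url)))
```

Running it shows the loose pattern matching all three URLs while the anchored pattern matches only the exact path, which is the behavior the redirect needs.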
Intermediate & Advanced SEO | | ThompsonPaul0 -
Hostage Taking by My Wordpress Developer
Fortunately, I do have full control of the server and backups. But I should never have agreed to allowing the modification of plugins. At the time I did not understand the implications. Would it help a new coder if the previous developer provided a detailed description of the modified plugins? What if I agreed to pay the old developer to act as a consultant to a new developer? My developer believes he is in the driver's seat and is charging me 3-5x what is reasonable and fair. The result is that I can't afford to make any meaningful improvements to the site.
Intermediate & Advanced SEO | | Kingalan10 -
Crawl and Indexation Error - Googlebot can't/doesn't access specific folders on microsites
Hello ImpericMedia, If you can share the site with me (private message is OK) I'll look into it. If you don't want to do that, here are some things I would look at: 1. If you have verified that the Robots.txt file is not blocking the pages you want indexed, and the pages are still not indexed (or indexed with a message about the Robots.txt file), you should check for a Robots Noindex meta tag on the page. If the source code looks strange you may have to use the Chrome Inspect tool to see the fully rendered page. 2. If there are no blocking robots meta tags on the page, you should check the HTTP response for an X-Robots-Tag header. 3. If there is no X-Robots-Tag header, it's probably because of the duplicate content and spammy-seeming subdomain setup. Sorry about the wait. If you include the site URL it will get other community members curious enough to check it out next time. I hope this helps. If not, feel free to message me.
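Steps 1 and 2 can be checked programmatically once you have the page HTML and response headers in hand. Here's a minimal Python sketch using only the standard library (the function names are mine, not from any Moz tool, and real HTTP header lookups should be case-insensitive):

```python
from html.parser import HTMLParser

class RobotsMetaFinder(HTMLParser):
    """Collects the content of any <meta name="robots"> tags in page HTML."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", "").lower())

def is_noindexed(html, headers):
    """True if the meta robots tag (step 1) or the X-Robots-Tag header (step 2)
    contains a noindex directive."""
    finder = RobotsMetaFinder()
    finder.feed(html)
    if any("noindex" in d for d in finder.directives):
        return True
    return "noindex" in headers.get("X-Robots-Tag", "").lower()

page = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
print(is_noindexed(page, {}))
```

Remember that, as noted above, a page rendered by JavaScript may inject meta tags that aren't in the raw source, so check the fully rendered HTML.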
Intermediate & Advanced SEO | | Everett0 -
How do you deal with Scam-Type SEO businesses?
I imagine that eventually he will be caught, right? He might get whacked, especially if the sites have verbatim text on every page. He might get away with it for a long, long time if each site has totally unique text and he isn't interlinking these sites very heavily, or is interlinking them with nofollow. Lots of people are able to link between their own properties if they are using the biz name as the anchor text instead of a money keyword. (I know that this answer wasn't satisfying, but that's my opinion on how diverse the ways are that Google handles these things.) I mean, this has to be Black Hat SEO. Ignorant bliss has a large overlap with black hat in appearance, although the intent is very different. It's blackhat if these sites are identical except for the name of the community and the seller of the service knows that this is far below best practice... and even worse if they are being juiced from a private blog network or paid links from ignorant webmasters, or from those who know paid links are not good but sell anyway. Have any of you encountered an SEO/Marketer like this? This stuff is everywhere. Sometimes they grab your content and use it in New Jersey. If so, what do you do about it? Beat their asses with quality.
White Hat / Black Hat SEO | | EGOL0 -
What are the technical details (touchpoints) of a website gathered by Google?
Hello, Some technical factors: Internal linking structure Architecture and crawlability HTTPS (HTTP Secure) Existence of a meta description Site speed Keywords included in the domain Use of Flash A .com domain My two cents; I'm sure there's much more out there. Hope this helps!! If you like the answer, don't forget to select it as BEST ANSWER Roberto
Search Engine Trends | | AgenciaSEO.eu0 -
Moz got problem crawling SquareSpace websites?
Hey there! Tawny from Moz's Help Team here! It looks like your Moz Pro Campaign's Site Crawl for that site came back with results — I see data waiting for you in your Campaign. Where did we report that we couldn't crawl your site? I'd love to help, if I can! Feel free to drop us a line at help@moz.com and we'll do our best to sort everything out!
API | | tawnycase0 -
Site crawl only shows homepage
Hi SEOchris, Thanks for your answer. I checked the robots.txt file and changed the User Agent to Googlebot in Screaming Frog, but none of these gave new insights. For now, we don't have access yet to the server log files, but when we do, hopefully they will tell us more.
Other Research Tools | | WeAreDigital_BE0 -
Old url is still indexed
Hello, I agree with Agencia SEO, The 301 redirect should help take care of the problem. It does take a little time for it to kick in, but it will help with all the search engines and not just Google. Best Regards
Technical SEO Issues | | Dalessi0 -
What does google think about legit link exchanges where one is follow and one is no follow?
Hi Ruchy, I think Google will understand these legitimate scenarios, but that cannot be guaranteed all the time. We have more than 30 partners. We have given all of them "nofollow" links from our website and they link back to us with followed links. Our DA is high and all of them have low DA. This has been happening for years, with partners getting deleted and new ones being added. We never experienced any algorithmic or manual penalties. If both pages have relevant context in terms of content, the link must be legit. Thanks
Link Building | | vtmoz0 -
H1 and Schema Codes Set Up Correctly?
For suggestion 1, I should clarify that you already are using Microdata. Your Microdata is repeating what is already in the page, rather than "tagging" your existing content inline. Microdata is a good tool to use if you are able to tag pieces of content as you are communicating it to a human reader; it should follow the natural flow of what you are writing to be read by humans. This guide walks you through how Microdata can be implemented inline with your content, and it's worth reading through to see what's available and how to step forward with manual implementation of Schema.org with confidence. Will these solutions remove the duplicate H1 tag? Whatever CMS or system you are using to produce the hidden microdata markup needs to be changed to remove its attempt entirely. The markup of the content itself is good, but it needs to be combined in with existing content or implemented with JSON+LD so that it is not duplicating the HTML you are showing the user. Are these options relatively simple for an experienced developer? Is one option superior to the other? Both should be, but it depends on your strategy. Are you hand-rolling your schema.org markup? Is somebody going into your content and wrapping the appropriate content with the correct microdata? This can be a pain in the butt and time-consuming, especially if they're not tightly embedded with your content production team. I downloaded the HTML and reviewed the Microdata implementation. I don't mean to sound unkind but it looks like computer-generated HTML and it's pretty difficult to read and manipulate without matching tags properly. Is one option superior to the other? Google can read either without issue; they recommend JSON+LD (source). 
In your case, I'd also recommend JSON+LD because: Your investment in Microdata is not very heavy and appears easy enough to unwind The content you want to show users isn't exactly inline with the content you want read by crawlers anyway (for example, your address isn't on the page and visible to readers) It's simple enough to write by hand, and there exist myriad options to embed programmatically-generated schema.org content in JSON+LD format Please review this snippet comparing a Microdata solution and a JSON+LD solution side by side. PLEASE DO NOT COPY AND PASTE THIS INTO YOUR SITE. It is meant for educational and demonstrative purposes only. There are comments inline that should explain what's going on: https://gist.github.com/TheDahv/dc38b0c310db7f27571c73110340e4ef
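To give a flavor of the hand-written JSON+LD route, here's a minimal Python sketch that generates a script tag from a plain dictionary. All the business details below are placeholders for illustration, not values from your site:

```python
import json

# Placeholder business data; substitute your real name, URL, and address.
org = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Business",
    "url": "https://www.example.com/",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example St",
        "addressLocality": "Exampletown",
    },
}

# Wrap the serialized data in the script tag crawlers look for.
snippet = '<script type="application/ld+json">\n%s\n</script>' % json.dumps(org, indent=2)
print(snippet)
```

Because the markup lives in its own script block, it never has to be woven through (or duplicated alongside) the HTML your users see, which is exactly what makes it easier to maintain than inline Microdata.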
Intermediate & Advanced SEO | | TheDahv0 -
How do i update my mirror and sun listing?
David's (other David!) response definitely works for Moz customers. I'm not sure if you were asking about this as a Moz Local user or if you were going at it on your own. If the latter, have you considered managing your listing on services that feed into these directories? For Mirror, if you look at the footer on their website, you'll see that they cite Central Index as the source of their business data: Powered by web.com. Business data Central Index and third parties. You can review the full suite of their network partners on their Publishers page. Regarding Sun, you might look to Scoot as they mention Sun among their network of business directories. Unfortunately, I cannot speak to what experience you may find creating an account, finding, and managing your listings. The best I can offer is my understanding that they are all related to each other.
Moz Local | | TheDahv1 -
Got a problem in using MOZ Crawl test
Hey! Thanks for reaching out to us! Would you be able to copy and paste what you have written above into an email to help@moz.com, along with your email address? This will allow us to better investigate this issue. Looking forward to hearing from you! Eli
Getting Started | | eli.myers0 -
Lost Wikipedia page and dropped heavily in rankings. How many of you aware of and experienced this?
Hi William and EGOL, Here is some additional info on our Wikipedia page which answers your questions and gives more context on similar scenarios: Our Wikipedia page is pretty old; it was first created in 2005. The website link was pointing to our homepage. It suddenly got deleted this January due to a lack of reliable sources, and because the page sounded a little spammy and promotional. We didn't create this page; if we had, it couldn't have survived for so long. So, back to the actual discussion: even though the link from Wikipedia is technically a "nofollow", we can see the importance Google gives to this page in boosting a website's ranking with a strong backlink. Thanks
Search Engine Trends | | vtmoz0 -
Ranking gone for the original page and a shortened url ranks instead.
For my own understanding, should you or should you not disallow the Google ?gclid= parameter? Timo
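Whichever way you decide on the disallow question, it can help to see exactly what removing the parameter does to a URL (for example, when generating a canonical). A minimal Python sketch, assuming you just want gclid stripped while every other parameter survives:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_gclid(url):
    """Return the URL with any gclid query parameter removed, preserving the rest."""
    parts = urlsplit(url)
    query = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
             if k != "gclid"]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(strip_gclid("https://www.example.com/page?gclid=abc123&ref=ad"))
# → https://www.example.com/page?ref=ad
```

Pointing the canonical tag at the stripped URL is one way to consolidate signals from gclid variants without blocking crawlers outright.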
Search Engine Trends | | Bestbing1