URL Parameters to Ignore
-
Hi Mozers,
We have a glossary of terms made up of a main page that lists ALL of the terms, plus individual pages per letter of the alphabet that limit the results to that letter. These pages look like this:
https://www.XXXX.XXX/publications/dictionaries/XXX-terms?expand=A
https://www.XXXX.XXX/publications/dictionaries/XXX-terms?expand=B
https://www.XXXX.XXX/publications/dictionaries/XXX-terms?expand=C
https://www.XXXX.XXX/publications/dictionaries/XXX-terms?expand=D
etc.
If I'd like Google to remove all of these "expand=" pages from the index, such that only the main page is indexed, what is the exact parameter that I should ask Google to ignore in Search Console?
"expand=" ?
Just want to make sure! Thanks for the help!!!
-
User-agent: *
Disallow: /*?expand=
This should work; put it in your robots.txt. -
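Written out as a complete robots.txt file, that would look like the sketch below. Note that the `*` in the path is a wildcard, so this rule blocks any URL whose path contains `?expand=`, not just the dictionary pages:

```
# robots.txt at https://www.XXXX.XXX/robots.txt
# Applies to all crawlers; blocks any URL with "?expand=" in it.
User-agent: *
Disallow: /*?expand=
```

Keep in mind that wildcard support is a crawler-specific extension: Google and Bing honor `*`, but not every bot does.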
Hi!
What billbill369 said is correct, but it will only prevent Google from crawling those pages — it won't remove URLs that are already in the index.
My suggestion is to use a canonical tag on every URL with a parameter, pointing to the correct URL (the one without parameters).
For further reading:
SEO Best Practices for Canonical URLs + the Rel=Canonical Tag - Whiteboard Friday
Consolidate duplicate URLs - Google Search Console Help
Hope it helps.
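As a sketch (reusing the placeholder domain from the question), each lettered page would carry a canonical tag in its `<head>` pointing at the parameter-free main page:

```html
<head>
  <!-- On /publications/dictionaries/XXX-terms?expand=A, ?expand=B, etc. -->
  <!-- Tells Google the parameter-free page is the preferred version. -->
  <link rel="canonical" href="https://www.XXXX.XXX/publications/dictionaries/XXX-terms" />
</head>
```

Unlike a robots.txt Disallow, this still lets Google crawl the parameter pages, which it must do to see the canonical hint and consolidate them.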
Best luck.
GR. -
I agree with what is said above. In addition, you could set the parameter to be ignored in GSC, since it basically adjusts the page content based on that parameter. It's a bit unclear how much weight crawlers really give that setting, but it probably can't hurt.