Questions
URLs without trailing slashes not redirected to their trailing-slash versions, but canonicalised: any potential harm at Google?
The potential harm is that, even though canonical tags stop duplicate content from being a problem, they don't do much to consolidate the backlink authority hitting each page. If two versions of a page both have links pointing to them (with "/" and without "/"), then only 301s will properly 'merge' those URLs in terms of backlink authority (assuming their content is near identical). For this reason, a real architectural solution is always better than using canonical tags. Canonical tags are really a fall-back measure for when you have poor on-site architecture which you fundamentally cannot change. The end goal, though, is not to need them.
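For reference, here is a minimal sketch of what the 301 approach might look like in an Apache .htaccess file (assuming mod_rewrite is available; the rule is illustrative, not from the original answer):

```apache
# Minimal sketch: 301-redirect any URL without a trailing slash to its
# trailing-slash version, so backlink authority consolidates on one address.
RewriteEngine On
# Skip real files (e.g. /style.css) so only page URLs are redirected
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*[^/])$ /$1/ [L,R=301]
```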
Search Engine Trends | effectdigital
Tens of duplicate homepages indexed, then blocked later: how to remove them from Google's cache?
Hi Nigel, Thanks for the suggestion. I'm going to use the "Remove URLs" tool in GSC. The duplicates were created by a bug in the Yoast SEO plugin; very unfortunate, and we paid the price for a mistake that wasn't our own. Does removal from the SERPs mean removal from Google's index as well, or will Google still consider those pages and just stop showing them? My real question: we've already blocked them, but will they still distract from our ranking efforts while they remain cached and present in the results? Thanks
Search Engine Trends | vtmoz
Need only tens of pages indexed out of hundreds: is robots.txt the right way to tell Google?
Hi vtmoz, Given the limitations you describe, I'd give the noindex rule in robots.txt a try. I've run some experiments and found that it works: it won't remove those pages from the index, but it will stop them from showing in search results. I'd suggest using the rule with care, and running some experiments of your own. My first test would be to add only one or two pages, the ones that cause the most trouble by being indexed (perhaps due to undesired traffic, or due to ranking on undesired search terms). Hope it helps. Best of luck! GR
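For illustration, the experimental rule described above would look something like this. Note that `Noindex:` was never an officially documented robots.txt directive, and Google has since announced (September 2019) that it no longer supports it, so treat this strictly as a sketch of the experiment being described:

```text
# robots.txt - experimental sketch only; "Noindex:" is not an official
# directive and Google no longer honours it. Paths are placeholders.
User-agent: Googlebot
Noindex: /unwanted-page-1/
Noindex: /unwanted-page-2/
```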
Search Engine Trends | GastonRiera
Google ranking penalty: limited to specific pages or the complete website?
Hello vtmoz, Nope. If rankings drop for certain pages because of a few keywords, that affects only the pages where those keywords are placed, not the other pages or the home page. When a primary keyword is deployed widely across the website, then yes, you can see some drops or changes in rankings site-wide. Also, it's not a penalty from Google; rather, your page or website simply isn't meeting Google's ranking recommendations. So just make sure your keyword optimisation is done properly on the relevant pages, and I'd recommend using unique keywords for each page, apart from a few primary keywords. www.angleritech.com
Search Engine Trends | ANGLERTechnologiesUSAInc
How much content counts as duplicate content? Differentiating between website pages, help guides and blog posts.
Thanks for offering to help. Please check the pages below as samples: https://www.vtiger.com/all-in-one-crm/contact-management/ https://www.vtiger.com/docs/contacts
Search Engine Trends | vtmoz
Does using non-HTTPS links (not pages) impact or penalise website rankings?
You won't be making the most of your SEO authority by firing it through redirects. A to B is always better than putting 301s in the middle of links. It's also bad UX, because you send the user through too many locations, which is inefficient and a waste of their time. I wouldn't expect something like this to make much difference overall, but if you were going to 'do it right' you would just fix it. It's also not good to have too many links pointing to 'insecure' addresses, even if they will later be intercepted. I would just go in and fix the problem. In SEO, very few things make much difference at all (if there were one thing that made the most difference, everyone would just focus on that). Instead it's about pride, balance and stopping snowballs from forming before they start to roll. If you ignore this one, you'll also ignore all related issues, and over time you could notice a performance hit (for poor link structure, lower site health, linking to insecure content, and wasting users' time with redirects / bad UX). Instead of sitting back and waiting for bad things to happen, nip it in the bud. It only takes five minutes, and then your mind can be at rest.
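If you want to hunt those insecure links down quickly, something like this could be a starting point (a hypothetical sketch; the path assumes a WordPress theme directory and should be adjusted to your own codebase):

```bash
# Find hard-coded http:// links in the theme/template source so they can
# be updated to https:// directly, rather than leaning on a 301 to fix them.
grep -rn 'href="http://' ./wp-content/themes/
```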
Search Engine Trends | effectdigital
How does Google distinguish and ignore a keyword that appears within, or alongside, a brand?
I wrote an answer to a slightly similar question on Quora in 2016, and IMO nothing much has changed in this area since: https://www.quora.com/Why-do-some-companies-have-a-big-Twitter-widget-in-the-Google-search-results-and-some-do-not/answer/James-Allen-15

Google is not very good at distinguishing, ignoring, or thinking about brands specifically. Rather, Google sees query-spaces in terms of 'search entities' (especially after the Hummingbird update). Different keywords and keyphrases are related to one another, and where that relationship is pretty clear-cut (thematically), you have a search entity. A search entity can exist in multiple different states (place, business, news topic, trending search, regular query-space for general interest, etc.).

When most of the searches (or search queries / keywords) within a given search entity change in terms of the user's intent, the search entity itself may shift state. If a search entity which previously handled generic 'interest' based queries is intercepted by something like a meme (and suddenly there is an explosion of searches, with clicks going to sites with radically different thematic groundings), then the state of the search entity and its associated keywords (or most of them) can shift from one contextual niche to a completely different one.

If you think of it like that, things become much clearer. It's not that Google is saying "hey, you're a brand, you're cheating, I'm kicking you out". Instead Google is saying "well, I know that this search entity is not a business or brand; most people are searching for a meme that is trending. As such, I'll return sites which more closely match the state of the search entity to which this query-space is bound".

Not all search entities are so clear-cut. Some query-spaces are very ambiguous! In those cases, Google will try to return a balanced mixture of results. "SEO" is actually a very good example: many people are searching for information, but many people are also searching for companies and businesses. As such, Google supplies divided results and tries to give the best of both (or all) thematic pillars. These are what we call noisy query-spaces: https://d.pr/i/UX3lON.png (screenshot)

I know it's not a very clear-cut answer, but search is diverse and complex :') Hope that helps!
Search Engine Trends | effectdigital
What (technical) website changes can SEOs confidently ignore? Google's perspective!
I agree with Joe. SEO and web development often think differently about website structure. It's gotten better, but with Google's emphasis on speed, both teams will need to talk more over the next few years. I would try to be as involved as possible with what they are doing and how they are doing it.
Web Design | JohnSammon
Internal pages ranking above the homepage: how to optimise the homepage to rank better on Google?
Hey there! Could you share the URL of the website you are having issues with? I'll be able to assess the problem better that way. Thanks, John
Search Engine Trends | JohnSammon
Does Google consider direct traffic on pages with rel canonical tags?
They will both rank, but the original (canonical) page will have the higher priority.
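For reference, a rel canonical tag is just a link element placed in the head of the duplicate page, pointing at the original (the URL below is a placeholder):

```html
<!-- In the <head> of the duplicate page, pointing at the original -->
<link rel="canonical" href="https://www.example.com/original-page/" />
```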
Search Engine Trends | jasongmcmahon
Our sitemap is not indexed in Google even though it's successfully processed
To add to what Roman posted: I would also encourage you to set up Google Search Console to review how this has changed the number of pages indexed. You have a good head start by using the WP plugin, but I would take this additional step.
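For reference, a minimal valid sitemap that Google can process looks like this (a generic sketch with placeholder values, not the poster's actual file):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sitemap sketch: one <url> entry per page to be crawled -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2018-11-01</lastmod>
  </url>
</urlset>
```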
Search Engine Trends | JohnSammon
Must scripts not be placed outside the HTML tag? If they are, how does Google treat the page?
If Google detects that your website contains social engineering content, the Chrome browser may display a "Deceptive site ahead" warning when visitors view your site. You can check whether any pages on your site are suspected of containing social engineering attacks by visiting the Security Issues report. If your site is flagged for containing social engineering content, follow these steps:

1- Check in with Search Console. Verify that you own your site in Search Console and that no new, suspicious owners have been added.
2- Check the Security Issues report to see if your site is listed as engaging in social engineering.
3- Remove deceptive content. Ensure that none of your site's pages contain deceptive content.
4- Check third-party resources included in your site.

So make sure your site is clean, and then request a review here: https://developers.google.com/web/fundamentals/security/hacked/request_review?visit_id=636777391086456088-3885393656&rd=1
White Hat / Black Hat SEO | Roman-Delcarmen
Third-party http links in the page source: social engineering content warning from Google
Hi Serge, Google removed the warning from our pages and they are back to normal. However, Google didn't give any information on what exactly caused the issue; they probably flagged us by mistake and then lifted the warning. After receiving the warning, we removed the meta descriptions that had been added two weeks before the issue, but I don't think they had anything to do with it. Even the third-party http links aren't the culprits, as they are still on the pages. Thanks
White Hat / Black Hat SEO | vtmoz
Deceptive site warning from Google: JavaScript and meta descriptions deployed.
Wow, that's just ridiculous. I'm glad you figured it out though.
Behavior & Demographics | Everett
Displaying 10 blog posts on the website homepage: any loss in terms of link juice?
Doing this will give the 10 active blog posts (those currently rendered in the homepage listings) a better chance of ranking for long-tail terms, but **yes, some PageRank will be lost from your homepage** (or rather, redistributed). If you have lots of SEO authority, you're doing the right thing!

There's a reason that farmers use complex irrigation systems. Those same systems would be poorly suited to watering the daffodils hanging from your balcony. Although the question I answered just a few moments ago is slightly different from your own, you will find (if you read my full answer) that there are lots of relevant parallels to what you are asking: https://moz.com/community/q/best-structure-for-a-news-website-including-main-menu-nav

More links, a more expansive nav and increased granularity of URL architecture are all great things. The only problem is that, like a complex irrigation system, they need a large water supply (or rather, a large pool of SEO authority to draw from). If you don't have lots of 'juice' to spread, you could bleed out instead. If you only have a bucket of water to begin with, then a simpler, more streamlined approach is usually better. You don't want the system to spray one tiny droplet of authority (which makes no difference) across thousands of pages. For every tactic there is a proper time and place.

I'm not trying to push you in either direction, just to let you make a more informed decision for yourself.
Behavior & Demographics | effectdigital
"Indexed, though blocked by robots.txt": do I need to bother?
Hi there! What Google is telling you is that URLs you probably don't want indexed are being indexed, or the other way around: that important pages are blocked but have been indexed anyway for other reasons. If I might ask, why did you block those files through robots.txt? The two most common reasons are:

1- You wanted to remove them from search results. If this is your case, you've solved only part of the problem. What you should have done is apply noindex rules first, while still allowing robots to crawl those URLs (keep in mind that noindex can also be set in the HTTP header, since non-HTML files can't carry a meta robots tag), and then, after sufficient time, block them in robots.txt.
2- You wanted to optimise how Googlebot spends its crawling time. If this is your case, then you've done it correctly and there is nothing to worry about.

Hope this helps. Best of luck. GR
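To illustrate the noindex options from point 1 (generic sketches, not the poster's actual setup): HTML pages can carry `<meta name="robots" content="noindex">` in the `<head>`, while non-HTML files such as PDFs need the HTTP header instead, for example via Apache (assuming mod_headers is enabled):

```apache
# Sketch: send a noindex directive via the X-Robots-Tag HTTP header,
# useful for non-HTML files that can't carry a meta robots tag.
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```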
Search Engine Trends | GastonRiera
Huge difference between browser and Search Console rankings: any recent updates?
www.google.com/ncr no longer works; it will only land you on your regional Google website.
Search Engine Trends | vtmoz
WordPress redirects are taking too long to resolve: has anyone ever faced this?
Hi Vtmoz, It's probable that you have a new plugin which is conflicting with this one. A simple solution is to disable the newer plugins one by one and keep testing the 301 redirect plugin, clearing the cache after each deactivation before you test. If nothing works and updating the plugin doesn't help, copy all of the redirects into two columns in a spreadsheet, add 'Redirect 301' in a first column, and then add them to the .htaccess file manually (the best way anyway), like this: `Redirect 301 /old-url https://yourdomain.com/new-url`. Note that the source URL has no protocol or domain and the destination one does! You will need FTP or server-side file access to reach the .htaccess in some cases. So try the first solution, and if it doesn't work, do the second. It should be quite straightforward to copy and paste them. Regards, Nigel
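Pasted into the .htaccess, a batch of those spreadsheet rows would end up looking something like this (hypothetical URLs, purely for illustration):

```apache
# Manual 301 redirects: source paths are relative, destinations absolute
Redirect 301 /old-url https://yourdomain.com/new-url
Redirect 301 /old-blog-post https://yourdomain.com/new-blog-post
Redirect 301 /old-category/old-page https://yourdomain.com/new-page
```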
Web Design | Nigel_Carr