Questions
Google mystery: Website rankings are inverted between two countries for the main keyword. Any ideas or thoughts?
Google’s mission is “to organize the world’s information and make it universally accessible and useful”. In other words: to build the perfect search engine that helps people find what they are looking for. Google always wants to show you the best result for your search query. Google has updated its algorithm numerous times over the years, but the goal remains the same: to get you the best result. Google does this by ranking the most relevant and usable websites, and by combating spam. Sites that are only built to make money, or otherwise created purely to rank, should not be at the top of the search results. Sites that give the user what they searched for should be. Google also rewards sites that offer a good user experience (for instance, sites that load fast).

Google always keeps user behavior and user intent in mind. So, in theory, you fit the needs of users in one region, and in the other region you do not. The other factor is your competition: you probably face different competition levels, so in one country you are the best because there is no better competitor, while in the other country there are many sites better than you.
Search Engine Trends | | Roman-Delcarmen1 -
Do we have any risk or penalty for double canonicals?
Yes! I read the example backward. I'm with you! All pages should point to C.
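A quick way to sanity-check this is to pull the canonical out of each page and confirm it already points at C. A minimal sketch using Python's stdlib `html.parser` (the `example.com` URLs are placeholders):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of any <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

def canonical_of(html):
    parser = CanonicalFinder()
    parser.feed(html)
    return parser.canonical

# Pages A and B should both point straight at C, not chain A -> B -> C.
page_a = '<head><link rel="canonical" href="https://example.com/c"></head>'
page_b = '<head><link rel="canonical" href="https://example.com/c"></head>'
assert canonical_of(page_a) == canonical_of(page_b) == "https://example.com/c"
```

If any page's canonical points at an intermediate page instead of C, that is the chain to fix.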
Search Engine Trends | | slatronica0 -
What happens when a de-indexed subdomain is redirected to another de-indexed subdomain? What happens to the link juice?
Exactly as you said. I wonder what ranking fluctuation or dip we can expect on the main domain due to this de-indexing of sub-domains. Someone claims that rankings will drop, but how? Sub-domain "B" will still be there with all its backlinks, so technically the backlinks remain. Please let me know your valuable thoughts on this. Thanks
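The mechanics can be sketched as a tiny redirect resolver; the subdomain names here are hypothetical stand-ins for A, B, and the main domain:

```python
def resolve(url, redirects, max_hops=10):
    """Follow a {source: target} redirect map until we reach a URL
    with no further redirect, counting the hops along the way."""
    hops = 0
    while url in redirects:
        url = redirects[url]
        hops += 1
        if hops > max_hops:
            raise RuntimeError("Redirect loop or chain too long")
    return url, hops

# Hypothetical setup: de-indexed subdomain A 301s to de-indexed
# subdomain B, which 301s on to the main domain.
redirects = {
    "https://a.example.com/": "https://b.example.com/",
    "https://b.example.com/": "https://www.example.com/",
}
final, hops = resolve("https://a.example.com/", redirects)
# Any equity pointed at A or B ultimately consolidates on the final target.
```

Whether Google passes full value through a two-hop chain of de-indexed hosts is exactly the open question in the thread; the sketch only shows where the links end up pointing.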
Search Engine Trends | | vtmoz1 -
Best proxy service to browse Google from different countries to check rankings
http://isearchfrom.com/ With I Search From you can simulate using Google Search from a different location or device, or perform a search with custom search settings. It's useful for searching Google as if you were somewhere else, as well as for SEO & SEA testing.
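If you'd rather script checks than click through a browser tool, Google's `gl` (country) and `hl` (interface language) URL parameters approximate the same idea. A rough sketch (the parameters only hint at location; results still vary by IP, personalization, and data center):

```python
from urllib.parse import urlencode

def google_serp_url(query, country="us", language="en"):
    """Build a Google search URL with country (gl) and language (hl)
    hints; pws=0 asks Google to disable personalized results."""
    params = {"q": query, "gl": country, "hl": language, "pws": "0"}
    return "https://www.google.com/search?" + urlencode(params)

print(google_serp_url("best running shoes", country="de", language="de"))
```

Tools like I Search From layer a precise location on top of this, which is why they are more reliable than the bare parameters.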
Alternative Search Sources | | Igor.Go0 -
Any risks involved in removing a sub-domain from search index or completely taking down? Ranking impact?
If the sub-domain targets keywords not targeted in the rest of the website, then rankings will slip. I would 301 all of its pages to relevant pages on your main site. Any important keywords should be monitored; you should create related pages with content from the sub-domain to maintain these keywords. If traffic is non-existent, just 301 them.
Web Design | | Andrew-SEO0 -
Any recent updates from Google or community on sub domains vs sub directories?
No changes here. A sub-domain is still a separate unit from Google's POV. Links are not liquid and your pages are not communicating vessels: you don't lose any "juice" by linking to new pages, but things happen. You can try running your website through Netpeak Spider (disclaimer: I work for the company that developed it); it can calculate internal PageRank for your pages, and maybe you'll find something unusual there.
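For intuition about what a tool like Netpeak Spider computes, internal PageRank is just power iteration over the internal link graph. A toy sketch (the three-page site is made up):

```python
def internal_pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank over an internal link graph {page: [outlinks]}:
    each page splits its damped rank evenly across its outlinks."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

# Hypothetical three-page site: home links to both inner pages,
# each inner page links back to home.
site = {"home": ["a", "b"], "a": ["home"], "b": ["home"]}
ranks = internal_pagerank(site)
# "home" accumulates the most internal rank, since every page links to it.
```

Real crawlers handle dangling pages and millions of URLs, but the idea is the same: pages that many internal links point at accumulate more internal rank.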
Search Engine Trends | | Igor.Go0 -
What are the technical details (touchpoints) of a website gathered by Google?
Hello,

Some technical factors:

- Internal linking structure
- Architecture and crawlability
- HTTPS (HTTP Secure)
- Existence of meta descriptions
- Site speed
- Keywords included in the domain
- Use of Flash
- Domain .com

My two cents; I'm sure there's much more out there. Hope this helps!! If you like the answer, don't forget to select it as BEST ANSWER

Roberto
Search Engine Trends | | AgenciaSEO.eu0 -
Lost Wikipedia page and dropped heavily in rankings. How many of you are aware of this or have experienced it?
Hi William and EGOL, here is some additional info on our Wikipedia page which answers your questions and may help with similar scenarios: Our Wikipedia page is pretty old; it was first created in 2005, and the website link pointed to our homepage. It was suddenly deleted this January for lacking reliable sources and for sounding a little spammy and promotional. We didn't create this page; if we had, it couldn't have survived for so long. So, back to the actual discussion: even though the link from Wikipedia is technically "nofollow", we can see the importance Google gives to this page in boosting a website's ranking, like a strong backlink. Thanks
Search Engine Trends | | vtmoz0 -
Do SEOs really need to care about trend in increase of voice search?
I agree that voice search will only continue to grow in 2018. The results that are read aloud as answers are primarily featured snippets, so when you ask how to optimize for voice search, what you're really asking is how to optimize for featured snippets. A few optimization tips I've come across:

- Utilize a Q&A format: ask questions in your headings, and then include an explicit answer in the following paragraph.
- Use conversational language: seek to optimize for long-tail conversational questions.
- Consider frequently asked questions: from your customers and other searchers (check out the "People Also Ask" section of the SERP when applicable).

Here's a post that answers your question as well. Hope this is helpful!
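Explicit Q&A markup can reinforce the same format. A sketch that emits schema.org FAQPage JSON-LD (the question text is illustrative, and adding the markup never guarantees a featured snippet):

```python
import json

def faq_jsonld(qa_pairs):
    """Emit schema.org FAQPage JSON-LD for a list of
    (question, answer) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in qa_pairs
        ],
    }, indent=2)

print(faq_jsonld([
    ("How do I optimize for voice search?",
     "Target featured snippets: ask the question in a heading and "
     "answer it explicitly in the next paragraph."),
]))
```

The output goes in a `<script type="application/ld+json">` tag on the FAQ page.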
Web Design | | brooksmanley2 -
Website's server IP address is redirected to the blog by mistake; how does Google respond?
Hi Vtmoz, Google will not respond at all. I mean, Google has no way to know that it was a mistake, unless many websites hosted by the same provider showed the same behaviour. That said, I assume that you fixed the issue, and that in Google Search Console you checked robots.txt, updated the sitemap(s), and used "Fetch as Google" too. Good luck! Mª Verónica
Search Engine Trends | | VeroBrain1 -
Any tool to get all the old or archived links of a website?
This first method is a complete guess: if the site just went down, you might have a chance of grabbing the rest of it by typing site:thedomain.tld into Google. I don't know how old this thing is, so it's a possibility.

Try http://archive.is or time-travel.mementoweb.org, e.g. http://archive.is/20130504233657/http://moz.com/

You can find links or HTML using http://www.internetfrog.com/mywebsite/archive/ with archive.org.

When in the Wayback Machine, take the code and use regex to grab the URLs: https://www.quora.com/Aside-from-the-Wayback-Machine-what-are-other-options-for-getting-screenshots-of-websites-from-the-past Use these inside of archive.org: https://regex101.com/library/V36Bah https://regex101.com/r/V36Bah/1 https://regex101.com/r/fbX1LZ/1 http://network.ubotstudio.com/forum/index.php/topic/12459-regex-explained-match-extract-urls/ https://mathiasbynens.be/demo/url-regex

Use a Google search like inurl:thedomain.tld with any variation that you think will get your URLs; you never know what you might find.

Get data from link tools like Majestic, Ahrefs, or Moz OSE, which will give you an outline of the site structure if it was crawled; if it had a lot of backlinks, you can get a lot of URLs that way.

One thing you have to do with the Wayback Machine / archive.org: use whatever pages they have saved, look for the site's homepage and start from there, so you can grab the URLs out of the navigation. Outside of that, do Google searches; depending on how old it is, it may still be there. I don't know how long ago this occurred, but Google is not a bad place to look. After that, go into Webmaster Tools to look for an XML sitemap or anything you could have used to crawl the website prior to it coming down. If you ever ran a crawl via Moz, Screaming Frog, or DeepCrawl, you're in luck.

Call the hosting company if you own the domain. Are you the new owner? If you are, there might be a small chance the hosting company or the old owner has a backup of the entire site; you never know, but you need to ask. I wish you all the very best; sorry about the formatting, I'm on my cell phone. Tom
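Once you have raw HTML from an archive.org snapshot, the regex step can be as simple as matching `href` attributes for your domain. A rough sketch (the snapshot HTML is invented):

```python
import re

def extract_urls(html, domain):
    """Pull same-domain URLs out of raw (e.g. Wayback-saved) HTML.
    Matching quoted href attributes is less fragile than trying to
    match bare URLs anywhere in the text."""
    pattern = re.compile(
        r'href=["\'](https?://' + re.escape(domain) + r'[^"\']*)["\']'
    )
    return sorted(set(pattern.findall(html)))

snapshot = '''
<a href="https://example.com/about">About</a>
<a href="https://example.com/blog/post-1">Post</a>
<a href="https://other.com/x">elsewhere</a>
'''
print(extract_urls(snapshot, "example.com"))
```

On real Wayback pages you would first strip the `web.archive.org/web/…/` prefix that the archive rewrites into each link.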
Alternative Search Sources | | BlueprintMarketing0 -
Do external links in the footer menu take away PR or link juice?
No question - the page's authority is divided up amongst all links on the page, not just the internal ones. That's why I made the recommendation I did. To be clear - you're not "losing PageRank" for the page that contains the links. You're losing the ability of that page to pass some of its power to other pages on your own site, by having that power sent to external sites instead. Paul
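Under that simple even-split model, the arithmetic is one line. A sketch (real PageRank weighting is more nuanced, e.g. link position may matter):

```python
def internal_share(total_links, external_links):
    """Fraction of a page's passable link equity that stays on-site,
    under the simple model that equity splits evenly across all links."""
    if total_links == 0:
        return 0.0
    return (total_links - external_links) / total_links

# A footer with 10 links, 4 of them external: only 60% of what the
# page can pass flows to your own pages.
print(internal_share(10, 4))  # 0.6
```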
Search Engine Trends | | ThompsonPaul0 -
What is the best way to employ log-in to benefit in SEO?
Beware of the Login pages – add them to Robots Exclusion

A lot of sites today let users sign in to see some sort of personalized content, whether it's a forum, a news reader, or an e-commerce application. To simplify their users' lives, sites usually let them log on from any page they are currently looking at. Similarly, in an effort to keep navigation simple, sites usually generate dynamic links giving users a way to go back to the page they were on before visiting the login page, something like: Sign in.

If your site has a login page, you should definitely consider adding it to the robots exclusion list, since that is a good example of something you do not want a search engine crawler to spend its time on. Remember, crawlers have a limited amount of time, and you really want them to focus on what is important in your site. Out of curiosity I searched for login.php and login.aspx and found over 14 million login pages… that is a lot of useless content in a search engine.

Another big reason is that having URLs that vary depending on each page means there will be hundreds of variations for crawlers to follow, like /login?returnUrl=page1.htm, /login?returnUrl=page2.htm, etc., so you have basically doubled the crawler's work. Even worse, in some cases, if you are not careful, you can easily cause an infinite loop: if you add the same "login link" on the actual login page, you get /login?returnUrl=login as the link, and when you click that you get /login?returnUrl=login?returnUrl=login... and so on, with an ever-changing URL for each page on your site. Note that this is not hypothetical; it is an actual example from a few famous Web sites (which I will not disclose).
Of course, crawlers will not crawl your Web site infinitely; they are not that silly and will stop after seeing the same /login resource a few hundred times, but this means you are reducing the time they spend on what really matters to your users. Source: Beware of the Login pages – add them to Robots Exclusion
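You can verify such a rule before shipping it with Python's stdlib `urllib.robotparser`. A sketch assuming a hypothetical `Disallow: /login` rule and made-up example.com URLs:

```python
from urllib import robotparser

# Hypothetical robots.txt keeping crawlers away from login URLs and
# the ever-changing ?returnUrl= variants described above.
robots_txt = """\
User-agent: *
Disallow: /login
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("*", "https://example.com/login"))                      # False (blocked)
print(rp.can_fetch("*", "https://example.com/login?returnUrl=page1.htm"))  # False (blocked)
print(rp.can_fetch("*", "https://example.com/products"))                   # True (allowed)
```

Because robots.txt rules are prefix matches, the single `Disallow: /login` line also covers every `?returnUrl=` variation.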
Web Design | | Roman-Delcarmen0 -
Any risks involved when we have a huge list of redirects in our website database?
Top of the morning to you!! As long as you 301 redirect the old URLs to the current related pages, 90-99% of the page authority will transfer to the current URL. It would also be advantageous, if those old URLs contain duplicate or similar content, to set up rel=canonical tags pointing to the current URL. The canonical tag basically tells search engine crawlers which URL to index. I would also use a tool like Ahrefs to research the page authority of these old URLs. If you have thousands of old URLs to transfer, it will really only benefit you, from an SEO standpoint, to redirect the pages with higher page authority. So, to answer your question, there is not a lot of risk if you redirect correctly. It would also help, going forward, to automatically set up a 301 redirect whenever there is a URL change; that way you don't have thousands to sift through.
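Before bulk-redirecting thousands of URLs, it's also worth auditing the table for chains and loops, so every old URL 301s to its final target in a single hop. A sketch (the example table is invented):

```python
def audit_redirects(redirects):
    """Flag multi-hop chains and loops in a {old_url: new_url} redirect
    table. Chains should be collapsed so each old URL 301s directly to
    its final destination; loops must be broken."""
    chains, loops = {}, []
    for start in redirects:
        seen, url = {start}, redirects[start]
        while url in redirects:
            if url in seen:
                loops.append(start)
                break
            seen.add(url)
            url = redirects[url]
        else:
            if len(seen) > 1:
                chains[start] = url  # should redirect straight here
    return chains, loops

table = {
    "/old-a": "/old-b",   # chain: /old-a -> /old-b -> /new
    "/old-b": "/new",
    "/loop-1": "/loop-2",
    "/loop-2": "/loop-1",
}
chains, loops = audit_redirects(table)
```

Here `/old-a` would be flagged to point straight at `/new`, and both loop URLs would be reported for manual fixing.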
Web Design | | AdvisGroup0