Category: Technical SEO Issues
Discuss site health, structure, and other technical SEO issues.
-
How do I authenticate a script with the Search Console API to pull data?
Hi Jo. So I think that you want everything after code= and before the &. In the example you pasted, that would be: 4/igAqIfNQFWkpKyK6c0im0Eop9soZiztnftEcorzcr3vOnad6iyhdo3DnDT1-3YFtvoG3BgHko4n1adndpLqjXEE If that doesn't work (or rather, if it doesn't work when you re-run it and use whatever value comes up next time), let us know and I'll pull in someone who has done this themselves (I'm just reading the same instructions!). Good luck
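If it helps, the "everything after code= and before the &" step can be done with a URL parser instead of slicing by hand, which avoids mistakes with encoded characters - a minimal Python sketch (the callback URL here is made up for illustration):

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(redirect_url):
    """Return the value of the 'code' query parameter from an OAuth redirect URL."""
    query = urlparse(redirect_url).query
    return parse_qs(query)["code"][0]

# Hypothetical callback URL, for illustration only:
url = "https://example.com/callback?code=4/abc123XYZ&scope=webmasters.readonly"
print(extract_auth_code(url))  # -> 4/abc123XYZ
```

The same function will work on whatever value comes up next time you re-run the flow.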
| willcritchlow0 -
Can't get my site recognised for keyword
Dan took the words out of my mouth - honestly, I would do exactly what he just described. When you search your site for "fascinators", you see a lot of good content, but it sits in hub pages, as Dan would say, and I agree with him on that as well. Sincerely, Tom
| BlueprintMarketing0 -
Fundamental HTTP to HTTPS Redirect Question
Hi Sean / others - can I just add one last question to this post? I'm not certain about the canonical tags. Although I don't need to add redirects for every page, do I need to add a canonical tag to each page? Is there a way of doing this for all pages?
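For what it's worth, the usual way to get a canonical onto every page without editing each one is to emit a self-referencing tag from the shared head template - a hedged sketch (the template variable is a placeholder; your CMS will expose its own name for the page's absolute HTTPS URL):

```html
<!-- In the shared <head> template; {{ page_url }} stands in for whatever
     absolute-URL variable your CMS provides, always in its https:// form -->
<link rel="canonical" href="{{ page_url }}" />
```

One template change then covers all pages at once.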
| ruislip180 -
Using 410 To Remove URLs Starting With Same Word
Martijn - Thanks for your reply. I tried the code you provided, however it still returned a 404 error. I was able to get the following to work properly - any drawbacks to doing it this way? RewriteRule ^mono(.*)$ - [NC,R=410,L] The browser now shows the following any time the word "mono" appears immediately after "sitename.com/": "The requested resource /mono.php is no longer available on this server and there is no forwarding address. Please remove all references to this resource. Additionally, a 410 Gone error was encountered while trying to use an ErrorDocument to handle the request."
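For comparison, mod_rewrite also has a dedicated G (Gone) flag that does the same job as R=410 - a minimal .htaccess sketch:

```apache
# Return 410 Gone for any URL whose path starts with "mono" (case-insensitive)
RewriteEngine On
RewriteRule ^mono(.*)$ - [NC,G]
```

The "Additionally, a 410 Gone error was encountered while trying to use an ErrorDocument" part of the message just means no custom 410 error page is configured; adding something like `ErrorDocument 410 /gone.html` would tidy that up.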
| vikasnwu0 -
Internal Links issue in webmaster
We don't know why these links have been created. For example, counting all of the hrefs on our main page (menu, footer, and the other hrefs inside the page) gives about 51, but what we see in Webmaster Tools is about 400 (please take a look at this screenshot). This unusual number has hurt our site and rankings, and we don't know how to solve the problem.
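One thing worth checking: the internal links report in Search Console usually counts links pointing *to* a page from everywhere else on the site, not just the hrefs *on* that page, which can explain a 51-vs-400 gap. To sanity-check your hand count of a single page, a minimal Python sketch using only the stdlib parser:

```python
from html.parser import HTMLParser

class HrefCounter(HTMLParser):
    """Count every <a ... href="..."> tag, the way you'd tally them by hand."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a" and any(name == "href" for name, _ in attrs):
            self.count += 1

def count_links(html):
    counter = HrefCounter()
    counter.feed(html)
    return counter.count

# Tiny illustrative page:
sample = '<nav><a href="/">Home</a></nav><p><a href="/about">About</a></p>'
print(count_links(sample))  # -> 2
```

Run it against the saved HTML of your main page and compare the number with your manual count of 51.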
| jacelyn_wiren0 -
Duplicated titles and meta descriptions
Thanks Kate, I will do the best I can in light of your answers - though as you've probably understood by now, with quite limited resources.
| GhillC0 -
Internal no follow links
Yeah, that's pretty much overkill. "Nofollow" isn't actually named very well, as it doesn't prevent users or search engines from 'following' (i.e. visiting) a hyperlink - I know, it was named really badly! In fact, many people feel it's not even a directive to stop links from being followed. What the nofollow attribute is commonly used for these days is to denote the difference between editorial and advertorial hyperlinks, and it's only really an issue with external links rather than internal ones. If you have placed content on another site (and you paid for it, like a sponsored post) with a link pointing back to your own site (to try and get referral traffic), nofollow lets Google know that the link is advertorial in nature and thus should not pass PageRank to the receiving domain / web-page. Because of this, a lot of people believe that if you nofollow a link, it doesn't vent or lose any PageRank. This is false. If a link is default ('followed'), an amount of PageRank is lost from the linking page and donated to the receiving page. If a link is nofollowed, the PageRank is still lost by the linking page, but the receiving page doesn't get anything (so it gets vented into cyberspace). This is to stop "PageRank sculpting" with nofollow links from being a viable SEO manipulation tactic. As such, all that nofollowing your duplicate internal links will do is vent tiny chunks of SEO authority without appending them to other pages on your site, so little bits of authority just get lost from your website's ecosystem. It's not a huge problem that you should freak out about - in fact, I would guess the difference in performance between the two implementations would be negligible to totally unnoticeable. But still - why chip away at yourself, right? That's what your competitors are there for.
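The "venting" idea can be made concrete with a toy model - this is an illustration of the reasoning in this answer, not Google's actual formula: a page splits its passable PageRank equally across all of its links, followed links deliver their share, and no-followed links simply lose theirs:

```python
def pagerank_passed(page_rank, links, damping=0.85):
    """Toy model only. Each link gets an equal share of the passable PageRank;
    followed links (True) deliver their share, no-followed links (False) vent it.
    Returns (total delivered, total vented)."""
    share = (page_rank * damping) / len(links)
    followed = sum(links)
    delivered = share * followed
    vented = share * (len(links) - followed)
    return delivered, vented

# Page with rank 1.0 and four links, one of them no-followed:
delivered, vented = pagerank_passed(1.0, [True, True, True, False])
print(round(delivered, 4), round(vented, 4))  # -> 0.6375 0.2125
```

Note how the no-followed link's 0.2125 share isn't redistributed to the three followed links - it just disappears, which is exactly the sculpting-prevention behaviour described above.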
| effectdigital0 -
Should I add my html sitemap to Robots?
As said above, great question though.
| Libra_Photographic0 -
My company bought another company. How long do I keep the purchased company's site live?
In my opinion, it would be a business decision that's then implemented by the SEO. So ask your bosses these questions and what experience they want for their customers.
| ThomasHarvey0 -
Subdomain Question
"Can anyone think of any other reasons why the images wouldn't get indexed?" The only thing I can think of is your subdomain serving the wrong status codes for your images - unlikely, but possible.
| ThomasHarvey0 -
Fetch as Google temporarily lifting a penalty?
Unfortunately it's going to be difficult to dig deeper into this without knowing the site - are you able to share the details? I'm with Martijn that there should be no connection between these features. The only thing I have come up with that could plausibly cause anything like what you are seeing is something related to JavaScript execution (and this would not be a feature working as it's intended to work). We know that there is a delay between initial indexing and JavaScript indexing. It seems plausible to me that if there were a serious enough issue with the JS execution / indexing that either that step failed or that it made the site look spammy enough to get penalised that we could conceivably see the behaviour you describe - where it ranks until Google executes the JS. I guess my first step to investigating this would be to look at the JS requirements on your site and consider the differences between with and without JS rendering (and if there is any issue with the chrome version that we know executes the JS render at Google's side). Interested to hear if you discover anything more.
| willcritchlow1 -
How do I fix a fatal error message?
Wish we were able to give instant 24/7 support, but alas we're just volunteers ¯\_(ツ)_/¯ If you have time, would you mind explaining how this issue was resolved?
| zeehj0 -
Blocking pages in robots.txt that are under a redirected subdomain
Thank you so much for your answer! The home page of the subdomain is redirected, but none of the actual pages in the subdomain are, and because there are so many of them, it would be easier to block them in robots.txt, even if there is a small chance that Google will still index them. But because the home page is redirected, I don't want to confuse Google with a Disallow: / - could I do Disallow: / and then Allow: /homepage.html?
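That combination should work for Googlebot, which resolves conflicts by the most specific (longest) matching rule, so the Allow for the home page wins over Disallow: /. A sketch you can check locally with Python's stdlib parser (caveat: urllib.robotparser applies rules in file order rather than by longest match, which is why the Allow line is listed first here; for Googlebot itself the order shouldn't matter):

```python
from urllib import robotparser

# Proposed robots.txt for the subdomain, as described above:
rules = """\
User-agent: *
Allow: /homepage.html
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("*", "/homepage.html"))   # -> True
print(parser.can_fetch("*", "/any-other-page"))  # -> False
```

One caveat worth repeating: robots.txt blocks crawling, not indexing - URLs Google already knows about can linger in the index (without snippets) until they drop out or return a 404/410.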
| RaquelSaiz0 -
International targeting, translation, URL indexing confusion
As I see it, you are using the Google Translate API; in that case, keep the on-page optimisation in mind for every single language.
| Roman-Delcarmen0 -
Http to https redirection issue
Tim Holmes gave a good answer, but it does assume your redirects are being applied via a .htaccess file, which is the usual method if your website is hosted on a Linux / Apache server. If your website runs on a Windows / IIS server, then instead of implementing your redirect rules via .htaccess you'd be using web.config instead. Obviously most plugins (especially on common platforms like WordPress) are coded to interact with a .htaccess file; if you're running on IIS instead, they could break things or simply fail to function. On Google you can find many posts complete with web.config instructions: https://www.google.com/search?num=100&q=https+redirect+for+web.config This is the one which Google gives the knowledge-graph entry to: https://www.ssl2buy.com/wiki/http-to-https-redirect-using-htaccess-or-web-config The second part of that content deals with Windows. Checking that your SSL certificate is correctly installed, valid, and provided by a supplier which Google accepts is highly advisable. If browsing to an HTTPS URL in Chrome yields warnings or 'not secure' messages, it's safe to say that Google has not accepted your SSL certificate. If you can't even browse to HTTPS URLs, something is likely wrong with the install! Hope that helps
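For reference, the web.config equivalent of the usual .htaccess HTTPS rule looks roughly like this - a sketch, not a drop-in, and it requires the IIS URL Rewrite module to be installed:

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- 301-redirect any non-HTTPS request to its HTTPS equivalent -->
        <rule name="Redirect to HTTPS" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```

If your site already has a web.config, merge just the `<rule>` element into the existing `<rules>` section rather than replacing the whole file.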
| effectdigital0 -
Paginated pages are being indexed?
Hi there. Well, the question is why you don't want those paginated pages ranking. I assume that the content on those pages is different from each other and from the first page, and is still relevant - correct? If so, then by noindexing them you might take away from your SEO rather than add to it. If anything, canonicalize them to the first page (though it's still very debatable whether you should). If you want your first page ranking, just make sure that it's optimized more than the paginated pages - more popular products on the first page, description text present and optimized, maybe some targeted backlink earning to that page. Hope this helps.
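To illustrate the two options being weighed here (URLs are made up): a self-referencing canonical on each paginated page, versus the debatable approach of canonicalising them all to page one:

```html
<!-- Option A: self-referencing canonical on page 2 (treats it as its own page) -->
<link rel="canonical" href="https://example.com/category?page=2" />

<!-- Option B (debatable, as noted above): point page 2 at the first page -->
<link rel="canonical" href="https://example.com/category" />
```

Option B tells Google the paginated pages are duplicates of page one, which is only appropriate if you genuinely don't want their distinct content considered.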
| DmitriiK0