Hi Joy,
There are several aspects to the business, and there are indeed identifiable departments. They could be chosen from Marine Engineer, Yacht Broker, Marina, Shipyard and so on.
Hi Miriam, thanks for the reply.
There is a main reception where the customer is directed to the department they require. The services required typically cross over, but the departments are really distinct from each other and have clear working areas - the customer goes to a specific location for their requirements. The problem is that, in the name of excellent customer service, the business will bend over backwards and may serve the customer from the reception if it can, rather than sending them round to the different offices.
The problem I then have is that I don't know how Google will perceive a central switchboard that all the direct telephone numbers redirect back to. Although I do see that Google says "...whenever possible."
Without wanting to duplicate, I'm looking at a similar setup to the one in this thread.
There is a marine business that has within it three distinct departments - Sales & Brokerage, Marina and Shipyard - all at the same location but with unique telephone numbers, grouped into sections on the website with those numbers. I'm clarifying whether there are genuinely distinct customer-facing locations for each department. We want to create department pages because each one has unique opening hours.
However, looking at this I have some questions I'm unsure about.
To make matters worse, I can't remove any listings because I can't log in to Central Index - the whole process is being managed by Moz Local. I think I'll also talk to support about this.
I'm working on a site that has had some really bad technical issues over a period of time. We have carefully resolved all of them, rinsed through, rechecked for issues and so forth.
Organic traffic started to move up slightly but has now taken a real backward step.
Taking a look at the link profile, which has not really been worked on at all, we have a mass of Central Index-derived links coming through from sites such as gethampshire, heathrowpages and so on. Within each of these directories the business is listed under pages for areas it doesn't belong in - for example, in gethampshire it is listed under printers in Warwick or printers in Surbiton.
The end result is that 65% of the anchor text - and quickly growing - is 'website', with 90% of the links dofollow. They are now popping up like popcorn.
My instinct is to remove these listings from the profile. Has anyone else had this kind of issue with Central Index?
Hi Andy,
Thanks for the reply. Yes, each p=* page is identical to the base category URL; the only difference is a small handful of products on each p=* page, which, in the way they are presented, don't really offer those pages any uniqueness at all. So from that point of view the canonical makes sense. However, I don't want to take Google's focus away from cleanly crawling all the products within the p=* pages.
So rel=next & prev for me opens up duplication issues, as there are no "parts" of content; each page carries effectively the same category text.
However, if I implement a view-all version and set the canonical to that, I'm then worried Google may be problematic and not play ball.
I've been reading some posts on the merits and pitfalls of using rel=prev, rel=next and canonical, but I just wanted to double check the right solution.
example.com/birth-announcements
example.com/birth-announcements?p=2
example.com/birth-announcements?p=3
With a small selection of products on each variation.
So at the moment there is a canonical on all of them pointing to the base example.com/birth-announcements. The problem is we are having difficulty getting the products within the p=* pages indexed. From all I have read, I don't think rel=prev/rel=next is the way to go. Would the solution (or best way to go) be to create a "view-all" version and set that as the canonical URL, so all product URLs are in clear focus for Google? The volume of products won't (shouldn't) have too much of an impact on page load. Or am I wrong, and rel=prev/rel=next is a feasible solution?
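For illustration, if I went the view-all route I imagine each paginated page would carry something like this in its head (the ?view-all parameter is just a placeholder for whatever the platform actually generates):

<!-- on example.com/birth-announcements?p=2 and every other p=* page -->
<link rel="canonical" href="https://example.com/birth-announcements?view-all" />

Whereas the rel=prev/next alternative, if it were feasible, would look something like this on p=2:

<link rel="prev" href="https://example.com/birth-announcements" />
<link rel="next" href="https://example.com/birth-announcements?p=3" />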
I'm working on a site that has hidden H1 content. So for example:
The page uses "Video/Film Production" as the H1 title, but the heading is hidden in the code. There are no other H1 tags on the page.
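The pattern is effectively this (the display:none rule is just my shorthand for however their stylesheet actually hides the heading):

<h1 class="page-title">Video/Film Production</h1>

...and somewhere in the CSS: .page-title { display: none; }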
I have taken this up with their dev and they have suggested this has to be implemented this way due to some issues with displaying in iOS. They are digging their heels in and suggesting it stays as is.
How much of a risk would you say this is? Well, I'm actually looking for a bit of back-up here.
I got there in the end. They have a Wistia video loading on the homepage, but Wistia's robots.txt blocks this resource. When the resource is blocked, the CSS loads a holding image. However, this is configured to fill the whole page, so when Googlebot crawls it cannot render anything beyond this image and the area defined in the CSS. Dev is fixing it.
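I don't have the exact stylesheet to hand, but the offending rule is effectively something like this (the class name is my own placeholder):

.video-placeholder {
  position: fixed;   /* pinned over the viewport */
  top: 0;
  left: 0;
  width: 100%;
  height: 100%;
  background: url(holding-image.jpg) center / cover no-repeat;
}

With the placeholder stretched over everything, the rendered preview stops at that image, which matches what Fetch and Render was showing.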
not yet, not a sniff.
Apparently on my last question my profile status says Staff? Is there something I should know?? 
Wooah, this one makes me feel a bit nervous.
The cache version of the site homepage shows all the text, but I understand that is the HTML code constructed by the browser. So I get that.
If I Google some of the content it is there in the index, and the cached version is from yesterday.
If I Fetch and Render in GWT then none of the content is available in the preview - neither in the Googlebot view nor the visitor view. The whole preview is just the menu, a holding image for a video and a tag line for it. There are no reports of blocked resources apart from a Wistia URL. How can I decipher what is blocking Google if it does not report any problems?
The CSS is visible for reference too. For example, the markup contains:
<section class="text-within-lines big-text narrow">
... class="data"> some content ...
Ranking is a real issue, caused in part by a poorly functioning main menu. But I'm really concerned about what is happening with the render.
Although Google can now get to JS, I would still be nervous about choosing a theme/CMS that uses lazy loading.
According to John Mueller from Google:
“Is Googlebot able to trigger lazy loading scripts - lazy loading images for below-the-fold content?” – “This is a tricky thing.”
On lazy loading images John says “test this with Fetch as Google in Webmaster Tools” and “imagine those are things that Googlebot might miss out on.”
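If you do go with a lazy-loading theme, the usual mitigation (a generic sketch, not any particular plugin's markup - the class and data-src names are placeholders) is to keep a plain img fallback in a noscript block, so the image is still there even if the script never fires for Googlebot:

<img class="lazy" data-src="/images/product-photo.jpg" alt="Product photo" />
<noscript>
  <img src="/images/product-photo.jpg" alt="Product photo" />
</noscript>

Then run it through Fetch as Google, as John suggests, to see what actually gets picked up.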
In my experience, if you fix all the technical issues, make sure all redirects are properly in place and do some link building, then the site should recover well, even 2-3 months later. I rinse through and through on the technical side.
The issues start coming to the fore when content is killed, keywords are changed or the directory structure is changed. You know how it goes.
"Magento is churning out tons of 404 error pages like this https://www.tidy-books.co.uk/childrens-bookcases-shelves/show/12/l/colour:24-4-9/letters:6-7 which google is indexing"
That page is returning a 404 header response so it does not exist. Therefore Google cannot index it.
Without seeing Magento it's difficult to be certain what settings you have and/or whether you have a bug.
What you can do (maybe you already have) is add the attributes into Webmaster Tools > Crawl > URL Parameters and set them to "No URLs". You could also disallow the directory /sort-by/ in robots.txt.
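The robots.txt route would just be a couple of lines (assuming the filter URLs really do all sit under /sort-by/):

User-agent: *
Disallow: /sort-by/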
Using your example of https://www.tidy-books.co.uk/childrens-bookcases-shelves/colour/natural-finish-with-letters/letters/lowercase, well this has an internal rewrite to https://www.tidy-books.co.uk/childrens-bookcases-shelves/letters/lowercase?colour=20 which is not indexed.
It looks like not only do you need to resolve any MANAdev issues, but you also need to do an audit of the site, as I think you have several issues.
Personally, I don't agree with setting internal filter URLs to nofollow. I set noindex, as you have done, and add the filter attributes in Search Console > Crawl > URL Parameters.
For the option "Which URLs with this parameter should Googlebot crawl?" you can set "No URLs" (if the filters are uniform throughout the site).
"No URLs: Googlebot won't crawl any URLs containing this parameter. This is useful if your site uses many parameters to filter content. For example, telling Googlebot not to crawl URLs with less significant parameters such as pricefromand priceto (likehttp://www.examples.com/search?category=shoe&brand=nike&color=red&size=5&pricefrom=10&priceto=1000) can prevent the unnecessary crawling of content already available from a page without those parameters (likehttp://www.examples.com/search?category=shoe&brand=nike&color=red&size=5)"
Each domain in a PBN should leave no footprint, and if you have found it then Google will find it/them too.
For the 4 sites have you got any proof of relationship - contact details, whois, duplicate content, same plugins/layout, a very good reason to believe they are from the same source?
It would be very foolish of them to leave such a huge footprint as to link to all the sites.
Have you checked the PBN domains to see if they share the same C class? e.g. http://smallseotools.com/class-c-ip-checker/ or http://www.authoritydomains.com/bulk-ip-checker.php
If they were totally lazy then there is probably only a handful of hosting accounts. If that were proven, I wouldn't have any difficulty talking to Google if the SERPs were being distorted by more than one site dominating but basically coming from the same source. If it was a single site I'd tend to sit back and watch, while building a strong link profile.