There are many factors that could have caused this. Without some background on your recent actions, links, changes... or at least the domain name, it is impossible to give a valid answer.
The question is too generic, sorry
Hello.
This is a common problem for those of us working on ecommerce. Every filter, sorting widget and any other option you give the customer to browse your site in a more comfortable way becomes a pain in the ass, because of the duplicated content it creates for search engines.

Apart from implementing all the canonical tags as you say you did, you could also look at removing as many parameters as you can in Webmaster Tools (especially those dealing with sorting or the number of items shown).

Finally, you should decide whether you want to keep bots from indexing those special filter combinations so that you can focus on the category pages. You give the example of pagination, but the same applies to filtering by manufacturer. The option we take, and which I think is the best one, is to add a "noindex" meta tag to those kinds of pages and only index the main page of each category.

If we think a filter is important as a keyword (for example, "adidas soccer boots" for the category "soccer boots" with the brand filter "adidas"), we create a special description for that page so that it is no longer duplicate content. If we are not able to manually create that description, we just add the noindex tag as I said before and forget about that page on search engines.
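As a sketch, the noindex tag for those filtered pages could look like this ("follow" is one common choice so that bots still crawl the links on the page; adjust to your needs):

```html
<!-- Placed in the <head> of each filtered or paginated page
     you do not want indexed. "noindex" keeps it out of the index,
     "follow" still lets bots follow the links on it. -->
<meta name="robots" content="noindex, follow" />
```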
Support for more than 3 competitors is a must nowadays; when using a paid PRO version, this is by far the most painful limitation of your otherwise wonderful service.
The problem is that you are implementing a canonical from all those parametrized pages to http://mathematica-mpr.com/news/, and the content on that page (an empty result list) is not the same as on the original link you provide in your post, which shows article results.
A canonical is used to declare a preferred URL for a page when two or more URLs lead to the same content. After implementing the canonical, you have to make sure you are not linking to the "bad" URL, or it will never disappear from Google's SERPs.
As your content is not the same on both pages, and the old URL is still accessible from your site, you are not fulfilling the requirements to make those URLs disappear.
Those index pages depend on the search options, lead to very generic content, and it is almost impossible to identify a clear canonical reference for them. If you don't want them to appear in the SERPs, I think the best option is to add a line to robots.txt that blocks those parameters, which will put an end to that problem:
Disallow: /news/?facet=*
This will remove your search results from the index, ending the duplication problems. Please make sure this is what you want, or whether you prefer to keep those results even though you continue having duplication issues.
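For reference, a minimal robots.txt along those lines could look like this (the /news/?facet= path comes from your site; note that for Googlebot the rule already works as a prefix match, so the trailing * is not strictly needed):

```
User-agent: *
Disallow: /news/?facet=
```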
That kind of behaviour is really strange. Are you always searching from the same computer, and always logged off (or at least always in the same Google Account)? It seems like you are getting some kind of customized experience based on your geolocation, browsing history, Google Account or something like that.
To check your rankings, always use an external tool like Moz.
If it is a personal blog, that is a correct way of implementing those meta tags, as publisher and author are the same. If it is a site with several authors, I would change the publisher tag so that it points to your Google+ page, and the author tag so that it points to each author's profile.
That way you benefit from both tags.
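As a sketch, assuming your Google+ page and profile URLs (the ones below are placeholders), the two tags would sit in the <head> like this:

```html
<!-- Site-wide: points to the site's Google+ page (placeholder URL) -->
<link rel="publisher" href="https://plus.google.com/+YourSitePage" />
<!-- Per article: points to that author's Google+ profile (placeholder URL) -->
<link rel="author" href="https://plus.google.com/+AuthorProfile" />
```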
As long as you use valid HTML, proper tags for titles and every section, and don't let the HTML size grow too much, it won't damage your rankings.
If that redirect works (check not only the front page, but also internal pages) and you still see the three kinds of results, you should implement the canonical tag on your site to make sure it is detected as the same page regardless of the URL used to reach it.
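The canonical tag is just one line in the <head> of every page, always pointing to the preferred URL (example.com is a placeholder here):

```html
<link rel="canonical" href="https://www.example.com/some-page/" />
```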
Removing them from the sitemap will not make them disappear from Google's index. A sitemap is a tool which allows the spider to discover new pages, but once indexed, pages won't disappear from the index just because you remove them from it.
If you don't want them to be indexed, you can remove them using Google Search Console, going to "Optimization"/"Remove URLs". It is faster than adding the noindex meta tag.
If they contain just a link as in your example, I would remove them without any doubt.
The best option in my opinion is to combine Moz's "Fresh Web Explorer" and Google Alerts.
You can receive daily updates about any keyword.
Hello.
I would include a different description and title for every section to avoid them being seen as duplicates if you only list the events.
Then, for the navigation on each section, you have three good options:
Any of those ways would help Google quite a lot to understand what is happening on your site.
I hope it helps you.
A lot of directories, especially those based on old scripts, have issues with the regex which checks whether the URL is valid, and force the address to start with http:// instead of accepting https:// . It is a problem with the directory, not with your website.
You should make sure you have a redirection from your http version to your https version, and then submit the http URL to the directory causing that failure. Please make sure the http version loads via a redirection and does not serve the content directly, or you will have a serious duplication issue.
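If you are on Apache, a minimal .htaccess sketch for that http-to-https 301 redirect could be (assuming mod_rewrite is enabled):

```apache
RewriteEngine On
# Redirect every http request to the same path on https with a 301
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
```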
It is not a must nowadays. As soon as you get some external links, bots will be able to start indexing your pages. But, honestly, creating an automatic sitemap is an easy task, and it certainly speeds up the process of getting the whole site to appear in search engines, especially the deepest pages. It is also a way to get areas indexed which are not accessible via links.
So it is not the main goal in SEO, but it is still recommended, especially when launching new sites.
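A minimal sitemap is just an XML file like this (URLs are placeholders), which you can then submit in Webmaster Tools:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
  </url>
  <url>
    <loc>https://www.example.com/deep/page/</loc>
  </url>
</urlset>
```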
I cannot know your exact case as you don't provide a URL, but I had a similar issue some weeks ago, which I have since solved. For pages which are strongly internally linked within your site, Google often shows the most common anchor text of the internal links pointing to that page instead of the HTML title tag.
We changed the anchor text in our main menu and soon it changed on SERPs.
Could it be your case? Do you have that page linked as "Isagenix Australia" in your menu, footer or any other place in your site structure, or a lot of external links with that anchor?
I checked your schemas with https://developers.google.com/structured-data/testing-tool/ and your product and review snippets seem to be well implemented.
We implemented them two years ago on a quite powerful ecommerce site, and the rich snippets only appear for branded keywords. Having the schema is a must for appearing, but it is not enough to make Google show them.
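For reference, a product with review markup can also be expressed as a JSON-LD sketch like this (all values are placeholders; existing microdata markup is equally valid):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "27"
  }
}
</script>
```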
Luck and patience!
Which URLs do you include in your sitemap? Could you check whether you are trying to index
https://www.zenory.com.au/psychic-readings/psychic-readings or https://www.zenory.com.au/psychic-readings ?
The first one is the URL you link in your menus, but it has a 301 redirect to the second URL format (and the same goes for the rest of the main options). That is quite a bad idea. Please make sure you include the final address in the sitemap and not the one with the 301 redirect. That could be what causes Google Webmaster Tools not to show that page of your sitemap as indexed: although the final page is properly indexed in Google (as you can check by searching for site:www.zenory.com.au), GWT is not able to match both addresses.
When you manage a popular ecommerce site, coupon pages are a natural source of traffic. Those pages do some linkbuilding for you, even if you don't submit coupons to them.
If you have other types of links apart from the coupon sites, I would not give too much importance to this issue. Just check whether some of those sites could be seen as spammy, and ask for a link removal in that case.
Search engines are getting good at identifying common problems like this, but it is in fact a duplicate content issue. Given the low cost of redirecting one of those options to the other, or implementing a canonical tag, I would not risk being detected as duplicate.
Also, always using the same notation will help you concentrate links on one page, as any incoming link will point directly to the correct address. If you randomly use both versions of the URL and both return content, visitors will copy the link and you will end up with links pointing to both of them, diluting your linkbuilding.
Which option is best? It does not matter. For users it is usually "cleaner" to see no trailing slash, as it is interpreted as visiting a document rather than a folder. But either of them is perfectly fine.
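If you pick the no-trailing-slash version and are on Apache, a sketch of the redirect rule could be (assuming mod_rewrite is enabled; the directory check avoids breaking real folders):

```apache
RewriteEngine On
# 301-redirect any URL ending in a slash to the slashless version,
# except when it points to an actual directory
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [L,R=301]
```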
Maybe you could index your galleries, which show the small thumbnails (so the page does not weigh those 7 MB you talk about), and link to the full-size image with the a href.
Another option is to keep working as you do, and manually insert a title and a small description for each image page. This would definitely improve your SEO for those images, but obviously it is manual work, and I don't know whether you will be able to do it, depending on the volume of images you process.
Hello.
Before starting from scratch, try to optimize Drupal. There are some simple things you can do which speed Drupal up amazingly:
See if that helps while you find the source of the problem.