Posts made by donford
-
RE: Brush up on the ins and outs of posting to Moz Q&A
Nice video Matt, it will surely be helpful to newcomers.
-
RE: Download help
Hi,
I think you may have downloaded the retiring rankings CSV, which includes the same keyword multiple times as historical rankings. Or you may be using a program like OpenOffice, in which case you need to turn off the "space" delimiter and use only the comma delimiter. My new rankings CSV file is downloading just fine.
Hope it helps,
Don
-
RE: How to perform a search as though you are in a local city
Hello,
You should be able to see results if you use a proxy server located in the city (area) you are researching. But I imagine there is an easier way nowadays.
This post over on SearchEngineLand may help.
Good luck,
Don
-
RE: HTML Page in PHP Website
Hi plinggtre67,
This should have no direct effect on SEO. The reason is that PHP is a server-side script whose output is usually HTML anyway, and HTML is what the browser actually reads.
If you are referring to some pages having .html extensions and some having .php, then from a user's perspective it may slightly confuse visitors who don't know what's going on, but in general it shouldn't matter. If you would like a consistent look for your page extensions, you can use an Apache URL rewrite to remove the extensions altogether (just like Moz does). You should notice that no pages on the Moz site have an actual .extension appended to them.
Example code (via StackOverflow):
RewriteEngine on
# Serve /page as /page.php when the .php file exists
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.php -f
RewriteRule ^(.*)$ $1.php
# Serve /page as /page.html when the .html file exists
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}.html -f
RewriteRule ^(.*)$ $1.html
I hope this makes sense and helps,
Don
-
RE: Does Google's Information Box Seem Shady to you?
Hi Saijo,
Absolutely! In fact, that is exactly what I was looking for in the information box; I wanted to see the source of the definition. When citing a source it feels like it would look better to cite Merriam-Webster rather than Google, if that makes any sense. But perhaps Google is aware of that perception and this is an effort to change it.
I know there is a difference between snippets and the information box (I think Google calls it the "Knowledge Graph"), but when I didn't see a source my wheels started turning. I really like the snippets; as you and EGOL point out, they are extremely helpful and can be a valuable source of traffic.
Thanks guys for your thoughts,
Don
-
Does Google's Information Box Seem Shady to you?
So I just had this thought: Google returns information boxes for certain search terms. Recently I noticed that one-word searches usually return a definition.
For example, if you type in the word "occur" or "happenstance" or "frustration" you get a definition information box. But what I didn't see is a reference to where they are getting, or have gotten, this information.
Now it could very well be that they built their own database of definitions, and if they did, great. But here is where it seems a bit grey to me: did Google hire a team of people to populate the database, or did they just write an algorithm to comb a dictionary website and stick the information in their own database? The latter seems more likely.
If that is what happened, then Google basically stole the information from somebody to claim it as their own, which makes me worry. If you coin a term, let's say "lumpy stumpy", and it goes mainstream (which would entail a lot of marketing and luck), would Google just add it to its database and forgo giving you credit for its creation?
From a user perspective I love these information boxes, but just like Google expects us webmasters to do, they should be giving credit where credit is due... don't you think?
I'm not plugged in to the happenings of Google, so maybe they bought the rights, or maybe they bought or hold a majority of shares in some definition-type company (they have the cash), but it just struck me as odd not seeing a reference to a site. What are your thoughts?
-
RE: Moz site Issue
Hi Radi,
That is fairly normal.
It usually happens when the browser renders the raw HTML before the page's stylesheets and other assets have finished loading.
If you come across this it could be because of a slow internet connection or a problem with the server hosting the content.
I would suggest a force refresh: CTRL+F5 on Windows, Cmd+Shift+R on Mac.
Usually that fixes the problem.
Good luck,
Don
-
RE: Robots User-agent Query
Hi Thomas,
Unless I'm mistaken, if you list multiple user agents before a group of rules, all of those user agents are subject to the rules.
So what you have is a list of three user agents that are allowed everything except the four specific things that are disallowed.
In the end the rules apply to all.
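To illustrate, here is a minimal sketch of that kind of grouping (the user agents and paths are made up for the example, not taken from your file):
User-agent: Googlebot
User-agent: Bingbot
User-agent: Slurp
Allow: /
Disallow: /private/
Disallow: /tmp/
Disallow: /checkout/
Disallow: /search/
All three user agents listed above the rules are allowed everything except the four disallowed paths; no rule in the group applies to just one of them.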
Don
-
RE: Duplicate content across a number of websites.
Hi Fraser,
The solution is to use a single website with local options. Yes, I know that is exactly what you said the client doesn't want, but then again, the client came to you for your expertise...
Build a national website with a localization focus for each product offered. A company with 25+ locations should be ranking nationally for every product!
As for the localization part, if it's a franchise or similar, let the "main" website feed the local sites, which should be doing their own local ranking.
I'd be happy to expand on my thoughts with more details,
Don
-
RE: Crawled page count in Search console
Ben,
I doubt that crawlers are going to access the robots.txt file for each request, but they still have to validate any url they find against the list of the blocked ones.
Glad to help,
Don
-
RE: Crawled page count in Search console
Hi Bob,
About nofollow vs. blocked: in the end I suppose you get the same result, but in practice it works a little differently. When you nofollow a link, it tells the crawler, as soon as it encounters the link, not to request or follow that link path. When you block it via robots.txt, the crawler still attempts to access the URL only to find it is not accessible.
Imagine if I said, "Go to the parking lot and collect all the loose change in all the unlocked cars." Now imagine how much easier that task would be if all the locked cars had a sign in the window that said "Locked": you could easily ignore the locked cars and go directly to the unlocked ones. Without the signs you would have to physically check each car to see if it will open.
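A rough sketch of the two approaches (the paths here are made up for illustration, not from your site):
Blocked via robots.txt (the crawler discovers the URL elsewhere, then has to check it against this list):
User-agent: *
Disallow: /filter/
Nofollowed in the HTML (the crawler is told at the link itself not to follow it):
<a href="/filter/price-low-to-high" rel="nofollow">Sort by price</a>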
About link juice: if you have a link, juice will be passed regardless of the type of link (you used to be able to use nofollow to preserve link juice, but no longer). This is a bit unfortunate for sites that use search filters, because they are such a valuable tool for users.
Don
-
RE: Crawled page count in Search console
Hi Bob,
You can "suggest" a crawl rate to Google by logging into your webmasters tools on Google and adjusting it there.
As for indexing pages.. I looked at your robots and site. It really looks like you need to employ some No Follow on some of your internal linking, specifically on the product page filters, that alone could reduce the total number of URLS that the crawlers even attempts to look at.
Additionally your sitemap http://premium-hookahs.nl/sitemap.xml shows a change frequency of daily, and probably should be broken out between Pages / Images so you end up using two sitemaps one for images and one for pages. You may also want to review what is in there. Using ScreamingFrog (free) the sitemap I made (link) only shows about 100 urls.
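For example, a sitemap index along these lines would let you split pages and images into their own files (the two file names are just placeholders, not existing files on your server):
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://premium-hookahs.nl/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://premium-hookahs.nl/sitemap-images.xml</loc>
  </sitemap>
</sitemapindex>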
Hope it helps,
Don
-
Moz Thumbs not working? @MozHelp
Hi all,
I'm noticing an issue with Moz thumbs. I am pretty positive that last night before bed I gave EGOL and Laura a thumbs up for their answers to this question. This morning I noticed there were no thumbs up on their responses. At first I thought maybe I hadn't given them a thumbs up, and proceeded to thumb them again. Everything appeared to work; however, after returning to the post I noticed my thumbs didn't appear to be recorded.
Has anybody noticed this issue?
-
RE: Duplicating content from manufacturer for client site and using canonical reference.
Hello,
Laura and EGOL really nailed it, as they both usually do!
By using a canonical you have basically told the search engines, "Hey, this content all belongs to X."
What I would suggest is using the manufacturer's description in conjunction with the site's or owner's own description. There is absolutely nothing wrong with using a manufacturer's description, but you have to own it, which means unique content for every client for every product. Amazon, for example, uses manufacturer descriptions, but they also usually add a slew of other things to a page to make it theirs: manufacturer description / Amazon description / technical information / user reviews / user questions / user images / shipping information.
And here is the crux of the matter: people don't want to buy things from companies that know nothing about what they are selling. If a site can't add some sort of information or opinion about what the product is and why it is worth buying, they honestly have no business trying to sell such a product.
Just my thoughts along with the other 2 great answers,
Don
-
RE: How many directories are too many directories?
Hi Will,
I'm of two minds when it comes to directories. My general advice would be to ignore them altogether, unless there are some very industry-specific ones that make sense. I say general advice because the vast majority of industries I have researched have only one known good directory (Dmoz.org); the rest are, at best, relic sites that have basically run their course in usefulness and give little to no value in terms of traffic or link juice. Why? Because it is atypical for somebody to use anything other than Google / Yahoo / Bing / Baidu to find anything on the internet.
That being said, I do place some value on directories for some specific industries and for lead generation. For example, in my current industry there is a site that has been around since the 90's, and many people, before the rise of search engine dominance, found it to be a great resource for finding business-to-business partnerships. Many of those people who got acclimated to the site are still working today and use it as their go-to source for specific project requirements. In other words, they have used it for so long and it has worked for so long that they never found the need to branch out and rely on search engines. And in all honesty, even Google would have a hard time returning pertinent results for, let's say, a rubber manufacturer who has experience with overmolding FDA-approved Buna-N rubber to an aluminum substrate. But the good directory sites can list those sorts of capabilities.
Because this is a public question I had to give both of my opinions on directory sites. Again, I wouldn't seek them out as any form of link building, but I also wouldn't ignore ones that seem capable of delivering either traffic or leads. I will say that, with the exception of Dmoz.org, any of the good directory sites I have run across are very industry specific, and they are certainly not free.
Hope that helps
Don
-
RE: Crawled page count in Search console
Hello Bob,
Here is some food for thought. If you disallow a page in robots.txt, Google, for example, will not crawl that page. That does not, however, mean they will remove it from the index if it had previously been crawled. Google simply treats it as inaccessible and moves on. It will take some time, months even, before Google finally says, "We have no fresh crawls of page X; it's time to remove it from the index."
On the other hand, if you specifically allow Google to crawl those pages and serve a noindex tag on them, Google now has a new directive it can act upon immediately.
So my evaluation of the situation would be to do one of two things.
1. Remove the disallow from robots.txt and allow Google to crawl the pages again. However, this time use noindex, nofollow tags.
2. Remove the disallow from robots.txt and allow Google to crawl the pages again, but use canonical tags pointing to the main "filter" page to prevent further indexing of the specific filter pages.
Which option is best depends on the number of URLs being indexed: for a few thousand, canonical would be my choice; for a few hundred thousand, noindex would make more sense.
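For reference, the two options boil down to a single tag in the head of each filter page, something like the following (example.com is a placeholder, not your actual domain or filter URL):
Option 1, noindex/nofollow:
<meta name="robots" content="noindex, nofollow">
Option 2, canonical to the main filter page:
<link rel="canonical" href="http://www.example.com/products/" />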
Whichever option you choose, you will have to ensure Google re-crawls the pages and then allow them time to re-index appropriately. Not a quick fix, but a fix nonetheless.
My thoughts and I hope it makes sense,
Don
-
RE: Reputable SEO companies
I should also add that when you look at a company that provides link building services, you should have them evaluate your site for content. What I would do is show them a couple of so-so pages and a couple of really good ones. Let them pick which ones they are willing to promote; if they choose the good ones, you know you have the right people... My thoughts,
Don
-
RE: Reputable SEO companies
Hi Nicholas,
Moz maintains a list of the top companies they recommend. You know, Moz used to be an SEO company all on its own, so those that manage to impress Moz would be well worth a look. Many people who post or contribute here work for, or even own, some of those companies.
As always, ask questions; link building is a very murky area if you ask me. You want to find a person or company that knows your industry and can deliver your content to the right places. That being said, if your content isn't worth linking to, then no company or person is going to get good long-term results.
Link building is an important ranking factor because it is at the core of how search engines operate. The more sites linking to a particular page or website, the more value it appears to have. With that said, there have been some strong changes to the way search engines treat the value of links, rolled out in the Panda and Penguin updates. Those updates targeted low-quality links and low-quality sites themselves to eliminate the abuses link builders were using to manipulate sites to the top. And if you think about it, that is the way it should be.
Link building is supposed to occur naturally: when webmaster A really likes what webmaster B has done, they should link to them. This provides a benefit for both A's and B's users. What has happened over the years is that people abused the hell out of this mechanic and pushed some really crappy sites into the top rankings. Along with social media sharing, the natural way sites link to each other has only gotten more complicated. Still, the value of link building is there, although it is much, much harder to accomplish, in part because good sites don't want to risk penalties by linking to "less popular" sites. This only means that if you really want those great links, you have to impress the existing sites that may provide them.
Hope this helps,
Don
-
RE: Use 301 or rel=canonical
Hi Kerry,
If you use a 301, then the noindex, nofollow rule will never be read. That is because as soon as the page is requested, the server redirects; in that case the meta tags in the HTML are never read. So, in short, I wouldn't worry about it if you're 301'ing.
You should, however, make sure you update any sitemaps you may be using and change your internal linking to use the new URL as opposed to the old. You don't want your site to continue to link to a page that just gets 301-redirected by the server. That is just good practice.
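For what it's worth, a typical 301 on an Apache server looks something like this in .htaccess (old-page and new-page are placeholders for your actual URLs):
Redirect 301 /old-page/ http://www.example.com/new-page/
or, using mod_rewrite:
RewriteEngine on
RewriteRule ^old-page/?$ /new-page/ [R=301,L]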
Hope this helps,
Don
-
RE: Use 301 or rel=canonical
Hi Kerry,
My advice is to 301. Canonical was originally designed for people who didn't have access to the server to create 301 rules. Since then we have used it for that purpose, but also to deal with dynamic URLs and URL variations like www.mysite.com/home vs. www.mysite.com/home/.
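For example, for that trailing-slash variation, the head of www.mysite.com/home/ could carry a canonical pointing at the preferred version (a sketch, assuming /home is the URL you want indexed):
<link rel="canonical" href="http://www.mysite.com/home" />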
If you are in fact using a new page as a better version of the old one, then you should 301 the old to the new. This will pass all the link juice your previous page has accumulated, and your new page will be the one to appear in the index upon Google's next indexing pass of your site.
Hope that helps,
Don