Category: On-Page / Site Optimization
Explore on-page optimization and its role in a larger SEO strategy.
-
Is this writer worth $0.50 per word?
If you have the time, I would highly suggest training your own writers to work as contractors. It took me about 8 hours to train two writers, who are free to write for anyone they choose, and I get optimized content for $0.05 a word. It also gives me the chance to actually sit down with them to discuss improvements. I use two ex-journalists for mine, but you could easily find someone pursuing a writing degree who would be interested in doing it as a side gig. As for the original question, I wouldn't pay that.
| WhoWuddaThunk0 -
Infinite scroll SEO
Infinite scroll is fine for SEO, as long as you include code that forces a unique URL at a certain point in the scroll, to emulate a new page. Now, if you have that on category pages, and thus provide a crawlable, indexable group of pages that follow pagination best practices, you don't need it on the home page. The key, though, is that you need at least one proper way for search bots to reach all of your content through HTML links. If you don't have separate categories, and the home page links are the only way to get to that content, then having page 2, page 3 links is critical - and they should not be blocked from search bots for crawling or indexing.
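As a rough sketch of what that crawlable pagination could look like (the domain, category, and URLs below are made up for illustration, not taken from the question):

```html
<!-- Hypothetical category page 2 in a paginated series -->
<head>
  <title>Widgets - Page 2 | Example Shop</title>
  <!-- Optional pagination hints for the series -->
  <link rel="prev" href="https://www.example.com/widgets/page/1/">
  <link rel="next" href="https://www.example.com/widgets/page/3/">
</head>
<body>
  <!-- Plain anchor links give search bots a crawl path to every page,
       independent of whatever script drives the infinite scroll -->
  <nav>
    <a href="https://www.example.com/widgets/page/1/">Page 1</a>
    <a href="https://www.example.com/widgets/page/2/">Page 2</a>
    <a href="https://www.example.com/widgets/page/3/">Page 3</a>
  </nav>
</body>
```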
| AlanBleiweiss0 -
What does Google consider a "Duplicate Title Tag?"
I'm not certain duplicate means exactly the same when it comes to titles. I've seen instances, particularly on large ecommerce sites, where titles are blatantly auto-generated and are not displayed by Google in the SERPs: e.g. "Buy [parent category] and [subcategory products] from XYZ Shop at great prices". In my view, it's likely that Google is aware of this from a quality guidelines standpoint. Where possible, titles should be individually crafted.
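To illustrate the difference (both titles below are invented examples, not from a real site):

```html
<!-- Templated pattern that tends to produce near-duplicate titles across a category -->
<title>Buy Garden Tools and Pruning Shears from XYZ Shop at Great Prices</title>

<!-- Individually crafted title for the same page -->
<title>Bypass Pruning Shears for Roses &amp; Shrubs | XYZ Shop</title>
```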
| webmethod0 -
Multiple domains for the same business
There is no real need to use all those domains. Build the trust and authority with your primary brand URL. Maybe use "some" of the others if there is any real value, but 500? Jeez! That would be a ton of work, and I think you'd be spreading yourself too thin. The best plan of action, IMO, is to focus on properly optimizing and building the TRUST for the company's branded URL. Forget all that old-school "microsite" nonsense.
| Bryan_Loconto1 -
How to remove duplicate content issues for thin pages (containing "Oops, no results found")
Hi, if you simply want to stop these pages from being indexed until they have some non-duplicate content on them, you could use the meta robots tag in your <head>. This is how that would look:

<meta name="robots" content="noindex">

This will drop these pages from the index when Googlebot crawls a page with this directive in the <head>. If you wish to stop these pages being crawled by rogerbot (Moz's crawler) so the duplicate content errors do not show up, you could add these lines to your robots.txt:

User-agent: rogerbot
Disallow: /oopspage1.html
Disallow: /oops/page2.php

User-agent: *
Allow: /

This will still allow Googlebot to crawl these pages and will keep them in the index, so when you do update the content you won't have any problems if you forget to remove the meta robots tag on these pages. Be careful with robots.txt, and remember that Disallow: /oops/ would stop crawlers from crawling subpages of the /oops/ folder, so you need to remember your filenames. In this case, since you're only setting these rules for rogerbot, it wouldn't be catastrophic for your site, but it's best to take care either way. In reality, though, duplicate content is a problem when multiple pages are competing with each other for the same terms. Sometimes the wrong version of the page will be displayed above your preferred version. Unless you're looking to rank for your "Oops, nothing here" phrase, these duplicate content problems are nothing to worry about, and the notices will soon disappear from your crawl errors once you create some fresh content for them. Hope this helps, Tom
| TomVolpe0 -
Should we de-index pages that are not receiving any traffic?
Hi Vadim, My initial response/question would be: if you are willing to consider de-indexing those pages, why not just remove them from the site completely? Perhaps I am misunderstanding - maybe this is what you are thinking of doing anyway? It is not uncommon on larger sites for a very small number of pages to drive almost all of the traffic. Still, there may be people linking to some of those pages, and that may be helping you even if you aren't getting traffic from them. It sounds like keyword cannibalization could be a possibility, but I can't be sure without taking a deeper dive. If you added pages targeting substantially similar keywords, it could be that Google is having difficulty determining which page is more important for a given term. Consequently, neither page does as well as it could if there were only one. Generally speaking, more pages on a site is a good thing, but only when the content is really unique and fulfills a need or want of your audience. What did you do to promote this new content? Sometimes it takes serious effort and coordination to get a piece of content noticed. The days of "if we build it, they will come" are long gone. Maybe you just need to promote those new pages? Just some thoughts. Cheers, Dana
| danatanseo0 -
Where would I start with optimizing my site?
Welcome! Another simple-ish trick you can do is to create a campaign in Moz. The campaign will run an audit of your site and take a look at all of your webpages, checking your tags, links, etc., and it will actually give suggestions on what to do and where to start. The Moz campaign will also show you which keywords are sending traffic to your website and which pages should be optimized from there. Unfortunately, there is no quick fix for SEO, and depending on how much needs to be done, it might be in your best interest to contact an outside professional for a consultation. Good luck!
| HashtagHustler0 -
What to do with removed pages and 404 errors
If they are truly gone, then a 410 would be the best option for you. Since they are indexed, people can still find them based on what they are searching for, even if there are no links pointing at them. You never know when your link will show up, because you don't know how long Google will take to get rid of the links. http://www.checkupdown.com/status/E410.html "The 410 error is primarily intended to assist the task of Web maintenance by notifying the client system that the resource is intentionally unavailable and that the Web server wants remote links to the URL to be removed. Such an event is common for URLs which are effectively dead i.e. were deliberately time-limited or simply orphaned. The Web server has complete discretion as to how long it provides the 410 error before switching to another error such as 404" We did this for a client that needed old, defunct pages removed. Once you set the pages to return a 410 and use Google's URL removal tool, you should see them dropping off really quickly (all of ours were gone within a month). Having that many pages return a 404 may also be hurting your users' experience: when they see a 404, they go right for the back button.
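For reference, a minimal .htaccess sketch of how a 410 can be returned on Apache (the paths below are placeholders, not the asker's actual URLs):

```apache
# Return 410 Gone for individual removed pages (mod_alias)
Redirect gone /old-category/discontinued-product.html
Redirect gone /old-category/another-removed-page.html

# Or send 410 for an entire retired directory (mod_rewrite; the G flag means Gone)
RewriteEngine On
RewriteRule ^old-category/ - [G,L]
```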
| David-Kley0 -
What to do about resellers duplicating content?
There was actually a pretty solid Whiteboard Friday covering a similar topic. The Googles are only 'pretty good' at figuring out which page is the original and which site should be given better placement for the same content, so you're not being silly for being concerned. Think Panda. I honestly think you'll find something you can use in the video, so I won't get too carried away on that subject. Your next problem is buy-in: will your client do what needs to be done? To get the sale, you have to sell. Present it as something that hurts resellers as well as the client - which it likely does, or will, in a material way. If the resellers aren't being seen well enough due to the duplicate content issue, your client is losing out on sales, and the resellers are directly losing potential revenue. So whatever you put together out of this, the wise will heed your words, as they're losing money.
| Travis_Bailey0 -
How do you handle URLs with slashes?
QUESTION 1: Set up a catchall .htaccess 301 redirect to point http:// to https:// (so the age of your pages is transferred to the https:// pages).

QUESTION 2: In short, no. But it wouldn't hurt to make sure you have a canonical tag on all your pages pointing to the preferred URL structure. For example, these two URLs both load the same page (and both work):

http://singlespeedbikes.co/abacabb-2-0
http://singlespeedbikes.co/abacabb-2-0/

But the canonical tag on this page tells Google which URL it should index, to avoid confusion with duplicate content, etc. For example:

<link rel="canonical" href="http://singlespeedbikes.co/abacabb-2-0" />
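For question 1, one commonly used catchall pattern looks roughly like the sketch below - exact rules vary by host and server setup, so treat this as a starting point and test it before relying on it:

```apache
# Force HTTPS for all requests with a 301 (catchall sketch)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```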
| benji10 -
Website view different in Chrome than in Firefox and IE
It looks like there are two issues. The first is that Firefox and Chrome are treating the padding differently: Firefox is padding from the bottom of the navigation text, whereas Chrome is padding from the top of the containing div. The second is that your site appears to be adding hardcoded CSS via JavaScript, which is why you don't see that style in the source code - it's being added by a script executed in the browser after the site loads.
| spencerhjustice0 -
Long list of companies spread out over several pages - duplicate content?
Thanks George, I think I'll take your advice and hold off for now. Aaron
| AaronGro0 -
On-page tool idea. What do you think? I'd like to hear it!
Thanks Jane! Maybe I will just port it to a Chrome plug-in. Thanks for the tip! And I know of and have used the Ayima plug-in!
| DanielMulderNL0 -
.htaccess file uploaded, website won't load
You were almost there, buddy. It looks like you have regex characters that need to be escaped with a backslash (\). Plus, most seem to prefer rewriting index before messing with www. Give this a try. I took a look at the site before the loop, and it doesn't appear to use any CMS per se. If it were a WordPress site, you would want to place the code before the WordPress rewrites. P.S. You don't need the last comment in the example.
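The original poster's rules aren't reproduced here, so purely as an illustration of the two points above (example.com is a placeholder domain), escaping the dots and handling the index rewrite before the www rewrite might look roughly like this:

```apache
RewriteEngine On

# 1. Strip /index.html from direct requests first (note the escaped dots)
RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /index\.html [NC]
RewriteRule ^index\.html$ / [R=301,L]

# 2. Then force www (escaped dot in the host pattern)
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```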
| Travis_Bailey0 -
Will changing my home page cause rankings to drop?
Is the bounce rate on the homepage high? My recommendation would be to leave the store page alone and make the homepage more conversion-friendly. I understand them wanting to get the user one click closer, but that's not really the best solution. Try to mimic the store page on the home page a bit, and keep track of changes and any increases or decreases so you can revert if need be. FYI, look at the links in your left nav here http://www.iboats.com/aboutus.html and also in your global footer: you are nofollowing internal links, which is throwing away precious page juice.
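To illustrate that last point (the markup below is hypothetical, not copied from the site), internal navigation links generally shouldn't carry rel="nofollow":

```html
<!-- Instead of a nofollowed internal link like this: -->
<a href="/aboutus.html" rel="nofollow">About Us</a>
<!-- a plain internal link lets link equity flow normally: -->
<a href="/aboutus.html">About Us</a>
```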
| irvingw0 -
What's wrong with my meta title? It looks different in the SERP.
You can alter your titles and see if that helps, but in the end it's a matter of seeing how Google is changing your titles and then creating a title that's a compromise between what you want and what Google wants. And even in that case, Google might not listen.
| WilliamKammer0 -
Duplicate content issue, across site domains (blogging)
Ketan,

"I'm going to encourage them to publish only fresh content on their real blog, would you agree?"

If you look at pretty much any of the blog posts on these forums, you will see that more and more everything comes back to content. Original content. Original content. Did I mention original content? EGOL shared a link with me and I'm sharing it again on another post, but it's pertinent: http://www.thesempost.com/google-rewrites-quality-rating-guide-seos-need-know/ Content used to be king, and he wants his throne back!

"Can this actually harm the ranking of their blog and website - should we delete the old entries when migrating the blog?"

Duplicate content is something that I am working a lot with right now. There is a difference between duplicate content and plagiarism. What needs to be determined is how they are using this content. Matt Cutts said that roughly 25% of the internet is duplicate content, and a lot of duplicate content is OK. For example, if you were writing an analysis of, or commentary on, one of those blog posts, then of course there is going to be some duplicate content. That sort of thing would be fine. If they are simply taking the article and posting it, yet still giving credit, then no, it's not really doing anything except potentially giving your blog traffic.

Take social bookmarking websites, for example - let's look at digg.com. A long time ago, Digg used to take the entire article and post it to their page, you could view everything within their website, and everything was all gravy. That doesn't work so well anymore. Now, Digg usually writes a little blurb and provides a link. Google will look at duplicate content, determine whichever is the best representation of the content - usually who wrote it first, who has the strongest domain, etc. - and give the credit to them. The other thing these websites (Harvard, NPR, etc.) have in their favor is that they are probably indexed often enough to guarantee that they will get credit first.

As to the question about deleting them: I don't think you necessarily need to delete them, depending on how many articles exist, how much traffic they generate, etc. There is a lot to look at. If nobody is looking at them, then sure, you can always 301 them to one of your new blog posts later down the line. Or maybe your first posts are rewrites or analyses of the articles. Regardless, I would provide a link on all of the pages letting people know where you got the information, so that nobody can say you were trying to steal it.

My thought on this whole thing: if it makes you uncomfortable, it's gonna make Google feel uncomfortable. Hope that helps! Good luck!

Matt Cutts on duplicate content: https://www.youtube.com/watch?v=mQZY7EmjbMA&feature=kp
Matt Cutts on original content: https://www.youtube.com/watch?v=4LsB19wTt0Q
| HashtagHustler0 -
Are three Adsense ads on a long page a negative for search ranking?
If your ads negatively affect the user experience, then yes, they could hurt your page interaction. What you are describing sounds very conservative, and I doubt it affects your page at all. Even if you had more ads, as long as they are out of the way and do not intrude on the user, you should be fine. Ads do not affect your PageRank or position in any way - if that were the case, plenty of sites would be non-existent.
| David-Kley0 -
Image naming best practices?
1. Name the image file after what is actually shown in the picture.
2. Briefly describe what the picture is, or what it means, in the alt text.
3. Make sure your images folder is allowed to be crawled in robots.txt - it would be a shame to go through all that work and have the image folder blocked.
4. Mark up your images with schema: http://schema.org/ImageObject
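As a small sketch of points 1, 2 and 4 (the file name, alt text and caption are invented for illustration):

```html
<!-- Descriptive file name plus a brief alt attribute, wrapped in minimal ImageObject microdata -->
<figure itemscope itemtype="http://schema.org/ImageObject">
  <img itemprop="contentUrl" src="/images/red-tailed-hawk-in-flight.jpg"
       alt="Red-tailed hawk gliding over a desert canyon">
  <figcaption itemprop="caption">Red-tailed hawk gliding over a desert canyon</figcaption>
</figure>
```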
| David-Kley0 -
How do you treat http/https and slashes at the end of a site?
Hi Tiffany,

There is a difference between all of these URLs as far as Google is concerned. If a URL differs from another by even one character, Google considers it to be a different URL. This is true even if the pages each URL loads are exactly the same.

For all of these URLs, you should choose one that you consider to be the "proper" URL. Call this the "canonical" URL - the correct version. There is no gold standard for which versions you should choose, besides one, which I'll get into later: https://www.abc.com/ is realistically not better or worse than http://abc.com/. However, you might have a good reason why the entire site should be on HTTPS URLs, i.e. on secured as opposed to unsecured URLs. Some people choose not to use the "version" of their site that loads with "www" - again, there is no benefit or detriment either way.

For the abc.com/blog/ example, the general rule is that **if more content sits beneath the /blog/ subfolder, the URL should have a trailing slash.** If "/blog" is just a page with nothing housed beneath it (i.e. there are no pages like www.abc.com/blog/2014/post.html), then you can leave the trailing slash off if you like.

No matter which versions you choose, all alternative versions should be 301 redirected to the canonical version (the one you chose as your preference). If you choose http://www.abc.com/ and someone types in https://abc.com/, they should be 301 redirected to http://www.abc.com/.

The other option is to place the canonical tag on each "alternative" version, pointing to the canonical URL of that page. This means that https://abc.com/, etc. still load, but the canonical tag tells Google that the primary version is not on this URL, but on the one you specify in the tag. This is quite easy to do: each URL will be pulling its content from the same file (that is, there are not usually two files for the home page, one populating www.abc.com and one populating http://abc.com - it is the same file being displayed on different URLs). As such, that one file needs to have the canonical tag indicating your desired canonical URL. Each page requires its own canonical tag, indicating the desired URL. 301 redirection to the canonical URLs is the traditional way of getting this done.

Cheers, Jane
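A sketch of the 301 approach described above, assuming https://www.abc.com/ is picked as the canonical host (swap in whichever version you actually choose; exact rules depend on your server setup):

```apache
# Redirect HTTP and non-www variants to the canonical https://www.abc.com/
RewriteEngine On
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.abc.com/$1 [R=301,L]
```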
| JaneCopland0