Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • A few things you can try:
    - Google the URL. If it doesn't come up, there's a strong possibility it was de-indexed and has a penalty.
    - Use a service like Link Detox, which will give you a rough idea of what it thinks is nasty.
    - Majestic SEO has a neat tool for finding out what it thinks of sites: https://www.majesticseo.com/reports/neighbourhood-checker
    - Similar to the above, http://spyonweb.com/ can be handy for working out link wheels.
    - Look into the stats of the site, e.g. trust flow, authority etc. I recommend tools like Open Site Explorer, Majestic SEO or Ahrefs.
    Research is the key and you can dig pretty deep. Hope some of those help, but as to what Google thinks, you're still going to have to figure that out on your own. Good luck.

    | GPainter
    0

  • Irving, thanks once again for your thoughtful response.

    | ccbamatx
    1

  • You want your sitemap to include all your important URLs. Don't remove them from the sitemap just because they have been crawled.
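
    For reference, each important URL simply gets its own entry in the XML sitemap. A minimal sketch (example.com and the date are placeholders):

      <?xml version="1.0" encoding="UTF-8"?>
      <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
        <url>
          <loc>http://www.example.com/important-page/</loc>
          <lastmod>2014-06-01</lastmod>
        </url>
      </urlset>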

    | KeriMorgret
    0

  • You're welcome. Interesting question. My answer is that if the HTML title is set with client-side JavaScript, it has little chance of being picked up as the title by crawlers or Google.

    Say we alter the value of the title node in the DOM (first approach in the sketch below). In that case the value is only altered after the hard-coded HTML title has already been sent to the browser. A crawler would need to load the document in full and read the HTML title value only after fully rendering it, as if it were a human user. This is not likely.

    We could also try a document.write that constructs the HTML head title as a string for the browser (second approach in the sketch below). That will not help either. The problem is not that the title is never set, but that the evaluated string is never printed into the HTML that is served: the script still sets the document object model's title node to the value it evaluates to, but the source code that any browser (or crawler) sees contains the script, not the resulting title string.

    Try it for yourself with this dummy page I made just to be certain: http://www.googlewiki.nl/test/seojavascripttest2.html And this is the DOM info for that page: http://www.googlewiki.nl/seo-checker/testanchor.php?url=http://www.googlewiki.nl/test/seojavascripttest2.html&anchor=test

    Or am I missing something here? Hope this helps.
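
    The answer's original example snippets did not come through, so here is a rough reconstruction of the two approaches it describes (the titles and markup are made up for illustration):

      <!-- Hard-coded title, which is what crawlers see in the served HTML -->
      <title>Original hard-coded title</title>

      <!-- Approach 1: alter the DOM title node after the page is delivered -->
      <script>
        document.title = "Title set by JavaScript";
      </script>

      <!-- Approach 2: document.write the title tag as a string -->
      <script>
        document.write("<title>Title written by JavaScript</title>");
      </script>

    In both cases the browser displays the new title, but view-source (and a non-rendering crawler) still only sees the hard-coded value and the script text.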

    | DanielMulderNL
    0

  • Yes, unless you have a compelling reason to change them. By changing the names you are just creating more work for yourself. Was there a reason you wanted to name them something different? Changing the name of the sitemap (if you choose to do so) is not going to have an effect on already-indexed URLs. I don't see any reason to modify the names unless you are trying to organize them better.

    | David-Kley
    0

  • Google has stated that duplicate content will be penalized if it is deemed that the content is meant to manipulate search results: https://support.google.com/webmasters/answer/66359

    "In the rare cases in which Google perceives that duplicate content may be shown with intent to manipulate our rankings and deceive our users, we'll also make appropriate adjustments in the indexing and ranking of the sites involved. As a result, the ranking of the site may suffer, or the site might be removed entirely from the Google index, in which case it will no longer appear in search results."

    If two pages both have the same content, then even with a canonical URL Google will choose which one to index, most likely the newer version of that page. "Google tries hard to index and show pages with distinct information. This filtering means, for instance, that if your site has a "regular" and "printer" version of each article, and neither of these is blocked with a noindex meta tag, we'll choose one of them to list."

    I would not recommend having duplicate content on your site if it can be avoided. If it can't, set the page you don't want indexed to "noindex".
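
    For reference, both tags being discussed go in the head of the page (the URL here is a placeholder):

      <!-- Point duplicate versions at the preferred URL -->
      <link rel="canonical" href="http://www.example.com/preferred-page/" />

      <!-- Or keep a duplicate out of the index entirely -->
      <meta name="robots" content="noindex, follow" />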

    | David-Kley
    0

  • If you are sold on PrestaShop and are sure you have the URLs taken care of, then that is pretty much all you can do in that regard. I still think you should look deeply at the links after the site is done, and possibly do all of this on a dev server to make sure everything copies over as well as you hope. I am not super familiar with PrestaShop, so I am hesitant to give much specific advice on it; some of my concerns come from that. I personally use Magento if the client is sold on an eCommerce-based CMS. It is the largest, most SEO-friendly, and the most protected when it comes to plugins etc. Plus eBay owns it now, so you have that behind it.

    Regardless, I don't know if I really even have "10 things" for you, but:
    - Check links before and after to make sure everything matches up.
    - TEST, TEST, TEST. Try to break your website on a dev server. Someone else will if you don't, so find the weak points before somebody else does.
    - Check plugins. Plugins are one of the best ways to ruin a website. One update, or lack of an update, can throw an entire website offline.
    - Accept that any change you make will have an effect. SEO is just like Newton's laws: every action has an equal and opposite reaction.

    In the long run, switching to a CMS could be extremely beneficial, but at first it very well may hurt you. I would highly doubt that it would drastically impact your rankings, but in my opinion you cannot make a change in this world without some sort of ripple being felt somewhere else. Good luck.

    | HashtagHustler
    0

  • I would not recommend putting anything on your site that Google can see but your customers cannot. From https://support.google.com/webmasters/answer/66353?hl=en :

    Hiding text or links in your content to manipulate Google's search rankings can be seen as deceptive and is a violation of Google's Webmaster Guidelines. Text (such as excessive keywords) can be hidden in several ways, including:
    - Using white text on a white background
    - Locating text behind an image
    - Using CSS to position text off-screen
    - Setting the font size to 0
    - Hiding a link by only linking one small character (for example, a hyphen in the middle of a paragraph)
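
    Purely so you can recognize it, the "CSS to position text off-screen" technique from that list typically looks something like this (the class name and text are made up); this is the kind of thing to avoid:

      .offscreen-keywords { position: absolute; left: -9999px; }

      <div class="offscreen-keywords">blue widgets cheap blue widgets best blue widgets</div>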

    | EcommerceSite
    0

  • I did exactly what you said, but now I am running into the issue I posted above. Yeah, Google Webmaster Tools is where I found the errors. When I implemented Schema it showed about 35,000 items and about 9,000 errors. I came in today and now my errors are almost gone, down to 600, but my total items dropped to 1,600. I checked to see if any other code was removed and it wasn't, and I ran the Bing Webmaster Tools validator and it shows everything is correct. Why that drop?

    | EcommerceSite
    0

  • Add this to your .htaccess file (remove the .txt extension from the file in order to use it):

      # Remove index.php or index.htm/html from URL requests
      RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /(([^/]+/)*)index.(php|html?)\ HTTP/
      RewriteCond %{REQUEST_URI} !^/administrator
      RewriteRule ^([^/]+/)*index.(html?|php)$ http://your_site_URL/$1 [R=301,L]

    Obviously change your_site_URL in http://your_site_URL/$1 to your own domain. Also remove the # before RewriteEngine On to make these changes work.

    | QGS77
    0

  • Thank you!

    | Silkstream
    0

  • Hello Nick,

    It sounds like you're talking about category pages that link to product pages. I do recommend putting unique content on category pages, as they target different phases of a buying cycle (and different keywords). For example, "Blue Widgets" is a top-level category search, "Blue Widgets for Men" is a category search, and "Men's V9 Blue Widget" (note the singular use of widget) is most likely a product search. All three page types should have their own unique content, using keywords specific to that stage of the buying cycle and displaying content that is useful to that stage. For example:

    - "Blue Widgets": Explain how to choose a blue widget. Which sub-category should they check out? Help them decide where to go next and narrow their search.
    - "Men's Blue Widgets": Explain the difference between brands and perhaps leading models. What are their needs and budget? Which are the high-end, top-of-the-line men's blue widgets, and which are affordable and reliable options, or entry-level widgets? Educate them.
    - "Men's V9 Blue Widget": Explain the product. What are the features that either set it apart or make it a good value for the money?

    The linking strategy can work this way as well. Naturally you're going to be passing PageRank from category pages into sub-categories and into products; this happens on most sites because that is the natural flow of a purchasing path and a logical taxonomy. However, you should also be passing PageRank back up if you want to help category pages rank. Enhanced product descriptions give you an opportunity to do this. Good luck!

    | Everett
    0

  • Sure, you can absolutely create brand pages and branch from there to product pages. A bigger site is more effective, and easier to maintain, promote and analyze, than multiple small sites.

    | irvingw
    0

  • Thank you for the reply. Trouble is, I'm not sure that I can convince the web designers to implement a change like that. Really appreciate the help. Thanks.

    | CKerr
    0

  • I don't think you're looking at a penalty situation, if that's what you are asking. It seems perfectly legitimate. The more interesting question to me is how Google will "weigh" the hidden content in its algorithm. I suspect that anything that is hidden by JavaScript (or another method) will hold less weight than text in plain sight. You could try Google's new "Fetch and Render" tool in Webmaster Tools to see how Google views the page. Anything that doesn't display might not get as much consideration as plain text. Of course, this is a lot of speculation. We don't really know for sure how Google treats text like this, but it's a pretty common situation.
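
    For illustration, the kind of JavaScript-hidden content being discussed usually looks something like this (the id and copy are made up): the text is present in the HTML source but not displayed until the visitor clicks.

      <div id="more-info" style="display:none;">
        Longer descriptive text that is in the source but hidden on page load.
      </div>
      <a href="#" onclick="document.getElementById('more-info').style.display='block'; return false;">Read more</a>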

    | Cyrus-Shepard
    0

  • I think an overall better solution would be to work on developing a strong enough website that the site is given sitelinks, indicating a stronger brand that gets extra real estate at the top of the SERP. With subdomains, you might be able to rank second or third as well, and I've definitely seen this happen (although I can't replicate it right now). However, Google usually tries to show relatively diverse results, even for company names. You might have trouble ranking content from the same domain, even presented as subdomains, in both top positions. It's more common for Google to pick a social profile, a separate company resource and perhaps a review site for the top results, rather than listing link after link from the same root domain.

    | JaneCopland
    0

  • SEO student is correct about where manual or partial manual penalties will appear. As to your real problem, I do not think it is a manual penalty. I think you have muddled things by trying to build the new site on top of the old one while it is still live. Is there a reason you did not build the new site on a dev domain, e.g. OliversDevDomain.com/clientsite? Get the site where you want it and then simply repoint the domain. Most likely you have 'confused' things with the search engines by having duplicate content, and/or you have redirects that are 404ing, etc. Have you run the site through Screaming Frog or similar software to see what you have? That would be my first move; then I would change how I was rebuilding. If the rebuild is complete, you no longer have duplicate content, and the URL structure is unchanged, you will likely pop back into the rankings fairly soon. I would be disingenuous if I did not say that, given the approach to the redesign, it is likely there are other issues. If you have a baseline crawl of the old site to compare to the new one, that would be helpful. Good luck with this, Robert

    | RobertFisher
    0

  • "So instead I took some advice and changed them to 200, but with a "noindex" meta tag and set them to not render any content. I get less errors but I now have a lot of pages that do this." I would not recommend keeping it that way. You could mass redirect them to the sitemap page if they are passing PR and or some traffic, and there is no logical other place to point them. 404's are not really something that can hurt you, providing that they are coming from external sources and you aren't providing 404 links on your site to dead pages on your site, if there are these, then you should fix the internal links at the source.

    | irvingw
    0

  • What questions do clients and potential clients have about offices? If they called your company, would you be able to tell them more than 400 words over the phone? Try putting some of that information on your page.

    | KeriMorgret
    0

  • Take a look at the pages that are indexed. Chances are that since it is a cart or CMS-based site, you just need to use robots.txt to block out some areas you don't want indexed. You also need to look at your indexed pages to see if any of them are duplicates, meaning you have two or more URLs that display the same content.

    "It's an ecommerce site but I can't see any issues with duplicate content - they employ a very good canonical tag strategy. Could it be that Google has decided to ignore the canonical tag?"

    It could be that your CMS or cart is not forwarding all the pages to the canonical version. Again, check to see if you can access multiple versions of the same page. Ecommerce and CMS sites always have these types of errors if you don't keep a close eye on the URLs, since they are database-driven rather than static HTML. Look for www and non-www versions of pages, URLs with and without index.php, etc. Once you identify the offending URLs, use redirects to forward them to the proper, search-engine-friendly version.
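
    For example, a robots.txt along these lines blocks the typical cart and account areas (the paths are placeholders; use the ones your platform actually generates):

      User-agent: *
      Disallow: /cart/
      Disallow: /checkout/
      Disallow: /customer/account/
      Disallow: /*?sort=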

    | David-Kley
    0