Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Intermediate & Advanced SEO

Looking to level up your SEO techniques? Chat through more advanced approaches.


  • You may find this helpful: https://www.youtube.com/watch?v=hy3_Rjc0Tso I suppose you could get around it by putting the content in an image, or presenting it in some other way that Googlebot wouldn't read as duplicate content, but it's iffy. Alternatively, don't copy the content at all: just reference it with a link. Then you don't have the duplicate content problem, but users can still see the content.

    | GPainter
    0

  • Hi there, Realistically, the tag should be used for duplicates, yes. How "duplicated" a page is, is subjective: a page with 50% of the same content as another page is probably going to count as duplicated as far as Google goes, but where Google draws the line on acceptable duplication isn't something any of us really knows. For pages where the content is totally different besides the header and footer, you technically shouldn't use canonicalisation.

    However, experiments have shown that Google honours the tag even if the pages aren't duplicates. Dr. Pete did an experiment when the tag came out (admittedly a few years ago) where he showed that you could radically reduce the number of pages Google had indexed for a site by canonicalising everything to the home page. I personally had a client do this by accident a couple of years ago, and sure enough, their number of indexed pages dropped very quickly, along with all the rankings those pages had. For an ecommerce site that was ranking for clothing terms, this was very, very bad. It took about six weeks to get those rankings back after we fixed the tags, and the tags were fixed within about five days (it should have been quicker, but our urgent request went into a dev queue).

    So the answer would be that Google seems to honour the tag no matter the content of the pages, but I am pretty sure that if you asked a Googler, they'd tell you it should only be used for dupes or near-dupes.
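
    For anyone unfamiliar with the tag itself, it sits in the head of the duplicate page and points at the version you want indexed. A generic example (the URL here is just a placeholder):

        <!-- on the duplicate page -->
        <link rel="canonical" href="https://www.example.com/original-page/" />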

    | JaneCopland
    0

  • Index status, PageRank, DA, MozTrust, social engagement, Alexa, etc. There are tons of possible variables, but the standard for a "good" site will vary by industry. Honestly, you are better off manually looking through all of the links and determining whether a site is 1) indexed in Google, 2) not filled with spam, and 3) related to your website (that goes for directories and low-DA sites as well). Rule of thumb: if you think it might be a bad link, it probably is.

    | OlegKorneitchouk
    0

  • We recently changed our domain name to match our brand. I read everything on Google and Bing Webmaster Tools about domain moves. I redirected (301) each page individually to the matching new page. (OK, we moved to WordPress from HTML/CSS and used the HTML 2 Import plugin, so there was a lovely list of all the redirects for the webmaster to install on the old website.) One mistake I made was putting up a placeholder page for the old homepage informing people of the move; I corrected that and 301'd the old home page to the new home page, which I should have done straight away. Then I used the Site Address Change tools in Google and Bing Webmaster Tools to notify them of the move. Ours is a smaller site (~800 pages), and site traffic moved with it. I'm checking links now to ask external sites to update our info. These articles helped, but there are lots more if you search for "Google Webmaster domain move": https://support.google.com/webmasters/answer/83106?hl=en https://support.google.com/webmasters/topic/6033102?hl=en&ref_topic=6029673
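
    For anyone reading along, a sketch of what per-page 301s can look like in an Apache .htaccess file on the old domain (the page names and newdomain.com are placeholders here; other servers and WordPress redirect plugins have their own equivalents):

        RewriteEngine On
        # Specific pages first; the [L] flag stops processing at the first match
        RewriteRule ^about\.html$ https://www.newdomain.com/about/ [R=301,L]
        RewriteRule ^contact\.html$ https://www.newdomain.com/contact/ [R=301,L]
        # Catch-all for anything not matched above
        RewriteRule ^(.*)$ https://www.newdomain.com/$1 [R=301,L]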

    | zharriet
    0

  • Do three things: a) add the meta noindex, b) add the search section you don't want crawled to your robots.txt file, and c) request URL removal in Webmaster Tools. Doing the first two steps will ensure that the pages remain unindexed after you remove them in WMT.
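
    As a concrete sketch, (a) is a tag in the head of each affected page and (b) is a robots.txt entry (assuming the section lives under /search/; adjust the path to your own). One caveat worth knowing: once a path is disallowed in robots.txt, crawlers can no longer fetch those pages to see the noindex tag, which is why the removal request in (c) matters.

        <!-- (a) on each page you want kept out of the index -->
        <meta name="robots" content="noindex">

        # (b) in robots.txt
        User-agent: *
        Disallow: /search/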

    | irvingw
    0

  • Hello Alan, Jane got you a lot of good answers below, and I agree with her: Google is going to discover links at different times as many different sites get crawled. Your website's ranking can improve at any time as Google discovers the new links you build or acquire. We see clients' site rankings go up consistently all year long, not just once or twice per year during an update; like Jane said, that's when penalties are being discovered and given out. Your website will benefit from new links whenever they are discovered.

    Regarding building links: social media shares and likes are not a replacement for link building but an addition to that work (Bit.ly is fine). If you have time and know what to do, you could certainly get those links yourself; my clients just don't have time and are unsure how or where to get links. If you hire a good company, they should be able to make a big impact on a site with a relatively low DA/PA within a few months for sure. The higher your scores, the more work it takes to improve them: it's easier to go from a PA of 15 to 25 than to go from a PA of 35 to 45.

    Of course, high-quality content is the first step; the next step is to get that content out to readers and followers so they can share the content they like. I would share your content on the social media platforms you use and work on building subscribers and an email list. You could send out a market analysis of sales in your area by price range, or average days on market until sold. Put something useful together. You could also ask questions of your clients, a sort of verbal survey, to find out what kind of info they would like to see. Then put up an opt-in offer: "Sign up with your email here and get our FREE market report: real estate mistakes to avoid!" Give out some valuable info, help them out, and build a loyal following.

    You should also do some research on your top-ranking competitors and see where their highest-PA backlinks are coming from, then go after those links first; that should get you faster results. Of course, research those links and make sure they are quality before you go after them. For Twitter, how about using a more specific term like #nycofficespace? All the best, Joe

    | jlane9
    0

  • Thank you very much. Reading your answer is giving me kind of a "duh" moment. I think if I were looking at this situation from the outside it would be a different story. I definitely am overthinking this. Thanks again!

    | ThridHour
    0

  • From the user perspective, they really only know about "their site" and not all of the other sites that are out there. I know I'm being vague (confidentiality reasons), but think of it as SaaS, where your company can buy access to content and testing material that is only available to your employees. Another company may get access to the same material, but only through a URL that's customized to them. My basic thinking is that if the content is not unique, it's password protected, and the user's experience is relatively solid (i.e., they don't stumble around the URL structure), then possibly the only way a consolidated URL structure could add value to search would be through some sort of content strategy to increase the publicly available content on all of these sites. And even then, we wouldn't really have an inbound linking strategy for these pages, as they are really client/employee specific... so we'd be back to producing content for the corporate presence rather than for these individual sites. Okay, now I'm rambling, but I'd love a general nod if my logic is at least sound. I'd happily take any challenges to my thought process as well! Thanks!

    | trideagroup
    0

  • Hi Dan, There are two ways to handle pagination in Google's eyes:

    1. Use a "view-all" page, whereby all paginated pages (?pg=x) have a canonical to the page that has all the results on it: http://googlewebmastercentral.blogspot.co.uk/2011/09/view-all-in-search-results.html

    2. Use rel=next and rel=prev between the paginated pages; here there is no single page that has all the results on it: http://googlewebmastercentral.blogspot.co.uk/2011/09/pagination-with-relnext-and-relprev.html

    Each has its own merits, and the choice usually depends on page performance (for the first option) vs. time to implement (for the second). If you have lots of categories and products and offer list sorting in the URL plus faceted navigation, you run the risk of submitting a ton of URLs to Google, so the clean-up you're doing sounds like a good approach. Finally, in WMT I presume you have the pg parameter configured as "paginates" for the "effect". Hope that helps, George
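
    For reference, the tags for each option look roughly like this (the URLs are placeholders; the option 2 tags are what would sit in the head of page 2 of a series):

        <!-- Option 1: every paginated page points at the view-all page -->
        <link rel="canonical" href="http://www.example.com/category/view-all" />

        <!-- Option 2: on /category?pg=2 -->
        <link rel="prev" href="http://www.example.com/category?pg=1" />
        <link rel="next" href="http://www.example.com/category?pg=3" />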

    | webmethod
    0

  • When you add a canonical, you're pretty much saying this page is a duplicate of the other page, which is fine. This normally means link juice flows to the page you said was the original. Another option is just linking to the original content. Matt Cutts has mentioned that duplicate content isn't the worst enemy out there, and as long as it's not spammy it's not going to single-handedly bring you down. Take a read: http://searchenginewatch.com/article/2319706/Googles-Matt-Cutts-A-Little-Duplicate-Content-Wont-Hurt-Your-Rankings So if the content is there legitimately and it's there to help the users of your site, just link to the original article. Obviously, if you're doing this across multiple pages you might want to rethink it a bit, but for one or two pages you should be fine.

    | GPainter
    0

  • Hi Andy, all pages are indexed. I have set up a new alert for this one, so it will be interesting to see when it pops up! Thanks for the advice, Ash

    | AshShep1
    0

  • Looks like I can only do the first thousand. It's a start, though. Thank you for the information. Many of the URLs on my list, when put into Google search, are giving me 80-100 other variants I can remove by hand. See http://www.mathewporter.co.uk/list-a-domains-indexed-pages-in-google-docs/ for anyone else following along.

    | sparrowdog
    0

  • I think this Whiteboard Friday may help you out a little: http://moz.com/blog/how-google-knows-what-sites-you-control-and-why-it-matters-whiteboard-friday I would just start creating fresh content on the old site and wait for natural organic links to grow. The traffic will come back over time.

    | DJ123
    0

  • You are correct: generally, Google will index when it makes sense for them to do so per their algorithms. In my experience it is not driven by the timestamp on the XML sitemap; it could be loosely correlated, but not hugely. You should set up 301 redirects on the auctions that have ended. Don't 404 them; that is bad. Redirect them to a page for new related auctions (this will probably require some code to be whipped up). You could try to use Webmaster Tools' Fetch as Google to get the pages indexed; however, this is manual, there's no guarantee that Google will index them, and you only have so many requests available. Spend some more time on getting organic links via articles and other content in the niche. This will help on a number of levels, not only in ranking but also in attracting traffic from people interested in the product/market.
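
    As a rough sketch of the redirect side (the URLs here are invented for illustration), each ended auction would 301 to a relevant live page:

        # .htaccess on Apache: auction 12345 has ended, so send visitors
        # (and link juice) to the related category of live auctions
        Redirect 301 /auction/12345 /auctions/collectibles/

    In practice, the code to be "whipped up" would generate one such rule per ended auction from the database, or simply issue the 301 from application code whenever an ended auction is requested.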

    | DJ123
    0

  • Yep, this is the purpose of the tag, and by all accounts it works well. It has been supported since late 2011, I believe. I've neither had bad experiences using it nor heard stories of it not working properly.

    | JaneCopland
    0

  • Hi Marie: Thanks for your quite detailed response to my question. Some of the possibilities you mention probably don't apply, for the following reasons: 1. None of the URLs changed, so it cannot be that. 2. Page titles did not change, so it's not that. 3. As for unnatural links, these have existed for several years. In fact, we succeeded in getting 28 out of 100 removed and made a disavow request to Google for the other 80 toxic links. While the link profile is weak, it is not worse than it was before.

    When the upgrade was launched in early June, WordPress was upgraded to the latest version. I wonder whether some issue developed at that time with robots.txt or noindex. I find it very curious that after a removal request was made for the 175 URLs in June, the number of indexed pages went down for a few days and is now back to 851. My developer may be a little shy about accepting responsibility for this issue. Is there any source, a guru of sorts, who could check the WordPress installation to see if it is the reason the 175 URLs that should not be there are appearing on Google? Some way to eliminate any doubt about what is causing this issue? Thanks, Alan

    | Kingalan1
    0

  • Just one more note: you have a double redirect, and you should write your rewrite rules to avoid it, since each extra redirect causes some loss of page juice. (Note too that the second hop is a 302, not a 301.)

    1. Checked link: http://www.opiates.com/
       Type of redirect: 301 Moved Permanently
       Redirected to: http://opiates.com/

    2. Checked link: http://opiates.com/
       Type of redirect: 302 Found
       Redirected to: https://opiates.com/
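
    A sketch of how the chain could collapse into a single hop with Apache mod_rewrite (assuming that's what the site runs; other servers have equivalent rules):

        RewriteEngine On
        # Any request that is on www or not on HTTPS goes straight
        # to https://opiates.com/... in one 301
        RewriteCond %{HTTP_HOST} ^www\. [NC,OR]
        RewriteCond %{HTTPS} off
        RewriteRule ^ https://opiates.com%{REQUEST_URI} [R=301,L]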

    | irvingw
    0

  • Copyright protection, possible SEO benefits (depending on the description), technical data (if you see a shot you like and want to know how and with what it was taken), and size parameters (for when people search for certain sizes).

    | David-Kley
    0

  • Take a look at the image index rate, and see how many are indexed and showing. 20k images is quite a bit, and I would want to see whether Google actually indexed them all before I started creating redirects for that many items. If they are indexed, see if there is a way on your new site's system to keep the images named the same. That way you would only be creating a redirect for the image paths, and not the images themselves. Example:

        oldsite.com/image/25/coolpicture.jpg
        newsite.com/images/awesome/25/coolpicture.jpg

    All that would be needed is an .htaccess rule redirecting from one directory to the other, using the image name as a variable. You could probably redirect most of them with only a few rules rather than 20,000 separate redirects. Is it worth it to redirect? I think that depends on how many are indexed and whether the images bring you a significant amount of traffic.
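
    A sketch of that single-rule approach in the old site's .htaccess, using the example paths above (this assumes Apache mod_rewrite and that only the directory prefix changes between sites):

        RewriteEngine On
        # Capture the image ID and filename and reuse them in the new path
        RewriteRule ^image/([0-9]+)/(.+)$ https://newsite.com/images/awesome/$1/$2 [R=301,L]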

    | David-Kley
    0