Are you taking advantage of rel="alternate" hreflang="x"?
Posts made by FedeEinhorn
-
RE: Google adding strings to meta title in SERPs => Driving my client crazy!
-
RE: Creative Commons Images Good for SEO?
It won't make your site look spammy if the content you are publishing isn't spam. CC images require you to link back to the original source; you can even use a nofollow attribute on those links.
But still, as the images are not yours, you won't benefit from image search, as Google will list the original image posted by the author instead of yours.
There are also royalty-free stock photos you can use, and they aren't that expensive if you are on a subscription. Fotolia, for example, offers a subscription of 5 images at $25 per month. You can also download a lower-resolution version, which deducts only half a credit, so you get 10 images instead. Most likely you don't need the full-credit one, as the half-credit version is large enough.
PS: Here's a post from Ann Smarty about how to use CC images from flickr: http://www.seosmarty.com/flickr-creative-commons/
Hope that helps!
-
RE: Suggestions on Website Recovery
It is widely known that manual penalties do expire. If you fixed the issues and cleaned up the backlink profile (disavowing the links that were impossible to remove), then perhaps the penalty simply expired, and since you are no longer in violation of Google's quality guidelines, you haven't received it again.
If, on the other hand, the penalty expired and you didn't do the cleanup, then most likely it will come back to bite you in the a***.
I always heard that penalties expire, but no one could say how long it takes. I think you are the first one who can verify that penalties expire and don't come back as long as the issue has been fixed.

Anyway, after a penalty is revoked/expires, it will take some time, probably a month or more, to see the changes.
From my point of view, you are headed in the right direction: building content to earn backlinks is the way to go.
Hope that helps!
-
RE: What is the best practice for linking between own sites in various countries
From my point of view, there's nothing you need to worry about. Do the sites offer different content for each country? If so, you don't have to worry about those links. You should also add rel="alternate" hreflang="x" annotations to all your pages, pointing to the corresponding page on the site for each other language/location, as explained here: https://support.google.com/webmasters/answer/189077?hl=en
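As a rough sketch (the domains here are made-up examples), the annotations in the <head> of each page look like this:

```html
<!-- On every language version of the page; example.com / example.de are placeholders -->
<link rel="alternate" hreflang="en" href="http://www.example.com/page.html" />
<link rel="alternate" hreflang="de" href="http://www.example.de/seite.html" />
<link rel="alternate" hreflang="x-default" href="http://www.example.com/" />
```

Each site carries the same full set of tags, including one pointing to itself.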
If you want, you can also add a rel="nofollow" to the links that point to the other countries' sites; that's up to you, as I don't think those links could lead to any penalty.
Hope that helps!
-
RE: Best way to noindex long dynamic urls?
If you have a page that lists all the villas outside the search results, then you don't lose anything by blocking that folder in robots.txt.
In any case, whoever wrote the custom theme knows how to make the changes needed.
If you want I can help you with it, for free
Just PM me (I'll need FTP access).
-
RE: Best way to noindex long dynamic urls?
If you have a /all-villas/ page, then you should go ahead and keep the search results out of the index, as Google's guidelines suggest. You can do it either with a meta tag on the /property-search-page/ pages or via the robots.txt file.
In robots.txt, add:
User-agent: *
Disallow: /property-search-page/
The robots.txt method stops crawling of everything inside that folder (including /property-search-page/?whatever). Note that a blocked URL can still appear in results as a bare link if it's linked from elsewhere; it just won't be crawled.
Or, on the /property-search-page/ pages, you can add a robots noindex meta tag.
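The standard tag, which goes in the <head> of the search-result pages, is:

```html
<meta name="robots" content="noindex">
```

Note that for this tag to be seen, the page must not also be blocked in robots.txt; pick one method or the other.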
Then check that the meta tag shows up on all the search-result pages (just spot-check a couple of them).
Hope that works!
-
RE: SEMRush Ads Traffic Price VS their PDF report
No problem. Keep checking SEMRush over the next few days to see if those numbers change.
But for now, you can report to your customer that Nest wasn't doing any noticeable advertising before the acquisition. Apparently they are now, but those SEMRush stats are just estimates based on public metrics; no one outside Google or Nest can give you an exact figure.
-
RE: Best way to noindex long dynamic urls?
Well, that makes it a little easier on one side and harder on the other.
You can try installing SEO by Yoast, which will add the canonical tags for you; however, I don't think it will point the search-result pages to the canonical page that lists them all.
That might require a little coding.
If there's another page outside the /property-search-page/ folder that lists all the villas, then you can disallow that folder in robots.txt, and that should fix it. If there isn't, you will need to edit the /property-search-page/ page to use a static canonical tag pointing to the version of the page that lists all the villas with no filtering applied.
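As a sketch (the URL is a made-up example), the static tag in the <head> of /property-search-page/ and all its filtered variations would be:

```html
<link rel="canonical" href="http://www.example.com/all-villas/" />
```

That way every /property-search-page/?whatever variation consolidates to the one page listing all the villas.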
Hope that helps!
-
RE: SEMRush Ads Traffic Price VS their PDF report
Well, I checked SEMRush and they do in fact report $2K on ads, but it seems they have no historical data. So either they are just catching up with the data, or Nest only just started their online ads. In any case, as the stats are still pouring in, I would wait a few more weeks before "trusting" what SEMRush reports. I'm also sure (like you) that after that acquisition they are spending much more than $11K a month on ads.
Hold on for a week or two and see if there are any changes.
-
RE: SEMRush Ads Traffic Price VS their PDF report
I guess you are talking about Nest?
-
RE: What do you think about this links? Toxic or don't? disavow?
Hey,
I tested LinkDetox myself while trying to remove our manual penalty. I paid for several credits, and every time I used them I received a rejection from Google after sending the reconsideration request (identifying the links, sending emails to have them removed, re-checking whether they were still there, disavowing). All the links LinkDetox suggested were removed, and still the penalty wasn't lifted.
Then we took another approach: we downloaded our linking domains from GWT and, using a text editor (Dreamweaver in our case), added "domain:" in front of ALL of them. Then we went over them one by one, identified the good ones, and removed those from the list; we ended up disavowing about 80% of the domains. We uploaded the disavow file and a few minutes later sent a reconsideration request. A week later, the penalty was revoked.
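For reference, the disavow file is just a plain-text list, one entry per line; lines starting with # are comments, and the domains below are made-up examples:

```
# Contacted the webmaster twice, no response
domain:spammy-directory-example.com
domain:stats-widget-example.net
# A single bad page instead of the whole domain
http://blog-example.com/some-bad-link-page.html
```

The domain: prefix disavows every link from that domain, subdomains included, which is why we later switched to listing only the specific subdomains we wanted disavowed.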
Now that our penalty is finally revoked, we can still go over the links in the disavow file one by one, and if we find that a link was in fact worth having, or is no longer there, we remove that domain from the list. We also check for subdomains: in our case we had around 1,000 links from wordpress.com blogs, spread across about 5 subdomains. In the first disavow file (the one that worked) we disavowed the entire wordpress.com domain; later we went over all the blogs to see which ones were worth having, removed the root domain (wordpress.com), and instead added each subdomain (each blog) we wanted disavowed.
Conclusion: nobody knows the real value of a link just by looking at metrics the way LinkDetox does. You know the value, so go over the links yourself and disavow all those you think have no value; for example, those stats or "website worth" links are useless, so go ahead and disavow them all.
Hope that helps!
-
RE: Best way to noindex long dynamic urls?
I wouldn't put a noindex meta tag on them; instead, I would consider using a canonical tag pointing to the page that lists all the villas.
Anyway, what programming language are you using?
-
RE: Analytics reauthorisation
It seems to be an issue with your account, as I just tested and wasn't able to reproduce the error. I recommend you contact Moz at help@moz.com.
-
RE: Does this popup get crawled?
Yes, those "pop-ups" are crawled. They are typically lightboxes, which is the name of that kind of "pop-up": the content is already in the page's HTML and just shown in an overlay, and search engines are smart enough to interpret that just fine.
-
RE: Colons in title tag?
Colons are seen by search engines as what they are: you say something, a word, then a colon, and then comes an explanation or enumeration.
In your example you got it right; just move the colon to where it belongs, right after the last letter of the brand, so it reads: GENERAL ALTIMAX ARCTIC: 225/45R17 91Q
The idea you mentioned, building titles for users, not for engines, is the way to go. However, there are some tweaks you can make to make it easier for both.
As in your example, the title could become: GENERAL TIRE: ALTIMAX ARCTIC 225/45R17 91Q - YOURSITENAME (personally I would put the colon right after the brand, then the rest of the product name, and end with " - YOURSITENAME" to help build YOUR brand).
Hope that helps!
-
RE: Blog Marketing
Keri,
There are some excellent profiles there that can share your content. Unlike other "services" where people are paid to share, the people sharing the posts actually read them and find them interesting enough to share. Of course, some click each and every share button just to get the credits, but as viralcontentbuzz "pays" according to how popular you are based on metrics (from Klout, I think), those that share just for the credits can't do you any harm.
I've used them for several posts, and some ended up with tweets from high-level profiles that got retweets, favs, etc.
I am totally against their paid version, as it could become a pay-to-share scheme, which ends up as a way of spamming the networks; but still, high-end profiles won't share everything just for the points, as they have a "reputation" to maintain.
-
RE: Blog Marketing
There are lots of blog directories, but those won't get you any significant traffic at all. In fact, I would recommend you focus on social media to help your content go viral.
First, make sure you have created a Facebook page, a Twitter profile, and a Google+ page, verify everything needed, and start posting on all 3 networks on a regular basis. Don't just post your blog posts; try to build a community. This can take a while (and you can push it along with a few ads on Facebook and Twitter). Remember to use hashtags to reach new followers who read about certain topics.
Then you can also use sites like viralcontentbuzz, which lets others share your pages in exchange for you sharing theirs. There's also a paid version that gives you more exposure and a few free credits, although the idea is to share in exchange, not to "buy shares".
On top of that, you can always offer to guest post on other related blogs; even if the outbound links are nofollow (no juice to your site), you still get your article and your name out there.
There are plenty of other ways to promote your blog that I'm sure you'll discover along the way.

Hope that helps!
-
RE: Salvaging links from WMT “Crawl Errors” list?
Exactly.
Let's do some cleanup.

To redirect everything from domain.com/** to www.domain.com you need this:
RewriteCond %{HTTP_HOST} !=www.domain.com [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
That's it for the www and non-www redirection.
Then you only need one line per 301 redirect, without repeating the RewriteConds you had previously, like this:
RewriteRule ^pagename1\.html(.+)$ pagename1.html [R=301,L]
That will redirect any www/non-www URL like pagename1.htmlhgjdfh to www.domain.com/pagename1.html. The (.+) acts as a wildcard for whatever trails the file name; use (.+) rather than (.*) so the rule doesn't also match pagename1.html itself and redirect it in a loop, and escape the dot so it matches a literal period.
You also don't need to type the full domain in the target as you did in your examples; since it's on your own domain, just the page is enough: pagename1.html

-
RE: Salvaging links from WMT “Crawl Errors” list?
Well, if you still want to go that way, the RewriteConds there are not needed (it's a given that the .htaccess IS on your domain). A rewrite rule for www.mydomain.com/pagename1.htmlsale would be:
RewriteRule ^pagename1\.htmlsale$ pagename1.html [R=301,L]
Plus everything matching pagename1.html***, such as pagename1.html123, pagename1.html%22, etc., can be covered with this rule:
RewriteRule ^pagename1\.html(.+)$ pagename1.html [R=301,L]
(Use (.+) rather than (.*) here so the rule doesn't match the correct URL itself and loop.)
-
RE: Salvaging links from WMT “Crawl Errors” list?
Although you can redirect any of those URLs to the one you think the linker intended, you may end up with hundreds of rules in your .htaccess.
I personally wouldn't use that approach. Instead, you can build a really good 404 page that looks at the typed URL and shows a list of the pages the user was probably trying to reach, while still returning a 404, since the typed URL doesn't actually exist.
With that method you also stop worrying about those links, as you mentioned. No link juice is passed, though, but the traffic coming from those links will probably still find the content they were looking for, as your 404 page will list the possible URLs they were trying to reach...
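The "smart 404" lookup can be sketched in a few lines; this is just an illustration, assuming a Python backend and a hypothetical list of your real URLs:

```python
# Sketch of a "smart 404": fuzzy-match the requested (broken) path against
# the site's real URLs and suggest the closest ones, while the server still
# returns a 404 status. The paths below are made-up examples.
import difflib

def suggest_pages(requested_path, known_paths, max_suggestions=3):
    """Return the known URLs most similar to the broken one, best match first."""
    return difflib.get_close_matches(
        requested_path, known_paths, n=max_suggestions, cutoff=0.6
    )

known = ["/pagename1.html", "/pagename2.html", "/contact.html"]
print(suggest_pages("/pagename1.htmlsale", known))
```

The 404 template would then render those suggestions as links.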