What if instead of creating a new page every time Team A has a match with Team B, you reused the same URL, updating the content to reflect the new match? (You could even build additional content by listing the results of previous matches on that page.) Not only would you then be avoiding the duplicate title tag issue, you'd also be building a long-term presence for that recurring matchup on a single URL.
Posts made by BradsDeals
-
RE: Putting Dates In Title Tag
-
RE: Reporting Webspam to Google
Will ratting them to Google have any impact?
Yes, but probably not in the way you're hoping. It's pretty rare for Google to take manual action against a specific site just because someone reports it. They've stated in the past that they prefer to solve the issue algorithmically rather than on a one-off basis, so it can take a while for the algo to catch up to what bad actors are doing.
If not, any suggestions on how to compete?
Make your site the best resource using white hat tactics and be patient.
-
RE: Product descriptions, when do they become classed as duplicate content, how different do they have to be?
Anecdotally... when Panda first hit back in 2011, the vast majority of coupon sites fell off the map. We did not. The difference was that while everyone else was scraping product descriptions and titles etc., we were writing everything by hand.
-
RE: I'm running a campaign for a coupon site, coupons are added and expire weekly. What are the most important factors you think I should focus on?
As someone who works in this space I'm not going to give away all of my secrets (sorry), but I can advise you to ask the following questions and consider the answers carefully when deciding what to do:
What are you offering on any given page that is unique or better than what the next coupon site offers? Are you saying something insightful or helpful about Macy's on the Macy's page?
What does the link/text ratio look like on that page?
Is the coupon content 100% unique to you, or are you pulling that content in from a feed that also services other coupon sites? (Basically, do you have a large scale duplicate content issue across websites you don't control?)
Are coupons listed on a merchant's page with each also existing as an individual page for each coupon, potentially creating a thin and duplicate content situation on site?
If you want individual coupons to be indexed, should they remain indexed once they've expired?
Are you using a redirecting URL to mask/beautify/track the affiliate link? Do you want that link to be indexed even though it's not actually a page with content? (Sounds a bit crazy but we've run into issues with that in the past, just like how bit.ly links can get indexed sometimes, defying all logic.)
(e.g. yoursite.com/couponclick/1234 --> super ugly affiliate link --> merchant page)
I'm going to be bluntly honest here, not because I'm trying to scare off competition, but because, as a 5-year veteran in this niche, I think you should know what you're getting into:
The coupon space is extremely difficult to break into. I can't emphasize this enough. The big players are very well established. Most of them aren't just offering coupons, but what I think of as "coupons+" (coupons + deals, coupons + cash back, coupons + charitable donation, etc.), and users respond to that and share it. RetailMeNot is deluged with high-authority inbound links every time their stock moves a penny.
Thin and duplicate content is endemic thanks to the ease of installing feeds, and making that content not just unique but also authoritative is easy to recommend but crazy difficult to implement at scale. Linkbuilding is next to impossible, since no one really links to a Macy's coupon page, let alone a page for 2nd- and 3rd-tier stores, without some kind of financial incentive, and Google gives no link credit for affiliate links (which should be nofollowed anyway).
Users mostly aren't loyal to one site or another; they just want the coupon so they can get $5 off some UGG boots. And when newcomers aren't able to cope or compete given all these difficulties, they often resort to blackhat tactics, making it even harder for those of us who keep things white hat to stay competitive (though Google has been much, much better about booting that crap since the Payday Loan update).
Really, titles and metas, while it's important to get them right, are going to be the least of your challenges. Be thoughtful about the content, be user-friendly, find a way to stand out, and you might have a shot.
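On the affiliate-redirect question above: if you decide those tracking URLs shouldn't be crawled, one common approach is disallowing the redirect path in robots.txt. A minimal sketch, using a hypothetical /couponclick/ path like the example earlier:

```
# robots.txt — sketch; /couponclick/ is a placeholder for your tracking path
User-agent: *
Disallow: /couponclick/
```

Keep in mind this blocks crawling, not indexing outright; a blocked URL that's linked heavily can still show up in the index, so nofollowing those links is worth doing too.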
-
RE: Our web site lost ranking on google a couple of years ago. We have done lots of work on it but still can not improve our search ranking. Can anyone give us some advice?
Can you be more specific than "a couple of years ago"? It's going to be much easier to narrow down if we know what the search landscape was like, what algo changes were happening at the same time, etc.
-
RE: Why would one of the big 4 banks in Australia recently start defining meta-keywords?
Just because they're a bank with a large digital team doesn't mean they put much thought into it, and I'm willing to bet that's the case. Maybe someone noticed an empty field in their CMS and took it upon themselves to fill it in. Maybe the digital team fought a losing battle with an exec and shrugged it off because it was an easy way to make a C-level happy without actually hurting anything. It doesn't matter, because it's neither good nor bad.
Also, don't put any stock into the hype around the thinking that someone might use it to "steal" your keywords. Have you ever actually heard of a case where this happened? Where it hurt someone? No. It's a ghost story that SEOs have been parroting for years, probably based on some long-ago throwaway speculation. Besides, you're a bank. No one needs to snoop around in your code to know what your target keywords are when it's that straightforward.
-
RE: Google Change of Address for previously penalised website?
Why was it penalized? Is the site still penalized?
-
RE: Does anyone know of a tool (paid and free) that allows you to collect all comments made about a specific brand/business across social media, blogs and online news?
On the free side, I get a lot of mileage out of combining TalkWalker and SocialMention alerts. Grab the RSS feeds for each, throw into a reader like Feedly to track in a single tool.
-
RE: Directories that Redirect - Do They Pass Link Juice?
That would be correct.
-
RE: Closed Location Pages - 301 to open locations?
Okay, so that depends on why Joe was ranking well for widgets in the first place. If it was links, then a 301 will pass the link equity on to Janet. If it was the content, then you can move the content from Joe's page to Janet's and it should hold up since Joe's page won't exist anymore to be duplicate. If it was the business address, then that may pose a problem since location can be tricky as I understand it. I'm not a local SEO expert, so someone else should weigh in on that piece of it.
-
RE: Recovering an Almost Dead Blog?
Yep, it's a big, tedious task, but there are no shortcuts here to do it right.
-
RE: Closed Location Pages - 301 to open locations?
So, you're redirecting B-town to A-ville. If A-ville services the area formerly served by B-town, then I think a 301 would be fine, especially if A-ville's page mentions that they service B-town. I'd even see that as helpful from a UX perspective.
Otherwise, I'd probably 301 B-town to your main locations page. I agree with whoever said your proposed "second step" sounds spammy. You'll lose traction in those neighborhoods, sure, but I have to ask, if the locations were closed, how valuable were they, really? Is it enough to make risking a penalty worthwhile?
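If the site runs on Apache, the closed-location redirect described above can be a one-line rule in .htaccess. A sketch with hypothetical URL paths (B-town's closed page pointing at A-ville's page):

```
# .htaccess — placeholder paths for illustration only
Redirect 301 /locations/b-town /locations/a-ville
```

The same idea applies on any server; the important part is that it's a permanent (301) redirect, not a temporary one.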
-
RE: Recovering an Almost Dead Blog?
Honestly, if you're using a CMS like Wordpress, all you should need to do is unpublish the post and let the search engines sort out the rest. If a post is returning a 404, it will get dropped from the index naturally. I can't think of any reason why you'd need to do any more work than that.
Also, a tip: I prefer setting the posts I'm removing to "Privately Published" rather than deleting them entirely. I like to keep removed content as a sort of historical archive, and it returns the same 404 message on the front end.
-
RE: Unnatural Links Warning Disappeared from Search Console Account
This is correct; I've seen the same thing happen, and it's true that penalties do sometimes just expire. And as Andy says, it doesn't mean that all that auditing work you did was for naught. You can still upload a disavow file, and you've been excused from the ritual of begging for Google's forgiveness.

-
RE: Recovering an Almost Dead Blog?
My guess is that most of your blog posts aren't getting any traffic or engagement, but there are probably a few that do. I would start with a content audit, looking at the organic traffic, social engagement, and backlinks to each page. You may not have built any links, but that doesn't mean your work hasn't earned them. Keep anything that draws consistent traffic, has been shared more than a few times, and has good-quality backlinks. Let the rest 404. You'll need to make the determination on a case-by-case basis.
-
RE: Was hit with panda in 2012, what to do now?
It's my understanding that Google generally wants to see some kind of good-faith effort at removal. When we submitted our own disavow file, it included notes about our contact attempts.
-
RE: Robots.txt
Hey, throw us a link to your robots.txt file and we can take a look, probably tell you pretty quickly. Without seeing it, we're all pretty much just taking guesses.
-
RE: Where to use which keywords...
I'd point out that "home insurance" is actually a distinct keyphrase existing intact within "specialist home insurance"... so if it was me, all else being equal and contextually appropriate, I'd probably lean just a little heavier on the latter knowing that I'm actually capturing both at once.
-
RE: Was hit with panda in 2012, what to do now?
First, if bad links are the culprit, then the issue wasn't Panda so much as it was likely Penguin.
If you have the links in an Excel file, then you're off to a really good start. Run them through a utility like Link Detox to see which are truly bad (there may be some gems in there worth saving). Reach out to every webmaster and request that the links be removed. Then build and submit a disavow file through Google Search Console.
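For reference, the disavow file itself is just a plain-text list, one entry per line, with # for comments. A minimal sketch with placeholder domains:

```
# Sites that ignored our removal requests
domain:spam-directory-example.com
# A single bad page rather than a whole domain
http://another-example.com/bad-links-page.html
```

Using domain: entries is usually safer than listing individual URLs, since spam sites tend to link from lots of pages.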
Also, read up on how to use the Disavow Tool. And anything else Marie Haynes writes about Penguin is bound to be gold, too.
-
RE: Directories that Redirect - Do They Pass Link Juice?
I recommend taking a look at the redirect path and server codes with something like the Redirect Path Chrome plugin. It shows you every step of any redirect chain. If it's passing through a 301 (or a series of 301s), then it passes link juice.
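To make that rule concrete: given the status code of each hop (as reported by a tool like Redirect Path), the check boils down to every hop in the chain being a 301. A hypothetical helper sketching the logic of this answer (the function name and structure are my own, not from any tool):

```python
def chain_passes_link_equity(hop_status_codes):
    """Return True if every hop in a redirect chain is a permanent 301.

    hop_status_codes: HTTP status codes observed at each redirect step,
    e.g. [301, 301] for a two-step chain. An empty list means no
    redirect happened, so the question doesn't apply.
    """
    return bool(hop_status_codes) and all(
        code == 301 for code in hop_status_codes
    )
```

Under this rule, chain_passes_link_equity([301, 301]) is True, while a chain with a temporary redirect in it, like [301, 302], comes back False.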