I am trying to add "Media Coverage" as a custom post type, and it asks for singular and plural... well, singular is the same as plural ('Media Coverages' sounds ridiculous), but it's telling me that they MUST be different from each other. Ever run into this problem and find a way around it?
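For what it's worth, WordPress core itself doesn't require the two labels to differ; that validation usually comes from the plugin UI you're typing into. If you register the post type in code instead, identical labels work fine. A minimal sketch (the post type key and slug here are my own assumptions):

```php
<?php
// Hypothetical sketch for a theme's functions.php or a small plugin.
// Core register_post_type() is happy with identical singular/plural labels.
add_action( 'init', function () {
    register_post_type( 'media_coverage', array(
        'labels' => array(
            'name'          => 'Media Coverage', // plural label
            'singular_name' => 'Media Coverage', // same string is fine
        ),
        'public'      => true,
        'has_archive' => true,
        'rewrite'     => array( 'slug' => 'media-coverage' ),
    ) );
} );
```

If you're stuck with a plugin that insists they differ, some people just append an invisible difference (e.g. a trailing space) to satisfy the form, but registering in code is the cleaner workaround.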
Posts made by Philip-DiPatrizio
-
RE: Best way to separate blogs, media coverage, and press releases on WordPress?
I had a feeling that post types might be my solution... I've been putting off learning about them for too long. Today is my day!
I'll look into that and let you know if I end up with any follow-up questions.

-
Best way to separate blogs, media coverage, and press releases on WordPress?
I'm curious what your thoughts are on the best way to separate blog posts, press releases, and media coverage. With a single WordPress installation, we're obviously using Posts for all of these types of content.
It seems obvious to put press releases into a "press release" category and media coverage into a "media coverage" category... but then what about blog posts? We could put blog posts into a "blog" category, but I hate that. And what about actual blog categories? I tried making sub-categories under the blog category, which seemed like it was going to work until the breadcrumbs looked all crazy.
- Example: Homepage > Blog > Blog > Sub-Category
- Homepage = http://www.example.com
- First 'Blog' = http://www.example.com/blog
- Second 'Blog' = http://www.example.com/category/blog
- Sub-Category = http://www.example.com/category/blog/sub-category
This just doesn't seem very clean and I feel like there has to be a better solution to this. What about post types? I've never really worked with them. Is that the solution to my woes?
All suggestions are welcome!
EDIT: I should add that we would like the URL to contain /blog/ for blog posts, /media-coverage/ for media coverage, and /press-releases/ for press releases. For blog posts, we don't want the sub-category in the URL.
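In case it helps anyone answering: based on the URL requirements above, a custom-post-type setup might look something like this (a sketch only; the post type keys and slugs are assumptions, and regular Posts would keep serving the blog via a /blog/%postname%/ permalink structure in Settings > Permalinks):

```php
<?php
// Hypothetical sketch: one custom post type each for press releases and
// media coverage, with rewrite slugs matching the desired URLs.
add_action( 'init', function () {
    register_post_type( 'press_release', array(
        'labels'      => array(
            'name'          => 'Press Releases',
            'singular_name' => 'Press Release',
        ),
        'public'      => true,
        'has_archive' => true,
        'rewrite'     => array( 'slug' => 'press-releases' ),
    ) );
    register_post_type( 'media_coverage', array(
        'labels'      => array(
            'name'          => 'Media Coverage',
            'singular_name' => 'Media Coverage',
        ),
        'public'      => true,
        'has_archive' => true,
        'rewrite'     => array( 'slug' => 'media-coverage' ),
    ) );
} );
```

Remember to flush rewrite rules (re-save Permalinks) after registering new post types, or the new URLs will 404.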
-
RE: Open Site Explorer Not Showing Full Pro Version
I don't work for Moz so I can't help answer... But I just wanted to point out that my OSE looks very different from yours. I am also able to see everything for www.wpbf.com.
-
RE: How do I know for sure if my site has been slapped?
If you feel like you have done everything within your power to try and get the links removed, but there's just no way, then you should disavow the URL or domain. You should attempt to reach the owners of the domain two or three times before giving up. During my link removals, a decent number of webmasters finally responded on my 2nd or 3rd attempt.
As for disavowing URL or domain, if the entire domain is something you'd never want a link from, disavow the entire domain. Even if you only have ONE link from the entire site. Still, disavow the domain. Only disavow the URL if you think the site in general is good quality but you happen to be on 1 particular spammy page for some reason.
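For reference, the disavow file Google expects is plain text with one entry per line, and `#` lines are treated as comments. A minimal sketch with made-up domains showing both styles:

```
# Sitewide: never want a link from these domains at all
domain:spammy-directory.example
domain:link-farm.example

# Single page: the rest of this site looks fine
http://decent-site.example/some-spammy-page.html
```

Upload that through the Disavow Links tool in Webmaster Tools for the affected site.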
-
RE: Subpage ranking for homepage keyword
For some reason, Google has decided that the interior page is more relevant for the query. There are many reasons this might happen...
Go to google.com and do a "site:" search for your domain. site:example.com -- Is your homepage ranking #1 on the SERP? If not, your homepage may not be indexed or might have a manual penalty imposed. Are you 100% sure the homepage is indexed? Do you have Webmaster Tools? Make sure everything is all square in there.
What do you mean by the subpage's PR being 28? Do you mean PA? Is this interior page very relevant for your main keyword, or is it just loosely relevant?
-
RE: Ahrefs - What Causes a Drastic Loss in Referring Pages?
The Google Disavow tool doesn't work like that. It won't actually remove links from any pages. It is basically just a signal to Google that you want those links to be nofollowed. Ahrefs would have no clue if something has been disavowed or not.
-
RE: Duplicate Content
Each page with unique 300 words will be fine in Google's eyes?
If you have 300 words on each page, as long as it's useful content that people are sticking around to read, then you should be okay. Your end goal should be to provide value to your visitors. If 300 words is plenty of content for the subject of your pages, then you're okay. If you have a blog about quantum physics and you only write 300 words per page... you might not be so okay anymore.
After the text is removed is there any chance to recover from Panda? If your site is penalized by Panda, and you make adjustments to fix the issues you were once penalized for, yes, you can certainly recover. It's possible that duplicate content isn't your only issue, and there may be more to fix. Again, this is assuming you're penalized by Panda. I found a really good post about Panda recovery a couple weeks ago. Lucky for you, I bookmarked it! http://www.ventureharbour.com/panda-recovery-a-guide-to-recovering-googles-panda-update/
What about Page title and page meta description? I wouldn't personally write my titles and meta descriptions like that. It is probably a good idea to vary them up and make them a bit more unique from one another. If I'm being totally honest, I think your example title tags might work for Google. That would be up to you though if you're willing to take that chance. If everything else on your site is fantastic, and your only issue is those types of title tags, I really don't think Google would give you a problem. Either way, the best thing to do (obviously) is make them more unique. I'm not a personal fan of them being too similar, but I have seen it done like that on a site before and the pages ranked just fine (they were pretty low competition keywords though). Edit: This is the only question I'm not that sure about... your examples might be okay, but I don't want to give you bad advice.
This is my second question on MOZ
and you answered both of them.
Hooray! I hope I'm helping you out
I've made it a goal of mine to make it to the top 50 in Moz Points before the end of 2014.
-
RE: Ahrefs - What Causes a Drastic Loss in Referring Pages?
First thing that comes to mind: maybe the site had a lot of site-wide links before. If it had 5,000 or 10,000 links coming from a single domain and that website went down, that would be a huge loss of referring pages in a short amount of time. Or maybe they were listed in web directories and asked to be removed, while simultaneously building some higher-quality backlinks from other referring pages?
It's all speculation of course, but plausible.
-
RE: Google Manual Penalty - Unnatural Links FROM My Site - Where?
Curious... was your site a part of the MyBlogGuest.com network? They were recently hit hard by Google.
Here's a recent Tweet from Matt Cutts stating that sites posting guest posts can receive manual penalties, not just sites that receive links from guest posts: https://twitter.com/mattcutts/statuses/446438659689316353
Your site does seem like pretty good quality, but the sole purpose of it appears to be for guest blogging opportunities. Someone manually reviewed it and decided it was penalty worthy... To be reconsidered you might need to either A) remove all the links or B) nofollow all the links. I'm not 100% sure if nofollowing is enough. You'll probably also want to start posting a lot more content that isn't guest blogs. You might be already doing that (I didn't look around for too long). Good luck, Aaron.
-
RE: Duplicate Content
The answer is a big, fat, juicy, YES. That is the epitome of duplicate content.
You need to rewrite the content so it's completely unique from the other page. You cannot trick Google. The Panda will bite you hard.

-
RE: Does Hiding the article´s date in a blog affect SEO?
I wouldn't recommend hiding the date because you don't want users to know that the content is old. What about when you publish something fresh and someone lands on the page but they can't find a date? They won't know how up to date that information is. I think a lot of people look for dates on blog posts, and rightfully so. They want to see that they're getting good information. You're right, if something is 2+ years old they might look for something more up to date. But you can update old blog posts and re-date them. Add something new to it, make some changes, and update the date.
Imagine an SEO strategy blog that didn't date the posts. You would be doing your visitors a complete disservice by hiding the date. You might have a post all about article directory submissions and they won't see that it's from 2008. That's not enhancing user experience, and people won't be happy with you.
Old content won't always be a bad thing. Read #4, "Burstiness," on this blog post: http://www.seobythesea.com/2014/03/incomplete-google-ranking-signals-1/
It's really interesting and a great read about how older content will sometimes receive the boost in rankings over fresh content.
EDIT: I'd like to add that it's completely okay to hide the date in some circumstances. You might have some sort of evergreen content that truly will stand the test of time and info may not ever, or rarely, change on the topic. For instance, if you were writing a blog post about how to improve your basketball shot. Who cares if the post is from 2006? In that case, hiding the date isn't going to reduce the overall user experience.
-
RE: Noindexing Duplicate (non-unique) Content
Sounds like you should actually be using rel=next and rel=prev.
More info here: http://googlewebmastercentral.blogspot.com/2011/09/pagination-with-relnext-and-relprev.html
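The idea from that post, sketched on a hypothetical paginated listing (URLs are made up): each page in the series points at its neighbors from the `<head>`, which tells Google the pages are a sequence rather than duplicates.

```html
<!-- On /waikiki-condos/page-2/ of a hypothetical three-page series -->
<head>
  <link rel="prev" href="http://example.com/waikiki-condos/" />
  <link rel="next" href="http://example.com/waikiki-condos/page-3/" />
</head>
```

The first page in the series gets only `rel="next"`, and the last page gets only `rel="prev"`.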
-
RE: Noindexing Duplicate (non-unique) Content
Good find. I've never seen this part of the help section. The recurring reason behind all of their examples seems to be: "You don't need to manually remove URLs; they will drop out naturally over time."
I have never had an issue, nor have I ever heard of anyone having an issue, removing URLs with the Removal Tool. I guess if you don't feel safe doing it, you can wait for Google's crawler to catch up, although that could take over a month. If you're comfortable waiting it out, have no reason to rush, AND feel like playing it super safe... you can disregard everything I've said.

We all learn something new every day!
-
RE: Noindexing Duplicate (non-unique) Content
Yes. It will remove /page-52 and EVERYTHING that exists in /oahu/honolulu/metro/waikiki-condos/. It will also remove everything that exists in /page-52/ (if anything). It trickles down as far as the folders in that directory will go.
**Go to Google search and type this in:** site:honoluluhi5.com/oahu/honolulu/metro/waikiki-condos/
That will show you everything that's going to be removed from the index.
-
RE: Noindexing Duplicate (non-unique) Content
Yep, you got it.
You can think of it exactly like Windows folders, if that helps you visualize it. If you have C:\Website\folder1 and C:\Website\folder12, "noindexing" \folder1\ would leave \folder12\ alone, because \folder12\ isn't inside \folder1\ (they just share a name prefix).
-
RE: Noindexing Duplicate (non-unique) Content
Yep. Just last week I had an entire website deindexed (on purpose, it's a staging website) by entering just / into the box and selecting directory. By the next morning the entire website was gone from the index.

It works for folders/directories too. I've used it many times.
-
RE: Noindexing Duplicate (non-unique) Content
I'm not 100% sure Google will understand you if you leave off the slashes. I've always added them and have never had a problem, so you want to type: /oahu/waianae-makaha-condos/
Typing that would NOT include the neighborhood URL, in your example. It will only remove everything that exists in the /waianae-makaha-condos/ folder (including that main category page itself).
edit >> To remove the neighborhood URL and everything in that folder as well, type /oahu/waianae-makaha/maili-condos/ and select the option for "directory".
edit #2 >> I just want to add that you should be very careful with this. You don't want to use the directory option unless you're 100% sure there's nothing in that directory that you want to stay indexed.
-
RE: Noindexing Duplicate (non-unique) Content
Yep! After you remove the URL or directory of URLs, there is a "Reinclude" button you can get to. You just need to switch your "Show:" view so it shows URLs removed. The default is to show URLs PENDING removal. Once they're removed, they will disappear from that view.