It's going to come down to just a couple of things:
- How well the content is written: what's the reading grade level, are there grammatical errors, and so on (there's a rough sketch of scoring that just after this list).
- How close the content comes to existing content on the web.
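By "reading grade level" I mean something like the Flesch-Kincaid grade, which you can estimate yourself. Here's a rough Python sketch; the syllable counter is just a crude heuristic, so treat the result as a ballpark number:

```python
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels, with one fix for a silent trailing 'e'.
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text):
    # Flesch-Kincaid grade level:
    #   0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

sample = ("The local people are mainly Maasai, but people from other "
          "parts of the country have settled there.")
print(round(flesch_kincaid_grade(sample), 1))
```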
I'm not familiar with these two companies, but from what you say, it sounds like #1 isn't going to be an issue. Most likely #2 won't be a problem either, even though realistically they're going to be pulling the raw data from somewhere (Wikipedia!) and doing their best to completely rewrite it, and that's always a challenge.
But the reason I don't think #2 is honestly going to be a problem is that, right now, Google doesn't seem to be very good at spotting near-duplicate content even with just light edits. Or even WITHOUT edits, just surrounded by other content on the page. As an example, search for this phrase (including the quotes) in Google; it's taken straight from Freebase, i.e. Wikipedia's back door:
"The local people are mainly Maasai, but people from other parts of the country have settled there"
Google finds and displays 96 (yes, NINETY-SIX) pages before cutting the list off and relegating the rest to the supplemental results.
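If you want to check a batch of phrases instead of pasting them into Google one at a time, Google's Custom Search JSON API can report a match count for you. A rough sketch, assuming you already have an API key and a search engine ID configured to search the whole web (both are placeholders below), and keeping in mind the API's counts don't always match what the regular results page shows:

```python
import requests

API_KEY = "YOUR_API_KEY"    # placeholder: from the Google API console
ENGINE_ID = "YOUR_CX_ID"    # placeholder: your Custom Search engine ID

def exact_match_count(phrase):
    # Query the Custom Search JSON API with the phrase in quotes,
    # the same exact-match trick as in the regular search box.
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": ENGINE_ID, "q": f'"{phrase}"'},
    )
    resp.raise_for_status()
    return int(resp.json()["searchInformation"]["totalResults"])

print(exact_match_count(
    "The local people are mainly Maasai, but people from other parts "
    "of the country have settled there"))
```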
Will this go on forever? Of course not; Google is undoubtedly working on detecting this kind of thing. It certainly isn't in Google's quality interests to list 96 pages that all lift the description of something straight out of Freebase. But it paints a picture of where they are today, and it tells us they're a long way from taking a run at content that you've hand-rewritten from other sources, even if it's a light rewrite.
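To make it concrete why light rewrites are hard to catch: one textbook way to flag near-duplicates is to break each page into overlapping word "shingles" and measure how much the two sets overlap (Jaccard similarity). This is just the standard technique, not anything Google has confirmed they use, but a toy sketch shows how fast the overlap drops once you rewrite even lightly:

```python
def shingles(text, n=4):
    # Overlapping n-word shingles from the text.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    # Fraction of shingles the two texts share (1.0 = identical sets).
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = ("The local people are mainly Maasai, but people from other "
            "parts of the country have settled there")
light_rewrite = ("The locals are mostly Maasai, though people from other "
                 "parts of the country have settled there too")

print(jaccard(original, original))       # 1.0 for a straight copy
print(jaccard(original, light_rewrite))  # roughly 0.3 for a light rewrite
```

A straight copy scores 1.0 and is trivial to spot; a light hand-rewrite already looks quite different by this measure, which is part of why catching it at web scale is genuinely hard.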