I'm not sure this is true. Remember, AI will not 'evolve', it will be created. By us. If you want to know what humanity will create, the question is not merely technological but cultural.
Only the first AI will be "created" by us. The rest will evolve, or undergo something akin to evolution, with that first AI helping to write the next generation, and rather quickly, with no human input at all, nor any code humans could understand even if invited to review it, or had the authority/control to do so. It's been said that AI would be the last invention man ever makes, presuming that from then on AI designs all new inventions for us, or we go extinct.
And further, there's a good possibility that something as complex as an AI, even the first one, will evolve. Rapid prototyping and genetic algorithms may be employed to create the first AIs, or their constituent parts and architecture, through trial and error, then refined over selective generations, but done incredibly fast. There's already a lot of experimental work on this for engineering problems.
Look here: http://ti.arc.nasa.gov/projects/esg/research/antenna.htm It's not even remotely a self-aware system, nor anything to do with creating an AI, but the resulting antenna designs are things that arguably no human, even using direct input and software tools, could ever come up with.
Instead, those evolutionarily designed antennas wind up mimicking the shapes of sea life or plants.
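To make that trial-and-error loop concrete, here's a toy sketch in Python of the select-and-mutate process over generations. The three-parameter "design" and its fitness function are invented stand-ins for illustration, not NASA's actual antenna simulation:

```python
import random

random.seed(0)

def fitness(design):
    # Toy stand-in for an antenna simulation: the score peaks when
    # each of the three "design parameters" is near 1.0.
    return -sum((p - 1.0) ** 2 for p in design)

def mutate(design, rate=0.1):
    # Randomly perturb each parameter: the "trial" in trial and error.
    return [p + random.gauss(0, rate) for p in design]

def evolve(pop_size=50, generations=100):
    # Start from random designs, keep the best half each generation,
    # and refill the population with mutated copies of the survivors.
    pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

No human ever specifies what a good design looks like, only how to score one; the population drifts toward high-fitness shapes on its own, which is why the end results can look like nothing a human would draw.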
It's literally a small slice of the old thought experiment of "an infinite number of monkeys banging on an infinite number of typewriters..." but with the results refined. The monkeys that produced some coherent text by luck are "bred", then their offspring start banging on typewriters; rinse, lather, repeat. In a remarkably short time, you have the one Shakespeare monkey you needed. Or, more likely, something radically different from anything you could conceive of, but it gets the job done and produces a nice neat copy of Shakespeare's works for you.
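That refined-monkeys process is essentially Dawkins' classic "weasel program", and it's short enough to sketch. The target string, mutation rate, and litter size here are arbitrary illustrative choices:

```python
import random
import string

random.seed(42)

TARGET = "METHINKS IT IS LIKE A WEASEL"  # a short stand-in for "Shakespeare"
CHARS = string.ascii_uppercase + " "

def score(text):
    # How much "coherent text" a monkey produced: characters matching the target.
    return sum(a == b for a, b in zip(text, TARGET))

def breed(parent, rate=0.05):
    # Offspring are copies of the parent with occasional random typos.
    return "".join(c if random.random() > rate else random.choice(CHARS)
                   for c in parent)

# One random monkey to start; each generation, breed 100 offspring from
# the current best and keep the fittest. Rinse, lather, repeat.
best = "".join(random.choice(CHARS) for _ in TARGET)
for generation in range(1, 5001):
    best = max((breed(best) for _ in range(100)), key=score)
    if best == TARGET:
        break
print(generation, best)
```

Pure random typing would need on the order of 27^28 attempts to hit this one 28-character line; with selection between generations, it lands in a few hundred generations at most.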
Now, it's not such a stretch of the imagination that such evolutionary systems could be turned towards writing ever more efficient software, without human input, through brute-force evolutionary rapid prototyping and virtual testing. Such a system, given enough computing power and time, could possibly produce self-aware and emergent qualities.
Possibly the best outcome we can hope for is that the best results will always come from machine/human collaboration; that there are fundamental qualities to human consciousness that machines won't be able to reproduce no matter how much computing power is expended. In that case people could provide the intuitive leaps, or the "big ideas", while the AIs/machines provide the number crunching, data manipulation, and precision, as they already do in their clunky way.
In that case human/AI collaboration on challenges would always (or at least more often) be superior to human-only or AI-only endeavor, and as such would preserve a place for both humans and AIs.
Of course, that could also produce a Matrix-like scenario where captive humans are just parts of a larger machine, and how many rights or choices we'd have in that situation could be severely limited.