More and more.
Up until now, I have been pretty good at spotting the AI stuff, just because once it gets to be more than a sentence, it starts to feel "off". However, that is getting more and more difficult to spot.
One example is that Google's Gemini actually sounds like a real live commie pinko Google wonk when I call it out on bias. It's hard for me to tell the difference between what it is arguing and what a smelly, Birkenstock-wearing hippie would argue.
Another example: I've noticed that on Amazon, at the top of the reviews page, there is now a preface blurb giving an executive summary of what all the reviews are saying. It states that it is AI generated, but it's pretty difficult for me to tell, and most of the time I wouldn't be able to, were it not for the AI disclaimer.
Scary times.
EDIT: Also, I posted a while back about a SHTF series I was reading, where the last book of the series went from a four-star average to a two-star, with the major complaint being that it was in a completely different writing style from the previous books and would go into agonizing detail about stuff like the color of a wall. Most reviewers were betting the author phoned the last book in via AI. I have also recently seen several fiction series on Amazon where the authors are pumping out something like twenty books a year. That just seems kind of high for a human author to do without at least AI assistance (like the AI pumps out the book in five minutes, and the author spends a week making minor changes so it doesn't sound so AI).