Shocking Results From AI Manuscript Analysis: AutoCrit vs ProWritingAid
(Beth Turnage Blog)
The Rejections That Started an AI Experiment
After receiving some puzzling rejections for my manuscript Hanging By A Thread—a neo-noir thriller with MM romantic suspense elements—I found myself genuinely confused. As someone who’s been ghostwriting for 11 years and has helped clients achieve numerous bestsellers, I know my craft connects with readers. Thousands of Amazon reviews over the years have confirmed that.
But these rejections were so vague: “wasn’t quite drawn in,” “didn’t connect with the story materials.” Vagueness, in other words. Were there issues I wasn’t seeing? Would AI tools help me identify problems? What would artificial intelligence think of my manuscript?
I decided to run an experiment.
The Test
I invested in premium versions of the major AI manuscript analysis tools I could find: AutoCrit and ProWritingAid. I ran my complete 90,000-word manuscript through each system and compared their feedback.
What I discovered was fascinating—and might be useful for other writers wondering about these tools.
The Surprising Results
Here’s what shocked me: the AI tools didn’t agree with each other. At all.
AutoCrit flagged my opening chapter for “Deep POV violations” and complained about my protagonist making observations “beyond what he could realistically notice.” But these weren’t errors—they were deliberate noir conventions. My detective protagonist is supposed to have heightened awareness and make intuitive leaps. That’s the genre.
ProWritingAid completely missed that this was a romance novel. In their market analysis, PWA suggested I target crime thriller readers and comp my book to L.A. Confidential and The Black Dahlia—both straight noir with no romantic elements. They fundamentally misunderstood what I’d written.
But here’s where it gets interesting: when I fed PWA my complete manuscript instead of individual chapters, their analysis was dramatically better. They correctly identified it as M/M romantic suspense, gave me spot-on comp titles (Cut & Run, Captive Prince), and provided sophisticated developmental feedback about character arcs and plot structure.
The Context Problem
This revealed something crucial: AI tools are highly dependent on context.
AutoCrit, analyzing individual chapters, missed the broader relationship dynamics that drive the story. PWA’s chapter-by-chapter analysis treated my work as literary crime fiction, but their full-manuscript analysis understood it was romance with thriller elements.
It was like having two completely different editors evaluate the same work.
What This Means for Writers
If you’re using AI tools to improve your craft (and many of us are), here are the key takeaways from my experiment:
1. Different tools have different blind spots. What one AI flags as an error, another might recognize as a genre convention.
2. Context matters enormously. Full manuscript analysis consistently outperformed chapter-by-chapter critique.
3. Genre recognition varies wildly. Some tools understood romantic suspense; others were completely genre-blind.
4. Market analysis requires human judgment. The AI that suggested marketing my M/M romance to straight crime thriller readers would have sent me down a commercially disastrous path.
The Bigger Picture
This experiment taught me that while AI tools can be helpful for craft development, they’re not infallible. Each has strengths and significant weaknesses. More importantly, what passes or fails with one system might have the opposite result with another.
In my next post, I’ll dive deeper into the specific feedback each tool provided and what writers can learn from their different approaches to manuscript analysis.
Have you used AI tools to analyze your writing? I’d love to hear about your experiences in the comments.
Cover art by DALL-E.
When you ran the full manuscript, did AutoCrit still outperform PWA?
I have PWA. I’ve used the chapter critiques. I’m trying to decide if I want to switch to AutoCrit or just stay with PWA.
PWA and AutoCrit have different tools, and their pricing is vastly different. I have lifetime subscriptions to both AutoCrit and PWA, which makes sense to me given the amount I use both. IMO, the PWA chapter analysis is good for highlighting major concerns, but since it looks at only one chapter, it doesn’t have the continuity of the previous chapters. And I find the whole one-chapter-a-day thing limiting. I don’t like paying $35 a pop to analyze the whole manuscript. Like any AI tool, it misses subtext and sometimes the continuity between chapters. AutoCrit is very good at using its Alpha and Beta readers to narrow down what’s working and what’s not overall, though you have to take the recommendations with a grain of salt and go with your author’s instinct. Personally, I lean more toward AutoCrit than PWA right now, but if PWA offered a lifetime upgrade that let me get more reports, I’d buy it.