ai for authors circle

5 Surprising Truths About Writing a Book With AI

Navigating the AI Hype and Hysteria

For writers, the current landscape feels chaotic. Artificial intelligence is presented as both a magic solution that can write a book in a weekend and an existential threat that will make human creativity obsolete. The online noise is a mix of hype and hysteria, making it nearly impossible for working authors to figure out what’s real, what’s useful, and what’s a dangerous distraction.

A new book, Write Your Book With AI by Steve Castledine, cuts through this noise with a dose of grounded, no-hype wisdom. It treats AI not as a shortcut or a substitute for craft, but as a powerful assistant that requires skill to operate. By offering systematic frameworks like RISE and LYRA, the book demonstrates that when used properly, AI can help serious writers work more effectively. Its core message is that the author must remain firmly in the driving seat, making all the key decisions about quality, voice, and ethics.

After reading it, the most impactful takeaways weren’t about speed or algorithmic tricks. They were about the deeper principles of craft, responsibility, and what it truly takes to build a sustainable author career today. Here are five of the most surprising truths the book uncovers.

1. The Real Risk of AI Isn’t Replacement, It’s Damaging Your Own Reputation

The primary danger for authors using AI isn’t being replaced by the technology. It’s carelessly publishing generic, error-filled, or bland work that harms their own credibility.

In a market already flooded with low-quality content, an author’s unique voice and commitment to rigorous quality control become their most powerful differentiators. When readers have been burned by AI-generated “slop,” they become more cautious, relying on authors they know and trust to deliver a professional, worthwhile experience.

This reframing is the book’s most powerful argument: the central conflict isn’t author vs. machine, but author vs. mediocrity. The focus shifts from an external threat to a non-negotiable internal responsibility. The risk isn’t what AI will do to you; it’s what you might do with it.

The real risk isn’t that AI will replace authors. It’s that authors will use AI carelessly and damage their own credibility.

2. Your Most Valuable Asset Isn’t Speed, It’s Trust

AI tools are often sold on the promise of speed and volume, feeding a “publish faster” mentality. But Write Your Book With AI argues that in today’s saturated market, reader trust is the “only currency that truly matters.”

Shortcuts like publishing unvetted AI content or using manipulative marketing tactics might offer a short-term boost, but they corrode the long-term trust required to build a sustainable author career. A reader who feels tricked or disappointed by a low-quality book won’t buy the next one, no matter how clever the launch strategy is.

This is a crucial counter-narrative to the hustle culture that dominates online author spaces. It stands in stark contrast to the endless stream of transient “hacks” and algorithmic “loopholes” sold on social media. The book emphasizes that trust, built through consistent quality and honest communication, compounds over time in ways that short-term tricks never can.

Trust has become the most valuable currency in publishing. Readers who trust you will buy your next book without reading the description. Readers who don’t trust you won’t take a chance on your first one, regardless of how clever your marketing is.

But building that trust is impossible if your core assistant is fundamentally unreliable, which brings us to the most dangerous truth of all.

3. Your AI Assistant Is a Confident, and Dangerous, Liar

The book clearly explains the concept of AI “hallucinations,” a critical danger for any writer using AI for research. Large Language Models (LLMs) are designed to predict the next most likely word in a sequence based on patterns in their training data. They are not designed to understand or verify truth.

Because their goal is to always provide a coherent-sounding answer, they will confidently invent facts, historical events, and sources when they don’t have reliable data. This can include anything from fake citations that attribute claims to non-existent academic papers, to invented statistics from studies that were never conducted. They produce plausible-sounding nonsense with the same authority as verified facts.

The book recommends a non-negotiable safety practice: treat every factual claim from an AI as unverified. As a professional habit, writers should mark any such claims with a [VERIFY] tag in their drafts and not remove it until they have checked the information against a primary source. This isn’t just a best practice; in an era of rampant misinformation, it’s a core responsibility for any author who values their credibility.
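The [VERIFY] habit above is easy to automate as a pre-publication check. The sketch below is a minimal, hypothetical example (not from the book): it assumes the draft is plain text and that the tag is written literally as [VERIFY], and simply lists every line that still carries an unresolved tag.

```python
# List every line of a draft that still carries an unresolved [VERIFY] tag.
# A minimal sketch: assumes plain-text drafts and a literal "[VERIFY]" marker.
def unverified_claims(draft: str) -> list[str]:
    return [line.strip() for line in draft.splitlines() if "[VERIFY]" in line]

draft = """\
The 1851 census recorded 21,000 residents. [VERIFY]
She walked to the harbour at dawn.
The study reported a 40% increase in sales. [VERIFY]
"""

for claim in unverified_claims(draft):
    print(claim)
```

Running a check like this before sending a manuscript to an editor makes it impossible to ship a fact you never got around to confirming.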

One of the most dangerous characteristics of LLMs is their tendency to hallucinate. They confidently generate false information when they don’t have reliable data.

Once you accept that an AI is an unreliable narrator, the next step is learning to manage its worst habits, which, surprisingly, is best done by telling it what not to do.

4. To Get Good Results, Tell AI What Not to Do

One of the most surprising and effective techniques from the book is “negative prompting.” Instead of only telling the AI what you want, you can dramatically improve the output by providing clear constraints on what to avoid.

For instance, instead of just asking for a dramatic scene, the book shows how you can forbid the AI’s most common lazy habits. A prompt might include constraints like “Do NOT use weather as mood” or “Do NOT use clichéd dialogue tags like ‘he growled.’” This technique gives the author a powerful tool to steer the AI away from its worst tendencies and toward more useful output.
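If you reuse the same constraints across many prompts, it can help to keep them in a list and assemble the prompt in code. This is a hypothetical sketch, not the book’s own method: the scene request and the third constraint are invented for illustration, while the first two constraints echo the examples above.

```python
# Assemble a negative prompt: a base request plus explicit "do not" constraints.
base_request = "Write a tense confrontation scene between two estranged sisters."

# Reusable constraints forbidding common lazy habits (illustrative examples).
constraints = [
    "Do NOT use weather as mood (no storms mirroring the argument).",
    "Do NOT use cliched dialogue tags like 'he growled' or 'she hissed'.",
    "Do NOT resolve the conflict by the end of the scene.",
]

prompt = base_request + "\n\nConstraints:\n" + "\n".join(f"- {c}" for c in constraints)
print(prompt)
```

The resulting text can be pasted into any chat-based AI tool; keeping the constraint list separate makes it easy to refine as you notice new bad habits in the output.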
