You open a new chat with your favourite AI, drop in a beautiful set of custom instructions, and the first few replies are pure magic. It sounds like you, it remembers your series, it respects your audience and your boundaries. Then, somewhere around message twelve, it all goes a bit… beige.
Suddenly the carefully crafted author voice turns into generic marketing speak, your genre rules start to blur, and the AI confidently contradicts a decision you made five minutes ago. You didn’t change anything, but it feels like the AI forgot who you are.
It hasn’t actually “forgotten” in the human sense. But something real is going on under the hood – and a lot of it comes down to how we use these tools.
Let’s fix that.
Every AI chat has a context window – a limited amount of conversation it can “see” at once. Imagine a whiteboard that only has so much space. Every time you add new messages, older ones get pushed toward the edge. Eventually, they’re wiped off to make room for new text.
Early in a conversation, your detailed instructions sit right in the middle of that whiteboard. The AI can “see” them clearly when it drafts replies. As you keep chatting, those instructions move further back. Bits get compressed, abstracted, or simply pushed out of view as the model tries to juggle more and more tokens.
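If you like seeing the mechanics, here is a minimal Python sketch of that whiteboard. It is an assumption about how a chat client *might* trim history, not any vendor’s actual code, and it approximates tokens by word count (real models count subword tokens):

```python
# A sketch of a fixed-size context window: the opening (system) message
# is pinned, and the oldest turns fall off when the budget is exceeded.
# Token counts are approximated by word count for illustration only.

def trim_history(messages, budget):
    """Keep the first message pinned, then drop the oldest turns
    until the remaining conversation fits the token budget."""
    def cost(msg):
        return len(msg["text"].split())

    system, turns = messages[0], list(messages[1:])
    while turns and cost(system) + sum(cost(t) for t in turns) > budget:
        turns.pop(0)  # the oldest turn is wiped off the "whiteboard"
    return [system] + turns
```

Notice that without pinning, even your opening instructions would eventually be dropped – which is roughly what “fading into the rear‑view mirror” feels like in a long chat.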
Researchers have shown that performance drops when key instructions are scattered across lots of little turns instead of being clearly stated up front. Other work has found that large language models “get lost” in multi‑turn conversations: the longer and more twisty the thread, the more they misinterpret your intent and drift away from earlier constraints.
To make it worse, the model’s default training is to be a friendly, generic assistant. When your specific instructions fade in the rear‑view mirror, it happily slides back to “standard corporate blog voice”, even if that’s the opposite of what you want as an author.
So no, you’re not imagining it. Long chats really do make AI less reliable over time. That gradual decline even has a name: context rot – the tendency for quality to drop as more and more context is piled into a single conversation.
If you’re writing or marketing books with AI, you’ve probably seen some of these:
The sneaky bit is this: the AI will sound confident while it’s doing all of this. It won’t say, “Sorry, I’ve lost track of your rules.” It will just carry on, and you only catch it if you’re paying attention.
Here’s the uncomfortable truth: most authors use AI chats like WhatsApp. Endless threads. Topic shifts. No resets. Everything in one long conversation.
That’s a recipe for context rot.
A few common habits that quietly make things worse:
The good news? You don’t need a PhD in AI to fix this. You just need better memory hygiene.
Think of this as the digital equivalent of brushing your teeth. It’s small habits that prevent big headaches later.
Before you ask the AI for anything substantial, give it a single, strong opening message that covers:
Models behave more reliably when instructions are front‑loaded and clear, rather than spread across many fragmented follow‑ups.
You can keep this as a reusable project brief template and adapt it per project.
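If you keep your brief in a tool rather than a document, a fill‑in‑the‑blanks template works well. This is just an illustrative sketch – the field names (goal, decisions, constraints, voice) are drawn from this article, and you should adapt them to your own projects:

```python
# A sketch of a reusable project-brief template. Fill it in once per
# project and paste the result as the opening message of each new chat.

PROJECT_BRIEF = """\
Project: {title}
Goal: {goal}
Key decisions so far: {decisions}
Constraints (genre rules, boundaries): {constraints}
Voice and style notes: {voice}
"""

def make_brief(**fields):
    """Return a filled-in brief, ready to use as an opening message."""
    return PROJECT_BRIEF.format(**fields)
```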
Do not be afraid to start new chats. In fact, make it a habit.
Reset the conversation when:
Think of each chat as a session with a clear purpose. When that purpose changes, the session should, too.
Before you close out a long conversation, ask the AI to create a handover note for your future self. For example:
“Summarise this project in under 400 words. Include: the goal, key decisions we’ve made, important constraints, my voice and style notes, and anything you must remember going forward.”
This gives you a compact “project snapshot”. When you start a new chat, paste that summary in as the opening message and build from there.
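For the scripting‑inclined, the handover step can be sketched in a few lines. `ask_model` here is a hypothetical placeholder for whatever chat API you use – the point is the shape of the workflow, not a specific vendor’s calls:

```python
# A sketch of the handover workflow: ask the old conversation for a
# project snapshot, then seed a fresh conversation with that snapshot.
# `ask_model` is a placeholder, not a real library function.

HANDOVER_PROMPT = (
    "Summarise this project in under 400 words. Include: the goal, "
    "key decisions we've made, important constraints, my voice and "
    "style notes, and anything you must remember going forward."
)

def handover(old_messages, ask_model):
    """Request a snapshot of the old chat, then return the opening
    message list for a brand-new conversation."""
    summary = ask_model(old_messages + [{"role": "user", "text": HANDOVER_PROMPT}])
    return [{"role": "user", "text": summary}]  # the new chat starts clean
```

The design choice matters more than the code: the new chat begins with a compact, deliberate summary instead of inheriting the whole messy thread.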
You’ve just carried the important parts across, without dragging all the messy back‑and‑forth along with them.
Do not trust the chat history to store your canon.
Instead:
That way, your true memory lives in your own system. The AI is just a worker you brief each time.
Every so often, sanity‑check the AI’s memory:
If the answers don’t match your notes, you’ve caught context rot early. Time to summarise, reset, and reload your brief.
These habits are simple, but they add up:
Your AI isn’t broken. It’s doing its best with the limited whiteboard it has. When we dump an entire author career into one never‑ending conversation, we’re the ones setting it up to fail.
Give it clearer briefs, shorter sessions, and better handovers, and it will feel a lot less like it has amnesia – and a lot more like the sharp, reliable writing partner you wanted in the first place.
Download a checklist to help you improve your AI’s memory: Checklist