Use GPT Prompts to Generate Blog Post Outlines in Notion

Why GPT prompts keep breaking in Notion

If you’ve spent more than ten minutes experimenting with AI-generated prompts in Notion, you’ve probably seen this: you carefully build a sleek database, add a prompt template using the Notion AI block or an inline slash action, and just when everything looks good, something quietly snaps. Either the AI suddenly stops responding, your variables return blank, or the formatting gets wrecked for no clear reason. 🤷

I had one setup last week where one column in the database held the blog topic idea, another held the target audience, and I had a formula column pulling both into a GPT prompt like:

```
Write a blog outline about "Title" for "Target Audience" in a friendly tone
```

It worked… for about five queries. Then for certain rows it just generated gibberish like “Write a blog outline about for” — completely ignoring the “Title” field. Turns out I’d renamed the column from “Blog Title” to “Title” and forgotten how brittle formula references are. You don’t get errors like in Excel, either — it just keeps running the broken logic silently 😅

Stop nesting formulas inside AI prompts

It’s super tempting, especially when you’re trying to be clever and automate everything in one step. I used to string together a full GPT prompt inside a Notion formula field so I could send it to the AI block in one go. Like:

```
"Write a blog post about " + prop("Topic") + " for readers interested in " + prop("Audience") + ". Include sections such as: " + prop("SuggestedSections") + "."
```

Sounds cool, right? Except it fell apart in every possible way:

1. The prompt kept cutting off halfway in the AI field. Probably a character limit, but there’s no warning.
2. Multiline text from other properties broke the whole result unless I manually sanitized line breaks, which… why.
3. Any change to a column name silently made the formula return “null” — or worse, look correct in preview but pass no usable data to the AI block.

Eventually I just made a dedicated “Prompt” column and copied the full text manually. Or had a separate template block hold the full string that I updated when needed. Yes, that’s less automated — but way more fixable.
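If you go one step further and build the prompt outside Notion entirely — say, in a script that writes the finished string into that dedicated “Prompt” property via the API — the concatenation and line-break cleanup becomes trivially debuggable. A rough Python sketch (the field names are just from my setup, nothing Notion-specific):

```python
def build_prompt(topic: str, audience: str, sections: str) -> str:
    """Build the GPT prompt outside Notion, so a renamed column
    can't silently break it. Also flattens multiline values that
    would otherwise wreck a single-line prompt field."""
    def clean(value: str) -> str:
        # Collapse line breaks and runs of whitespace into single spaces
        return " ".join(value.split())

    return (
        f"Write a blog post about {clean(topic)} "
        f"for readers interested in {clean(audience)}. "
        f"Include sections such as: {clean(sections)}."
    )

print(build_prompt("Notion AI tips", "indie bloggers", "Setup\nPitfalls\nFixes"))
```

If a property is missing or renamed, the script fails loudly instead of quietly passing an empty string to the AI — which is the whole point.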

Using buttons versus native AI fields

Notion’s native AI block is beautiful until you have 36 things to run and start desperately wishing for a Run All button 😭

So for batch-generation tasks, I tried switching to a Notion Button that runs on a trigger — specifically “Insert text using AI”. But again… surprises.

The problem is that if you try to use a database property like “Title” inside that button’s prompt block, you can’t — unless that button is set up inside a Template. And then you have to either:

– Have a Template that people duplicate properly (manual), or
– Duplicate inside an automation and somehow click the button remotely (nope, you can’t)

Also — and this made me scream out loud — those AI Buttons max out fast. They can only read a few blocks around them reliably. If the data you need is higher up or coming from a rollup, it reads blank. I literally pasted the intended text inside the AI block itself just to make it work.

So when people say “set up a Button to batch request 20 outlines,” they mean “you have to click every button manually unless you make a cron Zap to write something to a trigger column and then script the AI to re-run.” Which is way too much 🫠

Watch out for rich text in formula outputs

One of the most ridiculous bugs I’ve hit: rich text props that break your GPT prompt without warning.

Here’s the setup: I had a “Resources” column that was a rich text list. Stuff like:

– Related blog links
– Docs
– Twitter thread

And then I pulled that into the prompt using a formula — expecting it to show up as bullet points or just plain text. But every GPT response after that was chaotic. Some ignored the prompt. Some spit out raw code blocks. One gave me six paragraphs of lorem ipsum like it had a brain freeze.

What was happening: the rich text wasn’t plain text underneath. When Notion passed that formula result into the AI, it tried to preserve formatting even if you couldn’t see it — tabs, indentation characters, even smart dashes from copy-pasted text. And GPT just choked on it.

I replaced it with a simple Name column and included the resources manually. Or used a Linked Database with clean properties, no formatting, and definitely no pasted bullets from Slack messages.
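When text has to travel through the API (via Zapier or a script), I now run a cleanup pass on anything copy-pasted from Slack or rich text before it goes anywhere near a prompt. This is a hypothetical helper, nothing Notion-specific — it just strips the invisible characters that were choking GPT in my case:

```python
import unicodedata

def to_plain_text(value: str) -> str:
    """Strip the invisible formatting that rides along with rich text:
    tabs, smart quotes, smart dashes, stray control characters."""
    replacements = {
        "\u2018": "'", "\u2019": "'",   # smart single quotes
        "\u201c": '"', "\u201d": '"',   # smart double quotes
        "\u2013": "-", "\u2014": "-",   # en/em dashes
        "\u00a0": " ",                   # non-breaking space
        "\t": " ",                       # tabs become plain spaces
    }
    for fancy, plain in replacements.items():
        value = value.replace(fancy, plain)
    # Drop any remaining control characters, but keep real newlines
    value = "".join(
        ch for ch in value
        if ch == "\n" or unicodedata.category(ch)[0] != "C"
    )
    return value.strip()
```

Run it on every property value before concatenating the prompt, and the “six paragraphs of lorem ipsum” failures mostly disappear.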

Using page templates for better control

Eventually I gave up on trying to make every GPT prompt run inside the database row itself. This sounds obvious now — but at first, I thought it was more efficient.

Instead, I made a Page Template that had the full GPT prompt prewritten inside, like this:

```
You are a technical writer producing blog post outlines in a casual tone. Use the following input:

Title: {{Blog Title}}
Reader: {{Target Audience}}

Write 5-6 section headings…
```

Then I set up a Zapier automation to duplicate that page, replace the placeholders with a text find-and-replace, and trigger the AI.
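Zapier’s formatter steps handle the find-and-replace, but the logic is simple enough to sketch in Python — with one improvement I’d recommend: fail loudly on any placeholder you forgot to fill, instead of silently sending a half-empty prompt to the AI. (The `{{Placeholder}}` syntax here is just my template convention, not a Notion feature.)

```python
import re

def fill_template(template: str, values: dict) -> str:
    """Replace {{Placeholder}} tokens in a prompt template.
    Raises KeyError on any placeholder with no value, so a typo'd
    or missing field can't slip through unnoticed."""
    def substitute(match: re.Match) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"No value for placeholder {{{{{key}}}}}")
        return values[key]

    return re.sub(r"\{\{(.+?)\}\}", substitute, template)

prompt = fill_template(
    "Title: {{Blog Title}}\nReader: {{Target Audience}}",
    {"Blog Title": "Notion AI tips", "Target Audience": "indie bloggers"},
)
```

That one `KeyError` would have caught my renamed-column bug on the first run.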

It gave way better results because:

– The AI context was more consistent
– Prompt stayed formatted no matter what
– No broken columns or brittle formulas

Yes, it’s clunkier than just typing straight into a row. But once I built that route, it let me batch-generate posts really fast — and better yet, I could copy-paste the working version over and over without worrying some renamed property would ruin everything 🙂

Don’t rely on Auto AI responses in shared workspaces

Here’s a trap I didn’t even realize until a client asked why every outline suddenly had the wrong tone. Turns out AI responses vary depending on **who** triggered it — even if the prompt was identical.

If someone else in your shared Notion workspace runs an AI button, the AI may draw on their personal usage context. Your recent prompts might all be in a brand voice… theirs might not be.

In my case, it swapped out every example in the blog structure to sound ultra-formal — because my client primarily used Notion to summarize legal contracts 😅

Now I make sure:

– Only I run the AI prompts for full blog outlines
– I embed strict prompt instructions like “Use a casual tone, even if prior examples were serious”
– I don’t assume two identical prompts = two identical results

When everything works except Notion AI itself

Sometimes — and this is the most frustrating part — everything seems right: the prompt is fine, the columns are referenced correctly, you even tested it earlier… and the Notion AI block just won’t respond.

It usually happens when:

– Network connection flakes out but Notion doesn’t show an error
– GPT usage hits rate limits behind the scenes (you have no way to see this)
– The AI integration silently resets and stops responding to button triggers

Restarting Notion doesn’t always fix it 😐 I’ve had to:

1. Clone the block into a new one — suddenly it works again
2. Create a fresh AI request two inches below the old one
3. Copy only the plain text prompt, remove all formatting, paste again

So yeah, the fix is usually “nuke it and start fresh,” which is so incredibly satisfying when it finally responds with the outline you needed twenty minutes ago.

Zapier automation makes this slightly less painful

Zapier lets you fill Notion pages or database fields with preset data, which can sort of halve your workload if you’re generating a ton of prompts. I had a Zap that pulled new titles from a Google Sheet, wrote them into a Notion database, and then prefilled a prompt field like:

```
Write an outline for [Title] in five sections
```

Then I used another Zap with a delay to duplicate a template page and write that prompt into the body of an AI-enabled block. That triggered the AI to generate on page load — automatic-ish.

It worked surprisingly well most days, except when Notion’s API choked and returned a 429 (Too many requests) even after I barely sent anything. Or when title fields had extra quotes and the template got grammar-wrecked.
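Notion’s public API documents a rate limit (roughly three requests per second on average), so 429s are worth planning for rather than treating as flukes. A generic retry-with-backoff wrapper is enough; `RateLimited` here is a hypothetical stand-in for whatever exception your HTTP client raises on a 429:

```python
import time

class RateLimited(Exception):
    """Stand-in for whatever your HTTP client raises on a 429."""

def with_retries(request, max_attempts=5, base_delay=1.0):
    """Call a zero-arg callable, retrying on RateLimited with
    exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        try:
            return request()
        except RateLimited:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the 429
            time.sleep(base_delay * (2 ** attempt))
```

In Zapier itself the equivalent is the built-in Delay step between actions, but if you ever move the pipeline to a script, something like this saves you from the “barely sent anything and still got throttled” mystery.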

Still… better than clicking each button by hand 20 times a day 😅
