Prompt GPT to Convert CRM Notes into Follow-Up Tasks

Why you even need this in the first place

Let me guess — you’re trying to get your CRM notes out of the note graveyard and into something a human (or your future self) will actually act on. I’ve been there. Sales reps leave a short novella in the contact’s notes field about what happened on a call, and then… nothing happens. Nobody follows up. Tasks never get created. And three weeks later, someone sends a totally cold outreach — to someone who literally said, “Call me in two weeks.” :/

Most CRMs are built as if someone else is going to read those notes and remember to do something with them. And if you’ve ever tried to parse human-written sales notes, you already know it doesn’t take long before you’re reading stuff like:

> “Call went well, said to follow up early April re: licensing stuff, he might be flexible on seats if we talk to Karen. Also worried about timeline.”

Okay, great. Thanks, past self. Now what?

If you’re using ChatGPT (or any other LLM) to automate task creation from those notes, you don’t just want summaries — you need something that’ll spit out deadlines, next steps, and maybe even who should do what. Ideally, without making a total mess.

What kind of prompt actually works reliably

You can’t just say “summarize this note” or “turn this into tasks.” That gets you bland responses like:

> Task: Follow up in April. Subject: Licensing issues. Assignee: TBD.

TBD is doing a lot of heavy lifting there 🙃

Here’s what actually helped me get more usable actions from raw CRM notes:

I prompt with a kind of roleplay-based instruction. Something like:

> “You are a helpful assistant who reads CRM notes made by salespeople and outputs follow-up tasks that someone can actually do. Convert the notes below into 1–5 clear, specific tasks with deadlines, owners, and context if possible. If a deadline or name is missing, make a reasonable guess based on the note content.”

And I make sure to add this part:

> “Use task format: [Task Description] — [Due Date] — [Owner or Team]”

If you leave that out, GPT will give you a mix of bullet points, faux markdown headers, and sometimes… a motivational quote for no reason. ¯\_(ツ)_/¯

It helps a lot to include the CRM context, even if it feels like stating the obvious. For example, I’ll add this to the top:

> “The notes are from HubSpot CRM. The user is a salesperson following client interactions.”

Once I started giving that context up front, the model stopped hallucinating weird admin-only tasks like “Send 10% discount code to ‘April Promo List’” when nothing of the sort was ever mentioned.
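If you want to test the prompt outside Zapier first, here’s a minimal sketch of how I’d send it with the OpenAI Python SDK. The model name and the `note_to_tasks` helper are just placeholders for whatever your setup actually uses:

```python
# Minimal sketch: send one CRM note through the prompt via the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The notes are from HubSpot CRM. The user is a salesperson following client interactions. "
    "You are a helpful assistant who reads CRM notes made by salespeople and outputs follow-up "
    "tasks that someone can actually do. Convert the notes below into 1–5 clear, specific tasks "
    "with deadlines, owners, and context if possible. If a deadline or name is missing, make a "
    "reasonable guess based on the note content. "
    "Use task format: [Task Description] — [Due Date] — [Owner or Team]"
)

def note_to_tasks(note_text: str) -> str:
    """Send one raw CRM note through the prompt and return the task list as plain text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # whatever model you have access to
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": note_text},
        ],
        temperature=0.2,  # keep it boring; creativity is the enemy here
    )
    return response.choices[0].message.content
```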

Dealing with hallucinated due dates or owners

This part is where I lost my mind for about two days. GPT likes to fill in the blanks, and sometimes those blanks are your downfall.

Say the note says:

> “Potential fit. Wants a demo next week. Said email is better.”

GPT might output:

> Demo follow-up — July 5 — Support Team
> Prepare technical walkthrough — July 3 — Engineering

Wait… what? Nobody said Engineering needed to be involved. Now you’ve got someone creating tickets for teams that don’t even touch customers. I had one test run where GPT just started tagging everyone as “Marketing.”

My fix: I added a strict instruction in the prompt.

> “If task ownership is unclear, assign to ‘Sales Team’ by default. Do not invent people or teams.”

Also added this:

> “If dates are vague (e.g., ‘next week’), use the Monday of that week as the deadline.”

That actually anchored the output really well. It’ll still guess sometimes, but the guesses are now something I can live with rather than… involving the UI team for no reason.
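One thing worth knowing: the model has no idea what today’s date is unless you tell it, so “the Monday of that week” only works if you pass the current date in with the note. I keep a tiny helper around for that (plain Python, nothing Zapier-specific, and purely a sketch of what I mean):

```python
from datetime import date, timedelta

def monday_of_next_week(today: date | None = None) -> date:
    """The date a vague 'next week' should anchor to: the Monday after the current week."""
    today = today or date.today()
    return today + timedelta(days=7 - today.weekday())  # Monday is weekday 0

# Goes at the top of the note text so relative dates can be resolved at all:
date_context = f"Today's date is {date.today().isoformat()}."
```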

How to trigger GPT from CRM activity automatically

So, the dream is that when someone updates a note in the CRM or adds a new call summary, your automation kicks in, pushes it through GPT, and then makes actual tasks inside your system. It mostly works — unless it doesn’t.

I started by using a Zapier workflow that triggers on a new note in HubSpot. That’s supposed to pick up when someone adds a new note to a contact or company record. Important detail: it only works if it’s NOT a call log or an email log — those come through as different object types (ugh).

I used a Formatter step to clean up the note text — because a lot of reps use weird line breaks, emojis, or odd tags that confuse GPT. Then, I used the OpenAI Zapier integration to send the cleaned note, my custom prompt, and some context like the contact owner’s name.

Finally, I used a Paths step to check whether the GPT output actually contains recognizable tasks (if not, nothing gets created). The output tasks come back separated by line breaks, and I used separate Zaps to create a ClickUp task from each line.
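The “recognizable tasks” check is basically: does any line have the two em-dash separators from the task format? Here’s a self-contained sketch of that filter; in the real Zap it lives in a Code step, and the field names are whatever you mapped:

```python
def extract_task_lines(gpt_output: str) -> list[str]:
    """Keep only lines shaped like '<description> — <due date> — <owner>'.
    An empty list means: skip task creation for this run."""
    return [
        line.strip()
        for line in gpt_output.splitlines()
        if line.count("—") == 2 and line.strip()
    ]

sample = "Task 1: Schedule demo with client — July 7 — Sales Team\nHope that helps!"
print(extract_task_lines(sample))  # only the first line survives
```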

Big issue: the Zap kept triggering twice for the same note if someone edited the note after saving it. So I had to add a deduplication filter using the note ID + a timestamp hash stored in a separate Airtable base.

Yes. Gross. But it stopped the duplicate tasks.
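The dedup key itself is nothing fancy. Rough sketch below; the exact fields depend on what your HubSpot trigger hands you, and `note_id` / `last_modified` are just illustrative names:

```python
import hashlib

def dedup_key(note_id: str, last_modified: str) -> str:
    """One stable string per note version. If it's already in the Airtable base, the Zap stops here."""
    return hashlib.sha256(f"{note_id}:{last_modified}".encode("utf-8")).hexdigest()

print(dedup_key("note-42", "2024-07-01T14:03:00Z"))
```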

Handling messy input like transcripts and call logs

This is where things broke hard. Salespeople started pasting transcriptions instead of summaries. So GPT would get a 20-paragraph dump of:

> “Um yeah, sure, so I think the issue’s more like timeline, not budget. Does that sound right? Yeah. Right. Cool. Should we, um… oh, and talk to Steve maybe?”

GPT tries hard, but unless you tell it what to do, it gets overwhelmed and basically writes:

> There was a conversation. It covered timeline and budget. Follow up is likely needed.

The fix was a layered approach. I added a pre-processing prompt BEFORE the main follow-up task prompt. Example:

> “Here is a messy transcript from a customer call. Please extract 3–5 key takeaways in simple language. Do not include speaker names or filler words.”

That turns the whole thing into clear statements like:

– Client has concerns about launch timeline.
– Decision likely depends on meeting with Steve.
– No issues raised about budget.

THEN I send those takeaways into my follow-up task prompt. Suddenly GPT is like “Oh! Now I get it.” And the tasks make more sense:

> Schedule check-in re: launch timeline — July 8 — Sales Team
> Contact Steve to align — July 5 — Sales Team

It’s two prompts instead of one, but hey… quality matters more than convenience here.
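If you’re chaining the two passes in code instead of in Zapier, it’s literally just two calls: the cleanup prompt above feeds the task prompt from earlier. This sketch reuses the `client` and `note_to_tasks` helper from the first snippet:

```python
CLEANUP_PROMPT = (
    "Here is a messy transcript from a customer call. Please extract 3–5 key takeaways "
    "in simple language. Do not include speaker names or filler words."
)

def transcript_to_tasks(transcript: str) -> str:
    """Pass 1: boil the transcript down to takeaways. Pass 2: turn those takeaways into tasks."""
    cleanup = client.chat.completions.create(  # same client as in the earlier sketch
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": CLEANUP_PROMPT},
            {"role": "user", "content": transcript},
        ],
        temperature=0.2,
    )
    takeaways = cleanup.choices[0].message.content
    return note_to_tasks(takeaways)  # helper from the first code sketch
```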

Prevent GPT from rewriting CRM history

Out of nowhere, I found that sometimes GPT started inventing things like “Client complained about pricing” or “Asked for a discount” — neither of which showed up in the original note at all. Just… fully made up 😐

What I believe was happening: I was testing too many similar prompts in the same session, and GPT started blending context from earlier prompts (even though I wasn’t using persistent chat history).

I fixed it by 1) always clearing conversation context before each Zap run (this means not using prior messages at all), and 2) removing emotionally weighted prompt phrasing like “Be proactive…” or “Assume client wants to buy soon.” That kind of stuff makes it take liberties.

Now I say:

> “Extract only what is clearly stated or strongly implied. Do not guess or invent missing information.”

Also, if you ever train your own fine-tuned model — please, PLEASE filter your training data so it doesn’t learn fictional task patterns. GPT is a pattern machine. If it sees enough made-up stuff with confident tone, it’ll just do the same 😛

Best format to return multiple tasks cleanly

You’d think a simple bulleted list would work. But nah. Sometimes GPT turns that into weird double-indented bullets inside JSON-looking formats. Super annoying for automation.

Here’s the best output format I found that plays nice with parsing:

> Task 1: Schedule demo with client — July 7 — Sales Team
> Task 2: Confirm licensing questions with Karen — July 10 — Sales Team

Then I use Zapier’s Text Split to break on newlines + “Task” prefix. You can loop each one into ClickUp or Asana or whatever you’re using. Just make sure your parsing tool doesn’t panic if GPT skips a task number. Occasionally it does this:

> Task 1: Follow up
> Task 3: Confirm decision timeline

Sigh. So I just remove numbering with a Formatter step before splitting. Cleaner.
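The Formatter steps handle this fine, but if you’d rather do it in one Code step, stripping the numbering and splitting on the separators looks roughly like this (the output field names are just what ClickUp-style tools tend to want, not anything official):

```python
import re

def parse_task_line(line: str) -> dict | None:
    """Turn 'Task 2: Confirm licensing questions with Karen — July 10 — Sales Team'
    into fields a task tool can use. The 'Task N:' prefix is dropped, so gaps
    like Task 1 / Task 3 don't matter."""
    line = re.sub(r"^\s*Task\s*\d*\s*:\s*", "", line.strip())
    parts = [p.strip() for p in line.split("—")]
    if len(parts) != 3:
        return None  # not a task line; skip it
    description, due_date, owner = parts
    return {"name": description, "due_date": due_date, "assignee": owner}

print(parse_task_line("Task 3: Confirm decision timeline — July 12 — Sales Team"))
```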

Two fixes I had to undo later

Yeah, I got cute and tried to make GPT insert ClickUp task URLs into the output. Idea was: “Here’s your task AND the link to it.” Cute. Until it started hallucinating URLs for tasks that didn’t exist yet — or inserting placeholder text like `https://clickup.com/task/abc123`.

Also tried asking GPT to tag urgency based on phrasing (e.g., “urgent,” “sometime,” “by next Friday”). Seemed great… until someone wrote “wasn’t urgent” in a note, and GPT tagged it as URGENT.

So now: no auto-generated urgency, no task links. Just clean, plain tasks. Humans can add the links after creating in the real system.

Final version of prompt that works well enough

This could change in about three days if anything randomly stops working, but here’s what’s held up the best lately:

> “You are a CRM note assistant. Your job is to read a sales note and output 1–5 tasks in this format: [Task Description] — [Due Date] — [Team]. Use simple phrasing. If no due date is mentioned, guess based on content. Never invent client names, pricing, or details that do not appear in the note. Do not use Markdown or formatting. Assign all tasks to ‘Sales Team’ unless a specific name is stated.”

I pass the note in as raw text below that.
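In practice, “below that” just means one string with the prompt on top and the raw note underneath, which is what goes into the OpenAI step. Something like this sketch:

```python
FINAL_PROMPT = (
    "You are a CRM note assistant. Your job is to read a sales note and output 1–5 tasks "
    "in this format: [Task Description] — [Due Date] — [Team]. Use simple phrasing. "
    "If no due date is mentioned, guess based on content. Never invent client names, pricing, "
    "or details that do not appear in the note. Do not use Markdown or formatting. "
    "Assign all tasks to 'Sales Team' unless a specific name is stated."
)

def build_message(note_text: str) -> str:
    """Prompt on top, raw note below: one string, no prior messages carried over."""
    return f"{FINAL_PROMPT}\n\n{note_text}"
```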

This version survived multiple messy call logs, a few edited notes, and that one note someone wrote entirely in lowercase with no punctuation. Still worked — mostly.
