Setting up ChatGPT for meeting summaries
The first time I tried using ChatGPT to summarize my messy meeting notes, I dumped in an entire Google Doc of scattered bullet points, timestamps, and half-written sentences like “Alex maybe said something about budgets???” and honestly I expected magic. Instead it spit back something that read more like a press release for a company I didn’t even work at. The problem was simple: my instructions weren’t clear enough. ChatGPT will make stuff up if you just say “summarize this.” You need to spell out what you want, like you’re talking to a new intern.
What finally worked for me was writing prompts that tell it exactly what type of summary I want and how much detail it should keep. For example, instead of just pasting my notes and saying “summarize,” I type something like: “Summarize the following meeting notes in a short list of main decisions and open questions. Keep people’s names. Leave out small talk.”
That sounds silly-simple, but the difference between those two instructions was huge. In the first version I got half-invented sentences about how our team was very aligned and high performing (lol no). In the second, I got a bulleted list literally saying: “Decision: Move deadline by one week. Question: Who owns the API updates?” which is what I actually needed to remember.
My advice: always add in your prompt whether you want ChatGPT to keep names, who said what, and whether you want only action items or also general context. If you forget to tell it “keep names,” it’ll often erase them entirely and you later can’t remember who volunteered for what.
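If you end up calling the API from a script instead of pasting into the chat window, the same idea fits in a tiny helper that assembles the prompt from those choices. This is just a sketch of the approach; the function name and option flags are my own invention, not from any library:

```python
def build_summary_prompt(notes, keep_names=True, action_items_only=False):
    """Assemble a summarization prompt that spells out exactly what to keep."""
    instructions = [
        "Summarize the following meeting notes as a short list of main decisions and open questions."
    ]
    if keep_names:
        instructions.append("Keep people's names and who said what.")
    if action_items_only:
        instructions.append("List only action items; skip general context.")
    else:
        instructions.append("Include action items plus brief general context.")
    instructions.append("Leave out small talk.")
    return "\n".join(instructions) + "\n\nNotes:\n" + notes
```

The point isn’t the code, it’s that every preference you care about becomes an explicit line in the prompt instead of something you hope the model guesses.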
Preventing hallucinated details in notes
The annoying thing is, ChatGPT loves to add filler. It’ll say stuff like “The team agreed to foster better collaboration” even though no one on my team has ever used the phrase “foster better collaboration” in our lives. That’s what people call hallucination, but you don’t need the jargon — it’s just when it makes stuff up that sounds nice. The way I stopped it was by literally adding a line to the prompt that says “Do not add information that is not written here.” Sometimes I even add “If something is unclear, leave it as unclear.”
A real example: I had notes with an entry that said “budget? maybe Q4 adjust??? not sure.” When I didn’t specify that rule, ChatGPT rewrote it as “The team decided to adjust the budget in Q4.” That’s totally wrong — no decision was made. After I added my guardrail line to the prompt, it came back as “Unclear whether budget adjustments will happen in Q4,” which is at least faithful to my chaos.
Tip that actually saved me: if something in your raw notes is messy, leave the mess in. Don’t try to over-clean it before feeding it in. The model handles raw confusion better when you warn it in the instruction that uncertainty should stay visible.
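If you build prompts in a script, those guardrail lines can be a reusable snippet you tack onto every prompt so you never forget them. A rough sketch (the constant and function names are mine):

```python
# Anti-filler rules from the article: no invented facts, keep uncertainty visible.
GUARDRAILS = [
    "Do not add information that is not written here.",
    "If something is unclear, leave it as unclear.",
]

def add_guardrails(prompt):
    """Append the guardrail lines to any summarization prompt."""
    return prompt.rstrip() + "\n" + "\n".join(GUARDRAILS)
```

Then `add_guardrails("Summarize the following meeting notes...")` gives you the same prompt with the no-hallucination rules always attached.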
Breaking long meetings into chunks
Feeding a full transcript into ChatGPT at once nearly broke my browser. Anything huge just causes it to cut off halfway. My workaround is chopping long meeting notes into smaller segments before summarizing. A two-hour meeting transcript I had was impossible to feed in one go. I ended up pasting the first half, asking for a summary of that chunk, then pasting the second half and saying “continue from before, now summarize this part too.” Finally I asked it to combine both summaries into a single cleaned version. The result was readable and didn’t lose context.
If you’re hitting those moments where ChatGPT just stops mid-sentence, it doesn’t mean your notes are cursed. It just means you fed it too much. Think of it like scanning separate pages. The output might not flow yet, but after you stitch the chunks together with a final combine prompt, it reads like one continuous meeting recap.
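If you’re scripting this, the chopping step is easy to automate by splitting on paragraph breaks. A rough Python sketch (the character limit is a placeholder; the real cutoff depends on the model you’re using):

```python
def chunk_notes(text, max_chars=6000):
    """Split long notes into chunks at paragraph breaks, each under max_chars.

    A single paragraph longer than max_chars stays whole rather than being
    cut mid-sentence.
    """
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) > max_chars and current:
            chunks.append(current)
            current = para
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```

You’d then summarize each chunk separately and finish with one combine prompt, exactly like the manual two-halves routine above.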
Making action items stand out clearly
Whenever I clicked back into old meeting summaries, the single most important thing was always “who’s doing what by when.” But ChatGPT usually buries those inside paragraphs. The trick I use now is to literally demand a separate section: “At the end of the summary, list action items as a checklist, formatted like Task — Owner — Due Date.”
Here’s a simplified version of how I structure it:
```
Summary:
– Key discussion points written here…
Action items:
[ ] Draft API spec — Alex — next Friday
[ ] Update budget sheet — Sam — pending
```
That checkbox style makes it so much easier to scan later. I don’t even care if it doesn’t format perfectly. Even in plain text notes, the square brackets give your eyes something to land on.
Small warning though: if you don’t want fake deadlines invented, again you need to say “if due date not mentioned, leave it as blank rather than guessing.” Without that line, ChatGPT would happily add stuff like “next week” when nobody agreed on that.
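Once summaries come back in that checklist shape, you can even pull them into structured data. Here’s a sketch that parses the checklist lines and, importantly, leaves a missing due date blank instead of guessing (the line format assumptions are mine, matching the example above):

```python
import re

def parse_action_items(summary):
    """Pull '[ ] Task — Owner — Due' lines out of a summary.

    Missing owners or due dates stay as empty strings, never guessed.
    """
    items = []
    for line in summary.splitlines():
        m = re.match(r"\[ \]\s*(.+)", line.strip())
        if m:
            parts = [p.strip() for p in m.group(1).split("—")]
            items.append({
                "task": parts[0],
                "owner": parts[1] if len(parts) > 1 else "",
                "due": parts[2] if len(parts) > 2 else "",  # blank, not invented
            })
    return items
```

That mirrors the prompt rule in code: anything the notes didn’t say just stays empty.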
Telling it to keep the tone short
I once got back a two-page wall of text from ChatGPT summarizing a 15-minute meeting. Totally useless. What helped was demanding a specific style. For example: “Please keep this under 200 words total, in plain phrases, no corporate jargon.” I had to say “no corporate jargon” because otherwise it loves phrases like “synergize priorities.” You may laugh, but it really does that.
For me, a sweet spot has been around a half page of bulleted notes. That length holds enough detail to actually share, but still short enough I’ll reread it later. I also sometimes add “Write so that someone who missed the meeting can catch up in a minute or less.” That instruction makes it summarize like a story rather than a stack of paragraphs.
Automating meeting summary workflows
Right now I have a Zapier flow that records meetings in Google Meet, places the transcript into a Google Doc, and then fires that Doc text into ChatGPT for summarization. It mostly works fine except sometimes the Zap crashes if the transcript is too long. When that happens, I rewire it so it sends the Doc in chunks as I mentioned before.
There’s also a cleaner way I tried: connecting Otter transcripts directly into ChatGPT via an API call. The setup process got messy — I kept hitting errors where the system sent the same text twice, so the summary repeated itself. That double-fire bug drove me nuts. The fix was adding one small filter step in Zapier that checks “only continue if text length is over X characters but under Y.” That prevented the duplicate triggers. Took me too many tabs of fiddling to realize it, but once it worked the automation was smooth 🙂
If you don’t want to mess with Zapier costs, another option is exporting your transcript manually and then running prompts yourself. It’s slower but at least you know exactly what text you dropped in.
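That filter step is simple enough to reproduce outside Zapier too, if you run the workflow from your own script. A sketch of the same length check in Python — the thresholds here are placeholders, so pick numbers that match your transcripts:

```python
def should_summarize(transcript, min_chars=200, max_chars=20000):
    """Mimic the Zapier filter step: only continue if the transcript length is in range.

    The minimum screens out empty or duplicate stub triggers; the maximum
    screens out transcripts too long to send in one go.
    """
    return min_chars < len(transcript) < max_chars
```

Anything that fails the check either gets skipped (too short to be real) or routed through the chunking approach from earlier (too long for one call).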
Formatting summaries for teammates
One last practical thing. Even if ChatGPT gives good summaries, dumping them raw into Slack or email looks sloppy. I like to add little formatting rules in the instruction: “Use bullet points. Put action items under their own heading. Bold people’s names.” That way, when I copy paste into Slack, everything is instantly skimmable. No one complains that it looks like I pasted a novel.
Another fun trick is asking it to write the top three highlights as a TL;DR section. That section saves me when I’m scrolling my phone in the grocery store trying to remember what was said. If the model doesn’t know what’s most important, you can nudge it like “list the three things someone would be sad they missed if they skipped the meeting.” That tends to yield surprisingly accurate picks.
If you share into Google Docs instead, adding a simple table format works well. Something like this:
```
| Task | Owner | Status |
|------|-------|--------|
| Draft proposal | Casey | pending |
| Schedule training | Pat | complete |
```
That way, action items don’t vanish into text blobs.
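If you’re already parsing action items into structured data, rendering that table is a short loop. A rough sketch (the field names are my own convention, not a standard):

```python
def to_doc_table(items):
    """Render a list of action-item dicts as a markdown table for pasting into a doc."""
    rows = ["| Task | Owner | Status |", "|------|-------|--------|"]
    for it in items:
        rows.append(f"| {it['task']} | {it['owner']} | {it['status']} |")
    return "\n".join(rows)
```

Paste the result into a Google Doc (or any markdown-aware tool) and the rows stay scannable instead of dissolving into prose.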
Testing and adjusting prompts over time
Here’s the truth: the first prompt you write will almost never work the way you hoped. My first ten attempts made outputs that were either too vague, too long, or too much fiction. The trick is to treat prompts like real tools you prototype. Keep a note somewhere of what exact phrase you wrote that worked well, so you can reuse it next time. I literally keep a sticky note file on my desktop with lines like “No invented details, keep names, checklist at end.”
I also learned you don’t need complicated language for ChatGPT. Phrasing it like “please write a short summary, leave out small talk, and list tasks separately” works better than fancy stuff like “precisely distill the salient action items.” The simple version is like talking to a coworker; the fancy version just confuses it.
The process feels clunky at first, but once you land on a prompt set you like, it’s like flipping a switch. Suddenly you stop hating your meeting notes. Well, maybe you still hate the meetings themselves, but at least the summaries don’t make it worse 😛