Why automate meeting summaries at all
Every Monday at 10am, we have a sync meeting. And almost every Monday by lunch, someone says: “Wait, what were the action items from that?” And unless I manually comb through the transcript or Slack thread, it’s lost in the vortex of forgotten good intentions. That’s basically why I automated the summary emails — not because it was fancy, but because no one was remembering who was doing what 👀.
Also, I’d tried using Notion’s AI summarizer, but it requires you to paste in the transcript. And Zoom’s summaries were, well… not even summaries. Just a transcript with some bold headers pretending to be helpful. So I turned to GPT and Slack — two tools I already lived in daily anyway.
If you’re like me, you might already have a Slack channel like #meeting-notes that gets occasional love and frequent ghosting. Automating summaries to that channel, AND pinging each team member with what they were assigned verbally, helped me actually close the loop. No manual copy-paste. No forgotten notes. Just recurring clarity. Mostly 🙂
The GPT and Slack combo actually works
Here’s how it plays out:
1. Meeting ends (Zoom, Google Meet, or even an in-person chat recorded via Otter).
2. Transcript is either auto-saved or uploaded to a specific Google Drive folder.
3. A Zap gets triggered when a new file shows up there.
4. That file content is parsed and sent to GPT-4 via OpenAI’s API.
5. The AI processes it using a predefined prompt.
6. The output is posted to a specific Slack channel (like #meeting-summaries) and/or sent to individual users depending on mentions.
So yeah, there's a lot of wiring between all those steps. Let me break it down and show where I kept tripping.
First, Google Drive triggers in Zapier sometimes just… stop firing. Like, zero error messages, but a file gets uploaded and the Zap doesn’t run. Turns out, if you rename the Drive folder after adding a Zap, it silently breaks. No alert, no hint. Thanks, Google 🙃.
Second, GPT responses for long transcripts (say, over 5 pages of text) often get cut off mid-paragraph, especially with the GPT-4 model, because the prompt eats up too many tokens. So I had to write a mini chunking function using a Code step in Zapier (you can also use Make/Integromat, but let's stay focused).
Here’s what that GPT prompt roughly looks like:
```
You're a professional meeting analyst. Given the following meeting transcript, generate:
- A 3-sentence summary of the discussion.
- A bulleted list of action items with person names.
- Any unresolved questions or blockers.
Transcript:
{{Insert transcript here}}
```
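For the curious, here's roughly what the Code step that sends this prompt looks like. This is a minimal sketch, assuming a Python Code step with `requests` available and the transcript and API key mapped into `input_data` (those field names are my own, not Zapier defaults):

```python
# Rough sketch of the "Run Python" Code step that calls GPT-4.
# Assumes input_data carries "transcript" and "openai_api_key" from
# earlier steps; those field names are my own mapping, not defaults.
import requests

PROMPT = """You're a professional meeting analyst. Given the following meeting transcript, generate:
- A 3-sentence summary of the discussion.
- A bulleted list of action items with person names.
- Any unresolved questions or blockers.
Transcript:
{transcript}"""

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {input_data['openai_api_key']}"},
    json={
        "model": "gpt-4",
        "messages": [{
            "role": "user",
            "content": PROMPT.format(transcript=input_data["transcript"]),
        }],
        "temperature": 0.2,  # keep summaries consistent run to run
    },
    timeout=120,
)
resp.raise_for_status()
# Zapier Code steps return whatever you assign to `output`
output = {"summary": resp.json()["choices"][0]["message"]["content"]}
```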
I had to write a whole script with logic like this (rough sketch right after the list):
- Trim the transcript after ~3,000 tokens
- Start splitting into chunks if it's too long
- Handle cases where an assignment flips mid-meeting (e.g., Jane: “I’ll get the deck done” and then later “Wait no, Mike’s doing it”)
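Here's a minimal version of that chunking logic. Zapier's Python step doesn't ship a real tokenizer like tiktoken, so this leans on the rough "1 token is about 4 characters" heuristic; treat the numbers as tunable, not gospel:

```python
# Minimal chunking sketch: split a long transcript into ~3000-token
# pieces, approximating tokens as 4 characters each (no tiktoken in
# a Zapier Code step). Splits on line boundaries so a speaker turn
# never straddles two chunks.
MAX_TOKENS = 3000
CHUNK_CHARS = MAX_TOKENS * 4  # rough 1-token-to-4-chars heuristic

def chunk_transcript(text, chunk_chars=CHUNK_CHARS):
    chunks, current, size = [], [], 0
    for line in text.splitlines(keepends=True):
        if size + len(line) > chunk_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks

chunks = chunk_transcript(input_data["transcript"])
# Each chunk gets summarized separately, then the partial summaries
# are merged in a follow-up GPT call (not shown here)
output = {f"chunk_{i}": c for i, c in enumerate(chunks)}
```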
And yes, parsing speaker attribution is a bit messy. Otter sometimes labels two lines with the same speaker even when it’s clearly a different voice. GPT can only guess based on context.
Handling different kinds of meetings
One surprise I hit: not all meetings need the same type of summary. I originally tried using one prompt for every meeting — team syncs, standups, strategy calls, even 1:1s. Bad idea.
For example, I had a 1:1 where we talked mostly about personal development and process frustrations, not tasks. The GPT summary came back like:
> Action Item: Start using OKRs
> Assigned to: Both
Not only was that vague, we hadn’t even agreed to that! The model hallucinated a commitment we never made. 🙄
So now I use branching logic in my automation, depending on a keyword in the file name or a label in the calendar event. For example:
- “Weekly Sync” uses a summary + action items prompt.
- “1:1” triggers a softer summary without assigning tasks unless explicitly stated (“I’ll do that by next Monday”).
- “Customer Call” includes an extra section: “Customer feedback or pain points.”
If you’re using Google Calendar, grabbing the `event title` in Zapier is really helpful here to decide what kind of prompt to use.
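The branching itself is just keyword matching. A minimal sketch, assuming the event title comes through `input_data` (the keywords and prompt names below are my own conventions, not anything built into Zapier):

```python
# Sketch: pick a prompt variant based on the calendar event title.
# Keyword list and prompt names are my own conventions.
PROMPT_BY_KEYWORD = {
    "weekly sync": "summary_plus_action_items",
    "1:1": "soft_summary_no_tasks",
    "customer call": "summary_with_feedback_section",
}

def pick_prompt(event_title):
    title = event_title.lower()
    for keyword, prompt_name in PROMPT_BY_KEYWORD.items():
        if keyword in title:
            return prompt_name
    return "summary_plus_action_items"  # safe default for unlabeled meetings

output = {"prompt_name": pick_prompt(input_data["event_title"])}
```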
Routing GPT summaries back into Slack
Posting to Slack was supposed to be the easy part. But it turns out that if your summary includes @mentions and the formatting is off (for example, usernames sent as plain text rather than Slack user IDs), Slack just drops the mention without warning.
Even worse: GPT sometimes puts a dash before names instead of bullets, and Slack interprets that as a quote block. So this:
```
- @jordan Complete Q3 deck
- @katie Follow up with product
```
rendered like this:
>@jordan Complete Q3 deck
>@katie Follow up with product
Which looks like a chat reply, not an actionable task. I now wrap the entire summary in triple backticks (```markdown) when posting to preserve formatting. That fixed a lot of the layout weirdness.
Also learned this the hard way: if you’re posting with a bot account, it can’t @-mention people directly unless it’s installed with the right permissions. So now I include user IDs dynamically from a lookup table (in Airtable) mapped to employee names.
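Here's a sketch of that cleanup step: swap `@name` for Slack's actual `<@USER_ID>` mention syntax using the Airtable-sourced map, and de-dash the bullets so nothing renders as a quote block. The hardcoded names and IDs are purely illustrative:

```python
# Sketch: make GPT output Slack-safe. <@U…> is Slack's real mention
# format; a plain "@jordan" won't ping anyone. The user_ids dict is
# hardcoded here but comes from an Airtable lookup in my Zap.
import re

user_ids = {"jordan": "U01AAAAA1", "katie": "U01BBBBB2"}

def slackify(summary):
    # Replace @name with <@USER_ID> where we know the mapping
    summary = re.sub(
        r"@(\w+)",
        lambda m: f"<@{user_ids[m.group(1).lower()]}>"
        if m.group(1).lower() in user_ids else m.group(0),
        summary,
    )
    # Leading dashes confuse Slack's rendering; swap in real bullets
    summary = re.sub(r"^[–>-]\s*", "• ", summary, flags=re.MULTILINE)
    return summary

output = {"slack_text": slackify(input_data["summary"])}
```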
Here’s a practical tip table based on experience:
| Problem | Fix |
|---------|-----|
| Mentions not working | Use Slack user IDs, not plain @names |
| File too long for GPT | Chunk input into segments of <=3000 tokens |
| Wrong summary type | Use the calendar event title to choose the prompt |
| Slack formatting off | Wrap in triple backticks or sanitize symbols before posting |
| Missing Zap trigger | Don't rename Google Drive folders after configuring the Zap |
| GPT hallucinating action items | Instruct it to only include explicitly stated assignments |
Almost forgot: Slack messages have a size limit. Go past 4000 characters and the API refuses the post. So if GPT hands you a massive summary, truncate it and append a “Read full transcript here” link pointing at the original Drive file.
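The truncation itself is a few lines. A sketch, with the limit padded down to leave room for the link (`drive_url` is whatever your Drive step exposes as the file link):

```python
# Sketch: keep the Slack message under the size limit and link back
# to the source file. 3900 leaves headroom for the footer; the
# input_data field names match my Zap, not anything standard.
SLACK_LIMIT = 3900

def truncate_for_slack(text, drive_url):
    if len(text) <= SLACK_LIMIT:
        return text
    # <url|label> is Slack's link syntax
    footer = f"\n…truncated. <{drive_url}|Read full transcript here>"
    return text[: SLACK_LIMIT - len(footer)] + footer

output = {"slack_text": truncate_for_slack(input_data["summary"], input_data["drive_url"])}
```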
Extra tweaks that actually helped
There were a couple of things I totally didn’t expect to matter but made a surprising difference:
- **Timestamp alignment**: I added a tiny step that checks the `Transcript last modified` timestamp against the `Calendar event end time`. If the delta is more than an hour, I don't send it; I just flag it for manual review (rough sketch after this list). Too many times the wrong file got summarized 🤷
- **Acknowledgement pings**: I added a checkbox in Airtable next to each team member's task. When checked, Slack DMs the person with “Hey, you’ve got a follow-up from today’s meeting: [X]. Just confirming you saw it?” Not everyone loves it, but it works 😛
- **Bot personality**: I hacked in a tiny touch of tone variation depending on the meeting type. For informal syncs, it signs off with “Let’s crush this week 💪” (yes, yes, cringe). For client calls, it says “Here are the follow-ups based on our customer check-in.” Helps it feel more human.
- **Fallback to email**: If a Slack user ID isn't found, it emails the person their tasks via the Gmail API instead. That alone saved me from Slack permission disasters more than once.
- **Debug channel**: I created an internal #meeting-summary-debug channel where failures post their error trace. One time I had 11 messages in a row that said: “Could not find speaker name.” Turned out Otter had switched to a new export format without notice. Classic.
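Here's the timestamp check from the first bullet above, as a rough sketch. It assumes both timestamps arrive as ISO-8601 strings; double-check what your trigger actually emits:

```python
# Sketch of the timestamp sanity check: skip the summary if the
# transcript was modified more than an hour after the meeting ended.
# Assumes ISO-8601 strings like "2024-05-06T11:02:00" in input_data.
from datetime import datetime, timedelta

modified = datetime.fromisoformat(input_data["transcript_modified"])
event_end = datetime.fromisoformat(input_data["event_end"])

delta = abs(modified - event_end)
output = {
    "send_summary": "yes" if delta <= timedelta(hours=1) else "no",
    "delta_minutes": str(int(delta.total_seconds() // 60)),
}
```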
Sometimes I get weird edge cases: a single-word summary (“Discussed.”) or a draft where GPT includes its own instructions, like “Insert bullet points here.” These now trigger a quality-check function (sketch below) that resubmits the prompt with additional guardrails whenever the output is too short or contains brackets.
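The quality gate itself is deliberately dumb. A sketch of the checks (the resubmission is just a second GPT step with stricter instructions, not shown):

```python
# Sketch of the quality gate: flag output that's suspiciously short,
# still contains template brackets, or echoes the prompt back.
def needs_retry(summary):
    too_short = len(summary.split()) < 20            # catches "Discussed."
    has_brackets = "{{" in summary or ("[" in summary and "]" in summary)
    leaked_prompt = "insert bullet points" in summary.lower()
    return too_short or has_brackets or leaked_prompt

output = {"needs_retry": "yes" if needs_retry(input_data["summary"]) else "no"}
```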
But honestly? Despite the occasional derailment, this setup has been rock solid. Way better than forgetting action items for the fifth week in a row ¯\_(ツ)_/¯