Why I Needed Auto Translation for Slack
So this started because of our dev team in Berlin. They love posting long feedback threads in German. Fine. But then someone replies in English. Then someone else drops Mandarin. Within ten minutes, you’ve got a full-blown multilingual soup where no one is really sure who they’re responding to 😅
Slack lets you hover to translate now (sort of), but if you’re reading on mobile or pulling threads into a dashboard later — good luck. That manual translation hover bubble? It doesn’t transfer. And that’s a problem when you’re automating things.
I’m building a shared conversation archive where critical feedback threads get pushed to Airtable. Neat and searchable. Except if half the thread is untranslated, it becomes basically unusable unless you speak three languages.
So I figured… okay. Let’s rebuild the whole flow in Zapier using prompt-based translation with OpenAI. That way, every thread — not just a single message — gets translated inline and piped into Airtable or Notion. Sounds reasonable, right? It was not 😂
Capturing Just the Right Part of the Thread
Slack’s trigger options suck for this specific use case. If you use “New Message Posted to Channel,” you trigger every time *any* message is posted — but you don’t know if it’s a reply or a top-level message. And if you use “New Message Posted to Thread,” it only picks up *replies*, so you never get the actual first message that started it.
What I ended up doing: I used both triggers in parallel and filtered them using conditions.
– For the new message event, I checked if `thread_ts` *does not exist*. That means it’s a top-level post.
– For the thread replies, I checked `thread_ts` *is not equal to ts* — meaning it is a reply.
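In plain code, those two filter conditions boil down to this (a sketch, assuming the `ts` and `thread_ts` field names from Slack's event payloads):

```python
# Sketch of the two Zapier filter conditions, using Slack's
# `ts` / `thread_ts` fields from the message event payload.

def is_top_level(msg):
    # A top-level post carries no thread_ts at all
    return "thread_ts" not in msg

def is_reply(msg):
    # A reply has a thread_ts pointing back at the parent's ts
    return "thread_ts" in msg and msg["thread_ts"] != msg["ts"]

parent = {"ts": "1681079200.000100", "text": "Original post"}
reply = {"ts": "1681079210.000200", "thread_ts": "1681079200.000100", "text": "Antwort!"}

print(is_top_level(parent), is_reply(parent))  # True False
print(is_top_level(reply), is_reply(reply))    # False True
```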
This way, I rebuild the thread by collecting the parent and then all children. But here’s the catch — Slack doesn’t let you fetch the whole thread context directly in Zapier 😤
So I had to do a webhook “custom request” step where I called the Slack API directly with `conversations.replies`. That did work — but I had to manually supply the `channel` and `thread_ts` from earlier steps. Awkward, but got the job done.
If you’re wondering what it returns, here’s what a sample object looks like:
```json
{
  "messages": [
    {
      "text": "Original post in German",
      "user": "U12345",
      "ts": "1681079200.000100"
    },
    {
      "text": "Antwort!",
      "user": "U54321",
      "ts": "1681079210.000200",
      "thread_ts": "1681079200.000100"
    }
  ]
}
```
Now you have the full conversation. Or at least the messages. You do *not* get usernames unless you fetch user info separately. That’s fun. 🙄
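If you want to do that lookup yourself, here's a rough sketch against Slack's `users.info` endpoint. The token env var is a placeholder for your bot token, and the response parsing follows the shape Slack documents:

```python
# Sketch: turning Slack's bare user IDs into readable names via the
# users.info Web API method. SLACK_TOKEN is a placeholder env var.
import json
import os
import urllib.request

def fetch_users_info(user_id):
    req = urllib.request.Request(
        f"https://slack.com/api/users.info?user={user_id}",
        headers={"Authorization": f"Bearer {os.environ.get('SLACK_TOKEN', '')}"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def name_from_users_info(payload, fallback):
    # Prefer display_name, then real_name; keep the raw U12345 ID if
    # the lookup gave us nothing usable
    profile = payload.get("user", {}).get("profile", {})
    return profile.get("display_name") or profile.get("real_name") or fallback
```

Caching these lookups is worth it, since the same user tends to appear many times per thread.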
Prompt Engineering to Translate the Thread
This part actually felt kind of magical. I used the OpenAI step in Zapier — with the real GPT model — and built a custom prompt that looked like this:
```
You are a translator. Translate this thread into American English:
{{WebhookSlackResponse.messages}}
```
Initially I just dumped the array in there directly and it was a mess. It came out like:
`{'text': 'Antwort!', 'user': 'U54321'…` and so on. Not human-readable at all.
So I added a Formatter step before it that mapped each message into this format:
`[username]: message` — like `Timo: Das ist gut!` — and joined them with new lines.
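If you'd rather see that Formatter logic as code, this is roughly what the two steps do. The `resolve` argument is a stand-in for whatever user-ID lookup you wired up earlier:

```python
# Rough Python equivalent of the Formatter steps: map each Slack
# message to "username: text", then join the lines with newlines.

def format_thread(messages, resolve=lambda uid: uid):
    lines = [f"{resolve(m['user'])}: {m['text']}" for m in messages]
    return "\n".join(lines)

thread = [
    {"user": "U12345", "text": "Das ist gut!"},
    {"user": "U54321", "text": "Antwort!"},
]
names = {"U12345": "Timo", "U54321": "Anna"}
print(format_thread(thread, names.get))
# Timo: Das ist gut!
# Anna: Antwort!
```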
This is the block of Zapier steps I used:
1. **Slack Incoming Message** ➡ check if thread
2. **Slack API Request** — get full thread
3. **Formatter** — map each line to `username: message`
4. **Formatter** — join lines into single text block
5. **OpenAI** — prompt: “Translate into US English”
One small thing that tripped me up: if you pass in more than around 1000 characters in the prompt, OpenAI sometimes chokes or just stops mid-thread. So I used a conditional loop with Webhooks to chunk long threads into blocks. Not elegant, but functional.
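For what it's worth, the chunking logic itself is simple. Here's a sketch that splits on message boundaries so no message gets cut in half; the ~1000-character limit is just the threshold that worked for me, not an official number:

```python
# Crude chunker: accumulate whole lines (one message per line) until
# adding the next one would blow past the limit, then start a new chunk.

def chunk_thread(text, limit=1000):
    chunks, current, size = [], [], 0
    for line in text.splitlines():
        # +1 accounts for the newline re-added when joining
        if size + len(line) + 1 > limit and current:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line) + 1
    if current:
        chunks.append("\n".join(current))
    return chunks
```

Each chunk then goes through its own OpenAI step, and the translated pieces get concatenated back in order.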
Handling Posts with Mixed Languages
Okay, real talk. Not all messages need translating. Some team members slide between English and their native language in the same reply, like:
“Ich finde das cool — but can we ship before Friday?”
I tried detecting the language first using AI, but it slowed things down way too much. Instead, I rewrote the prompt like this:
```
Translate any non-English content in the following thread into American English. If a message is already in English, leave it untouched. Only return user-visible output.
Thread:
…
```
This works decently well. GPT is pretty good at ignoring English when told to. But it’s not perfect — it sometimes rewrites perfectly good English anyway, like turning “we could iterate” into “we try options” 🤷
Also: GPT leaves out messages it doesn’t understand. Like, just silently excludes them. That’s a little scary, honestly. So I added a basic fallback that JSON logs the original untranslated thread into Airtable alongside the GPT one, just in case someone needs to do a manual sanity check.
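A cheap way to catch those silent omissions (a sketch, not something Zapier gives you out of the box) is just comparing line counts between input and output:

```python
# Crude tripwire: if the translated thread has fewer lines than the
# original, GPT probably dropped a message somewhere.

def translation_dropped_lines(original, translated):
    return len(translated.splitlines()) < len(original.splitlines())
```

It won't tell you *which* message vanished, but it tells you when to go look at the raw log.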
Posting Back to Slack with Translations
At this point the translation output was solid — readable, chronological, good tone matching. Now I had to decide *where* to post it back.
Option 1: Add the translated version as a reply in the *same thread*. Easy to follow, but clutters the thread. Especially if there are multiple language chains from multiple bots.
Option 2: Send it as a DM or into a private moderation channel. Cleaner, but then folks in the original convo don’t see it.
I went with a hybrid. If a thread had more than 3 messages and more than one unique language, it went to a `#translated-threads` channel. That way the core translation log exists somewhere shared. I used a Formatter step again to make the output readable:
```
Thread from #support-de:
Anna: Guten Morgen zusammen!
Ben: Good morning 🙂
Anna: Gibt es News vom API-Team?

Translated:
Anna: Good morning all!
Ben: Good morning 🙂
Anna: Is there any news from the API team?
```
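The routing rule itself is tiny; as code it would be something like this sketch:

```python
# Hybrid routing rule: broadcast to #translated-threads only when the
# thread is long enough and actually multilingual.

def should_broadcast(messages, languages):
    return len(messages) > 3 and len(languages) > 1
```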
One thing to watch out for here: putting the GPT output into a Slack Webhook as-is often hits the character limit. I had to slice larger blocks and use the chat.postMessage API with `blocks` formatting to safely deliver multi-part messages.
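The slicing I ended up with looks roughly like this. The 3000-character cap matches Slack's documented limit on a section block's `text` field; the slicing is crude, so a very long message can still get split mid-word:

```python
# Sketch: slice a long translation into Slack Block Kit `section`
# blocks, each under the 3000-character text limit, ready to pass
# as the `blocks` argument to chat.postMessage.

def to_blocks(text, limit=3000):
    blocks = []
    for start in range(0, len(text), limit):
        blocks.append({
            "type": "section",
            "text": {"type": "mrkdwn", "text": text[start:start + limit]},
        })
    return blocks
```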
Keeping Translations Organized in a Database
Ultimately I wanted this whole thing to feed our Notion tracker. But I switched to Airtable because Notion requires extra steps for app tokens, and I was mid-sprint 😬 Airtable just lets you dump records by webhook and sort them later.
Here’s what my Airtable record fields included:
– `channel_name`
– `thread_ts`
– `translated_text`
– `original_messages_json`
– `number_of_replies`
– `message_languages_detected`
I put the original payload in there *raw* just for backup purposes. That has already saved me twice.
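If you're curious what that dump looks like in code form, here's a sketch against Airtable's REST API. The base ID, table name, and token are all placeholders you'd fill in, and the field names match the columns above:

```python
# Sketch: build one Airtable record from a translated thread and POST
# it to Airtable's REST API. Base ID, table, and token are placeholders.
import json
import urllib.request

def build_record(channel, thread_ts, translated, original_msgs, languages):
    return {
        "fields": {
            "channel_name": channel,
            "thread_ts": thread_ts,
            "translated_text": translated,
            "original_messages_json": json.dumps(original_msgs),  # raw backup
            "number_of_replies": max(len(original_msgs) - 1, 0),
            "message_languages_detected": ", ".join(languages),
        }
    }

def post_record(record, base_id, table, token):
    req = urllib.request.Request(
        f"https://api.airtable.com/v0/{base_id}/{table}",
        data=json.dumps(record).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)
```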
One time GPT hallucinated a totally different response that wasn’t even in the input. User said “bitte später” and GPT rendered it: “Sure, let’s launch now!” I had to manually check the raw payload to debug that one. So don’t skip logging raw input — it’s boring, but useful.
Weird Zapier Errors That Might Surprise You
A few random things I ran into, because this is Zapier and nothing is ever normal:
– Webhook steps sometimes time out with large Slack thread payloads, even when they're under 500 KB. I fixed it by trimming message properties (don't pass attachments, reactions, etc.).
– If you have more than one OpenAI step per Zap, Zapier shards the memory between them. You’ll get random GPT hallucinations or truncations.
– Occasionally GPT fails silently. Like, you get a 200 OK but the output is blank. No error, just a ghost. Adding a dummy response check (e.g., text contains more than 10 characters) helped filter those.
– Slack usernames in thread replies are only `U12345` IDs. You have to call the API to turn those into real names. Otherwise your translated thread says: `U12345: Yes that’s fine` 🙄
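That dummy response check from the third bullet is about as basic as it sounds; a sketch:

```python
# Guard against the "200 OK but empty" ghosts: require some minimum
# amount of actual content before trusting the GPT output.

def looks_like_real_output(text):
    return isinstance(text, str) and len(text.strip()) > 10
```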
I’m probably still missing edge cases, but those were the ones that punched me in the face on day one.
Final Thoughts from the Zone Where Zaps Break
Right now this auto-translation flow makes our cross-lingual Slack threads readable, reviewable, and searchable — which is all I needed it to do. But typing this out, it reminds me how many duct-tape steps are holding it together.
Would I trust this to scale to 300 users across five languages? No way. But for a mid-sized team where the worst thing that happens is GPT accidentally compliments a bug report, it’s fine 😛
And anyway, it breaks less than half the apps in my other tabs, so…