Write SOP Checklists with GPT and Process Street

Connect Process Street to GPT Correctly the First Time

Okay so the first time I tried setting up GPT to auto-fill templates in Process Street — it looked simple. Emphasis on looked. I’d already made the checklist, called “New Client Onboarding,” and a few form fields were set up with variables I wanted to fill using GPT. Like, one for “Client Industry” and one for “Next Suggested Steps.” Sounded easy: get GPT to generate that stuff, drop the responses into the checklist, and done. Right? 🙂

Well. Here’s what I learned (the hard way).

Step-by-step:
1. First, log in to Process Street.
2. Go to the template you want. Not just a checklist — the actual template.
3. Click the three dots > Integrations > OpenAI.
4. You have to paste your OpenAI API key here (it comes from platform.openai.com). It won’t yell at you if the key is wrong — it just silently does nothing when you test it. Ask me how I know. ¯\_(ツ)_/¯
5. Then, define which fields will get filled by GPT. Now here’s the kicker — even if you map the prompt to a field like “Summary Paragraph,” it won’t do anything unless that field is already visible in the default Step view. Also, those prompt-to-field mappings can randomly disappear if you click away or lose Internet while editing. No warning. Thanks.
6. Always test by running a live checklist right away. Don’t trust what the Integration tab says — the prompt “Success!” message will still show even if GPT isn’t really getting triggered.
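Since that test button fails silently on a bad key, it’s worth sanity-checking the key yourself before pasting it in. A minimal sketch against the OpenAI REST API (a wrong key comes back as HTTP 401 from the models endpoint, not an error banner):

```python
import urllib.request
import urllib.error

def auth_headers(api_key: str) -> dict:
    """Headers the OpenAI API expects for authentication."""
    return {"Authorization": f"Bearer {api_key}"}

def check_openai_key(api_key: str) -> bool:
    """Return True if the key is accepted; a bad key gets HTTP 401."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers=auth_headers(api_key),
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # 401 = wrong or revoked key; anything else is still a failure here
        return False
```

Run `check_openai_key(...)` with the key you’re about to paste; if it returns False, fix the key before blaming Process Street.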

The other painful thing no one tells you — if you’re using multiple GPT prompts inside the same Process Street template and one fails (say, because the input is malformed: unescaped quote marks inside a JSON payload, for example), the whole chain just fails silently. No errors.

So my rule: only one prompt per field per run. Reset the field before re-running — GPT sometimes won’t overwrite its own previous answer.

Use Hidden Fields to Preload GPT Inputs

Here’s a weird trick, but it saved me hours on week 2 of this.

You can make hidden fields in Process Street steps that exist ONLY as input variables for GPT. You can dynamically fill them with form data — dropdowns, text areas — whatever the user enters. Then, you feed those into your GPT prompt using curly braces (like {{IndustryName}}).

I made this hidden step at the end with 4 fields:
– Industry
– Lead Qualifier Notes
– Last Contact Method
– Urgency Level

The user enters that data early in the checklist but never sees this hidden step. GPT does, when it runs the prompt.

Then in the GPT prompt step, I write something like:

“Use the following data to generate a professional yet informal email to re-engage this client who hasn’t replied. Include one fun fact about their industry. Data: {{Industry}}, {{LeadQualifierNotes}}, {{LastContactMethod}}, {{UrgencyLevel}}.”

Why this helps:
– Keeps your GPT prompt input clean and predictable.
– Avoids clutter in the visible steps participants see.
– You can nest fallbacks (like if Urgency Level is blank, GPT knows to say “as soon as you’re free” instead of “by Friday”).
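The curly-brace substitution plus that blank-field fallback boils down to something like this. This is a rough local sketch of the behavior, not Process Street’s actual engine, and the field names and fallback phrasing are the illustrative ones from above:

```python
import re

# Fallback phrasing for fields that might arrive blank (assumption: in
# Process Street itself this lives inside the prompt's instructions).
FALLBACKS = {"UrgencyLevel": "as soon as you're free"}

def fill_prompt(template: str, fields: dict) -> str:
    """Replace {{FieldName}} tokens, using a fallback when a field is blank."""
    def sub(match):
        name = match.group(1)
        value = fields.get(name, "").strip()
        return value or FALLBACKS.get(name, "")
    return re.sub(r"\{\{(\w+)\}\}", sub, template)

prompt = fill_prompt(
    "Re-engage this client. Reply {{UrgencyLevel}}. Industry: {{Industry}}.",
    {"Industry": "logistics", "UrgencyLevel": ""},
)
# UrgencyLevel is blank, so the fallback phrasing is substituted instead.
```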

Pro tip? Don’t add GPT response fields to the same step as your input fields. It gets messy. Just add a new step called “Generated Content” and put the GPT responses there neatly.

When Your Prompt Stops Saving and You Go in Circles

This one completely broke me for a solid 45 minutes. I was editing a GPT block inside a Process Street template, and it would randomly reset the entire prompt body back to the default example. I thought I was going crazy.

Turns out:
– If you change field names *after* saving the prompt, the system doesn’t know how to resolve them anymore and throws out your prompt content.
– If you use single quotes (apostrophes) inside the prompt without escaping them, sometimes GPT blocks think the content broke mid-string. The result? It quietly resets and you’re back to the Starting Prompt.

My fix was disgustingly manual:
– I wrote all my prompt text in Google Docs, pasted it into a plain-text notepad to kill formatting, then copied it into the GPT field.
– I stopped using field names like “Client’s industry” and changed everything to plain CamelCase names with no spaces or apostrophes, like “ClientIndustry”.

Now if it resets, I can just paste mine back in instead of crying softly into my keyboard.
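That renaming rule is easy to mechanize if you have a lot of fields. A small sketch (note it yields “ClientsIndustry” from the possessive, so rename by hand if you want “ClientIndustry” exactly):

```python
import re

def safe_field_name(label: str) -> str:
    """Strip apostrophes, then CamelCase the remaining words."""
    label = label.replace("'", "").replace("’", "")
    words = re.findall(r"[A-Za-z0-9]+", label)
    return "".join(w.capitalize() for w in words)

safe_field_name("Client's industry")     # -> "ClientsIndustry"
safe_field_name("Lead Qualifier Notes")  # -> "LeadQualifierNotes"
```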

Testing Multiple GPT Outputs Isn’t as Straightforward

So let’s say you want multiple GPT steps inside one checklist. Like a “Write Email Reply,” a “Summarize Client Feedback,” and a “Generate Bullet Points.” If you think you can just stack them like magic, surprise — nope.

Only one GPT block gets triggered at a time — the one in the active step or the first one with visible fields. So if they’re all on one step? The top one wins. The others sit there quietly, pretending like they’re doing something. Classic Slack-meeting behavior.

So here’s how to wrangle it:
– Put each GPT prompt in its own step.
– Reorder steps carefully so only one runs at a time.
– Name each one clearly. Don’t leave default names like “AI step” unless you want to play guessing games at 11pm.

You’ll also hit field limits — if two GPT prompts are writing into the same field, they overwrite each other. Always make unique fields like “SummaryGPT1” and “InsightsGPT2” even if you hate clutter.

Oh — and sometimes one GPT call randomly doesn’t fire when you hit “Run.” I usually just duplicate the step, delete the old one, and try again. Zapier-level nonsense.

How I Got Zapier to Trigger GPT Within Process Street

This took me way longer than it should’ve. I was trying to get a Zap to auto-create a Process Street checklist and also fill in GPT fields before anyone touched it. The goal: generate the first draft of a report before the actual team even saw it.

Here’s how I finally pieced the chain together:
– Trigger: new row in Airtable.
– Zapier: create Checklist in Process Street based on that row.
– Inside that same Zap: update Checklist Fields step, and one field is set to a long-form input, like a list of client URLs.

Now, inside the Process Street checklist, the step with a GPT prompt receives this input and auto-fires. But this only works *if* the field is prefilled correctly. If the GPT step is early in the checklist and sees a blank field, it errors silently.

Pattern I use: add a non-GPT gating step first, like “Review Input,” which slows the user down for 20 seconds, giving GPT enough time to finish its thing. 😛

Bonus Zap trick: add a 20-second delay before the GPT field-updater step if your API key is slow. Otherwise the checklist runs but GPT never fires.
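The delay-and-retry idea generalizes to a small polling helper. The fetch function below is a placeholder for however you read the checklist field (e.g. via the Process Street API); none of the Process Street specifics are shown:

```python
import time

def wait_for_field(fetch, timeout=60, interval=5):
    """Poll fetch() until it returns a non-empty value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = fetch()
        if value:
            return value
        time.sleep(interval)
    return None  # field never got filled in time

# Usage sketch — get_checklist_field is a hypothetical wrapper around
# whatever API call reads the field:
# wait_for_field(lambda: get_checklist_field(checklist_id, "ClientURLs"))
```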

Weird Caching Issues That Only Show Up Late at Night

It’s always after midnight when GPT starts ignoring me. I swear.

Sometimes Process Street caches the prompt results even if the checklist was regenerated with new data. You’ll see stale GPT responses inside a newly launched checklist, and nothing you do — refreshing, restarting the checklist — will fix it… unless:

– You rename the field that receives the response. Even just adding “_1” to the end makes Process Street reload that whole GPT block.
– You disable and re-enable the GPT integration. Click the three-dot menu in the setup tab again, it re-invokes everything.
– You fully delete the step and recreate it from scratch. Not ideal, but a useful Hail Mary play.

The cause? Something about browser-side caching of the prompt steps never fully clearing on re-runs. It’s especially bad in Safari. I switched to Chrome and my GPT responses magically showed up.

Use Repeaters with Extreme Caution If GPT Is Involved

Repeaters (where you dynamically spawn a list of form fields) can be combined with GPT… but they kind of hate each other.

Here’s what blew up on me:
– I had a Repeater step where the user added up to 5 client names.
– Each was stored in its own dynamic field, but as a batch.
– I thought I could just insert {{Clients}} into the GPT prompt and it would loop. Nope. You just get the whole string like: “[Bob, Alice, Derek]” written weirdly into the middle of the message, or worse — GPT tries to generate 5 emails in one block.

To actually loop over multiple inputs:
– You have to write a GPT prompt that includes instructions *to itself* like: “For each client in the following list, create a line item summary.”
– Use comma-separated values or a numbered list format. Make it predictable.
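Making the list predictable before it hits the prompt is a one-liner’s worth of work. A sketch, assuming the Repeater hands you a bracketed string like “[Bob, Alice, Derek]”:

```python
def clients_to_numbered_list(raw: str) -> str:
    """Turn '[Bob, Alice, Derek]' into a numbered list GPT can loop over."""
    names = [n.strip() for n in raw.strip("[]").split(",") if n.strip()]
    return "\n".join(f"{i}. {name}" for i, name in enumerate(names, 1))

block = clients_to_numbered_list("[Bob, Alice, Derek]")
prompt = (
    "For each client in the following list, create a line item summary.\n"
    + block
)
```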

And don’t ever trust it in Repeater steps with checkboxes. GPT can’t handle boolean True/False values snuggled inside a Repeater unless you parse that into plain English first.

I ended up making a “Flatten Clients Input” hidden field that turns the Repeater values into one long string before giving them to GPT.
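That flattening step looks roughly like this (the row fields are made up; the point is spelling out booleans before GPT sees them):

```python
def flatten_repeater(rows: list[dict]) -> str:
    """Join Repeater rows into one string, converting booleans to plain words."""
    parts = []
    for row in rows:
        fields = []
        for key, value in row.items():
            if isinstance(value, bool):
                value = "yes" if value else "no"
            fields.append(f"{key}: {value}")
        parts.append("; ".join(fields))
    return " | ".join(parts)

flat = flatten_repeater([
    {"Name": "Bob", "IsActive": True},
    {"Name": "Alice", "IsActive": False},
])
# -> "Name: Bob; IsActive: yes | Name: Alice; IsActive: no"
```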

The Fallback Stepper That Forgot to Exist

This one made me question how much of my setup was actually real.

So Process Street lets you build conditional steps — show this if X input equals Y. I made a fallback step called “Prompt Failed?” that was supposed to catch when GPT didn’t return anything. Simple logic: if GPT_Result field is blank, show this step.

It never triggered.

I messed with it for 2 days thinking the condition was wrong. Then I realized: the condition triggers *only* after the GPT step loads, and only if the field is stored as something the field logic engine can understand. If the GPT block failed **before** it even wrote to “GPT_Result,” then that field technically was never created — and therefore… no condition can ever find it.

I nearly screamed.

My weird workaround:
– I created a backup “Default Response” text field and set it with a dummy value at checklist start.
– Then I ran the GPT step to overwrite it.
– If the GPT doesn’t run, the old value remains. Now I can check “If Default Response contains ‘FILL_THIS_IN’, go to fallback step.”
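In code terms, the sentinel trick is just this (FILL_THIS_IN is my own placeholder string):

```python
SENTINEL = "FILL_THIS_IN"

def needs_fallback(field_value) -> bool:
    """True if GPT never overwrote the preloaded dummy value."""
    return SENTINEL in (field_value or "")

# Checklist start: Default Response is preloaded with "FILL_THIS_IN".
# After the GPT step runs (or doesn't):
needs_fallback("FILL_THIS_IN")             # -> True  (show the fallback step)
needs_fallback("Hi Bob, great chatting!")  # -> False (GPT wrote a real reply)
```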

So basically, I rigged it to fake a condition where something must happen, and then let GPT overwrite it. If it doesn’t? Well, now I know.
