Use ChatGPT to Create Survey Questions from a Research Brief


Converting the research brief into actual prompts

The first issue I always hit when using ChatGPT to write survey questions from a research brief is that briefs are, 95% of the time, not prompt-friendly. They’ll have vague phrases like “explore customer perceptions of brand trust” and bullet points like “understand post-purchase satisfaction drivers.” Sounds fine until you paste that into ChatGPT and ask it to generate survey questions — and it spits out 20 junk questions like:

– “To what extent do you trust this brand?”
– “How do you feel about your purchase?”

Like… come on.

Here’s what I now do every time. I read the brief (even if I just skim it), copy-paste each bullet into a doc, and then directly underneath I write what a good prompt version of it actually looks like. For example:

**Brief says:**
“Explore reasons customers cancel after trial”

**ChatGPT-friendly prompt version:**
“Generate 10 survey questions designed to help understand why customers decide to cancel their subscription right after the trial ends. Make sure questions target things like pricing, unmet expectations, or unclear value.”

If you don’t do this transform-first step, ChatGPT will default to high-level, overly polite language that makes it sound like a press release in disguise. It’s too helpful and not skeptical enough 😛
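If you end up doing this for a lot of bullets (or through the API instead of the chat window), the transform-first step is easy to script. Here’s a rough sketch in Python with the openai SDK; the model name and the rewrite instructions are my own placeholders, not something from a real brief:

```python
# Sketch: turn vague brief bullets into ChatGPT-friendly prompts, then
# generate survey questions from each one. Assumes the openai Python SDK
# (v1.x) and an OPENAI_API_KEY in the environment; the model name and the
# instruction wording are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

BRIEF_BULLETS = [
    "Explore reasons customers cancel after trial",
    "Understand post-purchase satisfaction drivers",
]

PROMPT_TEMPLATE = (
    "Generate 10 survey questions for this research brief item: '{bullet}'. "
    "Target concrete causes (pricing, unmet expectations, unclear value) "
    "rather than general sentiment, and avoid vague 'how do you feel' phrasing."
)

for bullet in BRIEF_BULLETS:
    prompt = PROMPT_TEMPLATE.format(bullet=bullet)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {bullet} ---")
    print(response.choices[0].message.content)
```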

Choosing between multiple choice and open text

ChatGPT leans way too heavily on Likert scales and multiple choice. I always have to manually break the pattern. Like yeah, multiple choice is fine, but sometimes I want:

– Forced ranking (“Rank these features from most important to least”)
– Open text (“In your own words, what frustrated you about the experience?”)
– Binary filters (“Did you cancel your subscription? Yes/No”)

So I started writing this directly into the prompt itself:

“Include a mix of multiple choice, Likert scale, and a few open-ended questions. Prioritize clarity and emotional honesty in wording.”

OR, if I’m trying to reverse engineer a broken flow:

“Write 8 questions intended to expose emotional or usability friction in a checkout process. Use a mix of yes/no, simple rating, and qualitative text fields.”

If you leave it vague, it’ll default to asking everything on a 1-5 scale with the same bland phrasing. When I caught myself rewording 70% of the output every time, I realized this wasn’t just a style quirk — it meant I wasn’t giving it enough of a shape to begin from.
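If you’re scripting any of this, it also helps to make the format mix an explicit parameter instead of retyping it every time. A tiny sketch (the format names and counts here are just illustrative):

```python
# Sketch: bake the question-format mix into the prompt so ChatGPT can't
# default to wall-to-wall Likert scales. Format names and counts are
# illustrative, not a recommendation.
def build_survey_prompt(topic: str, mix: dict[str, int]) -> str:
    mix_text = ", ".join(f"{count} {fmt}" for fmt, count in mix.items())
    return (
        f"Write survey questions about {topic}. "
        f"Use exactly this mix: {mix_text}. "
        "Prioritize clarity and emotional honesty in the wording."
    )

prompt = build_survey_prompt(
    "friction in our checkout process",
    {"yes/no questions": 2, "1-5 rating questions": 3, "open-ended questions": 3},
)
print(prompt)
```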

Fixing subtle bias baked into the wording

Here’s the part that really annoyed me. I asked ChatGPT to write questions about a new onboarding flow that we’d already gotten 20 support tickets on — stuff like, “Where’s my saved cart?” or “Why do I have to add my phone number now?”

The questions it generated looked fine at first glance:

– “Did you enjoy the new onboarding experience?”
– “How helpful did you find the sign-up instructions?”

Too soft. Too biased. The phrasing assumes the experience was good and just tweaks how the enjoyment is worded. Even when asked to be “neutral”, it errs on the side of optimism.

Here’s how I solved it:

Instead of asking for survey questions, I asked ChatGPT to simulate an angry customer who just experienced the flow. That gave me completely different phrasing suggestions:

– “What part of the sign-up process felt unclear or unnecessary?”
– “Was there a moment where you considered leaving the page? Why?”

This felt more raw — and more aligned with the actual vibe from our support tickets. It sounds weird, but switching to a role-based prompt like “Respond as a frustrated customer” or “Write honest questions from the POV of someone confused” unlocks a new tone that gets more real answers.

¯\_(ツ)_/¯ we build onboarding flows and then sabotage our own feedback mechanisms, huh?
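Anyway, if you’re hitting the API rather than the chat window, that role switch maps neatly onto the system message. A minimal sketch, with the persona wording and model name as my own assumptions:

```python
# Sketch: role-based prompting via the system message, using the openai
# SDK (v1.x). The persona wording and model name are assumptions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder
    messages=[
        {
            "role": "system",
            "content": (
                "You are a frustrated customer who just went through a "
                "confusing onboarding flow. Write blunt, honest survey "
                "questions from that point of view."
            ),
        },
        {
            "role": "user",
            "content": "Write 8 survey questions about the new sign-up flow.",
        },
    ],
)
print(response.choices[0].message.content)
```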

What broke when I used the wrong tone

I once threw ChatGPT a prompt like: “We need to test whether users understand our payment page changes. Generate 15 survey questions looking for usability gaps.”

It delivered a clean and elegant set of questions… and literally none of them got useful answers when sent out. People either skipped them or gave neutral/positive feedback. No signal.

Turns out, the survey *sounded* too professional. It triggered “corporate buzzword” fatigue in users. Phrasing like:

– “Please rate the clarity of the payment step.”
– “How intuitive was the location of the discount code input?”

Nobody talks that way. Real users scan and click fast; they don’t consciously notice whether something is “intuitive.” I re-ran the same ChatGPT prompt but this time added: “Make these questions sound like your friend texting you about a weird checkout flow.”

Totally different vibe:

– “Was there a part where you didn’t know what to click next?”
– “Did anything on that page make you stop and squint?”

Got way more signal. People left funny responses like: “Yeah the font was like 6px wtf.” Which helped us realize the UI was glitching on mobile Safari (again).

Getting ChatGPT to use skip logic phrasing

Skip logic is one of those things that sounds like fancy survey speak, but it’s basically just smart filtering. Like:

– If user says “No” to “Did you complete the process?” → Then show follow-up: “What made you stop mid-way?”

But ChatGPT won’t do this unless you ask specifically. Even if your input mentions skip logic or conditional blocks, it usually won’t structure the questions in a way that links them.

So I brute-force it. I say:

“Write 10 survey questions. For any question that ties to a conditional flow, describe the follow-up logic in plain English after the question.”

Example output:

– “Did you cancel during your first week?”
– (If yes, ask: What pushed you to cancel so soon?)

– “Have you referred a friend yet?”
– (If no, ask: What’s held you back from referring?)

This style forces it to show the logic inline — which makes building the survey way easier in whatever tool you’re using (Typeform, Survicate, whatever). I screenshot that output and send it to the PM as-is.
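When I want that logic in something sturdier than a screenshot (say, to map onto the survey tool later), I mirror the same inline format in a tiny data structure. A sketch using the two example questions above; the class and field names are made up for illustration, not taken from any particular tool:

```python
# Sketch: carry the skip logic from ChatGPT's inline format into a small
# data structure you can map onto whatever survey tool you use.
from dataclasses import dataclass
from typing import Optional


@dataclass
class FollowUp:
    trigger_answer: str  # the answer that reveals the follow-up
    question: str


@dataclass
class SurveyQuestion:
    text: str
    follow_up: Optional[FollowUp] = None


questions = [
    SurveyQuestion(
        text="Did you cancel during your first week?",
        follow_up=FollowUp("Yes", "What pushed you to cancel so soon?"),
    ),
    SurveyQuestion(
        text="Have you referred a friend yet?",
        follow_up=FollowUp("No", "What's held you back from referring?"),
    ),
]

for q in questions:
    print(q.text)
    if q.follow_up:
        print(f"  -> if '{q.follow_up.trigger_answer}': {q.follow_up.question}")
```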

What to do when responses feel generic

Even when you write great questions, your response pool can end up feeling generic. I had a batch of survey data where over half of the “open text” answers were things like:

– “Seems fine.”
– “No complaints.”
– “Worked for me.”

Which, lol, thanks. Super helpful.

I tweaked the prompt and found a trick that works more often than not: Ask ChatGPT to suggest wording that “makes the responder feel like they’re sharing a tip for a friend or new hire.”

This shifts the tone of the survey questions without making them overly casual. For example:

Instead of:
“What could we improve on this page?”

Try:
“If your friend was using this for the first time, what’s the one thing you’d tell them to look out for on this page?”

That framing brings out specific quirks like:

– “Tabs don’t work unless you click them twice.”
– “Login button scrolls offscreen on small laptops.”

It’s wild how much difference the framing makes. People love giving advice more than just pointing out flaws.

Trying multi-language prompts with mixed results

We once needed the same survey written in English, Spanish, and Portuguese. Let me just say: do not trust ChatGPT’s translations blindly. When I ran:

“Translate these survey questions into Spanish and Portuguese in an informal, customer-friendly tone.”

…I got back a suspiciously perfect-looking set. But two of the Spanish questions showed up as formal (usted instead of tú), and one of the Portuguese ones had a phrase that our regional CS team said sounded robotic.

I now run it like this:

1. Ask it to write the questions in English with an informal, casual tone.
2. Copy and paste just the final English set into a new prompt: “Translate this into Latin American Spanish using a casual non-corporate customer voice. Double check tone.”
3. Rinse and repeat for Portuguese.

Then, send to a real human translator for a sanity check.
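If you’re running this flow through the API, the same steps chain together easily, with the human translator still as the final gate. A rough sketch (model name and prompt wording are assumptions):

```python
# Sketch: two-pass flow -- draft the questions in English first, then
# translate the final English set in a separate call per language.
# Model name and prompt wording are assumptions; a human translator
# still reviews everything before it ships.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


english = ask(
    "Write 8 survey questions about cancelling a subscription after a free "
    "trial. Informal, casual tone."
)

translations = {}
for language in ["Latin American Spanish", "Portuguese"]:
    translations[language] = ask(
        f"Translate these survey questions into {language} using a casual, "
        f"non-corporate customer voice. Double-check the tone and avoid the "
        f"formal register.\n\n{english}"
    )
```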

In short: Use ChatGPT to draft, but not to deploy. Unless you want customer responses like: “Why are they talking to me like I’m applying for a visa?” 🙂

Pressing it to generate variants fast

When stakeholders want to “try a few alternatives” for key questions, I used to rewrite them manually. Then I started asking ChatGPT:

“Give me 5 phrasing variants of this question, each with a slightly different angle or tone: What frustrated you most about using the dashboard?”

This generates a quick batch like:

– What confused you most when using the dashboard?
– Was there anything on the dashboard that didn’t work the way you expected?
– Which part of the dashboard slowed you down?
– Did anything on the dashboard feel broken or clunky?
– What would you change first on the dashboard?

I scan those and usually one stands out — or I send all five to the team for input. Beats arguing over synonyms in Slack.

Also, I noticed it helps to say: “Include one that’s humorous” or “Include one that’s straight to the point.” Variety never hurts.
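And if several key questions need variants at once, the same ask wraps neatly in a loop. A quick sketch (the question list, the variant count, and the model name are just examples):

```python
# Sketch: batch-generate phrasing variants for a handful of key questions.
# The question list, variant count, and model name are just examples.
from openai import OpenAI

client = OpenAI()

KEY_QUESTIONS = [
    "What frustrated you most about using the dashboard?",
    "Did anything make you hesitate before completing checkout?",
]

for question in KEY_QUESTIONS:
    prompt = (
        f"Give me 5 phrasing variants of this survey question, each with a "
        f"slightly different angle or tone. Include one that's humorous and "
        f"one that's straight to the point.\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"=== {question} ===")
    print(response.choices[0].message.content)
```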

Undoing clunky answers with pre-survey prompting

Sometimes the actual wording of the survey is fine — but you get weirdly stiff responses from users. This happened when I asked a group of people what they thought about our onboarding redesign.

I got paragraphs like:

– “The procedural flow of the sign-up sequence was satisfactory.”

Bruh.

I realized I had included a pre-survey intro like “Your honest feedback helps us improve.” Which… is technically true, but sounds like HR speak. I changed it to:

“Tell us what tripped you up, threw you off, or made you hesitate. We’re not judging.”

Totally different tone in the answers after that. Same questions, same form, but way more like:

– “Took forever to find the back button”
– “Felt like I was being upsold before I even knew what I was signing up for”

Lesson learned — sometimes the questions are fine, but the frame is broken.

Who I share ChatGPT drafts with before anything goes live

I don’t send the chatbot’s answers directly to the survey builders anymore. I preview everything in a scrappy Notion page or a quick Google Doc (depending on who’s working on it with me). Here’s why:

1. I highlight anything that sounds too product-y or self-promoting in yellow.
2. I add comments like “Let’s test this wording vs something more blunt.”
3. I paste a few of the support tickets that inspired the questions underneath the draft.

Then when someone inevitably pops in with, “Do we need this many questions?” I can say: yes — here’s the mess this is based on 🙂