Setting up ChatGPT to write your tweets
This is the part that tripped me up for way too long. I just wanted GPT to write tweets from existing blog content, and then maybe make a LinkedIn version too — polished enough to schedule later. I assumed this would be like tossing your content into ChatGPT with a prompt like “make 3 tweets,” but nope. That gives you generic nothings like “Check out our latest post about AI and productivity!” 🙄
Here’s the setting I use now:
**Prompt**:
“Write 3 casual, human-sounding tweets that summarize this blog post. Each tweet must feel like a human who just read the article sharing what they learned. Avoid promotion or calls to action. Use humor and personal tone if relevant.”
**Model**: GPT-4 Turbo
**Temperature**: 0.7
**Max tokens**: around 600
The temperature tweak actually mattered. 0.5 was too dry, like bot-polished copy that has all the words but no vibe. Bumping to 0.7+ lets it get a little weird in the good way, which is where actual engagement lives.
Funny thing — I left Max Tokens too low for a while, and GPT kept truncating tweet threads mid-sentence. I didn’t even notice until someone replied “bro where’s the rest.” 😅
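For what it's worth, these settings map pretty directly onto an API call if you ever move this out of the ChatGPT UI and into a script. A minimal Python sketch, assuming the official `openai` client and a `blog_post_text` placeholder for your article:

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

blog_post_text = "..."  # your article or draft goes here

response = client.chat.completions.create(
    model="gpt-4-turbo",
    temperature=0.7,   # 0.5 felt too dry for me; 0.7+ gets the "human" vibe
    max_tokens=600,    # keep this roomy or threads get cut off mid-sentence
    messages=[{
        "role": "user",
        "content": (
            "Write 3 casual, human-sounding tweets that summarize this blog post. "
            "Each tweet must feel like a human who just read the article sharing what "
            "they learned. Avoid promotion or calls to action. Use humor and personal "
            "tone if relevant.\n\n" + blog_post_text
        ),
    }],
)

print(response.choices[0].message.content)
```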
Prompt variation for threads versus single tweets
If your goal is a proper tweet thread — not just 3 standalone tweets — it’s a totally different prompt style. The voice has to build over multiple tweets. Also: GPT loves to start every thread with “1/ Here’s how…” which gets old fast.
Prompt I landed on:
“Write a 5 tweet thread from this blog post. Each tweet should lead naturally into the next. The tone should be casual and sound like one person talking. The first tweet should tease the value without being clickbait. No numbers like 1/5.”
If you don’t include “No numbers like 1/5,” it’ll nearly always default to that. Threads can still be easy to follow without being super formal.
Also note: when GPT generates these, it adds double line breaks between tweets. That's perfect for copy-pasting straight into X (Twitter), but it causes weird spacing in content schedulers like Buffer or Hypefury. I added a formatter step after this using regular expressions in Make to clean it up automatically: it strips the extra newlines, trims whitespace, and collapses the spacing into something that posts cleanly.
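Make handles that with its own text/regex tools, but the cleanup itself is tiny. Here's roughly the same step as a Python sketch, assuming you want one tweet per line for the scheduler (swap the separator for whatever your tool prefers):

```python
import re

def clean_for_scheduler(gpt_output: str) -> str:
    """Split GPT output on blank lines, trim each tweet, rejoin with single newlines."""
    tweets = [t.strip() for t in re.split(r"\n{2,}", gpt_output) if t.strip()]
    return "\n".join(tweets)
```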
Where to store GPT output for manual posting
This part took too many false starts. I’ve tried saving GPT output to:
– Google Sheets (short-term, feels clunky)
– Notion synced database (a pain, required manual cleanups)
– Airtable, referenced via a read-only view (this one worked best)
What worked best for me:
In Make (formerly Integromat), I have the GPT output flow save clean tweet text directly into an Airtable base. It drops each tweet or thread into fields like:
– `Topic`
– `First Tweet`
– `Tweet Thread?` (yes/no)
– `Raw GPT Output`
– `Approved Tweet Copy`
Then when I want to schedule something, I read from the `Approved Tweet Copy` field only and push that into Buffer later. I mark each row Approved (or not) by hand, because GPT sometimes adds cringey takes like “AI is inevitable, embrace it 🤯” and I just can’t post that with a straight face 😅
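Make's Airtable module does that write for me, but the same thing is only a few lines if you'd rather script it. A rough sketch using the pyairtable library; the token, base ID, table name, and the `topic` / `tweets` / `raw_output` variables are all placeholders:

```python
from pyairtable import Api

api = Api("YOUR_AIRTABLE_TOKEN")                        # personal access token (placeholder)
table = api.table("appXXXXXXXXXXXXXX", "Tweet Drafts")  # base ID and table name are made up

table.create({
    "Topic": topic,
    "First Tweet": tweets[0],
    "Tweet Thread?": len(tweets) > 1,  # assumes a checkbox field; use "yes"/"no" if yours is a single select
    "Raw GPT Output": raw_output,
    # "Approved Tweet Copy" stays empty until I've reviewed it by hand
})
```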
Converting one idea into platform-specific versions
This part feels like magic when it works. I use a multi-step GPT call:
1. Blog link or current draft
2. GPT prompt to create tweet version (as above)
3. **Second GPT call using that same tweet as input**, with prompt:
“Rewrite this tweet as a more professional, medium-length LinkedIn post. Maintain the same idea, but expand with more human tone.”
So instead of rewriting from scratch, the second GPT run treats the tweet as seed content. That keeps the tone consistent and keeps it from going off on a weird tangent.
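In code terms it's just two chained completions, with the second call fed the output of the first. A sketch that reuses the `client` from earlier; `best_tweet` stands in for whichever tweet you picked by hand:

```python
best_tweet = "..."  # the tweet you chose from the first GPT call

linkedin_prompt = (
    "Rewrite this tweet as a more professional, medium-length LinkedIn post. "
    "Maintain the same idea, but expand with more human tone.\n\n"
    f"Tweet: {best_tweet}"
)

linkedin_post = client.chat.completions.create(
    model="gpt-4-turbo",
    temperature=0.7,
    messages=[{"role": "user", "content": linkedin_prompt}],
).choices[0].message.content
```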
My worst fail happened when I fed it the same blog post twice for both prompts — it started sounding like a weird keynote speaker. All generic phrases, no opinion. When you cascade from tweet → LinkedIn, the tone stays way tighter.
Storing and organizing GPT prompt templates
I now maintain my actual prompts in a single Notion page that’s broken into sections:
– Tweet generation
– Thread writing
– LinkedIn rewrites
– Calls to action formatting
– Add sarcasm to boring posts (this one’s 🔥 when used carefully)
Each has 3 things:
– The prompt text
– Example before and after
– Notes on what fails (like “adds hashtags every time”)
That last part — keeping failure notes — was the biggest irritation-saver. After a few attempts where GPT kept ending tweets with “Explore the link here,” I added a snippet to the prompt: “Do not include links or phrases like ‘read more.’”
Also: save examples when something works well. GPT looooves to forget what you liked unless you train it like a dog.
When auto-posting doesn’t behave
Tried Zapier and Make for auto-publishing. Both worked, until they didn’t. Zapier sometimes just silently fails for X (probably a rate limit or changed login). Make’s error messages were a little clearer: “429 Too Many Requests” from the Twitter API. But annoying regardless.
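If you do want to keep auto-posting despite the 429s, the standard move is to back off and retry instead of failing silently. A hedged sketch; `post_tweet` is a stand-in for whatever your actual publishing call is:

```python
import time

def post_with_backoff(post_tweet, text, max_retries=3):
    """Retry a posting call when the API rate-limits it (HTTP 429)."""
    for attempt in range(max_retries):
        resp = post_tweet(text)  # hypothetical callable returning a response-like object
        if resp.status_code != 429:
            return resp
        # Honor Retry-After if the API sends it, otherwise back off exponentially
        wait = int(resp.headers.get("Retry-After", 30 * 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("still rate-limited after retries")
```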
Current workaround:
– Use automation ONLY to write and schedule drafts.
– Final publish = manual click, from Buffer or TweetDeck.
That way I can visually check tone, make sure nothing looks robotic, and avoid GPT hallucinating something like “Elon just changed the algorithm again — here’s what you MUST know.” Which it randomly generated in March. Not okay, lol.
Publishing across Twitter, LinkedIn, threads, and email
Here’s the mini-pipeline I run now:
1. I write or edit the main blog.
2. GPT creates tweet options using the prompt above.
3. I pick the best tweet or thread.
4. GPT expands that into a LinkedIn post.
5. GPT formats that into a plaintext-style newsletter blurb (rough sketch of this step after the list).
6. All options get stored in Airtable with fields for each platform.
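Step 5 is just one more pass through the same pattern. The prompt wording below is a placeholder rather than my exact one, and `linkedin_post` is the output from step 4:

```python
newsletter_blurb = client.chat.completions.create(
    model="gpt-4-turbo",
    temperature=0.7,
    messages=[{
        "role": "user",
        "content": (
            # Placeholder wording, not the exact prompt
            "Turn this LinkedIn post into a short, plaintext newsletter blurb. "
            "Keep the same idea and casual tone, no links or sign-offs.\n\n"
            + linkedin_post
        ),
    }],
).choices[0].message.content
```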
I can then schedule confidently, even if I revisit 3 weeks later and forget what version I liked. Airtable limits don’t get in my way because I just archive ~weekly.
Also, once every few months, I find an old tweet that STILL bangs. I’ll ask GPT: “Rewrite this tweet for 3 months from now so it sounds fresh. Change phrases and timestamp.” That lets me re-use good ideas without annoying people who’ve already seen them.
Adding personality without getting blocked
Sometimes I ask GPT to “make version with light sarcasm or a dad joke” and it loosens up the content way better than you’d expect. You MUST follow up the prompt with edits though — GPT keeps adding bits like “#LOL” or “😎” when it’s trying to be edgy. Kill those quick.
A real example:
Input tweet: “I tested 4 automations to post on multiple platforms. Only one didn’t suck.”
GPT’s sarcastic version: “Tried 4 automations. Only one didn’t spontaneously combust. Guess which 😏”
I removed the emoji and some of the snark, and it ended up pulling the most engagement that month. People like jokes. People hate AI trying to be funny.
¯\_(ツ)_/¯