Understanding why product mockups matter
When you are trying to show a client what a product might look like in real life, sending them a flat design file feels underwhelming. I remember once sending a perfectly neat PNG of a coffee packaging label, and the client emailed back asking if I could “put it on a box or something.” That was the moment I realized that people struggle to visualize things in two dimensions. They want to see how a design wraps around an object, how shadows fall, and whether it still feels balanced once it is sitting on a table.
That’s why Midjourney became a lifesaver for me. Instead of spending hours setting up Photoshop smart objects, I started experimenting with Midjourney prompts to spit out realistic product shots. The difference was night and day. A box with subtle lighting suddenly made my design look like something that belonged in an actual store. That little bit of realism makes a client less hesitant, because it no longer feels like they are gambling on something only half finished. I would compare it to the difference between describing a dish and actually being served a nice plate of food. One sounds fine, but the other makes you hungry right away 🙂
Crafting a basic prompt for mockups
The first thing I learned, sometimes the hard way, is that Midjourney is incredibly literal but also sometimes just… moody. You type what you want, but if your request is vague like “realistic shampoo bottle,” you suddenly get alien space vials. My process now always starts with naming the exact item shape. If I want a rectangular cardboard box, I literally type “realistic cardboard box packaging mockup on white background.” The word “mockup” helps guide it toward product photography rather than fantasy artwork.
I usually add a few details about angle and light. For example, “front facing, soft studio light” or “angled three quarters, shadows falling right side.” These small tweaks prevent the generated image from looking flat. One problem I ran into was forgetting to specify background color — Midjourney loves to throw in trendy minimal furniture or dramatic clouds unless you specifically say “plain white background.”
Sometimes I copy my base design into the description: “with a green leaf logo in the center.” It’s shocking how well it translates text prompts into visual placement, though it won’t recreate the exact design. I learned not to expect pixel-perfect logos but instead to treat it as a stand-in representation.
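Putting those pieces together, a full base prompt ends up looking something like this (the box and logo wording here is just an illustration; swap in your own product description):

```text
realistic cardboard box packaging mockup on white background,
front facing, soft studio light, with a green leaf logo in the center
```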
Avoiding weird distortions in outputs
If you have used any AI image generator, you already know the pain of hands and warped objects. Midjourney is not immune. I once generated a coffee mug and the handle looked like some melted loop that no one could actually pick up. The trick for fixing this was to specify “ergonomic handle clear and functional.” Yes, it feels silly, but adding these little human cues makes the AI less likely to invent unusable nonsense.
Another recurring issue is skewed labels. Bottles come back with labels resized at odd angles, as if someone applied the sticker while drunk. To fix that, I started using terms like “flat centered label” or “aligned sticker.” Even then, some outputs are better for a laugh than for a client presentation 😛
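As an illustration, here is how I slot those anti-distortion cues into a prompt (the exact wording is a sketch, not a magic formula):

```text
realistic shampoo bottle, flat centered label, aligned sticker,
soft studio light, plain white background
```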
What’s worth remembering is that Midjourney usually gives four outputs in a single batch. At least one usually looks usable, but set your expectations — you will never get a 100 percent predictable result. Upscaling the variation you like with the built-in buttons tends to clean up the edges and eliminate some distortions.
Layering text prompts with Midjourney parameters
So Midjourney has parameters you add after your prompt text that control aspect ratio, detail level, and style weight. At first, I ignored them, but it turns out they fix half the problems I was running into. For instance, appending `--ar 2:3` creates tall bottle shots that look closer to real product photography proportions, instead of squished square mockups.
The style setting also matters. If you leave it alone, the output sometimes leans into concept art territory, which looks stunning but not realistic. Adding `--style raw` pulls Midjourney toward neutral, photorealistic outputs. When I discovered that, I stopped getting artsy bloom effects that made my packaging look like it belonged in a sci-fi scene.
I also began messing with the `--no` parameter, which takes a comma-separated list of things to exclude. For example, `--no watermark, text` cuts down on strange hallucinated elements. I wish I had known that earlier; I once presented a mockup to a friend, and he pointed out there was a random letter F floating in the background. Embarrassing.
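Stacked together, the parameters from this section all sit at the end of the prompt, after the descriptive text (the bottle wording here is just an example):

```text
realistic shampoo bottle with centered label, soft studio light
--ar 2:3 --style raw --no watermark, text
```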
Combining AI results with real editing
No matter how good the output is, I never hand over a Midjourney image raw. There is always cleanup. I drag the file into Photoshop and align my real vector design on top of the AI’s placeholder. My early mistake was trying to force AI to replicate my exact branding perfectly — that part just does not work. Instead, I let it create the scene, lighting, and perspective, then overlay my actual logo using warp or perspective tools.
One useful workflow has been using Midjourney for multiple angles so I can compose a little grid that looks like a photo shoot. For example, three shots of the same box at slightly different rotations. In Photoshop, I keep them consistent in tone so the set looks cohesive. That way, the product seems like it exists physically, not patched together.
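For that photo-shoot grid, I keep the wording identical between generations and change only the angle phrase, roughly like this (same caveat as before: the box description is illustrative):

```text
realistic cardboard box packaging mockup on white background, front facing --style raw
realistic cardboard box packaging mockup on white background, angled three quarters left --style raw
realistic cardboard box packaging mockup on white background, angled three quarters right --style raw
```

Keeping everything but the angle fixed gives Photoshop less tonal mismatch to correct later.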
For anyone struggling with flat images, this hybrid method solves the problem. AI gives you the frame, editing software gives you the precision. Together it feels like a cheat code, skipping hours of 3D modeling I never wanted to touch anyway.
Using table format to compare prompts
Here is a simple rundown I built in my notes one day after too many failed prompts. Pretend this is a whiteboard scratch because that is how messy my tabs were that afternoon.
| Prompt Example | Result |
| --- | --- |
| realistic cardboard box packaging mockup on white background | Works best for an e-commerce style shot with a clean box |
| realistic shampoo bottle with centered label --ar 2:3 --style raw | Good photoreal output, tall bottle, no weird stretching |
| product mockup with modern logo in center, plain background --no watermark, text | Removes random letters or AI-invented text |
| realistic coffee mug, ergonomic handle, soft studio light | Prevents distorted handles, although not perfect |
That table summarizes weeks of frustration and slow victories. I had to build it for myself after forgetting what actually worked.
When to stick with real photography
There are moments where Midjourney just won’t cut it. If you need your exact trademarked design in razor sharp high resolution, you cannot rely on AI. One time I generated a box mockup that looked perfect until you zoomed in — the corners had strange gradients that no printer would replicate. A real prototype photo just wins in cases like that.
But for pitching ideas, testing brand concepts, or quickly showing stakeholders what a design could feel like, AI gives you a shortcut. Instead of waiting days for a prototype print, you can put something in a presentation the same afternoon. I find myself doing that even when I know we will do proper photography later. It calms anxious clients who just want an early glimpse.
Where to find more structured tips
Midjourney’s own community is constantly dropping new prompt recipes. The main site midjourney.com is a good place to start if you want to see documentation about parameters and community reference images.
Honestly, half of what I use today came from scrolling Discord chats and seeing how others word their phrases. It’s like deciphering incantations that sometimes summon a beautiful product photo and sometimes summon chaos ¯\\_(ツ)_/¯. The learning is mostly trial and error, but after a while you will build a small library of reusable prompts in your notes app, just like I did.
Dealing with constant changes in output
The last headache and also strangely exciting part of Midjourney is that the models keep changing. I swear I had a perfect candy wrapper output last week, tried again yesterday, and suddenly the reflections were glossier and the perspective shifted. No update notice, no warning — just different. It feels like the rug being pulled while you are mid project, but you adapt.
My coping mechanism is to always generate a batch when I find something that works, not just one image. That way I have backups if the model shifts later. I store them in a folder like mini insurance files. It is messy, but it has saved me multiple times when trying to keep consistency in client decks.
At the end of the day, I keep using Midjourney for mockups because even with all the quirks, it lets me skip entire tedious steps and actually move faster in my messy workflow full of open tabs and half working Zaps.