If you haven’t seen my last posts about making comics with AI, here are the links:
- The Problems Of Making Comics With AI (Midjourney 2025)
- The Last Superhero Part 1
- The Last Superhero Part 2
I previously said that I was able to get about 20-25% of what I wanted to create done with Midjourney. That’s not enough to create comics I would actually try to sell, but it’s a start.
Here is a rundown of the techniques I used to get to those 20-25%:
1) Look for a big name in comics – I used Scott Snyder, as his comics are known for a specific style. I would describe it as a mature and detailed style, which is what I was looking for. Whatever name you use, make sure Midjourney knows it by running ten different image generations as a test (e.g., Man driving a car, [artist name] | woman running in the streets, [artist name] | fighter jet over New York skyline, [artist name]…)
The artist’s name is the keyword to define your general style. It should be part of all of your prompts.
Add to all prompts: Scott Snyder
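If you want to script that style test, a quick sketch in Python can generate the batch of test prompts. The subjects below are just the examples from above; swap in your own artist name and scenes:

```python
# Generate a batch of test prompts to check whether Midjourney knows an artist's style.
# The artist name and subjects are placeholders; replace them with your own.
artist = "Scott Snyder"
subjects = [
    "Man driving a car",
    "Woman running in the streets",
    "Fighter jet over New York skyline",
]

# The artist's name goes at the end of every prompt, as the style keyword.
test_prompts = [f"{subject}, {artist}" for subject in subjects]

for prompt in test_prompts:
    print(prompt)
```

Paste each generated line into Midjourney and compare the results; if the ten images share a recognizable look, the model knows the artist.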
2) Add a color palette – I added a color scheme to further ensure that my images get a consistent style. In The Last Superhero Part 1 I used “black and blue colors”, in The Last Superhero Part 2 I used “black and green colors”. I always added the color keyword after the artist prompt.
Add to all prompts: Scott Snyder, black and blue colors
3) Use universal environments – I defined an environment that allowed for small differences in style. If you go too detailed in your prompt, you’ll get too many differences for each image generation. But if you go universal from the start, you can get away with differences in environmental detail.
For example, I used “streets of New York” a lot in the first comic. This worked, as the character walked through the streets. Differences in shops, cars, and pedestrians are easily explained by the protagonist moving through the scenery.
In the second part, I used “office” and “car repair shop”. This didn’t work as well, but it still worked better than trying to generate a specific office or shop like “oval office in the white house” or “car repair shop with a Bugatti and wooden walls”, because the universal terms gave me lots of different perspectives that I could use to get away with the differences in detail.
Add to all prompts: streets of New York, Scott Snyder, black and blue colors
4) Add weather and/or daytime – I added “rainy day” in part 1. It always gave me raindrops in the scenery, which added to the overall feel of a consistent style. In part 2, I always used “at night”, which also helped.
Add to all prompts: streets of New York, rainy day, Scott Snyder, black and blue colors
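To keep that growing suffix identical across every image, I would wrap it in a small helper. This is only a sketch in Python (not anything Midjourney provides), using the part-1 keywords from above:

```python
# Project-wide style suffix: environment, weather, artist, colors, in a fixed order.
STYLE_SUFFIX = "streets of New York, rainy day, Scott Snyder, black and blue colors"

def build_prompt(scene: str) -> str:
    """Append the fixed style suffix to a scene description."""
    return f"{scene}, {STYLE_SUFFIX}"

print(build_prompt("Man walking past a closed shop"))
```

Because every prompt ends with the exact same suffix, the keyword order stays constant across the whole project, which also takes care of tip 9 below.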
5) Use character references – First, I let Midjourney design a character that I reuploaded to use as a character reference. This kept the protagonist’s look consistent in around 90% of the details.
6) Forget about moodboards – Moodboards didn’t help me at all. Artist name and color scheme had a much higher impact.
7) Adjust aspect ratio – If something doesn’t turn out well, rerun the prompt, but adjust the --ar parameter. The aspect ratio has a big impact on the results.
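A sketch of how I would batch out the same prompt at different ratios; the --ar flag is Midjourney’s aspect-ratio parameter, but the specific ratios here are just examples:

```python
# Create variants of one prompt with different Midjourney --ar values.
base_prompt = "Man walking past a closed shop, streets of New York, Scott Snyder"
ratios = ["16:9", "3:2", "1:1"]  # example ratios; pick what fits your panel layout

variants = [f"{base_prompt} --ar {ratio}" for ratio in ratios]

for variant in variants:
    print(variant)
```

Rerunning the same scene at two or three ratios is often enough to find a framing that works for the panel.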
8) Forget about other parameters – I didn’t mess around with other parameters like --s or --c; they didn’t add much useful room for adjustment anyway.
9) Prompt order – The order of your keywords in the prompt has an impact on the results. Try to keep the same order for colors, artist name, daytime, etc. throughout your project.
10) Shorten prompts – When you make your prompts too long, Midjourney will ignore the words at the end of the prompt. So keep your prompts short so you don’t lose the style keywords sitting at the end.
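A simple guard against overly long prompts could look like this. Note that the 40-word budget is my own assumption for illustration, not a documented Midjourney cutoff:

```python
# Warn when a prompt may be long enough for trailing style keywords to be dropped.
MAX_WORDS = 40  # assumed budget, not an official Midjourney limit

def prompt_fits(prompt: str, max_words: int = MAX_WORDS) -> bool:
    """Return True if the prompt stays within the word budget."""
    return len(prompt.split()) <= max_words

short = "Man walking, streets of New York, Scott Snyder, black and blue colors"
print(prompt_fits(short))
```

If a prompt fails the check, trim the scene description first; the style suffix at the end is what you want to protect.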
11) Stay with one character per image – Currently, Midjourney is very bad at generating images of characters interacting. Whenever I tried to have more than one consistent character in an image, Midjourney mixed actions and characteristics of the two characters, creating weird results. For now, describe what one character does per image only.
12) Character reference can also be a problem – The character reference can also limit your freedom with that character. In part 2, I used a character with sunglasses. He was supposed to take them off in the last scenes to fire laser beams from his eyes. Because the sunglasses were part of the character reference, I couldn’t get Midjourney to have the character take the glasses off anymore. Keep that in mind when you design your stories.
13) You have to know a little bit of Photoshop – I tried to limit using Photoshop to have a good representation of what Midjourney can do on its own, but for some images, I used generative fill and photo filters to add details, adjust the aspect ratio, and change the color mix.
14) Forget about hard action – Midjourney doesn’t allow certain words to be used in prompts which makes R-rated scenes almost impossible to generate. Write your scenes accordingly.
15) Generate text with another program – Midjourney is advertised as an AI model that can generate text, but it’s so hit-and-miss that using Photoshop was simply quicker and easier for me. So, don’t rely on Midjourney to give you good text results.
To Conclude
There are still lots of issues, but it is now possible to make a real start at creating comics. Check my results under the links above and decide for yourself whether it’s already worth it for you to get into AI comics with Midjourney.
I am going to test the next model now and compare it to Midjourney afterwards. See you then.