Originally I was not that impressed when Stable Diffusion was released for Blender, but that changed quickly when I found out that if you provide the AI with a basic stand-in geometry render, which gives it some basic shading and perspective clues, you can pretty much automate the sci-fi concept design AND use the scene you provided as a starting point to model the output of the AI.
Since time is limited, I only cover the AI part and the initial modeling in this article: this is again a half-day project to see if things work out (learning process). I think I'll do a full model if time allows in the coming weeks and add it to the article (I need to make new 3D content anyway), but I can't promise it. Too much going on at work and in my free time.
Here is a basic example: this "spaceship" is quickly created, and a basic texture is attached. There are a few things I took care of when setting it up: first, that there is distinct visual shading and that the texture adds some visual noise, so the AI has something to work with. I also want some perspective clues in the input image. It's ugly and simple on purpose; that's the point of the whole article.
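If you want to script that setup instead of clicking it together, here is a minimal Blender Python sketch of the idea: it just puts a noise-driven material on the stand-in so the render isn't a flat grey. The function name and the noise scale are my own placeholders, nothing the addon requires.

```python
import bpy

# Hypothetical helper: give a stand-in object a simple material whose
# noise texture adds the visual detail the AI needs to latch onto.
def add_standin_material(obj, name="StandinNoise"):
    mat = bpy.data.materials.new(name=name)
    mat.use_nodes = True
    nodes = mat.node_tree.nodes
    links = mat.node_tree.links

    bsdf = nodes.get("Principled BSDF")
    noise = nodes.new("ShaderNodeTexNoise")
    noise.inputs["Scale"].default_value = 15.0  # coarse surface noise

    # Drive the base color with the noise so the shading is not uniform
    links.new(noise.outputs["Color"], bsdf.inputs["Base Color"])

    obj.data.materials.append(mat)

# Apply it to the active object (the blocked-out "spaceship")
add_standin_material(bpy.context.active_object)
```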
Now I feed this into Dream Textures and generate an AI image with a strength of around 0.44 to 0.52. Since the input provides perspective clues, I can make a dozen variations of it, all with almost the same perspective.
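Dream Textures drives Stable Diffusion's img2img mode, so for readers without the addon, this step looks roughly like the following with the Hugging Face diffusers library. The model ID, prompt, resolution and file names are placeholders of mine, and the strength is simply whatever you dial in.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a Stable Diffusion img2img pipeline (model ID is just an example)
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The stand-in render from Blender acts as the init image
init_image = Image.open("standin_render.png").convert("RGB").resize((768, 512))

# Strength around 0.44-0.52 keeps the perspective and shading of the
# stand-in while letting the model invent surface detail
result = pipe(
    prompt="detailed sci-fi spaceship, concept art",
    image=init_image,
    strength=0.48,
    guidance_scale=7.5,
).images[0]

result.save("concept_variation.png")
```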
Now I can put them into one Photoshop file and use a coarse eraser brush to keep the parts of each image I like, getting a nice concept in no time.
After this I feed THIS merged image back into the AI and generate a bunch of variations.
And again, in Photoshop, I merge the parts I like.
Now I can use all those little tricks I learned from transferring sketches into 3D models in Blender to rough out the basic shape (that's where I'm currently at, and yes, it's ugly... but this is not about the modeling process yet).
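One of the simplest ways to get the concept into the viewport for this is to load it as a camera background image. A small Blender Python sketch of that (the file path is a placeholder):

```python
import bpy

# Load the merged concept image from Photoshop (path is an example)
concept = bpy.data.images.load("//concept_merged.png")

cam = bpy.context.scene.camera.data
cam.show_background_images = True

bg = cam.background_images.new()
bg.image = concept
bg.alpha = 0.5            # keep the stand-in geometry visible through the concept
bg.display_depth = 'FRONT'
```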
And, just for fun, I overlaid the original model and my 3D sketch to see how they compare proportion-wise.
The point is, not only are you able to art direct the AI, but you can also use the input and output to build up complexity in no time.
The same goes for environments. Sure, the AI shifted the background model into the foreground (it's not perfect; maybe some mist would have helped), and I had to provide good shading of the elements in the input image and make the sky brighter than the buildings. But what really surprises me is the perspective consistency it retains even if you do multiple passes on multiple outputs.
And off you go: a concept image which matches some stand-ins that can be used to start your detailed modeling process. Of course you can do this project with any other AI engine that allows you to use input images, but the parametrisation of the Blender addon helps a lot. Not to mention that a local AI does not restrict you in the number of images.
One thing I miss in my current version is the ability to define a range of random seeds, so I can render out 20-70 concept sketches while going out for coffee. (EDIT: This has been fixed in the latest versions.)
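Outside the addon, that seed-range batch is just a loop. A sketch that continues the diffusers example from above (it reuses `pipe` and `init_image` from there; the seed range and file names are arbitrary):

```python
import torch

# Batch over a range of seeds to get many concept sketches in one go
for seed in range(1000, 1030):            # e.g. 30 variations
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(
        prompt="detailed sci-fi spaceship, concept art",
        image=init_image,
        strength=0.48,
        generator=generator,
    ).images[0]
    image.save(f"concept_seed_{seed}.png")
```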
EDIT: Whatever, I started to model a bit more, so this article might become a progress report... but now on to the other bazillion projects I want to finish :)