I’ve been diving into AI-assisted workflows and found an extreme fount of creativity. My recent efforts have been RPG-style characters like you’d see in a D&D game, and this guy came from the idea of a royal guard of an ancient city, Egyptian/African-esque. The AI gave me a variation with just the shield, and I really liked the idea of defending rather than killing. If anyone is curious about the workflow I’d be happy to share :)
Just curious what the “assistance” part of things was?
For sure! Often I’ll come in with a visual idea already, or I’ll iterate with the AI for inspiration. If I have the idea strongly, I’ll sketch out the composition and the elements I know I want - for really tricky poses, like hands, I’ll sometimes take a photo of myself doing them. Then I throw that into Stable Diffusion with img2img to generate images based on my sketch/photo, turning it into something more fully featured, or into something I hadn’t thought of but really like (you can also set how “dreamy” the AI should be, i.e. how much it varies from the input material).
There’s a lot of detail I could get into, but the “assistance” part is: the AI fleshes out a composition -> I go in and correct anatomical mistakes or change specific elements by hand -> run it through again if it needs it.
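In case it helps anyone trying this, here’s a minimal sketch of what that img2img step looks like in code, assuming the `diffusers` Python library and the `runwayml/stable-diffusion-v1-5` checkpoint - the prompt, filenames, and the `refine_sketch`/`dreaminess_to_steps` helpers are just my placeholders, not anything official:

```python
def dreaminess_to_steps(strength: float, num_inference_steps: int = 50) -> int:
    """How the "dreamy" knob works in diffusers' img2img: `strength`
    decides how far along the noise schedule the input image is pushed,
    and only the last int(num_inference_steps * strength) steps are
    actually denoised. Low strength stays close to the sketch; high
    strength deviates ("dreams") more."""
    return min(int(num_inference_steps * strength), num_inference_steps)


def refine_sketch(sketch_path: str, prompt: str, strength: float = 0.6):
    """One img2img pass over a sketch/photo. Heavy imports are done
    lazily here so the helper above works without a GPU installed."""
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # SD 1.5 works best at its training resolution of 512x512
    sketch = Image.open(sketch_path).convert("RGB").resize((512, 512))

    result = pipe(
        prompt=prompt,
        image=sketch,
        strength=strength,      # 0.0 = copy the input, 1.0 = ignore it
        guidance_scale=7.5,     # how strongly to follow the prompt
    ).images[0]
    return result
```

Then correcting anatomy by hand and feeding the fixed image back through `refine_sketch` at a lower strength is the “run it through again” part of the loop.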
Thanks!