directed by degenstoic
  • Comments
  • Your deer are wonderful!
    This "paw of dominance" on the deer's back.

    Hmmm, a version with the roles reversed would be interesting. }:->

  • I added the hand-holding last minute (and didn't catch the middle finger being larger than the others after the upscale). It's impressive how much it changes the tone of the image.

  • This is really impressive!

    How did you achieve this? Which model, etc? I see some post-processing in GIMP has been involved too.

  • witecek157 said:
    How did you achieve this? Which model, etc? I see some post-processing in GIMP has been involved too.

    The model was bb95_v13, but you could do the same with any model.
    I sketched out the pose and rough colors, then inpainted one character at a time (to keep the tags separate). Regional Prompter can also be used, but I found in a later experiment that setting up the mask isn't any faster.
    It's a process of gradual refinement: draw the general idea, run it through img2img with high denoising (~0.6-0.75), pick the best result, paint over it again to fix the mistakes, then run it through with low denoising (~0.4) to add detail and shading. Find another part that isn't good enough and repeat.
    Here are a few snapshots of the process: https://files.catbox.moe/sofmrq.png

    A step I usually do, but forgot this time, is to run the image through the "Ultimate SD Upscale" plugin with very low denoising (~0.2) and a 1-1.5x upscale ratio, just to catch any blurry areas I forgot to inpaint after editing in GIMP.
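    For anyone who wants to script this, the refinement loop above can be sketched in plain Python. `run_img2img` here is a hypothetical stand-in for whatever backend you drive (the web UI's API, a diffusers pipeline, etc.); only the denoising schedule comes from the description above.

    ```python
    # Sketch of the gradual-refinement workflow described above.
    # run_img2img is a hypothetical callback for your actual img2img
    # backend; this file only encodes the denoising schedule.

    def refinement_schedule():
        """Per-pass denoising strengths, matching the workflow above."""
        return [
            ("rough pass over the sketch", 0.70),  # high denoise: ~0.6-0.75
            ("fix mistakes, add detail",   0.40),  # low denoise for shading
            ("final cleanup upscale",      0.20),  # Ultimate SD Upscale, 1-1.5x
        ]

    def refine(image, run_img2img):
        """Run the image through each pass in order, strongest first."""
        for _label, strength in refinement_schedule():
            image = run_img2img(image, denoising_strength=strength)
        return image
    ```

    In practice you'd also re-paint between passes and cherry-pick the best candidate at each step, which is exactly the part a script can't automate.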

  • That's quite an interesting process, and more involved than I expected. Less "AI generated" and more "AI assisted". Thank you for sharing it!

  • degenstoic said:
    The model was bb95_v13, but you could do the same with any model.
    I sketched out the pose and rough colors, then inpainted one character at a time (to keep the tags separate). Regional Prompter can also be used, but I found in a later experiment that setting up the mask isn't any faster.
    It's a process of gradual refinement: draw the general idea, run it through img2img with high denoising (~0.6-0.75), pick the best result, paint over it again to fix the mistakes, then run it through with low denoising (~0.4) to add detail and shading. Find another part that isn't good enough and repeat.
    Here are a few snapshots of the process: https://files.catbox.moe/sofmrq.png

    A step I usually do, but forgot this time, is to run the image through the "Ultimate SD Upscale" plugin with very low denoising (~0.2) and a 1-1.5x upscale ratio, just to catch any blurry areas I forgot to inpaint after editing in GIMP.

    Really nice work! It's really interesting to see the process.

  • witecek157 said:
    Less "AI generated" and more "AI assisted".

    I'd say it's "prompting with colors". Normally when you write a prompt you're trying to get an idea out of your head and onto the screen. Sometimes you throw in a really simple prompt just to see what it comes up with, but most of the time you're using it because you have some vision of the result. Words (or the AI's understanding of grammar) are often insufficient. The flat color sketch is a way to describe that idea more specifically.

    The AI is still doing most of the heavy lifting. I'm just learning to do the first and last 10% of the image, the stuff it's bad at: composition, anatomy, and continuation of occluded things.
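    To make "prompting with colors" concrete, here's a toy sketch builder (purely illustrative; the region names and colors are made up, and real sketches are painted by hand in GIMP). The idea is that each flat color region pins down composition the way a tag pins down content:

    ```python
    # A flat-color sketch as a machine-readable composition hint.
    # Regions are labeled, then mapped to rough RGB colors; the grid
    # is the "prompt with colors" that img2img refines into detail.
    # Region names and colors here are hypothetical examples.

    REGION_COLORS = {
        "background": (40, 90, 40),    # dark green foliage
        "deer":       (150, 100, 60),  # brown fur
        "hand":       (230, 190, 160), # skin tone
    }

    def flat_color_sketch(layout):
        """Map a 2D grid of region labels to a grid of RGB tuples."""
        return [[REGION_COLORS[label] for label in row] for row in layout]
    ```

    Feeding an image like this into img2img at high denoising keeps the composition fixed while the model invents everything else.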
