directed by k3y
Description

Long haul

 
Sometimes the work is just too much and one has to unwind for a moment.

Remember to keep your seatbelts fastened. Airplane mode on your personal devices should be switched off, so you can check out my Telegram channel:
https://t.me/NSFWaiyiffmix

Comments
k3y (Member)

    feralbreeder said:
    Lovely image.
    Very good uniforms and background.
    "No intercourses" is killing me! LMAO

I'm glad you like it! Actually the background wasn't generated, as I was unable to force the AI engine to make a believable airplane corridor/toilet interior. I simply generated the characters separately, then combined them with a real picture of a background, and regenerated everything together so it blended somewhat nicely. It worked, but for whatever reason it became slightly blurry... gotta work on that some more.
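A minimal sketch of that compositing step using Pillow; the filenames and the paste offset are hypothetical, not from this post:

```python
# Paste an RGBA character render onto a real background photo,
# producing the composite that then goes back through img2img.
from PIL import Image

background = Image.open("airplane_interior.jpg").convert("RGBA")
characters = Image.open("characters_cutout.png").convert("RGBA")  # transparent bg

# The characters' alpha channel doubles as the paste mask.
background.paste(characters, (120, 80), mask=characters)
background.convert("RGB").save("composite.png")
```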

oaf40

k3y said:
    It worked, but for whatever reason it became slightly blurry... gotta work on that some more.

I'd recommend giving the Canny & Depth modules a try. They make working with "real" backgrounds much easier in my experience, and the restyling capabilities are very flexible. Here's my workflow, just as an example (apologies for any typos, as English isn't my native language):

    (before upscaling)

1. Say you've found a real-life photograph you'd like to use as a background, but you don't like how it would fit your current styling.
2. Load the background photo into img2img and select your model. The prompt (including the negative one) should be a brief description of what's in the photo; in my experience 50-75 tokens in the positive prompt is enough. If your model uses score tags, use them here as well. Don't forget to include all your style LoRAs!
3. If the photo is larger than the working resolution of your model, downscale it using the "Resize to/by" submenu or in an external editor. This is important.
4. Sampler/steps: I use either DPM++ 2M @ 40 steps or Euler A @ 80 steps for this purpose.
5. Activate the ControlNet extension; you'll need two units. The first one is Canny, the second one Depth. For the module files I recommend the models by Xinsir, as they work best with PDXL-based checkpoints and mixes. You can easily find them on Hugging Face.
6. For each unit, tick "Upload independent control image" and upload the same source photo. Select the preprocessor ("canny" for Canny, "depth_midas" for Depth), raise the resolution (bumping 512 up to 960 works fine with 16 GB of VRAM), and set Control Weight to 1. Set "Ending control step" to 0.3 (the unit deactivates itself at 30% of the steps) if you use plain styles, or somewhere around 0.7 for "realistic" models. As for the two threshold sliders for Canny, try various values to see which preserve enough detail for you; they vary with each picture you're working with. Click the small button with the "bang" emoji on it, save both the Canny and Depth masks, set the preprocessor to "None", and upload the freshly produced masks back in - this saves you from re-running the preprocessors on every generation.
7. Play around with the denoising strength; in my personal experience 0.7~0.83 works best for plain styles.
8. Click "Generate" (if you're on A1111, of course) and it will get you a restyled photo that fits your overall style. (There's a rough API sketch of these steps right below.)

After you combine the picture of your character(s) with the background, repeat the same procedure with the following considerations:
1. Merge the prompts for both the background and your characters, using BREAK to separate the keyword groups (see the sketch after this list).
2. Use a much lower denoising strength in img2img; 0.1~0.2 should be enough to blend the background and foreground and kill some artifacts and jagged lines without affecting the details.
When done, proceed with your usual upscaling and post-upscaling routines... :P Maybe there are better alternatives I don't know about yet (I'm sure as hell there are), but for me Canny & Depth were the life changers.
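And a matching sketch for that low-denoise blending pass, under the same assumptions as above (tags and filenames are placeholders; BREAK is A1111 prompt syntax that pads each chunk to 75 tokens and encodes it separately):

```python
# Blending pass: merged BREAK prompt, very low denoising strength.
# Re-add the two ControlNet units under "alwayson_scripts" as in the
# first pass if you want the composite's lines and depth held in place.
import base64
import requests

with open("composite.png", "rb") as f:  # characters pasted over the restyled photo
    composite = base64.b64encode(f.read()).decode()

payload = {
    "init_images": [composite],
    # BREAK keeps background and character keyword groups from bleeding together.
    "prompt": (
        "score_9, airplane cabin interior, aisle, seats"
        " BREAK "
        "anthro, flight attendant uniform, standing"
    ),
    "negative_prompt": "blurry, lowres",
    "sampler_name": "DPM++ 2M",
    "steps": 40,
    "denoising_strength": 0.15,  # 0.1~0.2: blends seams and artifacts, keeps details
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
with open("blended.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```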

k3y (Member)

oaf40 said:
I'd recommend giving the Canny & Depth modules a try. They make working with "real" backgrounds much easier in my experience, and the restyling capabilities are very flexible. [...]

Ohh, thanks! I've read about this technique recently but haven't tried it yet. I appreciate your help a lot - will give it a try!
