Topic: I am new

Posted under General

Hello, I am new to this site and I would also like to start making images. What is the best way to make them?

herias_tenam said:
Hello, I am new to this site and I would also like to start making images. What is the best way to make them?

Many people on the site run Stable Diffusion on their own PC using the AUTOMATIC1111 web-ui. (And I use it, too.)
If you have a gaming PC with >8 GB video memory, this is probably the best option.
If not, there are also online services that allow you to generate images, but I don't have experience with those.

silvicultor said:
Many people on the site run Stable Diffusion on their own PC using the AUTOMATIC1111 web-ui. (And I use it, too.)
If you have a gaming PC with >8 GB video memory, this is probably the best option.
If not, there are also online services that allow you to generate images, but I don't have experience with those.

And what about those who have a Chromebook?

herias_tenam said:
And what about those who have a Chromebook?

If you don't have the hardware to run it yourself, then you can use an online service. Here are a few furry-friendly and NSFW-friendly ones:

frosting.ai
seaart.ai
tungsten.run
mage.space
graydient.ai

terraraptor said:
If you don't have the hardware to run it yourself, then you can use an online service. Here are a few furry-friendly and NSFW-friendly ones:

frosting.ai
seaart.ai
tungsten.run
mage.space
graydient.ai

Thank you ^^

In case you have hardware that can handle AI and want to do it locally without using an online service (a scripted equivalent of these steps is sketched after the list):

1) As said above, install the Stable Diffusion web UI.
2) Install a custom AI model; the built-in one can't generate this kind of content. Try indigoFurryMix or YiffyMix as a starting point.
3) When writing what to generate, use the same tags as on e621.
4) Using an artist name can greatly change how the AI draws the picture.
5) Simple example prompts:

todex, machine, muscular, dick, erection, balls
pencil drawn, dragon, muscular, dick, balls, dakimakura design
taran fiddler, zoroj, chunie, demon, forest, muscular, dick, balls, erection, oral penetration, duo
alien, balls, dick, duo, muscular, breasts, pussy, male/female, looking pleasured, missionary position
realistic, xenomorph, muscular, anal penetration, doggystyle, duo
Killioma, dragon, forest, muscular, dick, balls, male, autofellatio, oral penetration, sitting
truegraves9, deathclaw, muscular, beach, presenting hindquarters, all fours, dick, balls, anus, looking pleasured, ass up, cum in ass, tongue out
truegraves9, deathclaw, muscular, beach, anal penetration, doggystyle, duo, male/male

6) You can find more example prompts by searching the prompt tag: https://e6ai.net/posts?tags=prompt+
7) If you want to improve the picture, experiment with different prompts, negative prompts, artist prompts, AI models, etc.
8) Sometimes the AI can't generate the character you want, for example brooklyn ( https://e6ai.net/posts?tags=brooklyn ). In that case, use a separate LoRA model. It's like a plugin for the currently loaded AI model that teaches it how to draw that character or thing without needing a whole separate large AI model.
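If you'd rather script those steps than click through the web UI, here is a minimal sketch using the Hugging Face diffusers library (my choice of tooling, not something from the posts above). The checkpoint and LoRA filenames are placeholders for whatever you actually download:

```python
# Minimal text-to-image sketch with diffusers (pip install diffusers transformers accelerate torch).
# All file paths below are placeholders for models you download yourself.
import torch
from diffusers import StableDiffusionPipeline

# Step 2 above: load a custom checkpoint instead of the base model.
pipe = StableDiffusionPipeline.from_single_file(
    "yiffymix.safetensors",            # placeholder path to your downloaded model
    torch_dtype=torch.float16,
).to("cuda")

# Step 8 above (optional): a LoRA for a specific character. Placeholder filename.
# pipe.load_lora_weights("my_character_lora.safetensors")

# Steps 3-4 above: e621-style tags, comma separated (one of the example prompts).
prompt = "todex, machine, muscular, dick, erection, balls"
negative = "low quality, blurry, bad anatomy"

image = pipe(prompt, negative_prompt=negative,
             num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("output.png")
```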


terraraptor said:
If you don't have the hardware to run it yourself, then you can use an online service. Here are a few furry-friendly and NSFW-friendly ones:

frosting.ai
seaart.ai
tungsten.run
mage.space
graydient.ai

Super-double-plus recommend frosting.ai. It has Yiffymix built in, so for this site's purposes it's top tier.

Also adding:

  • jscammie.com (queued, lots of loras)
  • mobians.ai (queued, for Sonic-style content)
  • purplesmart.ai (queued, requires Discord for generation)
  • bing.com/create (daily credits, for SFW stuff only naturally, but really good at old art styles)

As for custom models, I recommend two:

  • If you have a 3060 or higher with at least 12GB of memory, use Pony v6 XL. It uses the more accurate, natural-language-like derpibooru tags, and has a more finely tuned training set than most models.
  • If you don't have a 3060 but have at least a 1080, use yiffymix. It has a smaller training set and uses the stilted e621 tags, but it generates great-looking stuff at that memory tier. (If you're scripting, there's also a memory-saving sketch after this list.)
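If you're scripting rather than using the web UI, either tier can be squeezed a bit further with diffusers' memory options. A rough sketch, assuming an SDXL-class checkpoint and a placeholder file path; the prompt is just an example:

```python
# Memory-saving options when VRAM is tight (file path and prompt are placeholders).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "ponyDiffusionV6XL.safetensors",   # placeholder path to a Pony v6 XL checkpoint
    torch_dtype=torch.float16,          # roughly halves memory vs. float32
)
pipe.enable_model_cpu_offload()         # keeps only the active sub-model on the GPU
pipe.enable_vae_slicing()               # decodes the image in slices to save VRAM

image = pipe("score_9, anthro, dragon, forest",  # example prompt only
             num_inference_steps=25).images[0]
image.save("pony_test.png")
```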

As to the best method:

  • Always start from a sketch. No matter how bad you think you are, it will *always* give better results than the default of starting from random noise. (In tool terms this is img2img; a scripted example is sketched after this list.)
  • For the purpose of the sketch, ignore lineart. Break out of the mindset of creating your own coloring sheets: lines are just the boundary between areas of two colors, and StableDiffusion is going to be much better than you at that. Instead, think in terms of values, proportions, and lighting.
    • By 'values', I mean it should be roughly clear by the *color* alone whether a pixel is part of a foreground character (and which) or is part of the background. For example, the character in the foreground might be brighter than everything else in the background, or the character might be the only red thing in a field of greys and browns.
    • By 'proportions', I mean use the sketch to get the sizes and angles right. Tags like 'small', 'tiny', 'huge', 'hyper', and so on are vague in terms of size; tags like 'low-angle view', 'high-angle view', 'side view' are vague in terms of position. A sketch is a much better way to indicate how your character differs from the baseline average proportions, and how and where they're sitting in the scene.
    • By 'lighting', I mean make certain you're highlighting your areas consistently with the lighting direction used elsewhere in the picture; most models understand lighting but aren't necessarily consistent. Color-pick from an existing area, scrub a few percentage points brighter on the palette, color-select to restrict your brush strokes to areas of that color, and brush. Boom! You're highlighting. Scrub a few percentage points in the opposite direction: boom! You're shading.
    • Hatching with the single-pixel tool is really useful to indicate texture, flow, and contours of an area. Hatching in a similar or complementary color gives an impression of soft shading once the model smooths it out; hatching in a contrasting color gives a more intense effect but runs the risk of the model assuming you're trying to add a new object.
    • Save lines for dead last, to emphasize areas that should be seen as different entities. Draw lines a few pixels broader than you think you need them (I use 5px on a 1024-px canvas); it's easier for most models to visually winnow down a line by combining colors on the outside than it is to consistently thicken up a line along its entire length.
      • Experiment with colored lines, color-picking across a boundary and slightly darkening the average color this generates. This gives the sense of chromatic aberration, which helps for more realistic styles.
  • Watch Bob Ross. Digital paintbrushes are a lot less messy than titanium white, but his wet-on-wet techniques are very applicable to making good canvas-filling sketches, applying base colors then applying highlighting to shape areas.
  • Tag every pixel. This means that for *every* element in a picture, you should have something in your tags describing it. The e621 background tags and e621 art tags pages are handy. You want to reinforce colors, positions, poses (limbs, eyes, and hands), actions, art style (shading type, lineart type), emotions, interactions, background elements, and props.
  • For your principal character, avoid large flat areas of a single color. At its heart, all StableDiffusion does is clean up areas using the tags you give it as guidance, and if a large area has *nothing* to go off of, it won't be able to do much of anything. Use hatching to give texture, use shading to give light and dark, put some freckles on that butt that occupies 37% of your image, anything but a giant mysterious rectangle.
  • We generate, we touchup, we generate again. Make mistakes. Iterate.
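For what it's worth, the "start from a sketch" workflow above is img2img: in the Auto1111 UI it's the img2img tab with the 'Denoising strength' slider, and in a script it looks roughly like the sketch below (diffusers again, with placeholder filenames; the exact strength value is just a starting point to tune):

```python
# img2img sketch: refine a rough hand-painted image instead of starting from noise.
# Filenames are placeholders; the checkpoint is whatever model you already use.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_single_file(
    "yiffymix.safetensors", torch_dtype=torch.float16
).to("cuda")

# Resize the sketch to a resolution your model handles well.
sketch = Image.open("my_rough_sketch.png").convert("RGB").resize((768, 768))

# 'Tag every pixel': describe colors, pose, lighting, background, art style.
prompt = ("dragon, muscular, sitting, forest, soft shading, "
          "warm lighting from the left, detailed background")

# strength controls how much the model may repaint:
# low (~0.4) keeps your values and proportions, high (~0.8) repaints more freely.
image = pipe(prompt, image=sketch, strength=0.55,
             num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("refined.png")
```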

Auto1111-specific advice:

  • Use Euler a for the sampler, and Remacri for upscaling. Euler a adds a little random noise each step to keep it from converging too quickly, which tends to give more pleasant-looking shading. Remacri is just a bit more memory-efficient, and good at preserving lineart. (Euler a is built in. If you can't install Remacri, Lanczos is fine. A scripted example using these settings is sketched after this list.)
  • Use the [tag1|tag2] format. This means it will alternate between tags every step, starting with the first one and going through the rest in order. Be aware that this effectively divides the number of steps in a piece by the alternator with the largest number of pieces, so you may want to up the number of steps to 30 (for 2-3) or 40 (4+).
    • This is critical for when you want to do hyper sizes. The information in the 'big X' tags is the base for the 'huge X' tags, is the base for the 'hyper X' tags, is the base for the 'macro X' tags; however, including every tag in every step will rapidly go through your token budget.
  • Auto1111 groups tokens in batches of 75, and weighs each batch equally. So if you're at, say, 120 tokens, you might as well go to 150 tokens.
    • When you're filling out a batch, you might want to repeat especially important tokens from the beginning at the end, so it appears in multiple batches. I usually only bother for interactions and emotions, since those have to be consistent across a piece.
    • If you're at, say, 76 tokens, consider combining two low-priority tokens with one another using the [first tag|second tag] format, so you don't have one token that is weighted the same as the other 75 put together.
      • If *that* doesn't work, take out the token which covers the least pixels that isn't part of your primary character.
      • If you can't decide which token has least pixels, look up the tags on derpibooru (if using PonyXL) or e621 (if using yiffymix). Tags with less than 1000 images will be less accurate; tags with less than 100 images will most likely be interpreted literally rather than interpreted as a tag.
  • Once you're comfortable with using sketches to form good bases and using tags to tag every pixel and element, my biggest Get Good advice is to learn Regional Prompting. It takes a lot of practice (especially because most of the documentation is in Japanese), but it lets you apply individual tags to specific areas, which is amazing for control, saving token budget, and saving generation time. You can even use a tag to select an area and put a secondary design there, which gets verbose thanks to the awkward UI language but allows very intricate pieces.
  • Save some disk space - turn off saving grids.
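If you ever want to drive Auto1111 from a script instead of the browser, it exposes an HTTP API when launched with the --api flag. A minimal sketch, assuming the default local address; the prompt also shows the [tag1|tag2] alternation syntax from above:

```python
# Minimal sketch of calling the Auto1111 web UI's txt2img API.
# Assumes the UI was started with --api and is listening on the default port.
import base64
import requests

payload = {
    "prompt": "[huge muscles|hyper muscles], dragon, forest, detailed background",
    "negative_prompt": "low quality, blurry",
    "sampler_name": "Euler a",   # the sampler recommended above
    "steps": 30,                 # bumped up because of the alternation syntax
    "cfg_scale": 7,
    "width": 512,
    "height": 768,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns generated images as base64-encoded PNG strings.
with open("api_output.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```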

Some examples with images and manual work from denatural. Inpaint examples are also included; with inpainting you can try to fix specific parts of a generated image instead of counting only on big numbers of iterations and hoping it will come out right after several hundred or thousand tries. (A scripted inpainting sketch follows the links below.)

Inpaint examples
https://e6ai.net/posts/36468?q=denatural
https://e6ai.net/posts/33915?q=denatural
https://e6ai.net/posts/52723?q=denatural

Manual examples with Photoshop
https://e6ai.net/posts/50700?q=denatural
https://e6ai.net/posts/18634?q=denatural
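For anyone scripting instead of using the UI: inpainting regenerates only a masked region of an existing image (in the Auto1111 UI it's the 'Inpaint' tab under img2img). A rough diffusers sketch with placeholder filenames, assuming you have an inpainting-variant checkpoint:

```python
# Inpainting sketch: regenerate only the masked region of an existing image.
# Filenames are placeholders; white pixels in the mask mark the area to repaint.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_single_file(
    "yiffymix-inpainting.safetensors",  # placeholder: an inpainting checkpoint
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("generated.png").convert("RGB")
mask = Image.open("mask_over_broken_hand.png").convert("RGB")  # white = repaint

fixed = pipe("detailed hand, five fingers",
             image=image, mask_image=mask,
             num_inference_steps=30).images[0]
fixed.save("fixed.png")
```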

