AIBooru Beta

Comments


JanusKamlygan said:

Fantastic imagery. What is the model exactly? I've never heard of Anydream.

It's just a mix I made myself. Click the AnyDream tag to get info on how to make and use it. This image in particular was a variant using a 50/50 mix of Dreamlike and SimpMaker3k1.
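
Purely as an illustration of what a 50/50 weighted-sum merge does (the actual AnyDream recipe is on the tag wiki), here is a minimal sketch in Python, assuming both checkpoints are available as .safetensors state dicts; the file names are placeholders:

    # Minimal 50/50 weighted-sum merge of two Stable Diffusion checkpoints.
    # File names are placeholders; the real recipe is on the AnyDream tag.
    import torch
    from safetensors.torch import load_file, save_file

    a = load_file("dreamlike.safetensors")     # first model's state dict
    b = load_file("simpmaker3k1.safetensors")  # second model's state dict

    alpha = 0.5  # 50/50 mix: 0.0 = all A, 1.0 = all B
    merged = {}
    for key, tensor_a in a.items():
        if key in b and b[key].shape == tensor_a.shape:
            merged[key] = (1 - alpha) * tensor_a + alpha * b[key].to(tensor_a.dtype)
        else:
            merged[key] = tensor_a  # keep A's weights where the models don't line up

    save_file(merged, "anydream_mix.safetensors")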

  • 1
  • Reply
  • [hidden]

    barroth said:

    The best 'from behind' I've ever seen. Also, I'm curious about the second LoRA, “epoch-000016”, that you used. What is it for?

    It's a futa/newhalf LoRA I trained. Unfortunately, it was pretty overfitted and couldn't do much aside from "from behind" tags, so I got rid of it. I trained a new LoRA that is overall more consistent with other poses as well, and can still do "from behind" tags. Here's the folder:
    https://mega.nz/folder/XMgjVDja#hSYAT_EsNSokys7oAIKm2A

    Updated

  • 1
  • Reply
  • [hidden]

    Hey, I'm going crazy trying to get something close to your image. Even using the data you provided, I can't get anything similar. I'm also using zankuro_e6, concept_pronebone and pyramithrapneuma, but no matter what values I assign, my images look nothing like yours. Can you give me some clues?

    Also, thanks for your amazing work!

  • 0
  • Reply
  • [hidden]

    Kukar said:

    I use the model, the hash is the same, everything is the same, but the drawing style and quality are very different; the ones I get are more realistic.
    I don't understand. You didn't do the AbyssOrangeMix2_hard merge, did you?

    You have to use LoRAs like pronebone and zankuro_e6 in order to get these; I also used a Hinata Hyuuga LoRA.
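
    (In the AUTOMATIC1111 webui these are typically added straight in the prompt, e.g. <lora:zankuro_e6:0.7>. As a rough diffusers-based sketch of the same idea, not the poster's actual setup: paths, adapter names and weights are placeholders, and it assumes a recent diffusers release with the PEFT backend.)

        # Rough diffusers sketch of stacking several LoRAs on one base model.
        # Paths, adapter names and weights are placeholders, not the poster's settings.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_single_file(
            "AbyssOrangeMix2_hard.safetensors", torch_dtype=torch.float16).to("cuda")

        pipe.load_lora_weights(".", weight_name="concept_pronebone.safetensors", adapter_name="pronebone")
        pipe.load_lora_weights(".", weight_name="zankuro_e6.safetensors", adapter_name="zankuro")
        pipe.set_adapters(["pronebone", "zankuro"], adapter_weights=[0.7, 0.6])

        image = pipe("1girl, from behind", num_inference_steps=28, guidance_scale=7).images[0]
        image.save("test.png")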

  • 0
  • Reply
  • [hidden]

    Saskweise said:

    I am using AbyssOrangeMix2_hard

    I use the model, the hash is the same, everything is the same, but the drawing style and quality are very different; the ones I get are more realistic.
    I don't understand. You didn't do the AbyssOrangeMix2_hard merge, did you?

  • 0
  • Reply
  • [hidden]

    sheev_the_senate said:

    Judging by the desaturated colors you don't have a VAE loaded. You might want to download a VAE and tell Stable Diffusion to use it. This will make the colors a lot more vibrant.

    If you choose to use the VAE from the Anything model then you'll probably have to launch the WebUI with the --no-half-vae parameter, otherwise it will occasionally produce black images. Took me a while to figure that one out.

    thx thx thx

  • 0
  • Reply
  • [hidden]

    Judging by the desaturated colors you don't have a VAE loaded. You might want to download a VAE and tell Stable Diffusion to use it. This will make the colors a lot more vibrant.

    If you choose to use the VAE from the Anything model then you'll probably have to launch the WebUI with the --no-half-vae parameter, otherwise it will occasionally produce black images. Took me a while to figure that one out.

  • 2
  • Reply
  • [hidden]

    antlers_anon said:

    It works almost exactly the same as in the img2img tab. I lowered it from the default (0.7 I think) to 0.6 to reduce the amount of mouths and nipples popping in random places (might be placebo though).

    Good to know, thanks. It seems to just be built into the generation, I suppose.

  • 0
  • Reply
  • [hidden]

    Ocean3 said:

    Ah, thanks - I'm not even too sure what that setting does and haven't used it 👌

    It works almost exactly the same as in the img2img tab. I lowered it from the default (0.7 I think) to 0.6 to reduce the amount of mouths and nipples popping in random places (might be placebo though).
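
    (For reference, the analogous knob in diffusers is the strength argument of the img2img pipeline. A minimal sketch follows, with placeholder model and image paths.)

        # Minimal img2img sketch showing the denoising-strength knob discussed above.
        # strength=0.6 keeps more of the input than the usual 0.7 default.
        # Model and image paths are placeholders.
        import torch
        from diffusers import StableDiffusionImg2ImgPipeline
        from diffusers.utils import load_image

        pipe = StableDiffusionImg2ImgPipeline.from_single_file(
            "model.safetensors", torch_dtype=torch.float16).to("cuda")

        init = load_image("lowres_draft.png").resize((768, 1152))
        image = pipe(
            "1girl, detailed background",
            image=init,
            strength=0.6,  # lower value stays closer to the input, fewer stray details
            guidance_scale=7,
            num_inference_steps=28,
        ).images[0]
        image.save("upscaled.png")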

  • 0
  • Reply
  • [hidden]

    antlers_anon said:

    It's the one in highres fix. I'm over 500 commits behind on the webui so it might work differently now. (New versions changed the api which broke my autoprompt scripts and I'm too lazy to fix it.)

    Ah, thanks - I'm not even too sure what that setting does and haven't used it 👌

  • 0
  • Reply
  • [hidden]

    ForeskinThief said:

    bro used 80 different models to generate this masterpiece

    Was experimenting with a mass 'anime-styled' model mix. My prompt at the time had a lot of weight on specific things, which shifted how the model responded and made me notice how it handles certain subjects (architecture, food, etc.). I've done a few tests since with the random merge, and this is one of the results 👌
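
    (Purely to illustrate the idea of a random multi-model merge, not the actual recipe behind this post: a toy sketch with a placeholder model folder.)

        # Toy sketch of a "mass merge": average many checkpoints with random weights.
        # The folder and output names are placeholders; this is not the posted recipe.
        import glob
        import torch
        from safetensors.torch import load_file, save_file

        paths = sorted(glob.glob("models/*.safetensors"))
        weights = torch.rand(len(paths))
        weights /= weights.sum()  # normalize so the mix weights sum to 1

        merged = None
        for w, path in zip(weights.tolist(), paths):
            sd = load_file(path)
            if merged is None:
                merged = {k: w * v.float() for k, v in sd.items()}
            else:
                for k, v in sd.items():
                    if k in merged and merged[k].shape == v.shape:
                        merged[k] += w * v.float()

        save_file({k: v.half() for k, v in merged.items()}, "random_mix.safetensors")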

  • 1
  • Reply
  • [hidden]

    Ocean3 said:

    When you mention denoising strength, is that referring to the upscale setting or something else?

    It's the one in highres fix. I'm over 500 commits behind on the webui so it might work differently now. (New versions changed the api which broke my autoprompt scripts and I'm too lazy to fix it.)

  • 0
  • Reply
  • [hidden]

    sheev_the_senate said:

    Judging by the filename the prompt was something like "peaceful landscape cinematic early morning flat grassy".

    I'm guessing 'grassy field' or something. I'm going to stop being lazy and using picture links from now on, and download with the metadata instead.

  • 0
  • Reply
  • [hidden]

    sheev_the_senate said:

    Judging by the desaturated colors you don't have a VAE (variational autoencoder) loaded. I've had the same issue in the past too. You should download one of those and try generating with it selected - the colors look much better that way. vae-ft-mse-840000-ema-pruned is a decent choice, but for anime style images the one from Anything-v3 or NovelAI would probably work better. If you choose either of those two latter ones you'll need to launch automatic-1111 with the --no-half-vae argument to avoid occasional completely black images.

    Thank you for sharing your experience! Now I know how it works 🙂

  • 2
  • Reply
  • [hidden]

    Judging by the desaturated colors you don't have a VAE (variational autoencoder) loaded. I've had the same issue in the past too. You should download one of those and try generating with it selected - the colors look much better that way. vae-ft-mse-840000-ema-pruned is a decent choice, but for anime style images the one from Anything-v3 or NovelAI would probably work better. If you choose either of those two latter ones you'll need to launch automatic-1111 with the --no-half-vae argument to avoid occasional completely black images.
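
    (For diffusers users, the rough equivalent is loading the VAE separately and attaching it to the pipeline; vae-ft-mse-840000-ema-pruned is available in diffusers format as stabilityai/sd-vae-ft-mse. A minimal sketch, with the base model path as a placeholder; keeping everything in float32 here sidesteps the half-precision VAE problem that --no-half-vae works around in the webui.)

        # Sketch of attaching an external VAE in diffusers. The base model path is
        # a placeholder; the VAE repo hosts vae-ft-mse-840000-ema-pruned in
        # diffusers format. Keeping the VAE in float32 avoids the occasional
        # all-black output that half-precision VAEs can produce.
        import torch
        from diffusers import AutoencoderKL, StableDiffusionPipeline

        vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")  # float32

        pipe = StableDiffusionPipeline.from_single_file("model.safetensors")  # float32 to match
        pipe.vae = vae
        pipe.to("cuda")

        image = pipe("scenery, vibrant colors", num_inference_steps=28).images[0]
        image.save("with_vae.png")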

  • 2
  • Reply
  • [hidden]

    Lyren said:

    No one here knows who made the image, so unfortunately the model is unknown unless someone finds the artist who made the picture.

    But maybe we also need the same hypernetwork. Just using the same model, I can't get the same result... (Oh no!!!!!)

  • 0
  • Reply
  • [hidden]

    Lyren said:

    No one here knows who made the image, so unfortunately the model is unknown unless someone finds the artist who made the picture.

    I found it! The model is 'AbyssOrangeMix2_hard.safetensors', and its hash is 931f9552!

  • 0
  • Reply
  • [hidden]

    xssw said:

    This model's output looks just like anime; it's amazing!!!!! Would you be willing to share the model? (Please!!!!!!)

    No one here knows who made the image, so unfortunately the model is unknown unless someone finds the artist who made the picture.

  • 0
  • Reply
  • [hidden]

    Greg_Torbinson said:

    Hey, this is awesome, man! I'd like to try making some too. Do you have a tutorial or anything that could teach me how? I could send it to you after.

    Hi mate, thanks :) I don't really know; you should go on the Unstable Diffusion Discord and start talking with people, that's how I learned. I'm basically doing img2img batch processing over a base video that I'm also making. You can PM me on Discord: Sambalek#8026
    Or TikTok: @Proteinique

  • 0
  • Reply
  • [hidden]

    The interesting part is that this picture is pretty because I set up Stable Diffusion wrong.

    The desaturation was partially caused by leaving the VAE setting on "auto". After I set it to nai.vae, it became more saturated (still good though: https://files.catbox.moe/850cnu.png ). You can try disabling the VAE (set it to "auto" or "none" in settings) for more desaturation.

    Read here for more info: https://rentry.org/hdgrecipes#bruising-in-merged-model-outputs

    Updated

  • 2
  • Reply
  • [hidden]

    kaisu1 said:

    Hi what model formula did you use? Looks good!

    Animefull(t2i) -> AbyssOrangeMix2_hard(i2i)

    ---t2i parameters---
    Steps: 28, Sampler: Euler a, CFG scale: 11, Seed: 1636018072, Size: 768x576, Model hash: 925997e9

    ---i2i parameters---
    Steps: 28, Sampler: Euler a, CFG scale: 6, Seed: 909055350, Size: 768x576, Model hash: 931f9552, Denoising strength: 0.7

    Updated
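
    (A rough diffusers translation of that two-stage recipe, using the posted parameters; the model file names and the prompt are placeholders, so this is only a sketch of the workflow.)

        # Rough diffusers translation of the two-stage recipe above: txt2img on
        # Animefull, then img2img on AbyssOrangeMix2_hard. File names and the
        # prompt are placeholders; only the listed parameters come from the post.
        import torch
        from diffusers import (StableDiffusionPipeline, StableDiffusionImg2ImgPipeline,
                               EulerAncestralDiscreteScheduler)

        prompt = "..."  # the actual prompt isn't given in the comment

        # Stage 1: txt2img (Steps 28, Euler a, CFG 11, seed 1636018072, 768x576)
        t2i = StableDiffusionPipeline.from_single_file(
            "animefull-final-pruned.ckpt", torch_dtype=torch.float16).to("cuda")
        t2i.scheduler = EulerAncestralDiscreteScheduler.from_config(t2i.scheduler.config)
        base = t2i(prompt, num_inference_steps=28, guidance_scale=11, width=768, height=576,
                   generator=torch.Generator("cuda").manual_seed(1636018072)).images[0]

        # Stage 2: img2img (Steps 28, Euler a, CFG 6, seed 909055350, denoising strength 0.7)
        i2i = StableDiffusionImg2ImgPipeline.from_single_file(
            "AbyssOrangeMix2_hard.safetensors", torch_dtype=torch.float16).to("cuda")
        i2i.scheduler = EulerAncestralDiscreteScheduler.from_config(i2i.scheduler.config)
        final = i2i(prompt, image=base, strength=0.7, num_inference_steps=28, guidance_scale=6,
                    generator=torch.Generator("cuda").manual_seed(909055350)).images[0]
        final.save("result.png")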

  • 0
  • Reply
  • [hidden]

    The model used was a 50% merge of Protogen x3.4 and Anything-3.0.
    Believe it or not, the "aroused" and "horny" keywords are there for the facial expression.
    "Dreamlikeart" was included by mistake when I used an old prompt with a different model (the previous model I used was a 50% merge of Dreamlike and Anything-v3).

  • 0
  • Reply
  • [hidden]

    @antlers_anon, are you sure there's no hypernetwork selected in your settings tab, or a textual inversion (TI) being used to generate this image?

    I tried to generate this with your provided mix__2.ckpt with all of the exact same settings (prompt, sampler, CFG scale, seed) and it still results in a fairly realistic non-anime style.

    If nothing else, would you be willing to share your embeddings folder and hypernetworks folder? (just a screenshot would probably do as well?)

    edit: looks like the firstpass width/height was set to 0x0, so it was just straight-up going for 768x1280 for initial resolution. I ended up finding that it gets pretty close, especially if I use kl-f8-anime2.vae instead of novelai/anythingv3 vae.

    I'm guessing that colour correction and some filtering add a little noise.

    Updated

  • 0
  • Reply
  • [hidden]

    avrix said:

    Thanks. Yeah, I tried the same prompt, same model mix, and same seed, and I found I wasn't even able to get close to a similar anime-style output.

    Can you give me the hashes of the models you used? My result's hash is different; they might be slightly different models.

  • 0
  • Reply
  • [hidden]

    ANJU said:

    I was gonna make a meme comment, but I feel like being helpful. :u

    Being a Builder gives you access to a few additional features regular users don't have, like using the mode menu when viewing posts (for quick faving/unfaving and for tag scripts) and being able to give other people feedback, to name two of them. And since the Gold and Platinum levels are only sort of used, I guess (but not really?), Builder is kind of the default level people get promoted to if they're active in uploading and tagging.

    Oh I know that now. But this pic is actually an accurate representation of how I felt when the message came in. I was like, "Oh cool! I'm a builder! ....... what's a builder?"

  • 3
  • Reply