Note that the first model is trained with clip skip 1. If you prefer to use clip skip 2, please use the second model.
The two models should perform quite similarly in most situations (as long as you use the clip skip each model was trained with). Some comparisons are provided in the last two example images.
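For those using diffusers rather than the webui, here is a minimal sketch of how clip skip translates there (assuming a recent diffusers version and that the ACertainty base model is available in diffusers format at JosephusCheung/ACertainty; the prompt is just an illustration). Note that the diffusers clip_skip argument counts skipped layers, so webui "clip skip 2" corresponds to clip_skip=1:

```python
# Sketch: mapping webui "clip skip" onto the diffusers clip_skip argument.
# Loading the LoHa itself is omitted here; in the webui it is handled by
# the LyCORIS-compatible extensions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "JosephusCheung/ACertainty", torch_dtype=torch.float16
).to("cuda")

# webui clip skip 1 (use the last CLIP layer): leave clip_skip unset
image = pipe("1girl, oyama mahiro, aniscreen", clip_skip=None).images[0]

# webui clip skip 2 (use the penultimate CLIP layer)
image = pipe("1girl, oyama mahiro, aniscreen", clip_skip=1).images[0]
```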
--
The updated models are trained on the entire anime series plus the fan arts I have collected up to 2023.04.30.
I think they should improve results in particular for Miyo and Asahi.
There are only a few images of FujimiNemu and TenkawaNayuta, so I don't think these characters are properly learned.
Checkpoints from other training steps can be found in the associated Hugging Face repository https://huggingface.co/alea31415/onimai-characters, in particular https://huggingface.co/alea31415/onimai-characters/tree/main/loha_all_0430.
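If you prefer to fetch a checkpoint programmatically, something like the following works with huggingface_hub (the filename below is hypothetical; check the repo's file listing for the actual names):

```python
from huggingface_hub import hf_hub_download

# Download one checkpoint into the local Hugging Face cache.
# The filename is hypothetical: look up the real one in the
# loha_all_0430 folder of the repository.
path = hf_hub_download(
    repo_id="alea31415/onimai-characters",
    filename="loha_all_0430/example_checkpoint.safetensors",
)
print(path)  # then copy or symlink this file into your webui's model folder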
As usual, the networks are trained on top of ACertainty and should work with most popular anime models.
For the LoHa I provide both clip skip 1 and clip skip 2 versions, while the old LoRa was trained with clip skip 1 (as far as I remember).
You can also try switching between several styles: aniscreen, fanart, edstyle, and manga cover (see the example prompts after this list). Note however that:
- aniscreen may conflict with style models
- edstyle unfortunately comes with text most of the time, and it is hard to get rid of
- manga cover may also produce some text occasionally, but this can be mitigated with proper prompts and sampler choice
- most of the time, putting no style tag at all may be the best thing to do
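For concreteness, here are example prompts for style switching, written as webui-style tag lists (the exact tag combinations and the negative prompt are my own untested suggestions, not something the model requires):

```python
# Example prompts for style switching (webui tag-list style).
prompt_fanart = "oyama mahiro, 1girl, pink hair, fanart"
prompt_manga = "oyama mahiro, 1girl, pink hair, manga cover"

# Generic negative prompt suggestion for suppressing stray text.
negative = "text, signature, watermark, logo"
```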
More:
The clothes seem to be properly learned. I am not sure what the minimum set of tags is to trigger each of them; just use common sense. For example, for the uniform you can use either "school uniform, jacket" or "school uniform, suspenders", and for Mihari you can put "bolo tie, labcoat". Play with the negative prompt to avoid blending between outfits.
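As a concrete illustration of using the negative prompt against outfit blending, here is a sketch in diffusers (it reuses the pipe object from the clip skip example above; the tag choices are suggestions, not verified minimal triggers):

```python
# Ask for Mihari's labcoat outfit while pushing the school uniform tags
# into the negative prompt to reduce blending between the two outfits.
image = pipe(
    prompt="oyama mihari, bolo tie, labcoat",
    negative_prompt="school uniform, suspenders, jacket",
    clip_skip=1,  # match the clip skip the model was trained with
).images[0]
```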
During training I only removed the tags about eyes, so you may need more description of the hair style in your prompt. I made this decision because each character appears with several different hair styles.
While the choice of sampler mainly depends on your taste, it can sometimes produce weird effects. For example, a lot of strange things appear with the manga cover style when I use DDIM, whereas Euler a and DPM++ 2S Karras give better results in general. There is no text in the training images of this style, so it is quite mysterious that these artifacts show up.
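In diffusers, switching samplers means swapping the scheduler. My assumption of the rough equivalents: Euler a ≈ EulerAncestralDiscreteScheduler, and DPM++ 2S Karras ≈ DPMSolverSinglestepScheduler with Karras sigmas:

```python
from diffusers import (
    EulerAncestralDiscreteScheduler,
    DPMSolverSinglestepScheduler,
)

# Euler a
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# DPM++ 2S Karras (assumed equivalent)
pipe.scheduler = DPMSolverSinglestepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
```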
--
I don't need support or credit, but I would be glad to know that you are using the models I trained and finding them useful.
Moreover, I would like to advocate for more franchise models.
You can take a look at my workflow https://github.com/cyber-meow/anime_screenshot_pipeline if you are interested.
I just want to spread the fact that there is no reason to encode only a single concept in each LoRA.