update - 4th Feb 2023: V15 version added
update - 19th Jan 2023: Johnson has been experimenting and passed this to me to include on this listing:
djzGingerTomCatV21-512-inpainting
please use an SD-V21-512-inpainting.yaml config (included below)
we recommend using the "latent nothing" mask mode.
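If you run the model through the AUTOMATIC1111 WebUI, here is a minimal sketch of wiring up the config: the WebUI loads a .yaml that sits next to the checkpoint with the same base filename. The paths and filenames below are placeholders for your own install and downloads, and "latent nothing" itself is selected in the img2img inpaint tab under "Masked content", not in code.

```python
# Minimal sketch, assuming the AUTOMATIC1111 WebUI folder layout: a .yaml placed
# next to the checkpoint with the same base filename is picked up as that
# model's config. Paths below are placeholders; adjust them to your own setup.
import shutil
from pathlib import Path

models_dir = Path("stable-diffusion-webui/models/Stable-diffusion")
ckpt = models_dir / "djzGingerTomCatV21-512-inpainting.ckpt"

# Copy the provided inpainting config next to the checkpoint, renamed to match it.
shutil.copyfile("SD-V21-512-inpainting.yaml", ckpt.with_suffix(".yaml"))
print(f"Config installed as {ckpt.with_suffix('.yaml')}")
```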
This model combines two GingerTomCat datasets created by DriftJohnson. It is then reinforced by two textual inversions:
the KittyPics embedding by stille_willem &
the Point-E negative embedding by Doctor_Diffusion
The 3x3 grid uses both embeddings; the 2x2 grid shows same-seed outputs with some suggested upscaler settings.
showcase credit: DriftJohnson
"strong style" Models are intended to be merged with each other and any model for Stable Diffusion 2.1 -- although you can also use these without the Trigger word like any model
I recommend merging at 0.5 (a 50/50 blend), then using prompt weighting to control the aesthetic gradient.
Example merged-model prompt with automatic1111:
(GingerTomCat:1.2) (yourmodeltoken:0.8)
if you drop the "djz" and the "V21" what remains is the token you need to call up the concept in the model. All examples shown were the Raw Token, no other words. Tokens are case sensitive and in almost all models it will match the filename.
It is also possible to merge these models with each other at a different ratio, or to pair models and then merge the resulting models. In this way you can blend abstract concepts together and then weight the tokens to achieve the result you want to create.
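For anyone merging outside the WebUI, here is a minimal sketch of the weighted-sum merge described above. The filenames are placeholders, and it assumes both inputs are ordinary Stable Diffusion 2.1 checkpoints with matching layouts.

```python
# Minimal sketch of the 0.5 weighted-sum merge recommended above.
# Filenames are placeholders; both inputs are assumed to be SD 2.1 checkpoints.
import torch

def load_weights(path):
    ckpt = torch.load(path, map_location="cpu")
    # Most SD checkpoints nest their weights under a "state_dict" key.
    return ckpt["state_dict"] if "state_dict" in ckpt else ckpt

a = load_weights("djzGingerTomCatV21.ckpt")     # the style model
b = load_weights("your_other_sd21_model.ckpt")  # any other SD 2.1 model

alpha = 0.5  # 0.5 is the 50/50 blend; raise or lower it to favour one model
merged = {}
for key, tensor_a in a.items():
    if (key in b and torch.is_tensor(tensor_a)
            and tensor_a.is_floating_point()
            and tensor_a.shape == b[key].shape):
        merged[key] = (1.0 - alpha) * tensor_a + alpha * b[key]
    else:
        merged[key] = tensor_a  # keep keys unique to (or mismatched with) model A

torch.save({"state_dict": merged}, "GingerTomCat_50-50_merge.ckpt")
```

This mirrors what the AUTOMATIC1111 checkpoint merger does in "Weighted sum" mode; once saved, load the merged file like any other checkpoint and weight the tokens in the prompt as shown above.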
Of course, to eliminate all those tokens, you can simply train a new custom model from the outputs, which brings you back to a single token.
A video explanation will follow, but for now the explanation above should do. We are focused on getting as many style/aesthetic models into artists' hands to enhance the creativity already at their fingertips.
Art Freedom for all!!
[all original artwork used for training with full permission from Drift Johnson]