This is a LoRA of the internet celebrity Belle Delphine for Stable Diffusion XL. (For my previous LoRA for 1.5-based checkpoints, see here.)
This LoRA is quite flexible, though that is probably mostly thanks to SDXL rather than to my specific training.
The trigger word is "Belle Delphine". Additionally, "braces" was tagged a few times in the source images, so you might also be able to use it as a tag during generation.
I suggest a default LoRA weight of 1.
I tested this LoRA with the base model plus refiner, but I wasn't really happy with the results, as the refiner washed out the learned characteristics again.
Most images were generated with DreamShaper XL1.0 Alpha2.
I additionally tested Tdg8uU's SDXL1.0 mix and Anime Art Diffusion XL (the last four wide images were generated with four different models).
These images were rendered with ComfyUI using a fairly high-fidelity workflow (varying somewhat per image). In general, results were produced at a 1024x1024 base resolution (with varying aspect ratios), followed by a latent img2img upscale by a factor of 1.5 and, where needed, face detailing with the ComfyUI-Impact-Pack. The workflow should be embedded in the example images, and it should run with >= 8GB VRAM. To make use of it you will need the following custom ComfyUI nodes:
ComfyUI-Impact-Pack (used for face detailing)
WAS Node Suite (only used to save images with a timestamp; otherwise unused)
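For readers who want to reproduce the general idea outside ComfyUI, here is a minimal sketch of the base-then-upscale pass using diffusers. This is an approximation, not the embedded workflow: the upscale here happens in pixel space rather than latent space, the face-detailing step is omitted, and the model/LoRA filenames and the prompt are placeholders.

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Base pass at 1024x1024, LoRA applied at weight 1 (hypothetical filename).
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
base.load_lora_weights("belle_delphine_xl.safetensors")

prompt = "photo of Belle Delphine, braces, smiling"
image = base(prompt=prompt, width=1024, height=1024,
             cross_attention_kwargs={"scale": 1.0}).images[0]

# Upscale pass: 1.5x resize, then a low-strength img2img re-denoise.
# (The original workflow upscales in latent space instead.)
img2img = StableDiffusionXLImg2ImgPipeline(**base.components)
upscaled = image.resize((1536, 1536))
final = img2img(prompt=prompt, image=upscaled, strength=0.35).images[0]
final.save("result.png")
```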
I assume some people might be interested in how this was trained, but I have to disappoint you: it was my first attempt with default settings, and I am sufficiently happy with the result, so there are no secret tips. In detail, I used the following base configuration file.
With those settings I used 151 uncropped, manually captioned high-resolution images, with bucketing in the 512-2048 resolution range and a bucket step of 32. The image dataset was set to 25 repetitions; additionally there was a regularization dataset (freshly generated with SDXL) of about 3000 images, set to 2 repetitions. Originally I set training to 30 epochs (the default of the previously mentioned config file), but from around epoch 8 the results began to overbake, so I stopped after 18 epochs (around 27000 steps). The final LoRA is a merge of epochs 3 (10%), 6 (70%), 9 (12%) and 18 (8%). After that I resized it with a factor of 0.96 and target rank 32, resulting in the ~40MB file. As the merge percentages show, good LoRA results were already achieved relatively early.
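For the curious, the epoch merge can be done as a simple weighted sum over the corresponding LoRA tensors. The sketch below assumes kohya-style .safetensors checkpoints with identical keys; the filenames are hypothetical. Note that this is a naive linear blend (tools such as kohya's svd_merge_lora.py do this more rigorously via SVD), and the subsequent resize to rank 32 would be a separate step, e.g. with kohya's resize_lora.py.

```python
# Naive weighted merge of LoRA epoch checkpoints (hypothetical filenames),
# mirroring the 10/70/12/8% blend described above.
import torch
from safetensors.torch import load_file, save_file

parts = [
    ("epoch-000003.safetensors", 0.10),
    ("epoch-000006.safetensors", 0.70),
    ("epoch-000009.safetensors", 0.12),
    ("epoch-000018.safetensors", 0.08),
]

merged: dict[str, torch.Tensor] = {}
for path, weight in parts:
    for key, tensor in load_file(path).items():
        # All checkpoints share the same keys, so a per-tensor weighted sum works.
        acc = merged.get(key, torch.zeros_like(tensor, dtype=torch.float32))
        merged[key] = acc + weight * tensor.to(torch.float32)

save_file({k: v.to(torch.float16) for k, v in merged.items()},
          "belle_delphine_xl_merged.safetensors")
```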
Training was done on a rented 3090 (cloud). Should you have any additional questions, feel free to ask.