This LoRA should not be used at a strength higher than about 0.4. That is not because it is over-trained; it is because it was made with a new double-training technique, explained below.
A LoRA of American actress Beverly D'Angelo, covering the period 1980-1985 (ages 29-34). It was trained TWICE on the same 76 images, with 1980 regularization images, for two hours total on a 3090 GPU at four epochs per training set; the two resulting models were then merged in Kohya.
This LoRA uses a new technique first shared on Reddit in late September 2023 by the user shootthesound. Please see the above link for details, but the long and short of it is that you create two versions of the same training data (one square and one portrait, for instance 512x512 and 512x768), and train a LoRA on each of them.
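As a rough illustration, here is what the two runs might look like if you drive kohya-ss/sd-scripts from Python. This is a minimal sketch only: all paths, the base model, and the hyperparameters shown are placeholder assumptions, not my exact settings (I used the Kohya GUI).

```python
# A minimal sketch of the double-training idea, assuming kohya-ss/sd-scripts.
# All paths, the base model, and hyperparameters are placeholders.
import subprocess

COMMON = [
    "accelerate", "launch", "train_network.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--network_module", "networks.lora",
    "--reg_data_dir", "reg_images/",   # regularization images
    "--max_train_epochs", "4",         # four epochs per training set
    "--output_dir", "output/",
]

# Run 1: the square-cropped copy of the dataset.
subprocess.run(COMMON + [
    "--train_data_dir", "dataset_square/",
    "--resolution", "512,512",
    "--output_name", "bdangelo_square",
], check=True)

# Run 2: the portrait-cropped copy of the same dataset.
subprocess.run(COMMON + [
    "--train_data_dir", "dataset_portrait/",
    "--resolution", "512,768",
    "--output_name", "bdangelo_portrait",
], check=True)
```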
You then pick the best trained checkpoint from each and merge them in Kohya at 100% strength each. See the original post for comments from a machine learning expert as to why this massively improves the quality of the LoRA, but suffice it to say that the merged LoRA now has the best of both worlds.
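The merge itself can be done in the Kohya GUI; for the curious, a sketch of the same step using the `networks/merge_lora.py` script that ships with kohya-ss/sd-scripts would look something like this (the filenames are placeholders for whichever epochs looked best):

```python
# Merging the two chosen checkpoints at 100% strength each, assuming the
# networks/merge_lora.py script from kohya-ss/sd-scripts. Filenames are
# placeholders for whichever epoch of each run looked best.
import subprocess

subprocess.run([
    "python", "networks/merge_lora.py",
    "--models", "bdangelo_square.safetensors", "bdangelo_portrait.safetensors",
    "--ratios", "1.0", "1.0",   # 100% each, hence the ~0.4 usage strength
    "--save_to", "bdangelo_merged.safetensors",
], check=True)
```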
Because the two halves are merged at 100% each, LoRAs made with this technique need to be used at around 0.4 strength: technically, the merged file represents 200% strength!
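If you generate with diffusers rather than a UI, applying the merged file at about 0.4 looks like the sketch below; the base model, filename, and prompt are placeholder assumptions.

```python
# A minimal diffusers sketch of using the merged LoRA at ~0.4 strength.
# The base model, filename, and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bdangelo_merged.safetensors")

image = pipe(
    "photo of a woman at a cafe, 1980s",
    cross_attention_kwargs={"scale": 0.4},  # ~0.4, since the merge is 100% + 100%
).images[0]
image.save("out.png")
```

In AUTOMATIC1111 the equivalent is simply `<lora:bdangelo_merged:0.4>` in the prompt.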
There are several benefits to this approach:
Faces are much more detailed, even when they are small in frame.
Overall image quality is dramatically improved.
You can ramp the CFG scale up almost to its maximum before quality degrades, which means your prompt instructions are followed closely without sacrificing image quality (a quick way to test this is sketched below).
The resulting LoRA is incredibly disentangled, and can adopt poses and characteristics present in the LAION dataset that do not exist anywhere in your own training material.
And there's more besides, but try it and see for yourself.
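One easy experiment along those lines is to sweep the CFG scale and watch how late quality holds up. The sketch below is a self-contained example under the same placeholder assumptions as above:

```python
# A quick self-test of the CFG claim: sweep guidance_scale and compare.
# Setup matches the earlier sketch; paths and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bdangelo_merged.safetensors")

for cfg in (7, 10, 13, 16):
    image = pipe(
        "portrait photo of a woman, 1980s film still",
        cross_attention_kwargs={"scale": 0.4},
        guidance_scale=cfg,  # ordinary LoRAs usually fall apart well before 16
        generator=torch.Generator("cuda").manual_seed(42),  # fixed seed for a fair comparison
    ).images[0]
    image.save(f"cfg_{cfg}.png")
```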
I generally use these models in complex workflows: inpainting faces after the initial T2I pass and using ControlNet extensively. So if you're hoping for one-click prompt magic, be aware that my models aren't curated with that in mind; they're built as tools for traditional workflows that also use Photoshop and other older methods.
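For what it's worth, a programmatic equivalent of that face-inpainting pass might look like the sketch below. I work in UIs rather than scripts, so the inpaint base model, file paths, and prompt here are all placeholder assumptions.

```python
# A sketch of the face-inpainting pass, assuming a diffusers inpaint pipeline.
# The inpaint base model, file paths, and prompt are all placeholders.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("bdangelo_merged.safetensors")

init = Image.open("t2i_output.png").convert("RGB")  # the initial T2I render
mask = Image.open("face_mask.png").convert("L")     # white over the face region

result = pipe(
    "detailed photo of a woman's face, 1980s",
    image=init,
    mask_image=mask,
    cross_attention_kwargs={"scale": 0.4},
).images[0]
result.save("inpainted.png")
```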