A fking StableDiffusion model
Welcome, all you beautiful Civitai people! This model was trained directly on the sample images that model/TI/LoRA/etc. uploaders attached to their models. The prompt data supplied by the uploader, or pulled from the PNG metadata, was used for training as-is; the only curation was limiting images to at least 320 pixels and less than 1024 pixels in width or height. Furthermore, all weights and most special characters were removed, such as `.:()[]` (and possibly more, or fewer, whoops). Any `<lora:lora_name:weight>` block in a prompt was also stripped. Images were only kept if at least one tag remained after filtering.
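A minimal sketch of what that curation might look like is below; the regexes, function names, and the exact size rule are illustrative assumptions, not the actual scraper code.

```python
import re

LORA_PATTERN = re.compile(r"<lora:[^>]+>")        # e.g. <lora:lora_name:0.8>
WEIGHT_PATTERN = re.compile(r":\d+(?:\.\d+)?")    # e.g. the ":1.2" in (tag:1.2)
SPECIAL_CHARS = str.maketrans("", "", ".:()[]")   # the characters listed above


def clean_prompt(prompt: str) -> str | None:
    """Return a cleaned caption, or None if no tags survive the filters."""
    prompt = LORA_PATTERN.sub("", prompt)          # strip <lora:...:...> blocks
    prompt = WEIGHT_PATTERN.sub("", prompt)        # strip numeric weights
    prompt = prompt.translate(SPECIAL_CHARS)       # strip .:()[] characters
    tags = [t.strip() for t in prompt.split(",") if t.strip()]
    return ", ".join(tags) if tags else None       # keep only if tags remain


def keep_size(width: int, height: int) -> bool:
    """Assumed size filter: both dimensions within the 320-1024 pixel range."""
    return 320 <= width <= 1024 and 320 <= height <= 1024


# clean_prompt("masterpiece, (1girl:1.2), <lora:styleX:0.7>, [background]")
# -> "masterpiece, 1girl, background"
```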
Current Version v20230318
This version was trained on `42,350 images` for 100 epochs (4,235,000 steps) on RunPod A100 Secure Cloud (ouch, that was expensive).

- `320 <= image size <= 1024`, so try different generation dimensions.
- `by <uploader>` was added to each training caption.
- `<imagetags>` and `<triggerwords>` were added to each training caption. (Note: image tags were scraped on 03/18/2023 and may no longer be accurate, RIP.) A rough sketch of how a caption might have been assembled follows this list.
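Roughly, a final caption could be assembled as in the sketch below; the helper name and exact field handling are assumptions, not the actual training script.

```python
def build_caption(cleaned_prompt: str,
                  uploader: str,
                  image_tags: list[str],
                  trigger_words: list[str]) -> str:
    """Append the uploader credit, Civitai image tags, and trigger words."""
    parts = [cleaned_prompt, f"by {uploader}"]
    parts.extend(image_tags)       # tags scraped from the image page (03/18/2023)
    parts.extend(trigger_words)    # trigger words declared by the model uploader
    return ", ".join(p for p in parts if p)


# build_caption("1girl, solo, smile", "some_uploader", ["anime", "portrait"], ["styleXTrigger"])
# -> "1girl, solo, smile, by some_uploader, anime, portrait, styleXTrigger"
```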
The text encoder was trained, and because the training captions come mostly from prompts written for 1.x models, you'll need to prompt this model as if it were a 1.x model for the best results. If you are not looking for anime, be sure to add it to your negative prompt, as the training set also contained a large amount of it. (I will be pruning it in a future version.)
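For example, here is a minimal diffusers sketch of that advice; the checkpoint path is a placeholder, and "anime" goes in the negative prompt only if you don't want it.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/this-checkpoint",                     # placeholder path, not an official repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="masterpiece, best quality, portrait of a knight, dramatic lighting",
    negative_prompt="anime, lowres, bad anatomy",  # add "anime" if you don't want it
    width=640, height=832,                         # training images were 320-1024 px, so vary this
    num_inference_steps=30,
    guidance_scale=7.0,
).images[0]
image.save("sample.png")
```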
This model was trained without a trigger word, directly on the prompts pulled from model previews on this site. You can try using it as a general-purpose model, or as an experiment: see if you can invoke a certain model's style by using words that appear explicitly in its preview image prompts, most likely a trigger word.
I do have plans to update the checkpoint with newly uploaded Civitai images, perhaps monthly or bi-monthly; training takes a loooooong time, and I haven't decided if I want to spend the money to off-load it. The next updates will also include a txt file of all images and models used, for quicker crediting. I apologize that I didn't have the foresight to do that before I got started.
- [DONE] Include trigger words as caption data, as long as they conform to the tag filters.
- [DONE] Be more precise in the caption filter.
- [DONE] Include a credits.txt with version/dataset info to quickly identify which image was sourced from which model.
- [IMPLEMENTED] CHAD Score images to further prune the dataset of images with low aesthetic scores. (Implemented in the scraper, but not in use for this version; a rough sketch follows this list.)
- [WIP] Continue ignoring requests for NSFW or 1.x models.
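For the aesthetic-score item above, here is a hedged sketch of how a CLIP-based (CHAD-style) scorer could gate images in the scraper; the model choice, the linear head, and the weights file are assumptions, not the actual pipeline.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
aesthetic_head = torch.nn.Linear(768, 1)                          # placeholder scoring head
aesthetic_head.load_state_dict(torch.load("aesthetic_head.pt"))   # hypothetical weights file


def aesthetic_score(path: str) -> float:
    """Score one image: normalized CLIP embedding passed through the head."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        emb = clip.get_image_features(**inputs)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        return aesthetic_head(emb).item()


def keep_image(path: str, threshold: float = 5.0) -> bool:
    """Drop images whose predicted aesthetic score falls below the threshold."""
    return aesthetic_score(path) >= threshold
```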
There will not be a checkpoint trained against a 1.x base. You are welcome to train one yourself using the dataset I provided.
If you are interested in contributing monetarily, please check out the links in my profile (various charities), or consider donating directly to Civitai. Otherwise, just a thanks is all that is needed.
I will only be scraping models and images flagged SFW. I will not be training an NSFW or 18+ model, but I also didn't look at each individual picture, and some NSFW material may have slipped through.
This is Civitai's model, by the people, for the people. Please merge and remix away! I only ask that you DO NOT mix it with a model intended for resale; don't keep it from your community.
Please remember to like or rate the model. Happy prompting!