🐣 Please follow me for new updates https://twitter.com/camenduru
🔥 Please join our discord server https://discord.gg/k5BwmmvJJU
🥳 Please join my patreon community https://patreon.com/camenduru
Model Colab: https://github.com/camenduru/ios-emoji-xl-model-colab
Model Mirror: https://huggingface.co/camenduru/ios-emoji-xl/blob/main/ios_emoji_xl_v2_lora_webui.safetensors
Model Dataset: https://huggingface.co/camenduru/ios-emoji-xl/blob/main/dataset_160x160_images.zip
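If you want to fetch the mirrored LoRA programmatically instead of clicking through the Hugging Face page, a minimal sketch using the huggingface_hub client (a convenience, not part of the original instructions) looks like this; the repo id and filename are taken from the mirror link above.

# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# downloads ios_emoji_xl_v2_lora_webui.safetensors from the mirror repo to the local cache
path = hf_hub_download(
    repo_id='camenduru/ios-emoji-xl',
    filename='ios_emoji_xl_v2_lora_webui.safetensors',
)
print(path)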
Thanks to https://github.com/samuelngs/apple-emoji-linux for the 160x160 pixel emojis ❤
Thanks to https://replicate.com ❤
Training Logs:
Trained with 160x160 pixel iOS 16.4 emojis 😋
GPU = Nvidia A40 (Large) at https://replicate.com
Num examples = 4129
Num batches each epoch = 1033
Num Epochs = 1
Instantaneous batch size per device = 4
Total train batch size (w. parallel, distributed & accumulation) = 4
Gradient Accumulation steps = 1
Total optimization steps = 1000
Total Run time: 45.46 minutes
Total Cost: $1.98
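As a quick sanity check on the training numbers above (illustrative arithmetic only, not part of the training code): with 4129 examples and a batch size of 4, one epoch is ceil(4129 / 4) = 1033 batches, and 1000 optimization steps at gradient accumulation 1 therefore covers slightly less than one full pass over the dataset.

import math

num_examples = 4129
batch_size = 4        # instantaneous batch size per device
grad_accum = 1        # gradient accumulation steps

# batches per epoch = ceil(examples / batch size)
print(math.ceil(num_examples / batch_size))        # 1033

# images seen in 1000 optimization steps
print(1000 * batch_size * grad_accum)              # 4000 of 4129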
Replicate LoRA to WebUI LoRA Converter
pip install safetensors==0.3.3
import re
from safetensors.torch import load_file, save_file

# Load the Replicate/diffusers-style LoRA and rename every key to the
# kohya/WebUI naming convention (lora_unet_* with lora_up / lora_down).
checkpoint = load_file('/content/ui/models/Lora/ios_emoji_xl_v2_lora.safetensors')
new_dict = dict()
for key in checkpoint:
    new_key = re.sub(r'\.processor\.', '_', key)
    new_key = re.sub(r'mid_block\.', 'mid_block_', new_key)
    new_key = re.sub(r'_lora\.up\.', '.lora_up.', new_key)
    new_key = re.sub(r'_lora\.down\.', '.lora_down.', new_key)
    new_key = re.sub(r'\.(\d+)\.', r'_\1_', new_key)
    new_key = re.sub(r'to_out', 'to_out_0', new_key)
    new_key = 'lora_unet_' + new_key
    new_dict[new_key] = checkpoint[key]
save_file(new_dict, 'ios_emoji_xl_v2_lora_webui.safetensors')
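To double-check the conversion, you can reload the new file and confirm the keys now use the WebUI naming (a small optional check, not part of the original script):

from safetensors.torch import load_file

converted = load_file('ios_emoji_xl_v2_lora_webui.safetensors')
# every key should now start with 'lora_unet_' and use lora_up / lora_down
print(list(converted)[:5])
print(all(k.startswith('lora_unet_') for k in converted))

Once the converted file is placed in the WebUI models/Lora folder, it can be activated from the prompt with the usual <lora:ios_emoji_xl_v2_lora_webui:1> syntax.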
Thanks to fofr ❤ for the idea.
fofr's model: https://twitter.com/fofrAI/status/1698741974835065171