This is a style LoRA trained on data from "いちご飴" (Ichigo Ame). It was created by accident, the result of an unintended mistake during training. Even so, when I actually tried it, the results were surprisingly good in some areas, so I decided to release it here in case anyone needs or likes it. The dataset consists mainly of scenery images. When I applied the LoRA to characters, however, it did not fully reproduce the dataset's style; instead it formed a new style, retaining some of the line work and color-tone features of the dataset.
For the weight, I suggest 0.4-0.8, though this is not absolute. Normally a good style LoRA needs no trigger word, but while processing the data I inadvertently added the unique tag "ichigoameeee" as one. The image below shows how the weight, and the presence or absence of the trigger word, affect the style. Whether to add the trigger word, and at what weight, is entirely up to the user.
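As a minimal sketch of the usage described above, here is how the weight and optional trigger word might be combined in an AUTOMATIC1111/WebUI-style prompt. The `<lora:name:weight>` tag syntax and the "ichigoameeee" trigger word are real; the file name "ichigoame" and the helper function itself are illustrative assumptions.

```python
def build_prompt(tags, weight=0.6, use_trigger=True):
    """Assemble a WebUI-style prompt for this LoRA.

    The <lora:ichigoame:weight> tag applies the LoRA at the given
    strength (0.4-0.8 suggested); the trigger word is optional.
    Note: "ichigoame" is a placeholder for the actual LoRA file name.
    """
    parts = [f"<lora:ichigoame:{weight}>"]
    if use_trigger:
        parts.append("ichigoameeee")
    parts.extend(tags)
    return ", ".join(parts)

print(build_prompt(["1girl", "scenery"], weight=0.6))
# -> <lora:ichigoame:0.6>, ichigoameeee, 1girl, scenery
```

Dropping `use_trigger` or lowering `weight` lets you compare the variations shown in the image below.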
The base model I used is refslvaeV1 with its corresponding VAE. Under that model, this style works with most character LoRA models, but I have not tested how well other checkpoints suit this style. A more stable second version may be released in the future.