Are `Dreamshaper` or `Anything` models re-trained from scratch or fine-tuned?
I want to understand how models like Anything or Dreamshaper are trained and how many images were used for them:
1. Are they completely retrained and fine-tuned from scratch?
2. Are they fine-tuned using Dreambooth?
3. Are they fine-tuned using LoRA?
4. Are they trained NOT on the original SD 1.5, 2.0 or SDXL, and instead trained on a Dreambooth or LoRA model? Is this possible?
5. Are they fine-tuned using some other approach?
If these models are fine-tuned using Dreambooth or LoRA, is there any proper tutorial on style training? Most tutorials I find on YouTube differ from one another on style training. For example [this tutorial](https://www.youtube.com/watch?v=m72B17O3xDw) has a different way of captioning and training the style than [this tutorial](https://www.youtube.com/watch?v=tgRiZzwSdXg), and all of them claim to train styles using Dreambooth or LoRA.
Is there any guide or tutorial that properly explains the different training regimes for `Person` and `Style`? So far most contradict each other 😔
I know that models like Holygenex and other models from u/ShatalinArt were praised for being different. I don't know exactly what the difference was, but it was something about being retrained.
You can’t retrain from scratch, or the model would have no link with SD at all (no possible merges, etc.), and it would take billions of pictures and millions of GPU hours.
Fine-tuning means you are usually using many pictures (thousands) with good captions of various subjects that share something (aesthetic quality, photographic style, good-looking people, etc.).
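To make that concrete, here is a rough sketch of what a single fine-tuning step looks like with the `diffusers` library. The model ID, learning rate, resolution, and caption handling are illustrative assumptions, not the actual recipe behind any of these checkpoints:

```python
# Rough sketch of one text-to-image fine-tuning step for SD 1.5 with diffusers.
# Model ID, learning rate, and resolution are illustrative assumptions only.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, UNet2DConditionModel, DDPMScheduler
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)  # usually only the UNet is trained

def train_step(pixel_values, captions):
    """pixel_values: (B, 3, 512, 512) tensor in [-1, 1]; captions: list of strings."""
    with torch.no_grad():
        # Encode images to latents and captions to text embeddings.
        latents = vae.encode(pixel_values).latent_dist.sample() * vae.config.scaling_factor
        ids = tokenizer(captions, padding="max_length",
                        max_length=tokenizer.model_max_length,
                        truncation=True, return_tensors="pt").input_ids
        text_emb = text_encoder(ids)[0]

    # Add noise at a random timestep; the UNet learns to predict that noise
    # conditioned on the caption. That is the whole training signal.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (latents.shape[0],))
    noisy_latents = scheduler.add_noise(latents, noise, timesteps)
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_emb).sample

    loss = F.mse_loss(noise_pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```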
Dreambooth is just a particular way to fine-tune using fewer pictures of a single concept, initially (as in the paper) using regularisation pictures generated by the base model to prevent overlearning and class bleeding.
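A minimal sketch of the regularisation part is below: generate a set of generic "class" images with the untouched base model and mix them into training. The class prompt and image count here are assumptions picked only to illustrate the prior-preservation idea:

```python
# Sketch of generating regularisation ("class") images with the untouched base
# model, as in the Dreambooth prior-preservation idea. The class prompt and the
# image count are assumptions for illustration.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

class_prompt = "a photo of a person"   # the generic class the new subject belongs to
num_reg_images = 200                   # people commonly use ~100 to a few hundred

os.makedirs("reg_images", exist_ok=True)
for i in range(num_reg_images):
    image = pipe(class_prompt, num_inference_steps=30).images[0]
    image.save(f"reg_images/person_{i:04d}.png")

# During training, batches mix your instance images ("a photo of sks person")
# with these class images, so the model keeps its prior idea of "person"
# while learning the new identity.
```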
LoRAs are trained a bit like Dreambooth (fewer pictures), but since they can be toggled on and off, people are somewhat moving away from regularisation pictures, because overlearning is either not a problem (thanks to the toggling) or can be limited by using a small-rank model.
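If it helps, here is a toy sketch of the LoRA mechanism itself (not a training script): a low-rank update sits on top of the frozen base weight and can be scaled down or switched off entirely, which is why overlearning is less of a worry. Shapes, rank, and alpha are illustrative assumptions:

```python
# Toy sketch of the LoRA mechanism: a low-rank update B @ A is added on top of
# a frozen weight W and can be scaled or toggled off at inference time.
# Shapes, rank, and alpha are illustrative assumptions.
import torch

d_out, d_in, rank = 320, 320, 8        # a small rank caps how much the LoRA can learn
alpha = 8.0

W = torch.randn(d_out, d_in)           # frozen base weight, never modified
A = torch.randn(rank, d_in) * 0.01     # trainable low-rank factor
B = torch.zeros(d_out, rank)           # zero-init, so the LoRA starts as a no-op

def forward(x, lora_scale=1.0):
    # lora_scale=0.0 toggles the LoRA off; values in between blend it in.
    delta = (alpha / rank) * (B @ A)
    return x @ (W + lora_scale * delta).T

x = torch.randn(1, d_in)
y_base = forward(x, lora_scale=0.0)    # exactly the frozen base model's output
y_lora = forward(x, lora_scale=1.0)    # base output plus the (trained) low-rank update
```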
People disagree with each other on parameters, but the truth is most of them somewhat work, so just follow any tutorial and get some experience.