Training your own custom Stable Diffusion models with Dreambooth on AMD GPUs is awesome! Add pictures of people you know and train the AI to put them into images. Bring patterns and textures into a model. Train SD to draw a character more reliably by using real photos and telling SD who that character is. Endless possibilities!
Install Dreambooth extension in Automatic1111
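If you prefer the command line to the Extensions tab, the extension can also be cloned straight into the extensions folder. This sketch assumes the commonly used d8ahazard repository and that you start from the folder containing stable-diffusion-webui:
cd stable-diffusion-webui/extensions
git clone https://github.com/d8ahazard/sd_dreambooth_extension.git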
Change diffusers to 0.23.1
go to stable-diffusion-webui/extensions/sd_dreambooth_extension
edit the file requirements.txt
find the line with diffusers and change it to diffusers==0.23.1
save file
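For example, after the edit the diffusers line in requirements.txt should read as below (the version it was pinned to before may differ on your install):
diffusers==0.23.1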
In the command line, from the stable-diffusion-webui directory
source venv/bin/activate
pip install -r extensions/sd_dreambooth_extension/requirements.txt
After that you should be able to start SD like normal
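On Linux that normally just means relaunching the webui from the stable-diffusion-webui directory, for example:
./webui.sh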
Dreambooth tab
create a model
once created, ensure that model is selected
go to concepts
put in the directory of your training images
put in the instance prompt you want to be associated with those images
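As a rough sketch, the concept settings could look like this (the folder path and the token ohwx are only placeholders; a rare token that the model does not already associate with anything tends to work best):
Dataset Directory: /home/you/training/my_character
Instance Prompt: photo of ohwx person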
sample images tab
put in a prompt to generate images after training is done -- include the instance prompt you just made
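Continuing the placeholder example above, a sample prompt could be:
photo of ohwx person standing on a beach, highly detailed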
parameters tab
8bit Adam uses bitsandbytes, so it is only viable if you have bitsandbytes compiled correctly against ROCm (a quick check is shown below, after Intervals) or are using an Nvidia card
on AMD, use Torch AdamW instead
uncheck Use EMA
uncheck Cache Latents
set Mixed Precision to bf16
check Full Mixed Precision
Intervals
Save Model Frequency: bump it up so the model is only saved a few times during training
Save Previews Frequency: drop to 0
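If you do want to try 8bit Adam on AMD anyway, a quick way to see whether your bitsandbytes build actually works against ROCm is to run its built-in diagnostic from inside the venv (newer bitsandbytes releases ship this entry point; if it errors out, stay with Torch AdamW):
source venv/bin/activate
python -m bitsandbytes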
Image Generation Tab
Native Diffusers
EulerAncestralDiscrete has worked for me
Image Processing - if you have VRAM issues, drop the scale down to 256
Save config before training!
Hit Train and see how it goes. You can always train a model more afterwards, but if you overtrain a model you will need to start over.