Nothing special here, just hyped from the release. I noodled the Flux.Tools examples together into one workflow (all but Redux by now) for img2vid generation on architectural imagery; might be of use to others here too. The workflow generates images from depth conditioning (switch between the LoRA and the full model, or customize it, e.g. to Canny); outputs can then be in-/outpainted before being sent to CogVideoX at the end of the process.
_______________________________
Resources:
MODELS used:
checkpoints
flux1-dev.safetensors
flux1-depth-dev.safetensors
flux1-fill-dev.safetensors
lora
flux1-depth-dev-lora.safetensors
CogVideoX
CogVideoX-5b-1.5-I2V
clip
t5xxl_fp16.safetensors
clip_l.safetensors
vae
ae.safetensors
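For reference, this is roughly where the files above would live in a default ComfyUI install. Treat it as a sketch: folder names are the ComfyUI defaults, and the CogVideo folder in particular is an assumption based on how Kijai's wrapper usually auto-downloads models, so your setup may differ.

```
ComfyUI/
└─ models/
   ├─ checkpoints/
   │  ├─ flux1-dev.safetensors
   │  ├─ flux1-depth-dev.safetensors
   │  └─ flux1-fill-dev.safetensors
   ├─ loras/
   │  └─ flux1-depth-dev-lora.safetensors
   ├─ CogVideo/   (CogVideoX-5b-1.5-I2V, typically fetched by the wrapper)
   ├─ clip/
   │  ├─ t5xxl_fp16.safetensors
   │  └─ clip_l.safetensors
   └─ vae/
      └─ ae.safetensors
```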
CUSTOM NODES used:
GitHub - ltdrdata/ComfyUI-Manager
GitHub - rgthree/rgthree-comfy
GitHub - chrisgoringe/cg-image-picker
GitHub - kijai/ComfyUI-KJNodes
GitHub - kijai/ComfyUI-CogVideoXWrapper
GitHub - yolain/ComfyUI-Easy-Use
_______________________________
Download FLUX etc. from Black Forest Labs.
Kijai's CogVideoXWrapper covers the video-related stuff, and his KJNodes are used as well.
rgthree-comfy adds some comfort,
chrisgoringe's cg-image-picker and
yolain's Easy-Use nodes are also used.
A complete generation of a 1920x1440 px image and a 49-frame, 1360x768 px video without extra in- & outpainting takes about 530 s on a 4090; VRAM peaks at ~22 GB, so yes, unfortunately this is for high-resource setups only atm. If you are interested in updates, head over to my Instagram.