Highlights for 2024-10-29
- Support for all SD3.x variants: SD3.0-Medium, SD3.5-Medium, SD3.5-Large, SD3.5-Large-Turbo
- Allow quantization using bitsandbytes on-the-fly during model load: load any variant of SD3.x or FLUX.1 and apply quantization during load, without the need for pre-quantized models (see the sketch below)
- Allow a custom model URL in the standard model selector: can be used to specify any model from HuggingFace or CivitAI
- Full support for torch==2.5.1
- New wiki articles: Gated Access, Quantization, Offloading
Plus tons of smaller improvements and cumulative fixes reported since the last release.
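
For context, the snippet below is a minimal sketch of what on-the-fly quantization during load can look like when using the diffusers and bitsandbytes APIs directly; it is not this project's internal implementation, and the model id, NF4 settings, and prompt are purely illustrative.

```python
# Sketch: quantize the SD3.5 transformer to 4-bit NF4 while loading,
# without needing a pre-quantized checkpoint (model id is illustrative).
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, StableDiffusion3Pipeline

model_id = "stabilityai/stable-diffusion-3.5-large"  # any SD3.x variant

# Quantization is applied on the fly as the weights are loaded
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = SD3Transformer2DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)

# Build the full pipeline around the quantized transformer
pipe = StableDiffusion3Pipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe("a photo of a red fox in the snow", num_inference_steps=28).images[0]
image.save("fox.png")
```

The same pattern applies to FLUX.1 by swapping in the corresponding transformer and pipeline classes; the point is that quantization happens at load time, so no separately prepared quantized model files are required.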