* feat: add U-Net specials of SDXS
* docs: update distilled_sd.md for SDXS-512
* feat: for SDXS use AutoencoderTiny as the primary VAE
* docs: update distilled_sd.md for SDXS-512
* fix: SDXS code cleaning after review by stduhpf
* format code
* fix sdxs with --taesd-preview-only
---------
Co-authored-by: leejet <leejet714@gmail.com>
The file `segmind_tiny-sd.ckpt` will be generated and is then ready for use with sd.cpp. You can follow a similar process for the other models mentioned above.
@@ -97,3 +97,31 @@ for key, value in ckpt['state_dict'].items():
ckpt['state_dict'][key] = value.contiguous()
torch.save(ckpt, "tinySDdistilled_fixed.ckpt")
```
### SDXS-512
Another very tiny and **incredibly fast** model is SDXS by IDKiro et al., which the authors describe as *"Real-Time One-Step Latent Diffusion Models with Image Conditions"*. For details, read the paper: https://arxiv.org/pdf/2403.16627 . Once again the authors removed further blocks from the U-Net, and unlike other SD1 models, SDXS uses an adjusted _AutoencoderTiny_ instead of the default _AutoencoderKL_ for the VAE.
##### 1. Download the diffusers model from Hugging Face using Python:
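A minimal sketch of this step, assuming the Hugging Face repo id is `IDKiro/sdxs-512-0.9` (verify on the SDXS model card; a `-dreamshaper` variant also exists). `huggingface_hub.snapshot_download` fetches the whole diffusers folder layout (unet, vae, text_encoder, ...):

```python
# Sketch: download the diffusers-format SDXS-512 checkpoint.
# REPO_ID is an assumption -- check the SDXS model card for the exact name.
from huggingface_hub import snapshot_download

REPO_ID = "IDKiro/sdxs-512-0.9"

def download_sdxs(local_dir: str = "sdxs-512") -> str:
    # Downloads every file of the repo into local_dir and returns the path.
    return snapshot_download(repo_id=REPO_ID, local_dir=local_dir)

if __name__ == "__main__":
    print(download_sdxs())
```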
```diff
-    if (version == VERSION_SD1 || version == VERSION_SD1_INPAINT || version == VERSION_SD1_PIX2PIX || version == VERSION_SD1_TINY_UNET) {
+    if (version == VERSION_SD1 || version == VERSION_SD1_INPAINT || version == VERSION_SD1_PIX2PIX || version == VERSION_SD1_TINY_UNET || version == VERSION_SDXS) {
```