docs: update README.md
parent 14da17a923
commit b139434b57
@@ -11,6 +11,8 @@ Inference of [Stable Diffusion](https://github.com/CompVis/stable-diffusion) in
- Plain C/C++ implementation based on [ggml](https://github.com/ggerganov/ggml), working in the same way as [llama.cpp](https://github.com/ggerganov/llama.cpp)
- Super lightweight and without external dependencies
- SD1.x, SD2.x and SDXL support
    - Note: the VAE in SDXL encounters NaN issues under FP16, but `ggml_conv_2d` only operates in FP16. You therefore need to pass a VAE that has the FP16 NaN issue fixed via the `--vae` parameter (as shown in the SDXL command in the example below). You can find it here: [SDXL VAE FP16 Fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/blob/main/sdxl_vae.safetensors).
- [SD-Turbo](https://huggingface.co/stabilityai/sd-turbo) and [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo) support
- 16-bit, 32-bit float support
- 4-bit, 5-bit and 8-bit integer quantization support
@@ -186,6 +188,7 @@ You can specify the model weight type using the `--type` parameter. The weights
```sh
./bin/sd -m ../models/sd-v1-4.ckpt -p "a lovely cat"
# ./bin/sd -m ../models/v1-5-pruned-emaonly.safetensors -p "a lovely cat"
# ./bin/sd -m ../models/sd_xl_base_1.0.safetensors --vae ../models/sdxl_vae-fp16-fix.safetensors -H 1024 -W 1024 -p "a lovely cat" -v
```
Using formats of different precisions will yield results of varying quality.
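
For example, the `--type` parameter mentioned above selects the precision the weights are converted to when the model is loaded. A minimal sketch, assuming ggml-style type names such as `q8_0` and `q4_0` (the exact set accepted by your build may differ):

```sh
# Load the SD1.4 checkpoint with 8-bit quantized weights
# (smaller memory footprint, quality usually close to f16).
./bin/sd -m ../models/sd-v1-4.ckpt --type q8_0 -p "a lovely cat"

# Load with 4-bit quantized weights
# (smallest footprint, typically the largest quality loss).
./bin/sd -m ../models/sd-v1-4.ckpt --type q4_0 -p "a lovely cat"
```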