# Model Convert

## Usage

```
usage: convert.exe [MODEL_PATH] --type [OUT_TYPE] [arguments]

Models supported for conversion: .safetensors models or .ckpt checkpoints

arguments:
  -h, --help                         show this help message and exit
  -o, --out [FILENAME]               path or name of the converted model
  --vocab [FILENAME]                 path to a custom vocab.json (usually unnecessary)
  -v, --verbose                      print processing/debug info
  -l, --lora                         force reading the model as a LoRA
  --vae [FILENAME]                   merge a custom VAE
  -t, --type [OUT_TYPE]              output format (f32, f16, q4_0, q4_1, q5_0, q5_1, q8_0)
```
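
### Example

A hypothetical invocation is sketched below (the model and VAE filenames are placeholders, not files shipped with the repository): it converts a `.safetensors` checkpoint to 8-bit quantized GGUF while merging a custom VAE. On Windows the binary is `convert.exe`; on Linux/macOS it is typically `./convert`.

```shell
# Convert a .safetensors checkpoint to q8_0 GGUF, merging a custom VAE.
# Filenames below are placeholders; substitute your own model paths.
./convert v1-5-pruned-emaonly.safetensors --type q8_0 \
    --vae custom_vae.safetensors \
    -o v1-5-q8_0.gguf
```

If `-o` is omitted, the converter chooses an output name derived from the input model.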