docs: add Vulkan build command

parent e71ddcedad
commit 4f87b232c2

README.md
@@ -21,7 +21,7 @@ Inference of Stable Diffusion and Flux in pure C/C++
 - Accelerated memory-efficient CPU inference
     - Only requires ~2.3GB when using txt2img with fp16 precision to generate a 512x512 image, enabling Flash Attention just requires ~1.8GB.
 - AVX, AVX2 and AVX512 support for x86 architectures
-- Full CUDA, Metal and SYCL backend for GPU acceleration.
+- Full CUDA, Metal, Vulkan and SYCL backend for GPU acceleration.
 - Can load ckpt, safetensors and diffusers models/checkpoints. Standalone VAEs models
     - No need to convert to `.ggml` or `.gguf` anymore!
 - Flash Attention for memory usage optimization (only cpu for now)
@@ -142,6 +142,15 @@ cmake .. -DSD_METAL=ON
 cmake --build . --config Release
 ```
 
+##### Using Vulkan
+
+Install Vulkan SDK from https://www.lunarg.com/vulkan-sdk/.
+
+```
+cmake .. -DSD_VULKAN=ON
+cmake --build . --config Release
+```
+
 ##### Using SYCL
 
 Using SYCL makes the computation run on the Intel GPU. Please make sure you have installed the related driver and [Intel® oneAPI Base toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html) before start. More details and steps can refer to [llama.cpp SYCL backend](https://github.com/ggerganov/llama.cpp/blob/master/docs/backend/SYCL.md#linux).
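
For readers applying this change, the full out-of-source build implied by the added hunk would look like the sketch below. The repository URL, the clone step, and the `--recursive` flag are illustrative assumptions (the project's submodule layout may differ); only the two `cmake` commands come from the diff itself.

```shell
# Illustrative build sequence for the Vulkan backend documented above.
# Assumes the Vulkan SDK from https://www.lunarg.com/vulkan-sdk/ is installed.
git clone --recursive https://github.com/leejet/stable-diffusion.cpp  # URL assumed
cd stable-diffusion.cpp
mkdir build && cd build
cmake .. -DSD_VULKAN=ON           # enable the Vulkan backend (flag from this commit's docs)
cmake --build . --config Release  # compile in Release mode
```

`-DSD_VULKAN=ON` sets a CMake cache variable at configure time, so switching backends only requires re-running the configure step in a fresh build directory.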