Commit Graph

59 Commits

Author SHA1 Message Date
leejet
dcf91f9e0f chore: change SD_CUBLAS/SD_USE_CUBLAS to SD_CUDA/SD_USE_CUDA 2024-12-28 13:27:51 +08:00
piallai
b5cc1422da
fix: fix typo for skip layers parameters (#492) 2024-12-28 13:12:08 +08:00
R0CKSTAR
5cc74d1f09
feat: support Moore Threads GPU (#529)
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
2024-12-28 13:08:36 +08:00
Erik Scholz
1c168d98a5
fix: repair flash attention support (#386)
* repair flash attention in _ext
this does not fix the currently broken flash attention behind the define, which is only used by the VAE

Co-authored-by: FSSRepo <FSSRepo@users.noreply.github.com>

* make flash attention in the diffusion model a runtime flag
no support for sd3 or video

* remove old flash attention option and switch vae over to attn_ext

* update docs

* format code

---------

Co-authored-by: FSSRepo <FSSRepo@users.noreply.github.com>
Co-authored-by: leejet <leejet714@gmail.com>
2024-11-23 12:39:08 +08:00
William Murray
ea9b647080
docs: update readme, add python bindings (#423) 2024-11-23 11:52:33 +08:00
Flavio Bizzarri
b99cbfe4dc
docs: update README.md (#452) 2024-11-23 11:46:50 +08:00
fszontagh
07585448ad
docs: update readme (#462) 2024-11-23 11:42:12 +08:00
leejet
ac54e00760
feat: add sd3.5 support (#445) 2024-10-24 21:58:03 +08:00
zhentaoyu
e410aeb534
sync: update ggml to fix large image generation with SYCL backend (#380)
* turn off fast-math on host in SYCL backend

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

* update ggml for sync some sycl ops

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

* update sycl readme and ggml

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

---------

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>
2024-09-02 22:29:35 +08:00
leejet
58d54738e2 docs: add star history 2024-08-28 00:27:54 +08:00
leejet
4f87b232c2 docs: add Vulkan build command 2024-08-28 00:25:31 +08:00
stduhpf
f4c937cb94
fix: add some missing cli args to usage (#363) 2024-08-28 00:17:46 +08:00
Daniele
dc0882cdc9
feat: add exponential scheduler (#346)
* feat: added exponential scheduler

* updated README

* improved exponential formatting

---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-08-28 00:13:35 +08:00
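For reference, a minimal sketch of an exponential sigma schedule in the k-diffusion convention: log(sigma) is spaced evenly between log(sigma_max) and log(sigma_min), with a trailing 0 appended. This is an illustration under that assumption, not the repository's exact implementation.

```cpp
// Illustrative sketch of an exponential sigma schedule (k-diffusion style).
// Not the repository's exact code.
#include <cmath>
#include <vector>

std::vector<float> sigmas_exponential(int n, float sigma_min, float sigma_max) {
    std::vector<float> sigmas;
    sigmas.reserve(n + 1);
    for (int i = 0; i < n; ++i) {
        float t = (n == 1) ? 0.0f : (float)i / (float)(n - 1);
        // interpolate log(sigma) linearly from log(sigma_max) down to log(sigma_min)
        sigmas.push_back(std::exp(std::log(sigma_max) + t * (std::log(sigma_min) - std::log(sigma_max))));
    }
    sigmas.push_back(0.0f); // denoise fully on the last step
    return sigmas;
}
```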
Daniele
d00c94844d
feat: add ipndm and ipndm_v samplers (#344) 2024-08-28 00:03:41 +08:00
Daniele
2d4a2f7982
feat: add GITS scheduler (#343) 2024-08-28 00:02:17 +08:00
leejet
d08d7fa632 docs: update README.md 2024-08-24 14:38:44 +08:00
leejet
64d231f384
feat: add flux support (#356)
* add flux support

* avoid build failures in non-CUDA environments

* fix schnell support

* add k quants support

* add support for applying lora to quantized tensors

* add inplace conversion support for f8_e4m3 (#359)

in the same way it is done for bf16: just as bf16 converts losslessly
to fp32, f8_e4m3 converts losslessly to fp16 (see the sketch after this entry)

* add xlabs flux comfy converted lora support

* update docs

---------

Co-authored-by: Erik Scholz <Green-Sky@users.noreply.github.com>
2024-08-24 14:29:52 +08:00
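The f8_e4m3 note above relies on every finite E4M3 value (1 sign, 4 exponent, 3 mantissa bits, exponent bias 7) being exactly representable in fp16, just as every bf16 value is exactly representable in fp32. A minimal sketch of such a widening conversion, assuming the OCP fp8 E4M3 layout; this is an illustration, not the repository's actual converter:

```cpp
// Illustrative sketch: expand an OCP fp8 E4M3 value (1-4-3, bias 7) to
// IEEE half precision (1-5-10, bias 15). Every finite E4M3 value fits
// exactly in fp16, so the conversion is lossless.
#include <stdint.h>

static uint16_t f8_e4m3_to_f16_bits(uint8_t x) {
    uint16_t sign = (uint16_t)(x >> 7) << 15;
    uint8_t  exp  = (x >> 3) & 0x0F;
    uint8_t  mant = x & 0x07;

    if (exp == 0x0F && mant == 0x07) {  // E4M3 NaN (the format has no infinities)
        return sign | 0x7E00;
    }
    if (exp == 0) {
        if (mant == 0) {
            return sign;                // signed zero
        }
        // subnormal: value = mant * 2^-9, which is a normal fp16 number
        int k = (mant >= 4) ? 2 : (mant >= 2) ? 1 : 0;          // position of the leading 1
        uint16_t f16_exp  = (uint16_t)(k + 6);                  // (k - 9) + 15
        uint16_t f16_mant = (uint16_t)(((uint16_t)mant << (10 - k)) & 0x03FF);
        return sign | (uint16_t)(f16_exp << 10) | f16_mant;
    }
    // normal: rebias the exponent (7 -> 15) and widen the mantissa (3 -> 10 bits)
    return sign | (uint16_t)((uint16_t)(exp + 8) << 10) | (uint16_t)((uint16_t)mant << 7);
}
```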
zhentaoyu
697d000f49
feat: add SYCL Backend Support for Intel GPUs (#330)
* update ggml and add SYCL CMake option

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

* hacky CMakeLists.txt for updating ggml in cpu backend

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

* rebase and clean code

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

* add sycl in README

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

* rebase ggml commit

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

* refine README

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

* update ggml for supporting sycl tsembd op

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>

---------

Signed-off-by: zhentaoyu <zhentao.yu@intel.com>
2024-08-10 13:42:50 +08:00
leejet
5b8d16aa68 docs: reorganize README.md 2024-08-03 12:06:34 +08:00
leejet
73c2176648
feat: add sd3 support (#298) 2024-07-28 15:44:08 +08:00
Grauho
ce1bcc74a6
feat: add AYS(Align Your Steps) scheduler (#241)
Added NVIDIA's new "Align Your Steps" style scheduler in accordance with their
quick start guide. Currently has handling for SD1.5, SDXL, and SVD, using the
noise levels from their paper to generate the sigma values. Can be selected
using the --schedule ays command line switch. Updates the main.cpp help
message and README to reflect this option; both now also inform the user
of the --color switch. (A sketch of the sigma interpolation follows this entry.)

---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-04-29 23:21:32 +08:00
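NVIDIA's quick-start guide publishes a short table of reference sigmas per model family and suggests log-linear interpolation to adapt it to other step counts. A hedged sketch of that interpolation, with illustrative names only, not the repository's exact code:

```cpp
// Illustrative sketch: expand a reference AYS sigma table to an arbitrary
// step count by log-linear interpolation (assumes ref.size() >= 2).
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> ays_interp_sigmas(const std::vector<float>& ref, int n_steps) {
    std::vector<float> out(n_steps);
    for (int i = 0; i < n_steps; ++i) {
        // position of this step inside the reference table
        float t  = (n_steps == 1) ? 0.0f
                                  : (float)i / (float)(n_steps - 1) * (float)(ref.size() - 1);
        int   lo = (int)t;
        int   hi = std::min(lo + 1, (int)ref.size() - 1);
        float w  = t - (float)lo;
        // interpolate in log space so the roughly geometric spacing is preserved
        out[i] = std::exp((1.0f - w) * std::log(ref[lo]) + w * std::log(ref[hi]));
    }
    return out;
}
```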
Phu Tran
607e39489f
docs: add Jellybox as UI using sd.cpp (#214) 2024-04-02 12:31:54 +08:00
bssrdf
a469688e30
feat: add TencentARC PhotoMaker support (#179)
* first efforts at implementing photomaker; lots more to do

* added PhotoMakerIDEncoder model in SD

* fixed some bugs; now photomaker model weights can be loaded into their tensor buffers

* added input id image loading

* added preprocessing of input id images

* finished get_num_tensors

* fixed a bug in remove_duplicates

* add a get_learned_condition_with_trigger function to do photomaker stuff

* add a convert_token_to_id function for photomaker to extract trigger word's token id

* making progress; need to implement tokenizer decoder

* making more progress; finishing vision model forward

* debugging vision_model outputs

* corrected clip vision model output

* continue making progress in id fusion process

* finished stacked id embedding; to be tested

* remove garbage file

* debugging graph compute

* more progress; now alloc buffer failed

* fixed wtype issue; only 1 input image is supported because of an issue with the transformer when batch size > 1 (to be investigated)

* added delayed subject conditioning; now photomaker runs and generates images

* fixed stat_merge_step

* added photomaker lora model (to be tested)

* reworked pmid lora

* finished applying pmid lora; to be tested

* finalized pmid lora

* add a few print tensor; tweak in sample again

* small tweak; still not getting ID faces

* fixed a bug in FuseBlock forward; also removed the diag_mask op for the vision transformer; getting better results

* disable pmid lora apply for now; 1 input image seems working; > 1 not working

* turn pmid lora apply back on

* fixed a decode bug

* fixed a bug in ggml's conv_2d; now > 1 input images work

* add style_ratio as a cli param; reworked encode with trigger for attention weights

* merge commit fixing lora free param buffer error

* change default style ratio to 10%

* added an option to offload vae decoder to CPU for mem-limited gpus

* removing the image normalization step seems to make ID fidelity much higher

* revert default style ratio back to 20%

* added an option for normalizing input ID images; cleaned up debugging code

* more clean up

* fixed bugs; now fails with a cuda error, likely out-of-memory on the GPU

* free pmid model params when required

* photomaker working properly now after merging and adapting to GGMLBlock API

* remove tensor renaming; fix names in the photomaker model file

* updated README.md to include instructions and notes for running PhotoMaker

* a bit clean up

* remove -DGGML_CUDA_FORCE_MMQ; more clean up and README update

* add input image requirement in README

* bring back freeing pmid lora params buffer; simplify pooled output of CLIPVision

* remove MultiheadAttention2; customized MultiheadAttention

* added a WIN32 get_files_from_dir; turn off PhotoMaker if no input images are received

* update docs

* fix ci error

* make stable-diffusion.h a pure c header file

This reverts commit 27887b630db6a92f269f0aef8de9bc9832ab50a9.

* fix ci error

* format code

* reuse get_learned_condition

* reuse pad_tokens

* reuse CLIPVisionModel

* reuse LoraModel

* add --clip-on-cpu

* fix lora name conversion for SDXL

---------

Co-authored-by: bssrdf <bssrdf@gmail.com>
Co-authored-by: leejet <leejet714@gmail.com>
2024-03-12 23:15:17 +08:00
Cyberhan123
583cc5bba2
docs: add binding (#189) 2024-03-03 13:27:07 +08:00
Sean Bailey
193fb620b1
feat: add capability to repeatedly run the upscaler in a row (#174)
* Add in upscale repeater logic

---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-02-24 21:31:01 +08:00
leejet
b6368868d9
feat: introduce GGMLBlock and implement SVD(Broken) (#159)
* introduce GGMLBlock and implement SVD(Broken)

* add sdxl vae warning
2024-02-24 20:06:39 +08:00
Steward Garcia
36ec16ac99
feat: Control Net support + Textual Inversion (embeddings) (#131)
* add controlnet to pipeline

* add cli params

* control strength cli param

* cli param to keep controlnet on cpu

* add Textual Inversion

* add canny preprocessor

* refactor: change ggml_type_sizef to ggml_row_size

* process hint only once

* ignore the embedding name case

---------

Co-authored-by: leejet <leejet714@gmail.com>
2024-01-29 22:38:51 +08:00
旺旺碎冰冰
c6071fa82f
feat: add hipBlas support (#94) 2024-01-14 11:53:42 +08:00
leejet
5c614e4bc2
feat: add convert api (#142) 2024-01-14 11:43:24 +08:00
leejet
b139434b57 docs: update README.md 2023-12-31 11:48:41 +08:00
leejet
78ad76f3f4
feat: add SDXL support (#117)
* add SDXL support

* fix the issue with generating large images
2023-12-29 00:16:10 +08:00
Steward Garcia
004dfbef27
feat: implement ESRGAN upscaler + Metal Backend (#104)
* add esrgan upscaler

* add sd_tiling

* support metal backend

* add clip_skip

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-12-28 23:46:48 +08:00
旺旺碎冰冰
0e64238e4c
feat: implement the complete bpe function (#119)
* implement the complete bpe function

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-12-23 12:11:07 +08:00
leejet
ac8f5a044c feat: add SD-Turbo support 2023-12-10 13:15:09 +08:00
leejet
968226abb2 docs: update v2-1_768-nonema-pruned.safetensors url 2023-12-05 22:52:19 +08:00
Steward Garcia
134883aec4
feat: add TAESD implementation - faster autoencoder (#88)
* add taesd implementation

* taesd gpu offloading

* show seed when generating image with -s -1

* less restrictive with larger images

* cuda: im2col speedup x2

* cuda: group norm speedup x90

* quantized models now work in cuda :)

* fix cal mem size

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-12-05 22:40:03 +08:00
leejet
d7af2c2ba9
feat: load weights from safetensors and ckpt (#101) 2023-12-03 15:47:20 +08:00
Steward Garcia
8124588cf1
feat: ggml-alloc integration and gpu acceleration (#75)
* set ggml url to FSSRepo/ggml

* ggml-alloc integration

* offload all functions to gpu

* gguf format + native converter

* merge custom vae to a model

* full offload to gpu

* improve pretty progress

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-11-26 19:02:36 +08:00
leejet
64f6002457 docs: add contributors info to README.md 2023-11-19 18:35:19 +08:00
leejet
9a9f3daf8e feat: add LoRA support 2023-11-19 17:43:49 +08:00
leejet
536f3af672 feat: add lcm sampler support
This is based on an issue discussion of the stable-diffusion-webui at
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13952, so it
may not be perfect. (A sketch of the sampling step follows this entry.)
2023-11-17 22:53:46 +08:00
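For context, a minimal sketch of an LCM-style sampling loop in the k-diffusion convention, as discussed in the linked thread: each step takes the model's denoised prediction directly and, if another step follows, re-noises it to the next sigma with fresh Gaussian noise. Illustrative only; `denoise` is a placeholder for the diffusion model call, not a function in this repository.

```cpp
// Illustrative LCM-style sampling loop (k-diffusion convention).
#include <functional>
#include <random>
#include <vector>

using Latent = std::vector<float>;

void sample_lcm(Latent& x,
                const std::vector<float>& sigmas,  // sigmas[0] > ... > sigmas.back() == 0
                const std::function<Latent(const Latent&, float)>& denoise,
                std::mt19937& rng) {
    std::normal_distribution<float> gauss(0.0f, 1.0f);
    for (size_t i = 0; i + 1 < sigmas.size(); ++i) {
        Latent denoised = denoise(x, sigmas[i]);        // model's clean prediction at sigma_i
        for (size_t j = 0; j < x.size(); ++j) {
            x[j] = denoised[j];                         // LCM: jump straight to the prediction
            if (sigmas[i + 1] > 0.0f) {
                x[j] += sigmas[i + 1] * gauss(rng);     // re-noise to the next sigma
            }
        }
    }
}
```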
Robert Bledsaw
29a56f2e98
docs: update README.md (#71)
fixed typo in curl command.
2023-10-22 13:03:32 +08:00
Urs Ganse
afec5051cf
feat: write generation parameter exif data into output png (#57)
* Write generation parameter exif data into output pngs.

This adds the prompt, negative prompt (if non-empty) and other generation
parameters to the output file as a PNG tEXt chunk, in the same format as
the AUTOMATIC1111 webui uses. (A sketch of the chunk layout follows this entry.)

In order to keep everything free of external library dependencies, I
have somewhat dirtily hacked this into the stb_image_write
implementation.

* Mention png text data in README.md, include "karras" in sampler text

* add Steps/Model/RNG to parameter string

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-09-18 21:09:15 +08:00
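A PNG tEXt chunk is simply: a 4-byte big-endian data length, the type "tEXt", the keyword, a NUL separator, the text, and a CRC32 over type plus data. A self-contained sketch of appending such a chunk follows; this illustrates the chunk layout only and is not the actual stb_image_write patch, though the keyword "parameters" matches what the AUTOMATIC1111 webui uses.

```cpp
// Illustrative sketch: append a "parameters" tEXt chunk to an open PNG stream.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Bitwise CRC-32 (reflected, polynomial 0xEDB88320), as required by PNG chunks.
static uint32_t png_crc32(const uint8_t* p, size_t n) {
    uint32_t c = 0xFFFFFFFFu;
    for (size_t i = 0; i < n; ++i) {
        c ^= p[i];
        for (int b = 0; b < 8; ++b) {
            c = (c >> 1) ^ (0xEDB88320u & (uint32_t)(-(int32_t)(c & 1)));
        }
    }
    return c ^ 0xFFFFFFFFu;
}

static void write_png_text_chunk(FILE* f, const char* keyword, const char* text) {
    size_t klen = strlen(keyword), tlen = strlen(text);
    size_t dlen = klen + 1 + tlen;                    // keyword + NUL + text
    uint8_t* buf = (uint8_t*)malloc(4 + dlen);        // "tEXt" + data, for the CRC
    if (!buf) return;
    memcpy(buf, "tEXt", 4);
    memcpy(buf + 4, keyword, klen);
    buf[4 + klen] = 0;
    memcpy(buf + 4 + klen + 1, text, tlen);

    uint8_t len_be[4] = { (uint8_t)(dlen >> 24), (uint8_t)(dlen >> 16),
                          (uint8_t)(dlen >> 8),  (uint8_t)dlen };
    fwrite(len_be, 1, 4, f);                          // data length (big-endian)
    fwrite(buf, 1, 4 + dlen, f);                      // chunk type + data
    uint32_t crc = png_crc32(buf, 4 + dlen);          // CRC covers type + data
    uint8_t crc_be[4] = { (uint8_t)(crc >> 24), (uint8_t)(crc >> 16),
                          (uint8_t)(crc >> 8),  (uint8_t)crc };
    fwrite(crc_be, 1, 4, f);
    free(buf);
}
```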
Urs Ganse
3a25179d52
feat: add DPM2 and DPM++(2s) a samplers (#56)
* Add DPM2 sampler.

* Add DPM++ (2s) a sampler.

* Update README.md with added samplers

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-09-12 23:02:09 +08:00
Urs Ganse
b6899e8fc2
feat: add Euler, Heun and DPM++ (2M) samplers (#50)
* Add Euler sampler

* Add Heun sampler

* Add DPM++ (2M) sampler

* Add modified DPM++ (2M) "v2" sampler.

This was proposed in an issue discussion of the stable-diffusion-webui,
at https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8457
and apparently works around overstepping of the DPM++ (2M) method with
small step counts.

The parameter is called dpmpp2mv2 here.

* match code style

---------

Co-authored-by: Urs Ganse <urs@nerd2nerd.org>
Co-authored-by: leejet <leejet714@gmail.com>
2023-09-08 23:47:28 +08:00
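For reference, a minimal sketch of the k-diffusion style Euler step that this family of samplers builds on; illustrative only, with `denoise` standing in for the diffusion model call and `sigmas` ending in 0.

```cpp
// Illustrative Euler sampling loop (k-diffusion convention).
#include <functional>
#include <vector>

using Latent = std::vector<float>;

void sample_euler(Latent& x,
                  const std::vector<float>& sigmas,  // sigmas[0] > ... > sigmas.back() == 0
                  const std::function<Latent(const Latent&, float)>& denoise) {
    for (size_t i = 0; i + 1 < sigmas.size(); ++i) {
        Latent denoised = denoise(x, sigmas[i]);          // model's clean prediction at sigma_i
        float dt = sigmas[i + 1] - sigmas[i];
        for (size_t j = 0; j < x.size(); ++j) {
            float d = (x[j] - denoised[j]) / sigmas[i];   // dx/dsigma
            x[j] += d * dt;                               // step to the next noise level
        }
    }
}
```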
leejet
e5a7aec252 feat: add CUDA RNG 2023-09-03 19:24:07 +08:00
leejet
31e77e1573
feat: add SD2.x support (#40) 2023-09-03 16:00:33 +08:00
leejet
008d80a0b1 docs: update README.md 2023-08-25 20:59:18 +08:00
leejet
721cb324af chore: add sd Dockerfile 2023-08-22 22:14:20 +08:00
leejet
a393bebec8 docs: update README.md 2023-08-22 20:45:23 +08:00