Commit Graph

89 Commits

Author SHA1 Message Date
leejet
2eac844bbd fix: generate image correctly in img2img mode 2023-12-09 14:39:43 +08:00
Steward Garcia
134883aec4
feat: add TAESD implementation - faster autoencoder (#88)
* add taesd implementation

* taesd gpu offloading

* show seed when generating image with -s -1

* less restrictive with larger images

* cuda: im2col speedup x2

* cuda: group norm speedup x90

* quantized models now work in CUDA :)

* fix cal mem size

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-12-05 22:40:03 +08:00
leejet
8a87b273ad fix: allow model and vae to use different formats 2023-12-03 17:12:04 +08:00
leejet
d7af2c2ba9
feat: load weights from safetensors and ckpt (#101) 2023-12-03 15:47:20 +08:00
Steward Garcia
8124588cf1
feat: ggml-alloc integration and gpu acceleration (#75)
* set ggml url to FSSRepo/ggml

* ggml-alloc integration

* offload all functions to gpu

* gguf format + native converter

* merge custom vae to a model

* full offload to gpu

* improve pretty progress

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-11-26 19:02:36 +08:00
Urs Ganse
ae1d5dcebb
feat: allow LoRAs with negative multiplier (#83)
* Allow Loras with negative weight, too.

There are a couple of LoRAs which serve to adjust certain concepts in
both positive and negative directions (exposure, detail level, etc.).

The current code rejects them if loaded with a negative weight, but I
suggest that this check can simply be dropped.

* ignore lora in the case of multiplier == 0.f

---------

Co-authored-by: Urs Ganse <urs@nerd2nerd.org>
Co-authored-by: leejet <leejet714@gmail.com>
2023-11-20 22:23:52 +08:00
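The change above concerns how a LoRA delta is merged into a base weight tensor. A minimal sketch of the idea, assuming the standard low-rank merge; all names here are illustrative, not the project's actual API:

```python
import numpy as np

def apply_lora(weight, lora_down, lora_up, multiplier, alpha, rank):
    """Merge a LoRA delta into a base weight matrix.

    A positive multiplier pushes the concept one way, a negative
    multiplier the other; per the follow-up commit, a multiplier
    of exactly zero is skipped entirely.
    """
    if multiplier == 0.0:
        return weight  # ignore LoRA when multiplier == 0
    scale = alpha / rank
    # up (out, r) @ down (r, in) reconstructs the full-rank delta
    return weight + multiplier * scale * (lora_up @ lora_down)
```

With this formulation there is no reason to reject negative multipliers: the merge is linear in `multiplier`, so negation simply subtracts the learned concept.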
leejet
51b53d4cb1 chore: typo remote => remove 2023-11-19 23:21:49 +08:00
leejet
0d9b801aaa fix: fix multi loras prompt parse 2023-11-19 23:19:37 +08:00
leejet
176a00b606 chore: add .clang-format 2023-11-19 19:35:33 +08:00
leejet
9a9f3daf8e feat: add LoRA support 2023-11-19 17:43:49 +08:00
leejet
536f3af672 feat: add lcm sampler support
This references an issue discussion of stable-diffusion-webui at
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13952, an
approach which may not be perfect.
2023-11-17 22:53:46 +08:00
leejet
3bf1665885 chore: clear the msvc compilation warning 2023-10-28 20:55:24 +08:00
leejet
3001c23f7d perf: change ggml graph eval order to RIGHT_TO_LEFT to optimize memory usage 2023-10-28 20:19:15 +08:00
leejet
ed374983f3 fix: set eps of ggml_norm(LayerNorm) to 1e-5 2023-10-27 00:50:23 +08:00
leejet
fbd18e1059 fix: avoid stack overflow on MSVC 2023-10-23 21:10:46 +08:00
leejet
69e54ace14 sync: update ggml 2023-10-22 14:11:06 +08:00
Urs Ganse
3a25179d52
feat: add DPM2 and DPM++(2s) a samplers (#56)
* Add DPM2 sampler.

* Add DPM++ (2s) a sampler.

* Update README.md with added samplers

---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-09-12 23:02:09 +08:00
Urs Ganse
968fbf02aa
feat: add option to switch the sigma schedule (#51)
Concretely, this allows switching to the "Karras" schedule from the
Karras et al 2022 paper, equivalent to the samplers marked as "Karras"
in the AUTOMATIC1111 WebUI. This choice is in principle orthogonal to
the sampler choice and can be given independently.
2023-09-09 00:02:07 +08:00
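The Karras schedule mentioned above can be sketched in a few lines; this follows eq. (5) of the Karras et al. 2022 paper, with illustrative names and default sigma bounds chosen only for the example:

```python
def karras_sigmas(n, sigma_min=0.1, sigma_max=10.0, rho=7.0):
    """Noise levels sigma_i interpolated in sigma^(1/rho) space,
    which concentrates steps near sigma_min (Karras et al. 2022)."""
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    ramp = [i / (n - 1) for i in range(n)]
    return [(max_inv_rho + t * (min_inv_rho - max_inv_rho)) ** rho for t in ramp]
```

Because the schedule only decides *which* noise levels the sampler visits, it composes freely with any of the samplers listed in this log, which is why the commit calls the choice orthogonal.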
Urs Ganse
b6899e8fc2
feat: add Euler, Heun and DPM++ (2M) samplers (#50)
* Add Euler sampler

* Add Heun sampler

* Add DPM++ (2M) sampler

* Add modified DPM++ (2M) "v2" sampler.

This was proposed in an issue discussion of the stable diffusion webui,
at https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/8457
and apparently works around overstepping of the DPM++ (2M) method with
small step counts.

The parameter is called dpmpp2mv2 here.

* match code style

---------

Co-authored-by: Urs Ganse <urs@nerd2nerd.org>
Co-authored-by: leejet <leejet714@gmail.com>
2023-09-08 23:47:28 +08:00
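The baseline DPM++ (2M) update that the "v2" variant modifies can be sketched as follows, following the common k-diffusion formulation; `model(x, sigma)` returning the denoised prediction is an assumption of this sketch, not the project's actual interface:

```python
import math

def dpmpp_2m(model, x, sigmas):
    """DPM++ (2M) multistep sampling: each step extrapolates from the
    current and previous denoised predictions in log-sigma time."""
    t = lambda sigma: -math.log(sigma)
    old_denoised = None
    for i in range(len(sigmas) - 1):
        sigma, sigma_next = sigmas[i], sigmas[i + 1]
        denoised = model(x, sigma)
        if sigma_next == 0:
            x = denoised  # final step: return the clean prediction
        else:
            h = t(sigma_next) - t(sigma)
            if old_denoised is None:
                d = denoised  # first step: no history, plain DPM++ step
            else:
                h_last = t(sigma) - t(sigmas[i - 1])
                r = h_last / h
                # second-order correction from the previous prediction
                d = (1 + 1 / (2 * r)) * denoised - 1 / (2 * r) * old_denoised
            x = (sigma_next / sigma) * x - math.expm1(-h) * d
        old_denoised = denoised
    return x
```

The overstepping the v2 variant works around comes from this extrapolation term growing large at small step counts; the exact v2 adjustment is in the linked discussion.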
leejet
34a118d407 fix: avoid coredump when steps == 1 2023-09-04 21:44:38 +08:00
leejet
f6ff06fcb7 fix: avoid coredump when generating large image 2023-09-04 21:37:46 +08:00
leejet
b247581782 fix: insufficient memory error on macOS 2023-09-04 03:50:42 +08:00
leejet
bb3f19cb40 fix: increase ctx_size 2023-09-04 03:45:43 +08:00
leejet
7620b920c8 use new graph api to avoid stack overflow on msvc 2023-09-03 22:56:33 +08:00
leejet
3ffffa6929 fix: do not check weights of open clip last layer 2023-09-03 21:10:08 +08:00
leejet
45842865ff fix: seed should be 64 bit 2023-09-03 20:08:22 +08:00
leejet
e5a7aec252 feat: add CUDA RNG 2023-09-03 19:24:07 +08:00
leejet
31e77e1573
feat: add SD2.x support (#40) 2023-09-03 16:00:33 +08:00
leejet
c542a77a3f fix: correct the handling of weight loading 2023-08-30 21:44:06 +08:00
Derek Anderson
1b5a868296
fix: flushes after printf (#38) 2023-08-30 20:47:25 +08:00
leejet
c8f85a4e30 sync: update ggml 2023-08-27 14:35:26 +08:00
leejet
d765b95ed1 perf: make ggml_conv_2d faster 2023-08-26 17:08:59 +08:00
Tim Miller
a22722631a
fix: fix regex for macOS/Clang support (#16)
---------

Co-authored-by: leejet <leejet714@gmail.com>
2023-08-21 21:55:00 +08:00
leejet
17095dddea
feat: add token weighting support (#13) 2023-08-20 20:28:36 +08:00
leejet
8f34dd7cc7 perf: free unused params immediately to reduce memory usage 2023-08-17 00:55:36 +08:00
leejet
7aeb2fab63 perf: sync ggml 2023-08-16 22:20:00 +08:00
leejet
58735a2813
feat: add img2img mode (#5) 2023-08-16 01:48:07 +08:00
leejet
228e94b924 fix: include extra header files to prevent compile errors on some platforms 2023-08-13 21:44:40 +08:00
leejet
3aca342e60 Initial commit 2023-08-13 16:00:22 +08:00