This commit is contained in:
justumen
2024-10-23 12:05:48 +02:00
parent 4d7e0ad85a
commit 5af17f5f16
17 changed files with 495 additions and 211 deletions

README.md

@@ -1,6 +1,6 @@
# 🔗 Comfyui : Bjornulf_custom_nodes v0.49 🔗
A list of 56 custom nodes for Comfyui : Display, manipulate, and edit text, images, videos, loras and more.
You can manage looping operations, generate randomized content, trigger logical conditions, pause and manually control your workflows and even work with external AI tools, like Ollama or Text To Speech.
# Coffee : ☕☕☕☕☕ 5/5
@@ -44,6 +44,7 @@ You can manage looping operations, generate randomized content, trigger logical
`42.` [♻ Loop (Model+Clip+Vae) - aka Checkpoint / Model](#42----loop-modelclipvae---aka-checkpoint--model)
`53.` [♻ Loop Load checkpoint (Model Selector)](#53----loop-load-checkpoint-model-selector)
`54.` [♻ Loop Lora Selector](#54----loop-lora-selector)
`56.` [♻📝 Loop Sequential (Integer)](#56----loop-sequential-integer)
## 🎲 Randomization 🎲
`3.` [✒🗔 Advanced Write Text (+ 🎲 random selection and 🅰️ variables)](#3----advanced-write-text---random-selection-and-🅰%EF%B8%8F-variables)
@@ -249,27 +250,26 @@ cd /where/you/installed/ComfyUI && python main.py
- **v0.46**: ❗ A lot of changes to Video nodes. Save to video now uses FLOAT for fps, not INT (a lot of other custom nodes do that as well...). Added a node to preview video, a node to convert a video path to a list of images, a node to convert a list of images to a temporary video + video_path, and a node to synchronize the duration of audio with video (useful for MuseTalk). Changed the TTS node with many new outputs ("audio_path", "full_path", "duration") to reuse with other nodes like MuseTalk; also renamed the TTS input to "connect_to_workflow", to avoid mistakes when sending text to it.
- **v0.47**: New node : Loop Load checkpoint (Model Selector).
- **v0.48**: Two new nodes for loras : Random Lora Selector and Loop Lora Selector.
- **v0.49**: New node : Loop Sequential (Integer) - loop through a range of integer values, but only once per workflow run. Audio sync is smarter and adapts the video duration to the audio duration. Added requirements.txt.
# 📝 Nodes descriptions
## 1 - 👁 Show (Text, Int, Float)
![Show Text](screenshots/show.png)
**Description:**
The Show node only displays text, or a list of several texts. (Read-only node.)
3 types are managed : green is for STRING type, orange is for FLOAT type and blue is for INT type. I put colors so I/you don't try to edit them. 🤣
## 2 - ✒ Write Text
![write Text](screenshots/write.png)
**Description:**
Simple node to write text.
## 3 - ✒🗔 Advanced Write Text (+ 🎲 random selection and 🅰️ variables)
![write Text Advanced](screenshots/write_advanced.png)
**Description:**
The Advanced Write Text node accepts a special syntax for random variants : `{hood|helmet}` will randomly choose between hood and helmet.
@@ -281,29 +281,36 @@ Raw text: photo of a {green|blue|red|orange|yellow} {cat|rat|house}
Picked text: photo of a green house
```
You can also create and reuse variables with this syntax : `<name>`.
Usage example :
![variables](screenshots/variables.png)
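As a rough illustration of this syntax (not the node's actual code; `resolve_prompt` and its variable handling are my assumptions), the random selection and `<name>` substitution could be sketched as:

```python
import random
import re

def resolve_prompt(text, variables=None, rng=random):
    """Substitute <name> variables, then pick one alternative per {a|b|c} group."""
    for name, value in (variables or {}).items():
        text = text.replace(f"<{name}>", value)
    # Each {...} group is replaced by one randomly chosen alternative
    return re.sub(r"\{([^{}]*)\}", lambda m: rng.choice(m.group(1).split("|")), text)

print(resolve_prompt("photo of a {green|blue|red} <subject>", {"subject": "house"}))
```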
## 4 - 🔗 Combine Texts
![Combine Texts](screenshots/combine_texts.png)
**Description:**
Combine multiple text inputs into a single output. (can have separation with : comma, space, new line or nothing.)
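The combining behavior boils down to a join with the chosen separator. A minimal sketch (the separator names are my assumptions, not necessarily the node's widget values):

```python
# Hypothetical separator options mirroring the node's choices
SEPARATORS = {"comma": ", ", "space": " ", "new line": "\n", "nothing": ""}

def combine_texts(texts, separation="comma"):
    """Join several text inputs with the chosen separator."""
    return SEPARATORS[separation].join(texts)

print(combine_texts(["a cat", "a hat"]))  # a cat, a hat
```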
## 5 - 🎲 Random (Texts)
![Random Text](screenshots/random_text.png)
**Description:**
Generate and display random text from a predefined list. Great for creating random prompts.
You also have `control_after_generate` to manage the randomness.
## 6 - ♻ Loop
![Loop](screenshots/loop.png)
**Description:**
General-purpose loop node, you can connect that in between anything.
It has an optional input; if no input is given, it will loop over the value of the STRING "if_no_input" (which you can edit).
❗ Careful : this node accepts everything as input and output, so you can use it with texts, integers, images, masks, segs, etc... but be consistent with your inputs/outputs.
Do not use this Loop if you can do otherwise.
@@ -312,21 +319,23 @@ This is an example together with my node 28, to force a different seed for each
![Loop](screenshots/loop4.png)
## 7 - ♻ Loop Texts
![Loop Texts](screenshots/loop_texts.png)
**Description:**
Cycle through a list of text inputs.
Here is an example of usage with combine texts and flux :
![Loop Texts example](screenshots/loop_text_example.png)
## 8 - ♻ Loop Integer
![Loop Integer](screenshots/loop_integer.png)
![Loop Int + Show Text](screenshots/loop_int+show_text.png)
**Description:**
Iterate through a range of integer values, good for `steps` in ksampler, etc...
❗ Don't forget that you can convert ksampler widgets to input by right-clicking the ksampler node :
![Widget to Input](screenshots/widget-to-input.png)
@@ -334,55 +343,60 @@ Here is an example of usage with ksampler (Notice that with "steps" this node is
![Widget to Input](screenshots/example_loop_integer.png)
## 9 - ♻ Loop Float
![Loop Float + Show Text](screenshots/loop_float+show_text.png)
![Loop Float](screenshots/loop_float.png)
**Description:**
Loop through a range of floating-point numbers, good for `cfg`, `denoise`, etc...
Here is an example with controlnet, trying to make a red cat based on a blue rabbit :
![Loop All Samplers](screenshots/loop_float_example.png)
## 10 - ♻ Loop All Samplers
![Loop All Samplers](screenshots/loop_all_samplers.png)
**Description:**
Iterate over all available samplers to apply them sequentially. Ideal for testing.
Here is an example of looping over all the samplers with the normal scheduler :
![Loop All Samplers](screenshots/example_loop_all_samplers.png)
## 11 - ♻ Loop All Schedulers
![Loop All Schedulers](screenshots/loop_all_schedulers.png)
**Description:**
Iterate over all available schedulers to apply them sequentially. Ideal for testing. (same idea as sampler above, but for schedulers)
## 12 - ♻ Loop Combos
![Loop Combos](screenshots/loop_combos.png)
**Description:**
Generate a loop from a list of my own custom combinations (scheduler+sampler), or select one combo manually.
Good for testing.
Example of usage to see the differences between different combinations :
![example combos](screenshots/example_combos.png)
## 13/14 - 📏 + 🖼 Resize and Save Exact name ⚠️💣
![Resize and Save Exact](screenshots/resize_save_exact.png)
**Description:**
Resize an image to exact dimensions. The other node will save the image to the exact path.
⚠️💣 Warning : The image will be overwritten if it already exists.
## 15 - 💾 Save Text
![Save Text](screenshots/save_text.png)
**Description:**
Save the given text input to a file. Useful for logging and storing text data.
## 16 - 💾🖼💬 Save image for Bjornulf LobeChat (❗For my custom [lobe-chat](https://github.com/justUmen/Bjornulf_lobe-chat)❗)
![Save Bjornulf Lobechat](screenshots/save_bjornulf_lobechat.png)
**Description:**
❓ I made that node for my custom lobe-chat to send+receive images from Comfyui API : [lobe-chat](https://github.com/justUmen/Bjornulf_lobe-chat)
@@ -391,24 +405,30 @@ The name will start at `api_00001.png`, then `api_00002.png`, etc...
It will also create a link to the last generated image at the location `output/BJORNULF_API_LAST_IMAGE.png`.
This link will be used by my custom lobe-chat to copy the image inside the lobe-chat project.
## 17 - 💾🖼 Save image as `tmp_api.png` Temporary API ⚠️💣
![Save Temporary API](screenshots/save_tmp_api.png)
**Description:**
Save image for short-term use : ./output/tmp_api.png ⚠️💣
## 18 - 💾🖼📁 Save image to a chosen folder name
![Save Temporary API](screenshots/save_image_to_folder.png)
**Description:**
Save image in a specific folder : `my_folder/00001.png`, `my_folder/00002.png`, etc...
Also allow multiple nested folders, like for example : `animal/dog/small`.
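The incrementing filenames described above can be sketched as follows (a minimal illustration, not the node's actual code; `next_image_path` is my name):

```python
import os

def next_image_path(folder):
    """Next zero-padded path (my_folder/00001.png, 00002.png, ...),
    creating nested folders like animal/dog/small as needed."""
    os.makedirs(folder, exist_ok=True)
    # Collect already-used numeric indices and continue after the highest one
    nums = [int(f[:-4]) for f in os.listdir(folder)
            if f.endswith(".png") and f[:-4].isdigit()]
    return os.path.join(folder, f"{max(nums, default=0) + 1:05d}.png")
```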
## 19 - 🦙 Ollama
![Ollama](screenshots/ollama_1.png)
**Description:**
Will generate detailed text based on what you give it.
I recommend using `mistral-nemo` if you can run it, but it's up to you. (Might have to tweak the system prompt a bit)
You also have `control_after_generate` to force the node to rerun for every workflow run. (Even if there is no modification of the node or its inputs.)
@@ -423,68 +443,78 @@ Each run will be significantly faster, but not free your VRAM for something else
⚠️ You can create a file called `ollama_ip.txt` in my comfyui custom node folder if you have a special IP for your ollama server, mine is : `http://192.168.1.37:11434`
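Reading that override file amounts to something like the sketch below (an illustration only; the function name and the fallback URL are my assumptions, not the node's actual code):

```python
import os

def ollama_url(node_dir, default="http://0.0.0.0:11434"):
    """Return the server URL from ollama_ip.txt in the node folder, if present."""
    path = os.path.join(node_dir, "ollama_ip.txt")
    if os.path.isfile(path):
        with open(path) as f:
            return f.read().strip()  # e.g. http://192.168.1.37:11434
    return default
```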
## 20 - 📹 Video Ping Pong
![Video Ping Pong](screenshots/video_pingpong.png)
**Description:**
Create a ping-pong effect from a list of images (from a video) by reversing the playback direction when reaching the last frame. Good for an "infinity loop" effect.
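The ping-pong idea is simply the frame list followed by its reverse, skipping the endpoints so no frame is shown twice in a row (a sketch, not the node's actual code):

```python
def ping_pong(frames):
    """Forward frames followed by the reversed middle, for a seamless loop."""
    if len(frames) < 3:
        return list(frames)
    # frames[-2:0:-1] is the reverse without the last and first frames
    return list(frames) + list(frames[-2:0:-1])

print(ping_pong([1, 2, 3, 4]))  # [1, 2, 3, 4, 3, 2]
```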
## 21 - 📹 Images to Video
![Images to Video](screenshots/imgs2video.png)
**Description:**
Combine a sequence of images into a video file.
❓ I made this node because it supports transparency with webm format. (Needed for rembg)
Temporary images are stored in the folder `ComfyUI/temp_images_imgs2video/` as well as the wav audio file.
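For reference, a VP9 webm encode keeps transparency when an alpha-capable pixel format is requested. This is only an illustration of that ffmpeg invocation (the function name and defaults are my assumptions; the node's actual command may differ):

```python
import subprocess

def build_webm_cmd(frame_pattern, out_path, fps=8.0):
    """ffmpeg arguments for encoding PNG frames into a VP9 webm with alpha."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", frame_pattern,        # e.g. "frames/%05d.png"
        "-c:v", "libvpx-vp9",
        "-pix_fmt", "yuva420p",     # the 'a' plane carries transparency
        out_path,
    ]

# subprocess.run(build_webm_cmd("frames/%05d.png", "out.webm"), check=True)
```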
## 22 - 🔲 Remove image Transparency (alpha)
![Remove Alpha](screenshots/remove_alpha.png)
**Description:**
Remove transparency from an image by filling the alpha channel with a solid color. (black, white or greenscreen)
Of course it takes in an image with transparency, like from rembg nodes.
Necessary for some nodes that don't support transparency.
## 23 - 🔲 Image to grayscale (black & white)
![Image to Grayscale](screenshots/grayscale.png)
**Description:**
Convert an image to grayscale (black & white)
Example : I sometimes use it with Ipadapter to disable color influence.
But you can sometimes also want a black and white image...
## 24 - 🖼+🖼 Stack two images (Background + Overlay)
![Superpose Images](screenshots/combine_background_overlay.png)
**Description:**
Stack two images into a single image : a background and one (or several) transparent overlays. (This also works for video : just send all the frames and recombine them after.)
Update 0.11 : Add option to move vertically and horizontally. (from -50% to 150%)
❗ Warning : For now, `background` is a static image. (I will allow video there later too.)
⚠️ Warning : If you want to directly load the image with transparency, use my node `🖼 Load Image with Transparency ▢` instead of the `Load Image` node.
## 25 - 🟩➜▢ Green Screen to Transparency
![Greenscreen to Transparency](screenshots/greeenscreen_to_transparency.png)
**Description:**
Transform greenscreen into transparency.
Needs a clean greenscreen, of course. (You can adjust the threshold, but it's a very basic node.)
## 26 - 🎲 Random line from input
![Random line from input](screenshots/random_line_from_input.png)
**Description:**
Take a random line from an input text. (Useful when using multiple "Write Text" nodes is annoying, for example : you can just copy/paste a list from outside.)
You can change fixed/randomize for `control_after_generate` to have a different text each time you run the workflow. (Or not.)
## 27 - ♻ Loop (All Lines from input)
![Loop input](screenshots/loop_all_lines.png)
**Description:**
Iterate over all lines from an input text. (Good for testing multiple lines of text.)
## 28 - 🔢 Text with random Seed
**Description:**
❗ This node is used to force the generation of a random seed, along with text.
But what does that mean ???
When you use a loop (♻), the loop will use the same seed for each iteration. (That is the point, it will keep the same seed to compare results.)
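A common ComfyUI pattern for forcing a fresh value on every run is an `IS_CHANGED` that never compares equal to its previous result. This sketch illustrates that pattern; it is not necessarily this node's exact code:

```python
import random

class TextWithRandomSeed:
    """Sketch of a node pairing its text with a fresh random seed each run."""

    @classmethod
    def IS_CHANGED(cls, *args, **kwargs):
        # NaN is never equal to itself, so ComfyUI re-executes the node every run
        return float("nan")

    def run(self, text):
        return (text, random.randint(0, 2**32 - 1))
```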
@@ -512,24 +542,28 @@ FLUX : Here is an example of 4 images without Random Seed node on the left, and
![Text with random Seed 5](screenshots/result_random_seed.png)
## 29 - 🖼 Load Image with Transparency ▢
![Load image Alpha](screenshots/load_image_alpha.png)
**Description:**
Load an image with transparency.
The default `Load Image` node will not load the transparency.
## 30 - 🖼✂ Cut image with a mask
![Cut image](screenshots/image_mask_cut.png)
**Description:**
Cut an image from a mask.
## 31 - 🔊 TTS - Text to Speech (100% local, any voice you want, any language)
![TTS](screenshots/tts.png)
**Description:**
Use my TTS server to generate high quality speech from text, with any voice you want, any language.
[Listen to the audio example](https://github.com/user-attachments/assets/5a4a67ff-cf70-4092-8f3b-1ccc8023d8c6)
❗ Node never tested on windows, only on linux for now. ❗
Use my TTS server to generate speech from text, based on XTTS v2.
@@ -567,91 +601,101 @@ If you can afford to run both at the same time, good for you, but Locally I can'
![TTS](screenshots/tts_preload_2.png)
### 32 - 🧑📝 Character Description Generator
![characters](screenshots/characters.png)
![characters](screenshots/characters2.png)
**Description:**
Generate a character description based on a json file in the folder `characters` : `ComfyUI/custom_nodes/Bjornulf_custom_nodes/characters`
Make your own json file with your own characters, and use this node to generate a description.
❗ For now it's a very basic node, a lot of things are going to be added and changed !!!
Some details are unusable for some checkpoints, this is very much a work in progress, and the json structure isn't set in stone either.
Some characters are included.
### 33 - ♻ Loop (All Lines from input 🔗 combine by lines)
![loop combined](screenshots/loop_combined.png)
**Description:**
Sometimes you want to loop over several inputs but you also want to separate different lines of your output.
So with this node, you can have the number of inputs and outputs you want. See example for usage.
### 34 - 🧹 Free VRAM hack
![free vram](screenshots/free_vram_hack1.png)
![free vram](screenshots/free_vram_hack2.png)
**Description:**
So this is my attempt at freeing up VRAM after usage, I will try to improve that.
For me, on launch ComfyUI is using 180MB of VRAM, after my clean up VRAM node it can go back down to 376MB.
I don't think there is a clean way to do that, so I'm using a hacky way.
So, not perfect but better than being stuck at 6GB of VRAM used if I know I won't be using it again...
Just connect this node with your workflow, it takes anything as input and return it as output.
You can therefore put it anywhere you want.
❗ Comfyui is using cache to run faster (like not reloading checkpoints), so only use this free VRAM node when you need it.
❗ For this node to work properly, you need to enable the dev/api mode in ComfyUI. (You can do that in the settings)
It is also running an "empty/dummy" workflow to free up the VRAM, so it might take a few seconds to take effect after the end of the workflow.
### 35 - ⏸️ Paused. Resume or Stop ?
![pause resume stop](screenshots/pause1.png)
![pause resume stop](screenshots/pause2.png)
![pause resume stop](screenshots/pause3.png)
**Description:**
Automatically pauses the workflow, and rings a bell when it does. (Plays the provided audio file `bell.m4a`.)
You can then manually resume or stop the workflow by clicking on the node's buttons.
I do that let's say for example if I have a very long upscaling process, I can check if the input is good before continuing. Sometimes I might stop the workflow and restart it with another seed.
You can connect any type of node to the pause node, above is an example with text, but you can send an IMAGE or whatever else, in the node `input = output`. (Of course you need to send the output to something that has the correct format...)
### 36 - ⏸️🔍 Paused. Select input, Pick one
![pick input](screenshots/pick.png)
**Description:**
Automatically pauses the workflow, and rings a bell when it does. (Plays the provided audio file `bell.m4a`.)
You can then manually select the input you want to use, and resume the workflow with it.
You can connect this node to anything you want, above is an example with IMAGE. But you can pick whatever you want, in the node `input = output`.
### 37 - 🎲🖼 Random Image
![random image](screenshots/random_image.png)
**Description:**
Just take a random image from a list of images.
### 38 - ♻🖼 Loop (Images)
![loop images](screenshots/loop_images.png)
**Description:**
Loop over a list of images.
Usage example : You have a list of images, and you want to apply the same process to all of them.
Above is an example of the loop images node sending them to an Ipadapter workflow. (Same seed of course.)
### 39 - ♻ Loop (✒🗔 Advanced Write Text)
![loop write text](screenshots/loop_write_text.png)
**Description:**
If you need a quick loop but you don't want something too complex with a loop node, you can use this combined write text + loop.
It will take the same special syntax as the Advanced write text node `{blue|red}`, but it will loop over ALL the possibilities instead of taking one at random.
0.40 : You can also use variables `<name>` in the loop.
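Expanding every `{a|b}` combination instead of sampling one can be sketched with a cartesian product (an illustration only; `expand_all` is my name, not the node's actual code):

```python
import itertools
import re

def expand_all(text):
    """Expand every {a|b} group into the full list of combinations."""
    # Split keeps the {...} groups thanks to the capturing parentheses
    parts = re.split(r"(\{[^{}]+\})", text)
    options = [p[1:-1].split("|") if p.startswith("{") else [p] for p in parts]
    return ["".join(combo) for combo in itertools.product(*options)]

print(expand_all("a {blue|red} {cat|dog}"))
```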
### 40 - 🎲 Random (Model+Clip+Vae) - aka Checkpoint / Model
![random checkpoint](screenshots/random_checkpoint.png)
**Description:**
Just simply take a trio at random from a load checkpoint node.
Notice that it is using the core Load checkpoint node. It means that all checkpoints will be preloaded in memory.
Details :
@@ -662,10 +706,11 @@ Check node number 41 before deciding which one to use.
### 41 - 🎲 Random Load checkpoint (Model Selector)
![pick input](screenshots/random_load_checkpoint.png)
**Description:**
This is another way to select a load checkpoint node randomly.
It will not preload all the checkpoints in memory, so it will be slower to switch between checkpoints.
But you can use more outputs to decide where to store your results. (`model_folder` is returning the last folder name of the checkpoint.)
I always store my checkpoints in a folder with the type of the model like `SD1.5`, `SDXL`, etc... So it's a good way for me to recover that information quickly.
@@ -685,10 +730,11 @@ Loop over all the trios from several checkpoint node.
### 43 - 📥🖼📂 Load Images from output folder
![pick input](screenshots/load_images_folder.png)
**Description:**
Quickly select all images from a folder inside the output folder. (Not recursively.)
So... As you can see from the screenshot the images are split based on their resolution.
It's also not possible to edit dynamically the number of outputs, so I just picked a number : 4.
The node will separate the images based on their resolution, so with this node you can have 4 different resolutions per folder. (If you have more than that, maybe you should have another folder...)
@@ -708,10 +754,11 @@ Here is another example of the same thing but excluding the save folder node :
### 44 - 🖼👈 Select an Image, Pick
![pick input](screenshots/select_image.png)
**Description:**
Select an image from a list of images.
Useful in combination with my Load images from folder and preview image nodes.
You can also of course make a group node, like this one, which is the same as the screenshot above :
@@ -719,10 +766,11 @@ You can also of course make a group node, like this one, which is the same as th
### 45 - 🔀 If-Else (input / compare_with)
![if else](screenshots/if_0.png)
**Description:**
Complex if/else logic node.
If the `input` given is equal to the `compare_with` value given in the widget, it will forward `send_if_true`, otherwise it will forward `send_if_false`. (If no `send_if_false` is connected, it will return `None`.)
You can forward anything; below is an example of forwarding a different latent space size depending on whether it's SDXL or not.
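The forwarding rule reduces to a ternary (a sketch, not the node's actual code; comparing via string conversion is my assumption):

```python
def if_else(input_value, compare_with, send_if_true, send_if_false=None):
    """Forward send_if_true on a match, otherwise send_if_false (or None)."""
    matches = str(input_value) == str(compare_with)  # string compare: assumed
    return send_if_true if matches else send_if_false

# e.g. pick a latent size depending on the model family
print(if_else("SDXL", "SDXL", (1024, 1024), (512, 512)))  # (1024, 1024)
```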
@@ -792,31 +840,32 @@ Here another simple example taking a few selected images from a folder and combi
### 48 - 🔀🎲 Text scrambler (🧑 Character)
![scrambler character](screenshots/scrambler_character.png)
**Description:**
Take text as input and scramble (randomize) the text by using the file `scrambler/character_scrambler.json` in the comfyui custom nodes folder.
### 49 - 📹👁 Video Preview
![video preview](screenshots/video_preview.png)
**Description:**
This node takes a video path as input and displays the video.
### 50 - 🖼➜📹 Images to Video path (tmp video)
![image to video path](screenshots/image_to_video_path.png)
**Description:**
This node will take a list of images and convert them to a temporary video file.
### 51 - 📹➜🖼 Video Path to Images
![video path to image](screenshots/video_path_to_image.png)
**Description:**
This node will take a video path as input and convert it to a list of images.
In the above example, I also take half of the frames by setting `frame_interval` to 2.
Note that I had 16 frames; in the top right preview you can see 8 images.
@@ -824,21 +873,25 @@ Note that i had 16 frames, on the top right preview you can see 8 images.
**Description:**
This node is an overengineered node that will try to synchronize the duration of an audio file with a video file.
❗ The video ideally needs to be a loop; check my video ping pong node if needed.
The main goal of this synchronization is to have a clean transition between the end and the beginning of the video. (Same frame.)
You can then chain up several videos and they will transition smoothly.
Some details, this node will :
- If the video is slightly too long : add silence to the audio file.
- If the video is way too long : slow down the video, up to 0.50x speed, + add silence to the audio.
- If the audio is slightly too long : speed up the video, up to 1.5x speed.
- If the audio is way too long : speed up the video up to 1.5x speed + add silence to the audio.
Here is an example without the `Audio Video Sync` node (the duration of the video is shorter than the audio, so after playing it will not go back to the last frame; ideally I want a loop where the first frame is the same as the last frame - see my video ping pong node if needed) :
![audio sync video](screenshots/audio_sync_video_without.png)
It is good for example with MuseTalk <https://github.com/chaojie/ComfyUI-MuseTalk>
Here is an example of the `Audio Video Sync` node; notice that it is also convenient to recover the frames per second of the video and send that to other nodes. (Spaghettis..., deal with it. 😎 If you don't understand it, you can test it.) :
![audio sync video](screenshots/audio_sync_video.png)
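One way such a duration planner could work is to clamp a video speed factor and pad the remainder with silence. This is only a rough sketch of the idea under those assumptions, not the node's actual code:

```python
def plan_sync(video_duration, audio_duration, min_speed=0.5, max_speed=1.5):
    """Pick a clamped video speed factor so the video duration approaches the
    audio duration, then pad the audio with silence for any remainder.
    Returns (speed_factor, seconds_of_silence_to_append)."""
    ideal_speed = video_duration / audio_duration  # >1 speeds up, <1 slows down
    speed = min(max_speed, max(min_speed, ideal_speed))
    new_video_duration = video_duration / speed
    silence = max(0.0, new_video_duration - audio_duration)
    return speed, silence
```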
### 53 - ♻ Loop Load checkpoint (Model Selector)
![loop model selector](screenshots/loop_model_selector.png)
**Description:**
This is the loop version of node 41. (check there for similar details)
It will loop over all the selected checkpoints.
@@ -846,12 +899,15 @@ It will loop over all the selected checkpoints.
❗ The big difference with 41 is that checkpoints are preloaded in memory, so you can run them all at once, faster.
It is a good way to test multiple checkpoints quickly.
### 54 - ♻ Loop Lora Selector
![loop lora selector](screenshots/loop_lora_selector.png)
**Description:**
Loop over all the selected Loras.
Above is an example with Pony and several styles of Lora.
Below is another example, here with flux, to test if your Lora training was undertrained, overtrained or just right :
@@ -860,7 +916,20 @@ Below is another example, here with flux, to test if your Lora training was unde
### 55 - 🎲 Random Lora Selector
![random lora selector](screenshots/random_lora_selector.png)
**Description:**
Just take a single Lora at random from a list of Loras.
### 56 - ♻📝 Loop Sequential (Integer)
**Description:**
This loop works like a normal loop, BUT it is sequential : it only advances once per workflow run !!!
The first run will output the first integer, the second run the second integer, etc...
When the last value is reached, the node will STOP the workflow, preventing anything after it from running.
Under the hood it is using the file `counter_integer.txt` in the `ComfyUI/Bjornulf` folder.
![loop sequential integer](screenshots/loop_sequential_integer_1.png)
![loop sequential integer](screenshots/loop_sequential_integer_2.png)
![loop sequential integer](screenshots/loop_sequential_integer_3.png)
![loop sequential integer](screenshots/loop_sequential_integer_4.png)
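The file-backed counter described above can be sketched like this (a minimal illustration under stated assumptions; the function name and the reset behavior are mine, not the node's actual code):

```python
import os

def next_sequential_int(first, last, counter_file="counter_integer.txt"):
    """Return the next integer in [first, last], persisting progress to a file.
    Returns None when the range is exhausted (the node would stop the workflow)."""
    count = 0
    if os.path.exists(counter_file):
        with open(counter_file) as f:
            count = int(f.read().strip() or 0)
    value = first + count
    if value > last:
        return None  # range exhausted: stop here
    with open(counter_file, "w") as f:
        f.write(str(count + 1))
    return value
```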


@@ -58,9 +58,11 @@ from .video_preview import VideoPreview
from .loop_model_selector import LoopModelSelector
from .random_lora_selector import RandomLoraSelector
from .loop_lora_selector import LoopLoraSelector
from .loop_sequential_integer import LoopIntegerSequential
NODE_CLASS_MAPPINGS = {
"Bjornulf_ollamaLoader": ollamaLoader,
"Bjornulf_LoopIntegerSequential": LoopIntegerSequential,
"Bjornulf_LoopLoraSelector": LoopLoraSelector,
"Bjornulf_RandomLoraSelector": RandomLoraSelector,
"Bjornulf_LoopModelSelector": LoopModelSelector,
@@ -120,6 +122,7 @@ NODE_CLASS_MAPPINGS = {
NODE_DISPLAY_NAME_MAPPINGS = {
"Bjornulf_WriteText": "✒ Write Text",
"Bjornulf_LoopIntegerSequential": "♻📝 Loop Sequential (Integer)",
"Bjornulf_LoopLoraSelector": "♻ Loop Lora Selector",
"Bjornulf_RandomLoraSelector": "🎲 Random Lora Selector",
"Bjornulf_LoopModelSelector": "♻ Loop Load checkpoint (Model Selector)",


@@ -10,24 +10,60 @@ class AudioVideoSync:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "audio": ("AUDIO",),
                "video_path": ("STRING", {"default": ""}),
                "audio_duration": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 3600.0, "step": 0.001}),
            },
        }

    RETURN_TYPES = ("AUDIO", "STRING", "STRING", "FLOAT", "FLOAT", "INT", "FLOAT", "FLOAT")
    RETURN_NAMES = ("sync_audio", "sync_audio_path", "sync_video_path", "video_fps", "video_duration", "sync_video_frame_count", "sync_audio_duration", "sync_video_duration")
    FUNCTION = "sync_audio_video"
    CATEGORY = "Bjornulf"
    # def get_video_duration(self, video_path):
    #     cmd = ['ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', video_path]
    #     result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    #     duration = float(result.stdout)
    #     return math.ceil(duration * 10) / 10
    def sync_audio_video(self, audio, video_path, audio_duration):
        if not isinstance(audio, dict) or 'waveform' not in audio or 'sample_rate' not in audio:
            raise ValueError("Expected audio input to be a dictionary with 'waveform' and 'sample_rate' keys")
        audio_data = audio['waveform']
        sample_rate = audio['sample_rate']

        # Get original video properties
        original_duration = self.get_video_duration(video_path)
        video_fps = self.get_video_fps(video_path)
        original_frame_count = self.get_frame_count(video_path)
        print(f"Original video duration: {original_duration}")
        print(f"Target audio duration: {audio_duration}")
        print(f"Video FPS: {video_fps}")
        print(f"Original frame count: {original_frame_count}")

        # Create synchronized versions of video and audio
        sync_video_path = self.create_sync_video(video_path, original_duration, audio_duration)
        sync_audio_path = self.save_audio(audio_data, sample_rate, audio_duration, original_duration)

        # Get properties of synchronized files
        sync_video_duration = self.get_video_duration(sync_video_path)
        sync_frame_count = self.get_frame_count(sync_video_path)
        sync_audio_duration = torchaudio.info(sync_audio_path).num_frames / sample_rate
        print(f"Sync video duration: {sync_video_duration}")
        print(f"Sync video frame count: {sync_frame_count}")
        print(f"Sync audio duration: {sync_audio_duration}")

        return (
            audio,  # Return original audio dictionary
            sync_audio_path,
            sync_video_path,
            video_fps,
            original_duration,
            sync_frame_count,
            sync_audio_duration,
            sync_video_duration,
        )
    def get_video_duration(self, video_path):
        cmd = ['ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', video_path]
        result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
        duration = float(result.stdout)
        return math.ceil(duration * 10) / 10
@@ -43,114 +79,136 @@ class AudioVideoSync:
            return num / den
        return float(fps)
    def get_frame_count(self, video_path):
        cmd = ['ffprobe', '-v', 'error', '-count_packets', '-select_streams', 'v:0', '-show_entries', 'stream=nb_read_packets', '-of', 'csv=p=0', video_path]
        result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
        return int(result.stdout.strip())

    def sync_audio_video(self, audio, video_path):
        if not isinstance(audio, dict) or 'waveform' not in audio or 'sample_rate' not in audio:
            raise ValueError("Expected audio input to be a dictionary with 'waveform' and 'sample_rate' keys")
        audio_data = audio['waveform']
        sample_rate = audio['sample_rate']
print(f"Audio data shape: {audio_data.shape}")
print(f"Sample rate: {sample_rate}")
# Calculate video duration
video_duration = self.get_video_duration(video_path)
# Calculate audio duration
audio_duration = audio_data.shape[-1] / sample_rate
print(f"Video duration: {video_duration}")
print(f"Audio duration: {audio_duration}")
# Calculate the desired audio duration and number of video repetitions
if audio_duration <= video_duration:
target_duration = video_duration
repetitions = 1
else:
repetitions = math.ceil(audio_duration / video_duration)
target_duration = video_duration * repetitions
# Calculate the number of samples to add
current_samples = audio_data.shape[-1]
target_samples = int(target_duration * sample_rate)
samples_to_add = target_samples - current_samples
print(f"Current samples: {current_samples}, Target samples: {target_samples}, Samples to add: {samples_to_add}")
if samples_to_add > 0:
# Create silence
if audio_data.dim() == 3:
silence_shape = (audio_data.shape[0], audio_data.shape[1], samples_to_add)
else: # audio_data.dim() == 2
silence_shape = (audio_data.shape[0], samples_to_add)
silence = torch.zeros(silence_shape, dtype=audio_data.dtype, device=audio_data.device)
# Append silence to the audio
synced_audio = torch.cat((audio_data, silence), dim=-1)
else:
synced_audio = audio_data
print(f"Synced audio shape: {synced_audio.shape}")
# Save the synced audio file and get the file path
audio_path = self.save_audio(synced_audio, sample_rate)
# Create and save the synced video
synced_video_path = self.create_synced_video(video_path, repetitions)
video_fps = self.get_video_fps(video_path)
# Return the synced audio data, audio file path, and synced video path
return ({"waveform": synced_audio, "sample_rate": sample_rate}, audio_path, synced_video_path, video_fps)
def save_audio(self, audio_tensor, sample_rate):
# Create the sync_audio folder if it doesn't exist
os.makedirs("Bjornulf/sync_audio", exist_ok=True)
# Generate a unique filename using the current timestamp
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"Bjornulf/sync_audio/synced_audio_{timestamp}.wav"
# Ensure audio_tensor is 2D
if audio_tensor.dim() == 3:
audio_tensor = audio_tensor.squeeze(0) # Remove batch dimension
elif audio_tensor.dim() == 1:
audio_tensor = audio_tensor.unsqueeze(0) # Add channel dimension
# Save the audio file
torchaudio.save(filename, audio_tensor, sample_rate)
print(f"Synced audio saved to: {filename}")
# Return the full path to the saved audio file
return os.path.abspath(filename)
def create_synced_video(self, video_path, repetitions):
# Create the sync_video folder if it doesn't exist
def create_sync_video(self, video_path, original_duration, target_duration):
os.makedirs("Bjornulf/sync_video", exist_ok=True)
# Generate a unique filename using the current timestamp
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
output_path = f"Bjornulf/sync_video/synced_video_{timestamp}.mp4"
final_output_path = f"Bjornulf/sync_video/sync_video_{timestamp}.mp4"
# Create a temporary file with the list of input video files
with open("Bjornulf/temp_video_list.txt", "w") as f:
for _ in range(repetitions):
f.write(f"file '{video_path}'\n")
# Calculate the relative difference between durations
duration_difference = abs(target_duration - original_duration) / original_duration
# Use ffmpeg to concatenate the video multiple times
cmd = [
'ffmpeg',
'-f', 'concat',
'-safe', '0',
'-i', 'Bjornulf/temp_video_list.txt',
'-c', 'copy',
output_path
]
subprocess.run(cmd, check=True)
# If target duration is longer but within 50% difference, use speed adjustment instead of repeating
if target_duration > original_duration and duration_difference <= 0.5:
# Calculate slowdown ratio
speed_ratio = original_duration / target_duration
pts_speed = 1/speed_ratio
# Remove the temporary file
os.remove("Bjornulf/temp_video_list.txt")
speed_adjust_cmd = [
'ffmpeg',
'-i', video_path,
'-filter:v', f'setpts={pts_speed}*PTS',
'-an',
'-c:v', 'libx264',
'-preset', 'medium',
'-crf', '23',
final_output_path
]
subprocess.run(speed_adjust_cmd, check=True)
print(f"Speed-adjusted video (slowdown ratio: {speed_ratio}) saved to: {final_output_path}")
print(f"Synced video saved to: {output_path}")
return os.path.abspath(output_path)
elif target_duration > original_duration:
# Use the original repeating logic for larger differences
repeat_count = math.ceil(target_duration / original_duration)
concat_file = f"Bjornulf/sync_video/concat_{timestamp}.txt"
with open(concat_file, 'w') as f:
for _ in range(repeat_count):
f.write(f"file '{os.path.abspath(video_path)}'\n")
concat_cmd = [
'ffmpeg',
'-f', 'concat',
'-safe', '0',
'-i', concat_file,
'-c', 'copy',
final_output_path
]
subprocess.run(concat_cmd, check=True)
os.remove(concat_file)
print(f"Duplicated video {repeat_count} times, saved to: {final_output_path}")
else:
# Original speed-up logic remains the same
speed_ratio = original_duration / target_duration
if abs(speed_ratio - 1.0) <= 0.1: # If the difference is within 10%
copy_cmd = [
'ffmpeg', '-i', video_path, '-c', 'copy', final_output_path
]
subprocess.run(copy_cmd, check=True)
print(f"Video copied without speed adjustment to: {final_output_path}")
else:
speed = min(speed_ratio, 1.5)
pts_speed = 1/speed
speed_adjust_cmd = [
'ffmpeg',
'-i', video_path,
'-filter:v', f'setpts={pts_speed}*PTS',
'-an',
'-c:v', 'libx264',
'-preset', 'medium',
'-crf', '23',
final_output_path
]
subprocess.run(speed_adjust_cmd, check=True)
print(f"Speed-adjusted video (ratio: {speed}) saved to: {final_output_path}")
return os.path.abspath(final_output_path)
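The branching above boils down to a single decision: slow the video down when the audio is at most 50% longer, loop the clip for larger gaps, copy it untouched when the durations are within 10%, and otherwise speed it up capped at 1.5x. The sketch below is a hypothetical pure helper (not part of the node) that mirrors those thresholds without touching ffmpeg:

```python
import math

def choose_video_sync_strategy(original_duration, target_duration):
    # Decide how create_sync_video would stretch the clip to match the audio.
    if original_duration <= 0 or target_duration <= 0:
        raise ValueError("durations must be positive")
    difference = abs(target_duration - original_duration) / original_duration
    if target_duration > original_duration and difference <= 0.5:
        # Audio is slightly longer: slow the video down instead of looping it.
        return ("slowdown", original_duration / target_duration)
    if target_duration > original_duration:
        # Audio is much longer: repeat the clip enough times to cover it.
        return ("repeat", math.ceil(target_duration / original_duration))
    speed_ratio = original_duration / target_duration
    if abs(speed_ratio - 1.0) <= 0.1:
        # Within 10%: keep the video as-is.
        return ("copy", 1.0)
    # Video is longer than audio: speed it up, capped at 1.5x.
    return ("speedup", min(speed_ratio, 1.5))
```

The cap at 1.5x keeps speed-ups from becoming visibly jerky, at the cost of the video running slightly past the audio when the gap is large.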
def save_audio(self, audio_tensor, sample_rate, target_duration, original_video_duration):
os.makedirs("Bjornulf/sync_audio", exist_ok=True)
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
filename = f"Bjornulf/sync_audio/sync_audio_{timestamp}.wav"
if audio_tensor.dim() == 3:
audio_tensor = audio_tensor.squeeze(0)
elif audio_tensor.dim() == 1:
audio_tensor = audio_tensor.unsqueeze(0)
current_duration = audio_tensor.shape[1] / sample_rate
# Calculate the relative difference between durations
duration_difference = abs(target_duration - original_video_duration) / original_video_duration
# Calculate the final duration based on the same logic as create_sync_video
if target_duration > original_video_duration:
if duration_difference <= 0.5:
# For small differences, we'll keep the original audio duration
sync_video_duration = target_duration
else:
# For larger differences, we'll repeat the video
sync_video_duration = math.ceil(target_duration / original_video_duration) * original_video_duration
else:
# Handle speed-up cases
speed_ratio = original_video_duration / target_duration
if abs(speed_ratio - 1.0) <= 0.1:
sync_video_duration = original_video_duration
else:
speed = min(speed_ratio, 1.5)
sync_video_duration = original_video_duration / speed
# Adjust audio to match sync video duration
if current_duration < sync_video_duration:
# Pad with silence
silence_samples = int((sync_video_duration - current_duration) * sample_rate)
silence = torch.zeros(audio_tensor.shape[0], silence_samples)
padded_audio = torch.cat([audio_tensor, silence], dim=1)
else:
# Trim audio to match sync video duration
required_samples = int(sync_video_duration * sample_rate)
padded_audio = audio_tensor[:, :required_samples]
torchaudio.save(filename, padded_audio, sample_rate)
print(f"target_duration: {target_duration}")
print(f"original_video_duration: {original_video_duration}")
print(f"sync_video_duration: {sync_video_duration}")
print(f"current_audio_duration: {current_duration}")
print(f"final_audio_duration: {padded_audio.shape[1] / sample_rate}")
print(f"sync audio saved to: {filename}")
return os.path.abspath(filename)
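The final pad-or-trim step in `save_audio` reduces to a small rule: compute the required sample count from the sync video duration, then either append silence or cut the excess. A plain-list sketch (hypothetical helper; the node itself operates on torch tensors):

```python
def fit_audio_to_duration(samples, sample_rate, target_duration):
    # Pad a mono sample list with silence, or trim it, so its length
    # matches target_duration seconds at the given sample rate.
    required = int(target_duration * sample_rate)
    if len(samples) < required:
        return samples + [0.0] * (required - len(samples))  # pad with silence
    return samples[:required]  # trim the excess
```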


@@ -2,17 +2,20 @@ import torch
import gc
import requests
import json
class Everything(str):
def __ne__(self, __value: object) -> bool:
return False
class FreeVRAM:
@classmethod
def INPUT_TYPES(s):
return {"required": {"image": ("IMAGE",)}}
return {"required": {"anything": (Everything("*"),)}}
RETURN_TYPES = ("IMAGE",)
RETURN_TYPES = (Everything("*"),)
RETURN_NAME = ("anything",)
FUNCTION = "free_vram"
CATEGORY = "Bjornulf"
def free_vram(self, image):
def free_vram(self, anything):
print("Attempting to free VRAM...")
# Clear CUDA cache
@@ -28,7 +31,7 @@ class FreeVRAM:
self.trigger_http_request()
# Return the input image unchanged
return (image,)
return (anything,)
def trigger_http_request(self):
url = "http://localhost:8188/prompt"
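The `Everything` subclass appears to rely on ComfyUI validating links by comparing the declared and actual socket types with `!=`; forcing `__ne__` to always return `False` makes the comparison pass for any type, so the node accepts any connection. A minimal standalone demonstration of the trick:

```python
class Everything(str):
    # ComfyUI rejects a link when declared_type != actual_type is truthy;
    # always returning False makes every type comparison succeed.
    def __ne__(self, other):
        return False

any_type = Everything("*")
print(any_type != "IMAGE")   # False: an IMAGE link is accepted
print(any_type != "LATENT")  # False: so is anything else
```

Because it subclasses `str`, the instance still behaves like the literal `"*"` everywhere else in the node definition.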


@@ -0,0 +1,77 @@
import os
from aiohttp import web
from server import PromptServer
import logging
class LoopIntegerSequential:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"from_this": ("INT", {"default": 0, "min": 0, "max": 50000, "step": 1}),
"to_that": ("INT", {"default": 10, "min": 0, "max": 50000, "step": 1}),
"jump": ("INT", {"default": 1, "min": 0, "max": 1000, "step": 1}),
},
}
RETURN_TYPES = ("INT", "INT")
RETURN_NAMES = ("int_value", "remaining_cycles")
FUNCTION = "get_next_value"
CATEGORY = "Bjornulf"
@classmethod
def IS_CHANGED(cls, **kwargs):
return float("NaN") # This ensures the node always runs
def get_next_value(self, from_this, to_that, jump):
counter_file = os.path.join("Bjornulf", "counter_integer.txt")
os.makedirs(os.path.dirname(counter_file), exist_ok=True)
try:
with open(counter_file, 'r') as f:
current_value = int(f.read().strip())
except (FileNotFoundError, ValueError):
current_value = from_this - jump # Start with from_this on first run
next_value = current_value + jump
# Block execution if we exceed to_that
if next_value > to_that:
raise ValueError(f"Counter has reached its limit of {to_that}; reset the counter to continue.")
# Save the new value
with open(counter_file, 'w') as f:
f.write(str(next_value))
# Calculate how many times it can run before reaching the limit
if jump != 0:
remaining_cycles = max(0, (to_that - next_value) // jump + 1)
else:
remaining_cycles = 0 # Avoid division by zero
return (next_value, remaining_cycles - 1) # Subtract 1 to account for the current run
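Ignoring the file persistence, the sequence of `(int_value, remaining_cycles)` pairs the node emits over successive runs can be simulated in memory. This hypothetical helper mirrors the first-run initialisation (`from_this - jump`), the limit check, and the remaining-cycles arithmetic (which simplifies to `(to_that - next_value) // jump` after the node's `- 1`):

```python
def sequential_values(from_this, to_that, jump):
    # Enumerate every value the counter would emit before raising at the limit.
    if jump <= 0:
        raise ValueError("jump must be positive")
    current = from_this - jump  # mirrors the first-run initialisation
    results = []
    while True:
        nxt = current + jump
        if nxt > to_that:
            break  # the node raises here to halt the workflow
        remaining = (to_that - nxt) // jump  # full runs still possible
        results.append((nxt, remaining))
        current = nxt
    return results
```

For example, `sequential_values(0, 10, 5)` walks 0, 5, 10 with 2, 1, 0 runs remaining, after which the node blocks until the counter file is reset.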
# Server routes
# @PromptServer.instance.routes.get("/get_counter_value")
# async def get_counter_value(request):
# logging.info("Get counter value called")
# counter_file = os.path.join("Bjornulf", "counter_integer.txt")
# try:
# with open(counter_file, 'r') as f:
# value = int(f.read().strip())
# return web.json_response({"success": True, "value": value}, status=200)
# except (FileNotFoundError, ValueError):
# return web.json_response({"success": False, "error": "Counter not initialized"}, status=404)
@PromptServer.instance.routes.post("/reset_counter")
async def reset_counter(request):
logging.info("Reset counter called")
counter_file = os.path.join("Bjornulf", "counter_integer.txt")
try:
os.remove(counter_file)
return web.json_response({"success": True}, status=200)
except FileNotFoundError:
return web.json_response({"success": True}, status=200) # File doesn't exist, consider it reset
except Exception as e:
return web.json_response({"success": False, "error": str(e)}, status=500)


@@ -1,7 +1,7 @@
[project]
name = "bjornulf_custom_nodes"
description = "Nodes: Ollama, Text to Speech, Combine Texts, Random Texts, Save image for Bjornulf LobeChat, Text with random Seed, Random line from input, Combine images, Image to grayscale (black & white), Remove image Transparency (alpha), Resize Image, ..."
version = "0.48"
version = "0.49"
license = {file = "LICENSE"}
[project.urls]

@@ -10,7 +10,7 @@ class SelectImageFromList:
return {
"required": {
"all_images": ("IMAGE", {}),
"selection": ("INT", {"default": 1, "min": 1, "max": 999, "step": 1}),
"selection": ("INT", {"default": 1, "min": -999999, "max": 999999, "step": 1}), # Updated to allow negative values
}
}
@@ -20,11 +20,20 @@ class SelectImageFromList:
CATEGORY = "Bjornulf"
def select_an_image(self, all_images, selection):
# Ensure the selection is within bounds
selection = max(1, min(selection, all_images.shape[0]))
num_images = all_images.shape[0]
# Adjust selection to 0-based index
index = selection - 1
# Convert selection to 0-based index
if selection > 0:
index = selection - 1
else:
# Handle negative indices directly
index = selection
# Ensure the index is within bounds
if index >= num_images:
index = num_images - 1
elif index < -num_images:
index = 0
# Select the image at the specified index
selected_image = all_images[index].unsqueeze(0)
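The index handling above can be isolated as a pure function: positive selections are 1-based, non-positive selections are passed through as Python-style negative indices, and both are clamped to the batch bounds. A hypothetical helper mirroring `select_an_image`:

```python
def normalize_selection(selection, num_images):
    # Map the 1-based (possibly negative) widget value to a valid index.
    # Negative results are legal: Python indexing counts from the end.
    index = selection - 1 if selection > 0 else selection
    if index >= num_images:
        index = num_images - 1  # clamp past-the-end to the last image
    elif index < -num_images:
        index = 0  # clamp too-negative to the first image
    return index
```

So with a batch of 4 images, `selection=1` picks the first, `selection=-1` the last, and out-of-range values saturate instead of raising.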


@@ -0,0 +1,65 @@
import { app } from "../../../scripts/app.js";
app.registerExtension({
name: "Bjornulf.LoopIntegerSequential",
async nodeCreated(node) {
if (node.comfyClass !== "Bjornulf_LoopIntegerSequential") return;
// Hide seed widget
const seedWidget = node.widgets.find(w => w.name === "seed");
if (seedWidget) {
seedWidget.visible = false;
}
// Add get value button
// const getValueButton = node.addWidget("button", "Get Counter Value", null, () => {
// fetch('/get_counter_value')
// .then(response => response.json())
// .then(data => {
// if (data.success) {
// app.ui.toast(`Current counter value: ${data.value}`, {'duration': 5000});
// } else {
// app.ui.toast(`Failed to get counter value: ${data.error || "Unknown error"}`, {'type': 'error', 'duration': 5000});
// }
// })
// .catch((error) => {
// console.error('Error:', error);
// app.ui.toast("An error occurred while getting the counter value.", {'type': 'error', 'duration': 5000});
// });
// });
// Add reset button
const resetButton = node.addWidget("button", "Reset Counter", null, () => {
fetch('/reset_counter', {
method: 'POST'
})
.then(response => response.json())
.then(data => {
if (data.success) {
app.ui.toast("Counter reset successfully!", {'duration': 5000});
} else {
app.ui.toast(`Failed to reset counter: ${data.error || "Unknown error"}`, {'type': 'error', 'duration': 5000});
}
})
.catch((error) => {
console.error('Error:', error);
app.ui.toast("An error occurred while resetting the counter.", {'type': 'error', 'duration': 5000});
});
});
// Override the original execute function
const originalExecute = node.execute;
node.execute = function() {
const result = originalExecute.apply(this, arguments);
if (result instanceof Promise) {
return result.catch(error => {
if (error.message.includes("Counter has reached its limit")) {
app.ui.toast(`Execution blocked: ${error.message}`, {'type': 'error', 'duration': 5000});
}
throw error; // Re-throw the error to stop further execution
});
}
return result;
};
}
});