README.md
@@ -1,4 +1,4 @@
-# 🔗 Comfyui : Bjornulf_custom_nodes v0.39 🔗
+# 🔗 Comfyui : Bjornulf_custom_nodes v0.41 🔗
 
 # Coffee : ☕☕☕☕☕ 5/5
 
@@ -33,7 +33,7 @@ huggingface-cli download comfyanonymous/flux_text_encoders clip_l.safetensors --
 huggingface-cli download comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors --local-dir /workspace/ComfyUI/models/clip
 huggingface-cli download black-forest-labs/FLUX.1-dev ae.safetensors --local-dir /workspace/ComfyUI/models/vae
 ```
-To use Flux you can just drag and drop in your browser the .json from my github repo : `workflows/FLUX_dev_troll.json`, direct link : <https://github.com/justUmen/ComfyUI-BjornulfNodes/blob/main/workflows/FLUX_dev_troll.json>.
+To use Flux, you can just drag and drop the .json from my github repo into the ComfyUI interface in your browser : `workflows/FLUX_dev_troll.json`, direct link : <https://github.com/justUmen/ComfyUI-BjornulfNodes/blob/main/workflows/FLUX_dev_troll.json>.
 
 For downloading from civitai (get your token here <https://civitai.com/user/account>), just copy/paste the link of the checkpoint you want to download and use something like this, with your token in the URL :
 ```
@@ -42,11 +42,46 @@ wget --content-disposition -P /workspace/ComfyUI/models/checkpoints "https://civ
 ```
 If you have any issues with this template from Runpod, please let me know, I'm here to help. 😊
 
-# Dependencies
+# 🏗 Dependencies (nothing to do for runpod ☁)
 
+## 🪟🐍 Windows : Install dependencies on windows with embedded python (portable version)
+
+First you need to find the embedded python's `python.exe`, then you can right click or shift + right click inside that folder in your file manager to open a terminal there.
+
+This is where I have it, with the command you need :
+`H:\ComfyUI_windows_portable\python_embeded> .\python.exe -m pip install pydub ollama`
+
+When you have to install something else, you can reuse the same command with the dependency you want :
+`.\python.exe -m pip install whateveryouwant`
+
+You can then run comfyui.
+
+## 🐧🐍 Linux : Install dependencies (without venv, not recommended)
+
 - `pip install ollama` (you can also install ollama itself if you want : https://ollama.com/download) - You don't really need to install it if you don't want to use my ollama node. (BUT you do need to run `pip install ollama`)
 - `pip install pydub` (for TTS node)
 
+## 🐧🐍 Linux : Install dependencies with python virtual environment (venv)
+
+If you want to use a python virtual environment only for ComfyUI, which I recommend, you can do it like this for example (this also pre-installs pip) :
+
+```
+sudo apt-get install python3-venv python3-pip
+python3 -m venv /the/path/you/want/venv/bjornulf_comfyui
+```
+
+Once you have your environment in this new folder, you can activate it and install the dependencies inside :
+
+```
+source /the/path/you/want/venv/bjornulf_comfyui/bin/activate
+pip install ollama pydub
+```
+
+Then you can start comfyui with this environment (notice that you need to re-activate it each time you want to launch comfyui) :
+
+```
+cd /where/you/installed/ComfyUI && python main.py
+```
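If you want to sanity-check these venv steps without touching a real install path, here is a throw-away version of the same commands (my own demo path under `/tmp` is an arbitrary choice, and `python3-venv` must already be installed) :

```shell
# Create a disposable venv, activate it, and confirm that "python" now comes from it.
python3 -m venv /tmp/bjornulf_demo_venv
. /tmp/bjornulf_demo_venv/bin/activate
python -c 'import sys; print(sys.prefix)'
deactivate
rm -rf /tmp/bjornulf_demo_venv
```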
 # Nodes menu
 
 1. [👁 Show (Text, Int, Float)](#1----show-text-int-float)
@@ -64,15 +99,15 @@ If you have any issues with this template from Runpod, please let me know, I'm h
 13. [📏 Resize Exact](#1314------resize-and-save-exact-name-%EF%B8%8F)
 14. [🖼 Save Exact name](#1314------resize-and-save-exact-name-%EF%B8%8F)
 15. [💾 Save Text](#15----save-text)
-16. [🖼💬 Save image for Bjornulf LobeChat](#16----save-image-for-bjornulf-lobechat-for-my-custom-lobe-chat)
+16. [💾🖼💬 Save image for Bjornulf LobeChat](#16-----save-image-for-bjornulf-lobechat-for-my-custom-lobe-chat)
-17. [🖼 Save image as `tmp_api.png` Temporary API](#17----save-image-as-tmp_apipng-temporary-api-%EF%B8%8F)
+17. [💾🖼 Save image as `tmp_api.png` Temporary API](#17-----save-image-as-tmp_apipng-temporary-api-%EF%B8%8F)
-18. [🖼📁 Save image to a chosen folder name](#18----save-image-to-a-chosen-folder-name)
+18. [💾🖼📁 Save image to a chosen folder name](#18-----save-image-to-a-chosen-folder-name)
 19. [🦙 Ollama](#19----ollama)
 20. [📹 Video Ping Pong](#20----video-ping-pong)
 21. [📹 Images to Video](#21----images-to-video)
 22. [🔲 Remove image Transparency (alpha)](#22----remove-image-transparency-alpha)
 23. [🔲 Image to grayscale (black & white)](#23----image-to-grayscale-black--white)
-24. [🖼+🖼 Combine images (Background + Overlay)](#24----combine-images-background--overlay)
+24. [🖼+🖼 Stack two images (Background + Overlay)](#24----combine-images-background--overlay)
 25. [🟩➜▢ Green Screen to Transparency](#25----green-screen-to-transparency)
 26. [🎲 Random line from input](#26----random-line-from-input)
 27. [♻ Loop (All Lines from input)](#27----loop-all-lines-from-input)
@@ -93,7 +128,7 @@ If you have any issues with this template from Runpod, please let me know, I'm h
 42. [♻ Loop (Model+Clip+Vae) - aka Checkpoint / Model](#42----loop-modelclipvae---aka-checkpoint--model)
 43. [📂🖼 Load Images from output folder](#43----load-images-from-output-folder)
 44. [🖼🔍 Select an Image, Pick](#44----select-an-image-pick)
-45. [🔀 If-Else (input == compare_with)](#45----if-else-input--compare_with)
+45. [🔀 If-Else (input / compare_with)](#45----if-else-input--compare_with)
 
 # 📝 Changelog
 
@@ -140,6 +175,7 @@ If you have any issues with this template from Runpod, please let me know, I'm h
 - **v0.38**: New node : If-Else logic. (input == compare_with), examples with different latent space size. +fix some deserialization issues.
 - **v0.39**: Add variables management to Advanced Write Text node.
 - **v0.40**: Add variables management to Loop Advanced Write Text node. Add menu for all nodes to the README.
+- **v0.41**: Two new nodes : image details and combine images. Also ❗ Big changes to the If-Else node. (+many minor changes)
 
 # 📝 Nodes descriptions
 
@@ -272,7 +308,7 @@ Resize an image to exact dimensions. The other node will save the image to the e
 **Description:**
 Save the given text input to a file. Useful for logging and storing text data.
 
-## 16 - 🖼💬 Save image for Bjornulf LobeChat (❗For my custom [lobe-chat](https://github.com/justUmen/Bjornulf_lobe-chat)❗)
+## 16 - 💾🖼💬 Save image for Bjornulf LobeChat (❗For my custom [lobe-chat](https://github.com/justUmen/Bjornulf_lobe-chat)❗)
 
 
 **Description:**
@@ -282,13 +318,13 @@ The name will start at `api_00001.png`, then `api_00002.png`, etc...
 It will also create a link to the last generated image at the location `output/BJORNULF_API_LAST_IMAGE.png`.
 This link will be used by my custom lobe-chat to copy the image inside the lobe-chat project.
 
-## 17 - 🖼 Save image as `tmp_api.png` Temporary API ⚠️💣
+## 17 - 💾🖼 Save image as `tmp_api.png` Temporary API ⚠️💣
 
 
 **Description:**
 Save image for short-term use : ./output/tmp_api.png ⚠️💣
 
-## 18 - 🖼📁 Save image to a chosen folder name
+## 18 - 💾🖼📁 Save image to a chosen folder name
 
 
 **Description:**
@@ -333,11 +369,11 @@ Convert an image to grayscale (black & white)
 Example : I sometimes use it with Ipadapter to disable color influence.
 But you can also sometimes just want a black and white image...
 
-## 24 - 🖼+🖼 Combine images (Background + Overlay)
+## 24 - 🖼+🖼 Stack two images (Background + Overlay)
 
 
 **Description:**
-Combine two images into a single image : a background and one (or several) transparent overlay. (allow to have a video there, just send all the frames and recombine them after.)
+Stack two images into a single image : a background and one (or several) transparent overlays. (This also allows video there : just send all the frames and recombine them afterwards.)
 Update 0.11 : Add option to move vertically and horizontally. (from -50% to 150%)
 ❗ Warning : For now, `background` is a static image. (I will allow video there later too.)
 ⚠️ Warning : If you want to directly load the image with transparency, use my node `🖼 Load Image with Transparency ▢` instead of the `Load Image` node.
@@ -563,7 +599,6 @@ Loop over all the trios from several checkpoint node.
 **Description:**
 Quickly select all images from a folder inside the output folder. (Not recursively.)
 So... As you can see from the screenshot, the images are split based on their resolution.
-It is not a choice I made, it is something that is part of the comfyui environment.
 It's also not possible to dynamically edit the number of outputs, so I just picked a number : 4.
 The node will separate the images based on their resolution, so with this node you can have 4 different resolutions per folder. (If you have more than that, maybe you should use another folder...)
 To avoid an error or crash if you have fewer than 4 resolutions in a folder, the node will just output white tensors. (white square images.)
@@ -578,6 +613,8 @@ If you are satisfied with this logic, you can then select all these nodes, right
 Here is another example of the same thing but excluding the save folder node :
 
 
+⚠️ If you really want to regroup all the images in one flow, you can use my node 47 `Combine images` to put them all together.
 
 ### 44 - 🖼🔍 Select an Image, Pick
 
 
@@ -589,29 +626,71 @@ Useful in combination with my Load images from folder and preview image nodes.
 You can also of course make a group node, like this one, which is the same as the screenshot above :
 
 
-### 45 - 🔀 If-Else (input == compare_with)
+### 45 - 🔀 If-Else (input / compare_with)
 
+
 
-
 
 **Description:**
-If the `input` given is equal to the `compare_with` given in the widget, it will forward `send_if_true`, otherwise it will forward `send_if_false`.
+If the `input` given is equal to the `compare_with` given in the widget, it will forward `send_if_true`, otherwise it will forward `send_if_false`. (If there is no `send_if_false`, it will return `None`.)
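In plain Python, the forwarding rule described above amounts to something like this (my own sketch, not the node's actual implementation — the real node also supports other comparison modes) :

```python
# Sketch of the If-Else forwarding rule: equal -> send_if_true, else send_if_false.
def if_else(input, compare_with, send_if_true, send_if_false=None):
    if str(input) == str(compare_with):
        return send_if_true
    return send_if_false  # None when send_if_false is not connected

print(if_else("SDXL", "SDXL", "1024x1024", "512x512"))  # 1024x1024
print(if_else("SD1.5", "SDXL", "1024x1024"))            # None
```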
 You can forward anything, below is an example of forwarding a different latent space size depending on whether it's SDXL or not.
 
 
 
+Here is an example of the node with all outputs displayed with Show text nodes :
+
+
+
+`send_if_false` is optional, if not connected, it will be replaced by `None`.
+
+
+
 If-Else nodes are chainable, just connect `output` to `send_if_false`.
 ⚠️ Always simply test `input` against `compare_with`, and connect the desired value to `send_if_true`. ⚠️
-Here a simple example with 2 If-Else nodes (choose between 3 different resolutions). ❗ Notice the same write text node is connected to both If-Else nodes input :
+Here is a simple example with 2 If-Else nodes (choose between 3 different resolutions).
+
+❗ Notice that the same write text node is connected to the input of both If-Else nodes :
 
 
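Using the same plain-Python stand-in for a node, the chaining from the screenshot works like this (my own sketch; `text` plays the role of the write text node) :

```python
# Two chained If-Else checks picking one of 3 resolutions; the second check is
# wired into the first check's send_if_false.
def if_else(input, compare_with, send_if_true, send_if_false=None):
    return send_if_true if str(input) == str(compare_with) else send_if_false

text = "portrait"
latent = if_else(text, "landscape", (1216, 832),
                 if_else(text, "portrait", (832, 1216), (1024, 1024)))
print(latent)  # (832, 1216)
```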
 Let's take a similar example, but let's use my Write loop text node to display all 3 types at once :
 
 
 
-If you understood the previous examples, here is a complete example that will create 3 images, landscape, portrait and normal :
+If you understood the previous examples, here is a complete example that will create 3 images : landscape, portrait and square :
 
 
 
 (Workflow is hidden for simplicity, but it is very basic : just connect the latent to the KSampler, nothing special.)
-You can also connect the same advanced loop write text node with my save folder node to save the images (landscape/portrait/normal) in separate folders, but you do you...
+You can also connect the same advanced loop write text node with my save folder node to save the images (landscape/portrait/square) in separate folders, but you do you...
 
+### 46 - 🖼🔍 Image Details
+
+**Description:**
+Display the details of an image. (width, height, has_transparency, orientation, type)
+`RGBA` is considered as having transparency, `RGB` is not.
+`orientation` can be `landscape`, `portrait` or `square`.
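A minimal sketch (my own illustration, not the node's exact code) of how those fields can be derived from width, height and channel count :

```python
def image_details(width, height, channels):
    has_transparency = channels == 4  # RGBA has an alpha channel, RGB does not
    if width > height:
        orientation = "landscape"
    elif height > width:
        orientation = "portrait"
    else:
        orientation = "square"
    return width, height, has_transparency, orientation

print(image_details(1216, 832, 3))  # (1216, 832, False, 'landscape')
print(image_details(512, 512, 4))   # (512, 512, True, 'square')
```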
 
+
+
+### 47 - 🖼🔗 Combine Images
+
+**Description:**
+Combine multiple images. (A single image or a list of images.)
+
+There are two types of logic to "combine images". With "all_in_one" enabled, it will combine all the images into one tensor.
+Otherwise it will send the images one by one. (check the examples below) :
+
+This is an example with the "all_in_one" option disabled :
+
+
+
+But for example, if you want to use my node `select an image, pick`, you need to enable `all_in_one`, and the images must all have the same resolution.
+
+
+
+You can notice that there is no visible difference when you use `all_in_one` with the `preview image` node. (This is why I added the `show text` node, not that show text will make it blue, because it's an image/tensor.)
+
+When you use the `combine image` node, you can actually also send many images at once, it will combine them all.
+Here is an example with the `Load images from folder` node, `Image details` node and `Combine images` node. (Of course it can't have `all_in_one` set to True in this situation, because the images have different resolutions) :
+
+
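The difference between the two "combine" behaviours described above can be sketched with numpy (my own illustration, assuming same-resolution HxWxC images; the node itself works on ComfyUI image tensors) :

```python
import numpy as np

# Three images of the same resolution (height, width, channels).
images = [np.zeros((64, 64, 3), dtype=np.float32) for _ in range(3)]

all_in_one = np.stack(images)  # one batched tensor of shape (3, 64, 64, 3)
one_by_one = images            # a plain list, forwarded image by image

print(all_in_one.shape)  # (3, 64, 64, 3)
```

Stacking requires identical shapes, which is why `all_in_one` raises an error when resolutions differ.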
__init__.py
@@ -50,6 +50,8 @@ from .load_images_from_folder import LoadImagesFromSelectedFolder
 from .select_image_from_list import SelectImageFromList
 from .random_model_selector import RandomModelSelector
 from .if_else import IfElse
+from .image_details import ImageDetails
+from .combine_images import CombineImages
 
 # from .pass_preview_image import PassPreviewImage
 # from .check_black_image import CheckBlackImage
@@ -59,6 +61,8 @@ from .if_else import IfElse
 NODE_CLASS_MAPPINGS = {
     # "Bjornulf_CustomStringType": CustomStringType,
     "Bjornulf_ollamaLoader": ollamaLoader,
+    "Bjornulf_CombineImages": CombineImages,
+    "Bjornulf_ImageDetails": ImageDetails,
     "Bjornulf_IfElse": IfElse,
     "Bjornulf_RandomModelSelector": RandomModelSelector,
     "Bjornulf_SelectImageFromList": SelectImageFromList,
@@ -149,14 +153,13 @@ NODE_DISPLAY_NAME_MAPPINGS = {
     # "Bjornulf_ShowFloat": "👁 Show (Float)",
     "Bjornulf_ImageMaskCutter": "🖼✂ Cut Image with Mask",
     "Bjornulf_LoadImageWithTransparency": "🖼 Load Image with Transparency ▢",
-    "Bjornulf_CombineBackgroundOverlay": "🖼+🖼 Combine images (Background+Overlay alpha)",
+    "Bjornulf_CombineBackgroundOverlay": "🖼+🖼 Stack two images (Background+Overlay alpha)",
     "Bjornulf_GrayscaleTransform": "🖼➜🔲 Image to grayscale (black & white)",
     "Bjornulf_RemoveTransparency": "▢➜⬛ Remove image Transparency (alpha)",
     "Bjornulf_ResizeImage": "📏 Resize Image",
-    "Bjornulf_SaveImagePath": "🖼 Save Image (exact path, exact name) ⚠️💣",
+    "Bjornulf_SaveImagePath": "💾🖼 Save Image (exact path, exact name) ⚠️💣",
-    "Bjornulf_SaveImageToFolder": "🖼📁 Save Image(s) to a folder",
+    "Bjornulf_SaveImageToFolder": "💾🖼📁 Save Image(s) to a folder",
-    "Bjornulf_SaveTmpImage": "🖼 Save Image (tmp_api.png) ⚠️💣",
+    "Bjornulf_SaveTmpImage": "💾🖼 Save Image (tmp_api.png) ⚠️💣",
-    # "Bjornulf_SaveApiImage": "🖼 Save Image (./output/api_00001.png...)",
     "Bjornulf_SaveText": "💾 Save Text",
     # "Bjornulf_LoadText": "📥 Load Text",
     "Bjornulf_CombineTexts": "🔗 Combine (Texts)",
@@ -169,7 +172,9 @@ NODE_DISPLAY_NAME_MAPPINGS = {
     "Bjornulf_PauseResume": "⏸️ Paused. Resume or Stop, Pick 👇",
     "Bjornulf_LoadImagesFromSelectedFolder": "📂🖼 Load Images from output folder",
     "Bjornulf_SelectImageFromList": "🖼🔍 Select an Image, Pick",
-    "Bjornulf_IfElse": "🔀 If-Else (input == compare_with)",
+    "Bjornulf_IfElse": "🔀 If-Else (input / compare_with)",
+    "Bjornulf_ImageDetails": "🖼🔍 Image Details",
+    "Bjornulf_CombineImages": "🖼🔗 Combine Images",
 }
 
 WEB_DIRECTORY = "./web"
combine_images.py (new file)
@@ -0,0 +1,86 @@
+import torch
+import numpy as np
+import logging
+
+class CombineImages:
+    @classmethod
+    def INPUT_TYPES(cls):
+        return {
+            "required": {
+                "number_of_images": ("INT", {"default": 2, "min": 1, "max": 50, "step": 1}),
+                "all_in_one": ("BOOLEAN", {"default": False}),
+                "image_1": ("IMAGE",),
+            },
+            "hidden": {
+                **{f"image_{i}": ("IMAGE",) for i in range(2, 51)}
+            }
+        }
+
+    RETURN_TYPES = ("IMAGE",)
+    FUNCTION = "all_in_one_images"
+    OUTPUT_NODE = True
+    CATEGORY = "Bjornulf"
+
+    def all_in_one_images(self, number_of_images, all_in_one, **kwargs):
+        images = [kwargs[f"image_{i}"] for i in range(1, number_of_images + 1) if f"image_{i}" in kwargs]
+
+        for i, img in enumerate(images):
+            logging.info(f"Image {i+1} shape: {img.shape}, dtype: {img.dtype}, min: {img.min()}, max: {img.max()}")
+
+        if all_in_one:
+            # Check if all images have the same shape
+            shapes = [img.shape for img in images]
+            if len(set(shapes)) > 1:
+                raise ValueError("All images must have the same resolution to use all_in_one. "
+                                 f"Found different shapes: {shapes}")
+
+            # Convert images to float32 and scale to 0-1 range if necessary
+            processed_images = []
+            for img in images:
+                if isinstance(img, np.ndarray):
+                    if img.dtype == np.uint8:
+                        img = img.astype(np.float32) / 255.0
+                    elif img.dtype == np.bool_:
+                        img = img.astype(np.float32)
+                elif isinstance(img, torch.Tensor):
+                    if img.dtype == torch.uint8:
+                        img = img.float() / 255.0
+                    elif img.dtype == torch.bool:
+                        img = img.float()
+
+                # Ensure the image is 3D (height, width, channels)
+                if img.ndim == 4:
+                    img = img.squeeze(0)
+
+                processed_images.append(img)
+
+            # Stack all images along a new dimension
+            if isinstance(processed_images[0], np.ndarray):
+                all_in_oned = np.stack(processed_images)
+                all_in_oned = torch.from_numpy(all_in_oned)
+            else:
+                all_in_oned = torch.stack(processed_images)
+
+            # Ensure the output is in the format expected by the preview node
+            # (batch, height, width, channels)
+            if all_in_oned.ndim == 3:
+                all_in_oned = all_in_oned.unsqueeze(0)
+            if all_in_oned.shape[-1] != 3 and all_in_oned.shape[-1] != 4:
+                all_in_oned = all_in_oned.permute(0, 2, 3, 1)
+
+            return (all_in_oned,)
+        else:
+            # Return a single tuple containing all images (original behavior)
+            return (images,)
+
+    @classmethod
+    def IS_CHANGED(cls, **kwargs):
+        return float("NaN")
+
+    @classmethod
+    def VALIDATE_INPUTS(cls, **kwargs):
+        if kwargs['all_in_one']:
+            cls.OUTPUT_IS_LIST = (False,)
+        else:
+            cls.OUTPUT_IS_LIST = (True,)
+        return True
if_else.py
@@ -7,20 +7,97 @@ class IfElse:
     def INPUT_TYPES(cls):
         return {
             "required": {
-                "input": ("STRING", {"forceInput": True, "multiline": False}),
+                "input": (Everything("*"), {"forceInput": True, "multiline": False}),
+                "input_type": ([
+                    "STRING: input EQUAL TO compare_with",
+                    "STRING: input NOT EQUAL TO compare_with",
+                    "BOOLEAN: input IS TRUE",
+                    "NUMBER: input GREATER THAN compare_with",
+                    "NUMBER: input GREATER OR EQUAL TO compare_with",
+                    "NUMBER: input LESS THAN compare_with",
+                    "NUMBER: input LESS OR EQUAL TO compare_with"
+                ], {"default": "STRING: input EQUAL TO compare_with"}),
                 "send_if_true": (Everything("*"),),
-                "send_if_false": (Everything("*"),),
                 "compare_with": ("STRING", {"multiline": False}),
             },
+            "optional": {
+                "send_if_false": (Everything("*"),),
+            }
         }
 
-    RETURN_TYPES = (Everything("*"), "STRING")
-    RETURN_NAMES = ("output", "true_or_false")
+    RETURN_TYPES = (Everything("*"), Everything("*"), "STRING", "STRING", "STRING")
+    RETURN_NAMES = ("output", "rejected", "input_type", "true_or_false", "details")
     FUNCTION = "if_else"
     CATEGORY = "Bjornulf"
 
-    def if_else(self, input, send_if_true, send_if_false, compare_with):
-        if input == compare_with:
-            return (send_if_true, "True")
+    def if_else(self, input, send_if_true, compare_with, input_type, send_if_false=None):
+        result = False
+        input_type_str = "STRING"
+        details = f"input: {input}\ncompare_with: {compare_with}\n"
+        error_message = ""
+
+        # Input validation
+        if input_type.startswith("NUMBER:"):
+            try:
+                float(input)
+                float(compare_with)
+            except ValueError:
+                error_message = "If-Else ERROR: For numeric comparisons, both \"input\" and \"compare_with\" must be valid numbers.\n"
+        elif input_type == "BOOLEAN: input IS TRUE":
+            if str(input).lower() not in ("true", "false", "1", "0", "yes", "no", "y", "n", "on", "off"):
+                error_message = "If-Else ERROR: For boolean check, \"input\" must be a recognizable boolean value.\n"
+
+        if error_message:
+            details = error_message + "\n" + details
+            details += "\nContinuing with default string comparison."
+            input_type = "STRING: input EQUAL TO compare_with"
+
+        if input_type == "STRING: input EQUAL TO compare_with":
+            result = str(input) == str(compare_with)
+            details += f"\nCompared strings: '{input}' == '{compare_with}'"
+        elif input_type == "STRING: input NOT EQUAL TO compare_with":
+            result = str(input) != str(compare_with)
+            details += f"\nCompared strings: '{input}' != '{compare_with}'"
+        elif input_type == "BOOLEAN: input IS TRUE":
+            result = str(input).lower() in ("true", "1", "yes", "y", "on")
+            details += f"\nChecked if '{input}' is considered True"
+        else:  # Numeric comparisons
+            try:
+                input_num = float(input)
+                compare_num = float(compare_with)
+                if input_type == "NUMBER: input GREATER THAN compare_with":
+                    result = input_num > compare_num
+                    details += f"\nCompared numbers: {input_num} > {compare_num}"
+                elif input_type == "NUMBER: input GREATER OR EQUAL TO compare_with":
+                    result = input_num >= compare_num
+                    details += f"\nCompared numbers: {input_num} >= {compare_num}"
+                elif input_type == "NUMBER: input LESS THAN compare_with":
+                    result = input_num < compare_num
+                    details += f"\nCompared numbers: {input_num} < {compare_num}"
+                elif input_type == "NUMBER: input LESS OR EQUAL TO compare_with":
+                    result = input_num <= compare_num
+                    details += f"\nCompared numbers: {input_num} <= {compare_num}"
+                input_type_str = "FLOAT" if "." in str(input) else "INT"
+            except ValueError:
+                result = str(input) == str(compare_with)
+                details += f"\nUnexpected error in numeric conversion, compared as strings: '{input}' == '{compare_with}'"
+
+        if result:
+            output = send_if_true
+            rejected = send_if_false if send_if_false is not None else None
         else:
-            return (send_if_false, "False")
+            output = send_if_false if send_if_false is not None else None
+            rejected = send_if_true
+
+        result_str = str(result)
+        details += f"\nResult: {result_str}"
+        details += f"\nReturned value to {'output' if result else 'rejected'}"
+        details += f"\n\noutput: {output}"
+        details += f"\nrejected: {rejected}"
+
+        return (output, rejected, input_type_str, result_str, details)
+
+    @classmethod
+    def IS_CHANGED(cls, input, send_if_true, compare_with, input_type, send_if_false=None):
+        return float("NaN")
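For reference, the "BOOLEAN: input IS TRUE" branch above coerces the incoming value through `str(...).lower()` like this :

```python
# Values treated as True by the boolean branch; everything else counts as False.
TRUE_VALUES = ("true", "1", "yes", "y", "on")

print(str("Yes").lower() in TRUE_VALUES)  # True
print(str(0).lower() in TRUE_VALUES)      # False
```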
image_details.py (new file)
@@ -0,0 +1,98 @@
import torch
import numpy as np
from PIL import Image
import io

class ImageDetails:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image_input": ("IMAGE",),
            },
        }

    RETURN_TYPES = ("INT", "INT", "BOOL", "STRING", "STRING", "STRING")
    RETURN_NAMES = ("WIDTH", "HEIGHT", "HAS_TRANSPARENCY", "ORIENTATION", "TYPE", "ALL")
    FUNCTION = "show_image_details"
    OUTPUT_NODE = True
    CATEGORY = "Bjornulf"

    def show_image_details(self, image_input):
        if isinstance(image_input, torch.Tensor):
            is_tensor = True
            input_type = "tensor"
            # Ensure the tensor is on CPU and convert to numpy
            image_input = image_input.cpu().numpy()
        elif isinstance(image_input, (bytes, bytearray)):
            is_tensor = False
            input_type = "bytes"
            image_input = [image_input]  # Wrap single bytes object in a list
        else:
            is_tensor = False
            input_type = "bytes"

        all_widths, all_heights, all_transparencies, all_details, all_orientations = [], [], [], [], []

        if is_tensor:
            # Handle tensor images
            if len(image_input.shape) == 5:  # (batch, 1, channels, height, width)
                image_input = np.squeeze(image_input, axis=1)

            batch_size = image_input.shape[0]
            for i in range(batch_size):
                image = image_input[i]

                # Ensure the image is in HxWxC format
                if image.shape[0] == 3 or image.shape[0] == 4:  # If it's in CxHxW format
                    image = np.transpose(image, (1, 2, 0))  # Change to HxWxC

                # Normalize to 0-255 range if necessary
                if image.max() <= 1:
                    image = (image * 255).astype('uint8')
                else:
                    image = image.astype('uint8')

                pil_image = Image.fromarray(image)
                self.process_image(pil_image, input_type, all_widths, all_heights, all_transparencies, all_details, all_orientations)
        else:
            # Handle bytes-like objects
            batch_size = len(image_input)
            for i in range(batch_size):
                pil_image = Image.open(io.BytesIO(image_input[i]))
                self.process_image(pil_image, input_type, all_widths, all_heights, all_transparencies, all_details, all_orientations)

        # Combine all details into a single string
        combined_details = "\n".join(all_details)

        # Return the details of the first image, plus the combined details string
        return (all_widths[0], all_heights[0], all_transparencies[0], all_orientations[0],
                input_type, combined_details)

    def process_image(self, pil_image, input_type, all_widths, all_heights, all_transparencies, all_details, all_orientations):
        # Get image details
        width, height = pil_image.size
        has_transparency = pil_image.mode in ('RGBA', 'LA') or \
                           (pil_image.mode == 'P' and 'transparency' in pil_image.info)

        # Determine orientation
        if width > height:
            orientation = "landscape"
        elif height > width:
            orientation = "portrait"
        else:
            orientation = "square"

        # Prepare the ALL string
        details = f"\nType: {input_type}"
        details += f"\nWidth: {width}"
        details += f"\nHeight: {height}"
        details += f"\nLoaded with transparency: {has_transparency}"
        details += f"\nImage Mode: {pil_image.mode}"
        details += f"\nOrientation: {orientation}\n"

        all_widths.append(width)
        all_heights.append(height)
        all_transparencies.append(has_transparency)
        all_details.append(details)
        all_orientations.append(orientation)
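The bookkeeping in `process_image` reduces to two small rules: transparency is read from the PIL mode (or from palette metadata), and orientation from the aspect ratio. Those rules can be sketched without PIL at all (the `describe_image` helper is hypothetical, for illustration only):

```python
def describe_image(width, height, mode, info=None):
    """Hypothetical helper mirroring process_image's rules:
    transparency comes from the image mode (RGBA/LA, or a palette
    image with a 'transparency' key), orientation from width vs height."""
    info = info or {}
    has_transparency = mode in ("RGBA", "LA") or (mode == "P" and "transparency" in info)
    if width > height:
        orientation = "landscape"
    elif height > width:
        orientation = "portrait"
    else:
        orientation = "square"
    return {"width": width, "height": height,
            "has_transparency": has_transparency, "orientation": orientation}
```

Note the palette (`P` mode) case: such images carry no alpha channel, so transparency has to be detected via the `transparency` entry in the image's info dict, exactly as the node does.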
pyproject.toml
@@ -1,7 +1,7 @@
 [project]
 name = "bjornulf_custom_nodes"
 description = "Nodes: Ollama, Text to Speech, Combine Texts, Random Texts, Save image for Bjornulf LobeChat, Text with random Seed, Random line from input, Combine images, Image to grayscale (black & white), Remove image Transparency (alpha), Resize Image, ..."
-version = "0.40"
+version = "0.41"
 license = {file = "LICENSE"}

 [project.urls]
@@ -29,28 +29,31 @@ class RandomModelSelector:
|
|||||||
def random_select_model(self, number_of_models, seed, **kwargs):
|
def random_select_model(self, number_of_models, seed, **kwargs):
|
||||||
random.seed(seed)
|
random.seed(seed)
|
||||||
|
|
||||||
available_models = [kwargs[f"model_{i}"] for i in range(1, number_of_models + 1) if f"model_{i}" in kwargs]
|
# Collect available models from kwargs
|
||||||
|
available_models = [
|
||||||
|
kwargs[f"model_{i}"] for i in range(1, number_of_models + 1) if f"model_{i}" in kwargs and kwargs[f"model_{i}"]
|
||||||
|
]
|
||||||
|
|
||||||
|
# Raise an error if no models are available
|
||||||
if not available_models:
|
if not available_models:
|
||||||
raise ValueError("No models selected")
|
raise ValueError("No models selected")
|
||||||
|
|
||||||
|
# Randomly select a model
|
||||||
selected_model = random.choice(available_models)
|
selected_model = random.choice(available_models)
|
||||||
|
|
||||||
# Extract just the name of the model (no folders and no extensions)
|
# Get the model name (without folders or extensions)
|
||||||
model_name = os.path.splitext(os.path.basename(selected_model))[0]
|
model_name = os.path.splitext(os.path.basename(selected_model))[0]
|
||||||
|
|
||||||
# Get the full path of the selected model
|
# Get the full path to the selected model
|
||||||
model_path = get_full_path("checkpoints", selected_model)
|
model_path = get_full_path("checkpoints", selected_model)
|
||||||
|
|
||||||
# Get the folder of the selected model (Hopefully people use that to organize their models...)
|
# Get the folder name where the model is located
|
||||||
model_folder = os.path.basename(os.path.dirname(model_path))
|
model_folder = os.path.basename(os.path.dirname(model_path))
|
||||||
|
|
||||||
# Load the model
|
# Load the model using ComfyUI's checkpoint loader
|
||||||
loaded_objects = comfy.sd.load_checkpoint_guess_config(model_path)
|
loaded_objects = comfy.sd.load_checkpoint_guess_config(model_path)
|
||||||
|
|
||||||
# Unpack only the values we need
|
# Unpack only the values we need
|
||||||
model = loaded_objects[0]
|
model, clip, vae = loaded_objects[:3]
|
||||||
clip = loaded_objects[1]
|
|
||||||
vae = loaded_objects[2]
|
|
||||||
|
|
||||||
return (model, clip, vae, model_path, model_name, model_folder)
|
return model, clip, vae, model_path, model_name, model_folder
|
||||||
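Because `random_select_model` seeds Python's global RNG before choosing, the same seed always yields the same model, which keeps workflows reproducible. A minimal sketch of that behaviour (the `pick_model` helper and the example model paths are hypothetical):

```python
import os
import random

def pick_model(models, seed):
    """Hypothetical helper: seeded choice plus name extraction,
    mirroring random_select_model's use of random.seed/random.choice
    and os.path.splitext/os.path.basename."""
    if not models:
        raise ValueError("No models selected")
    random.seed(seed)
    selected = random.choice(models)
    # Strip folders and the .safetensors/.ckpt extension
    name = os.path.splitext(os.path.basename(selected))[0]
    return selected, name
```

Seeding the global RNG is simple but has a side effect on any other code that uses `random`; an alternative design would be a private `random.Random(seed)` instance.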
BIN (renamed screenshots, sizes unchanged): 108 KiB, 195 KiB, 111 KiB, 203 KiB, 444 KiB
BIN screenshots/combine_images_1.png (new file, 317 KiB)
BIN screenshots/combine_images_2.png (new file, 299 KiB)
BIN screenshots/combine_images_3.png (new file, 853 KiB)
BIN screenshots/if_0.png (new file, 100 KiB)
BIN screenshots/if_0_1.png (new file, 254 KiB)
BIN screenshots/if_1.png (new file, 203 KiB)
BIN screenshots/if_2.png (new file, 196 KiB)
BIN screenshots/if_3.png (new file, 146 KiB)
BIN screenshots/if_4.png (new file, 165 KiB)
BIN screenshots/if_5.png (new file, 374 KiB)
BIN screenshots/image_details_1.png (new file, 203 KiB)
web/js/combine_images.js (new file)
@@ -0,0 +1,52 @@
import { app } from "../../../scripts/app.js";

app.registerExtension({
    name: "Bjornulf.CombineImages",
    async nodeCreated(node) {
        if (node.comfyClass === "Bjornulf_CombineImages") {
            const updateInputs = () => {
                const numInputsWidget = node.widgets.find(w => w.name === "number_of_images");
                if (!numInputsWidget) return;

                const numInputs = numInputsWidget.value;

                // Initialize node.inputs if it doesn't exist
                if (!node.inputs) {
                    node.inputs = [];
                }

                // Filter existing image inputs
                const existingInputs = node.inputs.filter(input => input.name.startsWith('image_'));

                // Determine if we need to add or remove inputs
                if (existingInputs.length < numInputs) {
                    // Add new image inputs if there aren't enough
                    for (let i = existingInputs.length + 1; i <= numInputs; i++) {
                        const inputName = `image_${i}`;
                        if (!node.inputs.find(input => input.name === inputName)) {
                            node.addInput(inputName, "IMAGE");
                        }
                    }
                } else {
                    // Remove excess image inputs if there are too many
                    node.inputs = node.inputs.filter(input => !input.name.startsWith('image_') || parseInt(input.name.split('_')[1]) <= numInputs);
                }

                node.setSize(node.computeSize());
            };

            // Move number_of_images to the top initially
            const numInputsWidget = node.widgets.find(w => w.name === "number_of_images");
            if (numInputsWidget) {
                node.widgets = [numInputsWidget, ...node.widgets.filter(w => w !== numInputsWidget)];
                numInputsWidget.callback = () => {
                    updateInputs();
                    app.graph.setDirtyCanvas(true);
                };
            }

            // Delay the initial update to ensure the node is fully initialized
            setTimeout(updateInputs, 0);
        }
    }
});
@@ -47,6 +47,10 @@ app.registerExtension({
                 color = '#0096FF'; // Integer
             } else if (/^-?\d*\.?\d+$/.test(value)) {
                 color = 'orange'; // Float
+            } else if (value.startsWith("If-Else ERROR: ")) {
+                color = 'red'; // If-Else ERROR lines
+            } else if (value.startsWith("tensor(")) {
+                color = '#0096FF'; // Lines starting with "tensor("
             }

             w.inputEl.style.color = color;
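A subtlety in the float pattern `/^-?\d*\.?\d+$/` above: it also matches plain integers, so the integer branch must run first for the colors to come out right. A small Python transliteration makes the ordering visible (the `classify` helper and the `INT_RE` pattern are assumptions for illustration; the actual integer check lives above the hunk shown here):

```python
import re

# Transliteration of the diff's float regex; note it matches "42" too.
FLOAT_RE = re.compile(r"^-?\d*\.?\d+$")
# Assumed stricter integer pattern, checked first as in the JS if-chain.
INT_RE = re.compile(r"^-?\d+$")

def classify(value):
    """Order matters: the float pattern would also accept integers."""
    if INT_RE.match(value):
        return "int"
    if FLOAT_RE.match(value):
        return "float"
    return "other"
```

Strings like `tensor(...)` or `If-Else ERROR: ...` fall through both patterns, which is why the diff adds explicit `startsWith` branches for them.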
@@ -11,7 +11,7 @@ class WriteTextAdvanced:
                 "text": ("STRING", {"multiline": True, "lines": 10}),
             },
             "optional": {
-                "variables": ("STRING", {"multiline": True, "lines": 5}),
+                "variables": ("STRING", {"multiline": True, "forceInput": True}),
                 "seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
             },
         }