This commit is contained in:
justumen
2025-02-27 18:00:12 +01:00
parent 6a21e32a42
commit 10263f2110
38 changed files with 1965 additions and 432 deletions

149
README.md
View File

@@ -1,6 +1,6 @@
# 🔗 Comfyui : Bjornulf_custom_nodes v0.71 🔗
# 🔗 Comfyui : Bjornulf_custom_nodes v0.76 🔗
A list of 133 custom nodes for Comfyui : Display, manipulate, create and edit text, images, videos, loras, generate characters and more.
A list of 142 custom nodes for Comfyui : Display, manipulate, create and edit text, images, videos, loras, generate characters and more.
You can manage looping operations, generate randomized content, trigger logical conditions, pause and manually control your workflows and even work with external AI tools, like Ollama or Text To Speech.
# Watch Video (Quick overview 28 minutes) :
@@ -53,6 +53,10 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`116.` [📥 Load Text From Path](#116----load-text-from-path)
`117.` [📝👈🅰️ Line selector (🎲 or ♻ or ♻📑)](#117---🅰%EF%B8%8F-line-selector--or--or-)
`131.` [✒👉 Write Pick Me Chain](#131----write-pick-me-chain)
`136.` [🔛📝 Text Switch On/Off](#136)
`138.` [📑👈 Select from List](#138)
`141.` [🌎✒👉 Global Write Pick Me](#141)
`142.` [🌎📥 Load Global Pick Me](#142)
## 🔥 Text Generator 🔥
`81.` [🔥📝 Text Generator 📝🔥](#81----text-generator-)
@@ -109,6 +113,8 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`48.` [🔀🎲 Text scrambler (🧑 Character)](#48----text-scrambler--character)
`55.` [🎲👑 Random Lora Selector](#55----random-lora-selector)
`117.` [📝👈🅰️ Line selector (🎲 or ♻ or ♻📑)](#117---🅰%EF%B8%8F-line-selector--or--or-)
`139.` [🎲 Random Integer](#139)
`140.` [🎲 Random Float](#140)
## 🖼💾 Save Image / Text 💾🖼
`16.` [💾🖼💬 Save image for Bjornulf LobeChat](#16----save-image-for-bjornulf-lobechat-for-my-custom-lobe-chat)
@@ -143,6 +149,7 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`80.` [🩷 Empty Latent Selector](#80----empty-latent-selector)
## 🅰️ Variables 🅰️
`3.` [✒🗔🅰️ Advanced Write Text (+ 🎲 random option)](#3---🅰%EF%B8%8F-advanced-write-text---random-option)
`117.` [📝👈🅰️ Line selector (🎲 or ♻ or ♻📑)](#117---🅰%EF%B8%8F-line-selector--or--or-)
`123.` [💾🅰️ Save Global Variables](#123---🅰%EF%B8%8F-save-global-variables)
`124.` [📥🅰️ Load Global Variables](#124---🅰%EF%B8%8F-load-global-variables)
@@ -182,7 +189,7 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
## 📹 Video 📹
`20.` [📹 Video Ping Pong](#20----video-ping-pong)
`21.` [📹 Images to Video (FFmpeg)](#21----images-to-video)
`21.` [🖼➜📹 Images to Video (FFmpeg Save Video)](#21)
`49.` [📹👁 Video Preview](#49----video-preview)
`50.` [🖼➜📹 Images to Video path (tmp video)](#50----images-to-video-path-tmp-video)
`51.` [📹➜🖼 Video Path to Images](#51----video-path-to-images)
@@ -214,14 +221,17 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`66.` [🔊➜📝 STT - Speech to Text](#66----stt---speech-to-text)
`118.` [🔊 TTS Configuration ⚙](#118----tts-configuration-)
`120.` [📝➜🔊 Kokoro - Text to Speech](#120----kokoro---text-to-speech)
`134.` [🔊▶ Play Audio](#134)
## 💻 System 💻
## 💻 General / System 💻
`34.` [🧹 Free VRAM hack](#34----free-vram-hack)
`137.` [🌎🎲 Global Seed Manager](#137)
## 🧍 Manual user Control 🧍
`35.` [⏸️ Paused. Resume or Stop, Pick 👇](#35---%EF%B8%8F-paused-resume-or-stop-)
`36.` [⏸️ Paused. Select input, Pick 👇](#36---%EF%B8%8F-paused-select-input-pick-one)
`117.` [📝👈🅰️ Line selector (🎲 or ♻ or ♻📑)](#117---🅰%EF%B8%8F-line-selector--or--or-)
`135.` [🔛✨ Anything Switch On/Off](#135)
## 🧠 Logic / Conditional Operations 🧠
`45.` [🔀 If-Else (input / compare_with)](#45----if-else-input--compare_with)
@@ -290,7 +300,7 @@ You can then run comfyui.
## 🐧🐍 Linux : Install dependencies (without venv, not recommended)
Move the the custom_node folder and : `pip install -r requirements.txt`
Move to the custom_node folder and do : `pip install -r requirements.txt`
OR
@@ -400,6 +410,15 @@ Text replace now has a multiline option for regex. (https://github.com/justUmen/Bjo
Fix a lot of code everywhere, a little better logging system, etc...
WIP : Rewrite of all my ffmpeg nodes. (Still need improvements and fixes, will do that in 0.71?) Maybe don't use them yet...
- **0.71**: ❗Breaking changes for Global variable nodes. (Added a "filename" to the global variable system, which is a separate global variable file.) Bug fix for the Speech to Text node, 5 new nodes 129-133. Combine text limit raised to 100. Improved Save image in folder node.
- **0.71-0.75**: Many bug fixes. Civitai nodes are working on Windows. (Encoding and link problems are solved ? - at least on my machines...)
- **0.76**: Removed kokoro_onnx from requirements.txt due to a conflict with other nodes. (It must be installed manually if you want to use this node.)
New syntaxes for advanced text/line selector, ex: {left|right|middle|group=LMR}+{left|right|middle|group=LMR}+{left|right|middle|group=LMR} and {A(80%)|B(15%)|C(5%)}
2 new switch nodes : 🔛✨ Anything Switch On/Off (compatible with combine images) AND 🔛📝 Text Switch On/Off (compatible with combine texts)
2 new global Pick Me nodes, using an identifier instead of a chain : 🌎✒👉 Global Write Pick Me AND 🌎📥 Load Global Pick Me
3 random nodes : 🌎🎲 Global Seed Manager, 🎲 Random Integer, 🎲 Random Float (Each returns its value but also a TEXT version of it.) "Seed node" is more advanced.
1 new node to quickly select an element from a list : 📑👈 Select from List
1 new audio node : 🔊▶ Play Audio (Just plays an audio file; defaults to bell.m4a if none is provided.) Can take AUDIO format or audio_path.
❗Breaking changes. Large rewrite of all FFMPEG-related nodes, with options for video preview. (Still a few changes to make, coming in the next version.)
# 📝 Nodes descriptions
@@ -440,6 +459,10 @@ Usage example :
![variables](screenshots/variables.png)
❗ 0.76 - New syntax available :
Groups, with no duplicates, example : {left|right|middle|group=LMR}+{left|right|middle|group=LMR}+{left|right|middle|group=LMR}
Random based on percentage : {A(80%)|B(15%)|C(5%)}
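To illustrate, here is a minimal sketch of how these two syntaxes could be interpreted (hypothetical helper functions, not the node's actual implementation):

```python
import random
import re

def pick_weighted(expr: str) -> str:
    """Sample one option from a "{A(80%)|B(15%)|C(5%)}" style expression."""
    names, weights = [], []
    for opt in expr.strip("{}").split("|"):
        m = re.fullmatch(r"(.*)\((\d+(?:\.\d+)?)%\)", opt)
        if m:  # option with an explicit percentage
            names.append(m.group(1))
            weights.append(float(m.group(2)))
        else:  # no percentage -> equal weight
            names.append(opt)
            weights.append(1.0)
    return random.choices(names, weights=weights, k=1)[0]

def pick_group_chain(expr: str) -> list:
    """Resolve "{...|group=X}+{...|group=X}" parts, sampling each group
    without replacement so the same option is never picked twice."""
    pools = {}
    picks = []
    for part in expr.split("+"):
        group, options = None, []
        for token in part.strip("{}").split("|"):
            if token.startswith("group="):
                group = token[len("group="):]
            else:
                options.append(token)
        if group is not None:
            pool = pools.setdefault(group, list(options))
            choice = random.choice(pool)
            pool.remove(choice)  # no duplicates within the same group
        else:
            choice = random.choice(options)
        picks.append(choice)
    return picks
```

For example, `pick_group_chain` on the three-part `group=LMR` expression above returns `left`, `right` and `middle` in a random order, each exactly once.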
## 4 - 🔗 Combine Texts
**Description:**
@@ -455,7 +478,6 @@ You also have `control_after_generate` to manage the randomness.
![Random Text](screenshots/random_text.png)
## 6 - ♻ Loop
**Description:**
@@ -633,7 +655,7 @@ Create a ping-pong effect from a list of images (from a video) by reversing the
![Video Ping Pong](screenshots/video_pingpong.png)
## 21 - 📹 Images to Video
## 21 - 🖼➜📹 Images to Video (FFMPEG Save Video)
**Description:**
Combine a sequence of images into a video file.
@@ -1701,6 +1723,10 @@ So use that if you want to ignore a line.
![Line Selector](screenshots/line_selector.png)
❗ 0.76 - New syntax available :
Groups, with no duplicates, example : {left|right|middle|group=LMR}+{left|right|middle|group=LMR}+{left|right|middle|group=LMR}
Random based on percentage : {A(80%)|B(15%)|C(5%)}
#### 118 - 🔊 TTS Configuration ⚙
**Description:**
@@ -1726,6 +1752,9 @@ The workflow below is included : `workflows/HUNYUAN_basic_lora.json`) :
#### 120 - 📝➜🔊 Kokoro - Text to Speech
**Description:**
❗ 0.76 - Due to some compatibility issues with other custom nodes, you now need to install it manually if you want to use it : `pip install kokoro_onnx`
Another Text to Speech node, based on Kokoro : https://github.com/thewh1teagle/kokoro-onnx
Lightweight, much simpler, no configuration and fully integrated into Comfyui. (No external backend to run.)
@@ -1852,3 +1881,109 @@ Below is an example, you can see that at this size/resolution, 25% is almost as
Here is a zoom on the same image :
![four previews](screenshots/four_preview_zoom.png)
#### 134 - 🔊▶ Play Audio
**Description:**
This node simply plays a bell sound.
Useful, for example, if you have a workflow that takes a while and you want to be alerted every time it finishes.
![play_audio_1](screenshots/play_audio_1.png)
You can connect a custom audio file path to it :
![play_audio_2](screenshots/play_audio_2.png)
Or send it an AUDIO type format :
![play_audio_3](screenshots/play_audio_3.png)
#### 135 - 🔛✨ Anything Switch On/Off
**Description:**
Basic switch that will not send anything if toggled off.
Below is an example with the compatible "combine images" node; you can see that the top image was ignored.
![switch_anything](screenshots/switch_anything.png)
#### 136 - 🔛📝 Text Switch On/Off
**Description:**
Tired of disconnecting nodes you don't want for a moment ?
Maybe you are working on this input, but your workflow isn't ready for it yet ?
Well, now you can quickly enable / disable it. (If disabled, you will see it in red.)
![switch_text](screenshots/switch_text.png)
If connected to my combine text node, you can use a special option `ONLY_ME_combine_text` that tells combine text to write ONLY the selected node and ignore all the others. (It will appear here in blue.) :
![switch_text_onlyme](screenshots/switch_text_onlyme.png)
#### 137 - 🌎🎲 Global Seed Manager
**Description:**
Seed manager. It :
- Generates a random seed on every run.
- Returns the current seed as a STRING that you can use in other nodes expecting STRING format.
- Returns the value of the previously used seed.
- Saves all the seeds used inside a file. (which you can reset with a button.)
If you want to select a seed from this list, use node 138.
![global_seed_manager](screenshots/global_seed_manager.png)
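The behaviour described above can be sketched roughly like this (the file name, storage format and seed range are assumptions for illustration, not the node's actual code):

```python
import json
import random
from pathlib import Path

SEED_FILE = Path("bjornulf_seeds.json")  # assumed storage location/format

def next_seed():
    """Generate a new random seed, persist it, and return
    (seed, seed_as_text, previous_seed)."""
    history = json.loads(SEED_FILE.read_text()) if SEED_FILE.exists() else []
    previous = history[-1] if history else None
    seed = random.randint(0, 2**32 - 1)
    history.append(seed)
    SEED_FILE.write_text(json.dumps(history))
    return seed, str(seed), previous

def reset_seeds():
    """The node's reset button would simply clear the saved list."""
    SEED_FILE.write_text("[]")
```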
#### 138 - 📑👈 Select from List
**Description:**
Quickly select an element from a LIST. (a STRING with elements separated by `;` by default)
Example of LIST : a;b;c;d
Below is an example for quickly selecting the third seed used by Global Seed Manager :
![select_from_list](screenshots/select_from_list.png)
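A minimal sketch of the selection logic (the function name is hypothetical, and 1-based indexing is an assumption based on the screenshot):

```python
def select_from_list(list_text: str, index: int, delimiter: str = ";") -> str:
    """Pick the index-th element from a delimiter-separated STRING
    such as "a;b;c;d"."""
    elements = [e.strip() for e in list_text.split(delimiter)]
    return elements[index - 1]  # 1-based index, as shown in the UI
```

For example, `select_from_list("a;b;c;d", 3)` returns `"c"`.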
#### 139 - 🎲 Random Integer
**Description:**
Simply returns an INT between the two values provided.
![random_int](screenshots/random_int.png)
#### 140 - 🎲 Random Float
**Description:**
Simply returns a FLOAT between the two values provided.
![random_float](screenshots/random_float.png)
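Both random nodes boil down to something like the following sketch (hypothetical helpers, not the nodes' actual code; each returns the value plus its TEXT version, as noted in the changelog):

```python
import random

def random_int(minimum: int, maximum: int) -> tuple:
    """Return an INT between the two values, plus its TEXT version."""
    value = random.randint(minimum, maximum)
    return value, str(value)

def random_float(minimum: float, maximum: float) -> tuple:
    """Return a FLOAT between the two values, plus its TEXT version."""
    value = random.uniform(minimum, maximum)
    return value, str(value)
```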
#### 141 - 🌎✒👉 Global Write Pick Me
**Description:**
Do you enjoy Pick Me chain nodes ?
This one uses IDENTIFIERS (global_pickme_id) instead of connections.
Just pick a name as global_pickme_id, and nodes sharing the same global_pickme_id will automatically connect to each other.
Below is an example of write + load :
![global_write_pickme_load](screenshots/global_write_pickme_load.png)
#### 142 - 🌎📥 Load Global Pick Me
**Description:**
The node used to recover the values from Global Write Pick Me nodes.
It returns the value from the currently selected global_pickme_id.
This node also automatically returns a random value from the list associated with that global_pickme_id.
Below is an example of write + load :
![global_write_pickme_load](screenshots/global_write_pickme_load.png)
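The identifier mechanism can be sketched as a shared registry (in-memory here for illustration; the real nodes presumably persist values between runs):

```python
import random

# Hypothetical registry keyed by global_pickme_id.
_PICKME_REGISTRY = {}

def global_write_pickme(global_pickme_id: str, text: str) -> None:
    """Register a value under an identifier instead of a chain connection."""
    _PICKME_REGISTRY.setdefault(global_pickme_id, []).append(text)

def load_global_pickme(global_pickme_id: str) -> str:
    """Recover a random value written under the same global_pickme_id."""
    return random.choice(_PICKME_REGISTRY[global_pickme_id])
```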

View File

@@ -110,7 +110,23 @@ from .images_compare import FourImageViewer
from .write_pickme_chain import WriteTextPickMeChain
# from .todo import ToDoList
from .text_to_variable import TextToVariable
from .random_stuff import RandomIntNode, RandomFloatNode
from .global_seed_manager import GlobalSeedManager
from .play_sound import PlayAudio
from .switches import SwitchText, SwitchAnything
from .write_pickme_global import WriteTextPickMeGlobal, LoadTextPickMeGlobal
from .list_selector import ListSelector
NODE_CLASS_MAPPINGS = {
"Bjornulf_ListSelector": ListSelector,
"Bjornulf_WriteTextPickMeGlobal": WriteTextPickMeGlobal,
"Bjornulf_LoadTextPickMeGlobal": LoadTextPickMeGlobal,
"Bjornulf_PlayAudio": PlayAudio,
"Bjornulf_SwitchText": SwitchText,
"Bjornulf_SwitchAnything": SwitchAnything,
"Bjornulf_GlobalSeedManager": GlobalSeedManager,
"Bjornulf_RandomIntNode": RandomIntNode,
"Bjornulf_RandomFloatNode": RandomFloatNode,
"Bjornulf_TextToVariable": TextToVariable,
# "Bjornulf_ToDoList": ToDoList,
# "Bjornulf_WriteTextPickMe": WriteTextPickMe,
@@ -257,10 +273,20 @@ NODE_CLASS_MAPPINGS = {
}
NODE_DISPLAY_NAME_MAPPINGS = {
"Bjornulf_ListSelector": "📑👈 Select from List",
"Bjornulf_PlayAudio": "🔊▶ Play Audio",
"Bjornulf_SwitchText": "🔛📝 Text Switch On/Off",
"Bjornulf_SwitchAnything": "🔛✨ Anything Switch On/Off",
"Bjornulf_GlobalSeedManager": "🌎🎲 Global Seed Manager",
"Bjornulf_RandomIntNode": "🎲 Random Integer",
"Bjornulf_RandomFloatNode": "🎲 Random Float",
"Bjornulf_WriteTextPickMeGlobal": "🌎✒👉 Global Write Pick Me",
"Bjornulf_LoadTextPickMeGlobal": "🌎📥 Load Global Pick Me",
"Bjornulf_TextToVariable": "📌🅰️ Set Variable from Text",
# "Bjornulf_ToDoList": "ToDoList",
# "Bjornulf_WriteTextPickMe": "✒👉 Write Pick Me",
"Bjornulf_WriteTextPickMeChain": "✒👉 Write Pick Me Chain",
# "Bjornulf_PickByText": "✒👉 Pick Me by Text",
# "Bjornulf_PickMe": "✋ Recover Pick Me ! ✋",
"Bjornulf_FourImageViewer": "🖼👁 Preview 1-4 images (compare)",
"Bjornulf_PreviewFirstImage": "🖼👁 Preview (first) image",
@@ -393,7 +419,7 @@ NODE_DISPLAY_NAME_MAPPINGS = {
"Bjornulf_LoadTextFromPath": "📥 Load Text From Path",
"Bjornulf_LoadTextFromFolder": "📥 Load Text From Bjornulf Folder",
"Bjornulf_CombineTexts": "🔗 Combine (Texts)",
"Bjornulf_imagesToVideo": "📹 images to video (FFmpeg)",
"Bjornulf_imagesToVideo": "🖼➜📹 images to video (FFMPEG Save Video)",
"Bjornulf_VideoPingPong": "📹 video PingPong",
"Bjornulf_ollamaLoader": "🦙 Ollama (Description)",
"Bjornulf_FreeVRAM": "🧹 Free VRAM hack",

View File

@@ -1,8 +1,9 @@
import torch
import numpy as np
# import logging
class CombineImages:
SPECIAL_PREFIX = "ImSpEcIaL" # The special text prefix to look for
@classmethod
def INPUT_TYPES(cls):
return {
@@ -21,11 +22,33 @@ class CombineImages:
OUTPUT_NODE = True
CATEGORY = "Bjornulf"
def all_in_one_images(self, number_of_images, all_in_one, ** kwargs):
images = [kwargs[f"image_{i}"] for i in range(1, number_of_images + 1) if f"image_{i}" in kwargs]
def all_in_one_images(self, number_of_images, all_in_one, **kwargs):
# Retrieve all inputs based on number_of_images
inputs = [kwargs.get(f"image_{i}", None) for i in range(1, number_of_images + 1)]
# for i, img in enumerate(images):
# logging.info(f"Image {i+1} shape: {img.shape}, dtype: {img.dtype}, min: {img.min()}, max: {img.max()}")
# Check for special text input with "ImSpEcIaL" prefix
for i, inp in enumerate(inputs):
if isinstance(inp, str):
if inp.startswith(self.SPECIAL_PREFIX):
# Extract the text after the prefix (for logging or future use)
text_after_prefix = inp[len(self.SPECIAL_PREFIX):].lstrip()
# Return a dummy image as a placeholder
# Note: Adjust this to return an actual image if necessary
dummy_image = torch.zeros((1, 256, 256, 3), dtype=torch.float32)
return (dummy_image,)
else:
# Ignore non-special text inputs (e.g., empty strings or other text)
inputs[i] = None
# Filter out None values (ignored inputs) and non-image inputs
images = []
for inp in inputs:
if inp is not None and not isinstance(inp, str):
images.append(inp)
# Check if there are any valid images
if not images:
raise ValueError("No valid image inputs provided after filtering non-image inputs.")
if all_in_one:
# Check if all images have the same shape
@@ -70,7 +93,7 @@ class CombineImages:
return (all_in_oned,)
else:
# Return a single tuple containing all images (original behavior)
# Return a single tuple containing all valid images
return (images,)
@classmethod
@@ -78,7 +101,7 @@ class CombineImages:
return float("NaN")
@classmethod
def VALIDATE_INPUTS(cls, ** kwargs):
def VALIDATE_INPUTS(cls, **kwargs):
if kwargs['all_in_one']:
cls.OUTPUT_IS_LIST = (False,)
else:

View File

@@ -1,4 +1,6 @@
class CombineTexts:
SPECIAL_PREFIX = "ImSpEcIaL" # The special text (password) to look for
@classmethod
def INPUT_TYPES(cls):
return {
@@ -25,11 +27,22 @@ class CombineTexts:
else:
return str(item)
combined_text = self.get_delimiter(delimiter).join([
flatten(kwargs[f"text_{i}"])
# Check each input for the special prefix
for i in range(1, number_of_inputs + 1):
text_key = f"text_{i}"
if text_key in kwargs:
text = flatten(kwargs[text_key])
if text.startswith(self.SPECIAL_PREFIX):
# Output only the text after the prefix, stripping leading whitespace
return (text[len(self.SPECIAL_PREFIX):].lstrip(),)
# If no prefix is found, combine all non-empty inputs as usual
text_entries = [
flatten(kwargs.get(f"text_{i}", ""))
for i in range(1, number_of_inputs + 1)
if f"text_{i}" in kwargs
])
if f"text_{i}" in kwargs and flatten(kwargs.get(f"text_{i}", "")).strip() != ""
]
combined_text = self.get_delimiter(delimiter).join(text_entries)
return (combined_text,)
@staticmethod

View File

@@ -8,19 +8,25 @@ class FFmpegConfig:
return {
"required": {
"ffmpeg_path": ("STRING", {"default": "ffmpeg"}),
"video_codec": ([
"container_format": ([
"None",
"mp4",
"mkv",
"webm",
"mov",
"avi"
], {"default": "mkv"}),
"video_codec": ([
"Auto",
"copy",
"libx264 (H.264)",
"h264_nvenc (H.264 / NVIDIA GPU)",
"libx265 (H.265)",
"hevc_nvenc (H.265 / NVIDIA GPU)",
"libvpx-vp9 (WebM)",
"libaom-av1"
], {"default": "None"}),
"video_bitrate": ("STRING", {"default": "3045k"}),
"libaom-av1",
"av1_nvenc (av1 / NVIDIA GPU)",
], {"default": "libx265 (H.265)"}),
"preset": ([
"None",
"ultrafast",
@@ -32,8 +38,8 @@ class FFmpegConfig:
"slow",
"slower",
"veryslow"
], {"default": "medium"}),
], {"default": "veryslow"}),
"crf": ("INT", {"default": 10, "min": 1, "max": 63}),
"pixel_format": ([
"None",
"yuv420p",
@@ -43,18 +49,7 @@ class FFmpegConfig:
"rgb24",
"rgba",
"yuva420p"
], {"default": "yuv420p"}),
"container_format": ([
"None",
"mp4",
"mkv",
"webm",
"mov",
"avi"
], {"default": "mp4"}),
"crf": ("INT", {"default": 19, "min": 1, "max": 63}),
], {"default": "yuv444p10le"}),
"force_fps": ("FLOAT", {
"default": 0.0,
@@ -67,7 +62,7 @@ class FFmpegConfig:
"width": ("INT", {"default": 0, "min": 0, "max": 10000}),
"height": ("INT", {"default": 0, "min": 0, "max": 10000}),
"ignore_audio": ("BOOLEAN", {"default": False}),
"enable_change_audio": ("BOOLEAN", {"default": False}),
"audio_codec": ([
"None",
"copy",
@@ -77,9 +72,12 @@ class FFmpegConfig:
"libopus",
"none"
], {"default": "aac"}),
"enabled_audio_bitrate": ("BOOLEAN", {"default": False}),
"audio_bitrate": ("STRING", {"default": "192k"}),
"force_transparency": ("BOOLEAN", {
"enabled_static_video_bitrate": ("BOOLEAN", {"default": False}),
"video_bitrate": ("STRING", {"default": "3045k"}),
"force_transparency_webm": ("BOOLEAN", {
"default": False,
"description": "Force transparency in WebM output"
}),
@@ -114,10 +112,11 @@ class FFmpegConfig:
},
"video": {
"codec": config["video_codec"] or "None",
"bitrate": config["video_bitrate"],
"bitrate_mode": "static" if config["enabled_static_video_bitrate"] else "crf",
"bitrate": config["video_bitrate"] if config["enabled_static_video_bitrate"] else None,
"preset": config["preset"] or "None",
"pixel_format": config["pixel_format"] or "None",
"crf": config["crf"],
"crf": config["crf"] if not config["enabled_static_video_bitrate"] else None,
"resolution": (
{"width": config["width"], "height": config["height"]}
if (config["enabled_change_resolution"] and config["width"] > 0 and config["height"] > 0)
@@ -127,12 +126,12 @@ class FFmpegConfig:
"force_fps": config["force_fps"],
"enabled": config["force_fps"] > 0
},
"force_transparency": config["force_transparency"]
"force_transparency_webm": config["force_transparency_webm"]
},
"audio": {
"enabled": not config["ignore_audio"],
# "enabled": not config["enable_change_audio"], #DONT SEND THAT ANYMORE, IT IS DECIDED IF HAVE audio / audio_path, just used to set stuff below
"codec": config["audio_codec"] or "None",
"bitrate": config["audio_bitrate"]
"bitrate": config["audio_bitrate"],  # already gated by enabled_audio_bitrate in create_config
},
"output": {
"container_format": config["container_format"] or "None"
@@ -140,34 +139,35 @@ class FFmpegConfig:
}
return json.dumps(config_info, indent=2)
def create_config(self, ffmpeg_path, ignore_audio, video_codec, audio_codec,
video_bitrate, audio_bitrate, preset, pixel_format,
container_format, crf, force_fps, enabled_change_resolution,
width, height, force_transparency):
def create_config(self, ffmpeg_path, enable_change_audio, video_codec, audio_codec,
video_bitrate, audio_bitrate, preset, pixel_format,
container_format, crf, force_fps, enabled_change_resolution,
width, height, force_transparency_webm, enabled_static_video_bitrate, enabled_audio_bitrate):
config = {
"ffmpeg_path": ffmpeg_path,
"video_bitrate": video_bitrate,
"video_bitrate": video_bitrate if enabled_static_video_bitrate else None,
"preset": None if preset == "None" else preset,
"crf": crf,
"force_fps": force_fps,
"enabled_change_resolution": enabled_change_resolution,
"ignore_audio": ignore_audio,
"audio_bitrate": audio_bitrate,
# "enable_change_audio": enable_change_audio,
"audio_bitrate": audio_bitrate if enabled_audio_bitrate and enable_change_audio else None,
"width": width,
"height": height,
"video_codec": video_codec.split(" ")[0] if video_codec != "None" else None,
"video_codec": video_codec.split(" ")[0] if video_codec != "Auto" else None,
"pixel_format": None if pixel_format == "None" else pixel_format,
"container_format": None if container_format == "None" else container_format,
"audio_codec": None if audio_codec == "None" or ignore_audio else audio_codec,
"force_transparency": force_transparency
"audio_codec": None if audio_codec == "None" or not enable_change_audio else audio_codec,
"force_transparency_webm": force_transparency_webm,
"enabled_static_video_bitrate": enabled_static_video_bitrate,
"enabled_audio_bitrate": enabled_audio_bitrate
}
return (self.create_json_output(config),)
@classmethod
def IS_CHANGED(cls, ffmpeg_path, ignore_audio, video_codec, audio_codec,
video_bitrate, audio_bitrate, preset, pixel_format,
container_format, crf, force_fps, enabled_change_resolution,
width, height, force_transparency) -> float:
def IS_CHANGED(cls, ffmpeg_path, enable_change_audio, video_codec, audio_codec,
video_bitrate, audio_bitrate, preset, pixel_format,
container_format, crf, force_fps, enabled_change_resolution,
width, height, force_transparency_webm, enabled_static_video_bitrate, enabled_audio_bitrate) -> float:
return 0.0

View File

@@ -31,16 +31,16 @@ class ConvertVideo:
"""Provide basic default configuration."""
return {
'ffmpeg_path': 'ffmpeg', # Assuming ffmpeg is in PATH
'video_codec': 'copy',
'video_bitrate': '3045K',
'video_codec': 'libx264',
'video_bitrate': None,
'preset': 'medium',
'pixel_format': 'yuv420p',
'container_format': 'mp4',
'crf': 19,
'force_fps': 30,
'force_fps': 0,
'width': None,
'height': None,
'ignore_audio': False,
'ignore_audio': True,
'audio_codec': 'aac',
'audio_bitrate': '128k'
}

View File

@@ -6,6 +6,7 @@ import json
from PIL import Image
import soundfile as sf
import glob
import logging
class imagesToVideo:
@classmethod
@@ -19,12 +20,13 @@ class imagesToVideo:
},
"optional": {
"audio": ("AUDIO",),
"audio_path": ("STRING", {"forceInput": True}),
"FFMPEG_CONFIG_JSON": ("STRING", {"forceInput": True}),
},
}
RETURN_TYPES = ("STRING", "STRING",)
RETURN_NAMES = ("comment", "ffmpeg_command",)
RETURN_TYPES = ("STRING", "STRING","STRING",)
RETURN_NAMES = ("comment", "ffmpeg_command", "video_path",)
FUNCTION = "image_to_video"
OUTPUT_NODE = True
CATEGORY = "Bjornulf"
@@ -41,66 +43,87 @@ class imagesToVideo:
def run_ffmpeg_python(self, ffmpeg_cmd, output_file, ffmpeg_path):
try:
import ffmpeg
except ImportError as e:
print(f"Error importing ffmpeg-python: {e}")
except ImportError:
logging.error("ffmpeg-python library not installed")
return False, "ffmpeg-python library not installed"
try:
# Reconstruct the command using ffmpeg-python syntax
inputs = []
streams = []
audio_added = False
# Find frame rate
idx_fr = ffmpeg_cmd.index('-framerate')
fps = ffmpeg_cmd[idx_fr + 1]
# Parse command elements
i = 0
while i < len(ffmpeg_cmd):
if ffmpeg_cmd[i] == "-framerate":
framerate = float(ffmpeg_cmd[i+1])
i += 2
elif ffmpeg_cmd[i] == "-i":
if "frame_" in ffmpeg_cmd[i+1]: # Image sequence input
video_input = ffmpeg.input(ffmpeg_cmd[i+1], framerate=framerate)
streams.append(video_input.video)
else: # Audio input
audio_input = ffmpeg.input(ffmpeg_cmd[i+1])
streams.append(audio_input.audio)
audio_added = True
i += 2
elif ffmpeg_cmd[i] == "-vf":
filters = ffmpeg_cmd[i+1].split(',')
for f in filters:
if 'scale=' in f:
w, h = f.split('=')[1].split(':')
video_input = video_input.filter('scale', w, h)
i += 2
elif ffmpeg_cmd[i] in ["-c:v", "-preset", "-crf", "-cq", "-b:v", "-pix_fmt"]:
key = ffmpeg_cmd[i][1:]
value = ffmpeg_cmd[i+1]
if key == 'c:v':
streams[-1] = streams[-1].output(vcodec=value)
elif key == 'preset':
streams[-1] = streams[-1].output(preset=value)
elif key in ['crf', 'cq']:
streams[-1] = streams[-1].output(**{key: value})
elif key == 'b:v':
streams[-1] = streams[-1].output(**{'b:v': value})
elif key == 'pix_fmt':
streams[-1] = streams[-1].output(pix_fmt=value)
i += 2
else:
i += 1
# Find all input indices
idx_inputs = [i for i, x in enumerate(ffmpeg_cmd) if x == '-i']
if not idx_inputs:
return False, "Error: No input found"
# Handle output
output = ffmpeg.output(*streams, output_file)
# First input is the image sequence
image_sequence = ffmpeg_cmd[idx_inputs[0] + 1]
# Second input (if present) is the audio file
audio_file = ffmpeg_cmd[idx_inputs[1] + 1] if len(idx_inputs) > 1 else None
# Determine position after the last input
idx_after = idx_inputs[-1] + 2
# Check for video filter
filter_graph = None
output_options_start = idx_after
if idx_after < len(ffmpeg_cmd) - 1 and ffmpeg_cmd[idx_after] == '-vf':
filter_graph = ffmpeg_cmd[idx_after + 1]
output_options_start = idx_after + 2
# Extract output options (everything between last input/filter and output file)
output_options = ffmpeg_cmd[output_options_start:-1]
if len(output_options) % 2 != 0:
return False, "Error: Output options have odd number of elements"
# Convert output options to a dictionary, preserving colons
options = {}
for i in range(0, len(output_options), 2):
key = output_options[i].lstrip('-') # Remove '-' but keep ':'
value = output_options[i + 1]
options[key] = value
# Add filter graph to options if present
if filter_graph:
options['vf'] = filter_graph
# Create video input
video_input = ffmpeg.input(image_sequence, framerate=fps)
video_stream = video_input.video
# Create audio input if present
audio_stream = None
if audio_file:
audio_input = ffmpeg.input(audio_file)
audio_stream = audio_input.audio
# Construct output
if audio_stream:
output = ffmpeg.output(video_stream, audio_stream, output_file, **options)
else:
output = ffmpeg.output(video_stream, output_file, **options)
# Execute FFmpeg command
output.run(cmd=ffmpeg_path, overwrite_output=True)
logging.debug(f"FFmpeg-python executed successfully for {output_file}")
return True, "Success"
except ffmpeg.Error as e:
return False, f"FFmpeg error: {e.stderr.decode()}"
error_message = "Unknown FFmpeg error"
if hasattr(e, 'stderr') and e.stderr is not None:
try:
error_message = e.stderr.decode(errors='replace')
except Exception as decode_err:
error_message = f"Could not decode stderr: {decode_err}"
logging.error(f"FFmpeg-python failed: {error_message}\nCommand: {' '.join(ffmpeg_cmd)}")
return False, f"FFmpeg error: {error_message}\nCommand: {' '.join(ffmpeg_cmd)}"
except Exception as e:
logging.error(f"Unexpected error in FFmpeg-python: {str(e)}")
return False, f"Error: {str(e)}"
def image_to_video(self, images, fps, name_prefix, use_python_ffmpeg=False, audio=None, FFMPEG_CONFIG_JSON=None):
def image_to_video(self, images, fps, name_prefix, use_python_ffmpeg=False, audio=None, audio_path=None, FFMPEG_CONFIG_JSON=None):
ffmpeg_config = self.parse_ffmpeg_config(FFMPEG_CONFIG_JSON)
format = "mp4"
@@ -134,12 +157,24 @@ class imagesToVideo:
img = img.convert("RGBA")
img.save(os.path.join(temp_dir, f"frame_{i:04d}.png"))
# Handle audio from either AUDIO type or audio_path
temp_audio_file = None
if audio is not None and (not ffmpeg_config or not ffmpeg_config["audio"]["enabled"]):
temp_audio_file = os.path.join(temp_dir, "temp_audio.wav")
waveform = audio['waveform'].squeeze().numpy()
sample_rate = audio['sample_rate']
sf.write(temp_audio_file, waveform, sample_rate)
# Always use audio if either audio or audio_path is provided
# logging.info(f"audio : {audio}")
# logging.info(f"audio_path : {audio_path}")
audio_enabled = (audio is not None) or (audio_path is not None and os.path.exists(audio_path))
# logging.info(f"audio_enabled : {audio_enabled}")
if audio_enabled:
if audio is not None:
# Process AUDIO type input
temp_audio_file = os.path.join(temp_dir, "temp_audio.wav")
waveform = audio['waveform'].squeeze().numpy()
sample_rate = audio['sample_rate']
sf.write(temp_audio_file, waveform, sample_rate)
elif audio_path and os.path.exists(audio_path):
# Use provided audio path directly
temp_audio_file = audio_path
ffmpeg_path = "ffmpeg"
if ffmpeg_config and ffmpeg_config["ffmpeg"]["path"]:
@@ -152,10 +187,11 @@ class imagesToVideo:
"-i", os.path.join(temp_dir, "frame_%04d.png"),
]
# logging.info(f"temp_audio_file : {temp_audio_file}")
if temp_audio_file:
ffmpeg_cmd.extend(["-i", temp_audio_file])
if ffmpeg_config and format == "webm" and ffmpeg_config["video"]["force_transparency"]:
if ffmpeg_config and format == "webm" and ffmpeg_config["video"]["force_transparency_webm"]:
ffmpeg_cmd.extend([
"-vf", "scale=iw:ih,format=rgba,split[s0][s1];[s0]lutrgb=r=0:g=0:b=0:a=0[transparent];[transparent][s1]overlay"
])
@@ -181,7 +217,7 @@ class imagesToVideo:
if ffmpeg_config["video"]["resolution"]:
scale_filter = f"scale={ffmpeg_config['video']['resolution']['width']}:{ffmpeg_config['video']['resolution']['height']}"
if format == "webm" and ffmpeg_config["video"]["force_transparency"]:
if format == "webm" and ffmpeg_config["video"]["force_transparency_webm"]:
current_filter_idx = ffmpeg_cmd.index("-vf") + 1
current_filter = ffmpeg_cmd[current_filter_idx]
ffmpeg_cmd[current_filter_idx] = scale_filter + "," + current_filter
@@ -191,12 +227,25 @@ class imagesToVideo:
if ffmpeg_config["video"]["fps"]["enabled"]:
ffmpeg_cmd.extend(["-r", str(ffmpeg_config["video"]["fps"]["force_fps"])])
if not ffmpeg_config["audio"]["enabled"]:
ffmpeg_cmd.extend(["-an"])
elif ffmpeg_config["audio"]["codec"] != "None" and temp_audio_file:
ffmpeg_cmd.extend(["-c:a", ffmpeg_config["audio"]["codec"]])
if ffmpeg_config["audio"]["bitrate"]:
ffmpeg_cmd.extend(["-b:a", ffmpeg_config["audio"]["bitrate"]])
# if not ffmpeg_config["audio"]["enabled"]:
# ffmpeg_cmd.extend(["-an"])
# elif ffmpeg_config["audio"]["codec"] != "None" and temp_audio_file:
# ffmpeg_config["audio"]["codec"] != "None" and
#Need codec ????
if temp_audio_file:
# Check if we have ffmpeg_config with audio codec settings
if ffmpeg_config and "audio" in ffmpeg_config and ffmpeg_config["audio"]["codec"] != "None":
ffmpeg_cmd.extend(["-c:a", ffmpeg_config["audio"]["codec"]])
if "bitrate" in ffmpeg_config["audio"] and ffmpeg_config["audio"]["bitrate"]:
ffmpeg_cmd.extend(["-b:a", ffmpeg_config["audio"]["bitrate"]])
else:
# Use default audio codec based on format if no specific codec is set
if format == "mp4":
ffmpeg_cmd.extend(["-c:a", "aac"])
elif format == "webm":
ffmpeg_cmd.extend(["-c:a", "libvorbis"])
else:
ffmpeg_cmd.extend(["-an"]) # No audio
else:
if format == "mp4":
ffmpeg_cmd.extend([
@@ -210,8 +259,8 @@ class imagesToVideo:
elif format == "webm":
ffmpeg_cmd.extend([
"-c:v", "libvpx-vp9",
"-crf", "30",
"-b:v", "0",
"-crf", "19",
# "-b:v", "0",
"-pix_fmt", "yuva420p"
])
if temp_audio_file:
@@ -232,6 +281,465 @@ class imagesToVideo:
print(f"Error creating video: {e}")
comment = f"Error creating video: {e}"
finally:
print("Temporary files not removed for debugging purposes.")
# Only remove temp_audio_file if it was created here (not if it's an external path)
if temp_audio_file and audio_path != temp_audio_file:
print("Temporary files not removed for debugging purposes.")
return (comment,ffmpeg_cmd,)
# Generate configuration report
comment_lines = []
comment_lines.append("📽 Video Generation Configuration Report 📽\n")
# Quick format overview based on selected format
if format.lower() == "mp4":
comment_lines.append("MP4 FORMAT OVERVIEW:")
comment_lines.append("✅ Advantages: Universal compatibility, excellent streaming support")
comment_lines.append("❌ Drawbacks: No transparency support, less efficient than newer formats")
comment_lines.append("🏆 Best for: General distribution, web streaming, maximum device compatibility\n")
elif format.lower() == "webm":
comment_lines.append("WEBM FORMAT OVERVIEW:")
comment_lines.append("✅ Advantages: Better compression efficiency, transparency support, open format")
comment_lines.append("❌ Drawbacks: Limited compatibility on older devices/iOS, slower encoding")
comment_lines.append("🏆 Best for: Web delivery, animations with transparency, modern browsers\n")
elif format.lower() == "mov":
comment_lines.append("MOV FORMAT OVERVIEW:")
comment_lines.append("✅ Advantages: Professional codec support, good for editing workflows, Apple ecosystem")
comment_lines.append("❌ Drawbacks: Larger file sizes, less web-friendly")
comment_lines.append("🏆 Best for: Professional workflows, Mac/iOS delivery, intermediate editing files\n")
elif format.lower() == "mkv":
comment_lines.append("MKV FORMAT OVERVIEW:")
comment_lines.append("✅ Advantages: Superior flexibility, supports all codecs, multiple audio/subtitle tracks")
comment_lines.append("❌ Drawbacks: Not viewable in browsers, limited device support")
comment_lines.append("🏆 Best for: Archiving, local playback, advanced feature support\n")
elif format.lower() == "gif":
comment_lines.append("GIF FORMAT OVERVIEW:")
comment_lines.append("✅ Advantages: Universal compatibility, simple animation support")
comment_lines.append("❌ Drawbacks: Extremely inefficient compression, limited to 256 colors, no audio")
comment_lines.append("🏆 Best for: Simple animations, maximum compatibility\n")
# Basic parameters section
comment_lines.append("=== Core Parameters ===")
comment_lines.append(f"• FPS: {fps}")
if fps == 24:
comment_lines.append(" 24 fps is the cinema standard, offering a classic film look")
elif fps == 30:
comment_lines.append(" 30 fps provides smoother motion for general video content")
elif fps == 60:
comment_lines.append(" 60 fps delivers very smooth motion ideal for gaming/sports")
elif fps > 60:
comment_lines.append(" High frame rate (>60 fps) used for slow-motion effects")
comment_lines.append(f" 📊 Valid range: 1-120 fps (Higher values increase file size significantly)")
comment_lines.append(f"• Output Naming: '{name_prefix}'")
comment_lines.append(f" 📁 Full path: {output_file}")
comment_lines.append(f"• Execution Mode: {'Python ffmpeg' if use_python_ffmpeg else 'System FFmpeg'}")
if use_python_ffmpeg:
comment_lines.append(" Python FFmpeg: Integrated library approach with cleaner error handling")
comment_lines.append(" ⚠️ May have fewer codec options than system FFmpeg")
comment_lines.append(" 💡 For next improvement: Switch to system FFmpeg for access to more codecs and options")
else:
comment_lines.append(" System FFmpeg: Direct shell access with full codec/options support")
comment_lines.append(" ⚠️ Requires FFmpeg to be installed and in system PATH")
# Video configuration section
comment_lines.append("\n=== Video Encoding Configuration ===")
if ffmpeg_config:
comment_lines.append("🔧 Custom Configuration Active")
v = ffmpeg_config.get('video', {})
default_codec = "libx264"
# Codec information
if format.lower() == "webm":
default_codec = "libvpx-vp9"
codec = v.get('codec', default_codec)
comment_lines.append(f"• Codec: {codec}")
if "264" in codec:
comment_lines.append(" H.264/AVC: Universal compatibility, good balance of quality and size")
comment_lines.append(" ⭐ Quality: 8/10 | Compatibility: 10/10 | Encoding Speed: 7/10")
comment_lines.append(" 💡 For next improvement: Consider H.265/HEVC for 20-30% better compression at same quality")
elif "265" in codec or "hevc" in codec.lower():
comment_lines.append(" H.265/HEVC: Better compression than H.264, but slower encoding")
comment_lines.append(" ⭐ Quality: 9/10 | Compatibility: 6/10 | Encoding Speed: 5/10")
comment_lines.append(" ⚠️ Limited browser/device support, best for archiving")
comment_lines.append(" 💡 For next improvement: Try AV1 for even better compression")
elif "vp9" in codec.lower():
comment_lines.append(" VP9: Google's open codec with excellent quality-to-size ratio")
comment_lines.append(" ⭐ Quality: 9/10 | Compatibility: 7/10 | Encoding Speed: 4/10")
comment_lines.append(" ✅ Good support in modern browsers, especially Chrome")
comment_lines.append(" 💡 For next improvement: Consider AV1 for 20% better compression or faster encoding preset")
elif "av1" in codec.lower():
comment_lines.append(" AV1: Next-gen open codec with superior compression")
comment_lines.append(" ⭐ Quality: 10/10 | Compatibility: 5/10 | Encoding Speed: 2/10")
comment_lines.append(" ⚠️ Very slow encoding, requires modern hardware")
comment_lines.append(" 💡 For next improvement: Use SVT-AV1 encoder for faster processing")
# Quality parameters
crf = v.get('crf', 19)
comment_lines.append(f"• Quality: CRF {crf}")
if crf != 'N/A':
if 0 <= int(crf) <= 14:
comment_lines.append(" Very High Quality (CRF 0-14): Nearly lossless, very large files")
comment_lines.append(" ⭐ Visual Quality: 9-10/10 | File Size: Very Large")
elif 15 <= int(crf) <= 19:
comment_lines.append(" High Quality (CRF 15-19): Visually transparent, good for archiving")
comment_lines.append(" ⭐ Visual Quality: 8-9/10 | File Size: Large")
comment_lines.append(" 💡 For next improvement: Lower CRF to 17 for even better quality")
elif 20 <= int(crf) <= 24:
comment_lines.append(" Balanced Quality (CRF 20-24): Good for general distribution")
comment_lines.append(" ⭐ Visual Quality: 7-8/10 | File Size: Moderate")
comment_lines.append(" 💡 For next improvement: Lower CRF to 18 for higher quality or switch to H.265 at same CRF")
elif 25 <= int(crf) <= 30:
comment_lines.append(" Reduced Quality (CRF 25-30): Noticeable compression artifacts")
comment_lines.append(" ⭐ Visual Quality: 5-6/10 | File Size: Small")
comment_lines.append(" 💡 For next improvement: Lower CRF to 22 for better quality-size balance")
else:
comment_lines.append(" Low Quality (CRF 31+): Heavy compression, significant artifacts")
comment_lines.append(" ⭐ Visual Quality: <5/10 | File Size: Very Small")
comment_lines.append(" 💡 For next improvement: Use CRF 28 for better quality with minimal size increase")
comment_lines.append(" ⚠️ Cannot combine CRF with static bitrate settings")
# Encoding speed/preset
preset = v.get('preset', 'medium')
comment_lines.append(f"• Performance: {preset} preset")
if preset == 'ultrafast':
comment_lines.append(" Ultrafast: Maximum encoding speed, largest file size")
comment_lines.append(" ⏱️ Speed: 10/10 | Efficiency: 3/10 | Use case: Live streaming")
comment_lines.append(" 💡 For next improvement: Try 'superfast' for 30% better compression with minimal speed loss")
elif preset == 'superfast' or preset == 'veryfast':
comment_lines.append(" Very Fast: Quick encoding, larger file sizes")
comment_lines.append(" ⏱️ Speed: 8-9/10 | Efficiency: 4-5/10 | Use case: Quick exports")
comment_lines.append(" 💡 For next improvement: Try 'slower' preset for better compression")
elif preset == 'faster' or preset == 'fast':
comment_lines.append(" Fast: Good balance of speed and compression")
comment_lines.append(" ⏱️ Speed: 6-7/10 | Efficiency: 6-7/10 | Use case: General purpose")
comment_lines.append(" 💡 For next improvement: Consider 'veryslow' preset for better compression")
elif preset == 'medium':
comment_lines.append(" Medium: Default preset, balanced speed/compression")
comment_lines.append(" ⏱️ Speed: 5/10 | Efficiency: 7/10 | Use case: Standard encoding")
comment_lines.append(" 💡 For next improvement: Try 'veryslow' preset for 15-20% better compression")
elif preset == 'slow' or preset == 'slower':
comment_lines.append(" Slow: Better compression, slower encoding")
comment_lines.append(" ⏱️ Speed: 3-4/10 | Efficiency: 8-9/10 | Use case: Distribution/archiving")
comment_lines.append(" 💡 For next improvement: Try 'veryslow' for archival quality or reduce CRF slightly")
elif preset == 'veryslow' or preset == 'placebo':
comment_lines.append(" Very Slow: Maximum compression, extremely slow encoding")
comment_lines.append(" ⏱️ Speed: 1-2/10 | Efficiency: 9-10/10 | Use case: Final archiving")
# Bitrate information
bitrate = v.get('bitrate', 'Auto')
comment_lines.append(f"• Bitrate: {bitrate}")
if bitrate == 'Auto':
comment_lines.append(" Auto Bitrate: Determined by CRF value (recommended)")
else:
comment_lines.append(f" Fixed Bitrate: {bitrate}")
comment_lines.append(" ⚠️ Fixed bitrate overrides quality-based settings (CRF)")
# Rough bitrate quality indicators
if isinstance(bitrate, str):
bitrate_value = int(''.join(filter(str.isdigit, bitrate)))
if 'k' in bitrate.lower():
bitrate_value *= 1000
if bitrate_value < 1000000:
comment_lines.append(" ⭐ Quality: Low (< 1 Mbps) - Suitable for mobile/web previews")
comment_lines.append(" 💡 For next improvement: Increase to at least 2-3 Mbps for SD content")
elif 1000000 <= bitrate_value < 5000000:
comment_lines.append(" ⭐ Quality: Medium (1-5 Mbps) - Standard web video")
comment_lines.append(" 💡 For next improvement: Use 5-8 Mbps for higher quality HD content")
elif 5000000 <= bitrate_value < 10000000:
comment_lines.append(" ⭐ Quality: High (5-10 Mbps) - HD streaming")
comment_lines.append(" 💡 For next improvement: Consider two-pass encoding for consistent quality")
elif 10000000 <= bitrate_value < 20000000:
comment_lines.append(" ⭐ Quality: Very High (10-20 Mbps) - Full HD premium content")
comment_lines.append(" 💡 For next improvement: Switch to CRF-based encoding for more efficient sizing")
else:
comment_lines.append(" ⭐ Quality: Ultra High (20+ Mbps) - 4K/professional use")
# Pixel format details
pixel_format = v.get('pixel_format', 'yuv420p/yuva420p')
comment_lines.append(f"• Pixel Format: {pixel_format}")
if '420' in pixel_format:
comment_lines.append(" YUV 4:2:0: Standard chroma subsampling, best compatibility")
comment_lines.append(" ✅ Recommended for most content")
comment_lines.append(" 💡 For next improvement: Consider 4:2:2 for professional content or chroma keying")
elif '422' in pixel_format:
comment_lines.append(" YUV 4:2:2: Better color accuracy, larger files")
comment_lines.append(" ✅ Good for professional content/chroma keying")
comment_lines.append(" 💡 For next improvement: Use 4:4:4 for graphic design work or precision color grading")
elif '444' in pixel_format:
comment_lines.append(" YUV 4:4:4: Full chroma resolution, largest files")
comment_lines.append(" ✅ Best for high-end professional work")
if 'a' in pixel_format:
comment_lines.append(" Alpha channel support active (transparency)")
comment_lines.append(" ⚠️ Only supported in WebM (VP8/VP9) and some MOV containers")
comment_lines.append(" 💡 For next improvement: Use VP9 for better quality transparency")
# Resolution information
if v.get('resolution'):
width = v['resolution']['width']
height = v['resolution']['height']
comment_lines.append(f"• Resolution: {width}x{height}")
# Add resolution category information
if width >= 3840 or height >= 2160:
comment_lines.append(" 4K Ultra HD (3840×2160 or higher)")
comment_lines.append(" ⚠️ Very large files, may require powerful hardware to play")
comment_lines.append(" 💡 For next improvement: Try 1440p (2560×1440) for better balance of quality and size")
elif width >= 1920 or height >= 1080:
comment_lines.append(" Full HD (1920×1080)")
comment_lines.append(" ✅ Standard for high-quality video")
comment_lines.append(" 💡 For next improvement: Consider H.265/HEVC codec for better compression at this resolution")
elif width >= 1280 or height >= 720:
comment_lines.append(" HD (1280×720)")
comment_lines.append(" ✅ Good balance of quality and file size")
comment_lines.append(" 💡 For next improvement: Upgrade to 1080p for higher quality or lower CRF")
elif width >= 854 or height >= 480:
comment_lines.append(" SD (854×480 or similar)")
comment_lines.append(" ✅ Suitable for mobile devices or low bandwidth")
comment_lines.append(" 💡 For next improvement: Increase to 720p for better viewing experience")
else:
comment_lines.append(" Low Resolution (< 480p)")
comment_lines.append(" ⚠️ May appear pixelated on modern displays")
comment_lines.append(" 💡 For next improvement: Increase to at least 480p for acceptable quality")
# Container format detailed information
comment_lines.append(f"• Container: {format.upper()}")
else:
comment_lines.append(f"🔄 Default {format.upper()} Configuration:")
comment_lines.append("• Codec: " + ("libx264" if format == "mp4" else "libvpx-vp9"))
comment_lines.append("• CRF: 19")
comment_lines.append("• Preset: medium" + (" (slow for VP9)" if format == "webm" else ""))
comment_lines.append(" 💡 For next improvement: Lower CRF to 16-18 for better quality")
# Container format information
comment_lines.append("\n=== Container Format Details ===")
if format.lower() == "mp4":
comment_lines.append("• MP4 (.mp4)")
comment_lines.append(" Universal compatibility with nearly all devices and platforms")
comment_lines.append(" ✅ Excellent for web, mobile, and general distribution")
comment_lines.append(" ✅ Supports H.264, H.265, AAC audio")
comment_lines.append(" ❌ Limited support for transparency")
comment_lines.append(" ⭐ Compatibility: 10/10 | Flexibility: 7/10")
comment_lines.append(" 💡 For next improvement: Consider H.265 in MP4 for 30% smaller files")
elif format.lower() == "webm":
comment_lines.append("• WebM (.webm)")
comment_lines.append(" Open format optimized for web delivery")
comment_lines.append(" ✅ Excellent support in modern browsers")
comment_lines.append(" ✅ Native support for transparency (alpha channel)")
comment_lines.append(" ✅ Supports VP8/VP9 video, Vorbis/Opus audio")
comment_lines.append(" ❌ Limited support on older devices/iOS")
comment_lines.append(" ⭐ Compatibility: 7/10 | Web Performance: 9/10")
comment_lines.append(" 💡 For next improvement: Try AV1 in WebM for better quality/size ratio")
elif format.lower() == "mov":
comment_lines.append("• QuickTime (.mov)")
comment_lines.append(" Apple's native container format")
comment_lines.append(" ✅ Excellent for macOS/iOS ecosystem")
comment_lines.append(" ✅ Good support for professional codecs (ProRes, DNxHD)")
comment_lines.append(" ✅ Can support transparency")
comment_lines.append(" ❌ Less compatible outside Apple ecosystem")
comment_lines.append(" ⭐ Compatibility: 6/10 | Professional Use: 8/10")
comment_lines.append(" 💡 For next improvement: Use ProRes 422 for editing workflows or H.264 for delivery")
elif format.lower() == "mkv":
comment_lines.append("• Matroska (.mkv)")
comment_lines.append(" Highly flexible open container format")
comment_lines.append(" ✅ Supports virtually all codecs and features")
comment_lines.append(" ✅ Excellent for archiving and local playback")
comment_lines.append(" ✅ Supports multiple audio/subtitle tracks")
comment_lines.append(" ❌ Not natively supported in browsers or some devices")
comment_lines.append(" ⚠️ Cannot be viewed directly in web browsers")
comment_lines.append(" ⭐ Compatibility: 5/10 | Flexibility: 10/10")
comment_lines.append(" 💡 For next improvement: Use H.265 or AV1 codec inside MKV for best archival quality")
elif format.lower() == "gif":
comment_lines.append("• GIF (.gif)")
comment_lines.append(" Simple animated image format")
comment_lines.append(" ✅ Universal compatibility across all platforms")
comment_lines.append(" ✅ Supports basic transparency")
comment_lines.append(" ❌ Limited to 256 colors, no audio")
comment_lines.append(" ❌ Very inefficient compression (large files)")
comment_lines.append(" ⭐ Compatibility: 10/10 | Quality: 2/10")
comment_lines.append(" 💡 For next improvement: Use WebM or MP4 with autoplay for much better quality/size")
# Audio configuration section
comment_lines.append("\n=== Audio Configuration ===")
if audio_enabled:
comment_lines.append(f"• Audio Source: {'Direct input' if audio else 'External file'}")
if ffmpeg_config and ffmpeg_config.get('audio'):
a = ffmpeg_config['audio']
codec = a.get('codec', 'AAC/Vorbis')
comment_lines.append(f"• Codec: {codec}")
if 'aac' in codec.lower():
comment_lines.append(" AAC: High-quality lossy compression, excellent compatibility")
comment_lines.append(" ⭐ Quality: 8/10 | Compatibility: 10/10")
comment_lines.append(" 💡 For next improvement: Use higher bitrate (192+ kbps) or switch to Opus for better quality")
elif 'opus' in codec.lower():
comment_lines.append(" Opus: Modern codec with superior quality at low bitrates")
comment_lines.append(" ⭐ Quality: 9/10 | Compatibility: 7/10")
comment_lines.append(" 💡 For next improvement: Fine-tune VBR settings or increase bitrate by 10-20%")
elif 'vorbis' in codec.lower():
comment_lines.append(" Vorbis: Open audio codec, good quality-to-size ratio")
comment_lines.append(" ⭐ Quality: 7/10 | Compatibility: 8/10")
comment_lines.append(" 💡 For next improvement: Switch to Opus for better quality at same bitrate")
elif 'mp3' in codec.lower():
comment_lines.append(" MP3: Widely compatible but older codec technology")
comment_lines.append(" ⭐ Quality: 6/10 | Compatibility: 10/10")
comment_lines.append(" 💡 For next improvement: Switch to AAC for better quality at same bitrate")
elif 'flac' in codec.lower() or 'alac' in codec.lower():
comment_lines.append(" FLAC/ALAC: Lossless audio compression")
comment_lines.append(" ⭐ Quality: 10/10 | Compatibility: 6/10 | File Size: Large")
bitrate = a.get('bitrate', 'Default')
comment_lines.append(f"• Bitrate: {bitrate}")
# Audio bitrate quality indicators
if bitrate != 'Default':
if isinstance(bitrate, str):
bitrate_value = int(''.join(filter(str.isdigit, bitrate)))
if 'k' in bitrate.lower():
bitrate_value *= 1000
if bitrate_value < 96000:
comment_lines.append(" Low Bitrate (<96 kbps): Basic audio quality")
comment_lines.append(" ⭐ Quality: 4/10 | Use case: Voice/basic audio")
comment_lines.append(" 💡 For next improvement: Increase to at least 128 kbps for music or 96 kbps for speech")
elif 96000 <= bitrate_value < 128000:
comment_lines.append(" Standard Bitrate (96-128 kbps): Acceptable quality")
comment_lines.append(" ⭐ Quality: 6/10 | Use case: General purpose")
comment_lines.append(" 💡 For next improvement: Use 160-192 kbps for better music quality")
elif 128000 <= bitrate_value < 192000:
comment_lines.append(" Good Bitrate (128-192 kbps): Good quality")
comment_lines.append(" ⭐ Quality: 7/10 | Use case: Music/general media")
comment_lines.append(" 💡 For next improvement: Use 192-256 kbps for higher quality music")
elif 192000 <= bitrate_value < 256000:
comment_lines.append(" High Bitrate (192-256 kbps): Near transparent")
comment_lines.append(" ⭐ Quality: 8/10 | Use case: Music distribution")
comment_lines.append(" 💡 For next improvement: Consider VBR encoding for more efficient size/quality")
else:
comment_lines.append(" Very High Bitrate (256+ kbps): Transparent quality")
comment_lines.append(" ⭐ Quality: 9-10/10 | Use case: Archiving/professional")
else:
comment_lines.append(" Default bitrate selected based on codec")
comment_lines.append(" ✅ Typically 128-192 kbps for lossy formats")
comment_lines.append(" 💡 For next improvement: Specify 192-256 kbps for music content")
else:
comment_lines.append("• Codec: " + ("AAC" if format == "mp4" else "Vorbis"))
if format == "mp4":
comment_lines.append(" AAC: Standard audio codec for MP4 with excellent quality")
comment_lines.append(" ⭐ Quality: 8/10 at default bitrate (128-192 kbps)")
comment_lines.append(" 💡 For next improvement: Set explicit bitrate of 192 kbps for better quality")
else:
comment_lines.append(" Vorbis: Open audio codec with good compression efficiency")
comment_lines.append(" ⭐ Quality: 7/10 at default bitrate (128 kbps)")
comment_lines.append(" 💡 For next improvement: Switch to Opus codec for better quality at same bitrate")
else:
comment_lines.append("• Audio: Disabled")
comment_lines.append(" No audio track will be included in the output file")
comment_lines.append(" ✅ Results in smaller file size")
comment_lines.append(" 💡 For next improvement: Add audio if applicable to content")
# Advanced features with detailed explanations
comment_lines.append("\n=== Advanced Features ===")
# Transparency handling
transparency_enabled = format == 'webm' and ffmpeg_config and ffmpeg_config.get('video', {}).get('force_transparency_webm', False)
comment_lines.append(f"• Transparency Handling: {'Enabled' if transparency_enabled else 'Disabled'}")
if transparency_enabled:
comment_lines.append(" Alpha channel (transparency) will be preserved")
comment_lines.append(" ✅ WebM with VP9 codec provides excellent transparency support")
comment_lines.append(" ⚠️ Requires 'yuva420p' pixel format")
comment_lines.append(" ⚠️ Increases file size by approximately 33%")
comment_lines.append(" 💡 For next improvement: Ensure original content has high-quality alpha channel")
else:
if format == 'webm':
comment_lines.append(" Transparency can be enabled for WebM format")
comment_lines.append(" 💡 For next improvement: Set 'force_transparency_webm: True' in ffmpeg_config to enable")
elif format == 'mov':
comment_lines.append(" MOV format can support transparency with certain codecs")
comment_lines.append(" 💡 For next improvement: Use ProRes 4444 or PNG codec for transparency in MOV")
elif format == 'gif':
comment_lines.append(" GIF supports basic binary transparency (on/off)")
comment_lines.append(" 💡 For next improvement: Use WebM for smooth alpha transparency")
else:
comment_lines.append(" Selected format does not support transparency")
comment_lines.append(" 💡 For next improvement: Use WebM format for web-compatible transparency")
# Temp frames information
comment_lines.append(f"• Temp Frames: {len(images)} images @ {temp_dir}")
comment_lines.append(f" Processing {len(images)} individual frames")
if len(images) > 1000:
comment_lines.append(" ⚠️ Large frame count (>1000): May require significant processing time")
comment_lines.append(f" 💡 Estimated size: ~{len(images) * 0.2:.1f}MB temporary storage")
comment_lines.append(f" 🗂️ Temporary directory: {temp_dir}")
# Execution status
try:
# [Existing FFmpeg execution code...]
comment_lines.append("\n=== Execution Status ===")
comment_lines.append("✅ Success: Video created")
comment_lines.append(f" 📁 Output: {output_file}")
# comment_lines.append(f" 📊 Final file size: {"[Will be calculated after processing]"}")
# Add estimated output quality based on settings
if ffmpeg_config and ffmpeg_config.get('video'):
v = ffmpeg_config['video']
crf = v.get('crf')
preset = v.get('preset', 'medium')
codec = v.get('codec', '')
quality_score = 0
# Base quality on CRF
if crf is not None:
if 0 <= int(crf) <= 14:
quality_score = 9.5
elif 15 <= int(crf) <= 19:
quality_score = 8.5
elif 20 <= int(crf) <= 24:
quality_score = 7.5
elif 25 <= int(crf) <= 30:
quality_score = 5.5
else:
quality_score = 4.0
# Adjust for codec
if "265" in codec or "hevc" in codec.lower() or "av1" in codec.lower():
quality_score += 0.5
elif "vp9" in codec.lower():
quality_score += 0.3
elif "nvenc" in codec.lower():
quality_score -= 0.5
# Adjust for preset
if preset in ['veryslow', 'placebo']:
quality_score += 0.5
elif preset in ['ultrafast', 'superfast']:
quality_score -= 0.5
# Cap at 10
quality_score = min(10, quality_score)
comment_lines.append(f" ⭐ Estimated quality: {quality_score:.1f}/10")
else:
comment_lines.append(f" ⭐ For estimated quality x/10, connect FFMPEG Configuration node")
except Exception as e:
comment_lines.append("\n=== Execution Status ===")
comment_lines.append(f"❌ Error: {str(e)}")
comment_lines.append(" ⚠️ See log for detailed error information")
comment_lines.append(" 💡 Common issues:")
comment_lines.append(" - FFmpeg not installed or not in PATH")
comment_lines.append(" - Insufficient disk space")
comment_lines.append(" - Incompatible codec/container combination")
comment_lines.append(" - Invalid parameter values")
return ("\n".join(comment_lines), " ".join(ffmpeg_cmd), output_file)
# return (comment, " ".join(ffmpeg_cmd), output_file)
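The quality estimate in the execution-status block combines a CRF tier with codec and preset adjustments. A standalone sketch of that heuristic (hypothetical `estimate_quality` helper mirroring the tiers above, not part of the node):

```python
def estimate_quality(crf=None, codec="", preset="medium"):
    """Rough 0-10 quality estimate, same tiers as the report code above."""
    score = 0.0
    # Base score from the CRF tier
    if crf is not None:
        crf = int(crf)
        if 0 <= crf <= 14:
            score = 9.5
        elif 15 <= crf <= 19:
            score = 8.5
        elif 20 <= crf <= 24:
            score = 7.5
        elif 25 <= crf <= 30:
            score = 5.5
        else:
            score = 4.0
    # Codec adjustment: newer codecs compress better, hardware encoders worse
    codec_l = codec.lower()
    if "265" in codec or "hevc" in codec_l or "av1" in codec_l:
        score += 0.5
    elif "vp9" in codec_l:
        score += 0.3
    elif "nvenc" in codec_l:
        score -= 0.5
    # Preset adjustment: slower presets squeeze out more quality per bit
    if preset in ("veryslow", "placebo"):
        score += 0.5
    elif preset in ("ultrafast", "superfast"):
        score -= 0.5
    return min(10.0, score)
```

For example, `estimate_quality(19, "libx264", "medium")` lands in the CRF 15-19 tier with no adjustments, giving 8.5.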


@@ -15,12 +15,12 @@ class ImagesListToVideo:
return {
"required": {
"images": ("IMAGE",),
- "frames_per_second": ("FLOAT", {"default": 30, "min": 1, "max": 120, "step": 1}),
+ "fps": ("FLOAT", {"default": 25, "min": 1, "max": 120, "step": 0.01}),
},
"optional": {
- "audio_path": ("STRING", {"default": "", "multiline": False}),
+ "audio_path": ("STRING", {"forceInput": True}),
"audio": ("AUDIO", {"default": None}),
- "FFMPEG_CONFIG_JSON": ("STRING", {"default": None}),
+ "FFMPEG_CONFIG_JSON": ("STRING", {"forceInput": True}),
}
}
@@ -46,40 +46,59 @@ class ImagesListToVideo:
"-i", input_pattern,
"-c:v", "libx264",
"-pix_fmt", "yuv420p",
"-crf", "19"
"-crf", "19",
"-y"
]
cmd = [config["ffmpeg"]["path"]] if config["ffmpeg"]["path"] else ["ffmpeg"]
# Handle framerate - use force_fps if enabled
cmd.extend(["-framerate", str(config["video"]["fps"]["force_fps"] if config["video"]["fps"]["enabled"] else fps)])
cmd.extend(["-i", input_pattern])
# Video settings
if config["video"]["codec"] not in [None, "None", "copy"]:
cmd.extend(["-c:v", config["video"]["codec"]])
# Video codec settings
codec = config["video"]["codec"]
if codec not in [None, "None", "copy"]:
cmd.extend(["-c:v", codec])
if config["video"]["pixel_format"] not in [None, "None"]:
cmd.extend(["-pix_fmt", config["video"]["pixel_format"]])
# Pixel format
pixel_format = config["video"]["pixel_format"]
if pixel_format not in [None, "None"]:
cmd.extend(["-pix_fmt", pixel_format])
if config["video"]["preset"] not in [None, "None"]:
cmd.extend(["-preset", config["video"]["preset"]])
# Preset
preset = config["video"]["preset"]
if preset not in [None, "None"]:
cmd.extend(["-preset", preset])
if config["video"]["bitrate"] not in [None, "None", ""]:
# Handle bitrate mode - static or CRF
if config["video"]["bitrate_mode"] == "static" and config["video"]["bitrate"]:
cmd.extend(["-b:v", config["video"]["bitrate"]])
else:
crf_value = config["video"]["crf"]
if crf_value is not None:
cmd.extend(["-crf", str(crf_value)])
cmd.extend(["-crf", str(config["video"]["crf"])])
# Resolution change if enabled
if config["video"]["resolution"]:
width = config["video"]["resolution"]["width"]
height = config["video"]["resolution"]["height"]
if width > 0 and height > 0:
cmd.extend(["-s", f"{width}x{height}"])
if config["video"]["resolution"] and config["video"]["resolution"]["width"] > 0 and config["video"]["resolution"]["height"] > 0:
cmd.extend(["-s", f"{config['video']['resolution']['width']}x{config['video']['resolution']['height']}"])
# Special handling for WebM transparency if enabled
if config["output"]["container_format"] == "webm" and config["video"]["force_transparency_webm"]:
cmd.extend(["-auto-alt-ref", "0"])
return cmd
def images_to_video(self, images, frames_per_second=30, audio_path="", audio=None, ffmpeg_config=None):
config = self.parse_ffmpeg_config(ffmpeg_config)
def images_to_video(self, images, fps=30, audio_path="", audio=None, FFMPEG_CONFIG_JSON=None):
config = self.parse_ffmpeg_config(FFMPEG_CONFIG_JSON)
output_dir = os.path.join("Bjornulf", "images_to_video")
os.makedirs(output_dir, exist_ok=True)
# Determine output format
# Determine output format from config
output_format = "mp4"
if config and config["output"]["container_format"] not in [None, "None"]:
output_format = config["output"]["container_format"]
@@ -88,6 +107,7 @@ class ImagesListToVideo:
video_path = os.path.join(output_dir, video_filename)
with tempfile.TemporaryDirectory() as temp_dir:
# Save frames as images
for i, img in enumerate(images):
img_np = self.convert_to_numpy(img)
if img_np.shape[-1] != 3:
@@ -97,11 +117,13 @@ class ImagesListToVideo:
img_pil.save(img_path)
input_pattern = os.path.join(temp_dir, "frame_%05d.png")
ffmpeg_cmd = self.build_ffmpeg_command(input_pattern, video_path, frames_per_second, config)
ffmpeg_cmd = self.build_ffmpeg_command(input_pattern, video_path, fps, config)
# Handle audio
# Handle audio based on config
temp_audio_path = None
if not (config and config["audio"]["enabled"] == False):
audio_enabled = not (config and config["audio"]["enabled"] == False)
if audio_enabled:
if audio is not None and isinstance(audio, dict):
waveform = audio['waveform'].numpy().squeeze()
sample_rate = audio['sample_rate']
@@ -111,25 +133,28 @@ class ImagesListToVideo:
temp_audio_path = audio_path
if temp_audio_path:
# First create video without audio
temp_video = os.path.join(temp_dir, "temp_video.mp4")
temp_cmd = ffmpeg_cmd + ["-y", temp_video]
try:
subprocess.run(temp_cmd, check=True, capture_output=True, text=True)
# Now add audio
audio_cmd = [
config["ffmpeg"]["path"] if config else "ffmpeg",
config["ffmpeg"]["path"] if config and config["ffmpeg"]["path"] else "ffmpeg",
"-i", temp_video,
"-i", temp_audio_path,
"-c:v", "copy"
]
# Audio codec settings from config
if config and config["audio"]["codec"] not in [None, "None"]:
if config and config["audio"]["codec"] not in [None, "None", "copy"]:
audio_cmd.extend(["-c:a", config["audio"]["codec"]])
else:
audio_cmd.extend(["-c:a", "aac"])
# Audio bitrate
if config and config["audio"]["bitrate"]:
audio_cmd.extend(["-b:a", config["audio"]["bitrate"]])
@@ -140,6 +165,7 @@ class ImagesListToVideo:
print(f"FFmpeg error: {e.stderr}")
return ("",)
else:
# Just create video without audio
ffmpeg_cmd.append("-y")
ffmpeg_cmd.append(video_path)
try:
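The CRF-versus-static-bitrate branch in `build_ffmpeg_command` is the piece most prone to regressions. A minimal standalone sketch of just that branch (hypothetical `video_args` helper, assuming the same config shape as above):

```python
def video_args(config):
    """Return the video-encoding flags for an ffmpeg command.

    Mirrors the branch above: a 'static' bitrate_mode with a bitrate set
    wins over CRF; otherwise CRF is used (defaulting to 19).
    """
    args = []
    video = config["video"]
    if video.get("bitrate_mode") == "static" and video.get("bitrate"):
        args.extend(["-b:v", video["bitrate"]])
    else:
        args.extend(["-crf", str(video.get("crf", 19))])
    return args
```

Keeping the branch pure (dict in, flag list out) makes the CRF/bitrate precedence trivially testable without invoking ffmpeg.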

global_seed_manager.py Normal file

@@ -0,0 +1,65 @@
from server import PromptServer
import os
from aiohttp import web
import random
class GlobalSeedManager:
@classmethod
def INPUT_TYPES(cls):
return {"required": {"seed": ( "INT", {
"default": 0,
"min": 0,
"max": 4294967294
})}}
RETURN_TYPES = ("INT", "STRING", "INT", "STRING")
RETURN_NAMES = ("new_seed_INT", "new_seed_STRING", "previous_seed_INT", "all_seeds_LIST")
FUNCTION = "generate_seed"
CATEGORY = "Bjornulf"
def generate_seed(self, seed: int):
# Generate new random seed
new_seed = random.randint(0, 2**31 - 1)
seed_str = str(new_seed)
# Define file path
file_path = "Bjornulf/random_seeds.txt"
# Ensure directory exists
os.makedirs(os.path.dirname(file_path), exist_ok=True)
# Read previous seeds from file
try:
with open(file_path, 'r') as f:
existing_seeds = f.read().strip()
seed_list = existing_seeds.split(';') if existing_seeds else []
prev_seed = int(seed_list[-1]) if seed_list else -1
except (FileNotFoundError, ValueError, IndexError):
prev_seed = -1
seed_list = []
# Add new seed to list
seed_list.append(str(new_seed))
# Write all seeds to file
with open(file_path, 'w') as f:
f.write(';'.join(seed_list))
# Create string of all seeds
all_seeds_str = ';'.join(seed_list)
return new_seed, seed_str, prev_seed, all_seeds_str
# Define the API endpoint to delete the seeds file
@PromptServer.instance.routes.post("/delete_random_seeds")
async def delete_random_seeds(request):
file_path = "Bjornulf/random_seeds.txt"
try:
if os.path.exists(file_path):
os.remove(file_path)
return web.json_response({"success": True})
else:
return web.json_response({"success": False, "error": "File not found"})
except Exception as e:
return web.json_response({"success": False, "error": str(e)})
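The seed-history cycle above (read the `;`-separated file, take the last entry as the previous seed, append, rewrite) can be exercised in isolation. A sketch with a parameterized path, whereas the node hard-codes `Bjornulf/random_seeds.txt`:

```python
import os
import random

def record_seed(file_path, new_seed=None):
    """Append a seed to a ';'-separated history file.

    Returns (new_seed, previous_seed); previous_seed is -1 when the
    file does not exist yet, matching the node's behaviour above.
    """
    if new_seed is None:
        new_seed = random.randint(0, 2**31 - 1)
    os.makedirs(os.path.dirname(file_path) or ".", exist_ok=True)
    try:
        with open(file_path) as f:
            seed_list = [s for s in f.read().strip().split(';') if s]
        prev_seed = int(seed_list[-1]) if seed_list else -1
    except (FileNotFoundError, ValueError):
        prev_seed = -1
        seed_list = []
    seed_list.append(str(new_seed))
    with open(file_path, 'w') as f:
        f.write(';'.join(seed_list))
    return new_seed, prev_seed
```

Rewriting the whole file on every call keeps the format simple, at the cost of O(n) writes as the history grows.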


@@ -1,5 +1,8 @@
import os
import re
import random
import csv
from itertools import cycle
from aiohttp import web
from server import PromptServer
@@ -17,7 +20,7 @@ class LineSelector:
"LOOP": ("BOOLEAN", {"default": False}), # Return all lines as list
"LOOP_SEQUENTIAL": ("BOOLEAN", {"default": False}), # Sequential looping
"jump": ("INT", {"default": 1, "min": 1, "max": 100, "step": 1}), # Jump size for sequential loop
- "pick_random_variable": ("BOOLEAN", {"default": False}), # Enable random choice functionality
+ "pick_random_variable": ("BOOLEAN", {"default": True}), # Enable random choice functionality
},
"optional": {
"variables": ("STRING", {"multiline": True, "forceInput": True}),
@@ -35,6 +38,122 @@ class LineSelector:
FUNCTION = "select_line"
CATEGORY = "Bjornulf"
def find_variables(self, text):
stack = []
variables = []
for i, char in enumerate(text):
if char == '{':
stack.append((i, len(stack) + 1))
elif char == '}' and stack:
start, nesting = stack.pop()
variables.append({
'start': start,
'end': i + 1,
'nesting': nesting
})
variables.sort(key=lambda x: (-x['nesting'], -x['end']))
return variables
def parse_option(self, part):
if part.startswith('%csv='):
try:
filename = part.split('=', 1)[1].strip()
with open(filename, 'r') as f:
return [row[0] for row in csv.reader(f)]
except Exception as e:
return [f"[CSV Error: {str(e)}]"]
elif '(' in part and '%)' in part:
option, weight = part.rsplit('(', 1)
return (option.strip(), float(weight.split('%)')[0]))
return part.strip()
def process_content(self, content, seed):
random.seed(seed)
parts = []
weights = []
group_defined = False
group_name = None
for p in content.split('|'):
p = p.strip()
if p.startswith('group='):
group_name = p.split('=', 1)[1].strip()
group_defined = True
continue
parsed = self.parse_option(p)
if isinstance(parsed, list): # CSV data
parts.extend(parsed)
weights.extend([1]*len(parsed))
elif isinstance(parsed, tuple): # Weighted option
parts.append(parsed[0])
weights.append(parsed[1])
else:
parts.append(parsed)
weights.append(1)
if group_defined:
return {'type': 'group', 'name': group_name, 'options': parts}
if any(w != 1 for w in weights):
total = sum(weights)
if total == 0: weights = [1]*len(parts); total = len(parts)  # avoid division by zero when all weights are 0%
return random.choices(parts, weights=[w/total for w in weights])[0]
return random.choice(parts) if parts else ''
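Stripped of the CSV and group paths, the weighted-pick branch above reduces to this standalone sketch (`"a (30%)|b"` gives `a` weight 30 and `b` the default weight 1):

```python
import random

def pick_option(content, seed=0):
    # Split on '|', parse optional "(N%)" weights, then do a weighted pick,
    # mirroring parse_option/process_content above.
    random.seed(seed)
    parts, weights = [], []
    for p in content.split('|'):
        p = p.strip()
        if '(' in p and '%)' in p:
            option, weight = p.rsplit('(', 1)
            parts.append(option.strip())
            weights.append(float(weight.split('%)')[0]))
        else:
            parts.append(p)
            weights.append(1.0)
    total = sum(weights) or len(parts)  # avoid dividing by a zero total
    return random.choices(parts, weights=[w / total for w in weights])[0]
```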
def process_advanced_syntax(self, text, seed):
# Process nested variables
variables = self.find_variables(text)
substitutions = []
groups = {}
for var in variables:
start, end = var['start'], var['end']
content = text[start+1:end-1]
processed = self.process_content(content, seed)
if isinstance(processed, dict):
if processed['type'] == 'group':
group_name = processed['name']
if group_name not in groups:
groups[group_name] = []
groups[group_name].append({
'start': start,
'end': end,
'options': processed['options']
})
else:
substitutions.append({
'start': start,
'end': end,
'sub': processed
})
# Handle groups
for group_name, matches in groups.items():
if not matches or not matches[0]['options']:
continue
options = matches[0]['options']
permuted = random.sample(options, len(options))
perm_cycle = cycle(permuted)
for m in matches:
substitutions.append({
'start': m['start'],
'end': m['end'],
'sub': next(perm_cycle)
})
# Apply regular substitutions
substitutions.sort(key=lambda x: -x['start'])
result_text = text
for sub in substitutions:
result_text = result_text[:sub['start']] + sub['sub'] + result_text[sub['end']:]
return result_text
def select_line(self, text, line_number, RANDOM, LOOP, LOOP_SEQUENTIAL, jump, pick_random_variable, variables="", seed=-1):
# Parse variables
var_dict = {}
@@ -58,16 +177,15 @@ class LineSelector:
import os
# Set seed if provided
if seed >= 0:
random.seed(seed)
if seed < 0:
seed = random.randint(0, 0x7FFFFFFFFFFFFFFF)
# Process random choice functionality if enabled
# Process WriteTextAdvanced syntax if enabled
if pick_random_variable:
pattern = r'\{([^}]+)\}'
def replace_random(match):
return random.choice(match.group(1).split('|'))
lines = [re.sub(pattern, replace_random, line) for line in lines]
processed_lines = []
for line in lines:
processed_lines.append(self.process_advanced_syntax(line, seed))
lines = processed_lines
# Handle sequential looping
if LOOP_SEQUENTIAL:

list_selector.py Normal file

@@ -0,0 +1,57 @@
class ListSelector:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"input_LIST": ("STRING", {"forceInput": True}),
"selection": ("INT", {
"default": 1,
"min": 1,
"max": 9999 # Reasonable upper limit
}),
"delimiter": ("STRING", {
"default": ";",
"multiline": False
})
}
}
RETURN_TYPES = ("INT", "STRING", "INT")
RETURN_NAMES = ("selected_element_INT", "selected_element_STRING", "list_length_INT")
FUNCTION = "select_number"
CATEGORY = "Bjornulf"
def select_number(self, input_LIST: str, selection: int, delimiter: str):
# Split the string into a list using the delimiter
numbers = input_LIST.split(delimiter)
# Remove any empty strings and strip whitespace
numbers = [num.strip() for num in numbers if num.strip()]
# Get list length
list_length = len(numbers)
# Validate selection
if list_length == 0:
return 0, "0", 0
if selection > list_length:
selection = list_length # Clamp to max
elif selection < 1:
selection = 1 # Clamp to min
# Convert to 0-based index
index = selection - 1
# Get the selected number
selected = numbers[index]
# Convert to integer and string
try:
selected_int = int(selected)
selected_str = str(selected_int)
except ValueError:
# If conversion fails, return 0
selected_int = 0
selected_str = "0"
return selected_int, selected_str, list_length
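The 1-based selection and clamping logic above can be exercised as a plain function (inputs are illustrative):

```python
def select_from_list(input_list: str, selection: int, delimiter: str = ";"):
    # Split, strip, clamp the 1-based selection into range, and fall back
    # to 0 for non-integer elements, as ListSelector does.
    items = [s.strip() for s in input_list.split(delimiter) if s.strip()]
    if not items:
        return 0, "0", 0
    selection = min(max(selection, 1), len(items))
    try:
        value = int(items[selection - 1])
    except ValueError:
        value = 0
    return value, str(value), len(items)
```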

play_sound.py Normal file

@@ -0,0 +1,95 @@
import os
import io
import sys
from pydub import AudioSegment
from pydub.playback import play
import torch
import numpy as np
from scipy.io import wavfile
class Everything(str):
def __ne__(self, __value: object) -> bool:
return False
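The `Everything` helper is the common ComfyUI wildcard trick: connection types are validated with a `!=` comparison, so a `str` subclass whose `__ne__` is always `False` matches any type:

```python
class Everything(str):
    # "Everything('*') != X" is always False, so a socket declared with this
    # type accepts connections of any type under ComfyUI's validation.
    def __ne__(self, other: object) -> bool:
        return False

assert not (Everything("*") != "IMAGE")
assert not (Everything("*") != "STRING")
```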
class PlayAudio:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"anything": (Everything("*"), {"forceInput": True}),
},
"optional": {
"AUDIO": ("AUDIO", {"forceInput": True}),
"audio_path": ("STRING", {"default": ""})
}
}
RETURN_TYPES = (Everything("*"),)
RETURN_NAMES = ("anything",)
FUNCTION = "execute"
CATEGORY = "audio"
def play_audio(self, anything, AUDIO=None, audio_path=None):
# print(f"Debug - Entering play_audio: AUDIO={AUDIO}, audio_path={audio_path}")
try:
# Case 1: AUDIO input is provided
if AUDIO is not None:
# print(f"Debug - Processing AUDIO input: type={type(AUDIO)}")
if isinstance(AUDIO, dict) and 'waveform' in AUDIO:
waveform = AUDIO['waveform']
sample_rate = AUDIO.get('sample_rate', 44100)
if isinstance(waveform, torch.Tensor):
waveform = waveform.cpu().numpy()
if waveform.dtype.kind == 'f':
waveform = (waveform * 32767).astype(np.int16)
temp_wav = io.BytesIO()
wavfile.write(temp_wav, sample_rate, waveform)
temp_wav.seek(0)
sound = AudioSegment.from_wav(temp_wav)
elif isinstance(AUDIO, AudioSegment):
sound = AUDIO
else:
raise ValueError(f"Unsupported AUDIO type: {type(AUDIO)}")
# Case 2: audio_path is provided
elif audio_path and os.path.exists(audio_path):
# print(f"Debug - Loading audio from path: {audio_path}")
sound = AudioSegment.from_file(audio_path)
# Case 3: Default to bell sound
else:
audio_file = os.path.join(os.path.dirname(__file__), 'bell.m4a')
# print(f"Debug - Attempting default bell sound: {audio_file}")
if not os.path.exists(audio_file):
raise FileNotFoundError(f"Default bell.m4a not found at {audio_file}")
sound = AudioSegment.from_file(audio_file, format="m4a")
# Play the sound
# print("Debug - Playing sound...")
if sys.platform.startswith('win'):
wav_io = io.BytesIO()
sound.export(wav_io, format='wav')
wav_data = wav_io.getvalue()
import winsound
winsound.PlaySound(wav_data, winsound.SND_MEMORY)
else:
play(sound)
# print("Debug - Sound played successfully")
except Exception as e:
# print(f"Audio playback error: {e}")
import traceback
print(traceback.format_exc())
def execute(self, anything, AUDIO=None, audio_path=None):
# print(f"Debug - Execute: anything={anything}, AUDIO={AUDIO}, audio_path={audio_path}")
self.play_audio(anything, AUDIO, audio_path)
return (anything,)
@classmethod
def IS_CHANGED(cls, anything, AUDIO=None, audio_path=None, *args):
return float("NaN")
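Before playback, float waveforms are rescaled to 16-bit PCM; a simplified, list-based stand-in for the NumPy conversion above:

```python
def float_to_int16(samples):
    # Float samples in [-1.0, 1.0] scale to signed 16-bit integers
    # (the node does the same with numpy before writing a temp WAV).
    return [int(s * 32767) for s in samples]
```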


@@ -1,7 +1,7 @@
[project]
name = "bjornulf_custom_nodes"
description = "142 ComfyUI nodes : Display, manipulate, and edit text, images, videos, loras, generate characters and more. Manage looping operations, generate randomized content, use logical conditions and work with external AI tools, like Ollama or Text To Speech Kokoro, etc..."
version = "0.75"
version = "0.76"
license = {file = "LICENSE"}
[project.urls]

random_stuff.py Normal file

@@ -0,0 +1,37 @@
import random
from typing import Tuple
class RandomIntNode:
@classmethod
def INPUT_TYPES(cls):
return {"required": {"min_value": ("INT", {"default": 1}), "max_value": ("INT", {"default": 10}), "seed": ("INT", {
"default": 0,
"min": 0,
"max": 4294967294
})}}
RETURN_TYPES = ("INT", "STRING")
FUNCTION = "generate_random_int"
CATEGORY = "Bjornulf"
def generate_random_int(self, min_value: int, max_value: int, seed: int) -> Tuple[int, str]:
random.seed(seed)  # apply the seed input so the same seed reproduces the same value
rand_int = random.randint(min_value, max_value)
return rand_int, f"{rand_int}"
class RandomFloatNode:
@classmethod
def INPUT_TYPES(cls):
return {"required": {"min_value": ("FLOAT", {"default": 1.0}), "max_value": ("FLOAT", {"default": 10.0}), "seed": ("INT", {
"default": 0,
"min": 0,
"max": 4294967294
})}}
RETURN_TYPES = ("FLOAT", "STRING")
FUNCTION = "generate_random_float"
CATEGORY = "Bjornulf"
def generate_random_float(self, min_value: float, max_value: float, seed: int) -> Tuple[float, str]:
random.seed(seed)  # apply the seed input so the same seed reproduces the same value
rand_float = round(random.uniform(min_value, max_value), 2)
return rand_float, f"{rand_float:.2f}"


@@ -6,4 +6,5 @@ ffmpeg-python
civitai-py
fal_client
sounddevice
kokoro_onnx
# #24: kokoro_onnx is no longer installed by default (install it manually to use the Kokoro node)
# kokoro_onnx

screenshots/random_int.png Normal file

screenshots/switch_text.png Normal file


switches.py Normal file

@@ -0,0 +1,48 @@
class Everything(str):
def __ne__(self, __value: object) -> bool:
return False
class SwitchAnything:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"anything": (Everything("*"), {"forceInput": True}),
"switch": ("BOOLEAN", {"default": True})
}
}
RETURN_TYPES = (Everything("*"),)
RETURN_NAMES = ("anything",)
FUNCTION = "process_switch"
CATEGORY = "Bjornulf"
def process_switch(self, anything, switch):
if switch:
return (anything,)
else:
return ("",)
class SwitchText:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"STRING": ("STRING", {"forceInput": True}),
"switch": ("BOOLEAN", {"default": True}),
"ONLY_ME_combine_text": ("BOOLEAN", {"default": False}),
}
}
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("STRING",)
FUNCTION = "process_switch"
CATEGORY = "Bjornulf"
def process_switch(self, STRING, switch, ONLY_ME_combine_text):
if ONLY_ME_combine_text:
return (f"ImSpEcIaL{STRING}",)
if switch:
return (STRING,)
else:
return ("",)
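The switch semantics are easy to check in isolation (condensed copy of the class above, node metadata omitted):

```python
class SwitchText:
    def process_switch(self, STRING, switch, ONLY_ME_combine_text):
        # ONLY_ME_combine_text tags the string with a sentinel prefix;
        # otherwise the switch passes the text through or blanks it.
        if ONLY_ME_combine_text:
            return (f"ImSpEcIaL{STRING}",)
        return (STRING,) if switch else ("",)

node = SwitchText()
assert node.process_switch("hello", True, False) == ("hello",)
assert node.process_switch("hello", False, False) == ("",)
assert node.process_switch("hello", False, True) == ("ImSpEcIaLhello",)
```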


@@ -36,6 +36,10 @@ class TextReplace:
# Convert input to string
input_text = str(input_text)
# Early exit if search_text is empty to prevent hanging
if not search_text:
return (input_text,)
# Prepare regex flags
regex_flags = 0
if not case_sensitive:
@@ -118,6 +122,5 @@ class TextReplace:
return (input_text,)
@classmethod
def IS_CHANGED(cls, *args):
# Return float("NaN") to ensure the node always processes
def IS_CHANGED(cls, search_text, replace_text, input_text, replace_count, use_regex, case_sensitive, trim_whitespace, multiline_regex, *args):
return float("NaN")


@@ -14,6 +14,7 @@ class VideoPingPong:
}
RETURN_TYPES = ("IMAGE",)
RETURN_NAMES = ("IMAGES",)
FUNCTION = "pingpong_images"
CATEGORY = "Bjornulf"


@@ -1,6 +1,10 @@
import os
import shutil
# import logging
import time
import hashlib
from pathlib import Path
SUPPORTED_EXTENSIONS = {'.mp4', '.webm', '.ogg', '.mov', '.mkv'}
class VideoPreview:
@classmethod
@@ -8,6 +12,8 @@ class VideoPreview:
return {
"required": {
"video_path": ("STRING", {"forceInput": True}),
"autoplay": ("BOOLEAN", {"default": False}),
"mute": ("BOOLEAN", {"default": True}),
},
}
@@ -16,34 +22,47 @@ class VideoPreview:
CATEGORY = "Bjornulf"
OUTPUT_NODE = True
def preview_video(self, video_path):
if not video_path:
return {"ui": {"error": "No video path provided."}}
def preview_video(self, video_path, autoplay, mute):
try:
if not video_path or not isinstance(video_path, str):
raise ValueError("Invalid video path provided")
# Keep the "output" folder structure for copying
dest_dir = os.path.join("output", "Bjornulf", "preview_video")
os.makedirs(dest_dir, exist_ok=True)
video_path = os.path.abspath(video_path)
if not os.path.exists(video_path):
raise FileNotFoundError(f"Video file not found: {video_path}")
video_name = os.path.basename(video_path)
dest_path = os.path.join(dest_dir, video_name)
ext = Path(video_path).suffix.lower()
if ext not in SUPPORTED_EXTENSIONS:
raise ValueError(f"Unsupported video format: {ext}. Supported formats: {', '.join(SUPPORTED_EXTENSIONS)}")
if os.path.abspath(video_path) != os.path.abspath(dest_path):
shutil.copy2(video_path, dest_path)
print(f"Video copied successfully to {dest_path}")
else:
print(f"Video is already in the destination folder: {dest_path}")
dest_dir = os.path.join("output", "Bjornulf", "preview_video")
os.makedirs(dest_dir, exist_ok=True)
# Determine the video type based on file extension
_, file_extension = os.path.splitext(dest_path)
video_type = file_extension.lower()[1:] # Remove the dot from extension
file_hash = hashlib.md5(open(video_path,'rb').read()).hexdigest()[:8]
timestamp = int(time.time())
base_name = Path(video_path).stem
dest_name = f"{base_name}_{timestamp}_{file_hash}{ext}"
dest_path = os.path.join(dest_dir, dest_name)
# logging.info(f"Video type: {video_type}")
# logging.info(f"Video path: {dest_path}")
# logging.info(f"Destination directory: {dest_dir}")
# logging.info(f"Video name: {video_name}")
if not os.path.exists(dest_path):
shutil.copy2(video_path, dest_path)
# Create a new variable for the return value without "output"
return_dest_dir = os.path.join("Bjornulf", "preview_video")
return {
"ui": {
"video": [dest_name, "Bjornulf/preview_video"],
"metadata": {
"width": 512,
"height": 512,
"autoplay": autoplay,
"mute": mute
}
}
}
# Return the video name and the modified destination directory
return {"ui": {"video": [video_name, return_dest_dir]}}
except Exception as e:
return {
"ui": {
"error": str(e),
"video": None
}
}


@@ -0,0 +1,38 @@
import { app } from "../../../scripts/app.js";
import { api } from "../../../scripts/api.js";
app.registerExtension({
name: "Bjornulf.GlobalSeedManager",
async nodeCreated(node) {
// Ensure the button is added only to RandomSeedNode
if (node.comfyClass !== "Bjornulf_GlobalSeedManager") return;
// Add a button widget to the node
const deleteButton = node.addWidget(
"button", // Widget type
"Delete Seeds LIST", // Button label
null, // Initial value (not needed for buttons)
async () => {
// Ensure the node is still in the graph
if (!node.graph) return;
try {
// Make a POST request to the delete endpoint
const response = await fetch("/delete_random_seeds", {
method: "POST",
});
const data = await response.json();
// Show feedback to the user
if (data.success) {
app.ui.dialog.show("Seeds file deleted successfully.");
} else {
app.ui.dialog.show(`Failed to delete seeds file: ${data.error}`);
}
} catch (error) {
app.ui.dialog.show("An error occurred while deleting the seeds file.");
}
}
);
},
});


@@ -1,80 +1,44 @@
import { app } from "../../../scripts/app.js";
app.registerExtension({
name: "Bjornulf.ImageNoteLoadImage",
async nodeCreated(node) {
// Ensure the node is of the specific class
if (node.comfyClass !== "Bjornulf_ImageNoteLoadImage") return;
console.log("node created");
setTimeout(() => {
// Update widget positions
node.onResize(node.size);
// Store the initial node size
let prevSize = [...node.size];
let stableCount = 0;
const minStableFrames = 3; // Number of frames the size must remain stable
// Refresh all widgets
node.widgets.forEach(w => {
if (w.onShow?.(true)) {
w.onShow?.(false);
// Function to check if the node's size has stabilized
const checkSizeStable = () => {
if (node.size[0] === prevSize[0] && node.size[1] === prevSize[1]) {
stableCount++;
if (stableCount >= minStableFrames) {
// Size has been stable, simulate a resize to trigger layout update
const originalSize = [...node.size];
node.setSize([originalSize[0] + 1, originalSize[1]]); // Slightly increase width
setTimeout(() => {
node.setSize(originalSize); // Revert to original size
app.graph.setDirtyCanvas(true, true); // Trigger canvas redraw
}, 0);
} else {
// Size is stable but not for enough frames yet, check again
requestAnimationFrame(checkSizeStable);
}
});
} else {
// Size changed, reset counter and update prevSize
prevSize = [...node.size];
stableCount = 0;
requestAnimationFrame(checkSizeStable);
}
};
app.graph.setDirtyCanvas(true, true);
}, 500);
// Start checking after a short delay to allow node initialization
setTimeout(() => {
requestAnimationFrame(checkSizeStable);
}, 5000);
}
});
// app.registerExtension({
// name: "Bjornulf.ImageNote",
// async nodeCreated(node) {
// if (node.comfyClass !== "Bjornulf_ImageNote") return;
// // Add Save Note button
// node.addWidget("button", "Save Note", null, () => {
// const imagePathWidget = node.widgets.find(w => w.name === "image_path");
// const noteTextWidget = node.widgets.find(w => w.name === "note_text");
// if (!imagePathWidget?.value) {
// return;
// }
// fetch("/save_note", {
// method: "POST",
// body: JSON.stringify({
// image_path: imagePathWidget.value,
// note_text: noteTextWidget?.value || ""
// }),
// headers: { "Content-Type": "application/json" }
// })
// .then(response => response.json())
// .catch(error => {
// console.error("Error saving note:", error);
// });
// });
// // Add Load Note button
// node.addWidget("button", "Load Note", null, () => {
// const imagePathWidget = node.widgets.find(w => w.name === "image_path");
// if (!imagePathWidget?.value) {
// return;
// }
// fetch("/load_note", {
// method: "POST",
// body: JSON.stringify({ image_path: imagePathWidget.value }),
// headers: { "Content-Type": "application/json" }
// })
// .then(response => response.json())
// .then(data => {
// if (data.success) {
// const noteTextWidget = node.widgets.find(w => w.name === "note_text");
// if (noteTextWidget) {
// noteTextWidget.value = data.note_text;
// // Trigger widget changed event to update UI
// app.graph.setDirtyCanvas(true);
// }
// }
// })
// .catch(error => {
// console.error("Error loading note:", error);
// });
// });
// }
// });

web/js/switches.js Normal file

@@ -0,0 +1,90 @@
import { app } from "/scripts/app.js"; // Adjust path based on ComfyUI's structure
app.registerExtension({
name: "Bjornulf.SwitchText",
async nodeCreated(node) {
if (node.comfyClass === "Bjornulf_SwitchText") {
// Store original colors
const originalColor = ""; // Default ComfyUI node color
// Function to update color based on switch value
const updateNodeColor = () => {
const switchWidget = node.widgets?.find(w => w.name === "switch");
if (switchWidget) {
const isTrue = switchWidget.value;
node.color = isTrue ? originalColor : "#640000"; // Red when false
}
};
const updateNodeColorPickMe = () => {
const pickMeWidget = node.widgets?.find(w => w.name === "ONLY_ME_combine_text");
if (pickMeWidget) {
const isPicked = pickMeWidget.value;
node.color = isPicked ? "#000064" : originalColor; // Blue when picked
}
}
// Initial color update
updateNodeColor();
// Hook into widget value changes
const originalSetValue = node.widgets?.find(w => w.name === "switch")?.callback;
node.widgets.find(w => w.name === "switch").callback = function(value) {
updateNodeColor();
if (originalSetValue) {
originalSetValue.apply(this, arguments);
}
};
// Hook into widget value changes
const originalSetValuePickMe = node.widgets?.find(w => w.name === "ONLY_ME_combine_text")?.callback;
node.widgets.find(w => w.name === "ONLY_ME_combine_text").callback = function(value) {
updateNodeColorPickMe();
if (originalSetValuePickMe) {
originalSetValuePickMe.apply(this, arguments);
}
};
// Cleanup on node removal (optional but good practice)
node.onRemoved = function() {
node.color = originalColor;
};
}
}
});
app.registerExtension({
name: "Bjornulf.SwitchAnything",
async nodeCreated(node) {
if (node.comfyClass === "Bjornulf_SwitchAnything") {
// Store original colors
const originalColor = ""; // Default ComfyUI node color
// Function to update color based on switch value
const updateNodeColor = () => {
const switchWidget = node.widgets?.find(w => w.name === "switch");
if (switchWidget) {
const isTrue = switchWidget.value;
node.color = isTrue ? originalColor : "#640000"; // Red when false
}
};
// Initial color update
updateNodeColor();
// Hook into widget value changes
const originalSetValue = node.widgets?.find(w => w.name === "switch")?.callback;
node.widgets.find(w => w.name === "switch").callback = function(value) {
updateNodeColor();
if (originalSetValue) {
originalSetValue.apply(this, arguments);
}
};
// Cleanup on node removal (optional but good practice)
node.onRemoved = function() {
node.color = originalColor;
};
}
}
});


@@ -1,7 +1,7 @@
import { api } from '../../../scripts/api.js';
import { app } from "../../../scripts/app.js";
function displayVideoPreview(component, filename, category) {
function displayVideoPreview(component, filename, category, autoplay, mute) {
let videoWidget = component._videoWidget;
if (!videoWidget) {
// Create the widget if it doesn't exist
@@ -61,6 +61,10 @@ function displayVideoPreview(component, filename, category) {
"rand": Math.random().toString().slice(2, 12)
};
const urlParams = new URLSearchParams(params);
if(mute) videoWidget.videoElement.muted = true;
else videoWidget.videoElement.muted = false;
if(autoplay) videoWidget.videoElement.autoplay = !videoWidget.value.paused && !videoWidget.value.hidden;
else videoWidget.videoElement.autoplay = false;
videoWidget.videoElement.src = `http://localhost:8188/api/view?${urlParams.toString()}`;
adjustSize(component); // Adjust the component size
@@ -76,7 +80,9 @@ app.registerExtension({
async beforeRegisterNodeDef(nodeType, nodeData, appInstance) {
if (nodeData?.name == "Bjornulf_VideoPreview") {
nodeType.prototype.onExecuted = function (data) {
displayVideoPreview(this, data.video[0], data.video[1]);
const autoplay = this.widgets.find(w => w.name === "autoplay")?.value ?? false;
const mute = this.widgets.find(w => w.name === "mute")?.value ?? true;
displayVideoPreview(this, data.video[0], data.video[1], autoplay, mute);
};
}
}


@@ -2,56 +2,73 @@ import { app } from "../../../scripts/app.js";
// Helper function to clean up widget DOM elements
function cleanupWidgetDOM(widget) {
if (widget && widget.inputEl) {
if (widget.inputEl.parentElement) {
widget.inputEl.parentElement.remove();
} else {
widget.inputEl.remove();
}
if (widget && widget.inputEl) {
if (widget.inputEl.parentElement) {
widget.inputEl.parentElement.remove();
} else {
widget.inputEl.remove();
}
}
}
function getChainNodes(startNode) {
const nodes = [];
let currentNode = startNode;
const nodes = [];
let currentNode = startNode;
const visitedUpstream = new Set();
// First traverse upstream to find the root node
while (true) {
const input = currentNode.inputs.find(i => i.name === "pickme_chain");
if (input?.link) {
const link = app.graph.links[input.link];
const prevNode = app.graph.getNodeById(link.origin_id);
if (prevNode?.comfyClass === "Bjornulf_WriteTextPickMeChain") {
currentNode = prevNode;
} else {
break;
}
} else {
break;
}
// First traverse upstream to find the root node
while (true) {
if (visitedUpstream.has(currentNode.id)) {
throw new Error(
"Infinite loop detected! Nodes form a circular chain through 'pickme_chain' inputs"
);
}
visitedUpstream.add(currentNode.id);
// Now traverse downstream from root
while (currentNode) {
nodes.push(currentNode);
const output = currentNode.outputs.find(o => o.name === "chain_text");
if (output?.links) {
let nextNode = null;
for (const linkId of output.links) {
const link = app.graph.links[linkId];
const targetNode = app.graph.getNodeById(link.target_id);
if (targetNode?.comfyClass === "Bjornulf_WriteTextPickMeChain") {
nextNode = targetNode;
break;
}
}
currentNode = nextNode;
} else {
break;
}
const input = currentNode.inputs.find((i) => i.name === "pickme_chain");
if (input?.link) {
const link = app.graph.links[input.link];
const prevNode = app.graph.getNodeById(link.origin_id);
if (prevNode?.comfyClass === "Bjornulf_WriteTextPickMeChain") {
currentNode = prevNode;
} else {
break;
}
} else {
break;
}
}
return nodes;
// Now traverse downstream from root
const visitedDownstream = new Set();
while (currentNode) {
if (visitedDownstream.has(currentNode.id)) {
app.ui.dialog.show("Infinite loop detected! Nodes form a circular chain through 'chain_text' outputs");
throw new Error(
"Infinite loop detected! Nodes form a circular chain through 'chain_text' outputs"
);
}
visitedDownstream.add(currentNode.id);
nodes.push(currentNode);
const output = currentNode.outputs.find((o) => o.name === "chain_text");
if (output?.links) {
let nextNode = null;
for (const linkId of output.links) {
const link = app.graph.links[linkId];
const targetNode = app.graph.getNodeById(link.target_id);
if (targetNode?.comfyClass === "Bjornulf_WriteTextPickMeChain") {
nextNode = targetNode;
break;
}
}
currentNode = nextNode;
} else {
break;
}
}
return nodes;
}
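The visited-set guards above implement standard cycle detection while walking linked nodes; the same idea in Python:

```python
def walk_chain(start, next_of):
    # Follow next-links from start, collecting nodes in order and raising
    # instead of spinning forever if the links form a cycle.
    seen, order, node = set(), [], start
    while node is not None:
        if node in seen:
            raise RuntimeError("Infinite loop detected in chain")
        seen.add(node)
        order.append(node)
        node = next_of.get(node)
    return order
```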
function pickNode(node) {
@@ -67,8 +84,6 @@ function pickNode(node) {
app.graph.setDirtyCanvas(true, true);
}
// Rest of the code remains the same as previous working version
function findAndPickNext(removedNode) {
const chainNodes = getChainNodes(removedNode);
const remaining = chainNodes.filter(n => n.id !== removedNode.id);
@@ -76,97 +91,100 @@ function findAndPickNext(removedNode) {
}
app.registerExtension({
name: "Bjornulf.WriteTextPickMeChain",
async nodeCreated(node) {
if (node.comfyClass === "Bjornulf_WriteTextPickMeChain") {
// Store original onRemoved if it exists
const origOnRemoved = node.onRemoved;
// Create widgets in specific order to maintain layout
// const textWidget = node.widgets.find(w => w.name === "text");
// if (textWidget) {
// textWidget.computeSize = function() {
// return [node.size[0] - 20, 150];
// };
// }
name: "Bjornulf.WriteTextPickMeChain",
async nodeCreated(node) {
if (node.comfyClass === "Bjornulf_WriteTextPickMeChain") {
// Store original onRemoved if it exists
const origOnRemoved = node.onRemoved;
// Create widgets in specific order to maintain layout
// const textWidget = node.widgets.find(w => w.name === "text");
// if (textWidget) {
// textWidget.computeSize = function() {
// return [node.size[0] - 20, 150];
// };
// }
// Handle picked widget
let pickedWidget = node.widgets.find(w => w.name === "picked");
if (!pickedWidget) {
pickedWidget = node.addWidget("BOOLEAN", "picked", false, null);
}
pickedWidget.visible = false;
// Handle picked widget
let pickedWidget = node.widgets.find((w) => w.name === "picked");
if (!pickedWidget) {
pickedWidget = node.addWidget("BOOLEAN", "picked", false, null);
}
pickedWidget.visible = false;
// Add button after textarea
const buttonWidget = node.addWidget("button", "PICK ME", null, () => pickNode(node));
buttonWidget.computeSize = function() {
return [node.size[0] - 20, 30];
};
// Add button after textarea
const buttonWidget = node.addWidget("button", "PICK ME", null, () =>
pickNode(node)
);
buttonWidget.computeSize = function () {
return [node.size[0] - 20, 30];
};
// Set initial node size
// node.size = [node.size[0], 200];
// node.size = [200, 200];
setTimeout(() => {
// Update widget positions
node.onResize(node.size);
// Set initial node size
// node.size = [node.size[0], 200];
// node.size = [200, 200];
setTimeout(() => {
// Update widget positions
node.onResize(node.size);
// Refresh all widgets
node.widgets.forEach(w => {
if (w.onShow?.(true)) {
w.onShow?.(false);
}
});
// Refresh all widgets
node.widgets.forEach((w) => {
if (w.onShow?.(true)) {
w.onShow?.(false);
}
});
app.graph.setDirtyCanvas(true, true);
}, 10);
app.graph.setDirtyCanvas(true, true);
}, 10);
// Enhanced cleanup on node removal
node.onRemoved = function() {
// Call original onRemoved if it exists
if (origOnRemoved) {
origOnRemoved.call(this);
}
// Handle chain updates
if (this.widgets.find(w => w.name === "picked")?.value) {
findAndPickNext(this);
}
// Clean up all widgets
for (const widget of this.widgets) {
cleanupWidgetDOM(widget);
}
// Force DOM cleanup and canvas update
if (this.domElement) {
this.domElement.remove();
}
app.graph.setDirtyCanvas(true, true);
};
const updateColors = () => {
const picked = node.widgets.find(w => w.name === "picked")?.value;
node.color = picked ? "#006400" : "";
};
const origSetNodeState = node.setNodeState;
node.setNodeState = function(state) {
origSetNodeState?.apply(this, arguments);
if (state.picked !== undefined) {
const widget = this.widgets.find(w => w.name === "picked");
if (widget) widget.value = state.picked;
}
updateColors();
};
const origGetNodeState = node.getNodeState;
node.getNodeState = function() {
const state = origGetNodeState?.apply(this, arguments) || {};
state.picked = this.widgets.find(w => w.name === "picked")?.value ?? false;
return state;
};
// Force initial layout update
app.graph.setDirtyCanvas(true, true);
// Enhanced cleanup on node removal
node.onRemoved = function () {
// Call original onRemoved if it exists
if (origOnRemoved) {
origOnRemoved.call(this);
}
// Handle chain updates
if (this.widgets.find((w) => w.name === "picked")?.value) {
findAndPickNext(this);
}
// Clean up all widgets
for (const widget of this.widgets) {
cleanupWidgetDOM(widget);
}
// Force DOM cleanup and canvas update
if (this.domElement) {
this.domElement.remove();
}
app.graph.setDirtyCanvas(true, true);
};
const updateColors = () => {
const picked = node.widgets.find((w) => w.name === "picked")?.value;
node.color = picked ? "#006400" : "";
};
const origSetNodeState = node.setNodeState;
node.setNodeState = function (state) {
origSetNodeState?.apply(this, arguments);
if (state.picked !== undefined) {
const widget = this.widgets.find((w) => w.name === "picked");
if (widget) widget.value = state.picked;
}
updateColors();
};
const origGetNodeState = node.getNodeState;
node.getNodeState = function () {
const state = origGetNodeState?.apply(this, arguments) || {};
state.picked =
this.widgets.find((w) => w.name === "picked")?.value ?? false;
return state;
};
// Force initial layout update
app.graph.setDirtyCanvas(true, true);
}
},
});


@@ -0,0 +1,85 @@
import { app } from "../../../scripts/app.js";
// Function to pick a node within its global_pickme_id group
function pickGlobalNode(node) {
const global_pickme_idWidget = node.widgets.find(w => w.name === "global_pickme_id");
const global_pickme_id = global_pickme_idWidget ? global_pickme_idWidget.value : "default";
// Iterate through all nodes in the graph
app.graph._nodes.forEach(n => {
if (n.comfyClass === "Bjornulf_WriteTextPickMeGlobal") {
const nglobal_pickme_idWidget = n.widgets.find(w => w.name === "global_pickme_id");
const nglobal_pickme_id = nglobal_pickme_idWidget ? nglobal_pickme_idWidget.value : "default";
if (nglobal_pickme_id === global_pickme_id) { // Only affect nodes in the same group
const pickedWidget = n.widgets.find(w => w.name === "picked");
if (pickedWidget) {
pickedWidget.value = (n === node); // Pick this node, unpick others in group
}
n.color = (n === node) ? "#006400" : ""; // Green for picked, default otherwise
}
}
});
app.graph.setDirtyCanvas(true, true); // Refresh the canvas
}
app.registerExtension({
name: "Bjornulf.WriteTextPickMeGlobal",
async nodeCreated(node) {
if (node.comfyClass === "Bjornulf_WriteTextPickMeGlobal") {
// Hide the picked widget from the UI
const pickedWidget = node.widgets.find(w => w.name === "picked");
if (pickedWidget && pickedWidget.inputEl) {
pickedWidget.inputEl.style.display = "none";
}
// Add "PICK ME" button
const buttonWidget = node.addWidget("button", "PICK ME", null, () => {
pickGlobalNode(node); // Handle picking within the group
});
buttonWidget.computeSize = function () {
return [node.size[0] - 20, 30]; // Size the button
};
// Function to update node color based on picked state
const updateColors = () => {
const picked = node.widgets.find(w => w.name === "picked")?.value;
node.color = picked ? "#006400" : ""; // Green if picked
};
updateColors(); // Set initial color
// Handle global_pickme_id changes
const global_pickme_idWidget = node.widgets.find(w => w.name === "global_pickme_id");
if (global_pickme_idWidget) {
// ComfyUI/LiteGraph widgets fire "callback", not "onChange"; wrap it so any existing behaviour is preserved
const origCallback = global_pickme_idWidget.callback;
global_pickme_idWidget.callback = function (...args) {
origCallback?.apply(this, args);
const pickedWidget = node.widgets.find(w => w.name === "picked");
if (pickedWidget && pickedWidget.value) {
pickedWidget.value = false; // Unpick if global_pickme_id changes
node.color = "";
app.graph.setDirtyCanvas(true, true);
}
};
}
// State management for saving/loading
const origSetNodeState = node.setNodeState;
node.setNodeState = function (state) {
origSetNodeState?.apply(this, arguments);
if (state.picked !== undefined) {
const widget = this.widgets.find(w => w.name === "picked");
if (widget) widget.value = state.picked;
}
updateColors();
};
const origGetNodeState = node.getNodeState;
node.getNodeState = function () {
const state = origGetNodeState?.apply(this, arguments) || {};
state.picked = this.widgets.find(w => w.name === "picked")?.value ?? false;
return state;
};
// Refresh canvas on load
app.graph.setDirtyCanvas(true, true);
}
}
});

write_pickme_global.py Normal file

@@ -0,0 +1,59 @@
class WriteTextPickMeGlobal:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"global_pickme_id": ("STRING", {"default": "default"}), # Custom text global_pickme_id
"picked": ("BOOLEAN", {"default": False}), # Picked state
"text": ("STRING", {"multiline": True, "lines": 10}) # Text input
},
}
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("text",)
FUNCTION = "write_text"
OUTPUT_NODE = True
CATEGORY = "Bjornulf"
def write_text(self, global_pickme_id, picked, text, **kwargs):
return (text,) # Simply returns the input text
import random
class LoadTextPickMeGlobal:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"global_pickme_id": ("STRING", {"default": "default"})
},
"hidden": {"prompt": "PROMPT"} # For accessing the graph state
}
RETURN_TYPES = ("STRING", "STRING", "STRING")
RETURN_NAMES = ("picked_text", "picked_text_as_variable", "random")
FUNCTION = "load_text"
CATEGORY = "Bjornulf"
def load_text(self, global_pickme_id, prompt=None):
texts = []
picked_text = ""
if prompt:
for node_id, node_data in prompt.items():
if node_data.get("class_type") == "Bjornulf_WriteTextPickMeGlobal":
inputs = node_data.get("inputs", {})
node_global_pickme_id = inputs.get("global_pickme_id", "default")
if node_global_pickme_id == global_pickme_id:
text = inputs.get("text", "")
texts.append(text)
if inputs.get("picked", False):
picked_text = text
# Note: we don't break here, so we collect every text in the group
# Select random text
random_text = random.choice(texts) if texts else ""
# Return all three outputs
return (picked_text, f"global_pickme_{global_pickme_id} = {picked_text}", random_text)
@classmethod
def IS_CHANGED(cls, global_pickme_id, prompt=None):
return float("NaN")
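As a standalone illustration of what `LoadTextPickMeGlobal.load_text` does with the hidden `prompt` graph, here is a minimal sketch that can run outside ComfyUI. The `load_global_pick` helper and the hand-built `prompt` dict are illustrative, not part of the node; only the `class_type` / `inputs` keys mirror the code above.

```python
import random

def load_global_pick(prompt, global_pickme_id):
    """Mimic LoadTextPickMeGlobal.load_text: collect every text in the
    matching group, remember the one flagged 'picked', draw a random one."""
    texts, picked_text = [], ""
    for node_id, node_data in prompt.items():
        if node_data.get("class_type") == "Bjornulf_WriteTextPickMeGlobal":
            inputs = node_data.get("inputs", {})
            if inputs.get("global_pickme_id", "default") == global_pickme_id:
                text = inputs.get("text", "")
                texts.append(text)
                if inputs.get("picked", False):
                    picked_text = text
    random_text = random.choice(texts) if texts else ""
    return picked_text, random_text

# Toy graph: two writer nodes in group "style", the second one picked.
prompt = {
    "1": {"class_type": "Bjornulf_WriteTextPickMeGlobal",
          "inputs": {"global_pickme_id": "style", "picked": False, "text": "oil painting"}},
    "2": {"class_type": "Bjornulf_WriteTextPickMeGlobal",
          "inputs": {"global_pickme_id": "style", "picked": True, "text": "watercolor"}},
}
picked, rnd = load_global_pick(prompt, "style")
print(picked)  # watercolor
```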


@@ -1,7 +1,12 @@
import re
import random
import time
# import logging
import csv
from itertools import cycle
#{red|blue}
#{left|right|middle|group=LR}+{left|right|middle|group=LR}+{left|right|middle|group=LR}
#{A(80%)|B(15%)|C(5%)}
class WriteTextAdvanced:
@classmethod
@@ -22,41 +27,134 @@ class WriteTextAdvanced:
OUTPUT_NODE = True
CATEGORY = "Bjornulf"
def find_variables(self, text):
stack = []
variables = []
for i, char in enumerate(text):
if char == '{':
stack.append((i, len(stack) + 1))
elif char == '}' and stack:
start, nesting = stack.pop()
variables.append({
'start': start,
'end': i + 1,
'nesting': nesting
})
variables.sort(key=lambda x: (-x['nesting'], -x['end']))
return variables
def parse_option(self, part):
if part.startswith('%csv='):
try:
filename = part.split('=', 1)[1].strip()
with open(filename, 'r') as f:
return [row[0] for row in csv.reader(f)]
except Exception as e:
return [f"[CSV Error: {str(e)}]"]
elif '(' in part and '%)' in part:
option, weight = part.rsplit('(', 1)
return (option.strip(), float(weight.split('%)')[0]))
return part.strip()
def process_content(self, content, seed):
random.seed(seed)
parts = []
weights = []
group_defined = False
group_name = None
for p in content.split('|'):
p = p.strip()
if p.startswith('group='):
group_name = p.split('=', 1)[1].strip()
group_defined = True
continue
parsed = self.parse_option(p)
if isinstance(parsed, list): # CSV data
parts.extend(parsed)
weights.extend([1]*len(parsed))
elif isinstance(parsed, tuple): # Weighted option
parts.append(parsed[0])
weights.append(parsed[1])
else:
parts.append(parsed)
weights.append(1)
if group_defined:
return {'type': 'group', 'name': group_name, 'options': parts}
if any(w != 1 for w in weights):
total = sum(weights)
if total == 0: weights = [1]*len(parts)
return random.choices(parts, weights=[w/total for w in weights])[0]
return random.choice(parts) if parts else ''
    def write_text_special(self, text, variables="", seed=None):
        if len(text) > 10000:
            return ("Text too large to process at once. Please split into smaller parts.",)
        if seed is None or seed == 0:
            seed = int(time.time() * 1000)
        random.seed(seed)
        # Parse variables
        var_dict = {}
        for line in variables.split('\n'):
            if '=' in line:
                key, value = line.split('=', 1)
                var_dict[key.strip()] = value.strip()
        # Replace variables
        for key, value in var_dict.items():
            text = text.replace(f"<{key}>", value)
        # Process nested variables
        variables = self.find_variables(text)
        substitutions = []
        groups = {}
        for var in variables:
            start, end = var['start'], var['end']
            content = text[start+1:end-1]
            processed = self.process_content(content, seed)
            if isinstance(processed, dict):
                if processed['type'] == 'group':
                    group_name = processed['name']
                    if group_name not in groups:
                        groups[group_name] = []
                    groups[group_name].append({
                        'start': start,
                        'end': end,
                        'options': processed['options']
                    })
            else:
                substitutions.append({
                    'start': start,
                    'end': end,
                    'sub': processed
                })
        # Handle groups
        for group_name, matches in groups.items():
            if not matches or not matches[0]['options']:
                continue
            options = matches[0]['options']
            permuted = random.sample(options, len(options))
            perm_cycle = cycle(permuted)
            for m in matches:
                substitutions.append({
                    'start': m['start'],
                    'end': m['end'],
                    'sub': next(perm_cycle)
                })
        # Apply regular substitutions
        substitutions.sort(key=lambda x: -x['start'])
        result_text = text
        for sub in substitutions:
            result_text = result_text[:sub['start']] + sub['sub'] + result_text[sub['end']:]
        return (result_text,)
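To see the two special syntaxes (`{A(80%)|B(15%)|C(5%)}` weighted picks and `group=` permutation) in isolation, here is a small standalone sketch. `pick_weighted` and `assign_group` are illustrative helpers, not part of the node; they mirror the weighting and group-cycling logic in `parse_option` / `process_content` above.

```python
import random
from itertools import cycle

def pick_weighted(content, seed=0):
    # Resolve one "{A(80%)|B(15%)|C(5%)}" body; unweighted options default to 1.
    random.seed(seed)
    parts, weights = [], []
    for p in content.split('|'):
        p = p.strip()
        if '(' in p and '%)' in p:
            option, weight = p.rsplit('(', 1)
            parts.append(option.strip())
            weights.append(float(weight.split('%)')[0]))
        else:
            parts.append(p)
            weights.append(1)
    total = sum(weights) or 1
    return random.choices(parts, weights=[w / total for w in weights])[0]

def assign_group(options, occurrences, seed=0):
    # Mirror the group handling: one random permutation of the options,
    # cycled across every occurrence of the same group, so two
    # "{left|right|group=LR}" slots never land on the same side.
    random.seed(seed)
    perm_cycle = cycle(random.sample(options, len(options)))
    return [next(perm_cycle) for _ in range(occurrences)]

print(pick_weighted("A(80%)|B(15%)|C(5%)", seed=42))
print(assign_group(["left", "right"], 2, seed=1))
```

With a two-option group and two occurrences, the cycled permutation guarantees both options appear exactly once, which is the point of the `group=` syntax.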
@classmethod
def IS_CHANGED(cls, text, variables="", seed=None):