This commit is contained in:
justumen
2024-11-27 14:37:10 +01:00
parent 230ed2c446
commit 038842e80e
27 changed files with 2192 additions and 82 deletions

README.md

@@ -1,6 +1,6 @@
# 🔗 Comfyui : Bjornulf_custom_nodes v0.60 🔗
# 🔗 Comfyui : Bjornulf_custom_nodes v0.61 🔗
A list of 68 custom nodes for Comfyui : Display, manipulate, and edit text, images, videos, loras and more.
A list of 79 custom nodes for Comfyui : Display, manipulate, and edit text, images, videos, loras and more.
You can manage looping operations, generate randomized content, trigger logical conditions, pause and manually control your workflows and even work with external AI tools, like Ollama or Text To Speech.
# Coffee : ☕☕☕☕☕ 5/5
@@ -19,6 +19,10 @@ You can manage looping operations, generate randomized content, trigger logical
`1.` [👁 Show (Text, Int, Float)](#1----show-text-int-float)
`49.` [📹👁 Video Preview](#49----video-preview)
`68.` [🔢 Add line numbers](#68----add-line-numbers)
`71.` [👁 Show (Int)](#)
`72.` [👁 Show (Float)](#)
`73.` [👁 Show (String/Text)](#)
`74.` [👁 Show (JSON)](#)
## ✒ Text ✒
`2.` [✒ Write Text](#2----write-text)
@@ -29,7 +33,9 @@ You can manage looping operations, generate randomized content, trigger logical
`28.` [🔢🎲 Text with random Seed](#28----text-with-random-seed)
`32.` [🧑📝 Character Description Generator](#32----character-description-generator)
`48.` [🔀🎲 Text scrambler (🧑 Character)](#48----text-scrambler--character)
`67.` [📝➜✨ Text to Anything](67----text-to-anything)
`67.` [📝➜✨ Text to Anything](#)
`68.` [✨➜📝 Anything to Text](#)
`75.` [📝➜📝 Replace text](#)
## ♻ Loop ♻
`6.` [♻ Loop](#6----loop)
@@ -87,6 +93,7 @@ You can manage looping operations, generate randomized content, trigger logical
`60.` [🖼🖼 Merge Images/Videos 📹📹 (Horizontally)](#60----merge-imagesvideos--horizontally)
`61.` [🖼🖼 Merge Images/Videos 📹📹 (Vertically)](#61----merge-imagesvideos--vertically)
`62.` [🦙👁 Ollama Vision](#62----ollama-vision)
`69.` [📏 Resize Image Percentage](#69)
## 🚀 Load checkpoints 🚀
`40.` [🎲 Random (Model+Clip+Vae) - aka Checkpoint / Model](#40----random-modelclipvae---aka-checkpoint--model)
@@ -109,6 +116,10 @@ You can manage looping operations, generate randomized content, trigger logical
`59.` [📹🔊 Combine Video + Audio](#59----combine-video--audio)
`60.` [🖼🖼 Merge Images/Videos 📹📹 (Horizontally)](#60----merge-imagesvideos--horizontally)
`61.` [🖼🖼 Merge Images/Videos 📹📹 (Vertically)](#61----merge-imagesvideos--vertically)
`76.` [⚙📹 FFmpeg Configuration 📹⚙](#76)
`77.` [📹🔍 Video details ⚙](#77)
`78.` [📹➜📹 Convert Video](#78)
`79.` [📹🔗 Concat Videos from list](#79)
## 🤖 AI 🤖
`19.` [🦙💬 Ollama Talk](#19----ollama-talk)
@@ -286,6 +297,7 @@ cd /where/you/installed/ComfyUI && python main.py
- **0.58**: small fix in model selector default value. (Set to None by default)
- **0.59**: A lot of Javascript fixes to avoid resizing issues and better properties management / recoveries
- **0.60**: Revert changes from ollama_talk (implement _user mode later / another node)
- **0.61**: Add/modify a bunch of FFmpeg / video nodes, with a global configuration system and a toggle between python-ffmpeg and the system ffmpeg.
# 📝 Nodes descriptions
@@ -1034,6 +1046,7 @@ If you want to be able to predict the next line, you can use node 68, to Add lin
**Description:**
Take two videos and concatenate them. (One after the other in the same video.)
Convert a video, can use FFMPEG_CONFIG_JSON. (From node 76 / 77)
![concat video](screenshots/concat_video.png)
@@ -1120,7 +1133,14 @@ Below is an example of that with my TTS node.
![text to anything](screenshots/text_to_anything.png)
### 68 - 🔢 Add line numbers
### 68 - ✨➜📝 Anything to Text
**Description:**
Sometimes you want to force something to be a STRING.
Most outputs are indeed text, even though they might be unusable.
This node ignores this fact and simply converts the input to a simple STRING.
### 69 - 🔢 Add line numbers
**Description:**
@@ -1128,3 +1148,76 @@ This node will just add line numbers to text.
Useful when you want to use node 57 that will loop over input lines. (You can read/predict the next line.)
![add line numbers](screenshots/add_line_numbers.png)
### 70 - 📏 Resize Image Percentage
**Description:**
Resize an image by percentage.
![resize percentage](screenshots/resize_percentage.png)
### 71 - 👁 Show (Int)
**Description:**
Basic node, shows an INT. (You can simply drag any INT node and it will be recommended.)
### 72 - 👁 Show (Float)
**Description:**
Basic node, shows a FLOAT. (You can simply drag any FLOAT node and it will be recommended.)
### 73 - 👁 Show (String/Text)
**Description:**
Basic node, shows a STRING. (You can simply drag any STRING node and it will be recommended.)
### 74 - 👁 Show (JSON)
**Description:**
This node will take a STRING and format it as readable JSON. (and pink)
![show json](screenshots/show_json.png)
![show json](screenshots/show_json2.png)
### 75 - 📝➜📝 Replace text
**Description:**
Replace text with another text; allows regex and more options. Check the examples below:
![text replace](screenshots/text_replace_1.png)
![text replace](screenshots/text_replace_2.png)
![text replace](screenshots/text_replace_3.png)
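The node's exact options are only shown in the screenshots above, but the core operation can be sketched in plain Python. (The function name and the `use_regex` flag below are illustrative assumptions, not the node's real parameters.)

```python
import re

def replace_text(text: str, pattern: str, replacement: str, use_regex: bool = False) -> str:
    # Plain substring replacement unless regex mode is enabled
    if use_regex:
        return re.sub(pattern, replacement, text)
    return text.replace(pattern, replacement)

print(replace_text("a cat and a dog", "cat", "hamster"))    # plain mode
print(replace_text("v1.2.3", r"\d+", "X", use_regex=True))  # regex mode replaces every match
```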
### 76 - ⚙📹 FFmpeg Configuration 📹⚙
**Description:**
Create an FFMPEG_CONFIG_JSON; it contains a JSON configuration that can be used by other nodes:
- Convert video
- Concat videos
- Concat video from list
![ffmpeg configuration](screenshots/ffmpeg_conf.png)
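For reference, the JSON this node produces follows the structure below (see `ffmpeg_configuration.py` in this commit). The values here are just the node's sample defaults, and the `version` string is a placeholder for whatever FFmpeg version is detected at runtime:

```python
import json

# Shape of the FFMPEG_CONFIG_JSON string passed between nodes
config_json = json.dumps({
    "ffmpeg": {"path": "ffmpeg", "version": "<detected at runtime>"},
    "video": {
        "codec": "libx264", "bitrate": "3045k", "preset": "medium",
        "pixel_format": "yuv420p", "crf": 19,
        "resolution": {"width": 1152, "height": 768},
        "fps": {"force_fps": 0.0, "enabled": False},
    },
    "audio": {"enabled": True, "codec": "aac", "bitrate": "192k"},
    "output": {"container_format": "mp4"},
}, indent=2)

# Consumer nodes (76 / 78 / 79) parse it back and read fields like this:
config = json.loads(config_json)
print(config["video"]["codec"])
print(config["audio"]["enabled"])
```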
### 77 - 📹🔍 Video details ⚙
**Description:**
Extract details from a video_path.
You can use the all-in-one FFMPEG_CONFIG_JSON with other nodes, or just use the individual variables as you want.
![video details](screenshots/video_details.png)
### 78 - 📹➜📹 Convert Video
**Description:**
Convert a video, can use FFMPEG_CONFIG_JSON.
![convert video](screenshots/convert_video.png)
### 79 - 📹🔗 Concat Videos from list
**Description:**
Take a list of videos (one per line) and concatenate them. (One after the other in the same video.)
Can use FFMPEG_CONFIG_JSON. (From node 76 / 77)
![concat video list](screenshots/concat_video_list.png)
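Under the hood this node turns the list into an FFmpeg concat-demuxer file, mirroring `concat_videos_from_list.py` in this commit. A minimal standalone sketch of that step (using a temp directory instead of the node's `temp_concat` folder):

```python
import os
import tempfile
from pathlib import Path

def build_concat_file(files: str, concat_file: Path) -> list:
    # One video path per line, blank lines ignored — same parsing as the node
    video_paths = [os.path.abspath(p.strip()) for p in files.splitlines() if p.strip()]
    with open(concat_file, "w") as f:
        for p in video_paths:
            f.write(f"file '{p}'\n")
    # ffmpeg then reads the list with: ffmpeg -y -f concat -safe 0 -i concat.txt -c copy out.mp4
    return video_paths

tmp = Path(tempfile.mkdtemp())
paths = build_concat_file("intro.mp4\n\nmain.mp4\noutro.mp4\n", tmp / "concat.txt")
print((tmp / "concat.txt").read_text())
```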


@@ -1,10 +1,13 @@
from .show_stuff import ShowFloat, ShowInt, ShowStringText, ShowJson
from .images_to_video import imagesToVideo
from .write_text import WriteText
from .text_replace import TextReplace
# from .write_image_environment import WriteImageEnvironment
# from .write_image_characters import WriteImageCharacters
# from .write_image_character import WriteImageCharacter
# from .write_image_allinone import WriteImageAllInOne
from .combine_texts import CombineTexts
from .ffmpeg_configuration import FFmpegConfig
from .loop_texts import LoopTexts
from .random_texts import RandomTexts
from .random_model_clip_vae import RandomModelClipVae
@@ -21,6 +24,7 @@ from .save_tmp_image import SaveTmpImage
from .save_image_path import SaveImagePath
from .save_img_to_folder import SaveImageToFolder
from .resize_image import ResizeImage
from .resize_image_percentage import ResizeImagePercentage
from .loop_my_combos_samplers_schedulers import LoopCombosSamplersSchedulers
from .remove_transparency import RemoveTransparency
from .image_to_grayscale import GrayscaleTransform
@@ -48,6 +52,7 @@ from .select_image_from_list import SelectImageFromList
from .random_model_selector import RandomModelSelector
from .if_else import IfElse
from .image_details import ImageDetails
from .video_details import VideoDetails
from .combine_images import CombineImages
# from .pass_preview_image import PassPreviewImage
from .text_scramble_character import ScramblerCharacter
@@ -61,6 +66,7 @@ from .loop_lora_selector import LoopLoraSelector
from .loop_sequential_integer import LoopIntegerSequential
from .loop_lines_sequential import LoopLinesSequential
from .concat_videos import ConcatVideos
from .concat_videos_from_list import ConcatVideosFromList
from .combine_video_audio import CombineVideoAudio
from .images_merger_horizontal import MergeImagesHorizontally
from .images_merger_vertical import MergeImagesVertically
@@ -71,12 +77,22 @@ from .ollama_system_persona import OllamaSystemPersonaSelector
from .ollama_system_job import OllamaSystemJobSelector
from .speech_to_text import SpeechToText
from .text_to_anything import TextToAnything
from .anything_to_text import AnythingToText
from .add_line_numbers import AddLineNumbers
from .ffmpeg_convert import ConvertVideo
NODE_CLASS_MAPPINGS = {
"Bjornulf_ShowInt": ShowInt,
"Bjornulf_TextReplace" : TextReplace,
"Bjornulf_ShowFloat": ShowFloat,
"Bjornulf_ShowJson": ShowJson,
"Bjornulf_ShowStringText": ShowStringText,
"Bjornulf_ollamaLoader": ollamaLoader,
"Bjornulf_FFmpegConfig": FFmpegConfig,
"Bjornulf_ConvertVideo": ConvertVideo,
"Bjornulf_AddLineNumbers": AddLineNumbers,
"Bjornulf_TextToAnything": TextToAnything,
"Bjornulf_AnythingToText": AnythingToText,
"Bjornulf_SpeechToText": SpeechToText,
"Bjornulf_OllamaConfig": OllamaConfig,
"Bjornulf_OllamaSystemPersonaSelector": OllamaSystemPersonaSelector,
@@ -87,6 +103,7 @@ NODE_CLASS_MAPPINGS = {
"Bjornulf_MergeImagesVertically": MergeImagesVertically,
"Bjornulf_CombineVideoAudio": CombineVideoAudio,
"Bjornulf_ConcatVideos": ConcatVideos,
"Bjornulf_ConcatVideosFromList": ConcatVideosFromList,
"Bjornulf_LoopLinesSequential": LoopLinesSequential,
"Bjornulf_LoopIntegerSequential": LoopIntegerSequential,
"Bjornulf_LoopLoraSelector": LoopLoraSelector,
@@ -99,6 +116,7 @@ NODE_CLASS_MAPPINGS = {
"Bjornulf_ScramblerCharacter": ScramblerCharacter,
"Bjornulf_CombineImages": CombineImages,
"Bjornulf_ImageDetails": ImageDetails,
"Bjornulf_VideoDetails": VideoDetails,
"Bjornulf_IfElse": IfElse,
"Bjornulf_RandomModelSelector": RandomModelSelector,
"Bjornulf_SelectImageFromList": SelectImageFromList,
@@ -129,6 +147,7 @@ NODE_CLASS_MAPPINGS = {
"Bjornulf_ShowText": ShowText,
"Bjornulf_SaveText": SaveText,
"Bjornulf_ResizeImage": ResizeImage,
"Bjornulf_ResizeImagePercentage": ResizeImagePercentage,
"Bjornulf_SaveImageToFolder": SaveImageToFolder,
"Bjornulf_SaveTmpImage": SaveTmpImage,
"Bjornulf_SaveImagePath": SaveImagePath,
@@ -147,6 +166,10 @@ NODE_CLASS_MAPPINGS = {
}
NODE_DISPLAY_NAME_MAPPINGS = {
"Bjornulf_ShowInt": "👁 Show (Int)",
"Bjornulf_ShowFloat": "👁 Show (Float)",
"Bjornulf_ShowJson": "👁 Show (JSON)",
"Bjornulf_ShowStringText": "👁 Show (String/Text)",
"Bjornulf_OllamaTalk": "🦙💬 Ollama Talk",
"Bjornulf_OllamaImageVision": "🦙👁 Ollama Vision",
"Bjornulf_OllamaConfig": "🦙 Ollama Configuration ⚙",
@@ -155,12 +178,18 @@ NODE_DISPLAY_NAME_MAPPINGS = {
"Bjornulf_SpeechToText": "🔊➜📝 STT - Speech to Text",
"Bjornulf_TextToSpeech": "📝➜🔊 TTS - Text to Speech",
"Bjornulf_TextToAnything": "📝➜✨ Text to Anything",
"Bjornulf_AnythingToText": "✨➜📝 Anything to Text",
"Bjornulf_TextReplace": "📝➜📝 Replace text",
"Bjornulf_AddLineNumbers": "🔢 Add line numbers",
"Bjornulf_FFmpegConfig": "⚙📹 FFmpeg Configuration 📹⚙",
"Bjornulf_ConvertVideo": "📹➜📹 Convert Video",
"Bjornulf_VideoDetails": "📹🔍 Video details ⚙",
"Bjornulf_WriteText": "✒ Write Text",
"Bjornulf_MergeImagesHorizontally": "🖼🖼 Merge Images/Videos 📹📹 (Horizontally)",
"Bjornulf_MergeImagesVertically": "🖼🖼 Merge Images/Videos 📹📹 (Vertically)",
"Bjornulf_CombineVideoAudio": "📹🔊 Combine Video + Audio",
"Bjornulf_ConcatVideos": "📹🔗 Concat Videos",
"Bjornulf_ConcatVideosFromList": "📹🔗 Concat Videos from list",
"Bjornulf_LoopLinesSequential": "♻📝 Loop Sequential (input Lines)",
"Bjornulf_LoopIntegerSequential": "♻📝 Loop Sequential (Integer)",
"Bjornulf_LoopLoraSelector": "♻ Loop Lora Selector",
@@ -201,6 +230,7 @@ NODE_DISPLAY_NAME_MAPPINGS = {
"Bjornulf_GrayscaleTransform": "🖼➜🔲 Image to grayscale (black & white)",
"Bjornulf_RemoveTransparency": "▢➜⬛ Remove image Transparency (alpha)",
"Bjornulf_ResizeImage": "📏 Resize Image",
"Bjornulf_ResizeImagePercentage": "📏 Resize Image Percentage",
"Bjornulf_SaveImagePath": "💾🖼 Save Image (exact path, exact name) ⚠️💣",
"Bjornulf_SaveImageToFolder": "💾🖼📁 Save Image(s) to a folder",
"Bjornulf_SaveTmpImage": "💾🖼 Save Image (tmp_api.png) ⚠️💣",

anything_to_text.py Normal file

@@ -0,0 +1,26 @@
class AnythingToText:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "anything": (Everything("*"), {"forceInput": True})
            },
        }

    @classmethod
    def VALIDATE_INPUTS(s, input_types):
        return True

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("text",)
    FUNCTION = "any_to_text"
    CATEGORY = "Bjornulf"

    def any_to_text(self, anything):
        # Convert the input to its string representation
        return (str(anything),)

# Keep the Everything class definition as it's needed for type handling
class Everything(str):
    def __ne__(self, __value: object) -> bool:
        return False
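The `Everything` trick relies on ComfyUI validating connections by comparing type strings with `!=`, which is why overriding only `__ne__` is enough to make the socket accept any type. A quick standalone demonstration (repeating the class so it runs on its own):

```python
# A str subclass that never reports inequality: it "matches" every type name.
class Everything(str):
    def __ne__(self, __value: object) -> bool:
        return False

wildcard = Everything("*")
print(wildcard != "INT")     # False — never "unequal" to any type name
print(wildcard != "LATENT")  # False
print(str(wildcard))         # still behaves as the plain string "*"
```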


@@ -1,85 +1,182 @@
import subprocess
from pathlib import Path
import os
import json
class ConcatVideos:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"video_path_1": ("STRING", {"default": ""}),
"video_path_2": ("STRING", {"default": ""}),
"output_filename": ("STRING", {"default": "concatenated.mp4"})
"number_of_videos": ("INT", {"default": 2, "min": 2, "max": 50, "step": 1}),
"output_filename": ("STRING", {"default": "concatenated.mp4"}),
"use_python_ffmpeg": ("BOOLEAN", {"default": False}),
},
"optional": {
"FFMPEG_CONFIG_JSON": ("STRING", {"forceInput": True}),
},
"hidden": {
**{f"video_path_{i}": ("STRING", {"forceInput": True}) for i in range(1, 51)}
}
}
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("concat_path",)
RETURN_TYPES = ("STRING", "STRING",)
RETURN_NAMES = ("concat_path", "ffmpeg_command",)
FUNCTION = "concat_videos"
OUTPUT_NODE = True
CATEGORY = "Bjornulf"
def __init__(self):
# Get absolute paths for working directories
self.work_dir = Path(os.path.abspath("temp_concat"))
self.output_dir = Path(os.path.abspath("Bjornulf/concat_videos"))
os.makedirs(self.work_dir, exist_ok=True)
os.makedirs(self.output_dir, exist_ok=True)
def concat_videos(self, video_path_1: str, video_path_2: str, output_filename: str):
def concat_videos(self, number_of_videos: int, output_filename: str,
use_python_ffmpeg: bool = False,
FFMPEG_CONFIG_JSON: str = None, **kwargs):
"""
Concatenate two videos using ffmpeg with high-quality settings.
Returns the absolute path of the output file.
Concatenate multiple videos using ffmpeg.
Supports both subprocess and python-ffmpeg methods.
"""
# Convert to absolute paths
video_path_1 = os.path.abspath(video_path_1)
video_path_2 = os.path.abspath(video_path_2)
# Get and validate video paths
video_paths = [kwargs[f"video_path_{i}"] for i in range(1, number_of_videos + 1)
if f"video_path_{i}" in kwargs]
# Validate inputs
if not (Path(video_path_1).exists() and Path(video_path_2).exists()):
raise ValueError(f"Both video paths must exist.\nPath 1: {video_path_1}\nPath 2: {video_path_2}")
video_paths = [os.path.abspath(path) for path in video_paths]
for path in video_paths:
if not Path(path).exists():
raise ValueError(f"Video path does not exist: {path}")
# Ensure output filename has mp4 extension
output_filename = Path(output_filename).with_suffix('.mp4')
output_path = self.output_dir / output_filename
# Create concat file with absolute paths
concat_file = self.work_dir / "concat.txt"
with open(concat_file, 'w') as f:
f.write(f"file '{video_path_1}'\n")
f.write(f"file '{video_path_2}'\n")
for path in video_paths:
f.write(f"file '{path}'\n")
# Set output path (absolute)
output_path = self.output_dir / output_filename
output_path = output_path.absolute()
# Default configuration
config = {
'ffmpeg': {'path': 'ffmpeg', 'use_python_ffmpeg': use_python_ffmpeg}
}
# Concatenate videos using ffmpeg with high quality settings
cmd = [
'ffmpeg', '-y',
'-f', 'concat',
'-safe', '0',
'-i', str(concat_file),
# Video settings for maximum quality
'-c:v', 'libx264',
'-preset', 'veryslow', # Slowest preset for best compression
'-crf', '17', # Lower CRF for higher quality (range: 0-51, 0 is lossless)
'-x264-params', 'ref=6:me=umh:subme=7:trellis=2:direct-pred=auto:b-adapt=2',
# Audio settings
'-c:a', 'aac',
'-b:a', '320k', # High audio bitrate
# Additional quality settings
'-movflags', '+faststart', # Enables streaming
'-pix_fmt', 'yuv420p', # Ensures compatibility
str(output_path)
]
# If FFMPEG_CONFIG_JSON provided, parse and merge with default config
if FFMPEG_CONFIG_JSON:
try:
json_config = json.loads(FFMPEG_CONFIG_JSON)
# Merge JSON config, giving priority to use_python_ffmpeg from the node input
config = {**json_config, 'ffmpeg': {**json_config.get('ffmpeg', {}), 'use_python_ffmpeg': use_python_ffmpeg}}
except json.JSONDecodeError:
raise ValueError("Invalid FFMPEG_CONFIG_JSON format")
try:
# Run FFmpeg command
process = subprocess.run(
cmd,
check=True,
capture_output=True,
text=True
)
# Use python-ffmpeg if enabled
if config.get('ffmpeg', {}).get('use_python_ffmpeg', False):
import ffmpeg
# Return absolute path as string
return (str(output_path),)
# Create input streams
input_streams = [ffmpeg.input(path) for path in video_paths]
# Set up output stream
output_kwargs = {}
# Video settings
video_config = config.get('video', {})
if video_config.get('codec') and video_config['codec'] != 'None':
output_kwargs['vcodec'] = video_config['codec']
# Additional video encoding parameters
if video_config['codec'] != 'copy':
if video_config.get('bitrate'):
output_kwargs['video_bitrate'] = video_config['bitrate']
if video_config.get('crf') is not None:
output_kwargs['crf'] = video_config['crf']
if video_config.get('preset') and video_config['preset'] != 'None':
output_kwargs['preset'] = video_config['preset']
# Audio settings
audio_config = config.get('audio', {})
if audio_config.get('enabled') is False or audio_config.get('codec') == 'None':
output_kwargs['an'] = None # No audio
elif audio_config.get('codec') and audio_config['codec'] != 'None':
output_kwargs['acodec'] = audio_config['codec']
if audio_config.get('bitrate'):
output_kwargs['audio_bitrate'] = audio_config['bitrate']
# Concatenate and output
output = ffmpeg.concat(*input_streams)
output = output.output(str(output_path), **output_kwargs)
# Compile and run the command
ffmpeg_cmd = output.compile()
output.run(overwrite_output=True)
return str(output_path), ' '.join(ffmpeg_cmd)
# Default to subprocess method
else:
# Default simple concatenation command
cmd = [
'ffmpeg', '-y',
'-f', 'concat',
'-safe', '0',
'-i', str(concat_file),
'-c', 'copy',
'-movflags', '+faststart',
str(output_path)
]
# If FFMPEG_CONFIG_JSON provided, modify command
if FFMPEG_CONFIG_JSON:
cmd = [
config.get('ffmpeg', {}).get('path', 'ffmpeg'), '-y',
'-f', 'concat',
'-safe', '0',
'-i', str(concat_file)
]
# Video codec settings
video_config = config.get('video', {})
if video_config.get('codec') and video_config['codec'] != 'None':
cmd.extend(['-c:v', video_config['codec']])
# Add encoding parameters if not copying
if video_config['codec'] != 'copy':
if video_config.get('bitrate'):
cmd.extend(['-b:v', video_config['bitrate']])
if video_config.get('crf') is not None:
cmd.extend(['-crf', str(video_config['crf'])])
# Add preset if specified
if video_config.get('preset') and video_config['preset'] != 'None':
cmd.extend(['-preset', video_config['preset']])
# Add pixel format if specified
if video_config.get('pixel_format') and video_config['pixel_format'] != 'None':
cmd.extend(['-pix_fmt', video_config['pixel_format']])
# Audio settings
audio_config = config.get('audio', {})
if audio_config.get('enabled') is False or audio_config.get('codec') == 'None':
cmd.extend(['-an'])
elif audio_config.get('codec') and audio_config['codec'] != 'None':
cmd.extend(['-c:a', audio_config['codec']])
if audio_config.get('bitrate'):
cmd.extend(['-b:a', audio_config['bitrate']])
cmd.extend(['-movflags', '+faststart', str(output_path)])
# Run subprocess command
process = subprocess.run(
cmd,
check=True,
capture_output=True,
text=True
)
return str(output_path), ' '.join(cmd)
except subprocess.CalledProcessError as e:
raise RuntimeError(f"FFmpeg error: {e.stderr}")
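The hidden `video_path_1` … `video_path_50` inputs above use a common pattern: generate the socket definitions with a dict comprehension, then collect whichever ones are actually connected from `**kwargs`. Stripped of the ComfyUI plumbing, the pattern looks like this:

```python
# Generate 50 numbered socket definitions in one expression
hidden_inputs = {f"video_path_{i}": ("STRING", {"forceInput": True}) for i in range(1, 51)}

def collect_paths(number_of_videos: int, **kwargs):
    # Pick up only the sockets that were actually connected
    return [kwargs[f"video_path_{i}"] for i in range(1, number_of_videos + 1)
            if f"video_path_{i}" in kwargs]

paths = collect_paths(3, video_path_1="a.mp4", video_path_2="b.mp4", video_path_3="c.mp4")
print(paths)
```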

concat_videos_from_list.py Normal file

@@ -0,0 +1,183 @@
import subprocess
from pathlib import Path
import os
import json

class ConcatVideosFromList:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "files": ("STRING", {"multiline": True, "forceInput": True}),
                "output_filename": ("STRING", {"default": "output.mp4"}),
                "use_python_ffmpeg": ("BOOLEAN", {"default": False}),
            },
            "optional": {
                "FFMPEG_CONFIG_JSON": ("STRING", {"forceInput": True}),
            }
        }

    RETURN_TYPES = ("STRING", "STRING",)
    RETURN_NAMES = ("concat_path", "ffmpeg_command",)
    FUNCTION = "concat_videos"
    OUTPUT_NODE = True
    CATEGORY = "Bjornulf"

    def __init__(self):
        self.work_dir = Path(os.path.abspath("temp_concat"))
        self.output_dir = Path(os.path.abspath("Bjornulf/concat_videos"))
        os.makedirs(self.work_dir, exist_ok=True)
        os.makedirs(self.output_dir, exist_ok=True)

    def concat_videos(self, files: str, output_filename: str,
                      use_python_ffmpeg: bool = False,
                      FFMPEG_CONFIG_JSON: str = None):
        """
        Concatenate multiple videos using ffmpeg.
        Supports both subprocess and python-ffmpeg methods.
        """
        # Split the multiline string into a list of video paths
        video_paths = [path.strip() for path in files.split('\n') if path.strip()]
        video_paths = [os.path.abspath(path) for path in video_paths]
        for path in video_paths:
            if not Path(path).exists():
                raise ValueError(f"Video path does not exist: {path}")

        # Ensure output filename has an mp4 extension
        output_filename = Path(output_filename).with_suffix('.mp4')
        output_path = self.output_dir / output_filename

        # Create concat file with absolute paths
        concat_file = self.work_dir / "concat.txt"
        with open(concat_file, 'w') as f:
            for path in video_paths:
                f.write(f"file '{path}'\n")

        # Default configuration
        config = {
            'ffmpeg': {'path': 'ffmpeg', 'use_python_ffmpeg': use_python_ffmpeg}
        }

        # If FFMPEG_CONFIG_JSON is provided, parse and merge it with the default config
        if FFMPEG_CONFIG_JSON:
            try:
                json_config = json.loads(FFMPEG_CONFIG_JSON)
                config = {**json_config, 'ffmpeg': {**json_config.get('ffmpeg', {}), 'use_python_ffmpeg': use_python_ffmpeg}}
            except json.JSONDecodeError:
                raise ValueError("Invalid FFMPEG_CONFIG_JSON format")

        try:
            # Use python-ffmpeg if enabled
            if config.get('ffmpeg', {}).get('use_python_ffmpeg', False):
                import ffmpeg

                # Create input streams
                input_streams = [ffmpeg.input(path) for path in video_paths]

                # Set up output stream
                output_kwargs = {}

                # Video settings
                video_config = config.get('video', {})
                if video_config.get('codec') and video_config['codec'] != 'None':
                    output_kwargs['vcodec'] = video_config['codec']
                    # Additional video encoding parameters
                    if video_config['codec'] != 'copy':
                        if video_config.get('bitrate'):
                            output_kwargs['video_bitrate'] = video_config['bitrate']
                        if video_config.get('crf') is not None:
                            output_kwargs['crf'] = video_config['crf']
                        if video_config.get('preset') and video_config['preset'] != 'None':
                            output_kwargs['preset'] = video_config['preset']

                # Audio settings
                audio_config = config.get('audio', {})
                if audio_config.get('enabled') is False or audio_config.get('codec') == 'None':
                    output_kwargs['an'] = None  # No audio
                elif audio_config.get('codec') and audio_config['codec'] != 'None':
                    output_kwargs['acodec'] = audio_config['codec']
                    if audio_config.get('bitrate'):
                        output_kwargs['audio_bitrate'] = audio_config['bitrate']

                # Concatenate and output
                output = ffmpeg.concat(*input_streams)
                output = output.output(str(output_path), **output_kwargs)

                # Compile and run the command
                ffmpeg_cmd = output.compile()
                output.run(overwrite_output=True)
                return str(output_path), ' '.join(ffmpeg_cmd)

            # Default to the subprocess method
            else:
                # Default simple concatenation command
                cmd = [
                    'ffmpeg', '-y',
                    '-f', 'concat',
                    '-safe', '0',
                    '-i', str(concat_file),
                    '-c', 'copy',
                    '-movflags', '+faststart',
                    str(output_path)
                ]

                # If FFMPEG_CONFIG_JSON is provided, rebuild the command from it
                if FFMPEG_CONFIG_JSON:
                    cmd = [
                        config.get('ffmpeg', {}).get('path', 'ffmpeg'), '-y',
                        '-f', 'concat',
                        '-safe', '0',
                        '-i', str(concat_file)
                    ]

                    # Video codec settings
                    video_config = config.get('video', {})
                    if video_config.get('codec') and video_config['codec'] != 'None':
                        cmd.extend(['-c:v', video_config['codec']])
                        # Add encoding parameters if not copying
                        if video_config['codec'] != 'copy':
                            if video_config.get('bitrate'):
                                cmd.extend(['-b:v', video_config['bitrate']])
                            if video_config.get('crf') is not None:
                                cmd.extend(['-crf', str(video_config['crf'])])
                            # Add preset if specified
                            if video_config.get('preset') and video_config['preset'] != 'None':
                                cmd.extend(['-preset', video_config['preset']])
                            # Add pixel format if specified
                            if video_config.get('pixel_format') and video_config['pixel_format'] != 'None':
                                cmd.extend(['-pix_fmt', video_config['pixel_format']])

                    # Audio settings
                    audio_config = config.get('audio', {})
                    if audio_config.get('enabled') is False or audio_config.get('codec') == 'None':
                        cmd.extend(['-an'])
                    elif audio_config.get('codec') and audio_config['codec'] != 'None':
                        cmd.extend(['-c:a', audio_config['codec']])
                        if audio_config.get('bitrate'):
                            cmd.extend(['-b:a', audio_config['bitrate']])

                    cmd.extend(['-movflags', '+faststart', str(output_path)])

                # Run the subprocess command
                process = subprocess.run(
                    cmd,
                    check=True,
                    capture_output=True,
                    text=True
                )
                return str(output_path), ' '.join(cmd)

        except subprocess.CalledProcessError as e:
            raise RuntimeError(f"FFmpeg error: {e.stderr}")
        except Exception as e:
            raise RuntimeError(f"Error during video concatenation: {str(e)}")

    @classmethod
    def IS_CHANGED(cls, **kwargs):
        return float("NaN")
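`IS_CHANGED` returning `float("NaN")` is a deliberate trick commonly used in ComfyUI nodes to force re-execution: NaN never compares equal to anything, including itself, so the cached value never matches the new one and the node runs on every queue:

```python
# NaN's self-inequality is guaranteed by IEEE 754 floating-point semantics
nan = float("NaN")
print(nan == nan)  # False — NaN is never equal to itself
print(nan != nan)  # True — so a cached NaN never matches, forcing a re-run
```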

ffmpeg_configuration.py Normal file

@@ -0,0 +1,171 @@
import json
import subprocess
import ffmpeg  # Assuming the Python FFmpeg bindings (ffmpeg-python) are installed

class FFmpegConfig:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "use_python_ffmpeg": ("BOOLEAN", {"default": False}),
                "ffmpeg_path": ("STRING", {"default": "ffmpeg"}),
                "video_codec": ([
                    "None",
                    "copy",
                    "libx264 (H.264)",
                    "h264_nvenc (H.264 / NVIDIA GPU)",
                    "libx265 (H.265)",
                    "hevc_nvenc (H.265 / NVIDIA GPU)",
                    "libvpx-vp9 (WebM)",
                    "libaom-av1"
                ], {"default": "None"}),
                "video_bitrate": ("STRING", {"default": "3045k"}),
                "preset": ([
                    "None",
                    "ultrafast",
                    "superfast",
                    "veryfast",
                    "faster",
                    "fast",
                    "medium",
                    "slow",
                    "slower",
                    "veryslow"
                ], {"default": "medium"}),
                "pixel_format": ([
                    "None",
                    "yuv420p",
                    "yuv444p",
                    "yuv420p10le",
                    "yuv444p10le",
                    "rgb24",
                    "rgba",
                    "yuva420p"
                ], {"default": "yuv420p"}),
                "container_format": ([
                    "None",
                    "mp4",
                    "mkv",
                    "webm",
                    "mov",
                    "avi"
                ], {"default": "mp4"}),
                "crf": ("INT", {"default": 19, "min": 1, "max": 63}),
                "force_fps": ("FLOAT", {
                    "default": 0.0,
                    "min": 0.0,
                    "max": 240.0,
                    "step": 0.01,
                    "description": "Force output FPS (0 = use source FPS)"
                }),
                "width": ("INT", {"default": 1152, "min": 1, "max": 10000}),
                "height": ("INT", {"default": 768, "min": 1, "max": 10000}),
                "ignore_audio": ("BOOLEAN", {"default": False}),
                "audio_codec": ([
                    "None",
                    "copy",
                    "aac",
                    "libmp3lame",
                    "libvorbis",
                    "libopus",
                    "none"
                ], {"default": "aac"}),
                "audio_bitrate": ("STRING", {"default": "192k"}),
            }
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("FFMPEG_CONFIG_JSON",)
    FUNCTION = "create_config"
    CATEGORY = "Bjornulf"

    def get_ffmpeg_version(self, ffmpeg_path, use_python_ffmpeg):
        if use_python_ffmpeg:
            try:
                # Retrieve the ffmpeg-python binding version
                return f"Python FFmpeg binding (ffmpeg-python) version: {ffmpeg.__version__}"
            except AttributeError:
                return "Python FFmpeg binding (ffmpeg-python) version: Unknown (no __version__ attribute)"
        else:
            try:
                # Retrieve the system FFmpeg version
                result = subprocess.run(
                    [ffmpeg_path, "-version"],
                    stdout=subprocess.PIPE,
                    stderr=subprocess.PIPE,
                    text=True
                )
                version_line = result.stdout.splitlines()[0]
                return version_line
            except Exception as e:
                return f"Error fetching FFmpeg version: {e}"

    def create_json_output(self, config, use_python_ffmpeg):
        """Create a JSON string containing the full FFmpeg configuration."""
        ffmpeg_version = self.get_ffmpeg_version(config["ffmpeg_path"], use_python_ffmpeg)
        config_info = {
            "ffmpeg": {
                "path": config["ffmpeg_path"],
                "version": ffmpeg_version
            },
            "video": {
                "codec": config["video_codec"] or "None",
                "bitrate": config["video_bitrate"],
                "preset": config["preset"] or "None",
                "pixel_format": config["pixel_format"] or "None",
                "crf": config["crf"],
                "resolution": {
                    "width": config["width"],
                    "height": config["height"]
                },
                "fps": {
                    "force_fps": config["force_fps"],
                    "enabled": config["force_fps"] > 0
                }
            },
            "audio": {
                "enabled": not config["ignore_audio"],
                "codec": config["audio_codec"] or "None",
                "bitrate": config["audio_bitrate"]
            },
            "output": {
                "container_format": config["container_format"] or "None"
            }
        }
        return json.dumps(config_info, indent=2)

    def create_config(self, ffmpeg_path, use_python_ffmpeg, ignore_audio, video_codec, audio_codec,
                      video_bitrate, audio_bitrate, preset, pixel_format,
                      container_format, crf, force_fps, width, height):
        config = {
            "ffmpeg_path": ffmpeg_path,
            "video_bitrate": video_bitrate,
            "preset": None if preset == "None" else preset,
            "crf": crf,
            "force_fps": force_fps,
            "ignore_audio": ignore_audio,
            "audio_bitrate": audio_bitrate,
            "width": width,
            "height": height,
            "video_codec": video_codec.split(" ")[0] if video_codec != "None" else None,
            "pixel_format": None if pixel_format == "None" else pixel_format,
            "container_format": None if container_format == "None" else container_format,
            "audio_codec": None if audio_codec == "None" or ignore_audio else audio_codec,
        }
        return (self.create_json_output(config, use_python_ffmpeg),)

    @classmethod
    def IS_CHANGED(cls, ffmpeg_path, use_python_ffmpeg, ignore_audio, video_codec, audio_codec,
                   video_bitrate, audio_bitrate, preset, pixel_format,
                   container_format, crf, force_fps, width, height) -> float:
        return 0.0

ffmpeg_convert.py Normal file

@@ -0,0 +1,302 @@
import subprocess
from pathlib import Path
import os
import json
class ConvertVideo:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"video_path": ("STRING", {"forceInput": True}),
"output_filename": ("STRING", {"default": "converted.mp4"}),
"use_python_ffmpeg": ("BOOLEAN", {"default": False}),
},
"optional": {
"FFMPEG_CONFIG_JSON": ("STRING", {"forceInput": True}),
},
}
RETURN_TYPES = ("STRING", "STRING",)
RETURN_NAMES = ("video_path", "ffmpeg_command",)
FUNCTION = "convert_video"
OUTPUT_NODE = True
CATEGORY = "Bjornulf"
def __init__(self):
self.output_dir = Path(os.path.abspath("ffmpeg/converted_videos"))
os.makedirs(self.output_dir, exist_ok=True)
def get_default_config(self):
"""Provide basic default configuration."""
return {
'ffmpeg_path': 'ffmpeg', # Assuming ffmpeg is in PATH
'video_codec': 'copy',
'video_bitrate': '3045K',
'preset': 'medium',
'pixel_format': 'yuv420p',
'container_format': 'mp4',
'crf': 19,
'force_fps': 30,
'width': None,
'height': None,
'ignore_audio': False,
'audio_codec': 'aac',
'audio_bitrate': '128k'
}
def parse_config_json(self, config_json: str) -> dict:
"""Parse the JSON configuration string into a dictionary format compatible with the converter"""
config = json.loads(config_json)
return {
# 'use_python_ffmpeg': config['ffmpeg']['use_python_ffmpeg'],
'ffmpeg_path': config['ffmpeg']['path'],
'video_codec': None if config['video']['codec'] == 'None' else config['video']['codec'],
'video_bitrate': config['video']['bitrate'],
'preset': None if config['video']['preset'] == 'None' else config['video']['preset'],
'pixel_format': None if config['video']['pixel_format'] == 'None' else config['video']['pixel_format'],
'container_format': None if config['output']['container_format'] == 'None' else config['output']['container_format'],
'crf': config['video']['crf'],
'force_fps': config['video']['fps']['force_fps'],
'width': config['video']['resolution']['width'],
'height': config['video']['resolution']['height'],
'ignore_audio': not config['audio']['enabled'],
'audio_codec': None if config['audio']['codec'] == 'None' else config['audio']['codec'],
'audio_bitrate': config['audio']['bitrate']
}
def convert_video_subprocess(self, input_path, output_path, FFMPEG_CONFIG_JSON):
"""Use subprocess to run ffmpeg command"""
cmd = [
FFMPEG_CONFIG_JSON['ffmpeg_path'], '-y',
'-i', str(input_path)
]
# Add video codec settings if not None
if FFMPEG_CONFIG_JSON['video_codec'] is not None:
if FFMPEG_CONFIG_JSON['video_codec'] == 'copy':
cmd.extend(['-c:v', 'copy'])
else:
cmd.extend(['-c:v', FFMPEG_CONFIG_JSON['video_codec']])
# Add preset if specified
if FFMPEG_CONFIG_JSON['preset'] is not None:
cmd.extend(['-preset', FFMPEG_CONFIG_JSON['preset']])
# Add width and height if specified
if FFMPEG_CONFIG_JSON['width'] and FFMPEG_CONFIG_JSON['height']:
cmd.extend(['-vf', f'scale={FFMPEG_CONFIG_JSON["width"]}:{FFMPEG_CONFIG_JSON["height"]}'])
# Add video bitrate if specified
if FFMPEG_CONFIG_JSON['video_bitrate']:
cmd.extend(['-b:v', FFMPEG_CONFIG_JSON['video_bitrate']])
# Add CRF (ffmpeg ignores it when the video codec is copy)
cmd.extend(['-crf', str(FFMPEG_CONFIG_JSON['crf'])])
# Add pixel format if specified
if FFMPEG_CONFIG_JSON['pixel_format'] is not None:
cmd.extend(['-pix_fmt', FFMPEG_CONFIG_JSON['pixel_format']])
# Add force fps if enabled
if FFMPEG_CONFIG_JSON['force_fps'] > 0:
cmd.extend(['-r', str(FFMPEG_CONFIG_JSON['force_fps'])])
# Add audio codec settings
if FFMPEG_CONFIG_JSON['ignore_audio'] or FFMPEG_CONFIG_JSON['audio_codec'] is None:
cmd.extend(['-an'])
elif FFMPEG_CONFIG_JSON['audio_codec'] == 'copy':
cmd.extend(['-c:a', 'copy'])
else:
cmd.extend([
'-c:a', FFMPEG_CONFIG_JSON['audio_codec'],
'-b:a', FFMPEG_CONFIG_JSON['audio_bitrate']
])
# Add output path
cmd.append(str(output_path))
subprocess.run(
cmd,
check=True,
capture_output=True,
text=True
)
def convert_video_python_ffmpeg(self, input_path, output_path, FFMPEG_CONFIG_JSON):
"""Use ffmpeg-python library"""
try:
import ffmpeg
except ImportError:
raise ImportError("ffmpeg-python is not installed. Please install it with: pip install ffmpeg-python")
# Start building the ffmpeg-python chain
stream = ffmpeg.input(str(input_path))
# Build stream arguments based on config
stream_args = {}
# Video settings if not None
if FFMPEG_CONFIG_JSON['video_codec'] is not None:
if FFMPEG_CONFIG_JSON['video_codec'] != 'copy':
stream_args['vcodec'] = FFMPEG_CONFIG_JSON['video_codec']
if FFMPEG_CONFIG_JSON['preset'] is not None:
stream_args['preset'] = FFMPEG_CONFIG_JSON['preset']
# Add width and height if specified
if FFMPEG_CONFIG_JSON['width'] and FFMPEG_CONFIG_JSON['height']:
stream = ffmpeg.filter(stream, 'scale',
w=FFMPEG_CONFIG_JSON['width'],
h=FFMPEG_CONFIG_JSON['height'])
if FFMPEG_CONFIG_JSON['video_bitrate']:
stream_args['video_bitrate'] = FFMPEG_CONFIG_JSON['video_bitrate']
if FFMPEG_CONFIG_JSON['crf'] > 0:
stream_args['crf'] = FFMPEG_CONFIG_JSON['crf']
else:
stream_args['crf'] = 19
if FFMPEG_CONFIG_JSON['pixel_format'] is not None:
stream_args['pix_fmt'] = FFMPEG_CONFIG_JSON['pixel_format']
if FFMPEG_CONFIG_JSON['force_fps'] > 0:
stream_args['r'] = FFMPEG_CONFIG_JSON['force_fps']
else:
stream_args['vcodec'] = 'copy'
# Audio settings
if FFMPEG_CONFIG_JSON['ignore_audio'] or FFMPEG_CONFIG_JSON['audio_codec'] is None:
stream_args['an'] = None
elif FFMPEG_CONFIG_JSON['audio_codec'] == 'copy':
stream_args['acodec'] = 'copy'
else:
stream_args.update({
'acodec': FFMPEG_CONFIG_JSON['audio_codec'],
'audio_bitrate': FFMPEG_CONFIG_JSON['audio_bitrate']
})
# Run the ffmpeg operation
stream = ffmpeg.output(stream, str(output_path), **stream_args, y=None)
stream.run()
def convert_video(self, video_path: str, output_filename: str, FFMPEG_CONFIG_JSON: str = None, use_python_ffmpeg: bool = False):
"""
Convert a video using either subprocess or python-ffmpeg based on config.
If no configuration is provided, uses default configuration.
"""
# Use default configuration if no JSON is provided
if FFMPEG_CONFIG_JSON is None:
default_config = self.get_default_config()
# Create a JSON-like structure to match the parse_config_json method's expectations
FFMPEG_CONFIG_JSON = {
'ffmpeg': {
'path': default_config['ffmpeg_path']
},
'video': {
'codec': default_config['video_codec'],
'bitrate': default_config['video_bitrate'],
'preset': default_config['preset'],
'pixel_format': default_config['pixel_format'],
'crf': default_config['crf'],
'fps': {
'force_fps': default_config['force_fps']
},
'resolution': {
'width': default_config['width'],
'height': default_config['height']
}
},
'output': {
'container_format': default_config['container_format']
},
'audio': {
'enabled': not default_config['ignore_audio'],
'codec': default_config['audio_codec'],
'bitrate': default_config['audio_bitrate']
}
}
# Convert to JSON string
FFMPEG_CONFIG_JSON = json.dumps(FFMPEG_CONFIG_JSON)
# Parse the JSON configuration
FFMPEG_CONFIG_JSON = self.parse_config_json(FFMPEG_CONFIG_JSON)
# Validate input path
input_path = Path(os.path.abspath(video_path))
if not input_path.exists():
raise ValueError(f"Input video path does not exist: {input_path}")
# Set output path
if FFMPEG_CONFIG_JSON['container_format']:
output_filename = Path(output_filename).with_suffix(f".{FFMPEG_CONFIG_JSON['container_format']}")
output_path = self.output_dir / output_filename
output_path = output_path.absolute()
# Construct FFmpeg command for command string return
cmd = [
FFMPEG_CONFIG_JSON['ffmpeg_path'], '-y',
'-i', str(input_path)
]
# Add video codec settings if not None
if FFMPEG_CONFIG_JSON['video_codec'] is not None:
if FFMPEG_CONFIG_JSON['video_codec'] == 'copy':
cmd.extend(['-c:v', 'copy'])
else:
cmd.extend(['-c:v', FFMPEG_CONFIG_JSON['video_codec']])
if FFMPEG_CONFIG_JSON['preset'] is not None:
cmd.extend(['-preset', FFMPEG_CONFIG_JSON['preset']])
if FFMPEG_CONFIG_JSON['width'] and FFMPEG_CONFIG_JSON['height']:
cmd.extend(['-vf', f'scale={FFMPEG_CONFIG_JSON["width"]}:{FFMPEG_CONFIG_JSON["height"]}'])
if FFMPEG_CONFIG_JSON['video_bitrate']:
cmd.extend(['-b:v', FFMPEG_CONFIG_JSON['video_bitrate']])
if FFMPEG_CONFIG_JSON['crf'] > 0:
cmd.extend(['-crf', str(FFMPEG_CONFIG_JSON['crf'])])
else:
cmd.extend(['-crf', '19'])
if FFMPEG_CONFIG_JSON['pixel_format'] is not None:
cmd.extend(['-pix_fmt', FFMPEG_CONFIG_JSON['pixel_format']])
if FFMPEG_CONFIG_JSON['force_fps'] > 0:
cmd.extend(['-r', str(FFMPEG_CONFIG_JSON['force_fps'])])
# Add audio codec settings
if FFMPEG_CONFIG_JSON['ignore_audio'] or FFMPEG_CONFIG_JSON['audio_codec'] is None:
cmd.extend(['-an'])
elif FFMPEG_CONFIG_JSON['audio_codec'] == 'copy':
cmd.extend(['-c:a', 'copy'])
else:
cmd.extend([
'-c:a', FFMPEG_CONFIG_JSON['audio_codec'],
'-b:a', FFMPEG_CONFIG_JSON['audio_bitrate']
])
cmd.append(str(output_path))
# Convert command list to string
ffmpeg_command = ' '.join(cmd)
try:
if use_python_ffmpeg:
self.convert_video_python_ffmpeg(input_path, output_path, FFMPEG_CONFIG_JSON)
else:
self.convert_video_subprocess(input_path, output_path, FFMPEG_CONFIG_JSON)
return (str(output_path), ffmpeg_command)
except subprocess.CalledProcessError as e:
raise RuntimeError(f"FFmpeg error: {e.stderr}")
except Exception as e:
raise RuntimeError(f"Error during video conversion: {str(e)}")
@classmethod
def IS_CHANGED(cls, **kwargs):
return float("NaN")
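The converter's `parse_config_json` relies on a convention worth making explicit: the nested JSON uses the literal string `'None'` to mean "omit this flag", which the parser turns into Python `None`. A minimal sketch of that translation, using hypothetical sample values in the same nested shape the parser expects:

```python
import json

# Build a config in the same nested shape parse_config_json consumes.
config = json.loads(json.dumps({
    "ffmpeg": {"path": "ffmpeg"},
    "video": {
        "codec": "None",          # literal "None" string -> no -c:v flag
        "bitrate": "3045K",
        "preset": "medium",
        "pixel_format": "yuv420p",
        "crf": 19,
        "fps": {"force_fps": 30},
        "resolution": {"width": None, "height": None},
    },
    "output": {"container_format": "mp4"},
    "audio": {"enabled": False, "codec": "aac", "bitrate": "128k"},
}))

# The same 'None'-string translation the converter performs:
video_codec = None if config["video"]["codec"] == "None" else config["video"]["codec"]
# Audio is inverted: the node stores ignore_audio, the JSON stores enabled.
ignore_audio = not config["audio"]["enabled"]

print(video_codec)   # Python None, so the command builder skips -c:v
print(ignore_audio)  # True, so the command builder emits -an
```

Note the double negation on the audio side: `audio.enabled` in the JSON becomes `ignore_audio` in the flat dict, which is why both `convert_video` and `create_json_output` flip the boolean.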


@@ -1,7 +1,7 @@
[project]
name = "bjornulf_custom_nodes"
description = "61 ComfyUI nodes : Display, manipulate, and edit text, images, videos, loras and more. Manage looping operations, generate randomized content, use logical conditions and work with external AI tools, like Ollama or Text To Speech."
version = "0.60"
description = "79 ComfyUI nodes : Display, manipulate, and edit text, images, videos, loras and more. Manage looping operations, generate randomized content, use logical conditions and work with external AI tools, like Ollama or Text To Speech."
version = "0.61"
license = {file = "LICENSE"}
[project.urls]


@@ -2,3 +2,6 @@ ollama
pydub
opencv-python
faster_whisper
ffmpeg-python


@@ -0,0 +1,89 @@
import numpy as np
import torch
from PIL import Image
class ResizeImagePercentage:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"image": ("IMAGE", {}),
"percentage": ("INT", {
"default": 50,
"min": 1,
"max": 1000,
"step": 1,
}),
},
"hidden": {"prompt": "PROMPT", "extra_pnginfo": "EXTRA_PNGINFO"},
}
FUNCTION = "resize_image"
RETURN_TYPES = ("IMAGE", "INT", "INT",)
RETURN_NAMES = ("IMAGE", "width", "height")
OUTPUT_NODE = True
CATEGORY = "Bjornulf"
def resize_image(self, image, percentage=100.0, prompt=None, extra_pnginfo=None):
# Convert percentage to decimal (e.g., 150% -> 1.5)
scale_factor = percentage / 100.0
# Ensure the input image is on CPU and convert to numpy array
image_np = image.cpu().numpy()
# Initialize new_width and new_height
new_width = 0
new_height = 0
# Check if the image is in the format [batch, height, width, channel]
if image_np.ndim == 4:
# Process each image in the batch
resized_images = []
for img in image_np:
# Get original dimensions
orig_height, orig_width = img.shape[:2]
# Calculate new dimensions
new_width = int(orig_width * scale_factor)
new_height = int(orig_height * scale_factor)
# Convert to PIL Image
pil_img = Image.fromarray((img * 255).astype(np.uint8))
# Resize
resized_pil = pil_img.resize((new_width, new_height), Image.LANCZOS)
# Convert back to numpy and normalize
resized_np = np.array(resized_pil).astype(np.float32) / 255.0
resized_images.append(resized_np)
# Stack the resized images back into a batch
resized_batch = np.stack(resized_images)
# Convert to torch tensor
resized_tensor = torch.from_numpy(resized_batch)
else:
# If it's a single image, process it directly
# Get original dimensions
orig_height, orig_width = image_np.shape[:2]
# Calculate new dimensions
new_width = int(orig_width * scale_factor)
new_height = int(orig_height * scale_factor)
# Convert to PIL Image
pil_img = Image.fromarray((image_np * 255).astype(np.uint8))
# Resize
resized_pil = pil_img.resize((new_width, new_height), Image.LANCZOS)
# Convert back to numpy and normalize
resized_np = np.array(resized_pil).astype(np.float32) / 255.0
# Restore the batch dimension expected by downstream nodes
# (in this branch the input had no batch dimension, so always add one)
resized_np = np.expand_dims(resized_np, axis=0)
# Convert to torch tensor
resized_tensor = torch.from_numpy(resized_np)
# Update metadata if needed
if extra_pnginfo is not None:
extra_pnginfo["resize_percentage"] = percentage
extra_pnginfo["resized_width"] = new_width
extra_pnginfo["resized_height"] = new_height
return (resized_tensor, new_width, new_height)
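The percentage-to-pixels arithmetic in `resize_image` truncates toward zero via `int()`. A standalone sketch of just the dimension math (sample sizes are hypothetical):

```python
def scaled_size(width: int, height: int, percentage: int) -> tuple:
    """Dimension math used by ResizeImagePercentage: scale then truncate."""
    scale_factor = percentage / 100.0  # e.g. 50 -> 0.5, 150 -> 1.5
    return int(width * scale_factor), int(height * scale_factor)

print(scaled_size(640, 480, 50))   # (320, 240)
print(scaled_size(640, 480, 150))  # (960, 720)
```

Because of the truncation, odd dimensions can lose a pixel (e.g. 3px at 50% becomes 1px), which is worth keeping in mind if a later node requires even dimensions for video encoding.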

BIN
screenshots/ffmpeg_conf.png Normal file

BIN
screenshots/show_json.png Normal file

BIN
screenshots/show_json2.png Normal file

144
show_stuff.py Normal file

@@ -0,0 +1,144 @@
class ShowInt:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"INT": ("INT", {"default": 0, "forceInput": True}),
},
}
RETURN_TYPES = ()
FUNCTION = "show_int"
OUTPUT_NODE = True
INPUT_IS_LIST = True
CATEGORY = "Bjornulf"
def detect_type(self, value):
return 'integer'
def show_int(self, INT):
type_info = [f"{value}" for value in INT]
return {"ui": {"text": type_info}}
class ShowFloat:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"FLOAT": ("FLOAT", {"default": 0.0, "forceInput": True}),
},
}
RETURN_TYPES = ()
FUNCTION = "show_float"
OUTPUT_NODE = True
INPUT_IS_LIST = True
CATEGORY = "Bjornulf"
def detect_type(self, value):
return 'float'
def show_float(self, FLOAT):
type_info = [f"{value}" for value in FLOAT]
return {"ui": {"text": type_info}}
class ShowStringText:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"STRING": ("STRING", {"default": "", "forceInput": True}),
},
}
RETURN_TYPES = ()
FUNCTION = "show_string"
OUTPUT_NODE = True
INPUT_IS_LIST = True
CATEGORY = "Bjornulf"
def detect_type(self, value):
if isinstance(value, int):
return 'integer'
elif isinstance(value, float):
# Check if it has a decimal part
if value % 1 == 0:
return 'float' if str(value).endswith('.0') else 'integer'
return 'float'
elif isinstance(value, str):
try:
float_val = float(value)
if '.' in value:
return 'float string'
if float_val.is_integer():
return 'integer string'
return 'float string'
except ValueError:
return 'normal string'
else:
return 'other type'
def show_string(self, STRING):
type_info = [f"{value}" for value in STRING]
return {"ui": {"text": type_info}}
class ShowJson:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"STRING": ("STRING", {"default": "", "forceInput": True}),
},
}
RETURN_TYPES = ()
FUNCTION = "show_json"
OUTPUT_NODE = True
INPUT_IS_LIST = True
CATEGORY = "Bjornulf"
def detect_type(self, value):
if isinstance(value, int):
return 'integer'
elif isinstance(value, float):
if value % 1 == 0:
return 'float' if str(value).endswith('.0') else 'integer'
return 'float'
elif isinstance(value, str):
try:
float_val = float(value)
if '.' in value:
return 'float string'
if float_val.is_integer():
return 'integer string'
return 'float string'
except ValueError:
return 'normal string'
else:
return 'other type'
def show_json(self, STRING):
import json
try:
# Join the list elements into a single string
full_string = "".join(STRING)
try:
# Parse JSON
parsed_json = json.loads(full_string)
# Format JSON with proper indentation and Unicode support
formatted_json = json.dumps(
parsed_json,
indent=2, # You can adjust this number for different indentation levels
ensure_ascii=False,
sort_keys=True # Optional: sorts keys alphabetically
)
# Add newlines for better readability
formatted_json = f"\n{formatted_json}\n"
# Return as a single-element list
return {"ui": {"text": [formatted_json]}}
except json.JSONDecodeError as e:
# If not valid JSON, return error message
return {"ui": {"text": [f"Invalid JSON: {str(e)}\nOriginal string:\n{full_string}"]}}
except Exception as e:
return {"ui": {"text": [f"Error processing string: {str(e)}"]}}
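The pretty-printing inside `show_json` is plain `json.dumps` with three options: `indent=2` for readability, `ensure_ascii=False` to keep Unicode characters intact, and `sort_keys=True` for stable ordering. A minimal sketch with a hypothetical input string:

```python
import json

raw = '{"b": 1, "a": "héllo"}'
parsed = json.loads(raw)
formatted = json.dumps(
    parsed,
    indent=2,           # two-space indentation, as in the node
    ensure_ascii=False, # keep "héllo" instead of escaping to \u00e9
    sort_keys=True,     # "a" before "b", regardless of input order
)
print(formatted)
# {
#   "a": "héllo",
#   "b": 1
# }
```

If `json.loads` raises `json.JSONDecodeError`, the node falls back to showing the error message together with the original string, which is why the decode is wrapped in its own try/except.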

130
text_replace.py Normal file

@@ -0,0 +1,130 @@
import re
class TextReplace:
def __init__(self):
pass
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"input_text": ("STRING", {"multiline": True, "forceInput": True}),
"search_text": ("STRING", {"multiline": True}),
"replace_text": ("STRING", {"multiline": True, "default": ""}),
"replace_count": ("INT", {"default": 0, "min": 0, "max": 1000,
"display": "number",
"tooltip": "Number of replacements (0 = replace all)"}),
"use_regex": ("BOOLEAN", {"default": False}),
"case_sensitive": ("BOOLEAN", {"default": True, "tooltip": "Whether the search should be case-sensitive"}),
"trim_whitespace": (["none", "left", "right", "both"], {
"default": "none",
"tooltip": "Remove whitespace around the found text"
})
}
}
RETURN_TYPES = ("STRING",)
FUNCTION = "replace_text"
CATEGORY = "Bjornulf"
def replace_text(self, input_text, search_text, replace_text, replace_count, use_regex, case_sensitive, trim_whitespace):
try:
# Convert input to string
input_text = str(input_text)
# Prepare regex flags
regex_flags = 0
if not case_sensitive:
regex_flags |= re.IGNORECASE
# Debug print
# print(f"Input: {input_text}")
# print(f"Search Text: {search_text}")
# print(f"Replace Text: {replace_text}")
# print(f"Use Regex: {use_regex}")
# print(f"Regex Flags: {regex_flags}")
if use_regex:
# Ensure regex pattern is valid
try:
# Compile the regex pattern first
pattern = re.compile(search_text, flags=regex_flags)
# Perform replacement
if replace_count == 0:
# Replace all instances
result = pattern.sub(replace_text, input_text)
else:
# Replace specific number of instances
result = pattern.sub(replace_text, input_text, count=replace_count)
# Debug print
# print(f"Regex Result: {result}")
return (result,)
except re.error as regex_compile_error:
# print(f"Invalid Regex Pattern: {regex_compile_error}")
return (input_text,)
else:
# Plain string replacement with an explicit search offset, so text
# inserted by a replacement is never matched again (the naive loop
# would run forever when replace_text contains search_text and
# replace_count is 0).
result = input_text
count = 0
pos = 0
needle = search_text if case_sensitive else search_text.lower()
while replace_count == 0 or count < replace_count:
haystack = result if case_sensitive else result.lower()
# Find the next match at or after the current offset
idx = haystack.find(needle, pos)
if idx == -1:
break
# Determine left and right parts around the match
left_part = result[:idx]
right_part = result[idx + len(search_text):]
# Trim whitespace based on option
if trim_whitespace in ("left", "both"):
left_part = left_part.rstrip()
if trim_whitespace in ("right", "both"):
right_part = right_part.lstrip()
# Reconstruct the string and continue searching after the insertion
result = left_part + replace_text + right_part
pos = len(left_part) + len(replace_text)
count += 1
return (result,)
except Exception as e:
# print(f"Unexpected error during text replacement: {e}")
return (input_text,)
@classmethod
def IS_CHANGED(cls, input_text, search_text, replace_text, replace_count, use_regex, case_sensitive, trim_whitespace):
# Return float("NaN") to ensure the node always processes
return float("NaN")
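The regex branch of `TextReplace` maps directly onto `re.sub`'s `count` and `flags` parameters: `replace_count == 0` means "replace all" (re's own default), and `case_sensitive=False` becomes `re.IGNORECASE`. A minimal sketch with hypothetical sample strings:

```python
import re

# case_sensitive=False in the node -> compile with re.IGNORECASE
pattern = re.compile("cat", flags=re.IGNORECASE)
text = "Cat cat CAT"

print(pattern.sub("dog", text))           # replace_count == 0: all matches
print(pattern.sub("dog", text, count=2))  # replace_count == 2: first two only
```

Note that `trim_whitespace` only applies in the non-regex branch of the node; in the regex branch, whitespace handling can be expressed in the pattern itself (e.g. `\s*cat\s*`).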

330
video_details.py Normal file

@@ -0,0 +1,330 @@
import subprocess
import json
from pathlib import Path
import os
import re
try:
import ffmpeg
FFMPEG_PYTHON_AVAILABLE = True
except ImportError:
FFMPEG_PYTHON_AVAILABLE = False
class VideoDetails:
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"video_path": ("STRING", {"default": "", "forceInput": True}),
"ffprobe_path": ("STRING", {"default": "ffprobe"}),
"use_python_ffmpeg": ("BOOLEAN", {"default": False}),
}
}
RETURN_TYPES = ("STRING", "STRING", "INT", "INT", "FLOAT", "INT", "INT", "STRING", "STRING",
"STRING", "STRING", "STRING", "STRING", "FLOAT", "STRING", "STRING")
RETURN_NAMES = ("filename", "video_path", "width", "height", "fps", "total_frames", "duration_seconds",
"video_codec", "video_bitrate", "pixel_format",
"audio_codec", "audio_bitrate", "container_format",
"duration_seconds_float", "full_info", "FFMPEG_CONFIG_JSON")
FUNCTION = "get_video_info"
CATEGORY = "Bjornulf"
def extract_bitrate(self, text):
"""Extract bitrate value from text."""
match = re.search(r'(\d+(?:\.\d+)?)\s*(?:kb/s|Kb/s|KB/s|Mb/s|MB/s)', text)
if match:
value = float(match.group(1))
if 'mb/s' in text.lower():
value *= 1000
return f"{value:.0f}k"
return "N/A"
def create_json_output(self, filename, video_path, width, height, fps, total_frames,
duration_seconds, duration_seconds_float, video_codec,
video_bitrate, pixel_format, audio_codec, audio_bitrate,
container_format):
"""Create a JSON string containing all video information in FFmpegConfig format."""
video_info = {
"ffmpeg": {
"path": "ffmpeg", # Default value since this is from probe
# "use_python_ffmpeg": False # Default value since this is from probe
},
"video": {
"codec": video_codec if video_codec != "N/A" else "None",
"bitrate": video_bitrate if video_bitrate != "N/A" else "0k",
"preset": "None", # Not available from probe
"pixel_format": pixel_format if pixel_format != "N/A" else "None",
"crf": 0, # Not available from probe
"resolution": {
"width": width,
"height": height
},
"fps": {
"force_fps": fps,
"enabled": False # This is source fps, not forced
}
},
"audio": {
"enabled": audio_codec != "N/A" and audio_codec != "None",
"codec": audio_codec if audio_codec != "N/A" else "None",
"bitrate": audio_bitrate if audio_bitrate != "N/A" else "0k"
},
"output": {
"container_format": container_format if container_format != "N/A" else "None"
}
}
return json.dumps(video_info, indent=2)
def create_full_info_string(self, video_path, width, height, fps, total_frames,
duration_seconds, duration_seconds_float, video_codec,
video_bitrate, pixel_format, audio_codec, audio_bitrate,
container_format):
return f"""Video Information:
Filename: {os.path.basename(video_path)}
Resolution: {width}x{height}
FPS: {fps:.3f}
Total Frames: {total_frames}
Duration: {duration_seconds} seconds ({duration_seconds_float:.3f})
Video Codec: {video_codec}
Video Bitrate: {video_bitrate}
Pixel Format: {pixel_format}
Audio Codec: {audio_codec}
Audio Bitrate: {audio_bitrate}
Container Format: {container_format}
"""
def get_video_info_python_ffmpeg(self, video_path):
"""Get video info using python-ffmpeg."""
if not FFMPEG_PYTHON_AVAILABLE:
raise RuntimeError("python-ffmpeg is not installed. Please install it with 'pip install ffmpeg-python'")
try:
probe = ffmpeg.probe(video_path)
# Initialize variables with default values
width = 0
height = 0
fps = 0.0
total_frames = 0
duration_seconds = 0
duration_seconds_float = 0.0
video_codec = "N/A"
video_bitrate = "N/A"
pixel_format = "N/A"
audio_codec = "N/A"
audio_bitrate = "N/A"
container_format = "N/A"
# Extract format information
format_data = probe['format']
format_name = format_data.get('format_name', "N/A")
if 'mp4' in format_name.lower():
container_format = 'mp4'
else:
container_format = format_name.split(',')[0]
duration_seconds_float = float(format_data.get('duration', 0))
duration_seconds = int(duration_seconds_float)
# Process streams
for stream in probe['streams']:
if stream['codec_type'] == 'video':
width = int(stream.get('width', 0))
height = int(stream.get('height', 0))
fps_str = stream.get('r_frame_rate', '')
if fps_str and fps_str != '0/0':
num, den = map(int, fps_str.split('/'))
fps = num / den if den != 0 else 0.0
total_frames = int(stream.get('nb_frames', 0))
if total_frames == 0 and fps > 0 and duration_seconds_float > 0:
total_frames = int(duration_seconds_float * fps)
video_codec = stream.get('codec_name', "N/A")
pixel_format = stream.get('pix_fmt', "N/A")
video_bitrate = f"{int(int(stream.get('bit_rate', 0))/1000)}k"
elif stream['codec_type'] == 'audio':
audio_codec = stream.get('codec_name', "N/A")
audio_bitrate = stream.get('bit_rate', "N/A")
if audio_bitrate != "N/A":
audio_bitrate = f"{int(int(audio_bitrate)/1000)}k"
filename = os.path.basename(video_path)
# Create full info string and JSON outputs
full_info = self.create_full_info_string(
video_path, width, height, fps, total_frames,
duration_seconds, duration_seconds_float, video_codec,
video_bitrate, pixel_format, audio_codec, audio_bitrate,
container_format
)
full_info_json = self.create_json_output(
filename, video_path, width, height, fps, total_frames,
duration_seconds, duration_seconds_float, video_codec,
video_bitrate, pixel_format, audio_codec, audio_bitrate,
container_format
)
return (
filename,
video_path,
width,
height,
fps,
total_frames,
duration_seconds,
video_codec,
video_bitrate,
pixel_format,
audio_codec,
audio_bitrate,
container_format,
duration_seconds_float,
full_info,
full_info_json
)
except Exception as e:
raise RuntimeError(f"Error analyzing video with python-ffmpeg: {str(e)}")
def get_video_info(self, video_path: str, ffprobe_path: str, use_python_ffmpeg: bool):
"""Get detailed information about a video file."""
video_path = os.path.abspath(video_path)
if not os.path.exists(video_path):
raise ValueError(f"Video file not found: {video_path}")
if use_python_ffmpeg:
return self.get_video_info_python_ffmpeg(video_path)
# Original ffmpeg/ffprobe implementation
probe_cmd = [
ffprobe_path,
'-v', 'quiet',
'-print_format', 'json',
'-show_format',
'-show_streams',
video_path
]
info_cmd = [
ffprobe_path,
'-i', video_path,
'-hide_banner'
]
try:
probe_result = subprocess.run(probe_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
probe_data = json.loads(probe_result.stdout)
info_result = subprocess.run(info_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
ffmpeg_output = info_result.stderr
# Initialize variables with default values
width = 0
height = 0
fps = 0.0
total_frames = 0
duration_seconds = 0
duration_seconds_float = 0.0
video_codec = "N/A"
video_bitrate = "N/A"
pixel_format = "N/A"
audio_codec = "N/A"
audio_bitrate = "N/A"
container_format = "N/A"
# Extract information from probe data
if 'format' in probe_data:
format_data = probe_data['format']
format_name = format_data.get('format_name', "N/A")
if 'mp4' in format_name.lower():
container_format = 'mp4'
else:
container_format = format_name.split(',')[0]
duration_seconds_float = float(format_data.get('duration', 0))
duration_seconds = int(duration_seconds_float)
# Process streams
for stream in probe_data.get('streams', []):
if stream['codec_type'] == 'video':
width = int(stream.get('width', 0))
height = int(stream.get('height', 0))
fps_str = stream.get('r_frame_rate', '')
if fps_str and fps_str != '0/0':
num, den = map(int, fps_str.split('/'))
fps = num / den if den != 0 else 0.0
total_frames = int(stream.get('nb_frames', 0))
if total_frames == 0 and fps > 0 and duration_seconds_float > 0:
total_frames = int(duration_seconds_float * fps)
video_codec = stream.get('codec_name', "N/A")
pixel_format = stream.get('pix_fmt', "N/A")
elif stream['codec_type'] == 'audio':
audio_codec = stream.get('codec_name', "N/A")
audio_bitrate = stream.get('bit_rate', "N/A")
if audio_bitrate != "N/A":
audio_bitrate = f"{int(int(audio_bitrate)/1000)}k"
# Extract video bitrate from ffmpeg output
video_bitrate = self.extract_bitrate(ffmpeg_output)
filename = os.path.basename(video_path)
# Create full info string
full_info = self.create_full_info_string(
video_path, width, height, fps, total_frames,
duration_seconds, duration_seconds_float, video_codec,
video_bitrate, pixel_format, audio_codec, audio_bitrate,
container_format
)
# Create JSON output
full_info_json = self.create_json_output(
filename, video_path, width, height, fps, total_frames,
duration_seconds, duration_seconds_float, video_codec,
video_bitrate, pixel_format, audio_codec, audio_bitrate,
container_format
)
return (
filename,
video_path,
width,
height,
fps,
total_frames,
duration_seconds,
video_codec,
video_bitrate,
pixel_format,
audio_codec,
audio_bitrate,
container_format,
duration_seconds_float,
full_info,
full_info_json
)
except subprocess.CalledProcessError as e:
raise RuntimeError(f"Error running ffmpeg/ffprobe: {e.stderr}")
except json.JSONDecodeError:
raise RuntimeError("Error parsing ffprobe output")
except Exception as e:
raise RuntimeError(f"Error analyzing video: {str(e)}")
@classmethod
def IS_CHANGED(cls, **kwargs):
return float("NaN")
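ffprobe reports the frame rate as a rational string in `r_frame_rate` (e.g. `"30000/1001"` for NTSC material), which both code paths above split on `/` and divide. A standalone sketch of that parsing, including the guard against the `"0/0"` placeholder ffprobe emits for streams with no known rate:

```python
def parse_fps(r_frame_rate: str) -> float:
    """Convert ffprobe's rational r_frame_rate string to a float fps."""
    if not r_frame_rate or r_frame_rate == "0/0":
        return 0.0
    num, den = map(int, r_frame_rate.split("/"))
    return num / den if den != 0 else 0.0

print(parse_fps("25/1"))                    # exact integer rate
print(round(parse_fps("30000/1001"), 3))    # NTSC, approximately 29.97
```

Keeping the rate as a fraction until the final division is what lets `total_frames = int(duration * fps)` stay accurate for NTSC-style rates, where rounding the fps early would accumulate error over long videos.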

116
web/js/concat_videos.js Normal file

@@ -0,0 +1,116 @@
import { app } from "../../../scripts/app.js";
app.registerExtension({
name: "Bjornulf.ConcatVideos",
async nodeCreated(node) {
if (node.comfyClass === "Bjornulf_ConcatVideos") {
// Initialize properties if not already set
node.properties = node.properties || {};
// Default output filename
const defaultOutputFilename = "concatenated.mp4";
// Ensure `output_filename` is initialized in properties
if (!node.properties.output_filename) {
node.properties.output_filename = defaultOutputFilename;
}
// Store the original serialize/configure methods
const originalSerialize = node.serialize;
const originalConfigure = node.configure;
// Override serialize to save `output_filename` and inputs
node.serialize = function() {
const data = originalSerialize ? originalSerialize.call(this) : {};
data.video_inputs = this.inputs
.filter(input => input.name.startsWith("video_path_"))
.map(input => ({
name: input.name,
type: input.type,
link: input.link || null,
}));
data.properties = { ...this.properties };
return data;
};
// Override configure to restore `output_filename` and inputs
node.configure = function(data) {
if (originalConfigure) {
originalConfigure.call(this, data);
}
if (data.video_inputs) {
data.video_inputs.forEach(inputData => {
if (!this.inputs.find(input => input.name === inputData.name)) {
const newInput = this.addInput(inputData.name, inputData.type);
newInput.link = inputData.link || null;
}
});
}
node.properties = { ...node.properties, ...data.properties };
// Ensure `output_filename` is always consistent
if (!node.properties.output_filename) {
node.properties.output_filename = defaultOutputFilename;
}
return true;
};
const updateInputs = () => {
const initialWidth = node.size[0];
const numVideosWidget = node.widgets.find(w => w.name === "number_of_videos");
if (!numVideosWidget) return;
const numVideos = numVideosWidget.value;
// Store existing connections before modifying inputs
const existingConnections = {};
node.inputs.forEach(input => {
if (input.link !== null) {
existingConnections[input.name] = input.link;
}
});
// Clear and update inputs
node.inputs = node.inputs.filter(input => !input.name.startsWith("video_path_"));
for (let i = 1; i <= numVideos; i++) {
const inputName = `video_path_${i}`;
const newInput = node.addInput(inputName, "STRING");
if (existingConnections[inputName] !== undefined) {
newInput.link = existingConnections[inputName];
}
}
// Synchronize `output_filename` with properties and widget
const outputFilenameWidget = node.widgets.find(w => w.name === "output_filename");
if (outputFilenameWidget) {
outputFilenameWidget.value = node.properties.output_filename;
}
// Adjust size and redraw
node.setSize(node.computeSize());
node.size[0] = Math.max(initialWidth, 200);
app.graph.setDirtyCanvas(true);
};
// Set up widget callbacks
const numVideosWidget = node.widgets.find(w => w.name === "number_of_videos");
if (numVideosWidget) {
numVideosWidget.callback = updateInputs;
}
// Ensure `output_filename` is properly initialized on node creation
let outputFilenameWidget = node.widgets.find(w => w.name === "output_filename");
if (!outputFilenameWidget) {
outputFilenameWidget = node.addWidget("string", "output_filename", node.properties.output_filename, value => {
node.properties.output_filename = value || defaultOutputFilename;
});
} else {
// Synchronize widget value with properties
outputFilenameWidget.value = node.properties.output_filename || defaultOutputFilename;
}
// Initialize inputs on node creation
requestAnimationFrame(updateInputs);
}
}
});

385
web/js/show_stuff.js Normal file

@@ -0,0 +1,385 @@
import { app } from "../../../scripts/app.js";
import { ComfyWidgets } from "../../../scripts/widgets.js";
// Styles for the text area
const textStyles = {
readOnly: true,
opacity: 1,
padding: "4px",
paddingLeft: "7px",
border: "1px solid #ccc",
borderRadius: "5px",
backgroundColor: "#222",
color: "Lime",
fontFamily: "Arial, sans-serif",
fontSize: "14px",
lineHeight: "1.4",
resize: "none",
overflowY: "auto",
};
app.registerExtension({
name: "Bjornulf.ShowStringText",
async beforeRegisterNodeDef(nodeType, nodeData, app) {
if (nodeData.name === "Bjornulf_ShowStringText") {
function populate(text) {
if (!Array.isArray(text)) {
console.warn("populate expects an array, got:", text);
return;
}
if (this.widgets) {
const pos = this.widgets.findIndex((w) => w.name === "text");
if (pos !== -1) {
for (let i = pos; i < this.widgets.length; i++) {
this.widgets[i].onRemove?.();
}
this.widgets.length = pos;
}
} else {
this.widgets = [];
}
text.forEach((list) => {
const existingWidget = this.widgets.find(
(w) => w.name === "text" && w.value === list
);
if (!existingWidget) {
const w = ComfyWidgets["STRING"](
this,
"text",
["STRING", { multiline: true }],
app
).widget;
w.inputEl.readOnly = true;
Object.assign(w.inputEl.style, textStyles);
// Determine color based on type
let color = "lime";
w.inputEl.style.color = color;
w.value = list;
}
});
requestAnimationFrame(() => {
const sz = this.computeSize();
if (sz[0] < this.size[0]) sz[0] = this.size[0];
if (sz[1] < this.size[1]) sz[1] = this.size[1];
this.onResize?.(sz);
app.graph.setDirtyCanvas(true, false);
});
}
// When the node is executed, we receive the input text; display it in the widget
const onExecuted = nodeType.prototype.onExecuted;
nodeType.prototype.onExecuted = function (message) {
const initialWidth = this.size[0];
onExecuted?.apply(this, arguments);
populate.call(this, message.text);
this.size[0] = Math.max(initialWidth, 200); // Ensure minimum width
// this.setSize(this.size[0], this.size[1]);
};
}
},
});
app.registerExtension({
name: "Bjornulf.ShowJson",
async beforeRegisterNodeDef(nodeType, nodeData, app) {
if (nodeData.name === "Bjornulf_ShowJson") {
function populate(text) {
if (!Array.isArray(text)) {
console.warn("populate expects an array, got:", text);
return;
}
if (this.widgets) {
const pos = this.widgets.findIndex((w) => w.name === "text");
if (pos !== -1) {
for (let i = pos; i < this.widgets.length; i++) {
this.widgets[i].onRemove?.();
}
this.widgets.length = pos;
}
} else {
this.widgets = [];
}
text.forEach((list) => {
const existingWidget = this.widgets.find(
(w) => w.name === "text" && w.value === list
);
if (!existingWidget) {
const w = ComfyWidgets["STRING"](
this,
"text",
["STRING", { multiline: true }],
app
).widget;
w.inputEl.readOnly = true;
Object.assign(w.inputEl.style, textStyles);
// Fixed display color for JSON output
w.inputEl.style.color = "pink";
w.value = list;
}
});
requestAnimationFrame(() => {
const sz = this.computeSize();
if (sz[0] < this.size[0]) sz[0] = this.size[0];
if (sz[1] < this.size[1]) sz[1] = this.size[1];
this.onResize?.(sz);
app.graph.setDirtyCanvas(true, false);
});
}
// When the node is executed, we receive the input text; display it in the widget
const onExecuted = nodeType.prototype.onExecuted;
nodeType.prototype.onExecuted = function (message) {
const initialWidth = this.size[0];
onExecuted?.apply(this, arguments);
populate.call(this, message.text);
this.size[0] = Math.max(initialWidth, 200); // Ensure minimum width
// this.setSize(this.size[0], this.size[1]);
};
}
},
});
app.registerExtension({
name: "Bjornulf.ShowInt",
async beforeRegisterNodeDef(nodeType, nodeData, app) {
if (nodeData.name === "Bjornulf_ShowInt") {
function populate(text) {
if (!Array.isArray(text)) {
console.warn("populate expects an array, got:", text);
return;
}
if (this.widgets) {
const pos = this.widgets.findIndex((w) => w.name === "text");
if (pos !== -1) {
for (let i = pos; i < this.widgets.length; i++) {
this.widgets[i].onRemove?.();
}
this.widgets.length = pos;
}
} else {
this.widgets = [];
}
text.forEach((list) => {
const existingWidget = this.widgets.find(
(w) => w.name === "text" && w.value === list
);
if (!existingWidget) {
const w = ComfyWidgets["STRING"](
this,
"text",
["STRING", { multiline: true }],
app
).widget;
w.inputEl.readOnly = true;
Object.assign(w.inputEl.style, textStyles);
// Fixed display color for integer output
w.inputEl.style.color = "#0096FF";
w.value = list;
}
});
requestAnimationFrame(() => {
const sz = this.computeSize();
if (sz[0] < this.size[0]) sz[0] = this.size[0];
if (sz[1] < this.size[1]) sz[1] = this.size[1];
this.onResize?.(sz);
app.graph.setDirtyCanvas(true, false);
});
}
// When the node is executed, we receive the input text; display it in the widget
const onExecuted = nodeType.prototype.onExecuted;
nodeType.prototype.onExecuted = function (message) {
onExecuted?.apply(this, arguments);
populate.call(this, message.text);
};
}
},
});
app.registerExtension({
name: "Bjornulf.ShowFloat",
async beforeRegisterNodeDef(nodeType, nodeData, app) {
if (nodeData.name === "Bjornulf_ShowFloat") {
function populate(text) {
if (!Array.isArray(text)) {
console.warn("populate expects an array, got:", text);
return;
}
if (this.widgets) {
const pos = this.widgets.findIndex((w) => w.name === "text");
if (pos !== -1) {
for (let i = pos; i < this.widgets.length; i++) {
this.widgets[i].onRemove?.();
}
this.widgets.length = pos;
}
} else {
this.widgets = [];
}
text.forEach((list) => {
const existingWidget = this.widgets.find(
(w) => w.name === "text" && w.value === list
);
if (!existingWidget) {
const w = ComfyWidgets["STRING"](
this,
"text",
["STRING", { multiline: true }],
app
).widget;
w.inputEl.readOnly = true;
Object.assign(w.inputEl.style, textStyles);
// Fixed display color for float output
w.inputEl.style.color = "orange";
w.value = list;
}
});
requestAnimationFrame(() => {
const sz = this.computeSize();
if (sz[0] < this.size[0]) sz[0] = this.size[0];
if (sz[1] < this.size[1]) sz[1] = this.size[1];
this.onResize?.(sz);
app.graph.setDirtyCanvas(true, false);
});
}
// When the node is executed, we receive the input text; display it in the widget
const onExecuted = nodeType.prototype.onExecuted;
nodeType.prototype.onExecuted = function (message) {
onExecuted?.apply(this, arguments);
populate.call(this, message.text);
};
}
},
});
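The four extensions in this file are identical except for the node name they match and the text color they apply, so the mapping could be factored out into one table. A minimal sketch (`SHOW_NODE_COLORS` and `colorForShowNode` are hypothetical names, not part of the file):

```javascript
// Hypothetical color table for the four "Show" nodes defined above.
const SHOW_NODE_COLORS = {
  Bjornulf_ShowStringText: "lime",
  Bjornulf_ShowJson: "pink",
  Bjornulf_ShowInt: "#0096FF",
  Bjornulf_ShowFloat: "orange",
};

// Resolve the display color for a node, defaulting to the string color.
function colorForShowNode(nodeName) {
  return SHOW_NODE_COLORS[nodeName] || "lime";
}
```

A single `registerExtension` loop over the table could then pass `colorForShowNode(nodeData.name)` into one shared `populate` implementation instead of repeating it four times.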


@@ -24,6 +24,11 @@ app.registerExtension({
     async beforeRegisterNodeDef(nodeType, nodeData, app) {
         if (nodeData.name === "Bjornulf_ShowText") {
             function populate(text) {
+                if (!Array.isArray(text)) {
+                    console.warn("populate expects an array, got:", text);
+                    return;
+                }
                 if (this.widgets) {
                     const pos = this.widgets.findIndex((w) => w.name === "text");
                     if (pos !== -1) {
@@ -32,30 +37,35 @@ app.registerExtension({
                         }
                         this.widgets.length = pos;
                     }
+                } else {
+                    this.widgets = [];
                 }
-                for (const list of text) {
-                    const w = ComfyWidgets["STRING"](this, "text", ["STRING", { multiline: true }], app).widget;
-                    w.inputEl.readOnly = true;
-                    Object.assign(w.inputEl.style, textAreaStyles);
-                    // Improved type detection
-                    let color = 'Lime'; // Default color for strings
-                    const value = list.toString().trim();
-                    if (/^-?\d+$/.test(value)) {
-                        color = '#0096FF'; // Integer
-                    } else if (/^-?\d*\.?\d+$/.test(value)) {
-                        color = 'orange'; // Float
-                    } else if (value.startsWith("If-Else ERROR: ")) {
-                        color = 'red'; // If-Else ERROR lines
-                    } else if (value.startsWith("tensor(")) {
-                        color = '#0096FF'; // Lines starting with "tensor("
-                    }
-                    w.inputEl.style.color = color;
-                    w.value = list;
-                }
+                text.forEach((list) => {
+                    const existingWidget = this.widgets.find(w => w.name === "text" && w.value === list);
+                    if (!existingWidget) {
+                        const w = ComfyWidgets["STRING"](this, "text", ["STRING", { multiline: true }], app).widget;
+                        w.inputEl.readOnly = true;
+                        Object.assign(w.inputEl.style, textAreaStyles);
+                        // Determine color based on type
+                        let color = 'Lime'; // Default color for strings
+                        const value = list.toString().trim();
+                        if (/^-?\d+$/.test(value)) {
+                            color = '#0096FF'; // Integer
+                        } else if (/^-?\d*\.?\d+$/.test(value)) {
+                            color = 'orange'; // Float
+                        } else if (value.startsWith("If-Else ERROR: ")) {
+                            color = 'red'; // If-Else ERROR lines
+                        } else if (value.startsWith("tensor(")) {
+                            color = '#0096FF'; // Lines starting with "tensor("
+                        }
+                        w.inputEl.style.color = color;
+                        w.value = list;
+                    }
+                });
requestAnimationFrame(() => {
const sz = this.computeSize();
@@ -66,6 +76,7 @@ app.registerExtension({
});
}
// When the node is executed we will be sent the input text, display this in the widget
const onExecuted = nodeType.prototype.onExecuted;
nodeType.prototype.onExecuted = function (message) {