README.md
@@ -1,9 +1,9 @@
-# 🔗 Comfyui : Bjornulf_custom_nodes v0.69 🔗
+# 🔗 Comfyui : Bjornulf_custom_nodes v0.70 🔗

-A list of 120 custom nodes for Comfyui : Display, manipulate, create and edit text, images, videos, loras, generate characters and more.
+A list of 128 custom nodes for Comfyui : Display, manipulate, create and edit text, images, videos, loras, generate characters and more.

You can manage looping operations, generate randomized content, trigger logical conditions, pause and manually control your workflows and even work with external AI tools, like Ollama or Text To Speech.

-# Watch Video Intro :
+# Watch Video Intro (Quick overview 28 minutes) :

[](https://youtu.be/jTg9QsgKYmA)

# Coffee : ☕☕☕☕☕ 5/5
@@ -26,6 +26,9 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`72.` [👁 Show (Float)](#72----show-float)
`73.` [👁 Show (String/Text)](#73----show-stringtext)
`74.` [👁 Show (JSON)](#74----show-json)
+`126.` [📒 Note](#126)
+`127.` [🖼📒 Image Note](#127)
+`128.` [🖼👁 Preview (first) image](#128)

## ✒ Text ✒
`2.` [✒ Write Text](#2----write-text)
@@ -45,7 +48,7 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`113.` [📝🔪 Text split in 5](#113----text-split-in-5)
`115.` [📥 Load Text From Bjornulf Folder](#115----load-text-from-bjornulf-folder)
`116.` [📥 Load Text From Path](#116----load-text-from-path)
-`117.` [📝👈 Line selector (🎲 Or random)](#117----line-selector--or-random)
+`117.` [📝👈 Line selector (🎲 or ♻ or ♻📑)](#117)

## 🔥 Text Generator 🔥
`81.` [🔥📝 Text Generator 📝🔥](#81----text-generator-)
@@ -81,8 +84,8 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`42.` [♻ Loop (Model+Clip+Vae) - aka Checkpoint / Model](#42----loop-modelclipvae---aka-checkpoint--model)
`53.` [♻ Loop Load checkpoint (Model Selector)](#53----loop-load-checkpoint-model-selector)
`54.` [♻👑 Loop Lora Selector](#54----loop-lora-selector)
-`56.` [♻📝 Loop Sequential (Integer)](#56----loop-sequential-integer)
-`57.` [♻📝 Loop Sequential (input Lines)](#57----loop-sequential-input-lines)
+`56.` [♻📑 Loop Sequential (Integer)](#56----loop-sequential-integer)
+`57.` [♻📑 Loop Sequential (input Lines)](#57----loop-sequential-input-lines)
`90.` [♻🔥📝 List Looper (Text Generator)](#8)
`91.` [♻🌄📝 List Looper (Text Generator Scenes)](#8)
`92.` [♻🎨📝 List Looper (Text Generator Styles)](#8)
@@ -101,17 +104,19 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`41.` [🎲 Random Load checkpoint (Model Selector)](#41----random-load-checkpoint-model-selector)
`48.` [🔀🎲 Text scrambler (🧑 Character)](#48----text-scrambler--character)
`55.` [🎲👑 Random Lora Selector](#55----random-lora-selector)
-`117.` [📝👈 Line selector (🎲 Or random)](#117----line-selector--or-random)
+`117.` [📝👈 Line selector (🎲 or ♻ or ♻📑)](#117)

-## 🖼💾 Image Save 💾🖼
+## 🖼💾 Save Image / Text 💾🖼
`16.` [💾🖼💬 Save image for Bjornulf LobeChat](#16----save-image-for-bjornulf-lobechat-for-my-custom-lobe-chat)
`17.` [💾🖼 Save image as `tmp_api.png` Temporary API](#17----save-image-as-tmp_apipng-temporary-api-%EF%B8%8F)
`18.` [💾🖼📁 Save image to a chosen folder name](#18----save-image-to-a-chosen-folder-name)
`14.` [💾🖼 Save Exact name](#1314------resize-and-save-exact-name-%EF%B8%8F)
+`123.` [💾 Save Global Variables](#123)

-## 🖼📥 Image Load 📥🖼
+## 🖼📥 Load Image / Text 📥🖼
`29.` [📥🖼 Load Image with Transparency ▢](#29----load-image-with-transparency-)
`43.` [📥🖼📂 Load Images from output folder](#43----load-images-from-output-folder)
+`124.` [📥 Load Global Variables](#124)

## 🖼 Image - others 🖼
`13.` [📏 Resize Image](#1314------resize-and-save-exact-name-%EF%B8%8F)
@@ -137,12 +142,14 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`40.` [🎲 Random (Model+Clip+Vae) - aka Checkpoint / Model](#40----random-modelclipvae---aka-checkpoint--model)
`41.` [🎲 Random Load checkpoint (Model Selector)](#41----random-load-checkpoint-model-selector)
`42.` [♻ Loop (Model+Clip+Vae) - aka Checkpoint / Model](#42----loop-modelclipvae---aka-checkpoint--model)
`53.` [♻ Loop Load checkpoint (Model Selector)](#53----loop-load-checkpoint-model-selector)
+`125.` [📝👈 Model-Clip-Vae selector (🎲 or ♻ or ♻📑)](#125)

## 🚀 Load loras 🚀
`54.` [♻ Loop Lora Selector](#54----loop-lora-selector)
`55.` [🎲 Random Lora Selector](#55----random-lora-selector)
`114.` [📥👑 Load Lora with Path](#114----load-lora-with-path)
+`122.` [👑 Combine Loras, Lora stack](#122)

## ☁ Image Creation : API / cloud / remote ☁
`106.` [☁🎨 API Image Generator (FalAI) ☁](#106----api-image-generator-falai-)
@@ -151,7 +158,7 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`109.` [☁🎨 API Image Generator (Black Forest Labs - Flux) ☁](#109----api-image-generator-black-forest-labs---flux-)
`110.` [☁🎨 API Image Generator (Stability - Stable Diffusion) ☁](#110----api-image-generator-stability---stable-diffusion-)

-## 📥 Take from CivitAI 📥
+## 📥 Take from CivitAI / Huggingface 📥
`98.` [📥 Load checkpoint SD1.5 (+Download from CivitAi)](#98----load-checkpoint-sd15-download-from-civitai)
`99.` [📥 Load checkpoint SDXL (+Download from CivitAi)](#99----load-checkpoint-sdxl-download-from-civitai)
`100.` [📥 Load checkpoint Pony (+Download from CivitAi)](#100----load-checkpoint-pony-download-from-civitai)
@@ -161,6 +168,7 @@ Support me and my work : ❤️❤️❤️ <https://ko-fi.com/bjornulf> ❤️
`104.` [📥👑 Load Lora SDXL (+Download from CivitAi)](#104----load-lora-sdxl-download-from-civitai)
`105.` [📥👑 Load Lora Pony (+Download from CivitAi)](#105----load-lora-pony-download-from-civitai)
`119.` [📥👑📹 Load Lora Hunyuan Video (+Download from CivitAi)](#119----load-lora-hunyuan-video-download-from-civitai)
+`121.` [💾 Huggingface Downloader](#121)

## 📹 Video 📹
`20.` [📹 Video Ping Pong](#20----video-ping-pong)
@@ -375,6 +383,11 @@ cd /where/you/installed/ComfyUI && python main.py
- **0.67**: Add kokoro TTS node.
- **0.68**: Update kokoro TTS node with connect_to_workflow and same outputs as XTTS.
- **0.69**: Small fixes
+- **0.70**: ❗Breaking changes : "Line Selector Node" is now a "universal node" : manual selection, random, and LOOP + Sequential.
+  Text replace now has a multiline option for regex (https://github.com/justUmen/Bjornulf_custom_nodes/issues/17) - can remove <think> tags from ollama.
+  8 new nodes : "🖼👁 Preview (first) image", "💾 Huggingface Downloader", "👑 Combine Loras, Lora stack", "📥 Load Global Variables", "💾 Save Global Variables", "📝👈 Model-Clip-Vae selector (🎲 or ♻ or ♻📑)", "📒 Note", "🖼📒 Image Note".
+  Fixed a lot of code everywhere, a slightly better logging system, etc.
+  WIP : Rewrite of all my ffmpeg nodes. (Still needs improvements and fixes, will do that in 0.71) Maybe don't use them yet...

# 📝 Nodes descriptions
@@ -1091,7 +1104,7 @@ Just take a single Lora at random from a list of Loras.



-### 56 - ♻📝 Loop Sequential (Integer)
+### 56 - ♻📑📝 Loop Sequential (Integer)

**Description:**
This loop works like a normal loop, BUT it is sequential : It will run only once for each workflow run !!!
@@ -1106,7 +1119,7 @@ Update 0.57: Now also contains the next counter in the reset button.

-
+

-### 57 - ♻📝 Loop Sequential (input Lines)
+### 57 - ♻📑 Loop Sequential (input Lines)

**Description:**
This loop works like a normal loop, BUT it is sequential : It will run only once for each workflow run !!!
@@ -1277,6 +1290,8 @@ Replace text with another text, allow regex and more options, check examples bel
-
+

+0.70 : Text replace now has a multiline option for regex.
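As an illustration of what the multiline regex option makes possible (the pattern below is illustrative, not the node's internal code), stripping a multi-line `<think>` block from Ollama output looks like this:

```python
import re

# A multi-line reasoning block, as emitted by an Ollama reasoning model.
text = "<think>\nstep 1...\nstep 2...\n</think>\nFinal answer."

# re.DOTALL lets "." match newlines, so the whole multi-line block is removed.
cleaned = re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL)
print(cleaned)  # -> Final answer.
```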

### 76 - ⚙📹 FFmpeg Configuration 📹⚙

**Description:**
@@ -1664,7 +1679,7 @@ If you want, with `Load Text From Path` you can also recover the elements in "Bj



-#### 117 - 📝👈 Line selector (🎲 Or random)
+#### 117 - 📝👈 Line selector (🎲 or ♻ or ♻📑)

**Description:**
@@ -1702,4 +1717,79 @@ The workflow below is included : `workflows/HUNYUAN_basic_lora.json`) :
Another Text to Speech node based on Kokoro : https://github.com/thewh1teagle/kokoro-onnx
Lightweight, much simpler, no configuration and fully integrated into Comfyui. (No external backend to run.)



#### 121 - 💾 Huggingface Downloader

**Description:**
This node allows you to download models/vae/unet etc... directly from huggingface with your access token.
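A hedged sketch of what such a downloader does under the hood, using the `huggingface_hub` library (an assumption; the node's actual implementation may differ, and `target_path` is a hypothetical helper):

```python
import os

def target_path(base_dir, repo_id, filename):
    # Mirror the repo locally, e.g. models/<owner>_<repo>/<file>.
    return os.path.join(base_dir, repo_id.replace("/", "_"), filename)

def download(repo_id, filename, token=None, base_dir="models"):
    # pip install huggingface_hub - gated/private repos need an access token
    # (https://huggingface.co/settings/tokens).
    from huggingface_hub import hf_hub_download
    local_dir = os.path.dirname(target_path(base_dir, repo_id, filename))
    return hf_hub_download(repo_id=repo_id, filename=filename,
                           token=token, local_dir=local_dir)
```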



#### 122 - 👑 Combine Loras, Lora stack

**Description:**
If you want to have multiple loras in a single node, this is it.



#### 123 - 💾 Save Global Variables

**Description:**
If you know how to use variables with my nodes, this node lets you create global variables.
This node is very simple : it just appends to (or overwrites) the file `Bjornulf/GlobalVariables.txt` (You can edit it manually if you want.)
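A minimal sketch of this append-or-overwrite behaviour. The file format (one `name=value` pair per line) is an assumption for illustration, not necessarily the node's exact format:

```python
import os
import tempfile

def save_global_variable(path, name, value):
    lines = []
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            lines = [l.rstrip("\n") for l in f]
    # Overwrite an existing variable of the same name, otherwise append.
    for i, line in enumerate(lines):
        if line.split("=", 1)[0] == name:
            lines[i] = f"{name}={value}"
            break
    else:
        lines.append(f"{name}={value}")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

path = os.path.join(tempfile.mkdtemp(), "GlobalVariables.txt")
save_global_variable(path, "style", "anime")
save_global_variable(path, "style", "realistic")  # overwrites, no duplicate line
content = open(path, encoding="utf-8").read()
```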



#### 124 - 📥 Load Global Variables

**Description:**
This node will load the global variables as text from the file `Bjornulf/GlobalVariables.txt`.
Here is an example of save/load usage :



#### 125 - 📝👈 Model-Clip-Vae selector (🎲 or ♻ or ♻📑)

**Description:**

If you want to use and manage multiple models/clip/vae : this is the universal node for it.
You can run them in a LOOP, pick one at RANDOM, run a LOOP SEQUENTIAL (one at a time for each workflow run) and even SELECT a specific one.
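A sketch of the selection modes such a "universal selector" offers. The mode names and the persisted-counter mechanism are illustrative assumptions, not the node's exact implementation (the batch LOOP mode, which emits every item, is omitted here):

```python
import random

def select(items, mode, index=0, counter=0):
    """Return (chosen_item, next_counter)."""
    if mode == "select":           # manual choice
        return items[index], counter
    if mode == "random":           # one at random
        return random.choice(items), counter
    if mode == "loop_sequential":  # advance once per workflow run
        return items[counter % len(items)], counter + 1
    raise ValueError(mode)

models = ["sd15.safetensors", "sdxl.safetensors", "pony.safetensors"]
first, c = select(models, "loop_sequential", counter=0)   # first run
second, c = select(models, "loop_sequential", counter=c)  # next run
```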



#### 126 - 📒 Note

**Description:**
Sometimes I want to add a NOTE, but I want this note to be connected to a specific spaghetti.
So you can use this to write details about a specific connection : it will move with it.

You can do whatever you want of course, below is an example about HunYuan video generation. (You can quickly switch the connection to the rest of your workflow, depending on what you want to run.)



#### 127 - 🖼📒 Image Note

**Description:**

You can use this node to show a previously generated image and some custom text. (Use image_path or IMAGE type.)


You can use the text to display the prompt used to generate the image, for example.

Sometimes I want to display an image to explain visually what something specific is doing. (For example a stack of loras will have a specific style.)
Here is a complex example of how I use that, for a list of lora stacks. (I then "select" a style by using node `125 - Model-Clip-Vae selector`)


#### 128 - 🖼👁 Preview (first) image

**Description:**
This node can display a preview of an image...
- But it can also take a list of images and preview only the first one. (Useful for video : it will take the first image.)
- But it can also take as input the full path of an image.
- BUT it can also take a video path as input and extract the first frame of it.

Very useful for testing when working with videos.
Below is a visual example of what I just said :


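A hedged sketch of how the video-path case can work: extracting the first frame with FFmpeg (illustrative only, not the node's actual code):

```python
def first_frame_cmd(video_path, out_image, ffmpeg_path="ffmpeg"):
    # -frames:v 1 stops encoding after the first decoded video frame.
    return [ffmpeg_path, "-y", "-i", video_path, "-frames:v", "1", out_image]

cmd = first_frame_cmd("output/clip.mp4", "preview.png")
# Run with: subprocess.run(cmd, check=True)
```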
__init__.py
@@ -1,5 +1,5 @@
from .show_stuff import ShowFloat, ShowInt, ShowStringText, ShowJson
-from .images_to_video import imagesToVideo
+from .ffmpeg_images_to_video import imagesToVideo
from .write_text import WriteText
from .text_replace import TextReplace
# from .write_image_environment import WriteImageEnvironment
@@ -17,7 +17,7 @@ from .loop_integer import LoopInteger
from .loop_basic_batch import LoopBasicBatch
from .loop_samplers import LoopSamplers
from .loop_schedulers import LoopSchedulers
-from .ollama import ollamaLoader
+# from .ollama import ollamaLoader OBSOLETE
from .show_text import ShowText
from .save_text import SaveText
from .save_tmp_image import SaveTmpImage
@@ -58,16 +58,16 @@ from .combine_images import CombineImages
from .text_scramble_character import ScramblerCharacter
from .audio_video_sync import AudioVideoSync
from .video_path_to_images import VideoToImagesList
-from .images_to_video_path import ImagesListToVideo
+from .ffmpeg_images_to_video_path import ImagesListToVideo
from .video_preview import VideoPreview
from .loop_model_selector import LoopModelSelector
from .random_lora_selector import RandomLoraSelector
from .loop_lora_selector import LoopLoraSelector
from .loop_sequential_integer import LoopIntegerSequential
from .loop_lines_sequential import LoopLinesSequential
-from .concat_videos import ConcatVideos
-from .concat_videos_from_list import ConcatVideosFromList
-from .combine_video_audio import CombineVideoAudio
+from .ffmpeg_concat_videos import ConcatVideos
+from .ffmpeg_concat_videos_from_list import ConcatVideosFromList
+from .ffmpeg_combine_video_audio import CombineVideoAudio
from .images_merger_horizontal import MergeImagesHorizontally
from .images_merger_vertical import MergeImagesVertically
from .ollama_talk import OllamaTalk
@@ -95,9 +95,26 @@ from .load_text import LoadTextFromFolder, LoadTextFromPath
from .string_splitter import TextSplitin5
from .line_selector import LineSelector
from .text_to_speech_kokoro import KokoroTTS
+from .note_text import DisplayNote
+from .note_image import ImageNote
+from .model_clip_vae_selector import ModelClipVaeSelector
+from .global_variables import LoadGlobalVariables, SaveGlobalVariables
+from .lora_stacks import AllLoraSelector
+from .hugginface_download import HuggingFaceDownloader
+from .preview_first_image import PreviewFirstImage
# from .video_latent import VideoLatentResolutionSelector
# from .empty_latent_video import EmptyVideoLatentWithSingle
# from .text_generator_t2v import TextGeneratorText2Video
NODE_CLASS_MAPPINGS = {
+    "Bjornulf_PreviewFirstImage": PreviewFirstImage,
+    "Bjornulf_HuggingFaceDownloader": HuggingFaceDownloader,
+    # "Bjornulf_VideoLatentResolutionSelector": VideoLatentResolutionSelector,
+    "Bjornulf_AllLoraSelector": AllLoraSelector,
+    "Bjornulf_LoadGlobalVariables": LoadGlobalVariables,
+    "Bjornulf_SaveGlobalVariables": SaveGlobalVariables,
+    "Bjornulf_ModelClipVaeSelector": ModelClipVaeSelector,
+    "Bjornulf_DisplayNote": DisplayNote,
+    "Bjornulf_ImageNote": ImageNote,
    "Bjornulf_LineSelector": LineSelector,
    # "Bjornulf_EmptyVideoLatentWithSingle": EmptyVideoLatentWithSingle,
    "Bjornulf_XTTSConfig": XTTSConfig,
@@ -146,7 +163,7 @@ NODE_CLASS_MAPPINGS = {
    "Bjornulf_ShowFloat": ShowFloat,
    "Bjornulf_ShowJson": ShowJson,
    "Bjornulf_ShowStringText": ShowStringText,
-    "Bjornulf_ollamaLoader": ollamaLoader,
+    # "Bjornulf_ollamaLoader": ollamaLoader, OBSOLETE
    "Bjornulf_FFmpegConfig": FFmpegConfig,
    "Bjornulf_ConvertVideo": ConvertVideo,
    "Bjornulf_AddLineNumbers": AddLineNumbers,
@@ -227,6 +244,15 @@ NODE_CLASS_MAPPINGS = {
}

NODE_DISPLAY_NAME_MAPPINGS = {
+    "Bjornulf_PreviewFirstImage": "🖼👁 Preview (first) image",
+    "Bjornulf_HuggingFaceDownloader": "💾 Huggingface Downloader",
+    "Bjornulf_AllLoraSelector": "👑 Combine Loras, Lora stack",
+    "Bjornulf_LoadGlobalVariables": "📥 Load Global Variables",
+    "Bjornulf_SaveGlobalVariables": "💾 Save Global Variables",
+    "Bjornulf_ModelClipVaeSelector": "📝👈 Model-Clip-Vae selector (🎲 or ♻ or ♻📑)",
+    "Bjornulf_DisplayNote": "📒 Note",
+    "Bjornulf_ImageNote": "🖼📒 Image Note",
+    # "Bjornulf_VideoLatentResolutionSelector": "🩷📹 Empty Video Latent Selector",
    # "Bjornulf_EmptyVideoLatentWithSingle": "Bjornulf_EmptyVideoLatentWithSingle",
    "Bjornulf_XTTSConfig": "🔊 TTS Configuration ⚙",
    "Bjornulf_TextToSpeech": "📝➜🔊 TTS - Text to Speech",
@@ -235,7 +261,7 @@ NODE_DISPLAY_NAME_MAPPINGS = {
    # "Bjornulf_APIHiResCivitAI": "🎨➜🎨 API Image hires fix (CivitAI)",
    # "Bjornulf_CivitAILoraSelector": "lora Civit",
    "Bjornulf_KokoroTTS": "📝➜🔊 Kokoro - Text to Speech",
-    "Bjornulf_LineSelector": "📝👈 Line selector (🎲 Or random)",
+    "Bjornulf_LineSelector": "📝👈 Line selector (🎲 or ♻ or ♻📑)",
    "Bjornulf_LoaderLoraWithPath": "📥👑 Load Lora with Path",
    # "Bjornulf_TextGeneratorText2Video": "🔥📝📹 Text Generator for text to video 📹📝🔥",
    "Bjornulf_TextSplitin5": "📝🔪 Text split in 5",
@@ -290,21 +316,21 @@ NODE_DISPLAY_NAME_MAPPINGS = {
    "Bjornulf_TextReplace": "📝➜📝 Replace text",
    "Bjornulf_AddLineNumbers": "🔢 Add line numbers",
    "Bjornulf_FFmpegConfig": "⚙📹 FFmpeg Configuration 📹⚙",
-    "Bjornulf_ConvertVideo": "📹➜📹 Convert Video",
-    "Bjornulf_VideoDetails": "📹🔍 Video details ⚙",
+    "Bjornulf_ConvertVideo": "📹➜📹 Convert Video (FFmpeg)",
+    "Bjornulf_VideoDetails": "📹🔍 Video details (FFmpeg) ⚙",
    "Bjornulf_WriteText": "✒ Write Text",
    "Bjornulf_MergeImagesHorizontally": "🖼🖼 Merge Images/Videos 📹📹 (Horizontally)",
    "Bjornulf_MergeImagesVertically": "🖼🖼 Merge Images/Videos 📹📹 (Vertically)",
    "Bjornulf_CombineVideoAudio": "📹🔊 Combine Video + Audio",
-    "Bjornulf_ConcatVideos": "📹🔗 Concat Videos",
-    "Bjornulf_ConcatVideosFromList": "📹🔗 Concat Videos from list",
-    "Bjornulf_LoopLinesSequential": "♻📝 Loop Sequential (input Lines)",
-    "Bjornulf_LoopIntegerSequential": "♻📝 Loop Sequential (Integer)",
+    "Bjornulf_ConcatVideos": "📹🔗 Concat Videos (FFmpeg)",
+    "Bjornulf_ConcatVideosFromList": "📹🔗 Concat Videos from list (FFmpeg)",
+    "Bjornulf_LoopLinesSequential": "♻📑 Loop Sequential (input Lines)",
+    "Bjornulf_LoopIntegerSequential": "♻📑 Loop Sequential (Integer)",
    "Bjornulf_LoopLoraSelector": "♻👑 Loop Lora Selector",
    "Bjornulf_RandomLoraSelector": "🎲👑 Random Lora Selector",
    "Bjornulf_LoopModelSelector": "♻ Loop Load checkpoint (Model Selector)",
    "Bjornulf_VideoPreview": "📹👁 Video Preview",
-    "Bjornulf_ImagesListToVideo": "🖼➜📹 Images to Video path (tmp video)",
+    "Bjornulf_ImagesListToVideo": "🖼➜📹 Images to Video path (tmp video) (FFmpeg)",
    "Bjornulf_VideoToImagesList": "📹➜🖼 Video Path to Images (Load video)",
    "Bjornulf_AudioVideoSync": "🔊📹 Audio Video Sync",
    "Bjornulf_ScramblerCharacter": "🔀🎲 Text scrambler (🧑 Character)",
@@ -5,7 +5,7 @@ import subprocess
from datetime import datetime
import math
from PIL import Image
-import logging
+# import logging
import torchvision.transforms as transforms

class AudioVideoSync:
@@ -361,7 +361,7 @@ class AudioVideoSync:
        if audio_duration is None or audio_duration == 0.0:
            audio_duration = self.get_audio_duration(AUDIO)

-        logging.info(f"Audio duration: {audio_duration}")
+        # logging.info(f"Audio duration: {audio_duration}")

        # Process input source
        if IMAGES is not None and len(IMAGES) > 0:
@@ -1,6 +1,6 @@
import torch
import numpy as np
-import logging
+# import logging

class CombineImages:
    @classmethod
@@ -24,8 +24,8 @@ class CombineImages:
    def all_in_one_images(self, number_of_images, all_in_one, **kwargs):
        images = [kwargs[f"image_{i}"] for i in range(1, number_of_images + 1) if f"image_{i}" in kwargs]

-        for i, img in enumerate(images):
-            logging.info(f"Image {i+1} shape: {img.shape}, dtype: {img.dtype}, min: {img.min()}, max: {img.max()}")
+        # for i, img in enumerate(images):
+        #     logging.info(f"Image {i+1} shape: {img.shape}, dtype: {img.dtype}, min: {img.min()}, max: {img.max()}")

        if all_in_one:
            # Check if all images have the same shape
@@ -1,13 +1,12 @@
import json
import subprocess
-import ffmpeg # Assuming the Python FFmpeg bindings (ffmpeg-python) are installed
+import ffmpeg

class FFmpegConfig:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
-                "use_python_ffmpeg": ("BOOLEAN", {"default": False}),
                "ffmpeg_path": ("STRING", {"default": "ffmpeg"}),
                "video_codec": ([
                    "None",
@@ -64,9 +63,9 @@ class FFmpegConfig:
                    "step": 0.01,
                    "description": "Force output FPS (0 = use source FPS)"
                }),
-                "width": ("INT", {"default": 1152, "min": 1, "max": 10000}),
-                "height": ("INT", {"default": 768, "min": 1, "max": 10000}),
+                "enabled_change_resolution": ("BOOLEAN", {"default": False}),
+                "width": ("INT", {"default": 0, "min": 0, "max": 10000}),
+                "height": ("INT", {"default": 0, "min": 0, "max": 10000}),

                "ignore_audio": ("BOOLEAN", {"default": False}),
                "audio_codec": ([
@@ -79,6 +78,11 @@ class FFmpegConfig:
                    "none"
                ], {"default": "aac"}),
                "audio_bitrate": ("STRING", {"default": "192k"}),
+                "force_transparency": ("BOOLEAN", {
+                    "default": False,
+                    "description": "Force transparency in WebM output"
+                }),
            }
        }
@@ -87,30 +91,22 @@ class FFmpegConfig:
    FUNCTION = "create_config"
    CATEGORY = "Bjornulf"

-    def get_ffmpeg_version(self, ffmpeg_path, use_python_ffmpeg):
-        if use_python_ffmpeg:
-            try:
-                # Retrieve Python ffmpeg-python version
-                return f"Python FFmpeg binding (ffmpeg-python) version: {ffmpeg.__version__}"
-            except AttributeError:
-                return "Python FFmpeg binding (ffmpeg-python) version: Unknown (no __version__ attribute)"
-        else:
-            try:
-                # Retrieve system FFmpeg version
-                result = subprocess.run(
-                    [ffmpeg_path, "-version"],
-                    stdout=subprocess.PIPE,
-                    stderr=subprocess.PIPE,
-                    text=True
-                )
-                version_line = result.stdout.splitlines()[0]
-                return version_line
-            except Exception as e:
-                return f"Error fetching FFmpeg version: {e}"
+    def get_ffmpeg_version(self, ffmpeg_path):
+        try:
+            result = subprocess.run(
+                [ffmpeg_path, "-version"],
+                stdout=subprocess.PIPE,
+                stderr=subprocess.PIPE,
+                text=True
+            )
+            version_line = result.stdout.splitlines()[0]
+            return version_line
+        except Exception as e:
+            return f"Error fetching FFmpeg version: {e}"

-    def create_json_output(self, config, use_python_ffmpeg):
+    def create_json_output(self, config):
        """Create a JSON string containing all FFmpeg configuration."""
-        ffmpeg_version = self.get_ffmpeg_version(config["ffmpeg_path"], use_python_ffmpeg)
+        ffmpeg_version = self.get_ffmpeg_version(config["ffmpeg_path"])
        config_info = {
            "ffmpeg": {
                "path": config["ffmpeg_path"],
@@ -122,14 +118,16 @@ class FFmpegConfig:
                "preset": config["preset"] or "None",
                "pixel_format": config["pixel_format"] or "None",
                "crf": config["crf"],
-                "resolution": {
-                    "width": config["width"],
-                    "height": config["height"]
-                },
+                "resolution": (
+                    {"width": config["width"], "height": config["height"]}
+                    if (config["enabled_change_resolution"] and config["width"] > 0 and config["height"] > 0)
+                    else None
+                ),
                "fps": {
                    "force_fps": config["force_fps"],
                    "enabled": config["force_fps"] > 0
-                }
+                },
+                "force_transparency": config["force_transparency"]
            },
            "audio": {
                "enabled": not config["ignore_audio"],
@@ -142,9 +140,10 @@ class FFmpegConfig:
        }
        return json.dumps(config_info, indent=2)

-    def create_config(self, ffmpeg_path, use_python_ffmpeg, ignore_audio, video_codec, audio_codec,
+    def create_config(self, ffmpeg_path, ignore_audio, video_codec, audio_codec,
                      video_bitrate, audio_bitrate, preset, pixel_format,
-                      container_format, crf, force_fps, width, height):
+                      container_format, crf, force_fps, enabled_change_resolution,
+                      width, height, force_transparency):

        config = {
            "ffmpeg_path": ffmpeg_path,
@@ -152,6 +151,7 @@ class FFmpegConfig:
            "preset": None if preset == "None" else preset,
            "crf": crf,
            "force_fps": force_fps,
+            "enabled_change_resolution": enabled_change_resolution,
            "ignore_audio": ignore_audio,
            "audio_bitrate": audio_bitrate,
            "width": width,
@@ -160,12 +160,14 @@ class FFmpegConfig:
            "pixel_format": None if pixel_format == "None" else pixel_format,
            "container_format": None if container_format == "None" else container_format,
            "audio_codec": None if audio_codec == "None" or ignore_audio else audio_codec,
+            "force_transparency": force_transparency
        }

-        return (self.create_json_output(config, use_python_ffmpeg),)
+        return (self.create_json_output(config),)

    @classmethod
-    def IS_CHANGED(cls, ffmpeg_path, use_python_ffmpeg, ignore_audio, video_codec, audio_codec,
-                   video_bitrate, audio_bitrate, preset, pixel_format,
-                   container_format, crf, force_fps, width, height) -> float:
-        return 0.0
+    def IS_CHANGED(cls, ffmpeg_path, ignore_audio, video_codec, audio_codec,
+                   video_bitrate, audio_bitrate, preset, pixel_format,
+                   container_format, crf, force_fps, enabled_change_resolution,
+                   width, height, force_transparency) -> float:
+        return 0.0
@@ -24,7 +24,7 @@ class ConvertVideo:
    CATEGORY = "Bjornulf"

    def __init__(self):
-        self.output_dir = Path(os.path.abspath("ffmpeg/converted_videos"))
+        self.output_dir = Path(os.path.abspath("Bjornulf/ffmpeg/converted_videos"))
        os.makedirs(self.output_dir, exist_ok=True)

    def get_default_config(self):
@@ -190,8 +190,7 @@ class ConvertVideo:
        # Use default configuration if no JSON is provided
        if FFMPEG_CONFIG_JSON is None:
            default_config = self.get_default_config()
-            # Create a JSON-like structure to match the parse_config_json method's expectations
-            FFMPEG_CONFIG_JSON = {
+            config_json = {
                'ffmpeg': {
                    'path': default_config['ffmpeg_path']
                },
@@ -204,7 +203,7 @@ class ConvertVideo:
                'fps': {
                    'force_fps': default_config['force_fps']
                },
-                'resolution': {
+                'resolution': None if default_config['width'] == 0 or default_config['height'] == 0 else {
                    'width': default_config['width'],
                    'height': default_config['height']
                }
@@ -218,8 +217,7 @@ class ConvertVideo:
                    'bitrate': default_config['audio_bitrate']
                }
            }
-            # Convert to JSON string
-            FFMPEG_CONFIG_JSON = json.dumps(FFMPEG_CONFIG_JSON)
+            FFMPEG_CONFIG_JSON = json.dumps(config_json)

        # Parse the JSON configuration
        FFMPEG_CONFIG_JSON = self.parse_config_json(FFMPEG_CONFIG_JSON)
@@ -240,7 +238,6 @@ class ConvertVideo:
            FFMPEG_CONFIG_JSON['ffmpeg_path'], '-y',
            '-i', str(input_path)
        ]
-
        # Add video codec settings if not None
        if FFMPEG_CONFIG_JSON['video_codec'] is not None:
            if FFMPEG_CONFIG_JSON['video_codec'] == 'copy':
@@ -251,8 +248,8 @@ class ConvertVideo:
        if FFMPEG_CONFIG_JSON['preset'] is not None:
            cmd.extend(['-preset', FFMPEG_CONFIG_JSON['preset']])

-        if FFMPEG_CONFIG_JSON['width'] and FFMPEG_CONFIG_JSON['height']:
-            cmd.extend(['-vf', f'scale={FFMPEG_CONFIG_JSON["width"]}:{FFMPEG_CONFIG_JSON["height"]}'])
+        if 'resolution' in FFMPEG_CONFIG_JSON and FFMPEG_CONFIG_JSON['resolution'] is not None:
+            cmd.extend(['-vf', f'scale={FFMPEG_CONFIG_JSON["resolution"]["width"]}:{FFMPEG_CONFIG_JSON["resolution"]["height"]}'])

        if FFMPEG_CONFIG_JSON['video_bitrate']:
            cmd.extend(['-b:v', FFMPEG_CONFIG_JSON['video_bitrate']])
@@ -268,18 +265,18 @@ class ConvertVideo:
        if FFMPEG_CONFIG_JSON['force_fps'] > 0:
            cmd.extend(['-r', str(FFMPEG_CONFIG_JSON['force_fps'])])

-        # Add audio codec settings
-        if FFMPEG_CONFIG_JSON['ignore_audio'] or FFMPEG_CONFIG_JSON['audio_codec'] is None:
-            cmd.extend(['-an'])
-        elif FFMPEG_CONFIG_JSON['audio_codec'] == 'copy':
-            cmd.extend(['-c:a', 'copy'])
-        else:
-            cmd.extend([
-                '-c:a', FFMPEG_CONFIG_JSON['audio_codec'],
-                '-b:a', FFMPEG_CONFIG_JSON['audio_bitrate']
-            ])
+        # Add audio codec settings
+        if FFMPEG_CONFIG_JSON['ignore_audio'] or FFMPEG_CONFIG_JSON['audio_codec'] is None:
+            cmd.extend(['-an'])
+        elif FFMPEG_CONFIG_JSON['audio_codec'] == 'copy':
+            cmd.extend(['-c:a', 'copy'])
+        else:
+            cmd.extend([
+                '-c:a', FFMPEG_CONFIG_JSON['audio_codec'],
+                '-b:a', FFMPEG_CONFIG_JSON['audio_bitrate']
+            ])

-        cmd.append(str(output_path))
+        cmd.append(str(output_path))

        # Convert command list to string
        ffmpeg_command = ' '.join(cmd)
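To make the command construction in the hunk above concrete, here is a standalone sketch with a hypothetical parsed config (same keys as in the diff; values are made up for illustration):

```python
cfg = {
    "ffmpeg_path": "ffmpeg",
    "video_codec": "libx264",
    "preset": "medium",
    "resolution": {"width": 1280, "height": 720},
    "video_bitrate": None,
    "force_fps": 0,
    "ignore_audio": False,
    "audio_codec": "aac",
    "audio_bitrate": "192k",
}

cmd = [cfg["ffmpeg_path"], "-y", "-i", "in.mp4"]
if cfg["video_codec"]:
    cmd += ["-c:v", cfg["video_codec"]]
if cfg["preset"]:
    cmd += ["-preset", cfg["preset"]]
if cfg.get("resolution"):  # None when resolution change is disabled
    cmd += ["-vf", f'scale={cfg["resolution"]["width"]}:{cfg["resolution"]["height"]}']
if cfg["force_fps"] > 0:   # 0 means keep the source FPS
    cmd += ["-r", str(cfg["force_fps"])]
if cfg["ignore_audio"] or cfg["audio_codec"] is None:
    cmd += ["-an"]
else:
    cmd += ["-c:a", cfg["audio_codec"], "-b:a", cfg["audio_bitrate"]]
cmd.append("out.mp4")

print(" ".join(cmd))
# -> ffmpeg -y -i in.mp4 -c:v libx264 -preset medium -vf scale=1280:720 -c:a aac -b:a 192k out.mp4
```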
ffmpeg_images_to_video.py
@@ -0,0 +1,237 @@
import os
import numpy as np
import torch
import subprocess
import json
from PIL import Image
import soundfile as sf
import glob

class imagesToVideo:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "images": ("IMAGE",),
                "fps": ("FLOAT", {"default": 24, "min": 1, "max": 120}),
                "name_prefix": ("STRING", {"default": "imgs2video/me"}),
                "use_python_ffmpeg": ("BOOLEAN", {"default": False}),
            },
            "optional": {
                "audio": ("AUDIO",),
                "FFMPEG_CONFIG_JSON": ("STRING", {"forceInput": True}),
            },
        }

    RETURN_TYPES = ("STRING", "STRING",)
    RETURN_NAMES = ("comment", "ffmpeg_command",)
    FUNCTION = "image_to_video"
    OUTPUT_NODE = True
    CATEGORY = "Bjornulf"

    def parse_ffmpeg_config(self, config_json):
        if not config_json:
            return None
        try:
            return json.loads(config_json)
        except json.JSONDecodeError:
            print("Error parsing FFmpeg config JSON")
            return None
    def run_ffmpeg_python(self, ffmpeg_cmd, output_file, ffmpeg_path):
        try:
            import ffmpeg
        except ImportError as e:
            print(f"Error importing ffmpeg-python: {e}")
            return False, "ffmpeg-python library not installed"

        try:
            # Reconstruct the command using ffmpeg-python syntax
            inputs = []
            streams = []
            audio_added = False

            # Parse command elements
            i = 0
            while i < len(ffmpeg_cmd):
                if ffmpeg_cmd[i] == "-framerate":
                    framerate = float(ffmpeg_cmd[i+1])
                    i += 2
                elif ffmpeg_cmd[i] == "-i":
                    if "frame_" in ffmpeg_cmd[i+1]:  # Image sequence input
                        video_input = ffmpeg.input(ffmpeg_cmd[i+1], framerate=framerate)
                        streams.append(video_input.video)
                    else:  # Audio input
                        audio_input = ffmpeg.input(ffmpeg_cmd[i+1])
                        streams.append(audio_input.audio)
                        audio_added = True
                    i += 2
                elif ffmpeg_cmd[i] == "-vf":
                    filters = ffmpeg_cmd[i+1].split(',')
                    for f in filters:
                        if 'scale=' in f:
                            w, h = f.split('=')[1].split(':')
                            video_input = video_input.filter('scale', w, h)
                    i += 2
                elif ffmpeg_cmd[i] in ["-c:v", "-preset", "-crf", "-cq", "-b:v", "-pix_fmt"]:
                    key = ffmpeg_cmd[i][1:]
                    value = ffmpeg_cmd[i+1]
                    if key == 'c:v':
                        streams[-1] = streams[-1].output(vcodec=value)
                    elif key == 'preset':
                        streams[-1] = streams[-1].output(preset=value)
                    elif key in ['crf', 'cq']:
                        streams[-1] = streams[-1].output(**{key: value})
                    elif key == 'b:v':
                        streams[-1] = streams[-1].output(**{'b:v': value})
                    elif key == 'pix_fmt':
                        streams[-1] = streams[-1].output(pix_fmt=value)
                    i += 2
                else:
                    i += 1

            # Handle output
            output = ffmpeg.output(*streams, output_file)
            output.run(cmd=ffmpeg_path, overwrite_output=True)
            return True, "Success"

        except ffmpeg.Error as e:
            return False, f"FFmpeg error: {e.stderr.decode()}"
        except Exception as e:
            return False, f"Error: {str(e)}"

    def image_to_video(self, images, fps, name_prefix, use_python_ffmpeg=False, audio=None, FFMPEG_CONFIG_JSON=None):
        ffmpeg_config = self.parse_ffmpeg_config(FFMPEG_CONFIG_JSON)

        format = "mp4"
        if ffmpeg_config and ffmpeg_config["output"]["container_format"] != "None":
            format = ffmpeg_config["output"]["container_format"]

        name_prefix = os.path.splitext(name_prefix)[0]
        output_base = os.path.join("output", name_prefix)

        existing_files = glob.glob(f"{output_base}_*.{format}")
|
||||
if existing_files:
|
||||
max_num = max([int(f.split('_')[-1].split('.')[0]) for f in existing_files])
|
||||
next_num = max_num + 1
|
||||
else:
|
||||
next_num = 1
|
||||
|
||||
output_file = f"{output_base}_{next_num:04d}.{format}"
|
||||
|
||||
temp_dir = "Bjornulf/temp_images_imgs2video"
|
||||
if os.path.exists(temp_dir) and os.path.isdir(temp_dir):
|
||||
for file in os.listdir(temp_dir):
|
||||
os.remove(os.path.join(temp_dir, file))
|
||||
os.rmdir(temp_dir)
|
||||
|
||||
os.makedirs(temp_dir, exist_ok=True)
|
||||
os.makedirs(os.path.dirname(output_file) if os.path.dirname(output_file) else ".", exist_ok=True)
|
||||
|
||||
for i, img_tensor in enumerate(images):
|
||||
img = Image.fromarray((img_tensor.cpu().numpy() * 255).astype(np.uint8))
|
||||
if format == "webm":
|
||||
img = img.convert("RGBA")
|
||||
img.save(os.path.join(temp_dir, f"frame_{i:04d}.png"))
|
||||
|
||||
temp_audio_file = None
|
||||
if audio is not None and (not ffmpeg_config or not ffmpeg_config["audio"]["enabled"]):
|
||||
temp_audio_file = os.path.join(temp_dir, "temp_audio.wav")
|
||||
waveform = audio['waveform'].squeeze().numpy()
|
||||
sample_rate = audio['sample_rate']
|
||||
sf.write(temp_audio_file, waveform, sample_rate)
|
||||
|
||||
ffmpeg_path = "ffmpeg"
|
||||
if ffmpeg_config and ffmpeg_config["ffmpeg"]["path"]:
|
||||
ffmpeg_path = ffmpeg_config["ffmpeg"]["path"]
|
||||
|
||||
ffmpeg_cmd = [
|
||||
ffmpeg_path,
|
||||
"-y",
|
||||
"-framerate", str(fps),
|
||||
"-i", os.path.join(temp_dir, "frame_%04d.png"),
|
||||
]
|
||||
|
||||
if temp_audio_file:
|
||||
ffmpeg_cmd.extend(["-i", temp_audio_file])
|
||||
|
||||
if ffmpeg_config and format == "webm" and ffmpeg_config["video"]["force_transparency"]:
|
||||
ffmpeg_cmd.extend([
|
||||
"-vf", "scale=iw:ih,format=rgba,split[s0][s1];[s0]lutrgb=r=0:g=0:b=0:a=0[transparent];[transparent][s1]overlay"
|
||||
])
|
||||
|
||||
if ffmpeg_config:
|
||||
if ffmpeg_config["video"]["codec"] != "None":
|
||||
ffmpeg_cmd.extend(["-c:v", ffmpeg_config["video"]["codec"]])
|
||||
|
||||
if ffmpeg_config["video"]["preset"] != "None":
|
||||
ffmpeg_cmd.extend(["-preset", ffmpeg_config["video"]["preset"]])
|
||||
|
||||
if ffmpeg_config["video"]["bitrate"]:
|
||||
ffmpeg_cmd.extend(["-b:v", ffmpeg_config["video"]["bitrate"]])
|
||||
|
||||
if ffmpeg_config["video"]["crf"]:
|
||||
if "nvenc" in (ffmpeg_config["video"]["codec"] or ""):
|
||||
ffmpeg_cmd.extend(["-cq", str(ffmpeg_config["video"]["crf"])])
|
||||
else:
|
||||
ffmpeg_cmd.extend(["-crf", str(ffmpeg_config["video"]["crf"])])
|
||||
|
||||
if ffmpeg_config["video"]["pixel_format"] != "None":
|
||||
ffmpeg_cmd.extend(["-pix_fmt", ffmpeg_config["video"]["pixel_format"]])
|
||||
|
||||
if ffmpeg_config["video"]["resolution"]:
|
||||
scale_filter = f"scale={ffmpeg_config['video']['resolution']['width']}:{ffmpeg_config['video']['resolution']['height']}"
|
||||
if format == "webm" and ffmpeg_config["video"]["force_transparency"]:
|
||||
current_filter_idx = ffmpeg_cmd.index("-vf") + 1
|
||||
current_filter = ffmpeg_cmd[current_filter_idx]
|
||||
ffmpeg_cmd[current_filter_idx] = scale_filter + "," + current_filter
|
||||
else:
|
||||
ffmpeg_cmd.extend(["-vf", scale_filter])
|
||||
|
||||
if ffmpeg_config["video"]["fps"]["enabled"]:
|
||||
ffmpeg_cmd.extend(["-r", str(ffmpeg_config["video"]["fps"]["force_fps"])])
|
||||
|
||||
if not ffmpeg_config["audio"]["enabled"]:
|
||||
ffmpeg_cmd.extend(["-an"])
|
||||
elif ffmpeg_config["audio"]["codec"] != "None" and temp_audio_file:
|
||||
ffmpeg_cmd.extend(["-c:a", ffmpeg_config["audio"]["codec"]])
|
||||
if ffmpeg_config["audio"]["bitrate"]:
|
||||
ffmpeg_cmd.extend(["-b:a", ffmpeg_config["audio"]["bitrate"]])
|
||||
else:
|
||||
if format == "mp4":
|
||||
ffmpeg_cmd.extend([
|
||||
"-c:v", "libx264",
|
||||
"-preset", "medium",
|
||||
"-crf", "19",
|
||||
"-pix_fmt", "yuv420p"
|
||||
])
|
||||
if temp_audio_file:
|
||||
ffmpeg_cmd.extend(["-c:a", "aac"])
|
||||
elif format == "webm":
|
||||
ffmpeg_cmd.extend([
|
||||
"-c:v", "libvpx-vp9",
|
||||
"-crf", "30",
|
||||
"-b:v", "0",
|
||||
"-pix_fmt", "yuva420p"
|
||||
])
|
||||
if temp_audio_file:
|
||||
ffmpeg_cmd.extend(["-c:a", "libvorbis"])
|
||||
|
||||
ffmpeg_cmd.append(output_file)
|
||||
|
||||
try:
|
||||
if use_python_ffmpeg:
|
||||
success, message = self.run_ffmpeg_python(ffmpeg_cmd, output_file, ffmpeg_path)
|
||||
comment = f"Python FFmpeg: {message}" if not success else f"Video created successfully with {'custom' if ffmpeg_config else 'default'} settings (Python FFmpeg)"
|
||||
else:
|
||||
subprocess.run(ffmpeg_cmd, check=True)
|
||||
comment = f"Video created successfully with {'custom' if ffmpeg_config else 'default'} FFmpeg settings"
|
||||
|
||||
print(f"Video created successfully: {output_file}")
|
||||
except subprocess.CalledProcessError as e:
|
||||
print(f"Error creating video: {e}")
|
||||
comment = f"Error creating video: {e}"
|
||||
finally:
|
||||
print("Temporary files not removed for debugging purposes.")
|
||||
|
||||
return (comment,ffmpeg_cmd,)
|
||||
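The output-naming scheme above (scan for existing `<base>_NNNN.<ext>` files, continue from the highest number) can be sketched standalone; `next_index` is my name for the helper, not part of the node:

```python
import glob
import os
import tempfile

def next_index(output_base, ext):
    # Same numbering scheme as the node: take the "_NNNN" suffix of every
    # existing file and continue from the highest number found, else start at 1.
    existing = glob.glob(f"{output_base}_*.{ext}")
    if not existing:
        return 1
    return max(int(f.split('_')[-1].split('.')[0]) for f in existing) + 1

with tempfile.TemporaryDirectory() as d:
    base = os.path.join(d, "me")
    for n in (1, 7):
        open(f"{base}_{n:04d}.mp4", "w").close()
    print(next_index(base, "mp4"))  # → 8
```

Note that gaps are not reused: with `_0001` and `_0007` present, the next file is `_0008`, which keeps the output order stable across deletions.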
182	ffmpeg_images_to_video_path.py	Normal file
@@ -0,0 +1,182 @@
import os
import uuid
import subprocess
import tempfile
import torch
import numpy as np
from PIL import Image
import wave
import json

class ImagesListToVideo:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "images": ("IMAGE",),
                "frames_per_second": ("FLOAT", {"default": 30, "min": 1, "max": 120, "step": 1}),
            },
            "optional": {
                "audio_path": ("STRING", {"default": "", "multiline": False}),
                "audio": ("AUDIO", {"default": None}),
                "FFMPEG_CONFIG_JSON": ("STRING", {"default": None}),
            }
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("video_path",)
    FUNCTION = "images_to_video"
    CATEGORY = "Bjornulf"

    def parse_ffmpeg_config(self, config_json):
        if not config_json:
            return None
        try:
            return json.loads(config_json)
        except json.JSONDecodeError:
            print("Invalid FFmpeg configuration JSON")
            return None

    def build_ffmpeg_command(self, input_pattern, output_path, fps, config=None):
        if not config:
            return [
                "ffmpeg",
                "-framerate", str(fps),
                "-i", input_pattern,
                "-c:v", "libx264",
                "-pix_fmt", "yuv420p",
                "-crf", "19"
            ]

        cmd = [config["ffmpeg"]["path"]] if config["ffmpeg"]["path"] else ["ffmpeg"]
        cmd.extend(["-framerate", str(config["video"]["fps"]["force_fps"] if config["video"]["fps"]["enabled"] else fps)])
        cmd.extend(["-i", input_pattern])

        # Video settings
        if config["video"]["codec"] not in [None, "None", "copy"]:
            cmd.extend(["-c:v", config["video"]["codec"]])

        if config["video"]["pixel_format"] not in [None, "None"]:
            cmd.extend(["-pix_fmt", config["video"]["pixel_format"]])

        if config["video"]["preset"] not in [None, "None"]:
            cmd.extend(["-preset", config["video"]["preset"]])

        if config["video"]["bitrate"] not in [None, "None", ""]:
            cmd.extend(["-b:v", config["video"]["bitrate"]])

        if config["video"]["crf"] is not None:
            cmd.extend(["-crf", str(config["video"]["crf"])])

        if config["video"]["resolution"] and config["video"]["resolution"]["width"] > 0 and config["video"]["resolution"]["height"] > 0:
            cmd.extend(["-s", f"{config['video']['resolution']['width']}x{config['video']['resolution']['height']}"])

        return cmd

    def images_to_video(self, images, frames_per_second=30, audio_path="", audio=None, FFMPEG_CONFIG_JSON=None):
        # The keyword must match the optional input name, otherwise ComfyUI cannot pass it in
        config = self.parse_ffmpeg_config(FFMPEG_CONFIG_JSON)

        output_dir = os.path.join("Bjornulf", "images_to_video")
        os.makedirs(output_dir, exist_ok=True)

        # Determine output format
        output_format = "mp4"
        if config and config["output"]["container_format"] not in [None, "None"]:
            output_format = config["output"]["container_format"]

        video_filename = f"video_{uuid.uuid4().hex}.{output_format}"
        video_path = os.path.join(output_dir, video_filename)

        with tempfile.TemporaryDirectory() as temp_dir:
            for i, img in enumerate(images):
                img_np = self.convert_to_numpy(img)
                if img_np.shape[-1] != 3:
                    img_np = self.convert_to_rgb(img_np)
                img_pil = Image.fromarray(img_np)
                img_path = os.path.join(temp_dir, f"frame_{i:05d}.png")
                img_pil.save(img_path)

            input_pattern = os.path.join(temp_dir, "frame_%05d.png")
            ffmpeg_cmd = self.build_ffmpeg_command(input_pattern, video_path, frames_per_second, config)

            # Handle audio (skipped entirely when the config disables it)
            temp_audio_path = None
            if not config or config["audio"]["enabled"]:
                if audio is not None and isinstance(audio, dict):
                    waveform = audio['waveform'].numpy().squeeze()
                    sample_rate = audio['sample_rate']
                    temp_audio_path = os.path.join(temp_dir, "temp_audio.wav")
                    self.write_wav(temp_audio_path, waveform, sample_rate)
                elif audio_path and os.path.isfile(audio_path):
                    temp_audio_path = audio_path

            if temp_audio_path:
                temp_video = os.path.join(temp_dir, "temp_video.mp4")
                temp_cmd = ffmpeg_cmd + ["-y", temp_video]

                try:
                    subprocess.run(temp_cmd, check=True, capture_output=True, text=True)

                    audio_cmd = [
                        config["ffmpeg"]["path"] if config and config["ffmpeg"]["path"] else "ffmpeg",
                        "-i", temp_video,
                        "-i", temp_audio_path,
                        "-c:v", "copy"
                    ]

                    # Audio codec settings from config
                    if config and config["audio"]["codec"] not in [None, "None"]:
                        audio_cmd.extend(["-c:a", config["audio"]["codec"]])
                    else:
                        audio_cmd.extend(["-c:a", "aac"])

                    if config and config["audio"]["bitrate"]:
                        audio_cmd.extend(["-b:a", config["audio"]["bitrate"]])

                    audio_cmd.extend(["-shortest", "-y", video_path])

                    subprocess.run(audio_cmd, check=True, capture_output=True, text=True)
                except subprocess.CalledProcessError as e:
                    print(f"FFmpeg error: {e.stderr}")
                    return ("",)
            else:
                ffmpeg_cmd.append("-y")
                ffmpeg_cmd.append(video_path)
                try:
                    subprocess.run(ffmpeg_cmd, check=True, capture_output=True, text=True)
                except subprocess.CalledProcessError as e:
                    print(f"FFmpeg error: {e.stderr}")
                    return ("",)

        return (video_path,)

    def write_wav(self, file_path, audio_data, sample_rate):
        with wave.open(file_path, 'wb') as wav_file:
            wav_file.setnchannels(1)
            wav_file.setsampwidth(2)
            wav_file.setframerate(sample_rate)
            audio_data = np.int16(audio_data * 32767)
            wav_file.writeframes(audio_data.tobytes())

    def convert_to_numpy(self, img):
        if isinstance(img, torch.Tensor):
            img = img.cpu().numpy()
        if img.dtype == np.uint8:
            return img
        elif img.dtype == np.float32 or img.dtype == np.float64:
            return (img * 255).astype(np.uint8)
        else:
            raise ValueError(f"Unsupported data type: {img.dtype}")

    def convert_to_rgb(self, img):
        if img.shape[-1] == 1:
            return np.repeat(img, 3, axis=-1)
        elif img.shape[-1] == 768:
            img = img.reshape((-1, 3))
            img = (img - img.min()) / (img.max() - img.min())
            img = (img * 255).astype(np.uint8)
            return img.reshape((img.shape[0], -1, 3))
        elif len(img.shape) == 2:
            return np.stack([img, img, img], axis=-1)
        else:
            raise ValueError(f"Unsupported image shape: {img.shape}")
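`write_wav` above converts float samples to 16-bit PCM by scaling with 32767. A minimal standalone sketch of that scaling step (`to_int16` is my name for it), which assumes the input floats are already normalized to [-1.0, 1.0]:

```python
import numpy as np

def to_int16(audio_float):
    # Same scaling write_wav applies: floats in [-1.0, 1.0] -> 16-bit PCM.
    # Values outside that range would wrap around, so normalize first.
    return np.int16(audio_float * 32767)

pcm = to_int16(np.array([0.0, -1.0, 1.0]))
print(pcm.dtype)  # → int16
```

Despite the comment in the original ("Normalize and convert"), no normalization happens there; out-of-range samples overflow silently, which is worth keeping in mind when feeding raw waveforms.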
77	global_variables.py	Normal file
@@ -0,0 +1,77 @@
import os
import folder_paths


class SaveGlobalVariables:
    def __init__(self):
        self.output_dir = os.path.join(folder_paths.base_path, 'Bjornulf')
        self.filename = os.path.join(self.output_dir, 'GlobalVariables.txt')
        os.makedirs(self.output_dir, exist_ok=True)

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "variables": ("STRING", {"multiline": True, "default": ""}),
                "mode": (["append", "overwrite"], {"default": "append"}),
            },
        }

    RETURN_TYPES = ()
    FUNCTION = "save_variables"
    OUTPUT_NODE = True
    CATEGORY = "Bjornulf"

    def save_variables(self, variables, mode):
        # Clean and validate input
        new_lines = set(line.strip() for line in variables.split('\n') if line.strip())

        if mode == "overwrite":
            with open(self.filename, 'w', encoding='utf-8') as f:
                f.write('\n'.join(new_lines) + '\n')
        else:  # append mode
            if os.path.exists(self.filename):
                with open(self.filename, 'r', encoding='utf-8') as f:
                    existing_lines = set(line.strip() for line in f.readlines() if line.strip())
            else:
                existing_lines = set()

            # Add only new unique lines
            unique_lines = new_lines - existing_lines
            if unique_lines:
                with open(self.filename, 'a', encoding='utf-8') as f:
                    f.write('\n'.join(unique_lines) + '\n')

        return ()


class LoadGlobalVariables:
    def __init__(self):
        self.output_dir = os.path.join(folder_paths.base_path, 'Bjornulf')
        self.filename = os.path.join(self.output_dir, 'GlobalVariables.txt')
        os.makedirs(self.output_dir, exist_ok=True)

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "seed": ("INT", {
                "default": -1,
                "min": -1,
                "max": 0x7FFFFFFFFFFFFFFF
            }),
        }}

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("variables",)
    FUNCTION = "load_variables"
    CATEGORY = "Bjornulf"

    def load_variables(self, seed):
        if not os.path.exists(self.filename):
            return ("",)

        with open(self.filename, 'r', encoding='utf-8', errors='ignore') as f:
            content = f.read().strip()

        if hasattr(os, "sync"):
            os.sync()  # Ensures pending file writes are flushed to disk (not available on Windows)
        return (content,)
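The append mode above deduplicates with a set difference: only lines not already stored are written. A standalone sketch of that logic (`merge_unique` is my name, not part of the node):

```python
def merge_unique(existing_text, new_text):
    # Mirror of append mode: strip whitespace, drop empty lines,
    # keep only lines that are not already stored.
    existing = set(line.strip() for line in existing_text.split('\n') if line.strip())
    new = set(line.strip() for line in new_text.split('\n') if line.strip())
    return new - existing

added = merge_unique("name=Alice\nstyle=anime", "style=anime\ncity=Oslo\n")
print(added)  # → {'city=Oslo'}
```

Two consequences of the set-based storage: the file's line order is not preserved, and `name=Alice` plus `name=Bob` are distinct lines, so updating a variable's value leaves the old line in the file.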
66	hugginface_download.py	Normal file
@@ -0,0 +1,66 @@
import os
import folder_paths
from huggingface_hub import hf_hub_download


class HuggingFaceDownloader:
    """Custom node for downloading models from Hugging Face within ComfyUI"""

    MODELS_DIR = {
        "models/vae": "vae",
        "models/unet": "unet",
        "models/clip": "clip",
        "models/lora": "loras",
        "models/controlnet": "controlnet",
        "models/upscale": "upscale_models",
        "models/embeddings": "embeddings"
    }

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "hf_token": ("STRING", {"multiline": False, "default": ""}),
                "repo_id": ("STRING", {"multiline": False, "default": "Kijai/HunyuanVideo_comfy"}),
                "filename": ("STRING", {"multiline": False, "default": "hunyuan_video_vae_bf16.safetensors"}),
                "model_type": (list(cls.MODELS_DIR.keys()),),
            },
            "optional": {
                "custom_path": ("STRING", {"multiline": False, "default": ""}),
            }
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("status",)
    FUNCTION = "download_model"
    CATEGORY = "Bjornulf"

    def download_model(self, hf_token, repo_id, filename, model_type, custom_path=None):
        download_dir = "Unknown"
        try:
            os.environ["HF_TOKEN"] = hf_token

            if custom_path:
                download_dir = custom_path
            else:
                folder_key = self.MODELS_DIR[model_type]
                download_dir = folder_paths.get_folder_paths(folder_key)[0]

            os.makedirs(download_dir, exist_ok=True)

            hf_hub_download(
                repo_id=repo_id,
                filename=filename,
                token=hf_token,
                local_dir=download_dir
            )

            return (f"Successfully downloaded (unknown) to {download_dir}",)

        except IndexError:
            return (f"No directory found for model type: {model_type}. Check folder_paths configuration.",)
        except Exception as e:
            return (f"Error downloading model: {str(e)}, (unknown) to {download_dir}",)

    @classmethod
    def IS_CHANGED(cls, **kwargs):
        return float("nan")
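The `IS_CHANGED` returning `float("nan")` is a deliberate trick: NaN never compares equal to itself, so the cached value never matches and ComfyUI re-runs the node on every execution. The comparison behavior it relies on:

```python
# NaN is the only float that is not equal to itself (IEEE 754),
# which is why returning it forces a cache miss on every comparison.
nan = float("nan")
checks = (nan == nan, nan != nan)
print(checks)  # → (False, True)
```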
@@ -1,166 +0,0 @@
import os
import numpy as np
import torch
import subprocess
from PIL import Image
import soundfile as sf
import glob

class imagesToVideo:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "images": ("IMAGE",),
                "fps": ("FLOAT", {"default": 24, "min": 1, "max": 120}),
                "name_prefix": ("STRING", {"default": "output/imgs2video/me"}),
                "format": (["mp4", "webm"], {"default": "mp4"}),
                "mp4_encoder": (["libx264 (H.264)", "h264_nvenc (H.264 / NVIDIA GPU)", "libx265 (H.265)", "hevc_nvenc (H.265 / NVIDIA GPU)"], {"default": "h264_nvenc (H.264 / NVIDIA GPU)"}),
                "webm_encoder": (["libvpx-vp9", "libaom-av1 (VERY SLOW)"], {"default": "libvpx-vp9"}),
                "crf": ("INT", {"default": 19, "min": 0, "max": 63}),
                "force_transparency": ("BOOLEAN", {"default": False}),
                # "preset": (["ultrafast", "superfast", "veryfast", "faster", "fast", "medium", "slow", "slower", "veryslow"], {"default": "medium"}),
            },
            "optional": {
                "audio": ("AUDIO",),
            },
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("comment",)
    FUNCTION = "image_to_video"
    OUTPUT_NODE = True
    CATEGORY = "Bjornulf"

    def image_to_video(self, images, fps, name_prefix, format, crf, force_transparency, mp4_encoder, webm_encoder, audio=None):
        # Remove any existing extension
        name_prefix = os.path.splitext(name_prefix)[0]

        # Find the next available number
        existing_files = glob.glob(f"{name_prefix}_*.{format}")
        if existing_files:
            max_num = max([int(f.split('_')[-1].split('.')[0]) for f in existing_files])
            next_num = max_num + 1
        else:
            next_num = 1

        # Create the new filename with the incremented number
        output_file = f"{name_prefix}_{next_num:04d}.{format}"

        temp_dir = "Bjornulf/temp_images_imgs2video"
        # Clean up temp dir
        if os.path.exists(temp_dir) and os.path.isdir(temp_dir):
            for file in os.listdir(temp_dir):
                os.remove(os.path.join(temp_dir, file))
            os.rmdir(temp_dir)

        os.makedirs(temp_dir, exist_ok=True)
        # Ensure the output directory exists
        os.makedirs(os.path.dirname(output_file) if os.path.dirname(output_file) else ".", exist_ok=True)

        # Save the tensor images as PNG files
        for i, img_tensor in enumerate(images):
            img = Image.fromarray((img_tensor.cpu().numpy() * 255).astype(np.uint8))
            if format == "webm":
                img = img.convert("RGBA")  # Ensure alpha channel for WebM
            img.save(os.path.join(temp_dir, f"frame_{i:04d}.png"))

        # Handle audio
        temp_audio_file = None
        if audio is not None:
            temp_audio_file = os.path.join(temp_dir, "temp_audio.wav")
            waveform = audio['waveform'].squeeze().numpy()
            sample_rate = audio['sample_rate']
            sf.write(temp_audio_file, waveform, sample_rate)

        # Construct the FFmpeg command based on the selected format and encoder
        ffmpeg_cmd = [
            "ffmpeg",
            "-y",
            "-framerate", str(fps),
            "-i", os.path.join(temp_dir, "frame_%04d.png"),
        ]

        if temp_audio_file:
            ffmpeg_cmd.extend(["-i", temp_audio_file])

        if force_transparency:
            ffmpeg_cmd.extend([
                "-vf", "scale=iw:ih,format=rgba,split[s0][s1];[s0]lutrgb=r=0:g=0:b=0:a=0[transparent];[transparent][s1]overlay",
            ])

        if format == "mp4":
            if mp4_encoder == "h264_nvenc (H.264 / NVIDIA GPU)":
                mp4_encoder = "h264_nvenc"
                ffmpeg_cmd.extend([
                    "-c:v", mp4_encoder,
                    # "-preset", "p" + preset,  # NVENC uses different preset names
                    "-cq", str(crf),  # NVENC uses -cq instead of -crf
                ])
            if mp4_encoder == "hevc_nvenc (H.265 / NVIDIA GPU)":
                mp4_encoder = "hevc_nvenc"
                ffmpeg_cmd.extend([
                    "-c:v", mp4_encoder,
                    # "-preset", "p" + preset,  # NVENC uses different preset names
                    "-cq", str(crf),  # NVENC uses -cq instead of -crf
                ])
            elif mp4_encoder == "libx264":
                ffmpeg_cmd.extend([
                    "-c:v", mp4_encoder,
                    # "-preset", preset,
                    "-crf", str(crf),
                ])
            elif mp4_encoder == "libx265":
                ffmpeg_cmd.extend([
                    "-c:v", mp4_encoder,
                    # "-preset", preset,
                    "-crf", str(crf),
                    "-tag:v", "hvc1",  # For better compatibility
                ])
            ffmpeg_cmd.extend(["-pix_fmt", "yuv420p"])  # No transparency
            comment = """MP4 format : Widely compatible, efficient compression, No transparency support.
H.264: Fast encoding, widely compatible, larger file sizes for the same quality.
H.265: More efficient compression, smaller file sizes, better for high-resolution video, slower encoding, BUT less universal support."""
        elif format == "webm":
            if webm_encoder == "libvpx-vp9":
                # cpu_used = preset_to_cpu_used.get(preset, 3)  # Default to 3 if preset not found
                ffmpeg_cmd.extend([
                    "-c:v", webm_encoder,
                    # "-cpu-used", str(cpu_used),
                    "-deadline", "realtime",
                    "-crf", str(crf),
                    "-b:v", "0",
                    "-pix_fmt", "yuva420p",  # Transparency
                ])
            elif webm_encoder == "libaom-av1 (VERY SLOW)":
                # cpu_used = preset_to_cpu_used.get(preset, 3)  # Default to 3 if preset not found
                webm_encoder = "libaom-av1"
                ffmpeg_cmd.extend([
                    "-c:v", webm_encoder,
                    # "-cpu-used", str(cpu_used),
                    "-deadline", "realtime",
                    "-crf", str(crf),
                    "-b:v", "0",
                    "-pix_fmt", "yuva420p",  # Transparency
                ])
            comment = """WebM format: Supports transparency, open format, smaller file size, but less compatible than MP4."""

        if temp_audio_file:
            ffmpeg_cmd.extend(["-c:a", "libvorbis" if format == "webm" else "aac", "-shortest"])

        ffmpeg_cmd.append(output_file)

        # Run FFmpeg
        try:
            subprocess.run(ffmpeg_cmd, check=True)
            print(f"Video created successfully: {output_file}")
        except subprocess.CalledProcessError as e:
            print(f"Error creating video: {e}")
        finally:
            # Clean up temporary files
            # for file in os.listdir(temp_dir):
            #     os.remove(os.path.join(temp_dir, file))
            # os.rmdir(temp_dir)
            print("Temporary files not removed for debugging purposes.")

        return (comment,)
@@ -1,139 +0,0 @@
import os
import uuid
import subprocess
import tempfile
import torch
import numpy as np
from PIL import Image
import wave

class ImagesListToVideo:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "images": ("IMAGE",),
                "frames_per_second": ("FLOAT", {"default": 30, "min": 1, "max": 120, "step": 1}),
            },
            "optional": {
                "audio_path": ("STRING", {"default": "", "multiline": False}),
                "audio": ("AUDIO", {"default": None}),
            }
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("video_path",)
    FUNCTION = "images_to_video"
    CATEGORY = "Bjornulf"

    def images_to_video(self, images, frames_per_second=30, audio_path="", audio=None):
        # Create the output directory if it doesn't exist
        output_dir = os.path.join("Bjornulf", "images_to_video")
        os.makedirs(output_dir, exist_ok=True)

        # Generate a unique filename for the video
        video_filename = f"video_{uuid.uuid4().hex}.mp4"
        video_path = os.path.join(output_dir, video_filename)

        # Create a temporary directory to store image files and audio
        with tempfile.TemporaryDirectory() as temp_dir:
            # Save each image as a PNG file in the temporary directory
            for i, img in enumerate(images):
                img_np = self.convert_to_numpy(img)
                if img_np.shape[-1] != 3:
                    img_np = self.convert_to_rgb(img_np)
                img_pil = Image.fromarray(img_np)
                img_path = os.path.join(temp_dir, f"frame_{i:05d}.png")
                img_pil.save(img_path)

            # Prepare FFmpeg command
            ffmpeg_cmd = [
                "ffmpeg",
                "-framerate", str(frames_per_second),
                "-i", os.path.join(temp_dir, "frame_%05d.png"),
                "-c:v", "libx264",
                "-pix_fmt", "yuv420p",
                "-crf", "19"
            ]

            # Handle audio
            temp_audio_path = None
            if audio is not None and isinstance(audio, dict):
                waveform = audio['waveform'].numpy().squeeze()
                sample_rate = audio['sample_rate']
                temp_audio_path = os.path.join(temp_dir, "temp_audio.wav")
                self.write_wav(temp_audio_path, waveform, sample_rate)
            elif audio_path and os.path.isfile(audio_path):
                temp_audio_path = audio_path

            if temp_audio_path:
                # Create temporary video without audio first
                temp_video = os.path.join(temp_dir, "temp_video.mp4")
                temp_cmd = ffmpeg_cmd + ["-y", temp_video]

                try:
                    # Create video without audio
                    subprocess.run(temp_cmd, check=True, capture_output=True, text=True)

                    # Add audio to the video
                    audio_cmd = [
                        "ffmpeg",
                        "-i", temp_video,
                        "-i", temp_audio_path,
                        "-c:v", "copy",
                        "-c:a", "aac",
                        "-shortest",
                        "-y",
                        video_path
                    ]
                    subprocess.run(audio_cmd, check=True, capture_output=True, text=True)
                except subprocess.CalledProcessError as e:
                    print(f"FFmpeg error: {e.stderr}")
                    return ("",)
            else:
                # No audio, just create the video directly
                ffmpeg_cmd.append("-y")
                ffmpeg_cmd.append(video_path)
                try:
                    subprocess.run(ffmpeg_cmd, check=True, capture_output=True, text=True)
                except subprocess.CalledProcessError as e:
                    print(f"FFmpeg error: {e.stderr}")
                    return ("",)

        return (video_path,)

    def write_wav(self, file_path, audio_data, sample_rate):
        with wave.open(file_path, 'wb') as wav_file:
            wav_file.setnchannels(1)  # Mono
            wav_file.setsampwidth(2)  # 2 bytes per sample
            wav_file.setframerate(sample_rate)

            # Normalize and convert to 16-bit PCM
            audio_data = np.int16(audio_data * 32767)

            # Write audio data
            wav_file.writeframes(audio_data.tobytes())

    def convert_to_numpy(self, img):
        if isinstance(img, torch.Tensor):
            img = img.cpu().numpy()
        if img.dtype == np.uint8:
            return img
        elif img.dtype == np.float32 or img.dtype == np.float64:
            return (img * 255).astype(np.uint8)
        else:
            raise ValueError(f"Unsupported data type: {img.dtype}")

    def convert_to_rgb(self, img):
        if img.shape[-1] == 1:  # Grayscale
            return np.repeat(img, 3, axis=-1)
        elif img.shape[-1] == 768:  # Latent space representation
            # This is a placeholder. You might need a more sophisticated method to convert latent space to RGB
            img = img.reshape((-1, 3))  # Reshape to (H*W, 3)
            img = (img - img.min()) / (img.max() - img.min())  # Normalize to [0, 1]
            img = (img * 255).astype(np.uint8)
            return img.reshape((img.shape[0], -1, 3))  # Reshape back to (H, W, 3)
        elif len(img.shape) == 2:  # 2D array
            return np.stack([img, img, img], axis=-1)
        else:
            raise ValueError(f"Unsupported image shape: {img.shape}")
@@ -77,7 +77,7 @@ class LatentResolutionSelector:

    RETURN_TYPES = ("LATENT",)
    FUNCTION = "generate_latent"
    CATEGORY = "latent"
    CATEGORY = "Bjornulf"

    def generate_latent(self, resolution_preset, batch_size=1):
        # Extract dimensions from the preset string
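`generate_latent` extracts pixel dimensions from the preset string and allocates a latent tensor. A sketch of the parsing step, assuming a hypothetical `"WxH (label)"` preset format and the usual Stable Diffusion layout (4 channels, spatial size 1/8 of the pixel resolution); `preset_to_latent_shape` is my helper name:

```python
import re

def preset_to_latent_shape(preset, batch_size=1):
    # Hypothetical preset strings like "1024x768 (landscape)"; SD latents
    # have 4 channels at 1/8 of the pixel resolution.
    width, height = map(int, re.match(r"(\d+)x(\d+)", preset).groups())
    return (batch_size, 4, height // 8, width // 8)

print(preset_to_latent_shape("1024x768 (landscape)"))  # → (1, 4, 96, 128)
```

The actual node would then build the tensor with something like `torch.zeros(shape)` and return it under the `"samples"` key of a LATENT dict.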
@@ -1,6 +1,11 @@
 import os
+import re
+from aiohttp import web
+from server import PromptServer

 class LineSelector:
     def __init__(self):
-        pass
+        self._counter = -1

     @classmethod
     def INPUT_TYPES(s):
@@ -8,6 +13,11 @@ class LineSelector:
             "required": {
                 "text": ("STRING", {"multiline": True}),  # Input for multiple lines
                 "line_number": ("INT", {"default": 0, "min": 0, "max": 99999}),  # 0 for random, >0 for specific line
+                "RANDOM": ("BOOLEAN", {"default": False}),  # Force random selection
+                "LOOP": ("BOOLEAN", {"default": False}),  # Return all lines as list
+                "LOOP_SEQUENTIAL": ("BOOLEAN", {"default": False}),  # Sequential looping
+                "jump": ("INT", {"default": 1, "min": 1, "max": 100, "step": 1}),  # Jump size for sequential loop
+                "pick_random_variable": ("BOOLEAN", {"default": False}),  # Enable random choice functionality
             },
             "optional": {
                 "variables": ("STRING", {"multiline": True, "forceInput": True}),
@@ -19,11 +29,13 @@ class LineSelector:
             },
         }

-    RETURN_TYPES = ("STRING",)
+    RETURN_TYPES = ("STRING", "INT", "INT")  # String output, remaining cycles, current line number
+    RETURN_NAMES = ("text", "remaining_cycles", "current_line")
+    OUTPUT_IS_LIST = (True, False, False)  # Only text output can be a list
     FUNCTION = "select_line"
     CATEGORY = "Bjornulf"

-    def select_line(self, text, line_number, variables="", seed=-1):
+    def select_line(self, text, line_number, RANDOM, LOOP, LOOP_SEQUENTIAL, jump, pick_random_variable, variables="", seed=-1):
         # Parse variables
         var_dict = {}
         for line in variables.split('\n'):
@@ -40,25 +52,84 @@ class LineSelector:
                  if line.strip() and not line.strip().startswith('#')]

         if not lines:
-            return ("No valid lines found.",)
+            return (["No valid lines found."], 0, 0)

         import random
+        import os

         # Set seed if provided
         if seed >= 0:
             random.seed(seed)

+        # Process random choice functionality if enabled
+        if pick_random_variable:
+            pattern = r'\{([^}]+)\}'
+            def replace_random(match):
+                return random.choice(match.group(1).split('|'))
+            lines = [re.sub(pattern, replace_random, line) for line in lines]
+
+        # Handle sequential looping
+        if LOOP_SEQUENTIAL:
+            counter_file = os.path.join("Bjornulf", "line_selector_counter.txt")
+            os.makedirs(os.path.dirname(counter_file), exist_ok=True)
+
+            try:
+                with open(counter_file, 'r') as f:
+                    current_index = int(f.read().strip())
+            except (FileNotFoundError, ValueError):
+                current_index = -jump
+
+            next_index = current_index + jump
+
+            if next_index >= len(lines):
+                with open(counter_file, 'w') as f:
+                    f.write(str(-jump))
+                raise ValueError(f"Counter has reached the last line (total lines: {len(lines)}). Counter has been reset.")
+
+            with open(counter_file, 'w') as f:
+                f.write(str(next_index))
+
+            remaining_cycles = max(0, (len(lines) - next_index - 1) // jump + 1)
+            return ([lines[next_index]], remaining_cycles, next_index + 1)
+
+        # Handle normal LOOP mode
+        if LOOP:
+            return (lines, len(lines), 0)
+
-        # If line_number is 0, select random line
-        if line_number == 0:
+        # Handle RANDOM or line_number selection
+        if RANDOM or line_number == 0:
             selected = random.choice(lines)
         else:
-            # If line_number is greater than 0, select specific line (with bounds checking)
-            index = min(line_number - 1, len(lines) - 1)  # -1 because user input starts at 1
-            index = max(0, index)  # Ensure we don't go below 0
+            index = min(line_number - 1, len(lines) - 1)
+            index = max(0, index)
             selected = lines[index]

-        return (selected,)
+        return ([selected], 0, line_number if line_number > 0 else 0)

     @classmethod
-    def IS_CHANGED(s, text, line_number, variables="", seed=-1):
-        return (text, line_number, variables, seed)
+    def IS_CHANGED(s, text, line_number, RANDOM, LOOP, LOOP_SEQUENTIAL, jump, pick_random_variable, variables="", seed=-1):
+        return float("NaN") if LOOP_SEQUENTIAL else (text, line_number, RANDOM, LOOP, LOOP_SEQUENTIAL, jump, pick_random_variable, variables, seed)
+
+@PromptServer.instance.routes.post("/reset_line_selector_counter")
+async def reset_line_selector_counter(request):
+    counter_file = os.path.join("Bjornulf", "line_selector_counter.txt")
+    try:
+        os.remove(counter_file)
+        return web.json_response({"success": True}, status=200)
+    except FileNotFoundError:
+        return web.json_response({"success": True}, status=200)
+    except Exception as e:
+        return web.json_response({"success": False, "error": str(e)}, status=500)
+
+@PromptServer.instance.routes.post("/get_line_selector_counter")
+async def get_line_selector_counter(request):
+    counter_file = os.path.join("Bjornulf", "line_selector_counter.txt")
+    try:
+        with open(counter_file, 'r') as f:
+            current_index = int(f.read().strip())
+        return web.json_response({"success": True, "value": current_index + 1}, status=200)
+    except (FileNotFoundError, ValueError):
+        return web.json_response({"success": True, "value": 0}, status=200)
+    except Exception as e:
+        return web.json_response({"success": False, "error": str(e)}, status=500)
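The `pick_random_variable` branch above replaces each `{option1|option2}` group with one randomly chosen option via `re.sub`. A minimal standalone sketch of that substitution (the `expand_random_choices` helper name is ours, not the node's):

```python
import random
import re

def expand_random_choices(line, rng=random):
    # Replace every "{a|b|c}" group with one randomly picked option,
    # mirroring the pattern/replace_random pair used by pick_random_variable.
    pattern = r'\{([^}]+)\}'
    return re.sub(pattern, lambda m: rng.choice(m.group(1).split('|')), line)

random.seed(0)
print(expand_random_choices("a {red|green|blue} fox in a {forest|city}"))
```

Seeding `random` (as the node does when `seed >= 0`) makes the expansion reproducible across runs.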
@@ -68,7 +68,7 @@ async def get_counter_value(request):

 @PromptServer.instance.routes.post("/reset_counter")
 async def reset_counter(request):
-    logging.info("Reset counter called")
+    # logging.info("Reset counter called")
     counter_file = os.path.join("Bjornulf", "counter_integer.txt")
     try:
         os.remove(counter_file)
lora_stacks.py (new file, 93 lines)
@@ -0,0 +1,93 @@
import os
import random
from folder_paths import get_filename_list, get_full_path
import comfy.sd
import comfy.utils

class AllLoraSelector:
    @classmethod
    def INPUT_TYPES(cls):
        lora_list = get_filename_list("loras")
        optional_inputs = {}

        # Add a default value if lora_list is empty
        if not lora_list:
            lora_list = ["none"]

        for i in range(1, 21):
            optional_inputs[f"lora_{i}"] = (lora_list, {"default": lora_list[0]})
            optional_inputs[f"strength_model_{i}"] = ("FLOAT", {"default": 1.0, "min": -100.0, "max": 100.0, "step": 0.01})
            optional_inputs[f"strength_clip_{i}"] = ("FLOAT", {"default": 1.0, "min": -100.0, "max": 100.0, "step": 0.01})

        return {
            "required": {
                "number_of_loras": ("INT", {"default": 3, "min": 1, "max": 20, "step": 1}),
                "model": ("MODEL",),
                "clip": ("CLIP",),
            },
            "optional": optional_inputs
        }

    RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING", "STRING")
    RETURN_NAMES = ("model", "clip", "lora_paths", "lora_names", "lora_folders")
    FUNCTION = "apply_all_loras"
    CATEGORY = "Bjornulf"

    def apply_all_loras(self, number_of_loras, model, clip, **kwargs):
        available_loras = []
        strengths_model = []
        strengths_clip = []

        # Collect LoRAs and their strengths
        for i in range(1, number_of_loras + 1):
            lora_key = f"lora_{i}"
            strength_model_key = f"strength_model_{i}"
            strength_clip_key = f"strength_clip_{i}"

            if lora_key in kwargs and kwargs[lora_key] and kwargs[lora_key] != "none":
                available_loras.append(kwargs[lora_key])
                strengths_model.append(kwargs.get(strength_model_key, 1.0))
                strengths_clip.append(kwargs.get(strength_clip_key, 1.0))

        if not available_loras:
            return (model, clip, "", "", "")

        # Initialize lists for collecting metadata
        lora_paths = []
        lora_names = []
        lora_folders = []

        # Create a copy of the initial model and clip
        result_model = model.clone()
        result_clip = clip.clone()

        # Apply each LoRA sequentially
        for selected_lora, strength_model, strength_clip in zip(available_loras, strengths_model, strengths_clip):
            # Get LoRA metadata
            lora_name = os.path.splitext(os.path.basename(selected_lora))[0]
            lora_path = get_full_path("loras", selected_lora)
            lora_folder = os.path.basename(os.path.dirname(lora_path))

            # Load and apply LoRA
            lora = comfy.utils.load_torch_file(lora_path, safe_load=True)
            model_lora, clip_lora = comfy.sd.load_lora_for_models(
                result_model, result_clip, lora, strength_model, strength_clip
            )

            # Update results
            result_model = model_lora
            if clip_lora is not None:
                result_clip = clip_lora

            # Collect metadata
            lora_paths.append(lora_path)
            lora_names.append(lora_name)
            lora_folders.append(lora_folder)

        return (
            result_model,
            result_clip,
            ",".join(lora_paths),
            ",".join(lora_names),
            ",".join(lora_folders)
        )
model_clip_vae_selector.py (new file, 115 lines)
@@ -0,0 +1,115 @@
import random
import json
import os
from aiohttp import web
from server import PromptServer

class ModelClipVaeSelector:
    def __init__(self):
        self._counter = -1

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "number_of_inputs": ("INT", {"default": 2, "min": 2, "max": 10, "step": 1}),
                "selected_number": ("INT", {"default": 0, "min": 0, "max": 10, "step": 1}),  # 0 for random, >0 for specific selection
                "model_1": ("MODEL", {"forceInput": True}),
                "clip_1": ("CLIP", {"forceInput": True}),
                "vae_1": ("VAE", {"forceInput": True}),
                "model_2": ("MODEL", {"forceInput": True}),
                "clip_2": ("CLIP", {"forceInput": True}),
                "vae_2": ("VAE", {"forceInput": True}),
                "RANDOM": ("BOOLEAN", {"default": False}),  # Force random selection
                "LOOP": ("BOOLEAN", {"default": False}),  # Return all as list
                "LOOP_SEQUENTIAL": ("BOOLEAN", {"default": False}),  # Sequential looping
                "jump": ("INT", {"default": 1, "min": 1, "max": 10, "step": 1}),  # Jump size for sequential loop
                "seed": ("INT", {
                    "default": 0,
                    "min": -1,
                    "max": 0x7FFFFFFFFFFFFFFF
                }),
            },
            "hidden": {
                **{f"model_{i}": ("MODEL", {"forceInput": True}) for i in range(3, 11)},
                **{f"clip_{i}": ("CLIP", {"forceInput": True}) for i in range(3, 11)},
                **{f"vae_{i}": ("VAE", {"forceInput": True}) for i in range(3, 11)},
            }
        }

    RETURN_TYPES = ("MODEL", "CLIP", "VAE", "INT")  # Added INT for current selection
    RETURN_NAMES = ("model", "clip", "vae", "current_selection")
    OUTPUT_IS_LIST = (True, True, True, False)  # Allow lists for model/clip/vae outputs
    FUNCTION = "select_models"
    CATEGORY = "Bjornulf"

    def select_models(self, number_of_inputs, selected_number, RANDOM, LOOP, LOOP_SEQUENTIAL, jump, **kwargs):
        if LOOP:
            # Return all models as lists
            models = [kwargs[f"model_{i}"] for i in range(1, number_of_inputs + 1)]
            clips = [kwargs[f"clip_{i}"] for i in range(1, number_of_inputs + 1)]
            vaes = [kwargs[f"vae_{i}"] for i in range(1, number_of_inputs + 1)]
            return (models, clips, vaes, 0)

        if LOOP_SEQUENTIAL:
            counter_file = os.path.join("Bjornulf", "model_selector_counter.txt")
            os.makedirs(os.path.dirname(counter_file), exist_ok=True)

            try:
                with open(counter_file, 'r') as f:
                    current_index = int(f.read().strip())
            except (FileNotFoundError, ValueError):
                current_index = -jump

            next_index = current_index + jump

            if next_index >= number_of_inputs:
                with open(counter_file, 'w') as f:
                    f.write(str(-jump))
                raise ValueError(f"Counter has reached the last model (total models: {number_of_inputs}). Counter has been reset.")

            with open(counter_file, 'w') as f:
                f.write(str(next_index))

            selected_index = next_index + 1  # Convert to 1-based indexing
        else:
            # Handle RANDOM or specific selection
            if RANDOM or selected_number == 0:
                random.seed(kwargs.get('seed', 0))
                selected_index = random.randint(1, number_of_inputs)
            else:
                selected_index = max(1, min(selected_number, number_of_inputs))

        selected_model = kwargs[f"model_{selected_index}"]
        selected_clip = kwargs[f"clip_{selected_index}"]
        selected_vae = kwargs[f"vae_{selected_index}"]

        return ([selected_model], [selected_clip], [selected_vae], selected_index)

    @classmethod
    def IS_CHANGED(cls, number_of_inputs, selected_number, RANDOM, LOOP, LOOP_SEQUENTIAL, jump, **kwargs):
        return float("NaN") if LOOP_SEQUENTIAL else (number_of_inputs, selected_number, RANDOM, LOOP, LOOP_SEQUENTIAL, jump, kwargs.get('seed', 0))

# Add routes for counter management
@PromptServer.instance.routes.post("/reset_model_selector_counter")
async def reset_model_selector_counter(request):
    counter_file = os.path.join("Bjornulf", "model_selector_counter.txt")
    try:
        os.remove(counter_file)
        return web.json_response({"success": True}, status=200)
    except FileNotFoundError:
        return web.json_response({"success": True}, status=200)
    except Exception as e:
        return web.json_response({"success": False, "error": str(e)}, status=500)

@PromptServer.instance.routes.post("/get_model_selector_counter")
async def get_model_selector_counter(request):
    counter_file = os.path.join("Bjornulf", "model_selector_counter.txt")
    try:
        with open(counter_file, 'r') as f:
            current_index = int(f.read().strip())
        return web.json_response({"success": True, "value": current_index + 1}, status=200)
    except (FileNotFoundError, ValueError):
        return web.json_response({"success": True, "value": 0}, status=200)
    except Exception as e:
        return web.json_response({"success": False, "error": str(e)}, status=500)
note_image.py (new file, 108 lines)
@@ -0,0 +1,108 @@
import random
import os
import hashlib
# import logging
import numpy as np
import torch
from nodes import SaveImage
import folder_paths
from PIL import Image
from server import PromptServer
from aiohttp import web

# Configure logging
# logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')
# logger = logging.getLogger("ImageNote")

class ImageNote(SaveImage):
    def __init__(self):
        self.output_dir = folder_paths.get_temp_directory()
        self.type = "temp"
        self.prefix_append = "_temp_" + ''.join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(5))
        self.compress_level = 1
        self.note_dir = os.path.join("ComfyUI", "Bjornulf", "imageNote")
        os.makedirs(self.note_dir, exist_ok=True)

        # Store last image path and hash to prevent unnecessary reloading
        self.last_image_path = None
        self.last_image_hash = None
        self.last_output_images = None

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "optional": {
                "images": ("IMAGE", ),
                "image_path": ("STRING", {"default": ""}),
                "note_text": ("STRING", {"default": "", "multiline": True})
            },
            "hidden": {
                "prompt": "PROMPT",
                "extra_pnginfo": "EXTRA_PNGINFO"
            },
        }

    FUNCTION = "process_image"
    OUTPUT_NODE = True
    CATEGORY = "Bjornulf"

    def compute_md5(self, image):
        image_bytes = image.tobytes() if isinstance(image, Image.Image) else image
        return hashlib.md5(image_bytes).hexdigest()

    def process_image(self, images=None, image_path="", note_text="", prompt=None, extra_pnginfo=None):
        output_images = None
        output_note_text = ""

        # If images are given, process them
        if images is not None and len(images) > 0:
            output_images = images
            image_np = (images[0].numpy() * 255).astype(np.uint8)
            image = Image.fromarray(image_np)
            image_hash = self.compute_md5(image)

            note_path = os.path.join(self.note_dir, f"{image_hash}.txt")
            if os.path.exists(note_path):
                with open(note_path, "r", encoding="utf-8") as f:
                    output_note_text = f.read()
            elif note_text:
                with open(note_path, "w", encoding="utf-8") as f:
                    f.write(note_text)
                output_note_text = note_text

        # If image_path is empty, do nothing
        elif not image_path:
            # logger.debug("No image path provided, skipping processing.")
            return None, ""

        # Process image from path only if it has changed
        elif os.path.isfile(image_path):
            if image_path == self.last_image_path:
                # logger.debug("Image path has not changed, skipping reload.")
                return super().save_images(images=self.last_output_images, prompt=prompt, extra_pnginfo=extra_pnginfo)

            image = Image.open(image_path).convert("RGB")
            image_hash = self.compute_md5(image)

            if image_hash == self.last_image_hash:
                # logger.debug("Image content has not changed, skipping reload.")
                return super().save_images(images=self.last_output_images, prompt=prompt, extra_pnginfo=extra_pnginfo)

            note_path = os.path.join(self.note_dir, f"{image_hash}.txt")
            if os.path.exists(note_path):
                with open(note_path, "r", encoding="utf-8") as f:
                    output_note_text = f.read()
            elif note_text:
                with open(note_path, "w", encoding="utf-8") as f:
                    f.write(note_text)
                output_note_text = note_text

            image_np = np.array(image).astype(np.float32) / 255.0
            output_images = torch.from_numpy(image_np).unsqueeze(0)

            # Update stored values
            self.last_image_path = image_path
            self.last_image_hash = image_hash
            self.last_output_images = output_images

        return super().save_images(images=output_images, prompt=prompt, extra_pnginfo=extra_pnginfo)
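ImageNote pairs each image with a sidecar note by hashing the image bytes, so the same picture always maps to the same `<hash>.txt` file regardless of where it was loaded from. A tiny illustration of that keying (the `note_key` helper name is ours):

```python
import hashlib

def note_key(image_bytes: bytes) -> str:
    # Deterministic content-addressed key, like ImageNote's compute_md5:
    # identical pixel bytes always yield the same note filename.
    return hashlib.md5(image_bytes).hexdigest()

print(note_key(b"same pixels") == note_key(b"same pixels"))  # → True
print(len(note_key(b"same pixels")))  # → 32
```

Content addressing means renaming or moving the image file does not orphan its note.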
note_text.py (new file, 25 lines)
@@ -0,0 +1,25 @@
class Everything(str):
    def __ne__(self, __value: object) -> bool:
        return False

class DisplayNote:
    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {
                "any": (Everything("*"), {"forceInput": True}),  # Accept any input
                "display_text": ("STRING", {
                    "multiline": True,  # Allow multiline text
                    "default": ""  # Default text
                }),
            }
        }

    RETURN_TYPES = (Everything("*"),)  # Return same type as input
    RETURN_NAMES = ("any",)
    FUNCTION = "display_text_pass"
    CATEGORY = "Bjornulf"

    def display_text_pass(self, any, display_text):
        # Simply pass through the input
        return (any,)
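The `Everything` wildcard above works because ComfyUI's link validation compares type strings with `!=`; a `str` subclass whose `__ne__` always returns False therefore matches any declared type:

```python
class Everything(str):
    # "not equal" is always False, so a type check like
    # `if input_type != output_type:` never rejects this wildcard.
    def __ne__(self, __value: object) -> bool:
        return False

wildcard = Everything("*")
print(wildcard != "IMAGE", wildcard != "STRING")  # → False False
```

It still behaves as the ordinary string `"*"` everywhere else, which keeps serialization and display unchanged.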
@@ -7,7 +7,7 @@ import requests
 import json
 import ollama
 from ollama import Client
-import logging
+# import logging
 import hashlib
 from typing import Dict, Any
 from PIL.PngImagePlugin import PngInfo

@@ -215,18 +215,23 @@ async def get_current_context_size(request):
     counter_file = os.path.join("Bjornulf", "ollama", "ollama_context.txt")
     try:
         if not os.path.exists(counter_file):
-            logging.info("Context file does not exist")
+            logging.info("[Ollama] Context file does not exist")
+            # Create parent directories if they don't exist
+            os.makedirs(os.path.dirname(counter_file), exist_ok=True)
+            # Create empty file
+            open(counter_file, 'w').close()
+            logging.info(f"[Ollama] Created empty context file at: {counter_file}")
             return web.json_response({"success": True, "value": 0}, status=200)

         with open(counter_file, 'r', encoding='utf-8') as f:
             # Count non-empty lines in the file
             lines = [line.strip() for line in f.readlines() if line.strip()]
             line_count = len(lines)
-            logging.info(f"Found {line_count} lines in context file")
+            logging.info(f"[Ollama] Found {line_count} lines in context file")
             return web.json_response({"success": True, "value": line_count}, status=200)

     except Exception as e:
-        logging.error(f"Error reading context size: {str(e)}")
+        # logging.error(f"Error reading context size: {str(e)}")
         return web.json_response({
             "success": False,
             "error": str(e),

@@ -258,7 +263,7 @@ def get_next_filename(base_path, base_name):

 @PromptServer.instance.routes.post("/reset_lines_context")
 def reset_lines_context(request):
-    logging.info("Reset lines counter called")
+    # logging.info("Reset lines counter called")
     base_dir = os.path.join("Bjornulf", "ollama")
     base_file = "ollama_context"
     counter_file = os.path.join(base_dir, f"{base_file}.txt")
@@ -268,7 +273,7 @@ def reset_lines_context(request):
     # Get new filename and rename
     new_filename = os.path.join(base_dir, get_next_filename(base_dir, base_file))
     os.rename(counter_file, new_filename)
-    logging.info(f"Renamed {counter_file} to {new_filename}")
+    # logging.info(f"Renamed {counter_file} to {new_filename}")

     # Send notification through ComfyUI
     notification = {
@@ -1,7 +1,7 @@
 import time
 from aiohttp import web
 from server import PromptServer
-import logging
+# import logging
 from pydub import AudioSegment
 from pydub.playback import play
 import os

@@ -61,7 +61,7 @@ class PauseResume:
         self.play_audio()
         self.input = input
         while PauseResume.is_paused and not PauseResume.should_stop:
-            logging.info(f"PauseResume.is_paused: {PauseResume.is_paused}, PauseResume.should_stop: {PauseResume.should_stop}")
+            # logging.info(f"PauseResume.is_paused: {PauseResume.is_paused}, PauseResume.should_stop: {PauseResume.should_stop}")
             time.sleep(1)  # Sleep to prevent busy waiting

         if PauseResume.should_stop:

@@ -75,13 +75,13 @@ class PauseResume:

 @PromptServer.instance.routes.get("/bjornulf_resume")
 async def resume_node(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PauseResume.is_paused = False
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_stop")
 async def stop_node(request):
-    logging.info("Stop node called")
+    # logging.info("Stop node called")
     PauseResume.should_stop = True
     PauseResume.is_paused = False  # Ensure the loop exits
     return web.Response(text="Workflow stopped")
@@ -1,7 +1,7 @@
 import time
 from aiohttp import web
 from server import PromptServer
-import logging
+# import logging
 from pydub import AudioSegment
 from pydub.playback import play
 import os

@@ -62,11 +62,11 @@ class PickInput:

     def pick_input(self, seed, **kwargs):
         random.seed(seed)
-        logging.info(f"Selected input at the start: {PickInput.selected_input}")
+        # logging.info(f"Selected input at the start: {PickInput.selected_input}")
         self.play_audio()

         while PickInput.is_paused and not PickInput.should_stop:
-            logging.info(f"PickInput.is_paused: {PickInput.is_paused}, PickInput.should_stop: {PickInput.should_stop}")
+            # logging.info(f"PickInput.is_paused: {PickInput.is_paused}, PickInput.should_stop: {PickInput.should_stop}")
             time.sleep(1)  # Sleep to prevent busy waiting

         if PickInput.should_stop:

@@ -79,12 +79,12 @@ class PickInput:

         # Check if the selected input exists in kwargs
         if PickInput.selected_input not in kwargs:
-            logging.error(f"Selected input '{PickInput.selected_input}' not found in kwargs")
-            logging.info(f"Available kwargs: {list(kwargs.keys())}")
+            # logging.error(f"Selected input '{PickInput.selected_input}' not found in kwargs")
+            # logging.info(f"Available kwargs: {list(kwargs.keys())}")
             return (None,)  # or handle this error as appropriate

         selected_value = kwargs.get(PickInput.selected_input)
-        logging.info(f"Value of selected input '{PickInput.selected_input}': {selected_value}")
+        # logging.info(f"Value of selected input '{PickInput.selected_input}': {selected_value}")

         # Store the value in self.target if needed
         self.target = selected_value

@@ -100,77 +100,77 @@ class PickInput:

 @PromptServer.instance.routes.get("/bjornulf_stop_pick")
 async def stop_node_pick(request):
-    logging.info("Stop node pick called")
+    # logging.info("Stop node pick called")
     PickInput.should_stop = True
     PickInput.is_paused = False  # Ensure the loop exits
     return web.Response(text="Workflow stopped")

 @PromptServer.instance.routes.get("/bjornulf_select_input_1")
 async def bjornulf_select_input_1(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_1"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_2")
 async def bjornulf_select_input_2(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_2"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_3")
 async def bjornulf_select_input_3(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_3"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_4")
 async def bjornulf_select_input_4(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_4"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_5")
 async def bjornulf_select_input_5(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_5"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_6")
 async def bjornulf_select_input_6(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_6"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_7")
 async def bjornulf_select_input_7(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_7"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_8")
 async def bjornulf_select_input_8(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_8"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_9")
 async def bjornulf_select_input_9(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_9"
     return web.Response(text="Node resumed")

 @PromptServer.instance.routes.get("/bjornulf_select_input_10")
 async def bjornulf_select_input_10(request):
-    logging.info("Resume node called")
+    # logging.info("Resume node called")
     PickInput.is_paused = False
     PickInput.selected_input = "input_10"
     return web.Response(text="Node resumed")
preview_first_image.py (new file, 76 lines)
@@ -0,0 +1,76 @@
import random
import os
import numpy as np
import torch
from nodes import SaveImage
import folder_paths
from PIL import Image
import cv2

class PreviewFirstImage(SaveImage):
    def __init__(self):
        super().__init__()
        self.output_dir = folder_paths.get_temp_directory()
        self.type = "temp"
        self.prefix_append = "_preview_" + ''.join(random.choice("abcdefghijklmnopqrstuvwxyz") for _ in range(5))
        self.compress_level = 1

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {},
            "optional": {
                "images": ("IMAGE",),
                "path": ("STRING", {"default": ""})
            },
            "hidden": {
                "prompt": "PROMPT",
                "extra_pnginfo": "EXTRA_PNGINFO"
            },
        }

    RETURN_TYPES = ()
    FUNCTION = "preview_image"
    OUTPUT_NODE = True
    CATEGORY = "image"

    def preview_image(self, images=None, path="", prompt=None, extra_pnginfo=None):
        if images is None and not path:
            return {}

        output_images = None

        # Handle image tensor input - always take first image from batch
        if images is not None and images.nelement() > 0:
            # Ensure we're working with the first image in the batch
            first_image = images[0:1]  # Maintains batch dimension
            return super().save_images(images=first_image, prompt=prompt, extra_pnginfo=extra_pnginfo)

        # Handle path input
        if path and os.path.exists(path):
            try:
                if path.lower().endswith(('.mp4', '.avi', '.mov', '.wmv', '.webm', '.mkv')):  # Video file
                    cap = cv2.VideoCapture(path)
                    ret, frame = cap.read()
                    cap.release()

                    if not ret:
                        return {}

                    # Convert BGR to RGB and normalize
                    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                    frame = frame.astype(np.float32) / 255.0
                    output_images = torch.from_numpy(frame).unsqueeze(0)

                else:  # Image file
                    image = Image.open(path).convert('RGB')
                    image_np = np.array(image).astype(np.float32) / 255.0
                    output_images = torch.from_numpy(image_np).unsqueeze(0)

                return super().save_images(images=output_images, prompt=prompt, extra_pnginfo=extra_pnginfo)

            except Exception as e:
                print(f"Error processing file {path}: {str(e)}")
                return {}

        return {}
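PreviewFirstImage slices `images[0:1]` rather than indexing `images[0]`: the slice keeps the leading batch dimension that the inherited `save_images` expects. A NumPy stand-in for the same shape logic (the real node operates on a torch tensor of shape `(batch, H, W, C)`):

```python
import numpy as np

# Stand-in for the node's image batch: (batch, height, width, channels).
batch = np.zeros((4, 64, 64, 3), dtype=np.float32)

first = batch[0:1]   # slice: batch dimension preserved
single = batch[0]    # index: batch dimension dropped

print(first.shape)   # → (1, 64, 64, 3)
print(single.shape)  # → (64, 64, 3)
```

Passing the dimension-dropped form downstream would break code that iterates over a batch, which is why the slice form is used.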
@@ -1,7 +1,7 @@
 [project]
 name = "bjornulf_custom_nodes"
-description = "120 ComfyUI nodes : Display, manipulate, and edit text, images, videos, loras, generate characters and more. Manage looping operations, generate randomized content, use logical conditions and work with external AI tools, like Ollama or Text To Speech Kokoro, etc..."
-version = "0.69"
+description = "128 ComfyUI nodes : Display, manipulate, and edit text, images, videos, loras, generate characters and more. Manage looping operations, generate randomized content, use logical conditions and work with external AI tools, like Ollama or Text To Speech Kokoro, etc..."
+version = "0.70"
 license = {file = "LICENSE"}

 [project.urls]
New binary files:
- screenshots/first_image_preview.png (617 KiB)
- screenshots/global_load.png (104 KiB)
- screenshots/global_save.png (12 KiB)
- screenshots/huggingface_dl.png (71 KiB)
- screenshots/image_note.png (495 KiB)
- screenshots/image_notes.png (365 KiB)
- screenshots/lora_stacks.png (62 KiB)
- screenshots/model_clip_vae_selector.png (214 KiB)
- screenshots/note.png (156 KiB)
@@ -15,11 +15,14 @@ class TextReplace:
                     "display": "number",
                     "tooltip": "Number of replacements (0 = replace all)"}),
                 "use_regex": ("BOOLEAN", {"default": False}),
-                "case_sensitive": ("BOOLEAN", {"default": True, "tooltip": "Whether the search should be case-sensitive"}),
+                "case_sensitive": ("BOOLEAN", {"default": True,
+                    "tooltip": "Whether the search should be case-sensitive"}),
                 "trim_whitespace": (["none", "left", "right", "both"], {
                     "default": "none",
                     "tooltip": "Remove whitespace around the found text"
-                })
+                }),
+                "multiline_regex": ("BOOLEAN", {"default": False,
+                    "tooltip": "Make dot (.) match newlines in regex"})
             }
         }

@@ -27,7 +30,8 @@ class TextReplace:
     FUNCTION = "replace_text"
     CATEGORY = "Bjornulf"

-    def replace_text(self, input_text, search_text, replace_text, replace_count, use_regex, case_sensitive, trim_whitespace):
+    def replace_text(self, input_text, search_text, replace_text, replace_count,
+                     use_regex, multiline_regex, case_sensitive, trim_whitespace):
         try:
             # Convert input to string
             input_text = str(input_text)

@@ -36,16 +40,10 @@ class TextReplace:
             regex_flags = 0
             if not case_sensitive:
                 regex_flags |= re.IGNORECASE
-
-            # Debug print
-            # print(f"Input: {input_text}")
-            # print(f"Search Text: {search_text}")
-            # print(f"Replace Text: {replace_text}")
-            # print(f"Use Regex: {use_regex}")
-            # print(f"Regex Flags: {regex_flags}")
+            if multiline_regex and use_regex:
+                regex_flags |= re.DOTALL

             if use_regex:
                 # Ensure regex pattern is valid
                 try:
                     # Compile the regex pattern first
                     pattern = re.compile(search_text, flags=regex_flags)

@@ -58,13 +56,9 @@ class TextReplace:
                         # Replace specific number of instances
                         result = pattern.sub(replace_text, input_text, count=replace_count)

-                    # Debug print
-                    # print(f"Regex Result: {result}")
-
                     return (result,)

                 except re.error as regex_compile_error:
-                    # print(f"Invalid Regex Pattern: {regex_compile_error}")
                     return (input_text,)

             else:

@@ -121,10 +115,9 @@ class TextReplace:
             return (result,)

         except Exception as e:
-            # print(f"Unexpected error during text replacement: {e}")
             return (input_text,)

     @classmethod
-    def IS_CHANGED(cls, input_text, search_text, replace_text, replace_count, use_regex, case_sensitive, trim_whitespace):
+    def IS_CHANGED(cls, *args):
         # Return float("NaN") to ensure the node always processes
         return float("NaN")
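For reference, the effect of the new `multiline_regex` option can be checked outside ComfyUI. This standalone sketch mirrors the flag logic in the hunk above (the sample strings are made up, not from the node):

```python
import re

def build_flags(case_sensitive: bool, multiline_regex: bool, use_regex: bool) -> int:
    # Mirror the node's logic: IGNORECASE when case-insensitive,
    # DOTALL only when both regex mode and multiline_regex are enabled.
    flags = 0
    if not case_sensitive:
        flags |= re.IGNORECASE
    if multiline_regex and use_regex:
        flags |= re.DOTALL
    return flags

text = "Hello\nWorld"
# Without DOTALL, '.' stops at the newline, so the pattern never matches.
print(re.sub(re.compile("hello.world", build_flags(False, False, True)), "X", text))
# With DOTALL, '.' also matches the newline, so the whole string is replaced.
print(re.sub(re.compile("hello.world", build_flags(False, True, True)), "X", text))
# → "X"
```

Without the flag the first call returns the input unchanged; with it the pattern spans both lines.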
web/js/line_selector.js (new file, 188 lines)
@@ -0,0 +1,188 @@
import { app } from "../../../scripts/app.js";
import { api } from "../../../scripts/api.js";

app.registerExtension({
  name: "Bjornulf.LineSelector",
  async nodeCreated(node) {
    if (node.comfyClass !== "Bjornulf_LineSelector") return;

    // Hide seed widget
    const seedWidget = node.widgets.find((w) => w.name === "seed");
    if (seedWidget) {
      seedWidget.visible = false;
    }

    // Function to update the Reset Button text
    const updateResetButtonTextNode = () => {
      console.log("[line_selector]=====> updateResetButtonTextNode");
      if (!node.graph) return;

      fetch("/get_line_selector_counter", {
        method: "POST",
      })
        .then((response) => response.json())
        .then((data) => {
          if (!node.graph) return;

          if (data.success) {
            const jumpWidget = node.widgets.find((w) => w.name === "jump");
            const text = node.widgets.find((w) => w.name === "text");

            if (data.value === 0) {
              resetButton.name = "Reset Counter (Empty)";
            } else {
              // Count valid lines in text
              const lines = text.value
                .split("\n")
                .filter((line) => line.trim() && !line.trim().startsWith("#"));
              const lineCount = lines.length;

              let next_value = data.value + jumpWidget.value;
              if (next_value > lineCount) {
                resetButton.name = `Reset Counter (ABOVE MAX: ${next_value} > ${lineCount})`;
              } else {
                resetButton.name = `Reset Counter (next: ${next_value})`;
              }
            }
          } else if (node.graph) {
            resetButton.name = "Reset Counter (Error)";
          }
        })
        .catch((error) => {
          if (node.graph) {
            resetButton.name = "Reset Counter (Error)";
          }
        });
    };

    // Add reset button
    const resetButton = node.addWidget(
      "button",
      "Reset Counter",
      null,
      async () => {
        if (!node.graph) return;

        try {
          const response = await fetch("/reset_line_selector_counter", {
            method: "POST",
          });
          const data = await response.json();

          if (!node.graph) return;

          if (data.success) {
            app.ui.dialog.show(`[Line Selector] Reset counter successfully.`);
            updateResetButtonTextNode();
          } else {
            app.ui.dialog.show(
              `[Line Selector] Failed to reset counter: ${
                data.error || "Unknown error"
              }`
            );
          }
        } catch (error) {
          if (node.graph) {
            app.ui.dialog.show(
              "[Line Selector] An error occurred while resetting the counter."
            );
          }
        }
      }
    );

    // Create event handler function that we can remove later
    // const executedHandler = async (event) => {
    //   if (event.detail.node_id === node.id) {
    //     updateResetButtonTextNode();
    //   }
    // };

    // Initial update of showing counter number
    setTimeout(updateResetButtonTextNode, 0);

    // Listen for node execution events (update value when node executed)
    // api.addEventListener("executed", async () => {
    //   updateResetButtonTextNode();
    // });
    api.addEventListener("executed", async () => {
      // Check if context file is enabled before updating
      const contextWidget = node.widgets.find(
        (w) => w.name === "LOOP_SEQUENTIAL"
      );
      if (contextWidget && contextWidget.value) {
        updateResetButtonTextNode();
      }
    });

    // Override the original execute function
    const originalExecute = node.execute;
    node.execute = function () {
      const result = originalExecute.apply(this, arguments);
      if (result instanceof Promise) {
        return result.catch((error) => {
          if (error.message.includes("Counter has reached") && node.graph) {
            app.ui.dialog.show(`Execution blocked: ${error.message}`);
          }
          throw error;
        });
      }
      return result;
    };

    // Setup widget handlers for updating counter display
    const setupWidgetHandler = (widgetName) => {
      const widget = node.widgets.find((w) => w.name === widgetName);
      if (widget) {
        const originalOnChange = widget.callback;
        widget.callback = function (v) {
          if (originalOnChange) {
            originalOnChange.call(this, v);
          }
          if (node.widgets.find((w) => w.name === "LOOP_SEQUENTIAL")?.value) {
            updateResetButtonTextNode();
          }
        };
      }
    };

    // Setup handlers for relevant widgets
    setupWidgetHandler("jump");
    setupWidgetHandler("text");
    setupWidgetHandler("LOOP_SEQUENTIAL");

    // BUG: this cleanup gives a floating textarea
    // Add cleanup when node is removed
    // node.onRemoved = function() {
    //   api.removeEventListener("executed", executedHandler);
    // };

    // Initial button visibility check
    const updateButtonVisibility = () => {
      const loopSeqWidget = node.widgets.find(
        (w) => w.name === "LOOP_SEQUENTIAL"
      );
      resetButton.type = loopSeqWidget?.value ? "button" : "hidden";
      if (loopSeqWidget?.value) {
        updateResetButtonTextNode();
      }
    };

    // Setup visibility handler for LOOP_SEQUENTIAL
    const loopSeqWidget = node.widgets.find(
      (w) => w.name === "LOOP_SEQUENTIAL"
    );
    if (loopSeqWidget) {
      const originalOnChange = loopSeqWidget.callback;
      loopSeqWidget.callback = function (v) {
        if (originalOnChange) {
          originalOnChange.call(this, v);
        }
        updateButtonVisibility();
      };
    }

    // Initial update
    updateButtonVisibility();
  },
});
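The reset button above talks to server-side endpoints (`/get_line_selector_counter`, `/reset_line_selector_counter`) whose Python half is not shown in this diff. A minimal sketch of the counter logic such endpoints could wrap (the class name, storage, and response shape are assumptions inferred from the JS, not the node's actual implementation):

```python
# Hypothetical in-memory counter mirroring the JSON shape the JS expects.
class LineSelectorCounter:
    def __init__(self):
        self.value = 0

    def get(self):
        # Shape mimics the {"success": ..., "value": ...} responses above.
        return {"success": True, "value": self.value}

    def jump(self, step, line_count):
        # Advance by `step`; refuse to go past the number of selectable lines,
        # matching the "ABOVE MAX" label the button displays.
        next_value = self.value + step
        if next_value > line_count:
            return {"success": False, "error": f"ABOVE MAX: {next_value} > {line_count}"}
        self.value = next_value
        return self.get()

    def reset(self):
        self.value = 0
        return self.get()

counter = LineSelectorCounter()
print(counter.jump(2, 5))   # {'success': True, 'value': 2}
print(counter.reset())      # {'success': True, 'value': 0}
```

The JS only consumes `success`, `value`, and `error`, so any backend exposing that shape would drive the button text correctly.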
@@ -19,6 +19,9 @@ app.registerExtension({

    // Function to update the Reset Button text
    const updateResetButtonTextNode = () => {
+      console.log("[loop_lines_sequential]=====> updateResetButtonTextNode");
+      if (!node.graph) return;
+
      fetch("/get_current_line_number", {
        method: "POST",
      })

@@ -36,12 +39,12 @@ app.registerExtension({
            resetButton.name = `Reset Counter (next: ${next_value})`;
          }
        } else {
-          console.error("Error in context size:", data.error);
+          console.error("[Loop Lines Sequential] Error in context size:", data.error);
          resetButton.name = "Reset Counter (Error)";
        }
      })
      .catch((error) => {
-        console.error("Error fetching context size:", error);
+        console.error("[Loop Lines Sequential] Error fetching context size:", error);
        resetButton.name = "Reset Counter (Error)";
      });
    };

@@ -56,20 +59,15 @@ app.registerExtension({
        if (data.success) {
          // updateLineNumber();
          updateResetButtonTextNode();
-          app.ui.toast("Counter reset successfully!", { duration: 5000 });
+          // app.ui.dialog.show("Counter reset successfully!");
        } else {
-          app.ui.toast(
-            `Failed to reset counter: ${data.error || "Unknown error"}`,
-            { type: "error", duration: 5000 }
-          );
+          app.ui.dialog.show(
+            `[Loop Lines Sequential] Failed to reset counter: ${data.error || "Unknown error"}`);
        }
      })
      .catch((error) => {
-        console.error("Error:", error);
-        app.ui.toast("An error occurred while resetting the counter.", {
-          type: "error",
-          duration: 5000,
-        });
+        console.error("[Loop Lines Sequential] Error:", error);
+        app.ui.dialog.show("[Loop Lines Sequential] An error occurred while resetting the counter.");
      });
    });

@@ -82,20 +80,15 @@ app.registerExtension({
      .then((data) => {
        if (data.success) {
          updateResetButtonTextNode();
-          app.ui.toast("Counter incremented", { duration: 3000 });
+          // app.ui.dialog.show("Counter incremented");
        } else {
-          app.ui.toast(
-            `Failed to increment counter: ${data.error || "Unknown error"}`,
-            { type: "error", duration: 5000 }
-          );
+          app.ui.dialog.show(
+            `[Loop Lines Sequential] Failed to increment counter: ${data.error || "Unknown error"}`);
        }
      })
      .catch((error) => {
-        console.error("Error:", error);
-        app.ui.toast("An error occurred while incrementing the counter.", {
-          type: "error",
-          duration: 5000,
-        });
+        console.error("[Loop Lines Sequential] Error:", error);
+        app.ui.dialog.show("[Loop Lines Sequential] An error occurred while incrementing the counter.");
      });
    });

@@ -108,49 +101,18 @@ app.registerExtension({
      .then((data) => {
        if (data.success) {
          updateResetButtonTextNode();
-          app.ui.toast("Counter decremented", { duration: 3000 });
+          // app.ui.dialog.show("Counter decremented");
        } else {
-          app.ui.toast(
-            `Failed to decrement counter: ${data.error || "Unknown error"}`,
-            { type: "error", duration: 5000 }
-          );
+          app.ui.dialog.show(
+            `[Loop Lines Sequential] Failed to decrement counter: ${data.error || "Unknown error"}`);
        }
      })
      .catch((error) => {
-        console.error("Error:", error);
-        app.ui.toast("An error occurred while decrementing the counter.", {
-          type: "error",
-          duration: 5000,
-        });
+        console.error("[Loop Lines Sequential] Error:", error);
+        app.ui.dialog.show("[Loop Lines Sequential] An error occurred while decrementing the counter.");
      });
    });

-    // Add reset button
-    // const resetButton = node.addWidget("button", "Reset Counter", null, () => {
-    //   fetch("/reset_lines_counter", {
-    //     method: "POST",
-    //   })
-    //     .then((response) => response.json())
-    //     .then((data) => {
-    //       if (data.success) {
-    //         updateLineNumber();
-    //         app.ui.toast("Counter reset successfully!", { duration: 5000 });
-    //       } else {
-    //         app.ui.toast(
-    //           `Failed to reset counter: ${data.error || "Unknown error"}`,
-    //           { type: "error", duration: 5000 }
-    //         );
-    //       }
-    //     })
-    //     .catch((error) => {
-    //       console.error("Error:", error);
-    //       app.ui.toast("An error occurred while resetting the counter.", {
-    //         type: "error",
-    //         duration: 5000,
-    //       });
-    //     });
-    // });

    // Update line number periodically
    setTimeout(updateResetButtonTextNode, 0);

@@ -178,10 +140,7 @@ app.registerExtension({
      if (result instanceof Promise) {
        return result.catch((error) => {
          if (error.message.includes("Counter has reached its limit")) {
-            app.ui.toast(`Execution blocked: ${error.message}`, {
-              type: "error",
-              duration: 5000,
-            });
+            app.ui.dialog.show(`[Loop Lines Sequential] Execution blocked: ${error.message}`);
          }
          throw error;
        });
@@ -23,6 +23,9 @@ app.registerExtension({

    // Function to update the Reset Button text
    const updateResetButtonTextNode = () => {
+      console.log("[loop_sequential_integer]=====> updateResetButtonTextNode");
+      if (!node.graph) return;
+
      fetch("/get_counter_value", {
        method: "POST",
      })

@@ -49,12 +52,12 @@ app.registerExtension({
          }
        }
      } else {
-        console.error("Error in context size:", data.error);
+        console.error("[Loop Integer Sequential] Error in context size:", data.error);
        resetButton.name = "Reset Counter (Error)";
      }
    })
    .catch((error) => {
-      console.error("Error fetching context size:", error);
+      console.error("[Loop Integer Sequential] Error fetching context size:", error);
      resetButton.name = "Reset Counter (Error)";
    });
    };

@@ -69,20 +72,15 @@ app.registerExtension({
        if (data.success) {
          // updateLineNumber();
          updateResetButtonTextNode();
-          app.ui.toast("Counter reset successfully!", { duration: 5000 });
+          // app.ui.dialog.show("Counter reset successfully!");
        } else {
-          app.ui.toast(
-            `Failed to reset counter: ${data.error || "Unknown error"}`,
-            { type: "error", duration: 5000 }
-          );
+          app.ui.dialog.show(
+            `[Loop Integer Sequential] Failed to reset counter: ${data.error || "Unknown error"}`);
        }
      })
      .catch((error) => {
-        console.error("Error:", error);
-        app.ui.toast("An error occurred while resetting the counter.", {
-          type: "error",
-          duration: 5000,
-        });
+        console.error("[Loop Integer Sequential] Error:", error);
+        app.ui.dialog.show("[Loop Integer Sequential] An error occurred while resetting the counter.");
      });
    });

@@ -92,8 +90,8 @@ app.registerExtension({
      const result = originalExecute.apply(this, arguments);
      if (result instanceof Promise) {
        return result.catch((error) => {
-          if (error.message.includes("Counter has reached its limit")) {
-            app.ui.toast(`Execution blocked: ${error.message}`, {
+          if (error.message.includes("[Loop Integer Sequential] Counter has reached its limit")) {
+            app.ui.dialog.show(`[Loop Integer Sequential] Execution blocked: ${error.message}`, {
              type: "error",
              duration: 5000,
            });

@@ -150,20 +148,15 @@ app.registerExtension({
        if (data.success) {
          // updateLineNumber();
          updateResetButtonTextNode();
-          app.ui.toast("Counter reset successfully!", { duration: 5000 });
+          // app.ui.dialog.show("Counter reset successfully!", { duration: 5000 });
        } else {
-          app.ui.toast(
-            `Failed to reset counter: ${data.error || "Unknown error"}`,
-            { type: "error", duration: 5000 }
-          );
+          app.ui.dialog.show(
+            `[Loop Integer Sequential] Failed to reset counter: ${data.error || "Unknown error"}`);
        }
      })
      .catch((error) => {
-        console.error("Error:", error);
-        app.ui.toast("An error occurred while resetting the counter.", {
-          type: "error",
-          duration: 5000,
-        });
+        console.error("[Loop Integer Sequential] Error:", error);
+        app.ui.dialog.show("[Loop Integer Sequential] An error occurred while resetting the counter.");
      });
    };
  }
web/js/lora_stacks.js (new file, 110 lines)
@@ -0,0 +1,110 @@
import { app } from "../../../scripts/app.js";

app.registerExtension({
  name: "Bjornulf.AllLoraSelector",
  async nodeCreated(node) {
    if (node.comfyClass === "Bjornulf_AllLoraSelector") {
      node.properties = node.properties || {};

      const updateLoraInputs = () => {
        const initialWidth = node.size[0];
        const numLorasWidget = node.widgets.find(w => w.name === "number_of_loras");
        if (!numLorasWidget) return;

        const numLoras = numLorasWidget.value;
        const loraList = node.widgets.find(w => w.name === "lora_1")?.options?.values || [];

        // Save existing values
        node.widgets.forEach(w => {
          if (w.name.startsWith("lora_") || w.name.startsWith("strength_model_") || w.name.startsWith("strength_clip_")) {
            node.properties[w.name] = w.value;
          }
        });

        // Remove existing LoRA-related widgets
        node.widgets = node.widgets.filter(w =>
          !w.name.startsWith("lora_") &&
          !w.name.startsWith("strength_model_") &&
          !w.name.startsWith("strength_clip_")
        );

        // Add number_of_loras widget if it doesn't exist
        const ensureWidget = (name, type, defaultValue, config) => {
          let widget = node.widgets.find(w => w.name === name);
          if (!widget) {
            widget = node.addWidget(type, name,
              node.properties[name] !== undefined ? node.properties[name] : defaultValue,
              value => { node.properties[name] = value; },
              config
            );
          }
        };

        ensureWidget("number_of_loras", "number", 3, { min: 1, max: 20, step: 1 });

        // Add LoRA widgets for each slot
        for (let i = 1; i <= numLoras; i++) {
          const loraName = `lora_${i}`;
          const strengthModelName = `strength_model_${i}`;
          const strengthClipName = `strength_clip_${i}`;

          // Add LoRA selector
          node.addWidget("combo", loraName,
            node.properties[loraName] || loraList[0],
            value => { node.properties[loraName] = value; },
            { values: loraList }
          );

          // Add strength sliders
          node.addWidget("number", strengthModelName,
            node.properties[strengthModelName] !== undefined ? node.properties[strengthModelName] : 1.0,
            value => { node.properties[strengthModelName] = value; },
            { min: -100.0, max: 100.0, step: 0.01 }
          );

          node.addWidget("number", strengthClipName,
            node.properties[strengthClipName] !== undefined ? node.properties[strengthClipName] : 1.0,
            value => { node.properties[strengthClipName] = value; },
            { min: -100.0, max: 100.0, step: 0.01 }
          );
        }

        node.setSize(node.computeSize());
        node.size[0] = Math.max(initialWidth, node.size[0]);
      };

      // Set up number_of_loras widget callback
      const numLorasWidget = node.widgets.find(w => w.name === "number_of_loras");
      if (numLorasWidget) {
        numLorasWidget.callback = () => {
          updateLoraInputs();
          app.graph.setDirtyCanvas(true);
        };
      }

      // Handle serialization
      const originalOnSerialize = node.onSerialize;
      node.onSerialize = function(info) {
        if (originalOnSerialize) {
          originalOnSerialize.call(this, info);
        }
        info.properties = { ...this.properties };
      };

      // Handle deserialization
      const originalOnConfigure = node.onConfigure;
      node.onConfigure = function(info) {
        if (originalOnConfigure) {
          originalOnConfigure.call(this, info);
        }
        if (info.properties) {
          Object.assign(this.properties, info.properties);
        }
        updateLoraInputs();
      };

      // Initial setup
      updateLoraInputs();
    }
  }
});
web/js/model_clip_vae_selector.js (new file, 71 lines)
@@ -0,0 +1,71 @@
import { app } from "../../../scripts/app.js";

app.registerExtension({
  name: "Bjornulf.ModelClipVaeSelector",
  async nodeCreated(node) {
    if (node.comfyClass === "Bjornulf_ModelClipVaeSelector") {
      const updateInputs = () => {
        const numInputsWidget = node.widgets.find(w => w.name === "number_of_inputs");
        if (!numInputsWidget) return;

        const numInputs = numInputsWidget.value;

        // Initialize node.inputs if it doesn't exist
        if (!node.inputs) {
          node.inputs = [];
        }

        // Filter existing model, clip, and vae inputs
        const existingModelInputs = node.inputs.filter(input => input.name.startsWith('model_'));
        const existingClipInputs = node.inputs.filter(input => input.name.startsWith('clip_'));
        const existingVaeInputs = node.inputs.filter(input => input.name.startsWith('vae_'));

        // Determine if we need to add or remove inputs
        if (existingModelInputs.length < numInputs || existingClipInputs.length < numInputs || existingVaeInputs.length < numInputs) {
          // Add new model, clip, and vae inputs if not enough existing
          for (let i = Math.max(existingModelInputs.length, existingClipInputs.length, existingVaeInputs.length) + 1; i <= numInputs; i++) {
            const modelInputName = `model_${i}`;
            const clipInputName = `clip_${i}`;
            const vaeInputName = `vae_${i}`;
            if (!node.inputs.find(input => input.name === modelInputName)) {
              node.addInput(modelInputName, "MODEL");
            }
            if (!node.inputs.find(input => input.name === clipInputName)) {
              node.addInput(clipInputName, "CLIP");
            }
            if (!node.inputs.find(input => input.name === vaeInputName)) {
              node.addInput(vaeInputName, "VAE");
            }
          }
        } else {
          // Remove excess model, clip, and vae inputs if too many
          node.inputs = node.inputs.filter(input =>
            (!input.name.startsWith('model_') && !input.name.startsWith('clip_') && !input.name.startsWith('vae_')) ||
            (parseInt(input.name.split('_')[1]) <= numInputs)
          );
        }

        node.setSize(node.computeSize());
      };

      // Move number_of_inputs to the top initially
      const numInputsWidget = node.widgets.find(w => w.name === "number_of_inputs");
      if (numInputsWidget) {
        node.widgets = [numInputsWidget, ...node.widgets.filter(w => w !== numInputsWidget)];
        numInputsWidget.callback = () => {
          updateInputs();
          app.graph.setDirtyCanvas(true);
        };
      }

      // Set seed widget to hidden input
      const seedWidget = node.widgets.find((w) => w.name === "seed");
      if (seedWidget) {
        seedWidget.type = "HIDDEN";
      }

      // Delay the initial update to ensure node is fully initialized
      setTimeout(updateInputs, 0);
    }
  }
});
@@ -10,10 +10,16 @@ app.registerExtension({
      "select_model_here",
      "",
      (v) => {
-        // When model_list changes, update model_name
-        const modelNameWidget = node.widgets.find(w => w.name === "model_name");
-        if (modelNameWidget) {
-          modelNameWidget.value = v;
+        try {
+          // When model_list changes, update model_name
+          const modelNameWidget = node.widgets.find(w => w.name === "model_name");
+          if (modelNameWidget) {
+            modelNameWidget.value = v;
+          } else {
+            console.error('[Ollama Config] Model name widget not found');
+          }
+        } catch (error) {
+          console.error('[Ollama Config] Error updating model name:', error);
        }
      },
      { values: [] }

@@ -26,13 +32,26 @@ app.registerExtension({
      value: "Update Models",
      callback: async function() {
        try {
-          const url = node.widgets.find(w => w.name === "ollama_url").value;
+          const url = node.widgets.find(w => w.name === "ollama_url")?.value;
+          if (!url) {
+            console.error('[Ollama Config] Ollama URL is not set');
+            return;
+          }
+
+          console.log('[Ollama Config] Fetching models from:', url);
          const response = await fetch(`${url}/api/tags`);
+
+          if (!response.ok) {
+            console.error('[Ollama Config] Server response not OK:', response.status, response.statusText);
+            return;
+          }
+
          const data = await response.json();
+
+          if (data.models) {
            const modelNames = data.models.map(m => m.name);
            if (modelNames.length > 0) {
              console.log('Found models:', modelNames);
              // Update model_list widget
              modelListWidget.options.values = modelNames;
              modelListWidget.value = modelNames[0];

@@ -41,11 +60,22 @@ app.registerExtension({
              const modelNameWidget = node.widgets.find(w => w.name === "model_name");
              if (modelNameWidget) {
                modelNameWidget.value = modelNames[0];
+              } else {
+                console.error('[Ollama Config] Model name widget not found');
              }
+            } else {
+              console.error('[Ollama Config] No models found in response');
+            }
+          } else {
+            console.error('[Ollama Config] Invalid response format:', data);
          }
        } catch (error) {
-          console.error('Error updating models:', error);
+          console.error('[Ollama Config] Error updating models:', error);
+          console.error('[Ollama Config] Error details:', {
+            message: error.message,
+            stack: error.stack,
+            name: error.name
+          });
        }
      }
    });
@@ -14,24 +14,28 @@ app.registerExtension({

    // Function to update the Reset Button text
    const updateResetButtonTextNode = () => {
+      console.log("[ollama_talk]=====> updateResetButtonTextNode:");
+      if (!node.graph) return;
+
      fetch("/get_current_context_size", {
        method: "POST",
      })
      .then((response) => response.json())
      .then((data) => {
        if (data.success) {
          // console.log("[Ollama] /get_current_context_size fetched successfully");
          if (data.value === 0) {
            resetButton.name = "Save/Reset Context File (Empty)";
          } else {
-            resetButton.name = `Save/Reset Context File (${data.value} lines)`;
+            resetButton.name = `Reset Context File (${data.value} lines)`;
          }
        } else {
-          console.error("Error in context size:", data.error);
+          console.error("[Ollama] Error in context size:", data.error);
          resetButton.name = "Save/Reset Context File (Error)";
        }
      })
      .catch((error) => {
-        console.error("Error fetching context size:", error);
+        console.error("[Ollama] Error fetching context size:", error);
        resetButton.name = "Save/Reset Context File (Error)";
      });
    };

@@ -50,20 +54,15 @@ app.registerExtension({
        if (data.success) {
          // updateLineNumber();
          updateResetButtonTextNode();
-          app.ui.toast("Counter reset successfully!", { duration: 5000 });
+          app.ui.dialog.show("[Ollama] Context saved in Bjornulf/ollama and reset successfully!");
        } else {
-          app.ui.toast(
-            `Failed to reset counter: ${data.error || "Unknown error"}`,
-            { type: "error", duration: 5000 }
-          );
+          app.ui.dialog.show(
+            `[Ollama] Failed to reset Context: ${data.error || "Unknown error"}`);
        }
      })
      .catch((error) => {
-        console.error("Error:", error);
-        app.ui.toast("An error occurred while resetting the counter.", {
-          type: "error",
-          duration: 5000,
-        });
+        app.ui.dialog.show("[Ollama] An error occurred while resetting the Context.");
      });
    }
  );

@@ -89,31 +88,52 @@ app.registerExtension({
      })
      .then((response) => response.text())
      .then((data) => {
-        console.log("Resume response:", data);
+        console.log("[Ollama] Resume response:", data);
      })
-      .catch((error) => console.error("Error:", error));
+      .catch((error) => console.error("[Ollama] Error:", error));
    });

+    // Function to update button visibility based on widget values
+    // const updateButtonVisibility = () => {
+    //   // Check context file widget
+    //   const contextWidget = node.widgets.find(
+    //     (w) => w.name === "use_context_file"
+    //   );
+    //   const isContextFileEnabled = contextWidget
+    //     ? contextWidget.value
+    //     : false;
+    //   resetButton.type = isContextFileEnabled ? "button" : "HIDDEN";
+
+    //   // Check waiting for prompt widget
+    //   const waitingWidget = node.widgets.find(
+    //     (w) => w.name === "waiting_for_prompt"
+    //   );
+    //   const isWaitingForPrompt = waitingWidget ? waitingWidget.value : false;
+    //   resumeButton.type = isWaitingForPrompt ? "button" : "HIDDEN";

+    //   //ALSO update reset button text node
+    //   updateResetButtonTextNode(); // Will trigger when... toggle / refresh page

+    //   // Force canvas redraw to update UI
+    //   node.setDirtyCanvas(true);
+    // };

+    // In updateButtonVisibility function - Only update when context is enabled
    const updateButtonVisibility = () => {
      // Check context file widget
-      const contextWidget = node.widgets.find(
-        (w) => w.name === "use_context_file"
-      );
-      const isContextFileEnabled = contextWidget
-        ? contextWidget.value
-        : false;
+      const contextWidget = node.widgets.find(w => w.name === "use_context_file");
+      const isContextFileEnabled = contextWidget ? contextWidget.value : false;
      resetButton.type = isContextFileEnabled ? "button" : "HIDDEN";

      // Check waiting for prompt widget
-      const waitingWidget = node.widgets.find(
-        (w) => w.name === "waiting_for_prompt"
-      );
+      const waitingWidget = node.widgets.find(w => w.name === "waiting_for_prompt");
      const isWaitingForPrompt = waitingWidget ? waitingWidget.value : false;
      resumeButton.type = isWaitingForPrompt ? "button" : "HIDDEN";

-      //ALSO update reset button text node
-      updateResetButtonTextNode(); // Will trigger when... toggle / refresh page
+      // ONLY update reset button text if context file is enabled
+      if (isContextFileEnabled) {
+        updateResetButtonTextNode();
+      }

      // Force canvas redraw to update UI
      node.setDirtyCanvas(true);

@@ -151,8 +171,15 @@ app.registerExtension({
    setTimeout(updateButtonVisibility, 0);

    // Listen for node execution events
    // api.addEventListener("executed", async () => {
    //   updateResetButtonTextNode();
    // });
    api.addEventListener("executed", async () => {
-      updateResetButtonTextNode();
+      // Check if context file is enabled before updating
+      const contextWidget = node.widgets.find(w => w.name === "use_context_file");
+      if (contextWidget && contextWidget.value) {
+        updateResetButtonTextNode();
+      }
    });

    //If workflow is stopped during pause, cancel the run
@@ -1,7 +1,7 @@
 import re
 import random
 import time
-import logging
+# import logging

 class WriteTextAdvanced:
     @classmethod

@@ -23,8 +23,8 @@ class WriteTextAdvanced:
     CATEGORY = "Bjornulf"

     def write_text_special(self, text, variables="", seed=None):
-        logging.info(f"Raw text: {text}")
-        logging.info(f"Variables: {variables}")
+        # logging.info(f"Raw text: {text}")
+        # logging.info(f"Variables: {variables}")

         if len(text) > 10000:
             return ("Text too large to process at once. Please split into smaller parts.",)

@@ -41,7 +41,7 @@ class WriteTextAdvanced:
                 key, value = line.split('=', 1)
                 var_dict[key.strip()] = value.strip()

-        logging.info(f"Parsed variables: {var_dict}")
+        # logging.info(f"Parsed variables: {var_dict}")

         # Replace variables
         for key, value in var_dict.items():

@@ -54,7 +54,7 @@ class WriteTextAdvanced:
             return random.choice(match.group(1).split('|'))

         result = re.sub(pattern, replace_random, text)
-        logging.info(f"Final text: {result}")
+        # logging.info(f"Final text: {result}")

         return (result,)
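The `replace_random` callback in the WriteTextAdvanced hunk above expands `{a|b|c}` groups with `random.choice`. A standalone sketch of that substitution, with a seed for reproducibility (the sample text and `expand_choices` name are illustrative, not the node's API):

```python
import random
import re

def expand_choices(text, seed=None):
    # Replace each {option1|option2|...} group with one randomly chosen option.
    # Seeding the RNG first makes a given seed reproduce the same output.
    if seed is not None:
        random.seed(seed)
    pattern = r'\{([^{}]+)\}'

    def replace_random(match):
        return random.choice(match.group(1).split('|'))

    return re.sub(pattern, replace_random, text)

result = expand_choices("a {red|green|blue} fox", seed=42)
print(result)  # one of: "a red fox", "a green fox", "a blue fox"
```

Text without any `{...}` group passes through unchanged, and the same seed always yields the same expansion.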