0.50
60
README.md
@@ -1,6 +1,6 @@
|
|||||||
# 🔗 Comfyui : Bjornulf_custom_nodes v0.49 🔗
|
# 🔗 Comfyui : Bjornulf_custom_nodes v0.50 🔗
|
||||||
|
|
||||||
A list of 56 custom nodes for Comfyui : Display, manipulate, and edit text, images, videos, loras and more.
|
A list of 59 custom nodes for Comfyui : Display, manipulate, and edit text, images, videos, loras and more.
|
||||||
You can manage looping operations, generate randomized content, trigger logical conditions, pause and manually control your workflows and even work with external AI tools, like Ollama or Text To Speech.
|
You can manage looping operations, generate randomized content, trigger logical conditions, pause and manually control your workflows and even work with external AI tools, like Ollama or Text To Speech.
|
||||||
|
|
||||||
# Coffee : ☕☕☕☕☕ 5/5
|
# Coffee : ☕☕☕☕☕ 5/5
|
||||||
@@ -45,6 +45,7 @@ You can manage looping operations, generate randomized content, trigger logical
|
|||||||
`53.` [♻ Loop Load checkpoint (Model Selector)](#53----loop-load-checkpoint-model-selector)
|
`53.` [♻ Loop Load checkpoint (Model Selector)](#53----loop-load-checkpoint-model-selector)
|
||||||
`54.` [♻ Loop Lora Selector](#54----loop-lora-selector)
|
`54.` [♻ Loop Lora Selector](#54----loop-lora-selector)
|
||||||
`56.` [♻📝 Loop Sequential (Integer)](#56----loop-sequential-integer)
|
`56.` [♻📝 Loop Sequential (Integer)](#56----loop-sequential-integer)
|
||||||
|
`57.` [♻📝 Loop Sequential (input Lines)](#57)
|
||||||
|
|
||||||
## 🎲 Randomization 🎲
|
## 🎲 Randomization 🎲
|
||||||
`3.` [✒🗔 Advanced Write Text (+ 🎲 random selection and 🅰️ variables)](#3----advanced-write-text---random-selection-and-🅰%EF%B8%8F-variables)
|
`3.` [✒🗔 Advanced Write Text (+ 🎲 random selection and 🅰️ variables)](#3----advanced-write-text---random-selection-and-🅰%EF%B8%8F-variables)
|
||||||
@@ -98,7 +99,10 @@ You can manage looping operations, generate randomized content, trigger logical
|
|||||||
`49.` [📹👁 Video Preview](#49----video-preview)
|
`49.` [📹👁 Video Preview](#49----video-preview)
|
||||||
`50.` [🖼➜📹 Images to Video path (tmp video)](#50----images-to-video-path-tmp-video)
|
`50.` [🖼➜📹 Images to Video path (tmp video)](#50----images-to-video-path-tmp-video)
|
||||||
`51.` [📹➜🖼 Video Path to Images](#51----video-path-to-images)
|
`51.` [📹➜🖼 Video Path to Images](#51----video-path-to-images)
|
||||||
`52.` [🔊📹 Audio Video Sync](#52----audio-video-sync)
|
`52.` [🔊📹 Audio Video Sync](#52----audio-video-sync)
|
||||||
|
`58.` [📹🔗 Concat Videos](#58)
|
||||||
|
`59.` [📹🔊 Combine Video + Audio](#59)
|
||||||
|
|
||||||
|
|
||||||
## 🤖 AI 🤖
|
## 🤖 AI 🤖
|
||||||
`19.` [🦙 Ollama](#19----ollama)
|
`19.` [🦙 Ollama](#19----ollama)
|
||||||
@@ -107,6 +111,7 @@ You can manage looping operations, generate randomized content, trigger logical
|
|||||||
## 🔊 Audio 🔊
|
## 🔊 Audio 🔊
|
||||||
`31.` [🔊 TTS - Text to Speech](#31----tts---text-to-speech-100-local-any-voice-you-want-any-language)
|
`31.` [🔊 TTS - Text to Speech](#31----tts---text-to-speech-100-local-any-voice-you-want-any-language)
|
||||||
`52.` [🔊📹 Audio Video Sync](#52----audio-video-sync)
|
`52.` [🔊📹 Audio Video Sync](#52----audio-video-sync)
|
||||||
|
`59.` [📹🔊 Combine Video + Audio](#59)
|
||||||
|
|
||||||
## 💻 System 💻
|
## 💻 System 💻
|
||||||
`34.` [🧹 Free VRAM hack](#34----free-vram-hack)
|
`34.` [🧹 Free VRAM hack](#34----free-vram-hack)
|
||||||
@@ -251,6 +256,7 @@ cd /where/you/installed/ComfyUI && python main.py
|
|||||||
- **v0.47**: New node : Loop Load checkpoint (Model Selector).
|
- **v0.47**: New node : Loop Load checkpoint (Model Selector).
|
||||||
- **v0.48**: Two new nodes for loras : Random Lora Selector and Loop Lora Selector.
|
- **v0.48**: Two new nodes for loras : Random Lora Selector and Loop Lora Selector.
|
||||||
- **v0.49**: New node : Loop Sequential (Integer) - Loop through a range of integer values (but only once per workflow run). Audio sync is smarter and adapts the video duration to the audio duration. Added requirements.txt.
|
- **v0.49**: New node : Loop Sequential (Integer) - Loop through a range of integer values (but only once per workflow run). Audio sync is smarter and adapts the video duration to the audio duration. Added requirements.txt.
|
||||||
|
- **v0.50**: Allow audio in Images to Video path (tmp video). Add three new nodes : Concat Videos, Combine Video + Audio and Loop Sequential (input Lines). Save Write Text changes inside the ComfyUI folder. Fix random line from input outputting a LIST. ❗ Breaking change to the audio/video sync node, allowing different types as input.
|
||||||
|
|
||||||
# 📝 Nodes descriptions
|
# 📝 Nodes descriptions
|
||||||
|
|
||||||
@@ -392,7 +398,8 @@ Resize an image to exact dimensions. The other node will save the image to the e
|
|||||||
## 15 - 💾 Save Text
|
## 15 - 💾 Save Text
|
||||||
|
|
||||||
**Description:**
|
**Description:**
|
||||||
Save the given text input to a file. Useful for logging and storing text data.
|
Save the given text input to a file. Useful for logging and storing text data.
|
||||||
|
If the file already exists, the text will be appended to the end of the file.
|
||||||
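The append-on-existing behaviour can be sketched in a few lines. This is an illustrative sketch, not the node's actual code; the function name and path handling are assumptions:

```python
import os

def save_text(text, path):
    # Create parent folders if needed; append if the file already exists,
    # otherwise create it
    if os.path.dirname(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a", encoding="utf-8") as f:
        f.write(text + "\n")
```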
|
|
||||||

|

|
||||||
|
|
||||||
@@ -721,6 +728,8 @@ Details :
|
|||||||
Check node number 40 before deciding which one to use.
|
Check node number 40 before deciding which one to use.
|
||||||
Node 53 is the loop version of this node.
|
Node 53 is the loop version of this node.
|
||||||
|
|
||||||
|
NOTE : If you want to load a single checkpoint but also extract its folder name (to use the checkpoint name as a folder name, for example, or with the if/else node), you can use my node 41 with only one checkpoint. (It will pick one at random, so... always the same one.)
|
||||||
|
|
||||||
### 42 - ♻ Loop (Model+Clip+Vae) - aka Checkpoint / Model
|
### 42 - ♻ Loop (Model+Clip+Vae) - aka Checkpoint / Model
|
||||||
|
|
||||||

|

|
||||||
@@ -819,7 +828,7 @@ Combine multiple images (A single image or a list of images.)
|
|||||||
There are two types of logic to "combine images". With "all_in_one" enabled, it will combine all the images into one tensor.
|
There are two types of logic to "combine images". With "all_in_one" enabled, it will combine all the images into one tensor.
|
||||||
Otherwise it will send the images one by one. (check examples below) :
|
Otherwise it will send the images one by one. (check examples below) :
|
||||||
|
|
||||||
This is an example of the "all_in_one" option disabled :
|
This is an example of the "all_in_one" option disabled (Note that there are 2 images; they are NOT side by side, they are combined in a list.) :
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
@@ -856,6 +865,7 @@ This node takes a video path as input and displays the video.
|
|||||||
|
|
||||||
**Description:**
|
**Description:**
|
||||||
This node will take a list of images and convert them to a temporary video file.
|
This node will take a list of images and convert them to a temporary video file.
|
||||||
|
❗ Update 0.50 : You can now send audio to the video. (audio_path OR an AUDIO input)
|
||||||
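Under the hood, this kind of images-to-video node typically drives ffmpeg. A hedged sketch of the command it might build, including the optional audio track added in 0.50 (the function name and defaults are illustrative assumptions, not the node's actual API):

```python
def build_images_to_video_cmd(frame_pattern, fps, output_path, audio_path=None):
    # ffmpeg command turning numbered frames into an mp4; when an audio
    # path is given, mux it in and stop at the shorter stream (-shortest)
    cmd = ["ffmpeg", "-y", "-framerate", str(fps), "-i", frame_pattern]
    if audio_path:
        cmd += ["-i", audio_path, "-c:a", "aac", "-shortest"]
    cmd += ["-c:v", "libx264", "-pix_fmt", "yuv420p", output_path]
    return cmd
```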
|
|
||||||

|

|
||||||
|
|
||||||
@@ -880,8 +890,8 @@ You can then chain up several video and they will transition smoothly.
|
|||||||
|
|
||||||
In detail, this node will :
|
In detail, this node will :
|
||||||
- If video slightly too long : add silence to the audio file.
|
- If video slightly too long : add silence to the audio file.
|
||||||
- If video way too long : will slow down the video up to 0.50x the speed + add silence to the audio.
|
- If video way too long : will slow down the video up to 0.50x the speed + add silence to the audio. (now editable)
|
||||||
- If audio slightly too long : will speed up video up to 1.5x the speed.
|
- If audio slightly too long : will speed up video up to 1.5x the speed. (now editable)
|
||||||
- If video way too long : will speed up video up to 1.5x the speed + add silence to the audio.
|
- If video way too long : will speed up video up to 1.5x the speed + add silence to the audio.
|
||||||
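Reading the AudioVideoSync code further down in this diff, the duration rules above boil down to roughly this decision function. A simplified sketch, not the node's API; the action names are made up for illustration:

```python
def plan_sync(video_dur, audio_dur, max_speedup=1.5, max_slowdown=0.5):
    # Returns (action, speed) describing how durations get reconciled;
    # the speed limits are the ones made editable in 0.50
    ratio = video_dur / audio_dur
    if audio_dur > video_dur:
        if ratio >= max_slowdown:
            return ("slow_video_pad_audio", ratio)  # small gap: slow the video
        return ("repeat_video", None)               # big gap: loop the video
    if abs(ratio - 1.0) <= 0.1:
        return ("copy", 1.0)                        # close enough: no change
    return ("speedup_video", min(ratio, max_speedup))
```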
|
|
||||||
It works well with, for example, MuseTalk <https://github.com/chaojie/ComfyUI-MuseTalk>
|
It works well with, for example, MuseTalk <https://github.com/chaojie/ComfyUI-MuseTalk>
|
||||||
@@ -890,6 +900,13 @@ Here is an example of the `Audio Video Sync` node, notice that it is also conven
|
|||||||
|
|
||||||

|

|
||||||
|
|
||||||
|
❗ Update 0.50 : audio_duration is now optional; if not connected, it will be computed from the audio.
|
||||||
|
❗ Update 0.50 : You can now provide the video as a list of images OR a video_path; same for audio : AUDIO or audio_path.
|
||||||
|
|
||||||
|
New v0.50 layout, same logic :
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
### 53 - ♻ Loop Load checkpoint (Model Selector)
|
### 53 - ♻ Loop Load checkpoint (Model Selector)
|
||||||
|
|
||||||
**Description:**
|
**Description:**
|
||||||
@@ -933,3 +950,32 @@ Under the hood it is using the file `counter_integer.txt` in the `ComfyUI/Bjornu
|
|||||||

|

|
||||||

|

|
||||||

|

|
||||||
|
|
||||||
|
### 57 - ♻📝 Loop Sequential (input Lines)
|
||||||
|
|
||||||
|
**Description:**
|
||||||
|
This loop works like a normal loop, BUT it is sequential : it advances only one step per workflow run !!!
|
||||||
|
The first time it will output the first line, the second time the second line, etc...
|
||||||
|
You can also control the current line with the +1 / -1 buttons.
|
||||||
|
When the last line is reached, the node will STOP the workflow, preventing anything else from running after it.
|
||||||
|
Under the hood, it uses the file `counter_lines.txt` in the `ComfyUI/Bjornulf` folder.
|
||||||
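A minimal sketch of that counter mechanism, simplified for illustration (the real node also wires up the +1 / -1 buttons and ComfyUI's execution model; the function name is an assumption):

```python
import os

def next_line(lines, counter_file):
    # Persist the line index across workflow runs, like counter_lines.txt:
    # read the stored index, emit that line, then advance and save it
    idx = 0
    if os.path.exists(counter_file):
        with open(counter_file) as f:
            idx = int(f.read().strip() or 0)
    if idx >= len(lines):
        # Last line already consumed: stop the workflow here
        raise RuntimeError("Last line reached: stopping the workflow")
    with open(counter_file, "w") as f:
        f.write(str(idx + 1))
    return lines[idx]
```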
|
|
||||||
|
Here is an example of usage with my TTS node : when I have a list of sentences to process, if I don't like a version, I can just click on the -1 button, tick "overwrite" on the TTS node, and it will generate the same sentence again; repeat until it sounds good.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### 58 - 📹🔗 Concat Videos
|
||||||
|
|
||||||
|
**Description:**
|
||||||
|
Take two videos and concatenate them. (One after the other in the same video.)
|
||||||
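The usual way to do this is ffmpeg's concat demuxer, the same approach the sync code in this diff uses for repeating a video. A sketch that writes the list file and builds the command (function name illustrative, not the node's actual code):

```python
import os

def build_concat_cmd(video_a, video_b, list_path, output_path):
    # The concat demuxer reads input paths from a text file, one per line;
    # -c copy joins the streams without re-encoding (inputs must match codecs)
    with open(list_path, "w") as f:
        f.write(f"file '{os.path.abspath(video_a)}'\n")
        f.write(f"file '{os.path.abspath(video_b)}'\n")
    return ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", output_path]
```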
|
|
||||||
|

|
||||||
|
|
||||||
|
### 59 - 📹🔊 Combine Video + Audio
|
||||||
|
|
||||||
|
**Description:**
|
||||||
|
Simply combine video and audio together.
|
||||||
|
Video : Use list of images or video path.
|
||||||
|
Audio : Use audio path or audio type.
|
||||||
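A minimal sketch of such a combine step with ffmpeg, assuming a video file and an audio file already on disk (the function name is illustrative; the node also accepts image lists and AUDIO inputs, which would need converting first):

```python
def build_mux_cmd(video_path, audio_path, output_path):
    # Copy the video stream untouched, encode audio to AAC, and stop at
    # the end of the shorter stream (-shortest)
    return ["ffmpeg", "-y", "-i", video_path, "-i", audio_path,
            "-c:v", "copy", "-c:a", "aac", "-shortest", output_path]
```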
|
|
||||||
|

|
||||||
|
|||||||
@@ -59,9 +59,15 @@ from .loop_model_selector import LoopModelSelector
|
|||||||
from .random_lora_selector import RandomLoraSelector
|
from .random_lora_selector import RandomLoraSelector
|
||||||
from .loop_lora_selector import LoopLoraSelector
|
from .loop_lora_selector import LoopLoraSelector
|
||||||
from .loop_sequential_integer import LoopIntegerSequential
|
from .loop_sequential_integer import LoopIntegerSequential
|
||||||
|
from .loop_lines_sequential import LoopLinesSequential
|
||||||
|
from .concat_videos import ConcatVideos
|
||||||
|
from .combine_video_audio import CombineVideoAudio
|
||||||
|
|
||||||
NODE_CLASS_MAPPINGS = {
|
NODE_CLASS_MAPPINGS = {
|
||||||
"Bjornulf_ollamaLoader": ollamaLoader,
|
"Bjornulf_ollamaLoader": ollamaLoader,
|
||||||
|
"Bjornulf_CombineVideoAudio": CombineVideoAudio,
|
||||||
|
"Bjornulf_ConcatVideos": ConcatVideos,
|
||||||
|
"Bjornulf_LoopLinesSequential": LoopLinesSequential,
|
||||||
"Bjornulf_LoopIntegerSequential": LoopIntegerSequential,
|
"Bjornulf_LoopIntegerSequential": LoopIntegerSequential,
|
||||||
"Bjornulf_LoopLoraSelector": LoopLoraSelector,
|
"Bjornulf_LoopLoraSelector": LoopLoraSelector,
|
||||||
"Bjornulf_RandomLoraSelector": RandomLoraSelector,
|
"Bjornulf_RandomLoraSelector": RandomLoraSelector,
|
||||||
@@ -122,6 +128,9 @@ NODE_CLASS_MAPPINGS = {
|
|||||||
|
|
||||||
NODE_DISPLAY_NAME_MAPPINGS = {
|
NODE_DISPLAY_NAME_MAPPINGS = {
|
||||||
"Bjornulf_WriteText": "✒ Write Text",
|
"Bjornulf_WriteText": "✒ Write Text",
|
||||||
|
"Bjornulf_CombineVideoAudio": "📹🔊 Combine Video + Audio",
|
||||||
|
"Bjornulf_ConcatVideos": "📹🔗 Concat Videos",
|
||||||
|
"Bjornulf_LoopLinesSequential": "♻📝 Loop Sequential (input Lines)",
|
||||||
"Bjornulf_LoopIntegerSequential": "♻📝 Loop Sequential (Integer)",
|
"Bjornulf_LoopIntegerSequential": "♻📝 Loop Sequential (Integer)",
|
||||||
"Bjornulf_LoopLoraSelector": "♻ Loop Lora Selector",
|
"Bjornulf_LoopLoraSelector": "♻ Loop Lora Selector",
|
||||||
"Bjornulf_RandomLoraSelector": "🎲 Random Lora Selector",
|
"Bjornulf_RandomLoraSelector": "🎲 Random Lora Selector",
|
||||||
|
|||||||
@@ -4,211 +4,371 @@ import os
|
|||||||
import subprocess
|
import subprocess
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
import math
|
import math
|
||||||
|
from PIL import Image
|
||||||
|
import logging
|
||||||
|
import torchvision.transforms as transforms
|
||||||
|
|
||||||
class AudioVideoSync:
|
class AudioVideoSync:
|
||||||
|
"""
|
||||||
|
ComfyUI custom node for synchronizing audio and video with configurable speed adjustments.
|
||||||
|
Supports both video files and image sequences as input, as well as audio files or AUDIO objects.
|
||||||
|
"""
|
||||||
|
|
||||||
def __init__(self):
|
def __init__(self):
|
||||||
pass
|
"""Initialize the AudioVideoSync node."""
|
||||||
|
self.base_dir = "Bjornulf"
|
||||||
|
self.temp_dir = os.path.join(self.base_dir, "temp_frames")
|
||||||
|
self.sync_video_dir = os.path.join(self.base_dir, "sync_video")
|
||||||
|
self.sync_audio_dir = os.path.join(self.base_dir, "sync_audio")
|
||||||
|
|
||||||
|
# Create necessary directories
|
||||||
|
for directory in [self.temp_dir, self.sync_video_dir, self.sync_audio_dir]:
|
||||||
|
os.makedirs(directory, exist_ok=True)
|
||||||
|
|
||||||
@classmethod
|
@classmethod
|
||||||
def INPUT_TYPES(cls):
|
def INPUT_TYPES(cls):
|
||||||
|
"""Define input parameters for the node."""
|
||||||
return {
|
return {
|
||||||
"required": {
|
"required": {
|
||||||
"audio": ("AUDIO",),
|
"max_speedup": ("FLOAT", {
|
||||||
"video_path": ("STRING", {"default": ""}),
|
"default": 1.5,
|
||||||
"audio_duration": ("FLOAT", {"default": 0.0, "min": 0.0, "max": 3600.0, "step": 0.001}),
|
"min": 1.0,
|
||||||
|
"max": 10.0,
|
||||||
|
"step": 0.1
|
||||||
|
}),
|
||||||
|
"max_slowdown": ("FLOAT", {
|
||||||
|
"default": 0.5,
|
||||||
|
"min": 0.1,
|
||||||
|
"max": 1.0,
|
||||||
|
"step": 0.1
|
||||||
|
}),
|
||||||
},
|
},
|
||||||
|
"optional": {
|
||||||
|
"IMAGES": ("IMAGE",),
|
||||||
|
"AUDIO": ("AUDIO",),
|
||||||
|
"audio_path": ("STRING", {"default": "", "forceInput": True}),
|
||||||
|
"audio_duration": ("FLOAT", {
|
||||||
|
"default": 0.0,
|
||||||
|
"min": 0.0,
|
||||||
|
"max": 3600.0,
|
||||||
|
"step": 0.001
|
||||||
|
}),
|
||||||
|
"video_path": ("STRING", {
|
||||||
|
"default": "",
|
||||||
|
"forceInput": True
|
||||||
|
}),
|
||||||
|
"output_fps": ("FLOAT", {
|
||||||
|
"default": 30.0,
|
||||||
|
"min": 1.0,
|
||||||
|
"max": 120.0,
|
||||||
|
"step": 0.1
|
||||||
|
}),
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
RETURN_TYPES = ("AUDIO", "STRING", "STRING", "FLOAT", "FLOAT", "INT", "FLOAT", "FLOAT")
|
RETURN_TYPES = ("IMAGE", "AUDIO", "STRING", "STRING", "FLOAT", "FLOAT", "FLOAT", "FLOAT", "INT")
|
||||||
RETURN_NAMES = ("sync_audio", "sync_audio_path", "sync_video_path", "video_fps", "video_duration", "sync_video_frame_count", "sync_audio_duration", "sync_video_duration")
|
RETURN_NAMES = ("sync_IMAGES", "sync_AUDIO", "sync_audio_path", "sync_video_path",
|
||||||
|
"input_video_duration", "sync_video_duration", "input_audio_duration", "sync_audio_duration",
|
||||||
|
"sync_video_frame_count")
|
||||||
FUNCTION = "sync_audio_video"
|
FUNCTION = "sync_audio_video"
|
||||||
CATEGORY = "Bjornulf"
|
CATEGORY = "Bjornulf"
|
||||||
|
|
||||||
def sync_audio_video(self, audio, video_path, audio_duration):
|
def generate_timestamp(self):
|
||||||
|
"""Generate a unique timestamp for file naming."""
|
||||||
|
return datetime.now().strftime("%Y%m%d_%H%M%S")
|
||||||
|
|
||||||
|
def validate_audio_input(self, audio):
|
||||||
|
"""Validate the audio input format."""
|
||||||
if not isinstance(audio, dict) or 'waveform' not in audio or 'sample_rate' not in audio:
|
if not isinstance(audio, dict) or 'waveform' not in audio or 'sample_rate' not in audio:
|
||||||
raise ValueError("Expected audio input to be a dictionary with 'waveform' and 'sample_rate' keys")
|
raise ValueError("Expected audio input to be a dictionary with 'waveform' and 'sample_rate' keys")
|
||||||
|
|
||||||
audio_data = audio['waveform']
|
def validate_speed_limits(self, max_speedup, max_slowdown):
|
||||||
sample_rate = audio['sample_rate']
|
"""Validate the speed limit parameters."""
|
||||||
|
if max_speedup < 1.0:
|
||||||
# Get original video properties
|
raise ValueError("max_speedup must be greater than or equal to 1.0")
|
||||||
original_duration = self.get_video_duration(video_path)
|
if max_slowdown > 1.0:
|
||||||
video_fps = self.get_video_fps(video_path)
|
raise ValueError("max_slowdown must be less than or equal to 1.0")
|
||||||
original_frame_count = self.get_frame_count(video_path)
|
|
||||||
|
|
||||||
print(f"Original video duration: {original_duration}")
|
|
||||||
print(f"Target audio duration: {audio_duration}")
|
|
||||||
print(f"Video FPS: {video_fps}")
|
|
||||||
print(f"Original frame count: {original_frame_count}")
|
|
||||||
|
|
||||||
# Create synchronized versions of video and audio
|
|
||||||
sync_video_path = self.create_sync_video(video_path, original_duration, audio_duration)
|
|
||||||
sync_audio_path = self.save_audio(audio_data, sample_rate, audio_duration, original_duration)
|
|
||||||
|
|
||||||
# Get properties of synchronized files
|
|
||||||
sync_video_duration = self.get_video_duration(sync_video_path)
|
|
||||||
sync_frame_count = self.get_frame_count(sync_video_path)
|
|
||||||
sync_audio_duration = torchaudio.info(sync_audio_path).num_frames / sample_rate
|
|
||||||
|
|
||||||
print(f"Sync video duration: {sync_video_duration}")
|
|
||||||
print(f"Sync video frame count: {sync_frame_count}")
|
|
||||||
print(f"Sync audio duration: {sync_audio_duration}")
|
|
||||||
|
|
||||||
return (
|
|
||||||
audio, # Return original audio dictionary
|
|
||||||
sync_audio_path,
|
|
||||||
sync_video_path,
|
|
||||||
video_fps,
|
|
||||||
original_duration,
|
|
||||||
sync_frame_count,
|
|
||||||
sync_audio_duration,
|
|
||||||
sync_video_duration
|
|
||||||
)
|
|
||||||
|
|
||||||
def get_video_duration(self, video_path):
|
|
||||||
cmd = ['ffprobe', '-v', 'error', '-show_entries', 'format=duration', '-of', 'default=noprint_wrappers=1:nokey=1', video_path]
|
|
||||||
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
|
||||||
return float(result.stdout)
|
|
||||||
|
|
||||||
def get_video_fps(self, video_path):
|
|
||||||
cmd = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-count_packets', '-show_entries', 'stream=r_frame_rate', '-of', 'csv=p=0', video_path]
|
|
||||||
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
|
||||||
fps = result.stdout.strip()
|
|
||||||
if '/' in fps:
|
|
||||||
num, den = map(float, fps.split('/'))
|
|
||||||
return num / den
|
|
||||||
return float(fps)
|
|
||||||
|
|
||||||
def get_frame_count(self, video_path):
|
|
||||||
cmd = ['ffprobe', '-v', 'error', '-count_packets', '-select_streams', 'v:0', '-show_entries', 'stream=nb_read_packets', '-of', 'csv=p=0', video_path]
|
|
||||||
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
|
||||||
return int(result.stdout.strip())
|
|
||||||
|
|
||||||
def create_sync_video(self, video_path, original_duration, target_duration):
|
|
||||||
os.makedirs("Bjornulf/sync_video", exist_ok=True)
|
|
||||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
|
||||||
final_output_path = f"Bjornulf/sync_video/sync_video_{timestamp}.mp4"
|
|
||||||
|
|
||||||
# Calculate the relative difference between durations
|
|
||||||
duration_difference = abs(target_duration - original_duration) / original_duration
|
|
||||||
|
|
||||||
# If target duration is longer but within 50% difference, use speed adjustment instead of repeating
|
|
||||||
if target_duration > original_duration and duration_difference <= 0.5:
|
|
||||||
# Calculate slowdown ratio
|
|
||||||
speed_ratio = original_duration / target_duration
|
|
||||||
pts_speed = 1/speed_ratio
|
|
||||||
|
|
||||||
speed_adjust_cmd = [
|
|
||||||
'ffmpeg',
|
|
||||||
'-i', video_path,
|
|
||||||
'-filter:v', f'setpts={pts_speed}*PTS',
|
|
||||||
'-an',
|
|
||||||
'-c:v', 'libx264',
|
|
||||||
'-preset', 'medium',
|
|
||||||
'-crf', '23',
|
|
||||||
final_output_path
|
|
||||||
]
|
|
||||||
subprocess.run(speed_adjust_cmd, check=True)
|
|
||||||
print(f"Speed-adjusted video (slowdown ratio: {speed_ratio}) saved to: {final_output_path}")
|
|
||||||
|
|
||||||
elif target_duration > original_duration:
|
|
||||||
# Use the original repeating logic for larger differences
|
|
||||||
repeat_count = math.ceil(target_duration / original_duration)
|
|
||||||
concat_file = f"Bjornulf/sync_video/concat_{timestamp}.txt"
|
|
||||||
with open(concat_file, 'w') as f:
|
|
||||||
for _ in range(repeat_count):
|
|
||||||
f.write(f"file '{os.path.abspath(video_path)}'\n")
|
|
||||||
|
|
||||||
concat_cmd = [
|
|
||||||
'ffmpeg',
|
|
||||||
'-f', 'concat',
|
|
||||||
'-safe', '0',
|
|
||||||
'-i', concat_file,
|
|
||||||
'-c', 'copy',
|
|
||||||
final_output_path
|
|
||||||
]
|
|
||||||
subprocess.run(concat_cmd, check=True)
|
|
||||||
os.remove(concat_file)
|
|
||||||
print(f"Duplicated video {repeat_count} times, saved to: {final_output_path}")
|
|
||||||
|
|
||||||
|
def get_audio_duration(self, audio):
|
||||||
|
"""Calculate audio duration from audio input."""
|
||||||
|
if isinstance(audio, dict) and 'waveform' in audio and 'sample_rate' in audio:
|
||||||
|
return audio['waveform'].shape[-1] / audio['sample_rate']
|
||||||
else:
|
else:
|
||||||
# Original speed-up logic remains the same
|
raise ValueError("Invalid audio input format")
|
||||||
speed_ratio = original_duration / target_duration
|
|
||||||
|
def ffprobe_run(self, cmd):
|
||||||
|
"""Run ffprobe command and return the output."""
|
||||||
|
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
|
||||||
|
return result.stdout.strip()
|
||||||
|
|
||||||
|
def get_video_info(self, video_path):
|
||||||
|
"""Get video duration, fps, and frame count."""
|
||||||
|
duration = float(self.ffprobe_run([
|
||||||
|
'ffprobe', '-v', 'error',
|
||||||
|
'-show_entries', 'format=duration',
|
||||||
|
'-of', 'default=noprint_wrappers=1:nokey=1',
|
||||||
|
video_path
|
||||||
|
]))
|
||||||
|
|
||||||
|
fps_str = self.ffprobe_run([
|
||||||
|
'ffprobe', '-v', 'error',
|
||||||
|
'-select_streams', 'v:0',
|
||||||
|
'-show_entries', 'stream=r_frame_rate',
|
||||||
|
'-of', 'csv=p=0',
|
||||||
|
video_path
|
||||||
|
])
|
||||||
|
fps = float(fps_str.split('/')[0]) / float(fps_str.split('/')[1]) if '/' in fps_str else float(fps_str)
|
||||||
|
|
||||||
|
frame_count = int(self.ffprobe_run([
|
||||||
|
'ffprobe', '-v', 'error',
|
||||||
|
'-count_packets',
|
||||||
|
'-select_streams', 'v:0',
|
||||||
|
'-show_entries', 'stream=nb_read_packets',
|
||||||
|
'-of', 'csv=p=0',
|
||||||
|
video_path
|
||||||
|
]))
|
||||||
|
|
||||||
|
return duration, fps, frame_count
|
||||||
|
|
||||||
|
def process_images_to_video(self, IMAGES, fps):
|
||||||
|
"""Convert image sequence to video."""
|
||||||
|
timestamp = self.generate_timestamp()
|
||||||
|
temp_dir = os.path.join(self.temp_dir, f"frames_{timestamp}")
|
||||||
|
os.makedirs(temp_dir, exist_ok=True)
|
||||||
|
|
||||||
|
# Save frames
|
||||||
|
frame_paths = []
|
||||||
|
for i, img in enumerate(IMAGES):
|
||||||
|
if isinstance(img, torch.Tensor):
|
||||||
|
if img.dim() == 4:
|
||||||
|
img = img.squeeze(0)
|
||||||
|
img = (img * 255).byte().cpu().numpy()
|
||||||
|
img = Image.fromarray(img)
|
||||||
|
|
||||||
if abs(speed_ratio - 1.0) <= 0.1: # If the difference is less than 10%
|
frame_path = os.path.join(temp_dir, f"frame_{i:05d}.png")
|
||||||
copy_cmd = [
|
img.save(frame_path)
|
||||||
'ffmpeg', '-i', video_path, '-c', 'copy', final_output_path
|
frame_paths.append(frame_path)
|
||||||
]
|
|
||||||
subprocess.run(copy_cmd, check=True)
|
# Create video
|
||||||
print(f"Video copied without speed adjustment to: {final_output_path}")
|
output_path = os.path.join(self.temp_dir, f"video_{timestamp}.mp4")
|
||||||
|
subprocess.run([
|
||||||
|
'ffmpeg', '-y',
|
||||||
|
'-framerate', str(fps),
|
||||||
|
'-i', os.path.join(temp_dir, 'frame_%05d.png'),
|
||||||
|
'-c:v', 'libx264',
|
||||||
|
'-pix_fmt', 'yuv420p',
|
||||||
|
'-preset', 'medium',
|
||||||
|
'-crf', '19',
|
||||||
|
output_path
|
||||||
|
], check=True)
|
||||||
|
|
||||||
|
# Cleanup
|
||||||
|
for path in frame_paths:
|
||||||
|
os.remove(path)
|
||||||
|
os.rmdir(temp_dir)
|
||||||
|
|
||||||
|
return output_path
|
||||||
|
|
||||||
|
def adjust_video_speed(self, video_path, speed_factor, output_path):
|
||||||
|
"""Adjust video speed using ffmpeg."""
|
||||||
|
pts_speed = 1 / speed_factor
|
||||||
|
subprocess.run([
|
||||||
|
'ffmpeg', '-y',
|
||||||
|
'-i', video_path,
|
||||||
|
'-filter:v', f'setpts={pts_speed}*PTS',
|
||||||
|
'-an',
|
||||||
|
'-c:v', 'libx264',
|
||||||
|
'-preset', 'medium',
|
||||||
|
'-crf', '19',
|
||||||
|
output_path
|
||||||
|
], check=True)
|
||||||
|
|
||||||
|
def create_sync_video(self, video_path, original_duration, target_duration, max_speedup, max_slowdown):
|
||||||
|
"""Create synchronized version of the video."""
|
||||||
|
timestamp = self.generate_timestamp()
|
||||||
|
output_path = os.path.join(self.sync_video_dir, f"sync_video_{timestamp}.mp4")
|
||||||
|
|
||||||
|
if target_duration > original_duration:
|
||||||
|
speed_ratio = original_duration / target_duration
|
||||||
|
if speed_ratio >= max_slowdown:
|
||||||
|
# Slow down video within limits
|
||||||
|
self.adjust_video_speed(video_path, speed_ratio, output_path)
|
||||||
else:
|
else:
|
||||||
speed = min(speed_ratio, 1.5)
|
# Repeat video if slowdown would exceed limit
|
||||||
pts_speed = 1/speed
|
repeat_count = math.ceil(target_duration / original_duration)
|
||||||
|
concat_file = os.path.join(self.sync_video_dir, f"concat_{timestamp}.txt")
|
||||||
|
|
||||||
speed_adjust_cmd = [
|
with open(concat_file, 'w') as f:
|
||||||
'ffmpeg',
|
for _ in range(repeat_count):
|
||||||
|
f.write(f"file '{os.path.abspath(video_path)}'\n")
|
||||||
|
|
||||||
|
subprocess.run([
|
||||||
|
'ffmpeg', '-y',
|
||||||
|
'-f', 'concat',
|
||||||
|
'-safe', '0',
|
||||||
|
'-i', concat_file,
|
||||||
|
'-c', 'copy',
|
||||||
|
output_path
|
||||||
|
], check=True)
|
||||||
|
os.remove(concat_file)
|
||||||
|
else:
|
||||||
|
speed_ratio = original_duration / target_duration
|
||||||
|
if abs(speed_ratio - 1.0) <= 0.1:
|
||||||
|
# Copy video if speed change is minimal
|
||||||
|
subprocess.run([
|
||||||
|
'ffmpeg', '-y',
|
||||||
'-i', video_path,
|
'-i', video_path,
|
||||||
'-filter:v', f'setpts={pts_speed}*PTS',
|
'-c', 'copy',
|
||||||
'-an',
|
output_path
|
||||||
'-c:v', 'libx264',
|
], check=True)
|
||||||
'-preset', 'medium',
|
else:
|
||||||
'-crf', '23',
|
# Speed up video within limits
|
||||||
final_output_path
|
speed = min(speed_ratio, max_speedup)
|
||||||
]
|
self.adjust_video_speed(video_path, speed, output_path)
|
||||||
subprocess.run(speed_adjust_cmd, check=True)
|
|
||||||
print(f"Speed-adjusted video (ratio: {speed}) saved to: {final_output_path}")
|
|
||||||
|
|
||||||
return os.path.abspath(final_output_path)
|
return os.path.abspath(output_path)
|
||||||
|
|
||||||
def save_audio(self, audio_tensor, sample_rate, target_duration, original_video_duration):
|
|
||||||
os.makedirs("Bjornulf/sync_audio", exist_ok=True)
|
|
||||||
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
|
||||||
filename = f"Bjornulf/sync_audio/sync_audio_{timestamp}.wav"
|
|
||||||
|
|
||||||
|
def process_audio(self, audio_tensor, sample_rate, target_duration, original_duration,
|
||||||
|
max_speedup, max_slowdown):
|
||||||
|
"""Process audio to match video duration."""
|
||||||
if audio_tensor.dim() == 3:
|
if audio_tensor.dim() == 3:
|
||||||
audio_tensor = audio_tensor.squeeze(0)
|
audio_tensor = audio_tensor.squeeze(0)
|
||||||
elif audio_tensor.dim() == 1:
|
elif audio_tensor.dim() == 1:
|
||||||
audio_tensor = audio_tensor.unsqueeze(0)
|
audio_tensor = audio_tensor.unsqueeze(0)
|
||||||
|
|
||||||
current_duration = audio_tensor.shape[1] / sample_rate
|
current_duration = audio_tensor.shape[1] / sample_rate
|
||||||
|
|
||||||
# Calculate the relative difference between durations
|
|
||||||
duration_difference = abs(target_duration - original_video_duration) / original_video_duration
|
|
||||||
|
|
||||||
# Calculate the final duration based on the same logic as create_sync_video
|
# Calculate synchronized video duration
|
||||||
if target_duration > original_video_duration:
|
if target_duration > original_duration:
|
||||||
if duration_difference <= 0.5:
|
speed_ratio = original_duration / target_duration
|
||||||
# For small differences, we'll keep the original audio duration
|
if speed_ratio >= max_slowdown:
|
||||||
sync_video_duration = target_duration
|
sync_duration = target_duration
|
||||||
else:
|
else:
|
||||||
# For larger differences, we'll repeat the video
|
sync_duration = math.ceil(target_duration / original_duration) * original_duration
|
||||||
sync_video_duration = math.ceil(target_duration / original_video_duration) * original_video_duration
|
|
||||||
else:
|
else:
|
||||||
# Handle speed-up cases
|
speed_ratio = original_duration / target_duration
|
||||||
speed_ratio = original_video_duration / target_duration
|
|
||||||
if abs(speed_ratio - 1.0) <= 0.1:
|
if abs(speed_ratio - 1.0) <= 0.1:
|
||||||
sync_video_duration = original_video_duration
|
sync_duration = original_duration
|
||||||
else:
|
else:
|
||||||
speed = min(speed_ratio, 1.5)
|
         speed = min(speed_ratio, max_speedup)
-        sync_video_duration = original_video_duration / speed
+        sync_duration = original_duration / speed
 
-        # Adjust audio to match sync video duration
-        if current_duration < sync_video_duration:
-            # Pad with silence
-            silence_samples = int((sync_video_duration - current_duration) * sample_rate)
+        # Adjust audio length
+        if current_duration < sync_duration:
+            silence_samples = int((sync_duration - current_duration) * sample_rate)
             silence = torch.zeros(audio_tensor.shape[0], silence_samples)
-            padded_audio = torch.cat([audio_tensor, silence], dim=1)
+            processed_audio = torch.cat([audio_tensor, silence], dim=1)
         else:
-            # Trim audio to match sync video duration
-            required_samples = int(sync_video_duration * sample_rate)
-            padded_audio = audio_tensor[:, :required_samples]
+            required_samples = int(sync_duration * sample_rate)
+            processed_audio = audio_tensor[:, :required_samples]
 
-        torchaudio.save(filename, padded_audio, sample_rate)
-        print(f"target_duration: {target_duration}")
-        print(f"original_video_duration: {original_video_duration}")
-        print(f"sync_video_duration: {sync_video_duration}")
-        print(f"current_audio_duration: {current_duration}")
-        print(f"final_audio_duration: {padded_audio.shape[1] / sample_rate}")
-        print(f"sync audio saved to: (unknown)")
-        return os.path.abspath(filename)
+        return processed_audio, sync_duration
+
+    def save_audio(self, audio_tensor, sample_rate, target_duration, original_duration,
+                   max_speedup, max_slowdown):
+        """Save processed audio to file."""
+        timestamp = self.generate_timestamp()
+        output_path = os.path.join(self.sync_audio_dir, f"sync_audio_{timestamp}.wav")
+
+        processed_audio, sync_duration = self.process_audio(
+            audio_tensor, sample_rate, target_duration, original_duration,
+            max_speedup, max_slowdown
+        )
+
+        torchaudio.save(output_path, processed_audio, sample_rate)
+        return os.path.abspath(output_path)
+
+    def load_audio_from_path(self, audio_path):
+        """Load audio from file path."""
+        waveform, sample_rate = torchaudio.load(audio_path)
+        return {'waveform': waveform, 'sample_rate': sample_rate}
+
+    def extract_frames(self, video_path):
+        """Extract all frames of the video as a tensor."""
+        temp_dir = os.path.join(self.temp_dir, "temp_frames")
+        os.makedirs(temp_dir, exist_ok=True)
+
+        # Extract frames using ffmpeg
+        subprocess.run([
+            'ffmpeg', '-i', video_path,
+            os.path.join(temp_dir, 'frame_%05d.png')
+        ], check=True)
+
+        # Load frames and convert to tensor
+        frames = []
+        frame_files = sorted(os.listdir(temp_dir))
+        transform = transforms.Compose([transforms.ToTensor()])
+
+        for frame_file in frame_files:
+            image = Image.open(os.path.join(temp_dir, frame_file))
+            frame_tensor = transform(image)
+            frames.append(frame_tensor)
+
+        # Stack frames into a single tensor
+        frames_tensor = torch.stack(frames)
+
+        # Clean up temporary directory
+        for frame_file in frame_files:
+            os.remove(os.path.join(temp_dir, frame_file))
+        os.rmdir(temp_dir)
+
+        return frames_tensor
+
+    def sync_audio_video(self, max_speedup=1.5, max_slowdown=0.5,
+                         AUDIO=None, audio_path="", audio_duration=None,
+                         video_path="", IMAGES=None, output_fps=30.0):
+        """Main function to synchronize audio and video."""
+        self.validate_speed_limits(max_speedup, max_slowdown)
+
+        # Handle audio input
+        if AUDIO is None and not audio_path:
+            raise ValueError("Either AUDIO or audio_path must be provided")
+
+        if audio_path:
+            AUDIO = self.load_audio_from_path(audio_path)
+
+        self.validate_audio_input(AUDIO)
+
+        # Calculate audio duration if not provided
+        if audio_duration is None or audio_duration == 0.0:
+            audio_duration = self.get_audio_duration(AUDIO)
+
+        logging.info(f"Audio duration: {audio_duration}")
+
+        # Process input source
+        if IMAGES is not None and len(IMAGES) > 0:
+            video_path = self.process_images_to_video(IMAGES, output_fps)
+            original_duration = len(IMAGES) / output_fps
+            video_fps = output_fps
+            original_frame_count = len(IMAGES)
+        elif video_path:
+            original_duration, video_fps, original_frame_count = self.get_video_info(video_path)
+        else:
+            raise ValueError("Either video_path or IMAGES must be provided")
+
+        # Create synchronized versions
+        sync_video_path = self.create_sync_video(
+            video_path, original_duration, audio_duration, max_speedup, max_slowdown
+        )
+        sync_audio_path = self.save_audio(
+            AUDIO['waveform'], AUDIO['sample_rate'], audio_duration,
+            original_duration, max_speedup, max_slowdown
+        )
+
+        # Get final properties
+        sync_video_duration, _, sync_frame_count = self.get_video_info(sync_video_path)
+        sync_audio_duration = torchaudio.info(sync_audio_path).num_frames / AUDIO['sample_rate']
+
+        video_frames = self.extract_frames(sync_video_path)
+
+        return (
+            video_frames,
+            AUDIO,
+            sync_audio_path,
+            sync_video_path,
+            original_duration,  # input_video_duration
+            sync_video_duration,
+            audio_duration,  # input_audio_duration
+            sync_audio_duration,
+            sync_frame_count
+        )
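The sync logic above clamps the video speed change and derives the synchronized duration from it; the audio is then padded with silence or trimmed to that duration. A minimal sketch of the clamping arithmetic (the symmetric `max_slowdown` bound is an assumption — only the `max_speedup` clamp is visible in the hunk above, and the helper name is illustrative):

```python
def clamped_sync_duration(original_duration, target_duration,
                          max_speedup=1.5, max_slowdown=0.5):
    # Speed change that would make the video exactly match the target (audio) length
    speed_ratio = original_duration / target_duration
    # Clamp to the allowed range; any leftover mismatch is closed on the audio side
    speed = max(max_slowdown, min(speed_ratio, max_speedup))
    return original_duration / speed
```

For example, squeezing a 10 s video onto 5 s of audio would need a 2.0x speedup; with `max_speedup=1.5` the video only reaches 10/1.5 ≈ 6.67 s, so the remaining gap is handled by trimming the audio.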
combine_video_audio.py (new file)
@@ -0,0 +1,161 @@
import os
import subprocess
import tempfile
from PIL import Image
import numpy as np
import torch
import torchaudio
import time
import shutil

class CombineVideoAudio:
    def __init__(self):
        self.base_dir = "Bjornulf"
        self.temp_dir = os.path.join(self.base_dir, "temp_frames")
        self.output_dir = os.path.join(self.base_dir, "combined_output")
        os.makedirs(self.temp_dir, exist_ok=True)
        os.makedirs(self.output_dir, exist_ok=True)

    @classmethod
    def INPUT_TYPES(s):
        return {
            "required": {},
            "optional": {
                "IMAGES": ("IMAGE", {"forceInput": True}),
                "AUDIO": ("AUDIO", {"forceInput": True}),
                "audio_path": ("STRING", {"default": "", "multiline": False, "forceInput": True}),
                "video_path": ("STRING", {"default": "", "multiline": False, "forceInput": True}),
                "fps": ("FLOAT", {"default": 30.0, "min": 1.0, "max": 120.0, "step": 0.1}),
            }
        }

    RETURN_TYPES = ("STRING", "FLOAT", "FLOAT", "INT")
    RETURN_NAMES = ("video_path", "video_duration", "fps", "number_of_frames")
    FUNCTION = "combine_audio_video"
    CATEGORY = "video"

    def get_video_frame_count(self, video_path):
        try:
            result = subprocess.run([
                "ffprobe", "-v", "error", "-count_packets",
                "-select_streams", "v:0", "-show_entries", "stream=nb_read_packets",
                "-of", "csv=p=0", video_path
            ], capture_output=True, text=True, check=True)

            frame_count = result.stdout.strip()
            if not frame_count:
                raise ValueError("ffprobe returned empty frame count")

            return int(frame_count)
        except subprocess.CalledProcessError as e:
            print(f"Error running ffprobe: {e}")
            print(f"ffprobe stderr: {e.stderr}")
            raise
        except ValueError as e:
            print(f"Error parsing ffprobe output: {e}")
            raise
        except Exception as e:
            print(f"Unexpected error getting frame count: {e}")
            raise

    def get_video_duration(self, video_path):
        try:
            result = subprocess.run([
                "ffprobe", "-v", "error", "-show_entries", "format=duration",
                "-of", "default=noprint_wrappers=1:nokey=1", video_path
            ], capture_output=True, text=True, check=True)

            duration = result.stdout.strip()
            if not duration:
                raise ValueError("ffprobe returned empty duration")

            return float(duration)
        except subprocess.CalledProcessError as e:
            print(f"Error running ffprobe: {e}")
            print(f"ffprobe stderr: {e.stderr}")
            raise
        except ValueError as e:
            print(f"Error parsing ffprobe output: {e}")
            raise
        except Exception as e:
            print(f"Unexpected error getting video duration: {e}")
            raise

    def combine_audio_video(self, IMAGES=None, AUDIO=None, audio_path="", video_path="", fps=30.0):
        temp_dir = tempfile.mkdtemp(dir=self.temp_dir)
        try:
            # Handle audio input
            if audio_path and os.path.exists(audio_path):
                final_audio_path = audio_path
            elif AUDIO is not None:
                final_audio_path = os.path.join(temp_dir, "temp_audio.wav")
                waveform = AUDIO['waveform']
                sample_rate = AUDIO['sample_rate']

                # Ensure waveform is 2D
                if waveform.dim() == 3:
                    waveform = waveform.squeeze(0)
                elif waveform.dim() == 1:
                    waveform = waveform.unsqueeze(0)

                # Ensure waveform is float and in the range [-1, 1]
                if waveform.dtype != torch.float32:
                    waveform = waveform.float()
                waveform = waveform.clamp(-1, 1)

                torchaudio.save(final_audio_path, waveform, sample_rate)
            else:
                raise ValueError("No valid audio input provided")

            # Handle video input
            if video_path and os.path.exists(video_path):
                final_video_path = video_path
            elif IMAGES is not None:
                frames_path = os.path.join(temp_dir, "frame_%04d.png")
                for i, frame in enumerate(IMAGES):
                    if isinstance(frame, torch.Tensor):
                        frame = frame.cpu().numpy()

                    if frame.ndim == 4:
                        frame = frame.squeeze(0)  # Remove batch dimension if present
                    if frame.shape[0] == 3:
                        frame = frame.transpose(1, 2, 0)  # CHW to HWC

                    if frame.dtype != np.uint8:
                        frame = (frame * 255).astype(np.uint8)

                    Image.fromarray(frame).save(frames_path % (i + 1))

                final_video_path = os.path.join(temp_dir, "temp_video.mp4")
                subprocess.run([
                    "ffmpeg", "-y", "-framerate", str(fps),
                    "-i", frames_path, "-c:v", "libx264", "-pix_fmt", "yuv420p",
                    final_video_path
                ], check=True)
            else:
                raise ValueError("No valid video input provided")

            # Get video duration
            duration = self.get_video_duration(final_video_path)

            # Generate a unique filename for the output
            output_filename = f"combined_output_{int(time.time())}.mp4"
            output_path = os.path.join(self.output_dir, output_filename)

            # Combine audio and video
            subprocess.run([
                "ffmpeg", "-y", "-i", final_video_path, "-i", final_audio_path,
                "-t", str(duration), "-c:v", "copy", "-c:a", "aac",
                output_path
            ], check=True)

            # Get the number of frames
            number_of_frames = self.get_video_frame_count(output_path)

            return (output_path, duration, fps, number_of_frames)

        finally:
            # Clean up temporary directory
            shutil.rmtree(temp_dir, ignore_errors=True)
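Before writing frames as PNGs, `combine_audio_video` normalizes each one: drop the batch dimension, move channels last, and scale floats to `uint8`. That sequence can be exercised in isolation (the helper name is illustrative, not part of the node):

```python
import numpy as np

def to_uint8_hwc(frame: np.ndarray) -> np.ndarray:
    """Normalize a frame the way combine_audio_video does before saving PNGs."""
    if frame.ndim == 4:
        frame = frame.squeeze(0)          # remove batch dimension if present
    if frame.shape[0] == 3:
        frame = frame.transpose(1, 2, 0)  # CHW -> HWC
    if frame.dtype != np.uint8:
        frame = (frame * 255).astype(np.uint8)  # assumes floats in [0, 1]
    return frame
```

A batched CHW float frame of shape (1, 3, H, W) comes out as an (H, W, 3) `uint8` array ready for `Image.fromarray`.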
concat_videos.py (new file)
@@ -0,0 +1,91 @@
import subprocess
from pathlib import Path
import os

class ConcatVideos:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "video_path_1": ("STRING", {"default": ""}),
                "video_path_2": ("STRING", {"default": ""}),
                "output_filename": ("STRING", {"default": "concatenated.mp4"})
            }
        }

    RETURN_TYPES = ("STRING",)
    RETURN_NAMES = ("concat_path",)
    FUNCTION = "concat_videos"
    OUTPUT_NODE = True
    CATEGORY = "Bjornulf"

    def __init__(self):
        # Get absolute paths for working directories
        self.work_dir = Path(os.path.abspath("temp_concat"))
        self.output_dir = Path(os.path.abspath("Bjornulf/concat_videos"))
        os.makedirs(self.work_dir, exist_ok=True)
        os.makedirs(self.output_dir, exist_ok=True)

    def concat_videos(self, video_path_1: str, video_path_2: str, output_filename: str):
        """
        Concatenate two videos using ffmpeg with high-quality settings.
        Returns the absolute path of the output file.
        """
        # Convert to absolute paths
        video_path_1 = os.path.abspath(video_path_1)
        video_path_2 = os.path.abspath(video_path_2)

        # Validate inputs
        if not (Path(video_path_1).exists() and Path(video_path_2).exists()):
            raise ValueError(f"Both video paths must exist.\nPath 1: {video_path_1}\nPath 2: {video_path_2}")

        # Create concat file with absolute paths
        concat_file = self.work_dir / "concat.txt"
        with open(concat_file, 'w') as f:
            f.write(f"file '{video_path_1}'\n")
            f.write(f"file '{video_path_2}'\n")

        # Set output path (absolute)
        output_path = self.output_dir / output_filename
        output_path = output_path.absolute()

        # Concatenate videos using ffmpeg with high quality settings
        cmd = [
            'ffmpeg', '-y',
            '-f', 'concat',
            '-safe', '0',
            '-i', str(concat_file),
            # Video settings for maximum quality
            '-c:v', 'libx264',
            '-preset', 'veryslow',  # Slowest preset for best compression
            '-crf', '17',  # Lower CRF for higher quality (range: 0-51, 0 is lossless)
            '-x264-params', 'ref=6:me=umh:subme=7:trellis=2:direct-pred=auto:b-adapt=2',
            # Audio settings
            '-c:a', 'aac',
            '-b:a', '320k',  # High audio bitrate
            # Additional quality settings
            '-movflags', '+faststart',  # Enables streaming
            '-pix_fmt', 'yuv420p',  # Ensures compatibility
            str(output_path)
        ]

        try:
            # Run FFmpeg command
            process = subprocess.run(
                cmd,
                check=True,
                capture_output=True,
                text=True
            )

            # Return absolute path as string
            return (str(output_path),)

        except subprocess.CalledProcessError as e:
            raise RuntimeError(f"FFmpeg error: {e.stderr}")
        except Exception as e:
            raise RuntimeError(f"Error during video concatenation: {str(e)}")

    @classmethod
    def IS_CHANGED(cls, **kwargs):
        return float("NaN")
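ffmpeg's concat demuxer consumes a plain text list with one `file '<path>'` line per input, which is exactly what `ConcatVideos` writes to `concat.txt`; `-safe 0` is required because the paths are absolute. A one-line sketch of the list format (the helper name is illustrative):

```python
def build_concat_list(paths):
    # One "file '<path>'" line per input, in playback order
    return "".join(f"file '{p}'\n" for p in paths)
```

The node always passes exactly two paths, but the format generalizes to any number of segments.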
@@ -5,6 +5,7 @@ import tempfile
 import torch
 import numpy as np
 from PIL import Image
+import wave
 
 class ImagesListToVideo:
     @classmethod
@@ -13,6 +14,10 @@ class ImagesListToVideo:
             "required": {
                 "images": ("IMAGE",),
                 "frames_per_second": ("FLOAT", {"default": 30, "min": 1, "max": 120, "step": 1}),
+            },
+            "optional": {
+                "audio_path": ("STRING", {"default": "", "multiline": False}),
+                "audio": ("AUDIO", {"default": None}),
             }
         }
 
@@ -21,7 +26,7 @@ class ImagesListToVideo:
     FUNCTION = "images_to_video"
     CATEGORY = "Bjornulf"
 
-    def images_to_video(self, images, frames_per_second=30):
+    def images_to_video(self, images, frames_per_second=30, audio_path="", audio=None):
         # Create the output directory if it doesn't exist
         output_dir = os.path.join("Bjornulf", "images_to_video")
         os.makedirs(output_dir, exist_ok=True)
@@ -30,42 +35,85 @@ class ImagesListToVideo:
         video_filename = f"video_{uuid.uuid4().hex}.mp4"
         video_path = os.path.join(output_dir, video_filename)
 
-        # Create a temporary directory to store image files
+        # Create a temporary directory to store image files and audio
         with tempfile.TemporaryDirectory() as temp_dir:
             # Save each image as a PNG file in the temporary directory
             for i, img in enumerate(images):
-                # Convert the image to the correct format
                 img_np = self.convert_to_numpy(img)
 
-                # Ensure the image is in RGB format
                 if img_np.shape[-1] != 3:
                     img_np = self.convert_to_rgb(img_np)
 
-                # Convert to PIL Image
                 img_pil = Image.fromarray(img_np)
                 img_path = os.path.join(temp_dir, f"frame_{i:05d}.png")
                 img_pil.save(img_path)
 
-            # Use FFmpeg to create a video from the image sequence
+            # Prepare FFmpeg command
             ffmpeg_cmd = [
                 "ffmpeg",
                 "-framerate", str(frames_per_second),
                 "-i", os.path.join(temp_dir, "frame_%05d.png"),
                 "-c:v", "libx264",
                 "-pix_fmt", "yuv420p",
-                "-crf", "23",
-                "-y",  # Overwrite output file if it exists
-                video_path
+                "-crf", "19"
             ]
 
-            try:
-                subprocess.run(ffmpeg_cmd, check=True, capture_output=True, text=True)
-            except subprocess.CalledProcessError as e:
-                print(f"FFmpeg error: {e.stderr}")
-                return ("",)  # Return empty string if video creation fails
+            # Handle audio
+            temp_audio_path = None
+            if audio is not None and isinstance(audio, dict):
+                waveform = audio['waveform'].numpy().squeeze()
+                sample_rate = audio['sample_rate']
+                temp_audio_path = os.path.join(temp_dir, "temp_audio.wav")
+                self.write_wav(temp_audio_path, waveform, sample_rate)
+            elif audio_path and os.path.isfile(audio_path):
+                temp_audio_path = audio_path
+
+            if temp_audio_path:
+                # Create temporary video without audio first
+                temp_video = os.path.join(temp_dir, "temp_video.mp4")
+                temp_cmd = ffmpeg_cmd + ["-y", temp_video]
+
+                try:
+                    # Create video without audio
+                    subprocess.run(temp_cmd, check=True, capture_output=True, text=True)
+
+                    # Add audio to the video
+                    audio_cmd = [
+                        "ffmpeg",
+                        "-i", temp_video,
+                        "-i", temp_audio_path,
+                        "-c:v", "copy",
+                        "-c:a", "aac",
+                        "-shortest",
+                        "-y",
+                        video_path
+                    ]
+                    subprocess.run(audio_cmd, check=True, capture_output=True, text=True)
+                except subprocess.CalledProcessError as e:
+                    print(f"FFmpeg error: {e.stderr}")
+                    return ("",)
+            else:
+                # No audio, just create the video directly
+                ffmpeg_cmd.append("-y")
+                ffmpeg_cmd.append(video_path)
+                try:
+                    subprocess.run(ffmpeg_cmd, check=True, capture_output=True, text=True)
+                except subprocess.CalledProcessError as e:
+                    print(f"FFmpeg error: {e.stderr}")
+                    return ("",)
 
         return (video_path,)
 
+    def write_wav(self, file_path, audio_data, sample_rate):
+        with wave.open(file_path, 'wb') as wav_file:
+            wav_file.setnchannels(1)  # Mono
+            wav_file.setsampwidth(2)  # 2 bytes per sample
+            wav_file.setframerate(sample_rate)
+
+            # Normalize and convert to 16-bit PCM
+            audio_data = np.int16(audio_data * 32767)
+
+            # Write audio data
+            wav_file.writeframes(audio_data.tobytes())
+
     def convert_to_numpy(self, img):
         if isinstance(img, torch.Tensor):
             img = img.cpu().numpy()
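`write_wav` above encodes mono 16-bit PCM with the stdlib `wave` module. The same encoding works against an in-memory buffer, which makes the sample math easy to check (the buffer-based variant is illustrative, not part of the node):

```python
import io
import wave

import numpy as np

def write_wav_bytes(audio_data: np.ndarray, sample_rate: int) -> bytes:
    """Mono 16-bit PCM encoding, mirroring ImagesListToVideo.write_wav."""
    buf = io.BytesIO()
    with wave.open(buf, 'wb') as wav_file:
        wav_file.setnchannels(1)   # mono
        wav_file.setsampwidth(2)   # 16-bit samples
        wav_file.setframerate(sample_rate)
        # Scale [-1, 1] floats to 16-bit PCM
        wav_file.writeframes(np.int16(audio_data * 32767).tobytes())
    return buf.getvalue()
```

One float sample becomes two bytes, so an N-sample array yields exactly N WAV frames.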
loop_lines_sequential.py (new file)
@@ -0,0 +1,111 @@
import os
from aiohttp import web
from server import PromptServer
import logging

class LoopLinesSequential:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "text": ("STRING", {"forceInput": True}),
                "jump": ("INT", {"default": 1, "min": 1, "max": 100, "step": 1}),
            },
        }

    RETURN_TYPES = ("STRING", "INT", "INT")  # Added INT for line number
    RETURN_NAMES = ("current_line", "remaining_cycles", "current_line_number")
    FUNCTION = "get_next_line"
    CATEGORY = "Bjornulf"

    @classmethod
    def IS_CHANGED(cls, **kwargs):
        return float("NaN")

    def get_next_line(self, text, jump):
        lines = [line.strip() for line in text.split('\n') if line.strip()]

        if not lines:
            raise ValueError("No valid lines found in input text")

        counter_file = os.path.join("Bjornulf", "counter_lines.txt")
        os.makedirs(os.path.dirname(counter_file), exist_ok=True)

        try:
            with open(counter_file, 'r') as f:
                current_index = int(f.read().strip())
        except (FileNotFoundError, ValueError):
            current_index = -jump

        next_index = current_index + jump

        if next_index >= len(lines):
            raise ValueError(f"Counter has reached the last line (total lines: {len(lines)}). Reset Counter to continue.")

        with open(counter_file, 'w') as f:
            f.write(str(next_index))

        remaining_cycles = max(0, (len(lines) - next_index - 1) // jump + 1)

        return (lines[next_index], remaining_cycles - 1, next_index + 1)  # Added line number (1-based)

# Server routes
@PromptServer.instance.routes.post("/reset_lines_counter")
async def reset_lines_counter(request):
    logging.info("Reset lines counter called")
    counter_file = os.path.join("Bjornulf", "counter_lines.txt")
    try:
        os.remove(counter_file)
        return web.json_response({"success": True}, status=200)
    except FileNotFoundError:
        return web.json_response({"success": True}, status=200)
    except Exception as e:
        return web.json_response({"success": False, "error": str(e)}, status=500)

@PromptServer.instance.routes.post("/increment_lines_counter")
async def increment_lines_counter(request):
    counter_file = os.path.join("Bjornulf", "counter_lines.txt")
    try:
        current_index = 0
        try:
            with open(counter_file, 'r') as f:
                current_index = int(f.read().strip())
        except (FileNotFoundError, ValueError):
            pass

        with open(counter_file, 'w') as f:
            f.write(str(current_index + 1))
        return web.json_response({"success": True}, status=200)
    except Exception as e:
        return web.json_response({"success": False, "error": str(e)}, status=500)

@PromptServer.instance.routes.post("/decrement_lines_counter")
async def decrement_lines_counter(request):
    counter_file = os.path.join("Bjornulf", "counter_lines.txt")
    try:
        current_index = 0
        try:
            with open(counter_file, 'r') as f:
                current_index = int(f.read().strip())
        except (FileNotFoundError, ValueError):
            pass

        # Prevent negative values
        new_index = max(-1, current_index - 1)
        with open(counter_file, 'w') as f:
            f.write(str(new_index))
        return web.json_response({"success": True}, status=200)
    except Exception as e:
        return web.json_response({"success": False, "error": str(e)}, status=500)

@PromptServer.instance.routes.get("/get_current_line")
async def get_current_line(request):
    counter_file = os.path.join("Bjornulf", "counter_lines.txt")
    try:
        with open(counter_file, 'r') as f:
            current_index = int(f.read().strip())
        return web.json_response({"success": True, "value": current_index + 1}, status=200)
    except (FileNotFoundError, ValueError):
        return web.json_response({"success": True, "value": 0}, status=200)
    except Exception as e:
        return web.json_response({"success": False, "error": str(e)}, status=500)
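The counter file stores a 0-based index; each run advances it by `jump`, raises once the end is reached, and reports a 1-based line number. The pure arithmetic, separated from the file I/O (helper name is illustrative):

```python
def step_counter(num_lines, current_index, jump):
    """One step of LoopLinesSequential's counter (indices are 0-based)."""
    next_index = current_index + jump
    if next_index >= num_lines:
        raise ValueError("Counter has reached the last line")
    remaining_cycles = max(0, (num_lines - next_index - 1) // jump + 1)
    # (selected index, runs left after this one, 1-based line number)
    return next_index, remaining_cycles - 1, next_index + 1
```

A fresh counter starts at `-jump`, so the first run lands on index 0 (line 1).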
@@ -1,7 +1,7 @@
 [project]
 name = "bjornulf_custom_nodes"
 description = "Nodes: Ollama, Text to Speech, Combine Texts, Random Texts, Save image for Bjornulf LobeChat, Text with random Seed, Random line from input, Combine images, Image to grayscale (black & white), Remove image Transparency (alpha), Resize Image, ..."
-version = "0.49"
+version = "0.50"
 license = {file = "LICENSE"}
 
 [project.urls]
@@ -31,4 +31,4 @@ class RandomLineFromInput:
         chosen_line = random.choice(lines)
 
-        # Return as a list with one element
-        return ([chosen_line],)
+        # Return the chosen line as a single string
+        return (chosen_line,)
save_text.py
@@ -6,32 +6,32 @@ class SaveText:
         return {
             "required": {
                 "text": ("STRING", {"multiline": True, "forceInput": True}),
-                "filename": ("STRING", {"default": "001.txt"})
+                "filepath": ("STRING", {"default": "output/this_test.txt"}),
             }
         }
 
-    # INPUT_IS_LIST = True
     RETURN_TYPES = ("STRING",)
     RETURN_NAMES = ("text",)
     FUNCTION = "save_text"
     OUTPUT_NODE = True
     CATEGORY = "Bjornulf"
-    # OUTPUT_IS_LIST = (True,)
 
-    def save_text(self, text, filename):
-        directory = "custom_nodes/Bjornulf_custom_nodes/SaveText/"
-        if not os.path.exists(directory):
-            os.makedirs(directory)
-
-        base, ext = os.path.splitext(filename)
-        counter = 1
-        new_filename = os.path.join(directory, filename)
-
-        while os.path.exists(new_filename):
-            new_filename = os.path.join(directory, f"{base}_{counter:03d}{ext}")
-            counter += 1
-
-        with open(new_filename, 'w') as file:
-            file.write(text)
-
-        return {"ui": {"text": text}, "result": (text,)}
+    def save_text(self, text, filepath):
+        # Validate file extension
+        if not filepath.lower().endswith('.txt'):
+            raise ValueError("Output file must be a .txt file")
+
+        try:
+            # Create directory if it doesn't exist
+            directory = os.path.dirname(filepath)
+            if directory and not os.path.exists(directory):
+                os.makedirs(directory)
+
+            # Append text to file with a newline
+            with open(filepath, 'a', encoding='utf-8') as file:
+                file.write(text + '\n')
+
+            return {"ui": {"text": text}, "result": (text,)}
+
+        except (OSError, IOError) as e:
+            raise ValueError(f"Error saving file: {str(e)}")
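Note the behavior change: the old node wrote numbered files (`001.txt`, `001_001.txt`, …), while the new one appends to a single user-chosen file, so repeated runs accumulate lines. The new logic in isolation:

```python
import os
import tempfile

def save_text(text, filepath):
    """Append-mode save, following the rewritten SaveText.save_text."""
    if not filepath.lower().endswith('.txt'):
        raise ValueError("Output file must be a .txt file")
    directory = os.path.dirname(filepath)
    if directory and not os.path.exists(directory):
        os.makedirs(directory)
    # Append mode: each call adds one line
    with open(filepath, 'a', encoding='utf-8') as f:
        f.write(text + '\n')

# Two calls against the same path leave two lines in the file
path = os.path.join(tempfile.mkdtemp(), "out.txt")
save_text("first", path)
save_text("second", path)
```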
Binary files:
  screenshots/audio_sync_video_new.png   (new file, 79 KiB)
  (modified screenshot)                  (317 KiB → 318 KiB)
  (modified screenshot)                  (299 KiB → 300 KiB)
  screenshots/combine_video_audio.png    (new file, 523 KiB)
  screenshots/concat_video.png           (new file, 194 KiB)
  screenshots/loop_sequential_lines.png  (new file, 111 KiB)
@@ -55,7 +55,7 @@ class TextToSpeech:
         }
     }
 
     RETURN_TYPES = ("AUDIO", "STRING", "STRING", "FLOAT")
-    RETURN_NAMES = ("AUDIO", "audio_path", "full_path", "duration")
+    RETURN_NAMES = ("AUDIO", "audio_path", "audio_full_path", "audio_duration")
     FUNCTION = "generate_audio"
     CATEGORY = "Bjornulf"
web/js/loop_lines_sequential.js (new file)
@@ -0,0 +1,111 @@
import { app } from "../../../scripts/app.js";

app.registerExtension({
    name: "Bjornulf.LoopLinesSequential",
    async nodeCreated(node) {
        if (node.comfyClass !== "Bjornulf_LoopLinesSequential") return;

        // Hide seed widget
        const seedWidget = node.widgets.find(w => w.name === "seed");
        if (seedWidget) {
            seedWidget.visible = false;
        }

        // Add line number display
        const lineNumberWidget = node.addWidget("html", "Current Line: --", null, {
            callback: () => {},
        });

        // Function to update line number display
        const updateLineNumber = () => {
            fetch('/get_current_line')
                .then(response => response.json())
                .then(data => {
                    if (data.success) {
                        lineNumberWidget.value = `Current Line: ${data.value}`;
                    }
                })
                .catch(error => {
                    console.error('Error getting line number:', error);
                });
        };

        // Add increment button
        const incrementButton = node.addWidget("button", "+1", null, () => {
            fetch('/increment_lines_counter', {
                method: 'POST'
            })
            .then(response => response.json())
            .then(data => {
                if (data.success) {
                    updateLineNumber();
                    app.ui.toast("Counter incremented", {'duration': 3000});
                } else {
                    app.ui.toast(`Failed to increment counter: ${data.error || "Unknown error"}`, {'type': 'error', 'duration': 5000});
                }
            })
            .catch((error) => {
                console.error('Error:', error);
                app.ui.toast("An error occurred while incrementing the counter.", {'type': 'error', 'duration': 5000});
            });
        });

        // Add decrement button
        const decrementButton = node.addWidget("button", "-1", null, () => {
            fetch('/decrement_lines_counter', {
                method: 'POST'
            })
            .then(response => response.json())
            .then(data => {
                if (data.success) {
                    updateLineNumber();
                    app.ui.toast("Counter decremented", {'duration': 3000});
                } else {
                    app.ui.toast(`Failed to decrement counter: ${data.error || "Unknown error"}`, {'type': 'error', 'duration': 5000});
                }
            })
            .catch((error) => {
                console.error('Error:', error);
                app.ui.toast("An error occurred while decrementing the counter.", {'type': 'error', 'duration': 5000});
            });
        });

        // Add reset button
        const resetButton = node.addWidget("button", "Reset Counter", null, () => {
            fetch('/reset_lines_counter', {
                method: 'POST'
            })
            .then(response => response.json())
            .then(data => {
                if (data.success) {
                    updateLineNumber();
                    app.ui.toast("Counter reset successfully!", {'duration': 5000});
                } else {
                    app.ui.toast(`Failed to reset counter: ${data.error || "Unknown error"}`, {'type': 'error', 'duration': 5000});
                }
            })
            .catch((error) => {
                console.error('Error:', error);
                app.ui.toast("An error occurred while resetting the counter.", {'type': 'error', 'duration': 5000});
            });
        });

        // Update line number periodically
        setInterval(updateLineNumber, 1000);

        // Override the original execute function
        const originalExecute = node.execute;
        node.execute = function() {
            const result = originalExecute.apply(this, arguments);
            if (result instanceof Promise) {
                return result.catch(error => {
                    if (error.message.includes("Counter has reached its limit")) {
                        app.ui.toast(`Execution blocked: ${error.message}`, {'type': 'error', 'duration': 5000});
                    }
                    throw error;
                });
            }
            return result;
        };
    }
});