Mirror of https://github.com/jags111/efficiency-nodes-comfyui.git
Synced 2026-05-07 01:06:42 -03:00

Compare commits: copilot/fi... → copilot/ad... (8 commits)
Commits:

- d588d890bb
- 48b182ba88
- 1cbbe4ddca
- f9ab4b04a9
- 8c967f83a4
- 82bdf04271
- 7a150ac766
- 268474fbe8
.gitignore (vendored) — 5 deletions

```diff
@@ -20,11 +20,6 @@ wheels/
 .installed.cfg
 *.egg
-
-# Virtual environments
-venv/
-ENV/
-env/
 
 # IDE
 .vscode/
 .idea/
```
EXAMPLE_WORKFLOW.md — new file, 129 lines
# Example Workflow: Using Trigger Words with LoRA Batch Testing

This document provides a step-by-step guide to building a workflow that tests multiple LoRAs with their respective trigger words.

## Scenario

You want to test 3 different style LoRAs on the same prompt to see which produces the best results. Each LoRA requires a specific trigger word.

## LoRAs to Test

1. `anime_style_v1.safetensors` → Trigger word: "anime style, masterpiece"
2. `photorealistic_v2.safetensors` → Trigger word: "photorealistic, 8k uhd"
3. `oil_painting.safetensors` → Trigger word: "oil painting, classical art"

## Step-by-Step Setup

### 1. Add Efficient Loader Node

**Settings:**
- `ckpt_name`: Your base model (e.g., "sd_v15.safetensors")
- `positive`: "a beautiful mountain landscape at sunset"
- `negative`: "low quality, blurry"
- Leave `lora_name` as "None" (we'll use the XY Plot instead)

### 2. Add XY Input: LoRA Plot Node

**Settings:**
- `input_mode`: "X: LoRA Batch, Y: LoRA Weight"
- `X_batch_path`: Path to your LoRA folder (e.g., "D:\LoRAs" or "/home/user/loras")
- `X_subdirectories`: false
- `X_batch_sort`: "ascending"
- `X_batch_count`: 3
- `model_strength`: 1.0
- `clip_strength`: 1.0

**NEW - Trigger Words Field:**
```
X_trigger_words:
anime style, masterpiece
photorealistic, 8k uhd
oil painting, classical art
```

**Important:** Make sure the trigger words are in the same order as your sorted LoRAs!

### 3. Configure the Y-Axis (LoRA Weights)

For the Y-axis, we'll vary the LoRA weights:

**Settings:**
- The LoRA Plot node configured in step 2 already provides the Y values for the weight variations
- Connect its `Y` output to the XY Plot node (step 4)

Or use a separate simple value node if you prefer fixed weight steps.

### 4. Add XY Plot Node

**Settings:**
- Connect the `X` output from the LoRA Plot node to the XY Plot's `X` input
- Connect the `Y` output to the XY Plot's `Y` input
- `grid_spacing`: 5
- `XY_flip`: True (if you want LoRAs on the X-axis)

### 5. Add KSampler (Efficient) Node

**Settings:**
- Connect the `script` input to the XY Plot node's output
- Connect `model`, `positive`, `negative`, and `latent` from the Efficient Loader
- Set your sampling parameters (steps, CFG, sampler, etc.)

### 6. Add Save Image Node

Connect the output from the KSampler to save your results.

## Expected Results

When you run this workflow, you'll get an XY plot grid with:

**X-axis (LoRAs):**
- Column 1: Images generated with the anime_style_v1 LoRA
  - Prompt used: "a beautiful mountain landscape at sunset anime style, masterpiece"
- Column 2: Images generated with the photorealistic_v2 LoRA
  - Prompt used: "a beautiful mountain landscape at sunset photorealistic, 8k uhd"
- Column 3: Images generated with the oil_painting LoRA
  - Prompt used: "a beautiful mountain landscape at sunset oil painting, classical art"

**Y-axis (Weights):**
- Varying LoRA strengths as configured

## Tips

1. **Verify LoRA Order:** Run the workflow with `X_batch_count: 1` first to verify which LoRA is loaded first, then adjust your trigger words accordingly.

2. **Empty Trigger Words:** If a LoRA doesn't need a trigger word, just leave that line blank:
   ```
   anime style, masterpiece

   oil painting
   ```
   (The second LoRA has no trigger word.)

3. **Test Individually First:** Before running a large batch, test each LoRA individually with its trigger word to confirm you have the correct trigger words.

4. **Combine with Other XY Inputs:** You can also combine LoRA batching with checkpoint variations, sampler variations, etc.

## Troubleshooting

**Problem:** Trigger words aren't being applied
- **Solution:** Check that you've entered trigger words in the `X_trigger_words` field (a multiline text area)

**Problem:** Wrong trigger word applied to wrong LoRA
- **Solution:** Verify your LoRAs are sorted in the expected order, and list the trigger words in that same order.

**Problem:** Too many or too few trigger words
- **Solution:** The number of trigger words should match `X_batch_count`. Extra trigger words are ignored; missing ones default to empty.
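The count-matching rule above can be sketched in a few lines of Python. This is an illustration only, not the node's actual implementation, and the helper name `pad_trigger_words` is hypothetical:

```python
def pad_trigger_words(trigger_words_text: str, lora_count: int) -> list[str]:
    # One trigger word (or comma-separated phrase) per line; a blank line means
    # "no trigger word" for the LoRA at that position.
    lines = [line.strip() for line in trigger_words_text.split("\n")] if trigger_words_text else []
    # Extra entries beyond the LoRA count are ignored...
    lines = lines[:lora_count]
    # ...and missing entries default to the empty string.
    lines += [""] * (lora_count - len(lines))
    return lines

print(pad_trigger_words("anime style, masterpiece\n\noil painting", 4))
# → ['anime style, masterpiece', '', 'oil painting', '']
```

Running the workflow with a mismatched count is therefore safe; it only changes which LoRAs receive a trigger word.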
## Advanced: Combining with Prompt S/R

You can use Prompt Search & Replace in combination with trigger words for even more control:

1. Set up your LoRA Plot with trigger words as above
2. Add an **XY Input: Prompt S/R** node for the Y-axis instead
3. This allows you to vary both the LoRA (with its trigger word) and parts of the prompt simultaneously

Example:
- X-axis: Different LoRAs (each with its trigger word)
- Y-axis: Replace "sunset" with ["sunrise", "midday", "midnight"]

Result: Each LoRA is tested across different times of day, with the appropriate trigger words applied.
TRIGGER_WORDS_GUIDE.md — new file, 132 lines
# Trigger Words for LoRAs in XY Plot

## Overview

This feature allows you to automatically add trigger words to your positive prompts when specific LoRAs are applied during XY Plot batch runs. This is particularly useful when testing multiple LoRAs that require specific trigger words to work effectively.

## How It Works

When a LoRA with a trigger word is applied during an XY Plot iteration, the trigger word is automatically appended to the positive prompt before the image is generated. This ensures that each LoRA gets its required trigger word without having to add all trigger words to the base prompt.

## Supported Nodes

The following nodes now support trigger words:

1. **XY Input: LoRA Plot** - For batch LoRA testing with varying weights
2. **XY Input: LoRA** - For individual LoRA selection
3. **LoRA Stacker** - For creating LoRA stacks with trigger words

## Usage

### XY Input: LoRA Plot (Batch Mode)

When using the LoRA Plot node in batch mode (e.g., "X: LoRA Batch, Y: LoRA Weight"):

1. **X_trigger_words** (multiline text field): Enter one trigger word per line, matching the order of your LoRAs in the batch directory.

   Example:
   ```
   anime style
   masterpiece, highly detailed
   photorealistic
   ```

2. The LoRAs will be sorted according to your `X_batch_sort` setting (ascending/descending), and trigger words will be matched to them in order.

3. If you have more LoRAs than trigger words, the extra LoRAs will have no trigger word (empty string).

### XY Input: LoRA Plot (Single LoRA Mode)

When testing a single LoRA with varying weights:

1. **trigger_word** (single-line text field): Enter the trigger word for the selected LoRA.

   Example: `anime style, masterpiece`

2. This trigger word will be added to all iterations for that LoRA.

### XY Input: LoRA (Individual Selection)

When selecting individual LoRAs:

1. **trigger_word_1, trigger_word_2, etc.**: Each LoRA slot has its own trigger word field.
2. Enter the appropriate trigger word for each LoRA you select.
3. In batch mode, use the **trigger_words** (multiline) field instead.

### LoRA Stacker

When creating LoRA stacks:

1. **trigger_word_1, trigger_word_2, etc.**: Each LoRA in the stack has its own trigger word field.
2. These trigger words will be preserved when the stack is passed to other nodes.

## Example Workflow

Here's a typical workflow using trigger words:

1. Create an **XY Input: LoRA Plot** node
2. Set `input_mode` to "X: LoRA Batch, Y: LoRA Weight"
3. Set `X_batch_path` to your LoRA directory
   - Windows: `d:\LoRas` or `C:\ComfyUI\models\loras`
   - Linux/Mac: `/path/to/loras` or `~/ComfyUI/models/loras`
4. Set `X_batch_count` to the number of LoRAs you want to test
5. In the `X_trigger_words` field, enter trigger words (one per line):
   ```
   anime style, masterpiece
   photorealistic, 8k
   oil painting, classical art
   ```
6. Connect to your **XY Plot** node
7. Set up your base prompt in the **Efficient Loader** (e.g., "a beautiful landscape")
8. Run the workflow

**Result**: Each LoRA will be tested with its trigger word automatically added to the prompt:
- LoRA 1: "a beautiful landscape anime style, masterpiece"
- LoRA 2: "a beautiful landscape photorealistic, 8k"
- LoRA 3: "a beautiful landscape oil painting, classical art"

## Technical Details

### Data Structure

LoRA parameters are now stored as 4-tuples instead of 3-tuples:
- **Old format (backward compatible)**: `(lora_name, model_strength, clip_strength)`
- **New format**: `(lora_name, model_strength, clip_strength, trigger_word)`

The system automatically handles both formats for backward compatibility.
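The dual-format handling can be sketched as a small unpacking helper. This is a simplified illustration of the pattern (the nodes index the tuple directly rather than calling a helper, and `unpack_lora_entry` is a hypothetical name):

```python
def unpack_lora_entry(entry):
    # Works for both the old 3-tuple and the new 4-tuple format:
    # the first three fields are always present...
    lora_name, model_strength, clip_strength = entry[0], entry[1], entry[2]
    # ...and the trigger word defaults to "" when the tuple is the old format.
    trigger_word = entry[3] if len(entry) > 3 else ""
    return lora_name, model_strength, clip_strength, trigger_word

old = ("anime_style_v1.safetensors", 1.0, 1.0)
new = ("photo_lora.safetensors", 0.8, 0.9, "photorealistic, 8k")
print(unpack_lora_entry(old))  # trigger_word comes back as ""
print(unpack_lora_entry(new))
```

Because the extra field is only ever read positionally with a length check, stacks produced by older workflows pass through unchanged.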
### Prompt Injection

Trigger words are appended to the positive prompt during XY Plot iteration, just before the model loads the LoRA and encodes the prompt. The original prompt is preserved in a tuple structure to support multiple iterations and combinations.
## Tips

1. **Empty trigger words are OK**: If a LoRA doesn't need a trigger word, just leave it blank.

2. **Multiple trigger words**: You can include multiple trigger words in a single field by separating them with commas: `anime style, masterpiece, highly detailed`

3. **Order matters**: In batch mode, make sure your trigger words are in the same order as your sorted LoRAs.

4. **Test first**: If you're unsure which trigger words a LoRA needs, check its documentation or test it individually first.

5. **Combining with Prompt S/R**: You can still use Prompt Search & Replace in combination with trigger words for even more control.

## Troubleshooting

**Q: My trigger words aren't being applied**
- Make sure you're using the updated nodes (check that the trigger_word fields exist)
- Verify that you have trigger words entered in the correct fields
- Check that your LoRA count matches the number of trigger words (or use fewer trigger words)

**Q: Can I use trigger words with LoRA Stacks?**
- Yes! Use the LoRA Stacker node to create stacks with trigger words, then pass them to the XY Plot nodes.

**Q: Do trigger words work with the Efficient Loader's lora_name field?**
- The Efficient Loader's single-LoRA field doesn't have a trigger word option. Use LoRA Stacker to create a stack with a trigger word, then connect it to the Efficient Loader's lora_stack input.

## Backward Compatibility

This feature is fully backward compatible. Existing workflows that don't use trigger words will continue to work exactly as before. The system automatically handles both 3-tuple (old) and 4-tuple (new) LoRA parameter formats.
TRIGGER_WORDS_SUMMARY.md — new file, 63 lines
# Summary: Trigger Words for LoRAs in XY Plot

## What's New?

You can now automatically add trigger words to your prompts when testing LoRAs in XY Plot workflows! This feature solves the problem where some LoRAs require specific trigger words to work effectively.

## Quick Start

### For Batch LoRA Testing

1. Use the **XY Input: LoRA Plot** node
2. In the `X_trigger_words` field, add one trigger word per line:
   ```
   anime style
   photorealistic
   oil painting
   ```
3. Your LoRAs will automatically get their trigger words during the batch run!

### For Individual LoRAs

1. Each LoRA slot in the **XY Input: LoRA** node now has a `trigger_word` field
2. Simply type the trigger word for each LoRA you select

### For LoRA Stacks

1. The **LoRA Stacker** node now has `trigger_word` fields for each slot
2. Create your stack with trigger words, and they'll be applied automatically

## Why Is This Useful?

**Before:** You had to either:
- Add all trigger words to your base prompt (causing unwanted interactions)
- Manually manage separate workflows for each LoRA
- Test without trigger words (suboptimal results)

**Now:** Trigger words are automatically added only when their specific LoRA is applied!

## Example

**Base Prompt:** "a beautiful landscape"

**LoRAs with Trigger Words:**
- style_lora_1.safetensors → "anime style, masterpiece"
- photo_lora.safetensors → "photorealistic, 8k"

**Automatic Results:**
- With style_lora_1: "a beautiful landscape anime style, masterpiece"
- With photo_lora: "a beautiful landscape photorealistic, 8k"

## Compatibility

✅ **Fully backward compatible** - existing workflows work without changes

✅ **Optional feature** - leave trigger words blank if you don't need them

✅ **Works with all XY Plot combinations** - LoRA weights, model strength, clip strength

## Where to Learn More

See [TRIGGER_WORDS_GUIDE.md](TRIGGER_WORDS_GUIDE.md) for detailed usage instructions, technical details, and troubleshooting tips.

## Feedback

If you encounter any issues or have suggestions for improvement, please open an issue on GitHub!
@@ -315,6 +315,7 @@ class TSC_LoRA_Stacker:
|
|||||||
inputs["required"][f"lora_wt_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
inputs["required"][f"lora_wt_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
||||||
inputs["required"][f"model_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
inputs["required"][f"model_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
||||||
inputs["required"][f"clip_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
inputs["required"][f"clip_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
||||||
|
inputs["required"][f"trigger_word_{i}"] = ("STRING", {"default": "", "multiline": False})
|
||||||
|
|
||||||
inputs["optional"] = {
|
inputs["optional"] = {
|
||||||
"lora_stack": ("LORA_STACK",)
|
"lora_stack": ("LORA_STACK",)
|
||||||
@@ -330,17 +331,18 @@ class TSC_LoRA_Stacker:
|
|||||||
|
|
||||||
# Extract values from kwargs
|
# Extract values from kwargs
|
||||||
loras = [kwargs.get(f"lora_name_{i}") for i in range(1, lora_count + 1)]
|
loras = [kwargs.get(f"lora_name_{i}") for i in range(1, lora_count + 1)]
|
||||||
|
trigger_words = [kwargs.get(f"trigger_word_{i}", "") for i in range(1, lora_count + 1)]
|
||||||
|
|
||||||
# Create a list of tuples using provided parameters, exclude tuples with lora_name as "None"
|
# Create a list of tuples using provided parameters, exclude tuples with lora_name as "None"
|
||||||
if input_mode == "simple":
|
if input_mode == "simple":
|
||||||
weights = [kwargs.get(f"lora_wt_{i}") for i in range(1, lora_count + 1)]
|
weights = [kwargs.get(f"lora_wt_{i}") for i in range(1, lora_count + 1)]
|
||||||
loras = [(lora_name, lora_weight, lora_weight) for lora_name, lora_weight in zip(loras, weights) if
|
loras = [(lora_name, lora_weight, lora_weight, trigger_word) for lora_name, lora_weight, trigger_word in zip(loras, weights, trigger_words) if
|
||||||
lora_name != "None"]
|
lora_name != "None"]
|
||||||
else:
|
else:
|
||||||
model_strs = [kwargs.get(f"model_str_{i}") for i in range(1, lora_count + 1)]
|
model_strs = [kwargs.get(f"model_str_{i}") for i in range(1, lora_count + 1)]
|
||||||
clip_strs = [kwargs.get(f"clip_str_{i}") for i in range(1, lora_count + 1)]
|
clip_strs = [kwargs.get(f"clip_str_{i}") for i in range(1, lora_count + 1)]
|
||||||
loras = [(lora_name, model_str, clip_str) for lora_name, model_str, clip_str in
|
loras = [(lora_name, model_str, clip_str, trigger_word) for lora_name, model_str, clip_str, trigger_word in
|
||||||
zip(loras, model_strs, clip_strs) if lora_name != "None"]
|
zip(loras, model_strs, clip_strs, trigger_words) if lora_name != "None"]
|
||||||
|
|
||||||
# If lora_stack is not None, extend the loras list with lora_stack
|
# If lora_stack is not None, extend the loras list with lora_stack
|
||||||
if lora_stack is not None:
|
if lora_stack is not None:
|
||||||
@@ -1261,7 +1263,24 @@ class TSC_KSampler:
|
|||||||
lora_stack[0] = tuple(v if v is not None else lora_stack[0][i] for i, v in enumerate(var[0]))
|
lora_stack[0] = tuple(v if v is not None else lora_stack[0][i] for i, v in enumerate(var[0]))
|
||||||
|
|
||||||
max_label_len = 50 + (12 * (len(lora_stack) - 1))
|
max_label_len = 50 + (12 * (len(lora_stack) - 1))
|
||||||
lora_name, lora_model_wt, lora_clip_wt = lora_stack[0]
|
# Support both 3-tuple (old) and 4-tuple (new with trigger words)
|
||||||
|
lora_tuple = lora_stack[0]
|
||||||
|
lora_name = lora_tuple[0]
|
||||||
|
lora_model_wt = lora_tuple[1]
|
||||||
|
lora_clip_wt = lora_tuple[2]
|
||||||
|
lora_trigger_word = lora_tuple[3] if len(lora_tuple) > 3 else ""
|
||||||
|
|
||||||
|
# Inject trigger word into positive prompt if present
|
||||||
|
# positive_prompt structure: (current_prompt, original_prompt, prompt_after_X_loop)
|
||||||
|
if lora_trigger_word:
|
||||||
|
if positive_prompt[2] is not None:
|
||||||
|
# In Y loop after X loop - build on the X loop result
|
||||||
|
positive_prompt = (positive_prompt[2] + " " + lora_trigger_word, positive_prompt[1], positive_prompt[2])
|
||||||
|
else:
|
||||||
|
# In X loop or initial - build on original and save for Y loop
|
||||||
|
modified_prompt = positive_prompt[1] + " " + lora_trigger_word
|
||||||
|
positive_prompt = (modified_prompt, positive_prompt[1], modified_prompt)
|
||||||
|
|
||||||
lora_filename = os.path.splitext(os.path.basename(lora_name))[0]
|
lora_filename = os.path.splitext(os.path.basename(lora_name))[0]
|
||||||
|
|
||||||
if var_type == "LoRA" or var_type == "LoRA Stacks":
|
if var_type == "LoRA" or var_type == "LoRA Stacks":
|
||||||
@@ -1274,11 +1293,12 @@ class TSC_KSampler:
|
|||||||
else:
|
else:
|
||||||
text = f"LoRA: {lora_filename}({lora_model_wt},{lora_clip_wt})"
|
text = f"LoRA: {lora_filename}({lora_model_wt},{lora_clip_wt})"
|
||||||
elif len(lora_stack) > 1:
|
elif len(lora_stack) > 1:
|
||||||
lora_filenames = [os.path.splitext(os.path.basename(lora_name))[0] for lora_name, _, _ in
|
lora_filenames = []
|
||||||
lora_stack]
|
lora_details = []
|
||||||
lora_details = [(format(float(lora_model_wt), ".2f").rstrip('0').rstrip('.'),
|
for lora_tuple in lora_stack:
|
||||||
format(float(lora_clip_wt), ".2f").rstrip('0').rstrip('.')) for
|
lora_filenames.append(os.path.splitext(os.path.basename(lora_tuple[0]))[0])
|
||||||
_, lora_model_wt, lora_clip_wt in lora_stack]
|
lora_details.append((format(float(lora_tuple[1]), ".2f").rstrip('0').rstrip('.'),
|
||||||
|
format(float(lora_tuple[2]), ".2f").rstrip('0').rstrip('.')))
|
||||||
non_name_length = sum(
|
non_name_length = sum(
|
||||||
len(f"({lora_details[i][0]},{lora_details[i][1]})") + 2 for i in range(len(lora_stack)))
|
len(f"({lora_details[i][0]},{lora_details[i][1]})") + 2 for i in range(len(lora_stack)))
|
||||||
available_space = max_label_len - non_name_length
|
available_space = max_label_len - non_name_length
|
||||||
@@ -1727,7 +1747,11 @@ class TSC_KSampler:
|
|||||||
if X_type not in lora_types and Y_type not in lora_types:
|
if X_type not in lora_types and Y_type not in lora_types:
|
||||||
if lora_stack:
|
if lora_stack:
|
||||||
names_list = []
|
names_list = []
|
||||||
for name, model_wt, clip_wt in lora_stack:
|
for lora_tuple in lora_stack:
|
||||||
|
# Support both 3-tuple and 4-tuple
|
||||||
|
name = lora_tuple[0]
|
||||||
|
model_wt = lora_tuple[1]
|
||||||
|
clip_wt = lora_tuple[2]
|
||||||
base_name = os.path.splitext(os.path.basename(name))[0]
|
base_name = os.path.splitext(os.path.basename(name))[0]
|
||||||
formatted_str = f"{base_name}({round(model_wt, 3)},{round(clip_wt, 3)})"
|
formatted_str = f"{base_name}({round(model_wt, 3)},{round(clip_wt, 3)})"
|
||||||
names_list.append(formatted_str)
|
names_list.append(formatted_str)
|
||||||
@@ -2923,7 +2947,8 @@ class TSC_XYplot_LoRA_Batch:
|
|||||||
"batch_sort": (["ascending", "descending"],),
|
"batch_sort": (["ascending", "descending"],),
|
||||||
"batch_max": ("INT",{"default": -1, "min": -1, "max": XYPLOT_LIM, "step": 1}),
|
"batch_max": ("INT",{"default": -1, "min": -1, "max": XYPLOT_LIM, "step": 1}),
|
||||||
"model_strength": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
"model_strength": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
||||||
"clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})},
|
"clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
|
||||||
|
"trigger_words": ("STRING", {"default": "", "multiline": True})},
|
||||||
"optional": {"lora_stack": ("LORA_STACK",)}
|
"optional": {"lora_stack": ("LORA_STACK",)}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -2932,7 +2957,7 @@ class TSC_XYplot_LoRA_Batch:
|
|||||||
FUNCTION = "xy_value"
|
FUNCTION = "xy_value"
|
||||||
CATEGORY = "Efficiency Nodes/XY Inputs"
|
CATEGORY = "Efficiency Nodes/XY Inputs"
|
||||||
|
|
||||||
def xy_value(self, batch_path, subdirectories, batch_sort, model_strength, clip_strength, batch_max, lora_stack=None):
|
def xy_value(self, batch_path, subdirectories, batch_sort, model_strength, clip_strength, trigger_words, batch_max, lora_stack=None):
|
||||||
if batch_max == 0:
|
if batch_max == 0:
|
||||||
return (None,)
|
return (None,)
|
||||||
|
|
||||||
@@ -2949,8 +2974,14 @@ class TSC_XYplot_LoRA_Batch:
|
|||||||
elif batch_sort == "descending":
|
elif batch_sort == "descending":
|
||||||
loras.sort(reverse=True)
|
loras.sort(reverse=True)
|
||||||
|
|
||||||
|
# Parse trigger words (one per line)
|
||||||
|
trigger_word_list = [tw.strip() for tw in trigger_words.split('\n')] if trigger_words else []
|
||||||
|
|
||||||
# Construct the xy_value using the obtained loras
|
# Construct the xy_value using the obtained loras
|
||||||
xy_value = [[(lora, model_strength, clip_strength)] + (lora_stack if lora_stack else []) for lora in loras]
|
xy_value = []
|
||||||
|
for i, lora in enumerate(loras):
|
||||||
|
trigger_word = trigger_word_list[i] if i < len(trigger_word_list) else ""
|
||||||
|
xy_value.append([(lora, model_strength, clip_strength, trigger_word)] + (lora_stack if lora_stack else []))
|
||||||
|
|
||||||
if batch_max != -1: # If there's a limit
|
if batch_max != -1: # If there's a limit
|
||||||
xy_value = xy_value[:batch_max]
|
xy_value = xy_value[:batch_max]
|
||||||
@@ -2976,6 +3007,7 @@ class TSC_XYplot_LoRA:
|
|||||||
"lora_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM, "step": 1}),
|
"lora_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM, "step": 1}),
|
||||||
"model_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
|
"model_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
|
||||||
"clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
|
"clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
|
||||||
|
"trigger_words": ("STRING", {"default": "", "multiline": True}),
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -2983,6 +3015,7 @@ class TSC_XYplot_LoRA:
|
|||||||
inputs["required"][f"lora_name_{i}"] = (loras,)
|
inputs["required"][f"lora_name_{i}"] = (loras,)
|
||||||
inputs["required"][f"model_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
inputs["required"][f"model_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
||||||
inputs["required"][f"clip_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
inputs["required"][f"clip_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
||||||
|
inputs["required"][f"trigger_word_{i}"] = ("STRING", {"default": "", "multiline": False})
|
||||||
|
|
||||||
inputs["optional"] = {
|
inputs["optional"] = {
|
||||||
"lora_stack": ("LORA_STACK",)
|
"lora_stack": ("LORA_STACK",)
|
||||||
@@ -3009,6 +3042,7 @@ class TSC_XYplot_LoRA:
|
|||||||
loras = [kwargs.get(f"lora_name_{i}") for i in range(1, lora_count + 1)]
|
loras = [kwargs.get(f"lora_name_{i}") for i in range(1, lora_count + 1)]
|
||||||
model_strs = [kwargs.get(f"model_str_{i}", model_strength) for i in range(1, lora_count + 1)]
|
model_strs = [kwargs.get(f"model_str_{i}", model_strength) for i in range(1, lora_count + 1)]
|
||||||
clip_strs = [kwargs.get(f"clip_str_{i}", clip_strength) for i in range(1, lora_count + 1)]
|
clip_strs = [kwargs.get(f"clip_str_{i}", clip_strength) for i in range(1, lora_count + 1)]
|
||||||
|
trigger_words = [kwargs.get(f"trigger_word_{i}", "") for i in range(1, lora_count + 1)]
|
||||||
|
|
||||||
# Use model_strength and clip_strength for the loras where values are not provided
|
# Use model_strength and clip_strength for the loras where values are not provided
|
||||||
if "Weights" not in input_mode:
|
if "Weights" not in input_mode:
|
||||||
@@ -3017,14 +3051,17 @@ class TSC_XYplot_LoRA:
|
|||||||
clip_strs[i] = clip_strength
|
clip_strs[i] = clip_strength
|
||||||
|
|
||||||
# Extend each sub-array with lora_stack if it's not None
|
# Extend each sub-array with lora_stack if it's not None
|
||||||
xy_value = [[(lora, model_str, clip_str)] + lora_stack for lora, model_str, clip_str
|
xy_value = [[(lora, model_str, clip_str, trigger_word)] + lora_stack
|
||||||
in zip(loras, model_strs, clip_strs) if lora != "None"]
|
for lora, model_str, clip_str, trigger_word
|
||||||
|
in zip(loras, model_strs, clip_strs, trigger_words) if lora != "None"]
|
||||||
|
|
||||||
result = ((xy_type, xy_value),)
|
result = ((xy_type, xy_value),)
|
||||||
else:
|
else:
|
||||||
try:
|
try:
|
||||||
|
# Get trigger_words from kwargs, default to empty string
|
||||||
|
trigger_words = kwargs.get("trigger_words", "")
|
||||||
result = self.lora_batch.xy_value(batch_path, subdirectories, batch_sort, model_strength,
|
result = self.lora_batch.xy_value(batch_path, subdirectories, batch_sort, model_strength,
|
||||||
clip_strength, batch_max, lora_stack)
|
clip_strength, trigger_words, batch_max, lora_stack)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
print(f"{error('XY Plot Error:')} {e}")
|
print(f"{error('XY Plot Error:')} {e}")
|
||||||
|
|
||||||
@@ -3048,10 +3085,12 @@ class TSC_XYplot_LoRA_Plot:
|
|||||||
"lora_name": (loras,),
|
"lora_name": (loras,),
|
||||||
"model_strength": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
"model_strength": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
||||||
"clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
|
"clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
|
||||||
|
"trigger_word": ("STRING", {"default": "", "multiline": False}),
|
||||||
"X_batch_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM}),
|
"X_batch_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM}),
|
||||||
"X_batch_path": ("STRING", {"default": xy_batch_default_path, "multiline": False}),
|
"X_batch_path": ("STRING", {"default": xy_batch_default_path, "multiline": False}),
|
||||||
"X_subdirectories": ("BOOLEAN", {"default": False}),
|
"X_subdirectories": ("BOOLEAN", {"default": False}),
|
||||||
"X_batch_sort": (["ascending", "descending"],),
|
"X_batch_sort": (["ascending", "descending"],),
|
||||||
|
"X_trigger_words": ("STRING", {"default": "", "multiline": True}),
|
||||||
"X_first_value": ("FLOAT", {"default": 0.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
"X_first_value": ("FLOAT", {"default": 0.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
||||||
"X_last_value": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
"X_last_value": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
||||||
"Y_batch_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM}),
|
"Y_batch_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM}),
|
||||||
@@ -3090,8 +3129,8 @@ class TSC_XYplot_LoRA_Plot:
 
             return (None,)
 
-    def xy_value(self, input_mode, lora_name, model_strength, clip_strength, X_batch_count, X_batch_path, X_subdirectories,
-                 X_batch_sort, X_first_value, X_last_value, Y_batch_count, Y_first_value, Y_last_value, lora_stack=None):
+    def xy_value(self, input_mode, lora_name, model_strength, clip_strength, trigger_word, X_batch_count, X_batch_path, X_subdirectories,
+                 X_batch_sort, X_trigger_words, X_first_value, X_last_value, Y_batch_count, Y_first_value, Y_last_value, lora_stack=None):
 
         x_value, y_value = [], []
         lora_stack = lora_stack if lora_stack else []
@@ -3101,6 +3140,7 @@ class TSC_XYplot_LoRA_Plot:
             return (None,None,)
         if "LoRA Batch" in input_mode:
             lora_name = None
+            trigger_word = None
         if "LoRA Weight" in input_mode:
             model_strength = None
             clip_strength = None
@@ -3113,7 +3153,7 @@ class TSC_XYplot_LoRA_Plot:
         if "X: LoRA Batch" in input_mode:
             try:
                 x_value = self.lora_batch.xy_value(X_batch_path, X_subdirectories, X_batch_sort,
-                                                   model_strength, clip_strength, X_batch_count, lora_stack)[0][1]
+                                                   model_strength, clip_strength, X_trigger_words, X_batch_count, lora_stack)[0][1]
             except Exception as e:
                 print(f"{error('XY Plot Error:')} {e}")
                 return (None,)
@@ -3121,19 +3161,19 @@ class TSC_XYplot_LoRA_Plot:
         elif "X: Model Strength" in input_mode:
             x_floats = generate_floats(X_batch_count, X_first_value, X_last_value)
             x_type = "LoRA MStr"
-            x_value = [[(lora_name, x, clip_strength)] + lora_stack for x in x_floats]
+            x_value = [[(lora_name, x, clip_strength, trigger_word)] + lora_stack for x in x_floats]
 
         # Handling Y values
         y_floats = generate_floats(Y_batch_count, Y_first_value, Y_last_value)
         if "Y: LoRA Weight" in input_mode:
             y_type = "LoRA Wt"
-            y_value = [[(lora_name, y, y)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, y, y, trigger_word)] + lora_stack for y in y_floats]
         elif "Y: Model Strength" in input_mode:
             y_type = "LoRA MStr"
-            y_value = [[(lora_name, y, clip_strength)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, y, clip_strength, trigger_word)] + lora_stack for y in y_floats]
         elif "Y: Clip Strength" in input_mode:
             y_type = "LoRA CStr"
-            y_value = [[(lora_name, model_strength, y)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, model_strength, y, trigger_word)] + lora_stack for y in y_floats]
 
         return ((x_type, x_value), (y_type, y_value))
 
@@ -3958,12 +3998,6 @@ class TSC_ImageOverlay:
         overlay_image = comfy.utils.common_upscale(samples, overlay_image_size[0], overlay_image_size[1], resize_method, False)
         overlay_image = overlay_image.movedim(1, -1)
 
-        # Handle batch dimension - use first image if overlay_image is a batch
-        if len(overlay_image.shape) == 4:
-            if overlay_image.shape[0] > 1:
-                print(f"{warning('Image Overlay Warning:')} Multiple overlay images detected ({overlay_image.shape[0]}), using only the first image.")
-            overlay_image = overlay_image[0]
-
         overlay_image = tensor2pil(overlay_image)
 
         # Add Alpha channel to overlay
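The diff above threads a trailing trigger word through every LoRA tuple that the XY plot emits. A minimal sketch of that assembly pattern, assuming `generate_floats` linearly interpolates `count` values between the two endpoints (its actual implementation lives elsewhere in the repo, so treat this version as illustrative):

```python
def generate_floats(count, first, last):
    """Evenly space `count` values from `first` to `last` (assumed behavior)."""
    if count <= 0:
        return []
    if count == 1:
        return [first]
    step = (last - first) / (count - 1)
    return [round(first + i * step, 3) for i in range(count)]

# Example inputs mirroring the "X: Model Strength" branch of the diff.
lora_name = "anime_style_v1.safetensors"
clip_strength = 1.0
trigger_word = "anime style"
lora_stack = []

x_floats = generate_floats(3, 0.0, 1.0)
# Each axis entry is now a 4-tuple carrying the trigger word alongside
# the name and strengths, prepended to any existing LoRA stack.
x_value = [[(lora_name, x, clip_strength, trigger_word)] + lora_stack for x in x_floats]
```

Downstream consumers that only care about loading can ignore the fourth element; prompt-building code reads it to inject the trigger word.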
tsc_utils.py (14 changed lines)
@@ -354,7 +354,13 @@ def load_lora(lora_params, ckpt_name, id, cache=None, ckpt_cache=None, cache_ove
     if len(lora_params) == 0:
         return ckpt, clip
 
-    lora_name, strength_model, strength_clip = lora_params[0]
+    lora_tuple = lora_params[0]
+    # Support both 3-tuple (old) and 4-tuple (new with trigger words)
+    lora_name = lora_tuple[0]
+    strength_model = lora_tuple[1]
+    strength_clip = lora_tuple[2]
+    # Ignore trigger_word (index 3) if present - it's only for prompt modification
 
     if os.path.isabs(lora_name):
         lora_path = lora_name
     else:
@@ -375,7 +381,11 @@ def load_lora(lora_params, ckpt_name, id, cache=None, ckpt_cache=None, cache_ove
         return recursive_load_lora(lora_params[1:], lora_model, lora_clip, id, ckpt_cache, cache_overwrite, folder_paths)
 
     # Unpack lora parameters from the first element of the list for now
-    lora_name, strength_model, strength_clip = lora_params[0]
+    # Support both 3-tuple (old) and 4-tuple (new with trigger words)
+    lora_tuple = lora_params[0]
+    lora_name = lora_tuple[0]
+    strength_model = lora_tuple[1]
+    strength_clip = lora_tuple[2]
    ckpt, clip, _ = load_checkpoint(ckpt_name, id, cache=ckpt_cache)
 
    lora_model, lora_clip = recursive_load_lora(lora_params, ckpt, clip, id, ckpt_cache, cache_overwrite, folder_paths)
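The `load_lora` changes replace destructuring with positional indexing so both the old 3-tuples and the new 4-tuples (with a trailing trigger word) pass through loading unchanged. A standalone sketch of that compatibility pattern (the helper names here are illustrative, not from the repo):

```python
def unpack_lora_entry(entry):
    """Return (lora_name, strength_model, strength_clip).

    Tolerates an optional 4th trigger_word element: indexing the first
    three positions works for both tuple shapes, whereas
    `a, b, c = entry` would raise ValueError on a 4-tuple.
    """
    return entry[0], entry[1], entry[2]


def get_trigger_word(entry):
    """Read the trigger word if present; loading code never calls this,
    only the prompt-building path does."""
    return entry[3] if len(entry) > 3 else ""
```

This keeps every existing caller that produces 3-tuples working, while new XY-plot code can append trigger words without touching the loader's logic.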