mirror of
https://github.com/jags111/efficiency-nodes-comfyui.git
synced 2026-05-07 01:06:42 -03:00
Compare commits — 3 commits (`copilot/ad…` → `copilot/se…`): 96a32d8499, 83a08492ef, 77031523a2
.github/copilot-instructions.md (vendored, new file, 306 lines)
@@ -0,0 +1,306 @@
# Copilot Instructions for Efficiency Nodes ComfyUI

## Repository Overview

This repository provides custom efficiency nodes for [ComfyUI](https://github.com/comfyanonymous/ComfyUI), a powerful node-based UI for Stable Diffusion. The nodes streamline workflows by combining multiple operations into efficient, cached, and preview-enabled nodes.

## Project Structure

- **`efficiency_nodes.py`**: Main file containing all 45+ node class definitions (primary implementation file)
- **`tsc_utils.py`**: Utility functions for caching, tensor operations, and console messaging
- **`__init__.py`**: Entry point that exports NODE_CLASS_MAPPINGS for ComfyUI
- **`py/`**: Specialized modules for upscaling, sampling, encoding, and tiling
- **`node_settings.json`**: Configuration for model caching behavior
- **`requirements.txt`**: Python dependencies (clip-interrogator, simpleeval)

## Core Architecture

### Node Pattern

All custom nodes follow the ComfyUI standard structure:

```python
class TSC_NodeName:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {...},  # Required inputs
            "optional": {...},  # Optional inputs
            "hidden": {...}     # Hidden inputs (UNIQUE_ID, PROMPT)
        }

    RETURN_TYPES = ("TYPE1", "TYPE2")
    RETURN_NAMES = ("output1", "output2")
    FUNCTION = "method_name"
    CATEGORY = "Efficiency Nodes/SubCategory"

    def method_name(self, **kwargs):
        # Node logic here
        return (result1, result2)
```

### Naming Conventions

- **Classes**: Use `TSC_` prefix (creator's initials) + descriptive name in PascalCase
  - Examples: `TSC_EfficientLoader`, `TSC_KSampler`, `TSC_XYplot`
- **Methods**: Use snake_case for all methods
- **Constants**: Use UPPER_SNAKE_CASE for module-level constants

### Custom Data Types

The repository defines several custom types for workflow composition:

- `LORA_STACK`: Tuple for stacking multiple LoRA models
- `CONTROL_NET_STACK`: Tuple for stacking ControlNet configurations
- `SCRIPT`: Type for chaining script operations (XY Plot, HighRes-Fix, etc.)
- `XY`: Type for XY plot data
- `SDXL_TUPLE`: SDXL-specific configuration tuple
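For orientation, a `LORA_STACK` is just a Python list of per-LoRA tuples, and stacks compose by list concatenation. A minimal sketch (the file names are illustrative; the 3-tuple shape matches what `TSC_LoRA_Stacker` builds):

```python
# A LORA_STACK is a list of (lora_name, model_strength, clip_strength) tuples.
# File names here are hypothetical examples.
lora_stack = [
    ("styles/anime_v1.safetensors", 1.0, 1.0),
    ("styles/photoreal_v2.safetensors", 0.8, 0.6),
]

def extend_stack(stack, extra):
    """Stacks compose by simple list concatenation, as TSC_LoRA_Stacker does."""
    return list(stack) + list(extra)

combined = extend_stack(lora_stack, [("detail_tweaker.safetensors", 0.5, 0.5)])
```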
## Key Patterns

### 1. Wrapper Pattern

Efficiency nodes wrap base ComfyUI nodes to add features:

```python
# Wraps KSampler with caching, preview, and script support
class TSC_KSampler:
    def sample(self, ...):
        # Check cache
        # Execute base KSampler
        # Store results
        # Handle script execution
        # Return enhanced output
```

### 2. Caching System

Use the caching utilities from `tsc_utils.py`:

```python
from tsc_utils import load_ksampler_results, store_ksampler_results

# Load cached results
cached = load_ksampler_results(unique_id, prompt)

# Store results for future use
store_ksampler_results(unique_id, prompt, results)
```

**Important**: Cache operations use `unique_id` and `prompt` from hidden inputs to ensure per-instance caching.

### 3. Stack Pattern

Support stacking for composable workflows:

```python
"optional": {
    "lora_stack": ("LORA_STACK",),
    "cnet_stack": ("CONTROL_NET_STACK",),
}
```

### 4. Script System

Nodes can execute scripts for advanced workflows:

```python
"optional": {
    "script": ("SCRIPT",),
}

# In node execution:
if script:
    # Execute script logic (XY Plot, HighRes-Fix, etc.)
    ...
```

### 5. Dynamic UI Inputs

Use `folder_paths` for dynamic dropdown population:

```python
import folder_paths

"required": {
    "ckpt_name": (folder_paths.get_filename_list("checkpoints"),),
    "vae_name": (["Baked VAE"] + folder_paths.get_filename_list("vae"),),
}
```

## Dependencies

### Required

- **PyTorch**: Core tensor operations
- **PIL**: Image processing
- **NumPy**: Array operations
- **clip-interrogator**: Image captioning
- **simpleeval**: Safe expression evaluation

### ComfyUI Integration

The code integrates with ComfyUI via `sys.path` manipulation:

```python
# Pattern used throughout the codebase
comfy_dir = os.path.abspath(os.path.join(my_dir, '..', '..'))
sys.path.append(comfy_dir)
from comfy import samplers, sd, utils
# ... imports ...
sys.path.remove(comfy_dir)
```

### Optional Dependencies

- **comfyui_controlnet_aux**: ControlNet preprocessing
- **ComfyUI-AnimateDiff-Evolved**: AnimateDiff support

Handle optional dependencies gracefully:

```python
try:
    import optional_module
    NODE_CLASS_MAPPINGS.update({"Node Name": NodeClass})
except ImportError:
    pass
```

## Node Registration

Nodes are registered in the `NODE_CLASS_MAPPINGS` dictionary:

```python
NODE_CLASS_MAPPINGS = {
    "Display Name": TSC_ClassName,
    "KSampler (Efficient)": TSC_KSampler,
    "Efficient Loader": TSC_EfficientLoader,
    # ... more nodes
}

# Optional nodes added conditionally
try:
    from simpleeval import simple_eval
    NODE_CLASS_MAPPINGS.update({
        "Simple Eval Examples": TSC_SimpleEval,
    })
except ImportError:
    print("simpleeval not installed, skipping related nodes")
```

## Code Style Guidelines

### Imports

1. Standard library imports first
2. Third-party imports (torch, PIL, numpy)
3. ComfyUI imports (with path manipulation)
4. Local imports (tsc_utils, py modules)

### Error Handling

Use the colored messaging functions from `tsc_utils.py`:

```python
from tsc_utils import error, warning, success

try:
    # Operation
    success("Operation completed")
except Exception as e:
    error(f"Operation failed: {e}")
```

### Input Validation

Validate inputs in the INPUT_TYPES definition:

```python
"seed": ("INT", {"default": 0, "min": 0, "max": 0xffffffffffffffff}),
"steps": ("INT", {"default": 20, "min": 1, "max": 10000}),
"cfg": ("FLOAT", {"default": 8.0, "min": 0.0, "max": 100.0}),
"sampler_name": (comfy.samplers.KSampler.SAMPLERS,),
```

## Testing and Validation

This repository does not have formal unit tests. Changes should be validated by:

1. **Import Test**: Verify `__init__.py` imports successfully
2. **ComfyUI Integration**: Load nodes in ComfyUI and verify they appear
3. **Workflow Test**: Create test workflows and verify node functionality
4. **Error Testing**: Test edge cases and ensure graceful error messages
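The import test can be partly automated with a small helper that checks every registered class exposes the attributes ComfyUI expects (a sketch; `_DummyNode` is a hypothetical stand-in — run `validate_mappings` against the real `NODE_CLASS_MAPPINGS` after importing the package):

```python
def validate_mappings(mappings):
    """Return a list of problems found in a NODE_CLASS_MAPPINGS-style dict."""
    problems = []
    for display_name, cls in mappings.items():
        for attr in ("INPUT_TYPES", "RETURN_TYPES", "FUNCTION", "CATEGORY"):
            if not hasattr(cls, attr):
                problems.append(f"{display_name}: missing {attr}")
    return problems

# Self-check against a minimal stand-in node (hypothetical):
class _DummyNode:
    RETURN_TYPES = ("LATENT",)
    FUNCTION = "run"
    CATEGORY = "Efficiency Nodes/Test"

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {}}

issues = validate_mappings({"Dummy": _DummyNode})
```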
## Common Patterns to Follow

### Adding a New Node

1. Create class with `TSC_` prefix
2. Define `INPUT_TYPES`, `RETURN_TYPES`, `FUNCTION`, `CATEGORY`
3. Implement the function method
4. Add to `NODE_CLASS_MAPPINGS`
5. Test in ComfyUI workflow

### Adding Optional Features

1. Wrap in try/except for dependency checking
2. Use `.update()` to add to NODE_CLASS_MAPPINGS
3. Provide fallback or skip if dependency missing
4. Print informative message about missing dependency

### Working with Models

1. Use `folder_paths` for model discovery
2. Implement caching via `tsc_utils` functions
3. Store loaded models in `loaded_objects` dict with unique IDs
4. Handle model loading errors gracefully
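Points 2–3 amount to a keyed cache. A minimal sketch (the `loaded_objects` name comes from this repo; the helper and key format are illustrative):

```python
loaded_objects = {}

def get_or_load(key, loader, cache=loaded_objects):
    """Load an expensive object once per unique key and reuse it afterwards."""
    if key not in cache:
        cache[key] = loader()
    return cache[key]

# Demonstration with a fake loader that counts invocations:
calls = []

def fake_loader():
    calls.append(1)
    return "model"

a = get_or_load("ckpt:sd_v15.safetensors", fake_loader)
b = get_or_load("ckpt:sd_v15.safetensors", fake_loader)  # cache hit: loader not called again
```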
### Handling UI Updates

1. Use hidden inputs for `UNIQUE_ID` and `PROMPT` tracking
2. Return UI update dictionaries when needed
3. Follow ComfyUI's output format for preview images
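As a sketch of point 2: a node that pushes preview images to the frontend returns a dict with `ui` and `result` keys instead of a bare tuple. The `{"ui": ..., "result": ...}` shape is ComfyUI's convention for UI updates; the filename below is purely illustrative:

```python
def sample_with_preview(latent, images):
    """Return node outputs plus a UI update that shows preview images.

    "result" must still match RETURN_TYPES; "ui" carries frontend data.
    """
    preview = [{"filename": "preview_00001_.png", "subfolder": "", "type": "temp"}]
    return {"ui": {"images": preview}, "result": (latent, images)}

out = sample_with_preview({"samples": None}, [])
```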
## Performance Considerations

- **Caching**: Always use caching for expensive operations (model loading, sampling)
- **Memory**: Be mindful of GPU memory with large models
- **Preview**: Implement progressive preview for long operations
- **Batching**: Support batch processing where applicable

## Documentation

- Update README.md for new nodes
- Add examples to the [project Wiki](https://github.com/jags111/efficiency-nodes-comfyui/wiki)
- Include workflow JSON examples for complex nodes
- Document any new configuration options in `node_settings.json`

## Key Files to Understand

1. **efficiency_nodes.py**: Study existing nodes for patterns
2. **tsc_utils.py**: Understand caching and utility functions
3. **py/bnk_adv_encode.py**: Advanced CLIP encoding examples
4. **py/smZ_cfg_denoiser.py**: Custom denoiser implementation
5. **__init__.py**: Entry point and version management

## ComfyUI-Specific Tips

- Nodes are instantiated fresh for each workflow execution
- Use `UNIQUE_ID` from hidden inputs for per-node-instance state
- `PROMPT` contains the full workflow graph
- Return types must match RETURN_TYPES exactly
- UI widgets are defined in INPUT_TYPES with tuples
- Use `folder_paths` for discovering models/resources

## Version Information

- Current version: 2.0+ (see `CC_VERSION` in `__init__.py`)
- Published to ComfyUI registry via `pyproject.toml`
- Auto-publishes on main branch when `pyproject.toml` changes

## Resources

- [ComfyUI Repository](https://github.com/comfyanonymous/ComfyUI)
- [Project Wiki](https://github.com/jags111/efficiency-nodes-comfyui/wiki)
- [Project README](../README.md)
- Original author: Luciano Cirino (TSC)
- Current maintainer: jags111
.gitignore (vendored, 32 lines)
@@ -1,32 +0,0 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db
@@ -1,129 +0,0 @@
# Example Workflow: Using Trigger Words with LoRA Batch Testing

This document provides a step-by-step guide to create a workflow that tests multiple LoRAs with their respective trigger words.

## Scenario

You want to test 3 different style LoRAs on the same prompt to see which produces the best results. Each LoRA requires a specific trigger word.

## LoRAs to Test

1. `anime_style_v1.safetensors` → Trigger word: "anime style, masterpiece"
2. `photorealistic_v2.safetensors` → Trigger word: "photorealistic, 8k uhd"
3. `oil_painting.safetensors` → Trigger word: "oil painting, classical art"

## Step-by-Step Setup

### 1. Add Efficient Loader Node

**Settings:**
- `ckpt_name`: Your base model (e.g., "sd_v15.safetensors")
- `positive`: "a beautiful mountain landscape at sunset"
- `negative`: "low quality, blurry"
- Leave `lora_name` as "None" (we'll use the XY Plot instead)

### 2. Add XY Input: LoRA Plot Node

**Settings:**
- `input_mode`: "X: LoRA Batch, Y: LoRA Weight"
- `X_batch_path`: Path to your LoRA folder (e.g., "D:\LoRAs" or "/home/user/loras")
- `X_subdirectories`: false
- `X_batch_sort`: "ascending"
- `X_batch_count`: 3
- `model_strength`: 1.0
- `clip_strength`: 1.0

**NEW - Trigger Words Field:**
```
X_trigger_words:
anime style, masterpiece
photorealistic, 8k uhd
oil painting, classical art
```

**Important:** Make sure the trigger words are in the same order as your sorted LoRAs!

### 3. Add XY Input: LoRA Plot Node (for Y-axis)

For the Y-axis, we'll vary the LoRA weights:

**Settings:**
- This node provides the Y values for the weight variations
- Connect the Y output from the LoRA Plot node configured in step 2

OR use a separate simple value node if you prefer fixed weight steps.

### 4. Add XY Plot Node

**Settings:**
- Connect the `X` output from the LoRA Plot node to the XY Plot's `X` input
- Connect the `Y` output to the XY Plot's `Y` input
- `grid_spacing`: 5
- `XY_flip`: True (if you want LoRAs on X-axis)

### 5. Add KSampler (Efficient) Node

**Settings:**
- Connect `script` input to the XY Plot node's output
- Connect `model`, `positive`, `negative`, `latent` from the Efficient Loader
- Set your sampling parameters (steps, CFG, sampler, etc.)

### 6. Add Save Image Node

Connect the output from KSampler to save your results.

## Expected Results

When you run this workflow, you'll get an XY plot grid with:

**X-axis (LoRAs):**
- Column 1: Images generated with anime_style_v1 LoRA
  - Prompt used: "a beautiful mountain landscape at sunset anime style, masterpiece"
- Column 2: Images generated with photorealistic_v2 LoRA
  - Prompt used: "a beautiful mountain landscape at sunset photorealistic, 8k uhd"
- Column 3: Images generated with oil_painting LoRA
  - Prompt used: "a beautiful mountain landscape at sunset oil painting, classical art"

**Y-axis (Weights):**
- Varying LoRA strengths as configured

## Tips

1. **Verify LoRA Order:** Run the workflow with `X_batch_count: 1` first to verify which LoRA is loaded first, then adjust your trigger words accordingly.

2. **Empty Trigger Words:** If a LoRA doesn't need a trigger word, just leave that line blank:
   ```
   anime style, masterpiece

   oil painting
   ```
   (The second LoRA has no trigger word)

3. **Test Individually First:** Before running a large batch, test each LoRA individually with its trigger word to ensure you have the correct trigger words.

4. **Combine with Other XY Inputs:** You can also combine LoRA batching with checkpoint variations, sampler variations, etc.

## Troubleshooting

**Problem:** Trigger words aren't being applied
- **Solution:** Check that you've entered trigger words in the `X_trigger_words` field (multiline text area)

**Problem:** Wrong trigger word applied to wrong LoRA
- **Solution:** Verify your LoRAs are sorted in the expected order. Use the same sort order for trigger words.

**Problem:** Too many/too few trigger words
- **Solution:** The number of trigger words should match `X_batch_count`. Extra trigger words are ignored, missing ones default to empty.

## Advanced: Combining with Prompt S/R

You can use Prompt Search & Replace in combination with trigger words for even more control:

1. Set up your LoRA Plot with trigger words as above
2. Add an **XY Input: Prompt S/R** node for the Y-axis instead
3. This allows you to vary both the LoRA (with its trigger word) and parts of the prompt simultaneously

Example:
- X-axis: Different LoRAs (each with trigger word)
- Y-axis: Replace "sunset" with ["sunrise", "midday", "midnight"]

Result: Each LoRA tested across different times of day, with appropriate trigger words applied.
@@ -1,132 +0,0 @@
# Trigger Words for LoRAs in XY Plot

## Overview

This feature allows you to automatically add trigger words to your positive prompts when specific LoRAs are applied during XY Plot batch runs. This is particularly useful when testing multiple LoRAs that require specific trigger words to work effectively.

## How It Works

When a LoRA with a trigger word is applied during an XY Plot iteration, the trigger word is automatically appended to the positive prompt before the image is generated. This ensures that each LoRA gets its required trigger word without having to add all trigger words to the base prompt.

## Supported Nodes

The following nodes now support trigger words:

1. **XY Input: LoRA Plot** - For batch LoRA testing with varying weights
2. **XY Input: LoRA** - For individual LoRA selection
3. **LoRA Stacker** - For creating LoRA stacks with trigger words

## Usage

### XY Input: LoRA Plot (Batch Mode)

When using the LoRA Plot node in batch mode (e.g., "X: LoRA Batch, Y: LoRA Weight"):

1. **X_trigger_words** (multiline text field): Enter one trigger word per line, matching the order of your LoRAs in the batch directory.

   Example:
   ```
   anime style
   masterpiece, highly detailed
   photorealistic
   ```

2. The LoRAs will be sorted according to your `X_batch_sort` setting (ascending/descending), and trigger words will be matched to them in order.

3. If you have more LoRAs than trigger words, the extra LoRAs will have no trigger word (empty string).
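The matching rule in points 1–3 can be sketched as follows (this mirrors the parsing logic in `TSC_XYplot_LoRA_Batch`; the helper name is illustrative):

```python
def match_trigger_words(loras, trigger_words_text):
    """Pair sorted LoRA names with trigger words, one word per line.

    LoRAs beyond the end of the list get an empty string.
    """
    trigger_word_list = (
        [tw.strip() for tw in trigger_words_text.split('\n')]
        if trigger_words_text else []
    )
    return [
        (lora, trigger_word_list[i] if i < len(trigger_word_list) else "")
        for i, lora in enumerate(loras)
    ]

pairs = match_trigger_words(
    ["a.safetensors", "b.safetensors", "c.safetensors"],
    "anime style\nmasterpiece, highly detailed",
)
```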
### XY Input: LoRA Plot (Single LoRA Mode)

When testing a single LoRA with varying weights:

1. **trigger_word** (single line text field): Enter the trigger word for the selected LoRA.

   Example: `anime style, masterpiece`

2. This trigger word will be added to all iterations for that LoRA.

### XY Input: LoRA (Individual Selection)

When selecting individual LoRAs:

1. **trigger_word_1, trigger_word_2, etc.**: Each LoRA slot has its own trigger word field.

2. Enter the appropriate trigger word for each LoRA you select.

3. In batch mode, use the **trigger_words** (multiline) field instead.

### LoRA Stacker

When creating LoRA stacks:

1. **trigger_word_1, trigger_word_2, etc.**: Each LoRA in the stack has its own trigger word field.

2. These trigger words will be preserved when the stack is passed to other nodes.

## Example Workflow

Here's a typical workflow using trigger words:

1. Create an **XY Input: LoRA Plot** node
2. Set `input_mode` to "X: LoRA Batch, Y: LoRA Weight"
3. Set `X_batch_path` to your LoRA directory
   - Windows: `d:\LoRas` or `C:\ComfyUI\models\loras`
   - Linux/Mac: `/path/to/loras` or `~/ComfyUI/models/loras`
4. Set `X_batch_count` to the number of LoRAs you want to test
5. In the `X_trigger_words` field, enter trigger words (one per line):
   ```
   anime style, masterpiece
   photorealistic, 8k
   oil painting, classical art
   ```
6. Connect to your **XY Plot** node
7. Set up your base prompt in the **Efficient Loader** (e.g., "a beautiful landscape")
8. Run the workflow

**Result**: Each LoRA will be tested with its trigger word automatically added to the prompt:
- LoRA 1: "a beautiful landscape anime style, masterpiece"
- LoRA 2: "a beautiful landscape photorealistic, 8k"
- LoRA 3: "a beautiful landscape oil painting, classical art"

## Technical Details

### Data Structure

LoRA parameters are now stored as 4-tuples instead of 3-tuples:
- **Old format (backward compatible)**: `(lora_name, model_strength, clip_strength)`
- **New format**: `(lora_name, model_strength, clip_strength, trigger_word)`

The system automatically handles both formats for backward compatibility.
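The dual-format handling reduces to a small unpacking idiom, which matches the pattern used in the KSampler code (the helper name is illustrative):

```python
def unpack_lora_tuple(lora_tuple):
    """Accept both the old 3-tuple and the new 4-tuple LoRA formats.

    A missing fourth element means "no trigger word" (empty string).
    """
    lora_name, model_strength, clip_strength = lora_tuple[:3]
    trigger_word = lora_tuple[3] if len(lora_tuple) > 3 else ""
    return lora_name, model_strength, clip_strength, trigger_word

old_style = unpack_lora_tuple(("anime_v1.safetensors", 1.0, 0.8))
new_style = unpack_lora_tuple(("anime_v1.safetensors", 1.0, 0.8, "anime style"))
```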
### Prompt Injection

Trigger words are appended to the positive prompt during XY Plot iteration, just before the model loads the LoRA and encodes the prompt. The original prompt is preserved in a tuple structure to support multiple iterations and combinations.
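A condensed sketch of that injection step, using the same `(current_prompt, original_prompt, prompt_after_X_loop)` tuple structure as the KSampler code:

```python
def inject_trigger_word(positive_prompt, trigger_word):
    """positive_prompt is (current_prompt, original_prompt, prompt_after_X_loop)."""
    if not trigger_word:
        return positive_prompt
    if positive_prompt[2] is not None:
        # In the Y loop after an X loop: build on the X-loop result
        return (positive_prompt[2] + " " + trigger_word,
                positive_prompt[1], positive_prompt[2])
    # In the X loop (or initially): build on the original and save for the Y loop
    modified = positive_prompt[1] + " " + trigger_word
    return (modified, positive_prompt[1], modified)

prompt = inject_trigger_word(("a landscape", "a landscape", None), "anime style")
```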
## Tips

1. **Empty trigger words are OK**: If a LoRA doesn't need a trigger word, just leave it blank.

2. **Multiple trigger words**: You can include multiple trigger words in a single field by separating them with commas: `anime style, masterpiece, highly detailed`

3. **Order matters**: In batch mode, make sure your trigger words are in the same order as your sorted LoRAs.

4. **Test first**: If you're unsure which trigger words a LoRA needs, check its documentation or test it individually first.

5. **Combining with Prompt S/R**: You can still use Prompt Search & Replace in combination with trigger words for even more control.

## Troubleshooting

**Q: My trigger words aren't being applied**
- Make sure you're using the updated nodes (check that trigger_word fields exist)
- Verify that you have trigger words entered in the correct fields
- Check that your LoRA count matches the number of trigger words (or use fewer trigger words)

**Q: Can I use trigger words with LoRA Stacks?**
- Yes! Use the LoRA Stacker node to create stacks with trigger words, then pass them to the XY Plot nodes.

**Q: Do trigger words work with the Efficient Loader's lora_name field?**
- The Efficient Loader's single LoRA field doesn't have a trigger word option. Use LoRA Stacker to create a stack with a trigger word, then connect it to the Efficient Loader's lora_stack input.

## Backward Compatibility

This feature is fully backward compatible. Existing workflows that don't use trigger words will continue to work exactly as before. The system automatically handles both 3-tuple (old) and 4-tuple (new) LoRA parameter formats.
@@ -1,63 +0,0 @@
# Summary: Trigger Words for LoRAs in XY Plot

## What's New?

You can now automatically add trigger words to your prompts when testing LoRAs in XY Plot workflows! This feature solves the problem where some LoRAs require specific trigger words to work effectively.

## Quick Start

### For Batch LoRA Testing

1. Use the **XY Input: LoRA Plot** node
2. In the `X_trigger_words` field, add one trigger word per line:
   ```
   anime style
   photorealistic
   oil painting
   ```
3. Your LoRAs will automatically get their trigger words during the batch run!

### For Individual LoRAs

1. Each LoRA slot in the **XY Input: LoRA** node now has a `trigger_word` field
2. Simply type the trigger word for each LoRA you select

### For LoRA Stacks

1. The **LoRA Stacker** node now has `trigger_word` fields for each slot
2. Create your stack with trigger words, and they'll be applied automatically

## Why Is This Useful?

**Before:** You had to either:
- Add all trigger words to your base prompt (causing unwanted interactions)
- Manually manage separate workflows for each LoRA
- Test without trigger words (suboptimal results)

**Now:** Trigger words are automatically added only when their specific LoRA is applied!

## Example

**Base Prompt:** "a beautiful landscape"

**LoRAs with Trigger Words:**
- style_lora_1.safetensors → "anime style, masterpiece"
- photo_lora.safetensors → "photorealistic, 8k"

**Automatic Results:**
- With style_lora_1: "a beautiful landscape anime style, masterpiece"
- With photo_lora: "a beautiful landscape photorealistic, 8k"

## Compatibility

✅ **Fully backward compatible** - existing workflows work without changes
✅ **Optional feature** - leave trigger words blank if you don't need them
✅ **Works with all XY Plot combinations** - LoRA weights, model strength, clip strength

## Where to Learn More

See [TRIGGER_WORDS_GUIDE.md](TRIGGER_WORDS_GUIDE.md) for detailed usage instructions, technical details, and troubleshooting tips.

## Feedback

If you encounter any issues or have suggestions for improvement, please open an issue on GitHub!
@@ -315,7 +315,6 @@ class TSC_LoRA_Stacker:
|
||||
inputs["required"][f"lora_wt_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
||||
inputs["required"][f"model_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
||||
inputs["required"][f"clip_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
|
||||
inputs["required"][f"trigger_word_{i}"] = ("STRING", {"default": "", "multiline": False})
|
||||
|
||||
inputs["optional"] = {
|
||||
"lora_stack": ("LORA_STACK",)
|
||||
@@ -331,18 +330,17 @@ class TSC_LoRA_Stacker:
|
||||
|
||||
# Extract values from kwargs
|
||||
loras = [kwargs.get(f"lora_name_{i}") for i in range(1, lora_count + 1)]
|
||||
trigger_words = [kwargs.get(f"trigger_word_{i}", "") for i in range(1, lora_count + 1)]
|
||||
|
||||
# Create a list of tuples using provided parameters, exclude tuples with lora_name as "None"
|
||||
if input_mode == "simple":
|
||||
weights = [kwargs.get(f"lora_wt_{i}") for i in range(1, lora_count + 1)]
|
||||
loras = [(lora_name, lora_weight, lora_weight, trigger_word) for lora_name, lora_weight, trigger_word in zip(loras, weights, trigger_words) if
|
||||
loras = [(lora_name, lora_weight, lora_weight) for lora_name, lora_weight in zip(loras, weights) if
|
||||
lora_name != "None"]
|
||||
else:
|
||||
model_strs = [kwargs.get(f"model_str_{i}") for i in range(1, lora_count + 1)]
|
||||
clip_strs = [kwargs.get(f"clip_str_{i}") for i in range(1, lora_count + 1)]
|
||||
loras = [(lora_name, model_str, clip_str, trigger_word) for lora_name, model_str, clip_str, trigger_word in
|
||||
zip(loras, model_strs, clip_strs, trigger_words) if lora_name != "None"]
|
||||
loras = [(lora_name, model_str, clip_str) for lora_name, model_str, clip_str in
|
||||
zip(loras, model_strs, clip_strs) if lora_name != "None"]
|
||||
|
||||
# If lora_stack is not None, extend the loras list with lora_stack
|
||||
if lora_stack is not None:
|
||||
@@ -1263,24 +1261,7 @@ class TSC_KSampler:
|
||||
lora_stack[0] = tuple(v if v is not None else lora_stack[0][i] for i, v in enumerate(var[0]))
|
||||
|
||||
max_label_len = 50 + (12 * (len(lora_stack) - 1))
|
||||
# Support both 3-tuple (old) and 4-tuple (new with trigger words)
|
||||
lora_tuple = lora_stack[0]
|
||||
lora_name = lora_tuple[0]
|
||||
lora_model_wt = lora_tuple[1]
|
||||
lora_clip_wt = lora_tuple[2]
|
||||
lora_trigger_word = lora_tuple[3] if len(lora_tuple) > 3 else ""
|
||||
|
||||
# Inject trigger word into positive prompt if present
|
||||
# positive_prompt structure: (current_prompt, original_prompt, prompt_after_X_loop)
|
||||
if lora_trigger_word:
|
||||
if positive_prompt[2] is not None:
|
||||
# In Y loop after X loop - build on the X loop result
|
||||
positive_prompt = (positive_prompt[2] + " " + lora_trigger_word, positive_prompt[1], positive_prompt[2])
|
||||
else:
|
||||
# In X loop or initial - build on original and save for Y loop
|
||||
modified_prompt = positive_prompt[1] + " " + lora_trigger_word
|
||||
positive_prompt = (modified_prompt, positive_prompt[1], modified_prompt)
|
||||
|
||||
lora_name, lora_model_wt, lora_clip_wt = lora_stack[0]
|
||||
lora_filename = os.path.splitext(os.path.basename(lora_name))[0]
|
||||
|
||||
if var_type == "LoRA" or var_type == "LoRA Stacks":
|
||||
@@ -1293,12 +1274,11 @@ class TSC_KSampler:
|
||||
else:
|
||||
text = f"LoRA: {lora_filename}({lora_model_wt},{lora_clip_wt})"
|
||||
elif len(lora_stack) > 1:
|
||||
lora_filenames = []
|
||||
lora_details = []
|
||||
for lora_tuple in lora_stack:
|
||||
lora_filenames.append(os.path.splitext(os.path.basename(lora_tuple[0]))[0])
|
||||
lora_details.append((format(float(lora_tuple[1]), ".2f").rstrip('0').rstrip('.'),
|
||||
format(float(lora_tuple[2]), ".2f").rstrip('0').rstrip('.')))
|
||||
lora_filenames = [os.path.splitext(os.path.basename(lora_name))[0] for lora_name, _, _ in
|
||||
lora_stack]
|
||||
lora_details = [(format(float(lora_model_wt), ".2f").rstrip('0').rstrip('.'),
|
||||
format(float(lora_clip_wt), ".2f").rstrip('0').rstrip('.')) for
|
||||
_, lora_model_wt, lora_clip_wt in lora_stack]
|
||||
non_name_length = sum(
|
||||
len(f"({lora_details[i][0]},{lora_details[i][1]})") + 2 for i in range(len(lora_stack)))
|
||||
available_space = max_label_len - non_name_length
|
||||
@@ -1747,11 +1727,7 @@ class TSC_KSampler:
|
||||
if X_type not in lora_types and Y_type not in lora_types:
|
||||
if lora_stack:
|
||||
names_list = []
|
||||
for lora_tuple in lora_stack:
|
||||
# Support both 3-tuple and 4-tuple
|
||||
name = lora_tuple[0]
|
||||
model_wt = lora_tuple[1]
|
||||
clip_wt = lora_tuple[2]
|
||||
for name, model_wt, clip_wt in lora_stack:
|
||||
base_name = os.path.splitext(os.path.basename(name))[0]
|
||||
formatted_str = f"{base_name}({round(model_wt, 3)},{round(clip_wt, 3)})"
|
||||
names_list.append(formatted_str)
|
||||
@@ -2947,8 +2923,7 @@ class TSC_XYplot_LoRA_Batch:
|
||||
"batch_sort": (["ascending", "descending"],),
|
||||
"batch_max": ("INT",{"default": -1, "min": -1, "max": XYPLOT_LIM, "step": 1}),
|
||||
"model_strength": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
|
||||
"clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
|
||||
"trigger_words": ("STRING", {"default": "", "multiline": True})},
|
||||
"clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})},
|
||||
"optional": {"lora_stack": ("LORA_STACK",)}
|
||||
}
|
||||
|
||||
@@ -2957,7 +2932,7 @@ class TSC_XYplot_LoRA_Batch:
|
||||
FUNCTION = "xy_value"
|
||||
CATEGORY = "Efficiency Nodes/XY Inputs"
|
||||
|
||||
def xy_value(self, batch_path, subdirectories, batch_sort, model_strength, clip_strength, trigger_words, batch_max, lora_stack=None):
|
||||
def xy_value(self, batch_path, subdirectories, batch_sort, model_strength, clip_strength, batch_max, lora_stack=None):
|
||||
if batch_max == 0:
|
||||
return (None,)
|
||||
|
||||
@@ -2974,14 +2949,8 @@ class TSC_XYplot_LoRA_Batch:
         elif batch_sort == "descending":
             loras.sort(reverse=True)

-        # Parse trigger words (one per line)
-        trigger_word_list = [tw.strip() for tw in trigger_words.split('\n')] if trigger_words else []
-
         # Construct the xy_value using the obtained loras
-        xy_value = []
-        for i, lora in enumerate(loras):
-            trigger_word = trigger_word_list[i] if i < len(trigger_word_list) else ""
-            xy_value.append([(lora, model_strength, clip_strength, trigger_word)] + (lora_stack if lora_stack else []))
+        xy_value = [[(lora, model_strength, clip_strength)] + (lora_stack if lora_stack else []) for lora in loras]

         if batch_max != -1:  # If there's a limit
             xy_value = xy_value[:batch_max]
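The removed loop above paired each LoRA with one trigger word per line of a multiline string, defaulting to an empty string when the list ran out. A standalone sketch of that pairing pattern (names are illustrative):

```python
# Sketch of the removed pairing logic: one trigger word per line of
# `trigger_words`; LoRAs past the end of the list get "".
def pair_with_triggers(loras, trigger_words):
    trigger_list = [tw.strip() for tw in trigger_words.split('\n')] if trigger_words else []
    return [(lora, trigger_list[i] if i < len(trigger_list) else "")
            for i, lora in enumerate(loras)]

pairs = pair_with_triggers(["a.safetensors", "b.safetensors", "c.safetensors"], "cat\ndog")
# pairs -> [('a.safetensors', 'cat'), ('b.safetensors', 'dog'), ('c.safetensors', '')]
```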
@@ -3007,7 +2976,6 @@ class TSC_XYplot_LoRA:
                     "lora_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM, "step": 1}),
                     "model_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
                     "clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
-                    "trigger_words": ("STRING", {"default": "", "multiline": True}),
                 }
                 }

@@ -3015,7 +2983,6 @@ class TSC_XYplot_LoRA:
             inputs["required"][f"lora_name_{i}"] = (loras,)
             inputs["required"][f"model_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
             inputs["required"][f"clip_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
-            inputs["required"][f"trigger_word_{i}"] = ("STRING", {"default": "", "multiline": False})

         inputs["optional"] = {
             "lora_stack": ("LORA_STACK",)
@@ -3042,7 +3009,6 @@ class TSC_XYplot_LoRA:
         loras = [kwargs.get(f"lora_name_{i}") for i in range(1, lora_count + 1)]
         model_strs = [kwargs.get(f"model_str_{i}", model_strength) for i in range(1, lora_count + 1)]
         clip_strs = [kwargs.get(f"clip_str_{i}", clip_strength) for i in range(1, lora_count + 1)]
-        trigger_words = [kwargs.get(f"trigger_word_{i}", "") for i in range(1, lora_count + 1)]

         # Use model_strength and clip_strength for the loras where values are not provided
         if "Weights" not in input_mode:
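The `kwargs.get(f"...{i}", default)` pattern above gathers values from dynamically generated node inputs, falling back to a shared default for any index the user did not supply. A minimal sketch of that pattern in isolation (the helper name and data are illustrative):

```python
# Sketch: collect indexed widget values from a kwargs dict, using a
# shared fallback for missing indices — mirrors the
# kwargs.get(f"model_str_{i}", model_strength) lines above.
def gather_indexed(kwargs, prefix, count, default):
    return [kwargs.get(f"{prefix}{i}", default) for i in range(1, count + 1)]

widgets = {"model_str_1": 0.7, "model_str_3": 0.9}
print(gather_indexed(widgets, "model_str_", 3, 1.0))  # -> [0.7, 1.0, 0.9]
```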
@@ -3051,17 +3017,14 @@ class TSC_XYplot_LoRA:
                 clip_strs[i] = clip_strength

         # Extend each sub-array with lora_stack if it's not None
-        xy_value = [[(lora, model_str, clip_str, trigger_word)] + lora_stack
-                    for lora, model_str, clip_str, trigger_word
-                    in zip(loras, model_strs, clip_strs, trigger_words) if lora != "None"]
+        xy_value = [[(lora, model_str, clip_str)] + lora_stack for lora, model_str, clip_str
+                    in zip(loras, model_strs, clip_strs) if lora != "None"]

             result = ((xy_type, xy_value),)
         else:
             try:
-                # Get trigger_words from kwargs, default to empty string
-                trigger_words = kwargs.get("trigger_words", "")
                 result = self.lora_batch.xy_value(batch_path, subdirectories, batch_sort, model_strength,
-                                                  clip_strength, trigger_words, batch_max, lora_stack)
+                                                  clip_strength, batch_max, lora_stack)
             except Exception as e:
                 print(f"{error('XY Plot Error:')} {e}")

@@ -3085,12 +3048,10 @@ class TSC_XYplot_LoRA_Plot:
                     "lora_name": (loras,),
                     "model_strength": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
                     "clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
-                    "trigger_word": ("STRING", {"default": "", "multiline": False}),
                     "X_batch_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM}),
                     "X_batch_path": ("STRING", {"default": xy_batch_default_path, "multiline": False}),
                     "X_subdirectories": ("BOOLEAN", {"default": False}),
                     "X_batch_sort": (["ascending", "descending"],),
-                    "X_trigger_words": ("STRING", {"default": "", "multiline": True}),
                     "X_first_value": ("FLOAT", {"default": 0.0, "min": -10.00, "max": 10.0, "step": 0.01}),
                     "X_last_value": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
                     "Y_batch_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM}),
@@ -3129,8 +3090,8 @@ class TSC_XYplot_LoRA_Plot:

             return (None,)

-    def xy_value(self, input_mode, lora_name, model_strength, clip_strength, trigger_word, X_batch_count, X_batch_path, X_subdirectories,
-                 X_batch_sort, X_trigger_words, X_first_value, X_last_value, Y_batch_count, Y_first_value, Y_last_value, lora_stack=None):
+    def xy_value(self, input_mode, lora_name, model_strength, clip_strength, X_batch_count, X_batch_path, X_subdirectories,
+                 X_batch_sort, X_first_value, X_last_value, Y_batch_count, Y_first_value, Y_last_value, lora_stack=None):

         x_value, y_value = [], []
         lora_stack = lora_stack if lora_stack else []
@@ -3140,7 +3101,6 @@ class TSC_XYplot_LoRA_Plot:
             return (None,None,)
         if "LoRA Batch" in input_mode:
            lora_name = None
-           trigger_word = None
        if "LoRA Weight" in input_mode:
            model_strength = None
            clip_strength = None
@@ -3153,7 +3113,7 @@ class TSC_XYplot_LoRA_Plot:
         if "X: LoRA Batch" in input_mode:
             try:
                 x_value = self.lora_batch.xy_value(X_batch_path, X_subdirectories, X_batch_sort,
-                                                   model_strength, clip_strength, X_trigger_words, X_batch_count, lora_stack)[0][1]
+                                                   model_strength, clip_strength, X_batch_count, lora_stack)[0][1]
             except Exception as e:
                 print(f"{error('XY Plot Error:')} {e}")
                 return (None,)
@@ -3161,19 +3121,19 @@ class TSC_XYplot_LoRA_Plot:
         elif "X: Model Strength" in input_mode:
             x_floats = generate_floats(X_batch_count, X_first_value, X_last_value)
             x_type = "LoRA MStr"
-            x_value = [[(lora_name, x, clip_strength, trigger_word)] + lora_stack for x in x_floats]
+            x_value = [[(lora_name, x, clip_strength)] + lora_stack for x in x_floats]

         # Handling Y values
         y_floats = generate_floats(Y_batch_count, Y_first_value, Y_last_value)
         if "Y: LoRA Weight" in input_mode:
             y_type = "LoRA Wt"
-            y_value = [[(lora_name, y, y, trigger_word)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, y, y)] + lora_stack for y in y_floats]
         elif "Y: Model Strength" in input_mode:
             y_type = "LoRA MStr"
-            y_value = [[(lora_name, y, clip_strength, trigger_word)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, y, clip_strength)] + lora_stack for y in y_floats]
         elif "Y: Clip Strength" in input_mode:
             y_type = "LoRA CStr"
-            y_value = [[(lora_name, model_strength, y, trigger_word)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, model_strength, y)] + lora_stack for y in y_floats]

         return ((x_type, x_value), (y_type, y_value))

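The axis sweeps above build their strength values with `generate_floats`. Assuming that helper returns `count` evenly spaced values between a first and last value (an assumption; the real helper lives elsewhere in this repo), the XY-value construction can be sketched standalone:

```python
# Sketch of an axis sweep. generate_floats here is a stand-in for the
# repo's helper, assumed to yield `count` evenly spaced values.
def generate_floats(count, first, last):
    if count <= 1:
        return [first] if count == 1 else []
    step = (last - first) / (count - 1)
    return [round(first + i * step, 3) for i in range(count)]

lora_name, clip_strength, lora_stack = "styleA", 1.0, []
x_value = [[(lora_name, x, clip_strength)] + lora_stack
           for x in generate_floats(3, 0.0, 1.0)]
print(x_value)  # -> [[('styleA', 0.0, 1.0)], [('styleA', 0.5, 1.0)], [('styleA', 1.0, 1.0)]]
```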
tsc_utils.py (14 changed lines)
@@ -354,13 +354,7 @@ def load_lora(lora_params, ckpt_name, id, cache=None, ckpt_cache=None, cache_ove
     if len(lora_params) == 0:
         return ckpt, clip

-    lora_tuple = lora_params[0]
-    # Support both 3-tuple (old) and 4-tuple (new with trigger words)
-    lora_name = lora_tuple[0]
-    strength_model = lora_tuple[1]
-    strength_clip = lora_tuple[2]
-    # Ignore trigger_word (index 3) if present - it's only for prompt modification
-
+    lora_name, strength_model, strength_clip = lora_params[0]
     if os.path.isabs(lora_name):
         lora_path = lora_name
     else:
@@ -381,11 +375,7 @@ def load_lora(lora_params, ckpt_name, id, cache=None, ckpt_cache=None, cache_ove
         return recursive_load_lora(lora_params[1:], lora_model, lora_clip, id, ckpt_cache, cache_overwrite, folder_paths)

     # Unpack lora parameters from the first element of the list for now
-    # Support both 3-tuple (old) and 4-tuple (new with trigger words)
-    lora_tuple = lora_params[0]
-    lora_name = lora_tuple[0]
-    strength_model = lora_tuple[1]
-    strength_clip = lora_tuple[2]
+    lora_name, strength_model, strength_clip = lora_params[0]
     ckpt, clip, _ = load_checkpoint(ckpt_name, id, cache=ckpt_cache)

     lora_model, lora_clip = recursive_load_lora(lora_params, ckpt, clip, id, ckpt_cache, cache_overwrite, folder_paths)

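`recursive_load_lora` consumes the parameter list head-first, recursing on `lora_params[1:]` until the list is empty. A minimal standalone sketch of that recursion shape (`apply_one` is a hypothetical stand-in for the real weight-patching call, here modeled as list accumulation):

```python
# Sketch: head-first recursion over a LoRA parameter list, mirroring
# recursive_load_lora(lora_params[1:], ...). The "model" is just a list
# so the sketch stays self-contained.
def apply_all(lora_params, model):
    if len(lora_params) == 0:
        return model
    lora_name, strength_model, strength_clip = lora_params[0]
    model = model + [(lora_name, strength_model, strength_clip)]  # apply_one stand-in
    return apply_all(lora_params[1:], model)

print(apply_all([("a", 1.0, 1.0), ("b", 0.5, 0.5)], []))
```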