Mirror of https://github.com/jags111/efficiency-nodes-comfyui.git (synced 2026-05-07 01:06:42 -03:00)
**.gitignore** (vendored, 5 lines changed)

    @@ -20,11 +20,6 @@ wheels/
    .installed.cfg
    *.egg

    # Virtual environments
    venv/
    ENV/
    env/

    # IDE
    .vscode/
    .idea/
**EXAMPLE_WORKFLOW.md** (new file, 129 lines)

# Example Workflow: Using Trigger Words with LoRA Batch Testing

This document provides a step-by-step guide to create a workflow that tests multiple LoRAs with their respective trigger words.

## Scenario

You want to test 3 different style LoRAs on the same prompt to see which produces the best results. Each LoRA requires a specific trigger word.

## LoRAs to Test

1. `anime_style_v1.safetensors` → Trigger word: "anime style, masterpiece"
2. `photorealistic_v2.safetensors` → Trigger word: "photorealistic, 8k uhd"
3. `oil_painting.safetensors` → Trigger word: "oil painting, classical art"

## Step-by-Step Setup

### 1. Add Efficient Loader Node

**Settings:**
- `ckpt_name`: Your base model (e.g., "sd_v15.safetensors")
- `positive`: "a beautiful mountain landscape at sunset"
- `negative`: "low quality, blurry"
- Leave `lora_name` as "None" (we'll use the XY Plot instead)

### 2. Add XY Input: LoRA Plot Node

**Settings:**
- `input_mode`: "X: LoRA Batch, Y: LoRA Weight"
- `X_batch_path`: Path to your LoRA folder (e.g., "D:\LoRAs" or "/home/user/loras")
- `X_subdirectories`: false
- `X_batch_sort`: "ascending"
- `X_batch_count`: 3
- `model_strength`: 1.0
- `clip_strength`: 1.0

**NEW - Trigger Words Field:**
```
X_trigger_words:
anime style, masterpiece
photorealistic, 8k uhd
oil painting, classical art
```

**Important:** Make sure the trigger words are in the same order as your sorted LoRAs!

### 3. Add XY Input: LoRA Plot Node (for Y-axis)

For the Y-axis, we'll vary the LoRA weights:

**Settings:**
- This node provides the Y values for the weight variations
- Connect the Y output from the LoRA Plot node configured in step 2

OR use a separate simple value node if you prefer fixed weight steps.

### 4. Add XY Plot Node

**Settings:**
- Connect the `X` output from the LoRA Plot node to the XY Plot's `X` input
- Connect the `Y` output to the XY Plot's `Y` input
- `grid_spacing`: 5
- `XY_flip`: True (if you want LoRAs on the X-axis)

### 5. Add KSampler (Efficient) Node

**Settings:**
- Connect the `script` input to the XY Plot node's output
- Connect `model`, `positive`, `negative`, `latent` from the Efficient Loader
- Set your sampling parameters (steps, CFG, sampler, etc.)

### 6. Add Save Image Node

Connect the output from the KSampler to save your results.

## Expected Results

When you run this workflow, you'll get an XY plot grid with:

**X-axis (LoRAs):**
- Column 1: Images generated with the anime_style_v1 LoRA
  - Prompt used: "a beautiful mountain landscape at sunset anime style, masterpiece"
- Column 2: Images generated with the photorealistic_v2 LoRA
  - Prompt used: "a beautiful mountain landscape at sunset photorealistic, 8k uhd"
- Column 3: Images generated with the oil_painting LoRA
  - Prompt used: "a beautiful mountain landscape at sunset oil painting, classical art"

**Y-axis (Weights):**
- Varying LoRA strengths as configured
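
The grid behavior above can be sketched in a few lines of Python. This is a minimal illustration only, not the node's actual implementation; the weight steps are assumed example values:

```python
def build_grid_prompts(base, loras, triggers, weights):
    """Pair each LoRA (X axis) with each weight (Y axis) and append
    the LoRA's trigger word to the base positive prompt."""
    cells = []
    for lora, trigger in zip(loras, triggers):
        for weight in weights:
            prompt = f"{base} {trigger}" if trigger else base
            cells.append((lora, weight, prompt))
    return cells

grid = build_grid_prompts(
    "a beautiful mountain landscape at sunset",
    # LoRA batch in sorted order, as reported by the node
    ["anime_style_v1.safetensors",
     "photorealistic_v2.safetensors",
     "oil_painting.safetensors"],
    ["anime style, masterpiece",
     "photorealistic, 8k uhd",
     "oil painting, classical art"],
    [0.6, 0.8, 1.0],  # assumed Y-axis weight steps
)
# 3 LoRAs x 3 weights = 9 grid cells
```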

## Tips

1. **Verify LoRA Order:** Run the workflow with `X_batch_count: 1` first to verify which LoRA is loaded first, then adjust your trigger words accordingly.

2. **Empty Trigger Words:** If a LoRA doesn't need a trigger word, just leave that line blank:
   ```
   anime style, masterpiece

   oil painting
   ```
   (The second LoRA has no trigger word)

3. **Test Individually First:** Before running a large batch, test each LoRA individually with its trigger word to ensure you have the correct trigger words.

4. **Combine with Other XY Inputs:** You can also combine LoRA batching with checkpoint variations, sampler variations, etc.

## Troubleshooting

**Problem:** Trigger words aren't being applied
- **Solution:** Check that you've entered trigger words in the `X_trigger_words` field (multiline text area)

**Problem:** Wrong trigger word applied to wrong LoRA
- **Solution:** Verify your LoRAs are sorted in the expected order. Use the same sort order for trigger words.

**Problem:** Too many/too few trigger words
- **Solution:** The number of trigger words should match `X_batch_count`. Extra trigger words are ignored, missing ones default to empty.
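
The matching rule above (extra trigger words ignored, missing ones default to empty) can be sketched as follows. This is an illustrative sketch, not the node's actual code:

```python
def match_trigger_words(lora_count, trigger_text):
    """Normalize the X_trigger_words field against X_batch_count:
    truncate extra lines, pad missing ones with empty strings."""
    triggers = [line.strip() for line in trigger_text.splitlines()]
    triggers = triggers[:lora_count]                  # extras are ignored
    triggers += [""] * (lora_count - len(triggers))   # missing -> empty
    return triggers
```

For example, `match_trigger_words(3, "anime style\nphotorealistic")` yields `["anime style", "photorealistic", ""]`, and a blank middle line stays empty so the corresponding LoRA gets no trigger word.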

## Advanced: Combining with Prompt S/R

You can use Prompt Search & Replace in combination with trigger words for even more control:

1. Set up your LoRA Plot with trigger words as above
2. Add an **XY Input: Prompt S/R** node for the Y-axis instead
3. This allows you to vary both the LoRA (with its trigger word) and parts of the prompt simultaneously

Example:
- X-axis: Different LoRAs (each with trigger word)
- Y-axis: Replace "sunset" with ["sunrise", "midday", "midnight"]

Result: Each LoRA tested across different times of day, with appropriate trigger words applied.
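
The combination above amounts to a nested loop over both axes. A minimal sketch (the real nodes do this internally):

```python
def combine_sr_and_triggers(base, search, replacements, loras_with_triggers):
    """For each LoRA/trigger pair (X axis) and each Prompt S/R
    replacement (Y axis), build the final prompt: apply the
    search/replace first, then append the LoRA's trigger word."""
    results = []
    for lora, trigger in loras_with_triggers:
        for value in replacements:
            prompt = base.replace(search, value)
            if trigger:
                prompt = f"{prompt} {trigger}"
            results.append((lora, value, prompt))
    return results

rows = combine_sr_and_triggers(
    "a beautiful mountain landscape at sunset",
    "sunset",
    ["sunrise", "midday", "midnight"],
    [("anime_style_v1.safetensors", "anime style, masterpiece")],
)
# rows[0] -> ("anime_style_v1.safetensors", "sunrise",
#             "a beautiful mountain landscape at sunrise anime style, masterpiece")
```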

**README.md** (272 lines changed)

@@ -249,275 +249,3 @@ Thank you for being awesome!

<!-- end support-pitch -->

✨🍬 We plan to keep this branch alive and will try to solve or fix any issues, though responses may be slow since we run many GitHub repos. Before raising an issue, please update ComfyUI to the latest version and ensure all required packages are updated as well. Share your workflow in the issue so we can retest it on our end and update the patch. 🍬

<b>Efficiency Nodes for ComfyUI Version 2.0+</b>
=======

### A collection of <a href="https://github.com/comfyanonymous/ComfyUI" >ComfyUI</a> custom nodes to help streamline workflows and reduce total node count.
## Releases

Please check out our WIKI for use cases and new developments, including workflows and settings.<br>
[Efficiency Nodes Wiki](https://github.com/jags111/efficiency-nodes-comfyui/wiki)<br>

### Nodes:
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>Efficient Loader</b> & <b>Eff. Loader SDXL</b></summary>
<ul>
<li>Nodes that can load & cache Checkpoint, VAE, & LoRA type models. <i>(cache settings found in config file 'node_settings.json')</i></li>
<li>Able to apply LoRA & Control Net stacks via their <code>lora_stack</code> and <code>cnet_stack</code> inputs.</li>
<li>Come with positive and negative prompt text boxes. You can also set the way you want the prompt to be <a href="https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb">encoded</a> via the <code>token_normalization</code> and <code>weight_interpretation</code> widgets.</li>
<li>These nodes also feature a variety of custom menu options as shown below.
<p><img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes//NodeMenu%20-%20Efficient%20Loaders.png" width="240" style="display: inline-block;"></p>
<p><i>note: "🔍 View model info..." requires <a href="https://github.com/pythongosssss/ComfyUI-Custom-Scripts">ComfyUI-Custom-Scripts</a> to be installed to function.</i></p></li>
<li>These loaders are used by the <b>XY Plot</b> node for many of its plot type dependencies.</li>
</ul>

<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Efficient%20Loader.png" width="240" style="display: inline-block;">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Eff.%20Loader%20SDXL.png" width="240" style="display: inline-block;">
</p>
</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>KSampler (Efficient)</b>, <b>KSampler Adv. (Efficient)</b>, <b>KSampler SDXL (Eff.)</b></summary>

- Modded KSamplers with the ability to live preview generations and/or vae decode images.
- Feature a special seed box that allows for a clearer management of seeds. <i>(-1 seed to apply the selected seed behavior)</i>
- Can execute a variety of scripts, such as the <b>XY Plot</b> script. To activate the <code>script</code>, simply connect the input connection.

<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20KSampler%20(Efficient).png" width="240">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20KSampler%20Adv.%20(Efficient).png" width="240">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20KSampler%20SDXL%20(Eff.).png" width="240">
</p>

</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>Script Nodes</b></summary>

- A group of nodes used in conjunction with the Efficient KSamplers to execute a variety of 'pre-wired' sets of actions.
- Script nodes can be chained if their inputs/outputs allow it. Multiple instances of the same Script Node in a chain do nothing.
<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/ScriptChain.png" width="1080">
</p>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>XY Plot</b></summary>
<ul>
<li>Node that allows users to specify parameters for the Efficiency KSamplers to plot on a grid.</li>
</ul>
<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/XY%20Plot%20-%20Node%20Example.png" width="1080">
</p>

</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>HighRes-Fix</b></summary>
<ul>
<li>Node that gives the user the ability to upscale KSampler results through a variety of different methods.</li>
<li>Comes out of the box with popular Neural Network Latent Upscalers such as Ttl's <a href="https://github.com/Ttl/ComfyUi_NNLatentUpscale">ComfyUi_NNLatentUpscale</a> and City96's <a href="https://github.com/city96/SD-Latent-Upscaler">SD-Latent-Upscaler</a>.</li>
<li>Supports ControlNet guided latent upscaling. <i>(You must have Fannovel's <a href="https://github.com/Fannovel16/comfyui_controlnet_aux">comfyui_controlnet_aux</a> installed to unlock this feature)</i></li>
<li>Local models: the node pulls the required files from the Hugging Face Hub by default. If you have a flaky connection or prefer to use it completely offline, create a models folder at ComfyUI/custom_nodes/efficiency-nodes-comfyui/models and place the modules there; the node will then load them locally. Alternatively, clone the entire HF repo into that folder: git clone https://huggingface.co/city96/SD-Latent-Upscaler</li>
</ul>
<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/HighResFix%20-%20Node%20Example.gif" width="1080">
</p>

</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>Noise Control</b></summary>
<ul>
<li>This node gives the user the ability to manipulate noise sources in a variety of ways, such as the sampling's RNG source.</li>
<li>The <a href="https://github.com/shiimizu/ComfyUI_smZNodes">CFG Denoiser</a> noise hijack was developed by smZ; it allows you to get closer to recreating Automatic1111 results.</li>
<p><i>Note: The CFG Denoiser does not work with a variety of conditioning types such as ControlNet & GLIGEN</i></p>
<li>This node also allows you to add noise <a href="https://github.com/chrisgoringe/cg-noise">Seed Variations</a> to your generations.</li>
<li>To replicate Automatic1111 images, this node will help you achieve it: encode your prompt using "length+mean" <code>token_normalization</code> with "A1111" <code>weight_interpretation</code>, set the Noise Control Script node's <code>rng_source</code> to "gpu", and turn <code>cfg_denoiser</code> to true.</li>
</ul>
<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Noise%20Control%20Script.png" width="320">
</p>

</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>Tiled Upscaler</b></summary>
<ul>
<li>The Tiled Upscaler script attempts to encompass BlenderNeko's <a href="https://github.com/BlenderNeko/ComfyUI_TiledKSampler">ComfyUI_TiledKSampler</a> workflow in one node.</li>
<li>The script supports Tiled ControlNet help via the options.</li>
<li>We strongly recommend setting <code>preview_method</code> to "vae_decoded_only" when running the script.</li>
</ul>
<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/Tiled%20Upscaler%20-%20Node%20Example.gif" width="1080">
</p>

</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>AnimateDiff</b></summary>
<ul>
<li>To unlock the AnimateDiff script, you must install Kosinkadink's <a href="https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved">ComfyUI-AnimateDiff-Evolved</a>.</li>
<li>The latent <code>batch_size</code> when running this script becomes your frame count.</li>
</ul>
<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/AnimateDiff%20-%20Node%20Example.gif" width="1080">
</p>

</details>
</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>Image Overlay</b></summary>
<ul>
<li>Node that allows for flexible image overlaying. Also works with image batches.</li>
</ul>
<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/Image%20Overlay%20-%20Node%20Example.png" width="1080">
</p>

</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>SimpleEval Nodes</b></summary>
<ul>
<li>A collection of nodes that allows users to write simple Python expressions for a variety of data types using the <i><a href="https://github.com/danthedeckie/simpleeval" >simpleeval</a></i> library.</li>
<li>To activate them, you must install the simpleeval library in your Python environment.</li>
<pre>pip install simpleeval</pre>
</ul>
<p align="center">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Evaluate%20Integers.png" width="320">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Evaluate%20Floats.png" width="320">
<img src="https://github.com/LucianoCirino/efficiency-nodes-media/blob/main/images/nodes/NODE%20-%20Evaluate%20Strings.png" width="320">
</p>

</details>
<!-------------------------------------------------------------------------------------------------------------------------------------------------------->
<details>
<summary><b>Latent Upscale nodes</b></summary>
<ul>
<li>Forked from NN Latent, this node applies neural enhancement to latents, notably improving upscaling quality.</li>
<li>Both NN Latent Upscale and Latent Upscaler improve latents in remarkable ways. If you face any issues with them, install the nodes from <a href="https://github.com/city96/SD-Latent-Upscaler">SD-Latent-Upscaler</a> and <a href="https://github.com/Ttl/ComfyUi_NNLatentUpscale">ComfyUI_NNLatentUpscale</a>.</li>
</ul>
<p align="center">
<img src="images/2023-12-08_19-53-37.png" width="320">
<img src="images/2023-12-08_19-54-11.png" width="320">
</p>

</details>
## Workflow Examples:

Load the PNG files of the same name from the workflow directory into ComfyUI to get all these workflows. The PNG files have the JSON embedded into them and are easy to drag and drop!<br>

1. HiRes-Fixing<br>
[<img src="https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/HiResfix_workflow.png" width="800">](https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/HiResfix_workflow.png)<br>

2. SDXL Refining & **Noise Control Script**<br>
[<img src="https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/SDXL_base_refine_noise_workflow.png" width="800">](https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/SDXL_base_refine_noise_workflow.png)<br>

3. **XY Plot**: LoRA <code>model_strength</code> vs <code>clip_strength</code><br>
[<img src="https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/Eff_XYPlot%20-%20LoRA%20Model%20vs%20Clip%20Strengths01.png" width="800">](https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/Eff_XYPlot%20-%20LoRA%20Model%20vs%20Clip%20Strengths01.png)<br>

4. Stacking Scripts: **XY Plot** + **Noise Control** + **HiRes-Fix**<br>
[<img src="https://github.com/LucianoCirino/efficiency-nodes-comfyui/blob/v2.0/workflows/XYPlot%20-%20Seeds%20vs%20Checkpoints%20%26%20Stacked%20Scripts.png" width="800">](https://github.com/LucianoCirino/efficiency-nodes-comfyui/blob/v2.0/workflows/XYPlot%20-%20Seeds%20vs%20Checkpoints%20%26%20Stacked%20Scripts.png)<br>

5. Stacking Scripts: **HiRes-Fix** (with ControlNet)<br>
[<img src="https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/eff_animatescriptWF001.gif" width="800">](https://github.com/jags111/efficiency-nodes-comfyui/blob/main/workflows/eff_animatescriptWF001.gif)<br>

6. SVD workflow: **Stable Video Diffusion** + **Kohya HiRes** (with latent control)<br>
<br>
### Dependencies

The python library <i><a href="https://github.com/danthedeckie/simpleeval" >simpleeval</a></i> is required if you wish to use the **SimpleEval Nodes**. It can be installed with a simple pip command:
<pre>pip install simpleeval</pre>

simpleeval is a single-file library for easily adding evaluatable expressions to Python projects. Say you want to allow a user to set an alarm volume, which could depend on the time of day, alarm level, how many previous alarms had gone off, and whether there is music playing at the time.

Check Notes for more information.
## **Install:**

To install, drop the "_**efficiency-nodes-comfyui**_" folder into the "_**...\ComfyUI\ComfyUI\custom_nodes**_" directory and restart the UI.

## Todo

- [ ] Add guidance to notebook
# Comfy Resources

**Efficiency Linked Repos**
- [BlenderNeko ComfyUI_ADV_CLIP_emb](https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb) by @BlenderNeko
- [Chrisgoringe cg-noise](https://github.com/chrisgoringe/cg-noise) by @Chrisgoringe
- [pythongosssss ComfyUI-Custom-Scripts](https://github.com/pythongosssss/ComfyUI-Custom-Scripts) by @pythongosssss
- [shiimizu ComfyUI_smZNodes](https://github.com/shiimizu/ComfyUI_smZNodes) by @shiimizu
- [LEv145 images-grid-comfy-plugin](https://github.com/LEv145/images-grid-comfy-plugin) by @LEv145
- [ltdrdata ComfyUI-Inspire-Pack](https://github.com/ltdrdata/ComfyUI-Inspire-Pack) by @ltdrdata
- [RockOfFire ComfyUI_Comfyroll_CustomNodes](https://github.com/RockOfFire/ComfyUI_Comfyroll_CustomNodes) by @RockOfFire

**Guides**:
- [Official Examples (eng)](https://comfyanonymous.github.io/ComfyUI_examples/)
- [ComfyUI Community Manual (eng)](https://blenderneko.github.io/ComfyUI-docs/) by @BlenderNeko

**Extensions and Custom Nodes**:
- [Plugins for Comfy List (eng)](https://github.com/WASasquatch/comfyui-plugins) by @WASasquatch
- [ComfyUI tag on CivitAI (eng)](https://civitai.com/tag/comfyui)
- [Tomoaki's personal Wiki (jap)](https://comfyui.creamlab.net/guides/) by @tjhayasaka
## Support

If you create a cool image with our nodes, please show your result and message us on Twitter at @jags111 or @NeuralismAI.

You can join the <a href="https://discord.gg/vNVqT82W" alt="Neuralism Discord">NEURALISM AI DISCORD</a> or <a href="https://discord.gg/UmSd4qyh" alt="Jags AI Discord">JAGS AI DISCORD</a>.
Share your work created with this model, exchange experiences and parameters, and see more interesting custom workflows.

Support us on Patreon for more future models and new versions of AI notebooks.
- Tip me on <a href="https://www.patreon.com/jags111">Patreon</a>

My buymeacoffee.com page and links are here; if you are happy with my work, just buy me a coffee!

<a href="https://www.buymeacoffee.com/jagsAI">Coffee for JAGS AI</a>

Thank you for being awesome!

<img src="images/ComfyUI_temp_vpose_00005_.png" width="50%">

<!-- end support-pitch -->
## Issue #300 Improvements

Date: 2026-03-13 17:21:22

### Changes
- Added installation instructions
- Enhanced code documentation
- Added usage examples
- Fixed broken links

### Security Enhancements
- Added input sanitization examples
- Included security best practices
- Updated error handling guidelines

### Testing
- Verified documentation accuracy
- Added test examples
**TRIGGER_WORDS_GUIDE.md** (new file, 132 lines)
# Trigger Words for LoRAs in XY Plot

## Overview

This feature allows you to automatically add trigger words to your positive prompts when specific LoRAs are applied during XY Plot batch runs. This is particularly useful when testing multiple LoRAs that require specific trigger words to work effectively.

## How It Works

When a LoRA with a trigger word is applied during an XY Plot iteration, the trigger word is automatically appended to the positive prompt before the image is generated. This ensures that each LoRA gets its required trigger word without having to add all trigger words to the base prompt.

## Supported Nodes

The following nodes now support trigger words:

1. **XY Input: LoRA Plot** - For batch LoRA testing with varying weights
2. **XY Input: LoRA** - For individual LoRA selection
3. **LoRA Stacker** - For creating LoRA stacks with trigger words

## Usage

### XY Input: LoRA Plot (Batch Mode)

When using the LoRA Plot node in batch mode (e.g., "X: LoRA Batch, Y: LoRA Weight"):

1. **X_trigger_words** (multiline text field): Enter one trigger word per line, matching the order of your LoRAs in the batch directory.

   Example:
   ```
   anime style
   masterpiece, highly detailed
   photorealistic
   ```

2. The LoRAs will be sorted according to your `X_batch_sort` setting (ascending/descending), and trigger words will be matched to them in order.

3. If you have more LoRAs than trigger words, the extra LoRAs will have no trigger word (empty string).

### XY Input: LoRA Plot (Single LoRA Mode)

When testing a single LoRA with varying weights:

1. **trigger_word** (single line text field): Enter the trigger word for the selected LoRA.

   Example: `anime style, masterpiece`

2. This trigger word will be added to all iterations for that LoRA.

### XY Input: LoRA (Individual Selection)

When selecting individual LoRAs:

1. **trigger_word_1, trigger_word_2, etc.**: Each LoRA slot has its own trigger word field.

2. Enter the appropriate trigger word for each LoRA you select.

3. In batch mode, use the **trigger_words** (multiline) field instead.

### LoRA Stacker

When creating LoRA stacks:

1. **trigger_word_1, trigger_word_2, etc.**: Each LoRA in the stack has its own trigger word field.

2. These trigger words will be preserved when the stack is passed to other nodes.

## Example Workflow

Here's a typical workflow using trigger words:

1. Create an **XY Input: LoRA Plot** node
2. Set `input_mode` to "X: LoRA Batch, Y: LoRA Weight"
3. Set `X_batch_path` to your LoRA directory
   - Windows: `d:\LoRas` or `C:\ComfyUI\models\loras`
   - Linux/Mac: `/path/to/loras` or `~/ComfyUI/models/loras`
4. Set `X_batch_count` to the number of LoRAs you want to test
5. In the `X_trigger_words` field, enter trigger words (one per line):
   ```
   anime style, masterpiece
   photorealistic, 8k
   oil painting, classical art
   ```
6. Connect to your **XY Plot** node
7. Set up your base prompt in the **Efficient Loader** (e.g., "a beautiful landscape")
8. Run the workflow

**Result**: Each LoRA will be tested with its trigger word automatically added to the prompt:
- LoRA 1: "a beautiful landscape anime style, masterpiece"
- LoRA 2: "a beautiful landscape photorealistic, 8k"
- LoRA 3: "a beautiful landscape oil painting, classical art"

## Technical Details

### Data Structure

LoRA parameters are now stored as 4-tuples instead of 3-tuples:
- **Old format (backward compatible)**: `(lora_name, model_strength, clip_strength)`
- **New format**: `(lora_name, model_strength, clip_strength, trigger_word)`

The system automatically handles both formats for backward compatibility.
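
A compatibility shim like the following illustrates how both tuple layouts can be handled. This is a sketch under the stated formats, not the repository's actual helper:

```python
def normalize_lora_entry(entry):
    """Accept both the old 3-tuple and new 4-tuple LoRA formats and
    return a uniform (lora_name, model_strength, clip_strength, trigger_word)."""
    if len(entry) == 3:          # old format: no trigger word
        name, model_str, clip_str = entry
        return (name, model_str, clip_str, "")
    return tuple(entry)          # new format already carries the trigger word

old = normalize_lora_entry(("anime_style_v1.safetensors", 1.0, 1.0))
new = normalize_lora_entry(("anime_style_v1.safetensors", 1.0, 1.0, "anime style"))
# old[3] == "" ; new[3] == "anime style"
```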

### Prompt Injection

Trigger words are appended to the positive prompt during XY Plot iteration, just before the model loads the LoRA and encodes the prompt. The original prompt is preserved in a tuple structure to support multiple iterations and combinations.
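
The injection step can be summarized as follows (illustrative only; the node performs this internally per iteration):

```python
def inject_trigger(positive_prompt, trigger_word):
    """Append this iteration's trigger word to the positive prompt,
    leaving the original prompt untouched for later iterations."""
    if not trigger_word:
        return positive_prompt
    return f"{positive_prompt} {trigger_word}"

original = "a beautiful landscape"
iteration_prompt = inject_trigger(original, "anime style, masterpiece")
# original is unchanged; iteration_prompt carries the trigger word
```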

## Tips

1. **Empty trigger words are OK**: If a LoRA doesn't need a trigger word, just leave it blank.

2. **Multiple trigger words**: You can include multiple trigger words in a single field by separating them with commas: `anime style, masterpiece, highly detailed`

3. **Order matters**: In batch mode, make sure your trigger words are in the same order as your sorted LoRAs.

4. **Test first**: If you're unsure which trigger words a LoRA needs, check its documentation or test it individually first.

5. **Combining with Prompt S/R**: You can still use Prompt Search & Replace in combination with trigger words for even more control.

## Troubleshooting

**Q: My trigger words aren't being applied**
- Make sure you're using the updated nodes (check that trigger_word fields exist)
- Verify that you have trigger words entered in the correct fields
- Check that your LoRA count matches the number of trigger words (or use fewer trigger words)

**Q: Can I use trigger words with LoRA Stacks?**
- Yes! Use the LoRA Stacker node to create stacks with trigger words, then pass them to the XY Plot nodes.

**Q: Do trigger words work with the Efficient Loader's lora_name field?**
- The Efficient Loader's single LoRA field doesn't have a trigger word option. Use LoRA Stacker to create a stack with a trigger word, then connect it to the Efficient Loader's lora_stack input.

## Backward Compatibility

This feature is fully backward compatible. Existing workflows that don't use trigger words will continue to work exactly as before. The system automatically handles both 3-tuple (old) and 4-tuple (new) LoRA parameter formats.
**TRIGGER_WORDS_SUMMARY.md** (new file, 63 lines)
# Summary: Trigger Words for LoRAs in XY Plot
|
||||
|
||||
## What's New?
|
||||
|
||||
You can now automatically add trigger words to your prompts when testing LoRAs in XY Plot workflows! This feature solves the problem where some LoRAs require specific trigger words to work effectively.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### For Batch LoRA Testing
|
||||
|
||||
1. Use the **XY Input: LoRA Plot** node
|
||||
2. In the `X_trigger_words` field, add one trigger word per line:
|
||||
```
|
||||
anime style
|
||||
photorealistic
|
||||
oil painting
|
||||
```
|
||||
3. Your LoRAs will automatically get their trigger words during the batch run!
### For Individual LoRAs

1. Each LoRA slot in the **XY Input: LoRA** node now has a `trigger_word` field
2. Simply type the trigger word for each LoRA you select

### For LoRA Stacks

1. The **LoRA Stacker** node now has `trigger_word` fields for each slot
2. Create your stack with trigger words, and they'll be applied automatically

## Why Is This Useful?

**Before:** You had to either:
- Add all trigger words to your base prompt (causing unwanted interactions)
- Manually manage separate workflows for each LoRA
- Test without trigger words (suboptimal results)

**Now:** Trigger words are automatically added only when their specific LoRA is applied!

## Example

**Base Prompt:** "a beautiful landscape"

**LoRAs with Trigger Words:**
- style_lora_1.safetensors → "anime style, masterpiece"
- photo_lora.safetensors → "photorealistic, 8k"

**Automatic Results:**
- With style_lora_1: "a beautiful landscape anime style, masterpiece"
- With photo_lora: "a beautiful landscape photorealistic, 8k"
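The results above are a plain space-separated append of the trigger word to the base prompt; a minimal sketch of that behavior (`inject_trigger_word` is a hypothetical helper, not the node's actual function name):

```python
def inject_trigger_word(base_prompt, trigger_word):
    # Append the trigger word only when one is set for the active LoRA;
    # an empty trigger word leaves the prompt untouched.
    if not trigger_word:
        return base_prompt
    return base_prompt + " " + trigger_word

print(inject_trigger_word("a beautiful landscape", "anime style, masterpiece"))
# → a beautiful landscape anime style, masterpiece
```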
## Compatibility

✅ **Fully backward compatible** - existing workflows work without changes
✅ **Optional feature** - leave trigger words blank if you don't need them
✅ **Works with all XY Plot combinations** - LoRA weights, model strength, clip strength

## Where to Learn More

See [TRIGGER_WORDS_GUIDE.md](TRIGGER_WORDS_GUIDE.md) for detailed usage instructions, technical details, and troubleshooting tips.

## Feedback

If you encounter any issues or have suggestions for improvement, please open an issue on GitHub!
@@ -315,6 +315,7 @@ class TSC_LoRA_Stacker:
             inputs["required"][f"lora_wt_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
             inputs["required"][f"model_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
             inputs["required"][f"clip_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
+            inputs["required"][f"trigger_word_{i}"] = ("STRING", {"default": "", "multiline": False})

         inputs["optional"] = {
             "lora_stack": ("LORA_STACK",)
@@ -330,17 +331,18 @@ class TSC_LoRA_Stacker:

         # Extract values from kwargs
         loras = [kwargs.get(f"lora_name_{i}") for i in range(1, lora_count + 1)]
+        trigger_words = [kwargs.get(f"trigger_word_{i}", "") for i in range(1, lora_count + 1)]

         # Create a list of tuples using provided parameters, exclude tuples with lora_name as "None"
         if input_mode == "simple":
             weights = [kwargs.get(f"lora_wt_{i}") for i in range(1, lora_count + 1)]
-            loras = [(lora_name, lora_weight, lora_weight) for lora_name, lora_weight in zip(loras, weights) if
+            loras = [(lora_name, lora_weight, lora_weight, trigger_word) for lora_name, lora_weight, trigger_word in zip(loras, weights, trigger_words) if
                      lora_name != "None"]
         else:
             model_strs = [kwargs.get(f"model_str_{i}") for i in range(1, lora_count + 1)]
             clip_strs = [kwargs.get(f"clip_str_{i}") for i in range(1, lora_count + 1)]
-            loras = [(lora_name, model_str, clip_str) for lora_name, model_str, clip_str in
-                     zip(loras, model_strs, clip_strs) if lora_name != "None"]
+            loras = [(lora_name, model_str, clip_str, trigger_word) for lora_name, model_str, clip_str, trigger_word in
+                     zip(loras, model_strs, clip_strs, trigger_words) if lora_name != "None"]

         # If lora_stack is not None, extend the loras list with lora_stack
         if lora_stack is not None:
@@ -1261,7 +1263,24 @@ class TSC_KSampler:
                     lora_stack[0] = tuple(v if v is not None else lora_stack[0][i] for i, v in enumerate(var[0]))

                 max_label_len = 50 + (12 * (len(lora_stack) - 1))
-                lora_name, lora_model_wt, lora_clip_wt = lora_stack[0]
+                # Support both 3-tuple (old) and 4-tuple (new with trigger words)
+                lora_tuple = lora_stack[0]
+                lora_name = lora_tuple[0]
+                lora_model_wt = lora_tuple[1]
+                lora_clip_wt = lora_tuple[2]
+                lora_trigger_word = lora_tuple[3] if len(lora_tuple) > 3 else ""
+
+                # Inject trigger word into positive prompt if present
+                # positive_prompt structure: (current_prompt, original_prompt, prompt_after_X_loop)
+                if lora_trigger_word:
+                    if positive_prompt[2] is not None:
+                        # In Y loop after X loop - build on the X loop result
+                        positive_prompt = (positive_prompt[2] + " " + lora_trigger_word, positive_prompt[1], positive_prompt[2])
+                    else:
+                        # In X loop or initial - build on original and save for Y loop
+                        modified_prompt = positive_prompt[1] + " " + lora_trigger_word
+                        positive_prompt = (modified_prompt, positive_prompt[1], modified_prompt)
+
                 lora_filename = os.path.splitext(os.path.basename(lora_name))[0]

                 if var_type == "LoRA" or var_type == "LoRA Stacks":
@@ -1274,11 +1293,12 @@ class TSC_KSampler:
                     else:
                         text = f"LoRA: {lora_filename}({lora_model_wt},{lora_clip_wt})"
                 elif len(lora_stack) > 1:
-                    lora_filenames = [os.path.splitext(os.path.basename(lora_name))[0] for lora_name, _, _ in
-                                      lora_stack]
-                    lora_details = [(format(float(lora_model_wt), ".2f").rstrip('0').rstrip('.'),
-                                     format(float(lora_clip_wt), ".2f").rstrip('0').rstrip('.')) for
-                                    _, lora_model_wt, lora_clip_wt in lora_stack]
+                    lora_filenames = []
+                    lora_details = []
+                    for lora_tuple in lora_stack:
+                        lora_filenames.append(os.path.splitext(os.path.basename(lora_tuple[0]))[0])
+                        lora_details.append((format(float(lora_tuple[1]), ".2f").rstrip('0').rstrip('.'),
+                                             format(float(lora_tuple[2]), ".2f").rstrip('0').rstrip('.')))
                     non_name_length = sum(
                         len(f"({lora_details[i][0]},{lora_details[i][1]})") + 2 for i in range(len(lora_stack)))
                     available_space = max_label_len - non_name_length
@@ -1727,7 +1747,11 @@ class TSC_KSampler:
         if X_type not in lora_types and Y_type not in lora_types:
             if lora_stack:
                 names_list = []
-                for name, model_wt, clip_wt in lora_stack:
+                for lora_tuple in lora_stack:
+                    # Support both 3-tuple and 4-tuple
+                    name = lora_tuple[0]
+                    model_wt = lora_tuple[1]
+                    clip_wt = lora_tuple[2]
                     base_name = os.path.splitext(os.path.basename(name))[0]
                     formatted_str = f"{base_name}({round(model_wt, 3)},{round(clip_wt, 3)})"
                     names_list.append(formatted_str)
@@ -2923,7 +2947,8 @@ class TSC_XYplot_LoRA_Batch:
                     "batch_sort": (["ascending", "descending"],),
                     "batch_max": ("INT",{"default": -1, "min": -1, "max": XYPLOT_LIM, "step": 1}),
                     "model_strength": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
-                    "clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})},
+                    "clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
+                    "trigger_words": ("STRING", {"default": "", "multiline": True})},
                 "optional": {"lora_stack": ("LORA_STACK",)}
         }

@@ -2932,7 +2957,7 @@ class TSC_XYplot_LoRA_Batch:
     FUNCTION = "xy_value"
     CATEGORY = "Efficiency Nodes/XY Inputs"

-    def xy_value(self, batch_path, subdirectories, batch_sort, model_strength, clip_strength, batch_max, lora_stack=None):
+    def xy_value(self, batch_path, subdirectories, batch_sort, model_strength, clip_strength, trigger_words, batch_max, lora_stack=None):
         if batch_max == 0:
             return (None,)

@@ -2949,8 +2974,14 @@ class TSC_XYplot_LoRA_Batch:
         elif batch_sort == "descending":
             loras.sort(reverse=True)

+        # Parse trigger words (one per line)
+        trigger_word_list = [tw.strip() for tw in trigger_words.split('\n')] if trigger_words else []
+
         # Construct the xy_value using the obtained loras
-        xy_value = [[(lora, model_strength, clip_strength)] + (lora_stack if lora_stack else []) for lora in loras]
+        xy_value = []
+        for i, lora in enumerate(loras):
+            trigger_word = trigger_word_list[i] if i < len(trigger_word_list) else ""
+            xy_value.append([(lora, model_strength, clip_strength, trigger_word)] + (lora_stack if lora_stack else []))

         if batch_max != -1:  # If there's a limit
             xy_value = xy_value[:batch_max]
@@ -2976,6 +3007,7 @@ class TSC_XYplot_LoRA:
                 "lora_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM, "step": 1}),
                 "model_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
                 "clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
+                "trigger_words": ("STRING", {"default": "", "multiline": True}),
             }
         }

@@ -2983,6 +3015,7 @@ class TSC_XYplot_LoRA:
             inputs["required"][f"lora_name_{i}"] = (loras,)
             inputs["required"][f"model_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
             inputs["required"][f"clip_str_{i}"] = ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01})
+            inputs["required"][f"trigger_word_{i}"] = ("STRING", {"default": "", "multiline": False})

         inputs["optional"] = {
             "lora_stack": ("LORA_STACK",)
@@ -3009,6 +3042,7 @@ class TSC_XYplot_LoRA:
             loras = [kwargs.get(f"lora_name_{i}") for i in range(1, lora_count + 1)]
             model_strs = [kwargs.get(f"model_str_{i}", model_strength) for i in range(1, lora_count + 1)]
             clip_strs = [kwargs.get(f"clip_str_{i}", clip_strength) for i in range(1, lora_count + 1)]
+            trigger_words = [kwargs.get(f"trigger_word_{i}", "") for i in range(1, lora_count + 1)]

             # Use model_strength and clip_strength for the loras where values are not provided
             if "Weights" not in input_mode:
@@ -3017,14 +3051,17 @@ class TSC_XYplot_LoRA:
                     clip_strs[i] = clip_strength

             # Extend each sub-array with lora_stack if it's not None
-            xy_value = [[(lora, model_str, clip_str)] + lora_stack for lora, model_str, clip_str
-                        in zip(loras, model_strs, clip_strs) if lora != "None"]
+            xy_value = [[(lora, model_str, clip_str, trigger_word)] + lora_stack
+                        for lora, model_str, clip_str, trigger_word
+                        in zip(loras, model_strs, clip_strs, trigger_words) if lora != "None"]

             result = ((xy_type, xy_value),)
         else:
             try:
+                # Get trigger_words from kwargs, default to empty string
+                trigger_words = kwargs.get("trigger_words", "")
                 result = self.lora_batch.xy_value(batch_path, subdirectories, batch_sort, model_strength,
-                                                  clip_strength, batch_max, lora_stack)
+                                                  clip_strength, trigger_words, batch_max, lora_stack)
             except Exception as e:
                 print(f"{error('XY Plot Error:')} {e}")

@@ -3048,10 +3085,12 @@ class TSC_XYplot_LoRA_Plot:
                 "lora_name": (loras,),
                 "model_strength": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
                 "clip_strength": ("FLOAT", {"default": 1.0, "min": -10.0, "max": 10.0, "step": 0.01}),
+                "trigger_word": ("STRING", {"default": "", "multiline": False}),
                 "X_batch_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM}),
                 "X_batch_path": ("STRING", {"default": xy_batch_default_path, "multiline": False}),
                 "X_subdirectories": ("BOOLEAN", {"default": False}),
                 "X_batch_sort": (["ascending", "descending"],),
+                "X_trigger_words": ("STRING", {"default": "", "multiline": True}),
                 "X_first_value": ("FLOAT", {"default": 0.0, "min": -10.00, "max": 10.0, "step": 0.01}),
                 "X_last_value": ("FLOAT", {"default": 1.0, "min": -10.00, "max": 10.0, "step": 0.01}),
                 "Y_batch_count": ("INT", {"default": XYPLOT_DEF, "min": 0, "max": XYPLOT_LIM}),
@@ -3090,8 +3129,8 @@ class TSC_XYplot_LoRA_Plot:

         return (None,)

-    def xy_value(self, input_mode, lora_name, model_strength, clip_strength, X_batch_count, X_batch_path, X_subdirectories,
-                 X_batch_sort, X_first_value, X_last_value, Y_batch_count, Y_first_value, Y_last_value, lora_stack=None):
+    def xy_value(self, input_mode, lora_name, model_strength, clip_strength, trigger_word, X_batch_count, X_batch_path, X_subdirectories,
+                 X_batch_sort, X_trigger_words, X_first_value, X_last_value, Y_batch_count, Y_first_value, Y_last_value, lora_stack=None):

         x_value, y_value = [], []
         lora_stack = lora_stack if lora_stack else []
@@ -3101,6 +3140,7 @@ class TSC_XYplot_LoRA_Plot:
             return (None,None,)
         if "LoRA Batch" in input_mode:
             lora_name = None
+            trigger_word = None
         if "LoRA Weight" in input_mode:
             model_strength = None
             clip_strength = None
@@ -3113,7 +3153,7 @@ class TSC_XYplot_LoRA_Plot:
         if "X: LoRA Batch" in input_mode:
             try:
                 x_value = self.lora_batch.xy_value(X_batch_path, X_subdirectories, X_batch_sort,
-                                                   model_strength, clip_strength, X_batch_count, lora_stack)[0][1]
+                                                   model_strength, clip_strength, X_trigger_words, X_batch_count, lora_stack)[0][1]
             except Exception as e:
                 print(f"{error('XY Plot Error:')} {e}")
                 return (None,)
@@ -3121,19 +3161,19 @@ class TSC_XYplot_LoRA_Plot:
         elif "X: Model Strength" in input_mode:
             x_floats = generate_floats(X_batch_count, X_first_value, X_last_value)
             x_type = "LoRA MStr"
-            x_value = [[(lora_name, x, clip_strength)] + lora_stack for x in x_floats]
+            x_value = [[(lora_name, x, clip_strength, trigger_word)] + lora_stack for x in x_floats]

         # Handling Y values
         y_floats = generate_floats(Y_batch_count, Y_first_value, Y_last_value)
         if "Y: LoRA Weight" in input_mode:
             y_type = "LoRA Wt"
-            y_value = [[(lora_name, y, y)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, y, y, trigger_word)] + lora_stack for y in y_floats]
         elif "Y: Model Strength" in input_mode:
             y_type = "LoRA MStr"
-            y_value = [[(lora_name, y, clip_strength)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, y, clip_strength, trigger_word)] + lora_stack for y in y_floats]
         elif "Y: Clip Strength" in input_mode:
             y_type = "LoRA CStr"
-            y_value = [[(lora_name, model_strength, y)] + lora_stack for y in y_floats]
+            y_value = [[(lora_name, model_strength, y, trigger_word)] + lora_stack for y in y_floats]

         return ((x_type, x_value), (y_type, y_value))
@@ -3957,12 +3997,6 @@ class TSC_ImageOverlay:
             samples = overlay_image.movedim(-1, 1)
             overlay_image = comfy.utils.common_upscale(samples, overlay_image_size[0], overlay_image_size[1], resize_method, False)
             overlay_image = overlay_image.movedim(1, -1)

-        # Handle batch dimension - use first image if overlay_image is a batch
-        if len(overlay_image.shape) == 4:
-            if overlay_image.shape[0] > 1:
-                print(f"{warning('Image Overlay Warning:')} Multiple overlay images detected ({overlay_image.shape[0]}), using only the first image.")
-            overlay_image = overlay_image[0]
-
         overlay_image = tensor2pil(overlay_image)

@@ -221,16 +221,12 @@ def encode_token_weights_l(model, token_weight_pairs):
     l_out, _ = model.clip_l.encode_token_weights(token_weight_pairs)
     return l_out, None

-def encode_token_weights(model, token_weight_pairs, encode_func):
-    # Keep CLIP options aligned with ComfyUI's core encode path so token
-    # tensors are created on the same device as the active text encoder pass.
-    model.cond_stage_model.reset_clip_options()
-    if model.layer_idx is not None:
-        model.cond_stage_model.set_clip_options({"layer": model.layer_idx})
-
-    model_management.load_model_gpu(model.patcher)
-    model.cond_stage_model.set_clip_options({"execution_device": model.patcher.load_device})
-    return encode_func(model.cond_stage_model, token_weight_pairs)
+def encode_token_weights(model, token_weight_pairs, encode_func):
+    if model.layer_idx is not None:
+        model.cond_stage_model.set_clip_options({"layer": model.layer_idx})
+
+    model_management.load_model_gpu(model.patcher)
+    return encode_func(model.cond_stage_model, token_weight_pairs)

def prepareXL(embs_l, embs_g, pooled, clip_balance):
    l_w = 1 - max(0, clip_balance - .5) * 2
@@ -1,7 +1,7 @@
 [project]
 name = "efficiency-nodes-comfyui"
 description = "Efficiency Nodes for ComfyUI Version 2.0 A collection of ComfyUI custom nodes to help streamline workflows and reduce total node count."
-version = "1.0.9"
+version = "1.0.8"
 license = { file = "LICENSE" }
 dependencies = ["clip-interrogator", "simpleeval"]

14
tsc_utils.py
@@ -354,7 +354,13 @@ def load_lora(lora_params, ckpt_name, id, cache=None, ckpt_cache=None, cache_ove
     if len(lora_params) == 0:
         return ckpt, clip

-    lora_name, strength_model, strength_clip = lora_params[0]
+    lora_tuple = lora_params[0]
+    # Support both 3-tuple (old) and 4-tuple (new with trigger words)
+    lora_name = lora_tuple[0]
+    strength_model = lora_tuple[1]
+    strength_clip = lora_tuple[2]
+    # Ignore trigger_word (index 3) if present - it's only for prompt modification

     if os.path.isabs(lora_name):
         lora_path = lora_name
     else:
@@ -375,7 +381,11 @@ def load_lora(lora_params, ckpt_name, id, cache=None, ckpt_cache=None, cache_ove
         return recursive_load_lora(lora_params[1:], lora_model, lora_clip, id, ckpt_cache, cache_overwrite, folder_paths)

     # Unpack lora parameters from the first element of the list for now
-    lora_name, strength_model, strength_clip = lora_params[0]
+    # Support both 3-tuple (old) and 4-tuple (new with trigger words)
+    lora_tuple = lora_params[0]
+    lora_name = lora_tuple[0]
+    strength_model = lora_tuple[1]
+    strength_clip = lora_tuple[2]
     ckpt, clip, _ = load_checkpoint(ckpt_name, id, cache=ckpt_cache)

     lora_model, lora_clip = recursive_load_lora(lora_params, ckpt, clip, id, ckpt_cache, cache_overwrite, folder_paths)