Compare commits


1 Commit

Author: Will Miao
SHA1: 68c5f79a67
Date: 2025-07-27 15:52:09 +08:00

Refactor showcase and modal components for improved functionality and performance

- Removed unused showcase toggle functionality from ModelCard and ModelModal.
- Simplified metadata panel handling in MediaUtils and MetadataPanel, transitioning to button-based visibility instead of hover.
- Enhanced showcase rendering logic in ShowcaseView to support new layout and navigation features.
- Updated event handling for media controls and thumbnail navigation to streamline user interactions.
- Improved example image import functionality and error handling.
- Cleaned up redundant code and comments across various components for better readability and maintainability.
574 changed files with 19470 additions and 371597 deletions


@@ -1,201 +0,0 @@
---
name: lora-manager-e2e
description: End-to-end testing and validation for LoRA Manager features. Use when performing automated E2E validation of LoRA Manager standalone mode, including starting/restarting the server, using Chrome DevTools MCP to interact with the web UI at http://127.0.0.1:8188/loras, and verifying frontend-to-backend functionality. Covers workflow validation, UI interaction testing, and integration testing between the standalone Python backend and the browser frontend.
---
# LoRA Manager E2E Testing
This skill provides workflows and utilities for end-to-end testing of LoRA Manager using Chrome DevTools MCP.
## Prerequisites
- LoRA Manager project cloned and dependencies installed (`pip install -r requirements.txt`)
- Chrome browser available for debugging
- Chrome DevTools MCP connected
## Quick Start Workflow
### 1. Start LoRA Manager Standalone
```bash
# Use the provided script to start the server
python .agents/skills/lora-manager-e2e/scripts/start_server.py --port 8188
```
Or manually:
```bash
cd /home/miao/workspace/ComfyUI/custom_nodes/ComfyUI-Lora-Manager
python standalone.py --port 8188
```
Wait for the server-ready message before proceeding.
### 2. Open Chrome Debug Mode
```bash
# Chrome with remote debugging on port 9222
google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-lora-manager http://127.0.0.1:8188/loras
```
### 3. Connect Chrome DevTools MCP
Ensure the MCP server is connected to Chrome at `http://localhost:9222`.
### 4. Navigate and Interact
Use Chrome DevTools MCP tools to:
- Take snapshots: `take_snapshot`
- Click elements: `click`
- Fill forms: `fill` or `fill_form`
- Evaluate scripts: `evaluate_script`
- Wait for elements: `wait_for`
## Common E2E Test Patterns
### Pattern: Full Page Load Verification
```python
# Navigate to LoRA list page
navigate_page(type="url", url="http://127.0.0.1:8188/loras")
# Wait for page to load
wait_for(text="LoRAs", timeout=10000)
# Take snapshot to verify UI state
snapshot = take_snapshot()
```
### Pattern: Restart Server for Configuration Changes
```python
# Stop the current server (if running) and start it with the new
# configuration by running, in a shell:
#   python .agents/skills/lora-manager-e2e/scripts/start_server.py --port 8188 --restart
# Then refresh the browser via MCP:
navigate_page(type="reload", ignoreCache=True)
wait_for(text="LoRAs", timeout=15000)
```
### Pattern: Verify Backend API via Frontend
```python
# Execute script in browser to call backend API
result = evaluate_script(function="""
    async () => {
        const response = await fetch('/loras/api/list');
        const data = await response.json();
        return { count: data.length, firstItem: data[0]?.name };
    }
""")
```
### Pattern: Form Submission Flow
```python
# Fill a form (e.g., search or filter)
fill_form(elements=[
    {"uid": "search-input", "value": "character"},
])
# Click submit button
click(uid="search-button")
# Wait for results
wait_for(text="Results", timeout=5000)
# Verify results via snapshot
snapshot = take_snapshot()
```
### Pattern: Modal Dialog Interaction
```python
# Open modal (e.g., add LoRA)
click(uid="add-lora-button")
# Wait for modal to appear
wait_for(text="Add LoRA", timeout=3000)
# Fill modal form
fill_form(elements=[
    {"uid": "lora-name", "value": "Test LoRA"},
    {"uid": "lora-path", "value": "/path/to/lora.safetensors"},
])
# Submit
click(uid="modal-submit-button")
# Wait for success message or close
wait_for(text="Success", timeout=5000)
```
## Available Scripts
### scripts/start_server.py
Starts or restarts the LoRA Manager standalone server.
```bash
python scripts/start_server.py [--port PORT] [--restart] [--wait]
```
Options:
- `--port`: Server port (default: 8188)
- `--restart`: Kill existing server before starting
- `--wait`: Wait for server to be ready before exiting
### scripts/wait_for_server.py
Polls server until ready or timeout.
```bash
python scripts/wait_for_server.py [--port PORT] [--timeout SECONDS]
```
## Test Scenarios Reference
See [references/test-scenarios.md](references/test-scenarios.md) for detailed test scenarios including:
- LoRA list display and filtering
- Model metadata editing
- Recipe creation and management
- Settings configuration
- Import/export functionality
## Network Request Verification
Use `list_network_requests` and `get_network_request` to verify API calls:
```python
# List recent XHR/fetch requests
requests = list_network_requests(resourceTypes=["xhr", "fetch"])
# Get details of specific request
details = get_network_request(reqid=123)
```
## Console Message Monitoring
```python
# Check for errors or warnings
messages = list_console_messages(types=["error", "warn"])
```
## Performance Testing
```python
# Start performance trace
performance_start_trace(reload=True, autoStop=False)
# Perform actions...
# Stop and analyze
results = performance_stop_trace()
```
## Cleanup
Always ensure proper cleanup after tests:
1. Stop the standalone server
2. Close browser pages (keep at least one open)
3. Clear temporary data if needed
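A minimal cleanup sketch, assuming the server was started by `scripts/start_server.py` and that `lsof` is available (the same assumption that script's `kill_server()` helper makes):
```python
import os
import signal
import subprocess

def stop_server(port: int = 8188) -> None:
    """Terminate whatever is listening on the test port."""
    result = subprocess.run(
        ["lsof", "-ti", f":{port}"], capture_output=True, text=True, check=False
    )
    for pid in result.stdout.split():
        try:
            os.kill(int(pid), signal.SIGTERM)  # graceful shutdown first
        except ProcessLookupError:
            pass  # process already exited

stop_server()
# Then, via MCP, close any extra pages but keep one open, e.g.:
#   close_page(pageId=1)
```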


@@ -1,324 +0,0 @@
# Chrome DevTools MCP Cheatsheet for LoRA Manager
Quick reference for common MCP commands used in LoRA Manager E2E testing.
## Navigation
```python
# Navigate to LoRA list page
navigate_page(type="url", url="http://127.0.0.1:8188/loras")
# Reload page with cache clear
navigate_page(type="reload", ignoreCache=True)
# Go back/forward
navigate_page(type="back")
navigate_page(type="forward")
```
## Waiting
```python
# Wait for text to appear
wait_for(text="LoRAs", timeout=10000)
# Wait for specific element (via evaluate_script)
evaluate_script(function="""
    () => {
        return new Promise((resolve) => {
            const check = () => {
                if (document.querySelector('.lora-card')) {
                    resolve(true);
                } else {
                    setTimeout(check, 100);
                }
            };
            check();
        });
    }
""")
```
## Taking Snapshots
```python
# Full page snapshot
snapshot = take_snapshot()
# Verbose snapshot (more details)
snapshot = take_snapshot(verbose=True)
# Save to file
take_snapshot(filePath="test-snapshots/page-load.json")
```
## Element Interaction
```python
# Click element
click(uid="element-uid-from-snapshot")
# Double click
click(uid="element-uid", dblClick=True)
# Fill input
fill(uid="search-input", value="test query")
# Fill multiple inputs
fill_form(elements=[
    {"uid": "input-1", "value": "value 1"},
    {"uid": "input-2", "value": "value 2"},
])
# Hover
hover(uid="lora-card-1")
# Upload file
upload_file(uid="file-input", filePath="/path/to/file.safetensors")
```
## Keyboard Input
```python
# Press key
press_key(key="Enter")
press_key(key="Escape")
press_key(key="Tab")
# Keyboard shortcuts
press_key(key="Control+A") # Select all
press_key(key="Control+F") # Find
```
## JavaScript Evaluation
```python
# Simple evaluation
result = evaluate_script(function="() => document.title")
# Async evaluation
result = evaluate_script(function="""
    async () => {
        const response = await fetch('/loras/api/list');
        return await response.json();
    }
""")
# Check element existence
exists = evaluate_script(function="""
    () => document.querySelector('.lora-card') !== null
""")
# Get element count
count = evaluate_script(function="""
    () => document.querySelectorAll('.lora-card').length
""")
```
## Network Monitoring
```python
# List all network requests
requests = list_network_requests()
# Filter by resource type
xhr_requests = list_network_requests(resourceTypes=["xhr", "fetch"])
# Get specific request details
details = get_network_request(reqid=123)
# Include preserved requests from previous navigations
all_requests = list_network_requests(includePreservedRequests=True)
```
## Console Monitoring
```python
# List all console messages
messages = list_console_messages()
# Filter by type
errors = list_console_messages(types=["error", "warn"])
# Include preserved messages
all_messages = list_console_messages(includePreservedMessages=True)
# Get specific message
details = get_console_message(msgid=1)
```
## Performance Testing
```python
# Start trace with page reload
performance_start_trace(reload=True, autoStop=False)
# Start trace without reload
performance_start_trace(reload=False, autoStop=True, filePath="trace.json.gz")
# Stop trace
results = performance_stop_trace()
# Stop and save
performance_stop_trace(filePath="trace-results.json.gz")
# Analyze specific insight
insight = performance_analyze_insight(
    insightSetId="results.insightSets[0].id",
    insightName="LCPBreakdown"
)
```
## Page Management
```python
# List open pages
pages = list_pages()
# Select a page
select_page(pageId=0, bringToFront=True)
# Create new page
new_page(url="http://127.0.0.1:8188/loras")
# Close page (keep at least one open!)
close_page(pageId=1)
# Resize page
resize_page(width=1920, height=1080)
```
## Screenshots
```python
# Full page screenshot
take_screenshot(fullPage=True)
# Viewport screenshot
take_screenshot()
# Element screenshot
take_screenshot(uid="lora-card-1")
# Save to file
take_screenshot(filePath="screenshots/page.png", format="png")
# JPEG with quality
take_screenshot(filePath="screenshots/page.jpg", format="jpeg", quality=90)
```
## Dialog Handling
```python
# Accept dialog
handle_dialog(action="accept")
# Accept with text input
handle_dialog(action="accept", promptText="user input")
# Dismiss dialog
handle_dialog(action="dismiss")
```
## Device Emulation
```python
# Mobile viewport
emulate(viewport={"width": 375, "height": 667, "isMobile": True, "hasTouch": True})
# Tablet viewport
emulate(viewport={"width": 768, "height": 1024, "isMobile": True, "hasTouch": True})
# Desktop viewport
emulate(viewport={"width": 1920, "height": 1080})
# Network throttling
emulate(networkConditions="Slow 3G")
emulate(networkConditions="Fast 4G")
# CPU throttling
emulate(cpuThrottlingRate=4) # 4x slowdown
# Geolocation
emulate(geolocation={"latitude": 37.7749, "longitude": -122.4194})
# User agent
emulate(userAgent="Mozilla/5.0 (Custom)")
# Reset emulation
emulate(viewport=None, networkConditions="No emulation", userAgent=None)
```
## Drag and Drop
```python
# Drag element to another
drag(from_uid="draggable-item", to_uid="drop-zone")
```
## Common LoRA Manager Test Patterns
### Verify LoRA Cards Loaded
```python
navigate_page(type="url", url="http://127.0.0.1:8188/loras")
wait_for(text="LoRAs", timeout=10000)
# Check if cards loaded
result = evaluate_script(function="""
    () => {
        const cards = document.querySelectorAll('.lora-card');
        return {
            count: cards.length,
            hasData: cards.length > 0
        };
    }
""")
```
### Search and Verify Results
```python
fill(uid="search-input", value="character")
press_key(key="Enter")
wait_for(timeout=2000) # Wait for debounce
# Check results
result = evaluate_script(function="""
    () => {
        const cards = document.querySelectorAll('.lora-card');
        const names = Array.from(cards).map(c => c.dataset.name || c.textContent);
        return { count: cards.length, names };
    }
""")
```
### Check API Response
```python
# Trigger API call
evaluate_script(function="""
    () => window.loraApiCallPromise = fetch('/loras/api/list').then(r => r.json())
""")
# Wait and get result
import time
time.sleep(1)
result = evaluate_script(function="""
    async () => await window.loraApiCallPromise
""")
```
### Monitor Console for Errors
```python
# Before test: clear console (navigate reloads)
navigate_page(type="reload")
# ... perform actions ...
# Check for errors
errors = list_console_messages(types=["error"])
assert len(errors) == 0, f"Console errors: {errors}"
```


@@ -1,272 +0,0 @@
# LoRA Manager E2E Test Scenarios
This document provides detailed test scenarios for end-to-end validation of LoRA Manager features.
## Table of Contents
1. [LoRA List Page](#lora-list-page)
2. [Model Details](#model-details)
3. [Recipes](#recipes)
4. [Settings](#settings)
5. [Import/Export](#importexport)
---
## LoRA List Page
### Scenario: Page Load and Display
**Objective**: Verify the LoRA list page loads correctly and displays models.
**Steps**:
1. Navigate to `http://127.0.0.1:8188/loras`
2. Wait for page title "LoRAs" to appear
3. Take snapshot to verify:
- Header with "LoRAs" title is visible
- Search/filter controls are present
- Grid/list view toggle exists
- LoRA cards are displayed (if models exist)
- Pagination controls (if applicable)
**Expected Result**: Page loads without errors, UI elements are present.
### Scenario: Search Functionality
**Objective**: Verify search filters LoRA models correctly.
**Steps**:
1. Ensure at least one LoRA exists with a known name (e.g., "test-character")
2. Navigate to LoRA list page
3. Enter search term in search box: "test"
4. Press Enter or click search button
5. Wait for results to update
**Expected Result**: Only LoRAs matching the search term are displayed.
**Verification Script**:
```python
# After search, verify filtered results
evaluate_script(function="""
    () => {
        const cards = document.querySelectorAll('.lora-card');
        const names = Array.from(cards).map(c => c.dataset.name);
        return { count: cards.length, names };
    }
""")
```
### Scenario: Filter by Tags
**Objective**: Verify tag filtering works correctly.
**Steps**:
1. Navigate to LoRA list page
2. Click on a tag (e.g., "character", "style")
3. Wait for filtered results
**Expected Result**: Only LoRAs with the selected tag are displayed.
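A possible verification sketch in this document's MCP call notation; the tag uid and the `data-tags` attribute are assumptions about the page markup, to be confirmed from a fresh `take_snapshot()`:
```python
click(uid="tag-character")  # hypothetical uid for a tag chip, read from a snapshot
wait_for(text="LoRAs", timeout=5000)
# Check that every visible card carries the selected tag (data attribute assumed)
result = evaluate_script(function="""
    () => {
        const cards = Array.from(document.querySelectorAll('.lora-card'));
        return cards.length > 0 && cards.every(c => (c.dataset.tags || '').includes('character'));
    }
""")
```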
### Scenario: View Mode Toggle
**Objective**: Verify grid/list view toggle works.
**Steps**:
1. Navigate to LoRA list page
2. Click list view button
3. Verify list layout
4. Click grid view button
5. Verify grid layout
**Expected Result**: View mode changes correctly, layout updates.
---
## Model Details
### Scenario: Open Model Details
**Objective**: Verify clicking a LoRA opens its details.
**Steps**:
1. Navigate to LoRA list page
2. Click on a LoRA card
3. Wait for details panel/modal to open
**Expected Result**: Details panel shows:
- Model name
- Preview image
- Metadata (trigger words, tags, etc.)
- Action buttons (edit, delete, etc.)
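One way to script this scenario with the MCP tools above (a sketch; the card uid and the "Trigger Words" label are assumptions, to be read from an actual snapshot):
```python
navigate_page(type="url", url="http://127.0.0.1:8188/loras")
wait_for(text="LoRAs", timeout=10000)
snapshot = take_snapshot()            # find a card uid in the snapshot output
click(uid="lora-card-1")              # hypothetical uid
wait_for(text="Trigger Words", timeout=5000)  # assumed label in the details panel
details = take_snapshot()             # verify name, preview, metadata, buttons
```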
### Scenario: Edit Model Metadata
**Objective**: Verify metadata editing works end-to-end.
**Steps**:
1. Open a LoRA's details
2. Click "Edit" button
3. Modify trigger words field
4. Add/remove tags
5. Save changes
6. Refresh page
7. Reopen the same LoRA
**Expected Result**: Changes persist after refresh.
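A persistence-check sketch under the same caveats (all uids and the `.trigger-words` selector are hypothetical):
```python
click(uid="edit-button")
fill(uid="trigger-words-input", value="e2e trigger")
click(uid="save-button")
wait_for(text="Saved", timeout=5000)          # assumed success indicator
# Reload, reopen the same model, and confirm the edit survived
navigate_page(type="reload", ignoreCache=True)
wait_for(text="LoRAs", timeout=10000)
click(uid="lora-card-1")
value = evaluate_script(function="""
    () => document.querySelector('.trigger-words')?.textContent || ''
""")
assert "e2e trigger" in value
```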
### Scenario: Delete Model
**Objective**: Verify model deletion works.
**Steps**:
1. Open a LoRA's details
2. Click "Delete" button
3. Confirm deletion in dialog
4. Wait for removal
**Expected Result**: Model removed from list, success message shown.
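A deletion sketch; whether the confirmation is a native dialog or an in-page modal depends on the UI, so both variants are shown (uids hypothetical):
```python
click(uid="delete-button")
# Native confirm() dialog: accept it via MCP
handle_dialog(action="accept")
# In-page modal instead? Click its confirm button:
#   click(uid="confirm-delete-button")
wait_for(text="deleted", timeout=5000)  # assumed success message
```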
---
## Recipes
### Scenario: Recipe List Display
**Objective**: Verify recipes page loads and displays recipes.
**Steps**:
1. Navigate to `http://127.0.0.1:8188/recipes`
2. Wait for "Recipes" title
3. Take snapshot
**Expected Result**: Recipe list displayed with cards/items.
### Scenario: Create New Recipe
**Objective**: Verify recipe creation workflow.
**Steps**:
1. Navigate to recipes page
2. Click "New Recipe" button
3. Fill recipe form:
- Name: "Test Recipe"
- Description: "E2E test recipe"
- Add LoRA models
4. Save recipe
5. Verify recipe appears in list
**Expected Result**: New recipe created and displayed.
### Scenario: Apply Recipe
**Objective**: Verify applying a recipe to ComfyUI.
**Steps**:
1. Open a recipe
2. Click "Apply" or "Load in ComfyUI"
3. Verify action completes
**Expected Result**: Recipe applied successfully.
---
## Settings
### Scenario: Settings Page Load
**Objective**: Verify settings page displays correctly.
**Steps**:
1. Navigate to `http://127.0.0.1:8188/settings`
2. Wait for "Settings" title
3. Take snapshot
**Expected Result**: Settings form with various options displayed.
### Scenario: Change Setting and Restart
**Objective**: Verify settings persist after restart.
**Steps**:
1. Navigate to settings page
2. Change a setting (e.g., default view mode)
3. Save settings
4. Restart server: `python scripts/start_server.py --restart --wait`
5. Refresh browser page
6. Navigate to settings
**Expected Result**: Changed setting value persists.
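A sketch combining the shell-side restart with browser-side verification; the uids and the select id are hypothetical:
```python
import subprocess
import sys

# Change the setting in the UI first
fill(uid="default-view-select", value="list")
click(uid="save-settings-button")

# Restart the standalone server out-of-band
subprocess.run(
    [sys.executable, "scripts/start_server.py", "--restart", "--wait"],
    check=True,
)

# Back in the browser: hard reload, then re-read the setting
navigate_page(type="reload", ignoreCache=True)
navigate_page(type="url", url="http://127.0.0.1:8188/settings")
value = evaluate_script(function="""
    () => document.querySelector('#default-view-select')?.value
""")
assert value == "list"
```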
---
## Import/Export
### Scenario: Export Models List
**Objective**: Verify export functionality.
**Steps**:
1. Navigate to LoRA list
2. Click "Export" button
3. Select format (JSON/CSV)
4. Download file
**Expected Result**: File downloaded with correct data.
### Scenario: Import Models
**Objective**: Verify import functionality.
**Steps**:
1. Prepare import file
2. Navigate to import page
3. Upload file
4. Verify import results
**Expected Result**: Models imported successfully, confirmation shown.
---
## API Integration Tests
### Scenario: Verify API Endpoints
**Objective**: Verify backend API responds correctly.
**Test via browser console**:
```javascript
// List LoRAs
fetch('/loras/api/list').then(r => r.json()).then(console.log)
// Get LoRA details
fetch('/loras/api/detail/<id>').then(r => r.json()).then(console.log)
// Search LoRAs
fetch('/loras/api/search?q=test').then(r => r.json()).then(console.log)
```
**Expected Result**: APIs return valid JSON with expected structure.
---
## Console Error Monitoring
During all tests, monitor browser console for errors:
```python
# Check for JavaScript errors
messages = list_console_messages(types=["error"])
assert len(messages) == 0, f"Console errors found: {messages}"
```
## Network Request Verification
Verify key API calls are made:
```python
# List XHR requests
requests = list_network_requests(resourceTypes=["xhr", "fetch"])
# Look for specific endpoints
lora_list_requests = [r for r in requests if "/api/list" in r.get("url", "")]
assert len(lora_list_requests) > 0, "LoRA list API not called"
```


@@ -1,193 +0,0 @@
#!/usr/bin/env python3
"""
Example E2E test demonstrating LoRA Manager testing workflow.

This script shows how to:
1. Start the standalone server
2. Use Chrome DevTools MCP to interact with the UI
3. Verify functionality end-to-end

Note: This is a template. Actual execution requires Chrome DevTools MCP.
"""
import subprocess
import sys


def run_test():
    """Run example E2E test flow."""
    print("=" * 60)
    print("LoRA Manager E2E Test Example")
    print("=" * 60)

    # Step 1: Start server
    print("\n[1/5] Starting LoRA Manager standalone server...")
    result = subprocess.run(
        [sys.executable, "start_server.py", "--port", "8188", "--wait", "--timeout", "30"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"Failed to start server: {result.stderr}")
        return 1
    print("Server ready!")

    # Step 2: Open Chrome (manual step - show command)
    print("\n[2/5] Open Chrome with debug mode:")
    print("google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-lora-manager http://127.0.0.1:8188/loras")
    print("(In actual test, this would be automated via MCP)")

    # Step 3: Navigate and verify page load
    print("\n[3/5] Page Load Verification:")
    print("""
    MCP Commands to execute:
    1. navigate_page(type="url", url="http://127.0.0.1:8188/loras")
    2. wait_for(text="LoRAs", timeout=10000)
    3. snapshot = take_snapshot()
    """)

    # Step 4: Test search functionality
    # Single-quoted triple strings below, so the embedded """ in the example
    # code does not terminate the outer string.
    print("\n[4/5] Search Functionality Test:")
    print('''
    MCP Commands to execute:
    1. fill(uid="search-input", value="test")
    2. press_key(key="Enter")
    3. wait_for(text="Results", timeout=5000)
    4. result = evaluate_script(function="""
           () => {
               const cards = document.querySelectorAll('.lora-card');
               return { count: cards.length };
           }
       """)
    ''')

    # Step 5: Verify API
    print("\n[5/5] API Verification:")
    print('''
    MCP Commands to execute:
    1. api_result = evaluate_script(function="""
           async () => {
               const response = await fetch('/loras/api/list');
               const data = await response.json();
               return { count: data.length, status: response.status };
           }
       """)
    2. Verify api_result['status'] == 200
    ''')

    print("\n" + "=" * 60)
    print("Test flow completed!")
    print("=" * 60)
    return 0


def example_restart_flow():
    """Example: Testing configuration change that requires restart."""
    print("\n" + "=" * 60)
    print("Example: Server Restart Flow")
    print("=" * 60)
    print("""
    Scenario: Change setting and verify after restart
    Steps:
    1. Navigate to settings page
       - navigate_page(type="url", url="http://127.0.0.1:8188/settings")
    2. Change a setting (e.g., theme)
       - fill(uid="theme-select", value="dark")
       - click(uid="save-settings-button")
    3. Restart server
       - subprocess.run([python, "start_server.py", "--restart", "--wait"])
    4. Refresh browser
       - navigate_page(type="reload", ignoreCache=True)
       - wait_for(text="LoRAs", timeout=15000)
    5. Verify setting persisted
       - navigate_page(type="url", url="http://127.0.0.1:8188/settings")
       - theme = evaluate_script(function="() => document.querySelector('#theme-select').value")
       - assert theme == "dark"
    """)


def example_modal_interaction():
    """Example: Testing modal dialog interaction."""
    print("\n" + "=" * 60)
    print("Example: Modal Dialog Interaction")
    print("=" * 60)
    print("""
    Scenario: Add new LoRA via modal
    Steps:
    1. Open modal
       - click(uid="add-lora-button")
       - wait_for(text="Add LoRA", timeout=3000)
    2. Fill form
       - fill_form(elements=[
             {"uid": "lora-name", "value": "Test Character"},
             {"uid": "lora-path", "value": "/models/test.safetensors"},
         ])
    3. Submit
       - click(uid="modal-submit-button")
    4. Verify success
       - wait_for(text="Successfully added", timeout=5000)
       - snapshot = take_snapshot()
    """)


def example_network_monitoring():
    """Example: Network request monitoring."""
    print("\n" + "=" * 60)
    print("Example: Network Request Monitoring")
    print("=" * 60)
    print("""
    Scenario: Verify API calls during user interaction
    Steps:
    1. Clear network log (implicit on navigation)
       - navigate_page(type="url", url="http://127.0.0.1:8188/loras")
    2. Perform action that triggers API call
       - fill(uid="search-input", value="character")
       - press_key(key="Enter")
    3. List network requests
       - requests = list_network_requests(resourceTypes=["xhr", "fetch"])
    4. Find search API call
       - search_requests = [r for r in requests if "/api/search" in r.get("url", "")]
       - assert len(search_requests) > 0, "Search API was not called"
    5. Get request details
       - if search_requests:
             details = get_network_request(reqid=search_requests[0]["reqid"])
       - Verify request method, response status, etc.
    """)


if __name__ == "__main__":
    print("LoRA Manager E2E Test Examples\n")
    print("This script demonstrates E2E testing patterns.\n")
    print("Note: Actual execution requires Chrome DevTools MCP connection.\n")
    run_test()
    example_restart_flow()
    example_modal_interaction()
    example_network_monitoring()
    print("\n" + "=" * 60)
    print("All examples shown!")
    print("=" * 60)


@@ -1,169 +0,0 @@
#!/usr/bin/env python3
"""
Start or restart LoRA Manager standalone server for E2E testing.
"""
import argparse
import subprocess
import sys
import time
import socket
import signal
import os


def find_server_process(port: int) -> list[int]:
    """Find PIDs of processes listening on the given port."""
    try:
        result = subprocess.run(
            ["lsof", "-ti", f":{port}"],
            capture_output=True,
            text=True,
            check=False,
        )
        if result.returncode == 0 and result.stdout.strip():
            return [int(pid) for pid in result.stdout.strip().split("\n") if pid]
    except FileNotFoundError:
        # lsof not available, try netstat
        try:
            result = subprocess.run(
                ["netstat", "-tlnp"],
                capture_output=True,
                text=True,
                check=False,
            )
            pids = []
            for line in result.stdout.split("\n"):
                if f":{port}" in line:
                    parts = line.split()
                    for part in parts:
                        if "/" in part:
                            try:
                                pid = int(part.split("/")[0])
                                pids.append(pid)
                            except ValueError:
                                pass
            return pids
        except FileNotFoundError:
            pass
    return []


def kill_server(port: int) -> None:
    """Kill processes using the specified port."""
    pids = find_server_process(port)
    for pid in pids:
        try:
            os.kill(pid, signal.SIGTERM)
            print(f"Sent SIGTERM to process {pid}")
        except ProcessLookupError:
            pass
    # Wait for processes to terminate
    time.sleep(1)
    # Force kill if still running
    pids = find_server_process(port)
    for pid in pids:
        try:
            os.kill(pid, signal.SIGKILL)
            print(f"Sent SIGKILL to process {pid}")
        except ProcessLookupError:
            pass


def is_server_ready(port: int, timeout: float = 0.5) -> bool:
    """Check if server is accepting connections."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except (socket.timeout, ConnectionRefusedError, OSError):
        return False


def wait_for_server(port: int, timeout: int = 30) -> bool:
    """Wait for server to become ready."""
    start = time.time()
    while time.time() - start < timeout:
        if is_server_ready(port):
            return True
        time.sleep(0.5)
    return False


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Start LoRA Manager standalone server for E2E testing"
    )
    parser.add_argument(
        "--port",
        type=int,
        default=8188,
        help="Server port (default: 8188)",
    )
    parser.add_argument(
        "--restart",
        action="store_true",
        help="Kill existing server before starting",
    )
    parser.add_argument(
        "--wait",
        action="store_true",
        help="Wait for server to be ready before exiting",
    )
    parser.add_argument(
        "--timeout",
        type=int,
        default=30,
        help="Timeout for waiting (default: 30)",
    )
    args = parser.parse_args()

    # Get project root (parent of .agents directory)
    script_dir = os.path.dirname(os.path.abspath(__file__))
    skill_dir = os.path.dirname(script_dir)
    project_root = os.path.dirname(os.path.dirname(os.path.dirname(skill_dir)))

    # Restart if requested
    if args.restart:
        print(f"Killing existing server on port {args.port}...")
        kill_server(args.port)
        time.sleep(1)

    # Check if already running
    if is_server_ready(args.port):
        print(f"Server already running on port {args.port}")
        return 0

    # Start server
    print(f"Starting LoRA Manager standalone server on port {args.port}...")
    cmd = [sys.executable, "standalone.py", "--port", str(args.port)]

    # Start in background
    process = subprocess.Popen(
        cmd,
        cwd=project_root,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        start_new_session=True,
    )
    print(f"Server process started with PID {process.pid}")

    # Wait for ready if requested
    if args.wait:
        print(f"Waiting for server to be ready (timeout: {args.timeout}s)...")
        if wait_for_server(args.port, args.timeout):
            print(f"Server ready at http://127.0.0.1:{args.port}/loras")
            return 0
        else:
            print("Timeout waiting for server")
            return 1

    print(f"Server starting at http://127.0.0.1:{args.port}/loras")
    return 0


if __name__ == "__main__":
    sys.exit(main())


@@ -1,61 +0,0 @@
#!/usr/bin/env python3
"""
Wait for LoRA Manager server to become ready.
"""
import argparse
import socket
import sys
import time


def is_server_ready(port: int, timeout: float = 0.5) -> bool:
    """Check if server is accepting connections."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except (socket.timeout, ConnectionRefusedError, OSError):
        return False


def wait_for_server(port: int, timeout: int = 30) -> bool:
    """Wait for server to become ready."""
    start = time.time()
    while time.time() - start < timeout:
        if is_server_ready(port):
            return True
        time.sleep(0.5)
    return False


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Wait for LoRA Manager server to become ready"
    )
    parser.add_argument(
        "--port",
        type=int,
        default=8188,
        help="Server port (default: 8188)",
    )
    parser.add_argument(
        "--timeout",
        type=int,
        default=30,
        help="Timeout in seconds (default: 30)",
    )
    args = parser.parse_args()

    print(f"Waiting for server on port {args.port} (timeout: {args.timeout}s)...")
    if wait_for_server(args.port, args.timeout):
        print(f"Server ready at http://127.0.0.1:{args.port}/loras")
        return 0
    else:
        print(f"Timeout: Server not ready after {args.timeout}s")
        return 1


if __name__ == "__main__":
    sys.exit(main())

.github/FUNDING.yml (vendored) - 4 changed lines

@@ -1,5 +1,5 @@
 # These are supported funding model platforms
-ko_fi: pixelpawsai
-patreon: PixelPawsAI
-custom: ['paypal.me/pixelpawsai', 'https://afdian.com/a/pixelpawsai']
+ko_fi: pixelpawsai
+custom: ['paypal.me/pixelpawsai']


@@ -1 +0,0 @@
Always use English for comments.


@@ -1,93 +0,0 @@
name: Backend Tests

on:
  push:
    branches:
      - main
      - master
    paths:
      - 'py/**'
      - 'standalone.py'
      - 'tests/**'
      - 'requirements.txt'
      - 'requirements-dev.txt'
      - 'pyproject.toml'
      - 'pytest.ini'
      - '.github/workflows/backend-tests.yml'
  pull_request:
    paths:
      - 'py/**'
      - 'standalone.py'
      - 'tests/**'
      - 'requirements.txt'
      - 'requirements-dev.txt'
      - 'pyproject.toml'
      - 'pytest.ini'
      - '.github/workflows/backend-tests.yml'

jobs:
  pytest:
    name: Run pytest with coverage
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Python 3.11
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'
          cache-dependency-path: |
            requirements.txt
            requirements-dev.txt

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-dev.txt

      - name: Verify symlink support
        run: |
          python - <<'PY'
          import os
          import pathlib
          import tempfile
          root = pathlib.Path(tempfile.mkdtemp(prefix="lm-symlink-check-"))
          target = root / "target"
          target.mkdir()
          link = root / "link"
          try:
              link.symlink_to(target, target_is_directory=True)
          except OSError as exc:
              raise SystemExit(f"Failed to create directory symlink in CI: {exc}")
          is_link = os.path.islink(link)
          is_dir = os.path.isdir(link)
          realpath = os.path.realpath(link)
          print(f"islink={is_link} isdir={is_dir} realpath={realpath}")
          if not (is_link and is_dir and realpath == str(target)):
              raise SystemExit("Directory symlink is not functioning correctly in CI; aborting.")
          PY

      - name: Run pytest with coverage
        env:
          COVERAGE_FILE: coverage/backend/.coverage
        run: |
          mkdir -p coverage/backend
          python -m pytest \
            --cov=py \
            --cov=standalone \
            --cov-report=term-missing \
            --cov-report=xml:coverage/backend/coverage.xml \
            --cov-report=html:coverage/backend/html \
            --cov-report=json:coverage/backend/coverage.json

      - name: Upload coverage artifact
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: backend-coverage
          path: coverage/backend
          if-no-files-found: warn


@@ -1,52 +0,0 @@
name: Frontend Tests

on:
  push:
    branches:
      - main
      - master
    paths:
      - 'package.json'
      - 'package-lock.json'
      - 'vitest.config.js'
      - 'tests/frontend/**'
      - 'static/js/**'
      - 'scripts/run_frontend_coverage.js'
      - '.github/workflows/frontend-tests.yml'
  pull_request:
    paths:
      - 'package.json'
      - 'package-lock.json'
      - 'vitest.config.js'
      - 'tests/frontend/**'
      - 'static/js/**'
      - 'scripts/run_frontend_coverage.js'
      - '.github/workflows/frontend-tests.yml'

jobs:
  vitest:
    name: Run Vitest with coverage
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Use Node.js 20
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run frontend tests with coverage
        run: npm run test:coverage

      - name: Upload coverage artifact
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: frontend-coverage
          path: coverage/frontend
          if-no-files-found: warn

.gitignore (vendored) - 14 changed lines

@@ -1,21 +1,7 @@
__pycache__/
.pytest_cache/
settings.json
path_mappings.yaml
output/*
py/run_test.py
.vscode/
cache/
civitai/
node_modules/
coverage/
.coverage
model_cache/
# agent
.opencode/
# Vue widgets development cache (but keep build output)
vue-widgets/node_modules/
vue-widgets/.vite/
vue-widgets/dist/

AGENTS.md - 192 changed lines

@@ -1,192 +0,0 @@
# AGENTS.md
This file provides guidance for agentic coding assistants working in this repository.
## Development Commands
### Backend Development
```bash
# Install dependencies
pip install -r requirements.txt
pip install -r requirements-dev.txt
# Run standalone server (port 8188 by default)
python standalone.py --port 8188
# Run all backend tests
pytest
# Run specific test file
pytest tests/test_recipes.py
# Run specific test function
pytest tests/test_recipes.py::test_function_name
# Run backend tests with coverage
COVERAGE_FILE=coverage/backend/.coverage pytest \
--cov=py \
--cov=standalone \
--cov-report=term-missing \
--cov-report=html:coverage/backend/html \
--cov-report=xml:coverage/backend/coverage.xml \
--cov-report=json:coverage/backend/coverage.json
```
### Frontend Development
```bash
# Install frontend dependencies
npm install
# Run frontend tests
npm test
# Run frontend tests in watch mode
npm run test:watch
# Run frontend tests with coverage
npm run test:coverage
```
## Python Code Style
### Imports
- Use `from __future__ import annotations` for forward references in type hints
- Group imports: standard library, third-party, local (separated by blank lines)
- Use package-relative imports within the `py/` package: `from ..services import X`
- Mock ComfyUI dependencies in tests using `tests/conftest.py` patterns
### Formatting & Types
- PEP 8 with 4-space indentation
- Type hints required for function signatures and class attributes
- Use `TYPE_CHECKING` guard for type-checking-only imports
- Prefer dataclasses for simple data containers
- Use `Optional[T]` for nullable types, `Union[T, None]` only when necessary
### Naming Conventions
- Files: `snake_case.py` (e.g., `model_scanner.py`, `lora_service.py`)
- Classes: `PascalCase` (e.g., `ModelScanner`, `LoraService`)
- Functions/variables: `snake_case` (e.g., `get_instance`, `model_type`)
- Constants: `UPPER_SNAKE_CASE` (e.g., `VALID_LORA_TYPES`)
- Private members: `_single_underscore` (protected), `__double_underscore` (name-mangled)
### Error Handling
- Use `logging.getLogger(__name__)` for module-level loggers
- Define custom exceptions in `py/services/errors.py`
- Use `asyncio.Lock` for thread-safe singleton patterns
- Raise specific exceptions with descriptive messages
- Log errors at appropriate levels (DEBUG, INFO, WARNING, ERROR, CRITICAL)
### Async Patterns
- Use `async def` for I/O-bound operations
- Mark async tests with `@pytest.mark.asyncio`
- Use `async with` for context managers
- Singleton pattern with class-level locks: see `ModelScanner.get_instance()` (a minimal sketch follows this list)
- Use `aiohttp.web.Response` for HTTP responses
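A minimal sketch of the lock-guarded singleton shape referenced above (illustrative only, not the actual `ModelScanner` code):
```python
import asyncio

class ScannerSketch:
    _instance = None
    _lock = asyncio.Lock()

    @classmethod
    async def get_instance(cls) -> "ScannerSketch":
        # The class-level asyncio.Lock serializes first-time construction
        # so concurrent callers cannot create two instances.
        async with cls._lock:
            if cls._instance is None:
                cls._instance = cls()
        return cls._instance
```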
### Testing Patterns
- Use `pytest` with `--import-mode=importlib`
- Fixtures in `tests/conftest.py` handle ComfyUI mocking
- Use `@pytest.mark.no_settings_dir_isolation` for tests needing real paths
- Test files: `tests/test_*.py`
- Use `tmp_path_factory` for temporary directory isolation
## JavaScript Code Style
### Imports & Modules
- ES modules with `import`/`export`
- Use `import { app } from "../../scripts/app.js"` for ComfyUI integration
- Export named functions/classes: `export function foo() {}`
- Widget files use `*_widget.js` suffix
### Naming & Formatting
- camelCase for functions, variables, object properties
- PascalCase for classes/constructors
- Constants: `UPPER_SNAKE_CASE` (e.g., `CONVERTED_TYPE`)
- Files: `snake_case.js` or `kebab-case.js`
- 2-space indentation preferred (follow existing file conventions)
### Widget Development
- Use `app.registerExtension()` to register ComfyUI extensions
- Use `node.addDOMWidget(name, type, element, options)` for custom widgets
- Event handlers attached via `addEventListener` or widget callbacks
- See `web/comfyui/utils.js` for shared utilities
## Architecture Patterns
### Service Layer
- Use `ServiceRegistry` singleton for dependency injection
- Services follow singleton pattern via `get_instance()` class method
- Separate scanners (discovery) from services (business logic)
- Handlers in `py/routes/handlers/` implement route logic
### Model Types
- BaseModelService is abstract base for LoRA, Checkpoint, Embedding services
- ModelScanner provides file discovery and hash-based deduplication
- Persistent cache in SQLite via `PersistentModelCache`
- Metadata sync from CivitAI/CivArchive via `MetadataSyncService`
### Routes & Handlers
- Route registrars organize endpoints by domain: `ModelRouteRegistrar`, etc.
- Handlers are pure functions taking dependencies as parameters
- Use `WebSocketManager` for real-time progress updates
- Return `aiohttp.web.json_response` or `web.Response`
### Recipe System
- Base metadata in `py/recipes/base.py`
- Enrichment adds model metadata: `RecipeEnrichmentService`
- Parsers for different formats in `py/recipes/parsers/`
## Important Notes
- Always use English for comments (per copilot-instructions.md)
- Dual mode: ComfyUI plugin (uses folder_paths) vs standalone (reads settings.json)
- Detection: `os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"` (see the snippet after this list)
- Settings auto-saved in user directory or portable mode
- WebSocket broadcasts for real-time updates (downloads, scans)
- Symlink handling requires normalized paths
- API endpoints follow `/loras/*`, `/checkpoints/*`, `/embeddings/*` patterns
- Run `python scripts/sync_translation_keys.py` after UI string updates
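Mode detection, for example, reduces to the single environment check quoted above:
```python
import os

# True when launched via standalone.py rather than loaded as a ComfyUI plugin
IS_STANDALONE = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"
```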
## Frontend UI Architecture
This project has two distinct UI systems:
### 1. Standalone Lora Manager Web UI
- Location: `./static/` and `./templates/`
- Purpose: Full-featured web application for managing LoRA models
- Tech stack: Vanilla JS + CSS, served by the standalone server
- Development: Uses npm for frontend testing (`npm test`, `npm run test:watch`, etc.)
### 2. ComfyUI Custom Node Widgets
- Location: `./web/comfyui/`
- Purpose: Widgets and UI logic that ComfyUI loads as custom node extensions
- Tech stack: Vanilla JS + Vue.js widgets (in `./vue-widgets/` and built to `./web/comfyui/vue-widgets/`)
- Widget styling: Primary styles in `./web/comfyui/lm_styles.css` (NOT `./static/css/`)
- Development: No npm build step for these widgets (Vue widgets use build system)
### Widget Development Guidelines
- Use `app.registerExtension()` to register ComfyUI extensions (ComfyUI integration layer)
- Use `node.addDOMWidget()` for custom DOM widgets
- Widget styles should follow the patterns in `./web/comfyui/lm_styles.css`
- Selected state: `rgba(66, 153, 225, 0.3)` background, `rgba(66, 153, 225, 0.6)` border
- Hover state: `rgba(66, 153, 225, 0.2)` background
- Color palette matches the Lora Manager accent color (blue #4299e1)
- Use oklch() for color values when possible (defined in `./static/css/base.css`)
- Vue widget components are in `./vue-widgets/src/components/` and built to `./web/comfyui/vue-widgets/`
- When modifying widget styles, check `./web/comfyui/lm_styles.css` for consistency with other ComfyUI widgets

CLAUDE.md - 211 changed lines

@@ -1,211 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Overview
ComfyUI LoRA Manager is a comprehensive LoRA management system for ComfyUI that combines a Python backend with browser-based widgets. It provides model organization, downloading from CivitAI/CivArchive, recipe management, and one-click workflow integration.
## Development Commands
### Backend Development
```bash
# Install dependencies
pip install -r requirements.txt
# Install development dependencies (for testing)
pip install -r requirements-dev.txt
# Run standalone server (port 8188 by default)
python standalone.py --port 8188
# Run backend tests with coverage
COVERAGE_FILE=coverage/backend/.coverage pytest \
--cov=py \
--cov=standalone \
--cov-report=term-missing \
--cov-report=html:coverage/backend/html \
--cov-report=xml:coverage/backend/coverage.xml \
--cov-report=json:coverage/backend/coverage.json
# Run specific test file
pytest tests/test_recipes.py
```
### Frontend Development
```bash
# Install frontend dependencies
npm install
# Run frontend tests
npm test
# Run frontend tests in watch mode
npm run test:watch
# Run frontend tests with coverage
npm run test:coverage
```
### Localization
```bash
# Sync translation keys after UI string updates
python scripts/sync_translation_keys.py
```
## Architecture
### Backend Structure (Python)
**Core Entry Points:**
- `__init__.py` - ComfyUI plugin entry point, registers nodes and routes
- `standalone.py` - Standalone server that mocks ComfyUI dependencies
- `py/lora_manager.py` - Main LoraManager class that registers HTTP routes
**Service Layer** (`py/services/`):
- `ServiceRegistry` - Singleton service registry for dependency management
- `ModelServiceFactory` - Factory for creating model services (LoRA, Checkpoint, Embedding)
- Scanner services (`lora_scanner.py`, `checkpoint_scanner.py`, `embedding_scanner.py`) - Model file discovery and indexing
- `model_scanner.py` - Base scanner with hash-based deduplication and metadata extraction
- `persistent_model_cache.py` - SQLite-based cache for model metadata
- `metadata_sync_service.py` - Syncs metadata from CivitAI/CivArchive APIs
- `civitai_client.py` / `civarchive_client.py` - API clients for external services
- `downloader.py` / `download_manager.py` - Model download orchestration
- `recipe_scanner.py` - Recipe file management and image association
- `settings_manager.py` - Application settings with migration support
- `websocket_manager.py` - WebSocket broadcasting for real-time updates
- `use_cases/` - Business logic orchestration (auto-organize, bulk refresh, downloads)
**Routes Layer** (`py/routes/`):
- Route registrars organize endpoints by domain (models, recipes, previews, example images, updates)
- `handlers/` - Request handlers implementing business logic
- Routes use aiohttp and integrate with ComfyUI's PromptServer
**Recipe System** (`py/recipes/`):
- `base.py` - Base recipe metadata structure
- `enrichment.py` - Enriches recipes with model metadata
- `merger.py` - Merges recipe data from multiple sources
- `parsers/` - Parsers for different recipe formats (PNG, JSON, workflow)
**Custom Nodes** (`py/nodes/`):
- `lora_loader.py` - LoRA loader nodes with preset support
- `save_image.py` - Enhanced save image with pattern-based filenames
- `trigger_word_toggle.py` - Toggle trigger words in prompts
- `lora_stacker.py` - Stack multiple LoRAs
- `prompt.py` - Prompt node with autocomplete
- `wanvideo_lora_select.py` - WanVideo-specific LoRA selection
**Configuration** (`py/config.py`):
- Manages folder paths for models, checkpoints, embeddings
- Handles symlink mappings for complex directory structures
- Auto-saves paths to settings.json in ComfyUI mode
### Frontend Structure (JavaScript)
**ComfyUI Widgets** (`web/comfyui/`):
- Vanilla JavaScript ES modules extending ComfyUI's LiteGraph-based UI
- `loras_widget.js` - Main LoRA selection widget with preview
- `loras_widget_events.js` - Event handling for widget interactions
- `autocomplete.js` - Autocomplete for trigger words and embeddings
- `preview_tooltip.js` - Preview tooltip for model cards
- `top_menu_extension.js` - Adds "Launch LoRA Manager" menu item
- `trigger_word_highlight.js` - Syntax highlighting for trigger words
- `utils.js` - Shared utilities and API helpers
**Widget Development:**
- Widgets use `app.registerExtension` and `getCustomWidgets` hooks
- `node.addDOMWidget(name, type, element, options)` embeds HTML in nodes
- See `docs/dom_widget_dev_guide.md` for complete DOMWidget development guide
**Web Source** (`web-src/`):
- Modern frontend components (if migrating from static)
- `components/` - Reusable UI components
- `styles/` - CSS styling
### Key Patterns
**Dual Mode Operation:**
- ComfyUI plugin mode: Integrates with ComfyUI's PromptServer, uses folder_paths
- Standalone mode: Mocks ComfyUI dependencies via `standalone.py`, reads paths from settings.json
- Detection: `os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"`
**Settings Management:**
- Settings stored in user directory (via `platformdirs`) or portable mode (in repo)
- Migration system tracks settings schema version
- Template in `settings.json.example` defines defaults
**Model Scanning Flow:**
1. Scanner walks folder paths, computes file hashes
2. Hash-based deduplication prevents duplicate processing (see the sketch after this list)
3. Metadata extracted from safetensors headers
4. Persistent cache stores results in SQLite
5. Background sync fetches CivitAI/CivArchive metadata
6. WebSocket broadcasts updates to connected clients
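A toy illustration of steps 1-2 (walking files, hashing, deduplicating); the real logic lives in `py/services/model_scanner.py`, and the folder path here is a hypothetical example:
```python
import hashlib
from pathlib import Path

def file_hash(path: Path, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it whole."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

seen: dict[str, Path] = {}
for p in Path("models/loras").rglob("*.safetensors"):  # hypothetical folder
    digest = file_hash(p)
    if digest in seen:
        continue  # hash already indexed; skip duplicate file
    seen[digest] = p  # new model: extract metadata, cache, sync, broadcast...
```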
**Recipe System:**
- Recipes store LoRA combinations with parameters
- Supports import from workflow JSON, PNG metadata
- Images associated with recipes via sibling file detection
- Enrichment adds model metadata for display
**Frontend-Backend Communication:**
- REST API for CRUD operations
- WebSocket for real-time progress updates (downloads, scans)
- API endpoints follow `/loras/*` pattern
## Code Style
**Python:**
- PEP 8 with 4-space indentation
- snake_case for files, functions, variables
- PascalCase for classes
- Type hints preferred
- English comments only (per copilot-instructions.md)
- Loggers via `logging.getLogger(__name__)`
**JavaScript:**
- ES modules with camelCase
- Files use `*_widget.js` suffix for ComfyUI widgets
- Prefer vanilla JS, avoid framework dependencies
## Testing
**Backend Tests:**
- pytest with `--import-mode=importlib`
- Test files: `tests/test_*.py`
- Fixtures in `tests/conftest.py`
- Mock ComfyUI dependencies using standalone.py patterns
- Markers: `@pytest.mark.asyncio` for async tests, `@pytest.mark.no_settings_dir_isolation` for real paths
**Frontend Tests:**
- Vitest with jsdom environment
- Test files: `tests/frontend/**/*.test.js`
- Setup in `tests/frontend/setup.js`
- Coverage via `npm run test:coverage`
## Important Notes
**Settings Location:**
- ComfyUI mode: Auto-saves folder paths to user settings directory
- Standalone mode: Use `settings.json` (copy from `settings.json.example`)
- Portable mode: Set `"use_portable_settings": true` in settings.json
**API Integration:**
- CivitAI API key required for downloads (add to settings)
- CivArchive API used as fallback for deleted models
- Metadata archive database available for offline metadata
**Symlink Handling:**
- Config scans symlinks to map virtual paths to physical locations
- Preview validation uses normalized preview root paths
- Fingerprinting prevents redundant symlink rescans
**ComfyUI Node Development:**
- Nodes defined in `py/nodes/`, registered in `__init__.py`
- Frontend widgets in `web/comfyui/`, matched by node type
- Use `WEB_DIRECTORY = "./web/comfyui"` convention
**Recipe Image Association:**
- Recipes scan for sibling images in same directory
- Supports repair/migration of recipe image paths
- See `py/services/recipe_scanner.py` for implementation details

README.md - 167 changed lines

@@ -34,71 +34,79 @@ Enhance your Civitai browsing experience with our companion browser extension! S
## Release Notes
### v0.9.15
* **Filter Presets** - Save filter combinations as presets for quick switching and reapplication.
* **Bug Fixes** - Fixed various bugs for improved stability.
### v0.8.20
* **LM Civitai Extension** - Released [browser extension through Chrome Web Store](https://chromewebstore.google.com/detail/lm-civitai-extension/capigligggeijgmocnaflanlbghnamgm?utm_source=item-share-cb) that works seamlessly with LoRA Manager to enhance Civitai browsing experience, showing which models are already in your local library, enabling one-click downloads, and providing queue and parallel download support
* **Enhanced Lora Loader** - Added support for nunchaku, improving convenience when working with ComfyUI-nunchaku workflows, plus new template workflows for quick onboarding
* **WanVideo Integration** - Introduced WanVideo Lora Select (LoraManager) node compatible with ComfyUI-WanVideoWrapper for streamlined LoRA usage in video workflows, including a template workflow to help you get started quickly
### v0.9.14
* **LoRA Cycler Node** - Introduced a new LoRA Cycler node that enables iteration through specified LoRAs with support for repeat count and pause iteration functionality. Refer to the new "Lora Cycler" template workflow for concrete example.
* **Enhanced Prompt Node with Tag Autocomplete** - Enhanced the Prompt node with comprehensive tag autocomplete based on merged Danbooru + e621 tags. Supports tag search and autocomplete functionality. Implemented a command system with shortcuts like `/char` or `/artist` for category-specific tag searching. Added `/ac` or `/noac` commands to quickly enable or disable autocomplete. Refer to the "Lora Manager Basic" template workflow in ComfyUI -> Templates -> ComfyUI-Lora-Manager for detailed tips.
* **Bug Fixes & Stability** - Addressed multiple bugs and improved overall stability.
### v0.8.19
* **Analytics Dashboard** - Added new Statistics page providing comprehensive visual analysis of model collection and usage patterns for better library insights
* **Target Node Selection** - Enhanced workflow integration with intelligent target choosing when sending LoRAs/recipes to workflows with multiple loader/stacker nodes; a visual selector now appears showing node color, type, ID, and title for precise targeting
* **Enhanced NSFW Controls** - Added support for setting NSFW levels on recipes with automatic content blurring based on user preferences
* **Customizable Card Display** - New display settings allowing users to choose whether card information and action buttons are always visible or only revealed on hover
* **Expanded Compatibility** - Added support for efficiency-nodes-comfyui in Save Recipe and Save Image nodes, plus fixed compatibility with ComfyUI_Custom_Nodes_AlekPet
### v0.9.12
* **LoRA Randomizer System** - Introduced a comprehensive LoRA randomization system featuring LoRA Pool and LoRA Randomizer nodes for flexible and dynamic generation workflows.
* **LoRA Randomizer Template** - Refer to the new "LoRA Randomizer" template workflow for detailed examples of flexible randomization modes, lock & reuse options, and other features.
* **Recipe Folders** - Introduced a folder system for the Recipes page, allowing users to freely organize recipes just like they do with models.
* **Recipe Bulk Operations** - Added bulk mode support for batch moving, deleting, and setting base models for selected recipes with intuitive controls like click-and-drag selection, drag-to-folder, and Ctrl+A (Select All).
* **Prompt Search & Sorting** - Search recipes by prompt content and sort by Recipe Name, Imported Date, or LoRA Count for better browsing.
* **Recipe Favorites** - Mark specific recipes as favorites for quick access.
* **Video Recipe Support** - Enabled support for video recipes (import via LM extension or URL; video file import not supported).
* **Performance Improvements** - Fixed performance issues for dramatically improved startup and loading speed. After first scan, subsequent loads are instant regardless of collection size.
* **ComfyUI Nodes 2.0 Support** - Basic support for ComfyUI Nodes 2.0.
### v0.8.18
* **Custom Example Images** - Added ability to import your own example images for LoRAs and checkpoints with automatic metadata extraction from embedded information
* **Enhanced Example Management** - New action buttons to set specific examples as previews or delete custom examples
* **Improved Duplicate Detection** - Enhanced "Find Duplicates" with hash verification feature to eliminate false positives when identifying duplicate models
* **Tag Management** - Added tag editing functionality allowing users to customize and manage model tags
* **Advanced Selection Controls** - Implemented Ctrl+A shortcut for quickly selecting all filtered LoRAs, automatically entering bulk mode when needed
* **Note**: Cache file functionality temporarily disabled pending rework
### v0.9.10
* **Smarter Update Matching** - Users can now choose to check and group updates by matching base model only or with no base-model constraint; version lists also support toggling between same-base versions or all versions.
* **Flexible Tag Filtering** - The filter panel now supports tag exclusion: click a tag to include, click again to exclude, and click a third time to clear, enabling stronger and more flexible tag filters.
* **License Visibility & Controls** - Model detail headers and ComfyUI preview popups now show Civitai license icons. The filter panel gains license include/exclude options, and a new global context menu action, "Refresh license metadata," fetches missing license data.
* **Recipe Improvements** - Recipes now allow importing with zero LoRAs, and recipe detail pages show the related checkpoint for easier reference.
* **Better ZIP Downloads** - When downloading models packaged in ZIPs, model files are extracted into the target model folder; ZIPs containing multiple model files (e.g., WanVideo high/low LoRA pairs) are added as separate models.
* **Template Workflow Update** - Refreshed the "Illustrious Pony Example" template workflow with usage guidance for each LoRA Manager node.
* **Bug Fixes & Stability** - General fixes and stability improvements.
### v0.8.17
* **Duplicate Model Detection** - Added "Find Duplicates" functionality for LoRAs and checkpoints using model file hash detection, enabling convenient viewing and batch deletion of duplicate models
* **Enhanced URL Recipe Imports** - Optimized import recipe via URL functionality using CivitAI API calls instead of web scraping, now supporting all rated images (including NSFW) for recipe imports
* **Improved TriggerWord Control** - Enhanced TriggerWord Toggle node with new default_active switch to set the initial state (active/inactive) when trigger words are added
* **Centralized Example Management** - Added "Migrate Existing Example Images" feature to consolidate downloaded example images from model folders into central storage with customizable naming patterns
* **Intelligent Word Suggestions** - Implemented smart trigger word suggestions by reading class tokens and tag frequency from safetensors files, displaying recommendations when editing trigger words
* **Model Version Management** - Added "Re-link to CivitAI" context menu option for connecting models to different CivitAI versions when needed
### v0.9.9
* **Check for Updates Feature** - Users can now check for updates for all models or selected models in bulk mode. Models with available updates will display an "update available" badge on their model card, and users can filter to show only models with updates.
* **Model Versions Management** - Added a new Versions tab in the model modal that centralizes all versions of a model, providing download, delete, and ignore update functions.
* **Send Checkpoint to ComfyUI** - Users can now click the send button on a checkpoint card to send the checkpoint directly to the current workflow's checkpoint or diffusion model loader node in ComfyUI.
* **Customizable Model Card Display** - Added a new setting that allows users to choose whether to display the model name or filename on model cards.
* **New Path Template Placeholders** - Added new path template placeholders: `{model_name}` and `{version_name}` for more flexible organization.
* **ComfyUI Auto Path Correction Setting** - Added a new setting within ComfyUI to enable or disable the auto path correction feature.
### v0.8.16
* **Dramatic Startup Speed Improvement** - Added cache serialization mechanism for significantly faster loading times, especially beneficial for large model collections
* **Enhanced Refresh Options** - Extended functionality with "Full Rebuild (complete)" option alongside "Quick Refresh (incremental)" to fix potential memory cache issues without requiring application restart
* **Customizable Display Density** - Replaced compact mode with adjustable display density settings for personalized layout customization
* **Model Creator Information** - Added creator details to model information panels for better attribution
* **Improved WebP Support** - Enhanced Save Image node with workflow embedding capability for WebP format images
* **Direct Example Access** - Added "Open Example Images Folder" button to card interfaces for convenient browsing of downloaded model examples
* **Enhanced Compatibility** - Full ComfyUI Desktop support for "Send lora or recipe to workflow" functionality
* **Cache Management** - Added settings to clear existing cache files when needed
* **Bug Fixes & Stability** - Various improvements for overall reliability and performance
### v0.9.8
* **Full CivArchive API Support** - Added complete support for the CivArchive API as a fallback metadata source beyond Civitai API. Models deleted from Civitai can now still retrieve metadata through the CivArchive API.
* **Download Models from CivArchive** - Added support for downloading models directly from CivArchive, similar to downloading from Civitai. Simply click the Download button and paste the model URL to download the corresponding model.
* **Custom Priority Tags** - Introduced Custom Priority Tags feature, allowing users to define custom priority tags. These tags will appear as suggestions when editing tags or during auto organization/download using default paths, providing more precise and controlled folder organization. [Guide](https://github.com/willmiao/ComfyUI-Lora-Manager/wiki/Priority-Tags-Configuration-Guide)
* **Drag and Drop Tag Reordering** - Added drag and drop functionality to reorder tags in the tags edit mode for improved usability.
* **Download Control in Example Images Panel** - Added stop control in the Download Example Images Panel for better download management.
* **Prompt (LoraManager) Node with Autocomplete** - Added new Prompt (LoraManager) node with autocomplete feature for adding embeddings.
* **Lora Manager Nodes in Subgraphs** - Lora Manager nodes now support being placed within subgraphs for more flexible workflow organization.
### v0.8.15
* **Enhanced One-Click Integration** - Replaced copy button with direct send button allowing LoRAs/recipes to be sent directly to your current ComfyUI workflow without needing to paste
* **Flexible Workflow Integration** - Click to append LoRAs/recipes to existing loader nodes or Shift+click to replace content, with additional right-click menu options for "Send to Workflow (Append)" or "Send to Workflow (Replace)"
* **Improved LoRA Loader Controls** - Added header drag functionality for proportional strength adjustment of all LoRAs simultaneously (including CLIP strengths when expanded)
* **Keyboard Navigation Support** - Implemented Page Up/Down for page scrolling, Home key to jump to top, and End key to jump to bottom for faster browsing through large collections
### v0.9.6
* **Metadata Archive Database Support** - Added the ability to download and utilize a metadata archive database, enabling access to metadata for models that have been deleted from CivitAI.
* **App-Level Proxy Settings** - Introduced support for configuring a global proxy within the application, making it easier to use the manager behind network restrictions.
* **Bug Fixes** - Various bug fixes for improved stability and reliability.
### v0.8.14
* **Virtualized Scrolling** - Completely rebuilt rendering mechanism for smooth browsing with no lag or freezing, now supporting virtually unlimited model collections with optimized layouts for large displays, improving space utilization and user experience
* **Compact Display Mode** - Added space-efficient view option that displays more cards per row (7 on 1080p, 8 on 2K, 10 on 4K)
* **Enhanced LoRA Node Functionality** - Comprehensive improvements to LoRA loader/stacker nodes including real-time trigger word updates (reflecting any change anywhere in the LoRA chain for precise updates) and expanded context menu with "Copy Notes" and "Copy Trigger Words" options for faster workflow
### v0.9.2
* **Bulk Auto-Organization Action** - Added a new bulk auto-organization feature. You can now select multiple models and automatically organize them according to your current path template settings for streamlined management.
* **Bug Fixes** - Addressed several bugs to improve stability and reliability.
### v0.8.13
* **Enhanced Recipe Management** - Added "Find duplicates" feature to identify and batch delete duplicate recipes with duplicate detection notifications during imports
* **Improved Source Tracking** - Source URLs are now saved with recipes imported via URL, allowing users to view original content with one click or manually edit links
* **Advanced LoRA Control** - Double-click LoRAs in Loader/Stacker nodes to access expanded CLIP strength controls for more precise adjustments of model and CLIP strength separately
* **Lycoris Model Support** - Added compatibility with Lycoris models for expanded creative options
* **Bug Fixes & UX Improvements** - Resolved various issues and enhanced overall user experience with numerous optimizations
### v0.9.1
* **Enhanced Bulk Operations** - Improved bulk operations with Marquee Selection and a bulk operation context menu, providing a more intuitive, desktop-application-like user experience.
* **New Bulk Actions** - Added bulk operations for adding tags and setting base models to multiple models simultaneously.
### v0.8.12
* **Enhanced Model Discovery** - Added alphabetical navigation bar to LoRAs page for faster browsing through large collections
* **Optimized Example Images** - Improved download logic to automatically refresh stale metadata before fetching example images
* **Model Exclusion System** - New right-click option to exclude specific LoRAs or checkpoints from management
* **Improved Showcase Experience** - Enhanced interaction in LoRA and checkpoint showcase areas for better usability
### v0.9.0
* **UI Overhaul for Enhanced Navigation** - Replaced the top flat folder tags with a new folder sidebar and breadcrumb navigation system for a more intuitive folder browsing and selection experience.
* **Dual-Mode Folder Sidebar** - The new folder sidebar offers two display modes: 'List Mode,' which mirrors the classic folder view, and 'Tree Mode,' which presents a hierarchical folder structure for effortless navigation through nested directories.
* **Internationalization Support** - Introduced multi-language support, now available in English, Simplified Chinese, Traditional Chinese, Spanish, Japanese, Korean, French, Russian, and German. Feedback from native speakers is welcome to improve the translations.
* **Automatic Filename Conflict Resolution** - Implemented automatic file renaming (`original name + short hash`) to prevent conflicts when downloading or moving models.
* **Performance Optimizations & Bug Fixes** - Various performance improvements and bug fixes for a more stable and responsive experience.
### v0.8.11
* **Offline Image Support** - Added functionality to download and save all model example images locally, ensuring access even when offline or if images are removed from CivitAI or the site is down
* **Resilient Download System** - Implemented pause/resume capability with checkpoint recovery that persists through restarts or unexpected exits
* **Bug Fixes & Stability** - Resolved various issues to enhance overall reliability and performance
### v0.8.10
* **Standalone Mode** - Run LoRA Manager independently from ComfyUI for a lightweight experience that works even with other stable diffusion interfaces
* **Portable Edition** - New one-click portable version for easy startup and updates in standalone mode
* **Enhanced Metadata Collection** - Added support for SamplerCustomAdvanced node in the metadata collector module
* **Improved UI Organization** - Optimized Lora Loader node height to display up to 5 LoRAs at once with scrolling capability for larger collections
[View Update History](./update_logs.md)
@@ -157,12 +165,10 @@ Enhance your Civitai browsing experience with our companion browser extension! S
### Option 2: **Portable Standalone Edition** (No ComfyUI required)
1. Download the [Portable Package](https://github.com/willmiao/ComfyUI-Lora-Manager/releases/download/v0.9.8/lora_manager_portable.7z)
2. Copy the provided `settings.json.example` file to create a new file named `settings.json` in the `comfyui-lora-manager` folder.
3. Edit the new `settings.json` to include your correct model folder paths and CivitAI API key
- Set `"use_portable_settings": true` if you want the configuration to remain inside the repository folder instead of your user settings directory.
1. Download the [Portable Package](https://github.com/willmiao/ComfyUI-Lora-Manager/releases/download/v0.8.15/lora_manager_portable.7z)
2. Copy the provided `settings.json.example` file to create a new file named `settings.json` in the `comfyui-lora-manager` folder
3. Edit `settings.json` to include your correct model folder paths and CivitAI API key
4. Run `run.bat`
- To change the startup port, edit `run.bat` and modify the parameter (e.g. `--port 9001`)
### Option 3: **Manual Installation**
@@ -228,7 +234,7 @@ You can combine multiple patterns to create detailed, organized filenames for yo
You can now run LoRA Manager independently from ComfyUI:
1. **For ComfyUI users**:
- Launch ComfyUI with LoRA Manager at least once to initialize the necessary path information in the `settings.json` file located in your user settings folder (see paths above).
- Launch ComfyUI with LoRA Manager at least once to initialize the necessary path information in the `settings.json` file.
- Make sure dependencies are installed: `pip install -r requirements.txt`
- From your ComfyUI root directory, run:
```bash
@@ -241,9 +247,8 @@ You can now run LoRA Manager independently from ComfyUI:
```
2. **For non-ComfyUI users**:
- Copy the provided `settings.json.example` file to create a new file named `settings.json`. Update the API key, optional language, and folder paths only—the library registry is created automatically when LoRA Manager starts.
- Edit `settings.json` to include your correct model folder paths and CivitAI API key (you can leave the defaults until ready to configure them)
- Enable portable mode by setting `"use_portable_settings": true` if you prefer LoRA Manager to read and write the `settings.json` located in the project directory.
- Copy the provided `settings.json.example` file to create a new file named `settings.json`
- Edit `settings.json` to include your correct model folder paths and CivitAI API key
- Install required dependencies: `pip install -r requirements.txt`
- Run standalone mode:
```bash
@@ -251,37 +256,8 @@ You can now run LoRA Manager independently from ComfyUI:
```
- Access the interface through your browser at: `http://localhost:8188/loras`
> **Note:** Existing installations automatically migrate the legacy `settings.json` from the plugin folder to the user settings directory the first time you launch this version.
This standalone mode provides a lightweight option for managing your model and recipe collection without needing to run the full ComfyUI environment, making it useful even for users who primarily use other stable diffusion interfaces.
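As an optional preflight before launching standalone mode, a small script can confirm that `settings.json` has actually been filled in. This is a minimal sketch: apart from `folder_paths`, the field names used here (e.g. `civitai_api_key`) are assumptions for illustration, so treat `settings.json.example` as the authoritative reference.

```python
import json
from pathlib import Path

# Location of the settings file when "use_portable_settings" is enabled;
# non-portable installs keep settings.json in the user settings directory.
SETTINGS_PATH = Path("settings.json")

# Keys standalone mode depends on. Only "folder_paths" is documented here;
# the other names are assumptions -- check settings.json.example for the
# authoritative field names.
EXPECTED_KEYS = ("folder_paths", "default_lora_root", "civitai_api_key")

def check_settings(path: Path = SETTINGS_PATH) -> bool:
    """Return True if every expected key is present and non-empty."""
    settings = json.loads(path.read_text(encoding="utf-8"))
    ok = True
    for key in EXPECTED_KEYS:
        if not settings.get(key):
            print(f"settings.json: '{key}' is missing or empty -- edit it before starting")
            ok = False
    return ok

if __name__ == "__main__":
    check_settings()
```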
## Testing & Coverage
### Backend
Install the development dependencies and run pytest with coverage reports:
```bash
pip install -r requirements-dev.txt
COVERAGE_FILE=coverage/backend/.coverage pytest \
--cov=py \
--cov=standalone \
--cov-report=term-missing \
--cov-report=html:coverage/backend/html \
--cov-report=xml:coverage/backend/coverage.xml \
--cov-report=json:coverage/backend/coverage.json
```
HTML, XML, and JSON artifacts are stored under `coverage/backend/` so you can inspect hot spots locally or from CI artifacts.
### Frontend
Run the Vitest coverage suite to analyze widget hot spots:
```bash
npm run test:coverage
```
---
## Contributing
@@ -322,6 +298,3 @@ Join our Discord community for support, discussions, and updates:
[Discord Server](https://discord.gg/vcqNrWVFvM)
---
## Star History
[![Star History Chart](https://api.star-history.com/svg?repos=willmiao/ComfyUI-Lora-Manager&type=Date)](https://star-history.com/#willmiao/ComfyUI-Lora-Manager&Date)

View File

@@ -1,99 +1,27 @@
try: # pragma: no cover - import fallback for pytest collection
from .py.lora_manager import LoraManager
from .py.nodes.lora_loader import LoraLoaderLM, LoraTextLoaderLM
from .py.nodes.trigger_word_toggle import TriggerWordToggleLM
from .py.nodes.prompt import PromptLM
from .py.nodes.text import TextLM
from .py.nodes.lora_stacker import LoraStackerLM
from .py.nodes.save_image import SaveImageLM
from .py.nodes.debug_metadata import DebugMetadataLM
from .py.nodes.wanvideo_lora_select import WanVideoLoraSelectLM
from .py.nodes.wanvideo_lora_select_from_text import WanVideoLoraTextSelectLM
from .py.nodes.lora_pool import LoraPoolLM
from .py.nodes.lora_randomizer import LoraRandomizerLM
from .py.nodes.lora_cycler import LoraCyclerLM
from .py.metadata_collector import init as init_metadata_collector
except (
ImportError
): # pragma: no cover - allows running under pytest without package install
import importlib
import pathlib
import sys
package_root = pathlib.Path(__file__).resolve().parent
if str(package_root) not in sys.path:
sys.path.append(str(package_root))
PromptLM = importlib.import_module("py.nodes.prompt").PromptLM
TextLM = importlib.import_module("py.nodes.text").TextLM
LoraManager = importlib.import_module("py.lora_manager").LoraManager
LoraLoaderLM = importlib.import_module(
"py.nodes.lora_loader"
).LoraLoaderLM
LoraTextLoaderLM = importlib.import_module(
"py.nodes.lora_loader"
).LoraTextLoaderLM
TriggerWordToggleLM = importlib.import_module(
"py.nodes.trigger_word_toggle"
).TriggerWordToggleLM
LoraStackerLM = importlib.import_module("py.nodes.lora_stacker").LoraStackerLM
SaveImageLM = importlib.import_module("py.nodes.save_image").SaveImageLM
DebugMetadataLM = importlib.import_module("py.nodes.debug_metadata").DebugMetadataLM
WanVideoLoraSelectLM = importlib.import_module(
"py.nodes.wanvideo_lora_select"
).WanVideoLoraSelectLM
WanVideoLoraTextSelectLM = importlib.import_module(
"py.nodes.wanvideo_lora_select_from_text"
).WanVideoLoraTextSelectLM
LoraPoolLM = importlib.import_module("py.nodes.lora_pool").LoraPoolLM
LoraRandomizerLM = importlib.import_module(
"py.nodes.lora_randomizer"
).LoraRandomizerLM
LoraCyclerLM = importlib.import_module(
"py.nodes.lora_cycler"
).LoraCyclerLM
init_metadata_collector = importlib.import_module("py.metadata_collector").init
from .py.lora_manager import LoraManager
from .py.nodes.lora_loader import LoraManagerLoader
from .py.nodes.trigger_word_toggle import TriggerWordToggle
from .py.nodes.lora_stacker import LoraStacker
from .py.nodes.save_image import SaveImage
from .py.nodes.debug_metadata import DebugMetadata
from .py.nodes.wanvideo_lora_select import WanVideoLoraSelect
# Import metadata collector to install hooks on startup
from .py.metadata_collector import init as init_metadata_collector
NODE_CLASS_MAPPINGS = {
PromptLM.NAME: PromptLM,
TextLM.NAME: TextLM,
LoraLoaderLM.NAME: LoraLoaderLM,
LoraTextLoaderLM.NAME: LoraTextLoaderLM,
TriggerWordToggleLM.NAME: TriggerWordToggleLM,
LoraStackerLM.NAME: LoraStackerLM,
SaveImageLM.NAME: SaveImageLM,
DebugMetadataLM.NAME: DebugMetadataLM,
WanVideoLoraSelectLM.NAME: WanVideoLoraSelectLM,
WanVideoLoraTextSelectLM.NAME: WanVideoLoraTextSelectLM,
LoraPoolLM.NAME: LoraPoolLM,
LoraRandomizerLM.NAME: LoraRandomizerLM,
LoraCyclerLM.NAME: LoraCyclerLM,
LoraManagerLoader.NAME: LoraManagerLoader,
TriggerWordToggle.NAME: TriggerWordToggle,
LoraStacker.NAME: LoraStacker,
SaveImage.NAME: SaveImage,
DebugMetadata.NAME: DebugMetadata,
WanVideoLoraSelect.NAME: WanVideoLoraSelect
}
WEB_DIRECTORY = "./web/comfyui"
# Check and build Vue widgets if needed (development mode)
try:
from .py.vue_widget_builder import check_and_build_vue_widgets
# Auto-build in development, warn only if fails
check_and_build_vue_widgets(auto_build=True, warn_only=True)
except ImportError:
# Fallback for pytest
import importlib
check_and_build_vue_widgets = importlib.import_module(
"py.vue_widget_builder"
).check_and_build_vue_widgets
check_and_build_vue_widgets(auto_build=True, warn_only=True)
except Exception as e:
import logging
logging.warning(f"[LoRA Manager] Vue widget build check skipped: {e}")
# Initialize metadata collector
init_metadata_collector()
# Register routes on import
LoraManager.add_routes()
__all__ = ["NODE_CLASS_MAPPINGS", "WEB_DIRECTORY"]
__all__ = ['NODE_CLASS_MAPPINGS', 'WEB_DIRECTORY']

View File

@@ -1,180 +0,0 @@
## Overview
The **LoRA Manager Civitai Extension** is a browser extension designed to work seamlessly with [LoRA Manager](https://github.com/willmiao/ComfyUI-Lora-Manager) to significantly enhance your browsing experience on [Civitai](https://civitai.com).
It also supports browsing on [CivArchive](https://civarchive.com/) (formerly CivitaiArchive).
With this extension, you can:
✅ Instantly see which models are already present in your local library
✅ Download new models with a single click
✅ Manage downloads efficiently with queue and parallel download support
✅ Keep your downloaded models automatically organized according to your custom settings
![Civitai Models page](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/civitai-models-page.png)
![CivArchive Models page](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/civarchive-models-page.png)
---
## Why Are All Features for Supporters Only?
I love building tools for the Stable Diffusion and ComfyUI communities, and LoRA Manager is a passion project that I've poured countless hours into. When I created this companion extension, my hope was to offer its core features for free, as a thank-you to all of you.
Unfortunately, I've reached a point where I need to be realistic. The level of support from the free model has been far lower than what's needed to justify the continuous development and maintenance for both projects. It was a difficult decision, but I've chosen to make the extension's features exclusive to supporters.
This change is crucial for me to be able to continue dedicating my time to improving the free and open-source LoRA Manager, which I'm committed to keeping available for everyone.
Your support does more than just unlock a few features—it allows me to keep innovating and ensures the core LoRA Manager project thrives. I'm incredibly grateful for your understanding and any support you can offer. ❤️
(_For those who previously supported me on Ko-fi with a one-time donation, I'll be sending out license keys individually as a thank-you._)
---
## Installation
### Supported Browsers & Installation Methods
| Browser | Installation Method |
|--------------------|-------------------------------------------------------------------------------------|
| **Google Chrome** | [Chrome Web Store link](https://chromewebstore.google.com/detail/capigligggeijgmocnaflanlbghnamgm?utm_source=item-share-cb) |
| **Microsoft Edge** | Install via Chrome Web Store (compatible) |
| **Brave Browser** | Install via Chrome Web Store (compatible) |
| **Opera** | Install via Chrome Web Store (compatible) |
| **Firefox** | <div id="firefox-install" class="install-ok"><a href="https://github.com/willmiao/lm-civitai-extension-firefox/releases/latest/download/extension.xpi">📦 Install Firefox Extension (reviewed and verified by Mozilla)</a></div> |
For non-Chrome browsers (e.g., Microsoft Edge), you can typically install extensions from the Chrome Web Store as follows: open the extension's Chrome Web Store page, click 'Get extension', click 'Allow' when prompted to enable installations from other stores, and finally click 'Add extension' to complete the installation.
---
## Privacy & Security
I understand concerns around browser extensions and privacy, and I want to be fully transparent about how the **LM Civitai Extension** works:
- **Reviewed and Verified**
This extension has been **manually reviewed and approved by the Chrome Web Store**. The Firefox version uses the **exact same code** (only the packaging format differs) and has passed **Mozilla's Add-on review**.
- **Minimal Network Access**
The only external server this extension connects to is:
**`https://willmiao.shop`** — used solely for **license validation**.
It does **not collect, transmit, or store any personal or usage data**.
No browsing history, no user IDs, no analytics, no hidden trackers.
- **Local-Only Model Detection**
Model detection and LoRA Manager communication all happen **locally** within your browser, directly interacting with your local LoRA Manager backend.
I value your trust and am committed to keeping your local setup private and secure. If you have any questions, feel free to reach out!
---
## How to Use
After installing the extension, you'll automatically receive a **7-day trial** to explore all features.
When the extension is correctly installed and your license is valid:
- Open **Civitai**, and you'll see visual indicators added by the extension on model cards, showing:
- ✅ Models already present in your local library
- ⬇️ A download button for models not in your library
Clicking the download button adds the corresponding model version to the download queue. Up to **5 models** can be downloaded simultaneously.
### Visual Indicators Appear On:
- **Home Page** — Featured models
- **Models Page**
- **Creator Profiles** — If the creator has set their models to be visible
- **Recommended Resources** — On individual model pages
### Version Buttons on Model Pages
On a specific model page, visual indicators also appear on version buttons, showing which versions are already in your local library.
When switching to a specific version by clicking a version button:
- Clicking the download button will open a dropdown:
- Download via **LoRA Manager**
- Download via **Original Download** (browser download)
You can check **Remember my choice** to set your preferred default. You can change this setting anytime in the extension's settings.
![Civitai Model Page](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/civitai-model-page.png)
### Resources on Image Pages (2025-08-05)
The extension now shows in-library indicators for resources used on image pages. Importing an image as a recipe is coming soon!
![Civitai Image Page](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/civitai-image-page.jpg)
---
## Model Download Location & LoRA Manager Settings
To use the **one-click download function**, you must first set:
- Your **Default LoRAs Root**
- Your **Default Checkpoints Root**
These are set within LoRA Manager's settings.
When everything is configured, downloaded model files will be placed in:
`<Default_Models_Root>/<Base_Model_of_the_Model>/<First_Tag_of_the_Model>`
### Update: Default Path Customization (2025-07-21)
A new setting to customize the default download path has been added in the nightly version. You can now personalize where models are saved when downloading via the LM Civitai Extension.
![Default Path Customization](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/default-path-customization.png)
The previous YAML path mapping file will be deprecated; settings will now be unified in `settings.json` to simplify configuration.
---
## Backend Port Configuration
If your **ComfyUI** or **LoRA Manager** backend is running on a port **other than the default 8188**, you must configure the backend port in the extension's settings.
After correctly setting and saving the port, you'll see in the extension's header area:
- A **Healthy** status with the tooltip: `Connected to LoRA Manager on port xxxx`
---
## Advanced Usage
### Connecting to a Remote LoRA Manager
If your LoRA Manager is running on another computer, you can still connect from your browser using port forwarding.
> **Why can't you set a remote IP directly?**
>
> For privacy and security, the extension only requests access to `http://127.0.0.1/*`. Supporting remote IPs would require much broader permissions, which may be rejected by browser stores and could raise user concerns.
**Solution: Port Forwarding with `socat`**
On your browser computer, run:
`socat TCP-LISTEN:8188,bind=127.0.0.1,fork TCP:REMOTE.IP.ADDRESS.HERE:8188`
- Replace `REMOTE.IP.ADDRESS.HERE` with the IP of the machine running LoRA Manager.
- Adjust the port if needed.
This lets the extension connect to `127.0.0.1:8188` as usual, with traffic forwarded to your remote server.
_Thanks to user **Temikus** for sharing this solution!_
---
## Roadmap
The extension will evolve alongside **LoRA Manager** improvements. Planned features include:
- [x] Support for **additional model types** (e.g., embeddings)
- [ ] One-click **Recipe Import**
- [x] Display of in-library status for all resources in the **Resources Used** section of the image page
- [x] One-click **Auto-organize Models**
**Stay tuned — and thank you for your support!**
---

View File

@@ -1,93 +0,0 @@
# Example image route architecture
The example image routing stack mirrors the layered model route stack described in
[`docs/architecture/model_routes.md`](model_routes.md). HTTP wiring, controller setup,
handler orchestration, and long-running workflows now live in clearly separated modules so
we can extend download/import behaviour without touching the entire feature surface.
```mermaid
graph TD
subgraph HTTP
A[ExampleImagesRouteRegistrar] -->|binds| B[ExampleImagesRoutes controller]
end
subgraph Application
B --> C[ExampleImagesHandlerSet]
C --> D1[Handlers]
D1 --> E1[Use cases]
E1 --> F1[Download manager / processor / file manager]
end
subgraph Side Effects
F1 --> G1[Filesystem]
F1 --> G2[Model metadata]
F1 --> G3[WebSocket progress]
end
```
## Layer responsibilities
| Layer | Module(s) | Responsibility |
| --- | --- | --- |
| Registrar | `py/routes/example_images_route_registrar.py` | Declarative catalogue of every example image endpoint plus helpers that bind them to an `aiohttp` router. Keeps HTTP concerns symmetrical with the model registrar. |
| Controller | `py/routes/example_images_routes.py` | Lazily constructs `ExampleImagesHandlerSet`, injects defaults for the download manager, processor, and file manager, and exposes the registrar-ready mapping just like `BaseModelRoutes`. |
| Handler set | `py/routes/handlers/example_images_handlers.py` | Groups HTTP adapters by concern (downloads, imports/deletes, filesystem access). Each handler translates domain errors into HTTP responses and defers to a use case or utility service. |
| Use cases | `py/services/use_cases/example_images/*.py` | Encapsulate orchestration for downloads and imports. They validate input, translate concurrency/configuration errors, and keep handler logic declarative. |
| Supporting services | `py/utils/example_images_download_manager.py`, `py/utils/example_images_processor.py`, `py/utils/example_images_file_manager.py` | Execute long-running work: pull assets from Civitai, persist uploads, clean metadata, expose filesystem actions with guardrails, and broadcast progress snapshots. |
## Handler responsibilities & invariants
`ExampleImagesHandlerSet` flattens the handler objects into the `{"handler_name": coroutine}`
mapping consumed by the registrar. The table below outlines how each handler collaborates
with the use cases and utilities.
| Handler | Key endpoints | Collaborators | Contracts |
| --- | --- | --- | --- |
| `ExampleImagesDownloadHandler` | `/api/lm/download-example-images`, `/api/lm/example-images-status`, `/api/lm/pause-example-images`, `/api/lm/resume-example-images`, `/api/lm/force-download-example-images` | `DownloadExampleImagesUseCase`, `DownloadManager` | Delegates payload validation and concurrency checks to the use case; progress/status endpoints expose the same snapshot used for WebSocket broadcasts; pause/resume surface `DownloadNotRunningError` as HTTP 400 instead of 500. |
| `ExampleImagesManagementHandler` | `/api/lm/import-example-images`, `/api/lm/delete-example-image` | `ImportExampleImagesUseCase`, `ExampleImagesProcessor` | Multipart uploads are streamed to disk via the use case; validation failures return HTTP 400 with no filesystem side effects; deletion funnels through the processor to prune metadata and cached images consistently. |
| `ExampleImagesFileHandler` | `/api/lm/open-example-images-folder`, `/api/lm/example-image-files`, `/api/lm/has-example-images` | `ExampleImagesFileManager` | Centralises filesystem access, enforcing settings-based root paths and returning HTTP 400/404 for missing configuration or folders; responses always include `success`/`has_images` booleans for UI consumption. |
## Use case boundaries
| Use case | Entry point | Dependencies | Guarantees |
| --- | --- | --- | --- |
| `DownloadExampleImagesUseCase` | `execute(payload)` | `DownloadManager.start_download`, download configuration errors | Raises `DownloadExampleImagesInProgressError` when the manager reports an active job, rewraps configuration errors into `DownloadExampleImagesConfigurationError`, and lets `ExampleImagesDownloadError` bubble as 500s so handlers do not duplicate logging. |
| `ImportExampleImagesUseCase` | `execute(request)` | `ExampleImagesProcessor.import_images`, temporary file helpers | Supports multipart or JSON payloads, normalises file paths into a single list, cleans up temp files even on failure, and maps validation issues to `ImportExampleImagesValidationError` for HTTP 400 responses. |
## Maintaining critical invariants
* **Shared progress snapshots** - The download handler returns the same snapshot built by
`DownloadManager`, guaranteeing parity between HTTP polling endpoints and WebSocket
progress events.
* **Safe filesystem access** - All folder/file actions flow through
`ExampleImagesFileManager`, which validates the configured example image root and ensures
responses never leak absolute paths outside the allowed directory.
* **Metadata hygiene** - Import/delete operations run through `ExampleImagesProcessor`,
which updates model metadata via `MetadataManager` and notifies the relevant scanners so
cache state stays in sync.
## Migration notes
The refactor brings the example image stack in line with the model/recipe stacks:
1. `ExampleImagesRouteRegistrar` now owns the declarative route list. Downstream projects
should rely on `ExampleImagesRoutes.to_route_mapping()` instead of manually wiring
handler callables.
2. `ExampleImagesRoutes` caches its `ExampleImagesHandlerSet` just like
`BaseModelRoutes`. If you previously instantiated handlers directly, inject custom
collaborators via the controller constructor (`download_manager`, `processor`,
`file_manager`) to keep test seams predictable.
3. Tests that mocked `ExampleImagesRoutes.setup_routes` should switch to patching
`DownloadExampleImagesUseCase`/`ImportExampleImagesUseCase` at import time. The handlers
expect those abstractions to surface validation/concurrency errors, and bypassing them
will skip the HTTP-friendly error mapping.
## Extending the stack
1. Add the endpoint to `ROUTE_DEFINITIONS` with a unique `handler_name`.
2. Expose the coroutine on an existing handler class (or create a new handler and extend
`ExampleImagesHandlerSet`).
3. Wire additional services or factories inside `_build_handler_set` on
`ExampleImagesRoutes`, mirroring how the model stack introduces new use cases.
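For illustration, steps 2 and 3 might come together like the hypothetical sketch below. Everything here is assumed for demonstration purposes (the endpoint, the handler class, and the `collect_stats()` helper); the real handler classes live in `py/routes/handlers/example_images_handlers.py`.

```python
from aiohttp import web

class ExampleImagesStatsHandler:
    """Hypothetical handler sketch; not part of the real handler set."""

    def __init__(self, file_manager):
        self.file_manager = file_manager

    async def get_example_images_stats(self, request: web.Request) -> web.Response:
        try:
            # collect_stats() is an assumed helper on the file manager
            stats = await self.file_manager.collect_stats()
        except FileNotFoundError:
            # Follow the stack's convention: missing folders map to 4xx, not 500
            return web.json_response(
                {"success": False, "error": "example images folder not found"},
                status=404,
            )
        return web.json_response({"success": True, "stats": stats})
```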
`tests/routes/test_example_images_routes.py` exercises registrar binding, download pause
flows, and import validations. Use it as a template when introducing new handler
collaborators or error mappings.

View File

@@ -1,100 +0,0 @@
# Base model route architecture
The model routing stack now splits HTTP wiring, orchestration logic, and
business rules into discrete layers. The goal is to make it obvious where a
new collaborator should live and which contract it must honour. The diagram
below captures the end-to-end flow for a typical request:
```mermaid
graph TD
subgraph HTTP
A[ModelRouteRegistrar] -->|binds| B[BaseModelRoutes handler proxy]
end
subgraph Application
B --> C[ModelHandlerSet]
C --> D1[Handlers]
D1 --> E1[Use cases]
E1 --> F1[Services / scanners]
end
subgraph Side Effects
F1 --> G1[Cache & metadata]
F1 --> G2[Filesystem]
F1 --> G3[WebSocket state]
end
```
Every box maps to a concrete module:
| Layer | Module(s) | Responsibility |
| --- | --- | --- |
| Registrar | `py/routes/model_route_registrar.py` | Declarative list of routes shared by every model type and helper methods for binding them to an `aiohttp` application. |
| Route controller | `py/routes/base_model_routes.py` | Constructs the handler graph, injects shared services, exposes proxies that surface `503 Service not ready` when the model service has not been attached. |
| Handler set | `py/routes/handlers/model_handlers.py` | Thin HTTP adapters grouped by concern (page rendering, listings, mutations, queries, downloads, CivitAI integration, move operations, auto-organize). |
| Use cases | `py/services/use_cases/*.py` | Encapsulate long-running flows (`DownloadModelUseCase`, `BulkMetadataRefreshUseCase`, `AutoOrganizeUseCase`). They normalise validation errors and concurrency constraints before returning control to the handlers. |
| Services | `py/services/*.py` | Existing services and scanners that mutate caches, write metadata, move files, and broadcast WebSocket updates. |
## Handler responsibilities & contracts
`ModelHandlerSet` flattens the handler objects into the exact callables used by
the registrar. The table below highlights the separation of concerns within
the set and the invariants that must hold after each handler returns.
| Handler | Key endpoints | Collaborators | Contracts |
| --- | --- | --- | --- |
| `ModelPageView` | `/{prefix}` | `SettingsManager`, `server_i18n`, Jinja environment, `service.scanner` | Template is rendered with `is_initializing` flag when caches are cold; i18n filter is registered exactly once per environment instance. |
| `ModelListingHandler` | `/api/lm/{prefix}/list` | `service.get_paginated_data`, `service.format_response` | Listings respect pagination query parameters and cap `page_size` at 100; every item is formatted before response. |
| `ModelManagementHandler` | Mutations (delete, exclude, metadata, preview, tags, rename, bulk delete, duplicate verification) | `ModelLifecycleService`, `MetadataSyncService`, `PreviewAssetService`, `TagUpdateService`, scanner cache/index | Cache state mirrors filesystem changes: deletes prune cache & hash index, preview replacements synchronise metadata and cache NSFW levels, metadata saves trigger cache resort when names change. |
| `ModelQueryHandler` | Read-only queries (top tags, folders, duplicates, metadata, URLs) | Service query helpers & scanner cache | Outputs always wrapped in `{"success": True}` when no error; duplicate/filename grouping omits empty entries; invalid parameters (e.g. missing `model_root`) return HTTP 400. |
| `ModelDownloadHandler` | `/api/lm/download-model`, `/download-model-get`, `/download-progress/{id}`, `/cancel-download-get` | `DownloadModelUseCase`, `DownloadCoordinator`, `WebSocketManager` | Payload validation errors become HTTP 400 without mutating download progress cache; early-access failures surface as HTTP 401; successful downloads cache progress snapshots that back both WebSocket broadcasts and polling endpoints. |
| `ModelCivitaiHandler` | CivitAI metadata routes | `MetadataSyncService`, metadata provider factory, `BulkMetadataRefreshUseCase` | `fetch_all_civitai` streams progress via `WebSocketBroadcastCallback`; version lookups validate model type before returning; local availability fields derive from hash lookups without mutating cache state. |
| `ModelMoveHandler` | `move_model`, `move_models_bulk` | `ModelMoveService` | Moves execute atomically per request; bulk operations aggregate success/failure per file set. |
| `ModelAutoOrganizeHandler` | `/api/lm/{prefix}/auto-organize` (GET/POST), `/auto-organize-progress` | `AutoOrganizeUseCase`, `WebSocketProgressCallback`, `WebSocketManager` | Enforces single-flight execution using the shared lock; progress broadcasts remain available to polling clients until explicitly cleared; conflicts return HTTP 409 with a descriptive error. |
## Use case boundaries
Each use case exposes a narrow asynchronous API that hides the underlying
services. Their error mapping is essential for predictable HTTP responses.
| Use case | Entry point | Dependencies | Guarantees |
| --- | --- | --- | --- |
| `DownloadModelUseCase` | `execute(payload)` | `DownloadCoordinator.schedule_download` | Translates `ValueError` into `DownloadModelValidationError` for HTTP 400, recognises early-access errors (`"401"` in message) and surfaces them as `DownloadModelEarlyAccessError`, forwards success dictionaries untouched. |
| `AutoOrganizeUseCase` | `execute(file_paths, progress_callback)` | `ModelFileService.auto_organize_models`, `WebSocketManager` lock | Guarded by `ws_manager` lock + status checks; raises `AutoOrganizeInProgressError` before invoking the file service when another run is already active. |
| `BulkMetadataRefreshUseCase` | `execute_with_error_handling(progress_callback)` | `MetadataSyncService`, `SettingsManager`, `WebSocketBroadcastCallback` | Iterates through cached models, applies metadata sync, emits progress snapshots that handlers broadcast unchanged. |
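The error-translation contract for downloads can be pictured with a short sketch. This is an illustration of the table above, not the actual implementation; in particular, whether the `"401"` check shares the `ValueError` branch is an assumption.

```python
# Sketch of the error-translation contract from the table above; the real
# use case lives in py/services/use_cases and may structure this differently.

class DownloadModelValidationError(Exception):
    """Mapped by handlers to HTTP 400."""

class DownloadModelEarlyAccessError(Exception):
    """Mapped by handlers to HTTP 401."""

class DownloadModelUseCaseSketch:
    def __init__(self, coordinator):
        self._coordinator = coordinator

    async def execute(self, payload: dict) -> dict:
        try:
            # Success dictionaries are forwarded untouched.
            return await self._coordinator.schedule_download(payload)
        except ValueError as exc:
            # Early-access rejections are recognised by "401" in the message
            # (that this check lives in the ValueError branch is an assumption).
            if "401" in str(exc):
                raise DownloadModelEarlyAccessError(str(exc)) from exc
            raise DownloadModelValidationError(str(exc)) from exc
```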
## Maintaining legacy contracts
The refactor preserves the invariants called out in the previous architecture
notes. The most critical ones are reiterated here to emphasise the
collaboration points:
1. **Cache mutations** - Delete, exclude, rename, and bulk delete operations are
channelled through `ModelManagementHandler`. The handler delegates to
`ModelLifecycleService` or `MetadataSyncService`, and the scanner cache is
mutated in-place before the handler returns. The accompanying tests assert
that `scanner._cache.raw_data` and `scanner._hash_index` stay in sync after
each mutation.
2. **Preview updates** - `PreviewAssetService.replace_preview` writes the new
asset, `MetadataSyncService` persists the JSON metadata, and
`scanner.update_preview_in_cache` mirrors the change. The handler returns
the static URL produced by `config.get_preview_static_url`, keeping browser
clients in lockstep with disk state.
3. **Download progress** - `DownloadCoordinator.schedule_download` generates the
download identifier, registers a WebSocket progress callback, and caches the
latest numeric progress via `WebSocketManager`. Both `download_model`
responses and `/download-progress/{id}` polling read from the same cache to
guarantee consistent progress reporting across transports.
## Extending the stack
To add a new shared route:
1. Declare it in `COMMON_ROUTE_DEFINITIONS` using a unique handler name.
2. Implement the corresponding coroutine on one of the handlers inside
`ModelHandlerSet` (or introduce a new handler class when the concern does not
fit existing ones).
3. Inject additional dependencies in `BaseModelRoutes._create_handler_set` by
wiring services or use cases through the constructor parameters.
Model-specific routes should continue to be registered inside the subclass
implementation of `setup_specific_routes`, reusing the shared registrar where
possible.

View File

@@ -1,34 +0,0 @@
# Multi-Library Management for Standalone Mode
## Requirements Summary
- **Independent libraries**: In standalone mode, users can maintain multiple libraries, where each library represents a distinct set of model folders (LoRAs, checkpoints, embeddings, etc.). Only one library is active at any given time, but users need a fast way to switch between them.
- **Library-specific settings**: The fields that vary per library are `folder_paths`, `default_lora_root`, `default_checkpoint_root`, and `default_embedding_root` inside `settings.json`.
- **Persistent caches**: Every library must have its own SQLite persistent model cache so that metadata generated for one library does not leak into another.
- **Backward compatibility**: Existing single-library setups should continue to work. When no multi-library configuration is provided, the application should behave exactly as before.
## Proposed Design
1. **Library registry**
- Extend the standalone configuration to hold a list of libraries, each identified by a unique name.
- Each entry stores the folder path configuration plus any library-scoped metadata (e.g. creation time, display name).
- The active library key is stored separately to allow quick switching without rewriting the full config.
2. **Settings management**
- Update `settings_manager` to load and persist the library registry. When a library is activated, hydrate the in-memory settings object with that library's folder configuration.
- Provide helper methods for creating, renaming, and deleting libraries, ensuring validation for duplicate names and path collisions.
- Continue writing the active library settings to `settings.json` for compatibility, while storing the registry in a new section such as `libraries`.
3. **Persistent model cache**
- Derive the SQLite file path from the active library, e.g. `model_cache_<library>.sqlite` or a nested directory structure like `model_cache/<library>/models.sqlite` (a path-resolution sketch follows this list).
- Update `PersistentModelCache` so it resolves the database path dynamically whenever the active library changes. Ensure connections are closed before switching to avoid locking issues.
- Migrate existing single cache files by treating them as the default library's cache.
4. **Model scanning workflow**
- Modify `ModelScanner` and related services to react to library switches by clearing in-memory caches, re-reading folder paths, and rehydrating metadata from the library-specific SQLite cache.
- Provide API endpoints in standalone mode to list libraries, activate one, and trigger a rescan.
5. **UI/UX considerations**
- In the standalone UI, introduce a library selector component that surfaces available libraries and offers quick switching.
- Offer feedback when switching libraries (e.g. spinner while rescanning) and guard destructive actions with confirmation prompts.
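A minimal sketch of the per-library cache path resolution described in item 3, assuming the nested `model_cache/<library>/models.sqlite` layout; the real `PersistentModelCache` API will differ.

```python
import sqlite3
from pathlib import Path
from typing import Optional

class LibraryCacheSwitcher:
    """Design sketch only; not the real PersistentModelCache API."""

    def __init__(self, cache_root: Path):
        self.cache_root = cache_root
        self._conn: Optional[sqlite3.Connection] = None

    def _db_path(self, library: str) -> Path:
        # Nested layout from the proposal: model_cache/<library>/models.sqlite
        path = self.cache_root / library / "models.sqlite"
        path.parent.mkdir(parents=True, exist_ok=True)
        return path

    def switch_library(self, library: str) -> sqlite3.Connection:
        # Close the previous connection first to avoid SQLite locking issues.
        if self._conn is not None:
            self._conn.close()
        self._conn = sqlite3.connect(self._db_path(library))
        return self._conn
```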
## Implementation Notes
- **Data migration**: On startup, detect if the old `settings.json` structure is present. If so, create a default library entry using the current folder paths and point the active library to it.
- **Thread safety**: Ensure that any long-running scans are cancelled or awaited before switching libraries to prevent race conditions in cache writes.
- **Testing**: Add unit tests for the settings manager to cover library CRUD operations and cache path resolution. Include integration tests that simulate switching libraries and verifying that the correct models are loaded.
- **Documentation**: Update user guides to explain how to define libraries, switch between them, and where the new cache files are stored.
- **Extensibility**: Keep the design open to future per-library settings (e.g. auto-refresh intervals, metadata overrides) by storing library data as objects instead of flat maps.

View File

@@ -1,89 +0,0 @@
# Recipe route architecture
The recipe routing stack now mirrors the modular model route design. HTTP
bindings, controller wiring, handler orchestration, and business rules live in
separate layers so new behaviours can be added without re-threading the entire
feature. The diagram below outlines the flow for a typical request:
```mermaid
graph TD
subgraph HTTP
A[RecipeRouteRegistrar] -->|binds| B[RecipeRoutes controller]
end
subgraph Application
B --> C[RecipeHandlerSet]
C --> D1[Handlers]
D1 --> E1[Use cases]
E1 --> F1[Services / scanners]
end
subgraph Side Effects
F1 --> G1[Cache & fingerprint index]
F1 --> G2[Metadata files]
F1 --> G3[Temporary shares]
end
```
## Layer responsibilities
| Layer | Module(s) | Responsibility |
| --- | --- | --- |
| Registrar | `py/routes/recipe_route_registrar.py` | Declarative list of every recipe endpoint and helper methods that bind them to an `aiohttp` application. |
| Controller | `py/routes/base_recipe_routes.py`, `py/routes/recipe_routes.py` | Lazily resolves scanners/clients from the service registry, wires shared templates/i18n, instantiates `RecipeHandlerSet`, and exposes a `{handler_name: coroutine}` mapping for the registrar. |
| Handler set | `py/routes/handlers/recipe_handlers.py` | Thin HTTP adapters grouped by concern (page view, listings, queries, mutations, sharing). They normalise responses and translate service exceptions into HTTP status codes. |
| Services & scanners | `py/services/recipes/*.py`, `py/services/recipe_scanner.py`, `py/services/service_registry.py` | Concrete business logic: metadata parsing, persistence, sharing, fingerprint/index maintenance, and cache refresh. |
## Handler responsibilities & invariants
`RecipeHandlerSet` flattens purpose-built handler objects into the callables the
registrar binds. Each handler is responsible for a narrow concern and enforces a
set of invariants before returning:
| Handler | Key endpoints | Collaborators | Contracts |
| --- | --- | --- | --- |
| `RecipePageView` | `/loras/recipes` | `SettingsManager`, `server_i18n`, Jinja environment, recipe scanner getter | Template rendered with `is_initializing` flag when caches are still warming; i18n filter registered exactly once per environment instance. |
| `RecipeListingHandler` | `/api/lm/recipes`, `/api/lm/recipe/{id}` | `recipe_scanner.get_paginated_data`, `recipe_scanner.get_recipe_by_id` | Listings respect pagination and search filters; every item receives a `file_url` fallback even when metadata is incomplete; missing recipes become HTTP 404. |
| `RecipeQueryHandler` | Tag/base-model stats, syntax, LoRA lookups | Recipe scanner cache, `format_recipe_file_url` helper | Cache snapshots are reused without forcing refresh; duplicate lookups collapse groups by fingerprint; syntax lookups return helpful errors when LoRAs are absent. |
| `RecipeManagementHandler` | Save, update, reconnect, bulk delete, widget ingest | `RecipePersistenceService`, `RecipeAnalysisService`, recipe scanner | Persistence results propagate HTTP status codes; fingerprint/index updates flow through the scanner before returning; validation errors surface as HTTP 400 without touching disk. |
| `RecipeAnalysisHandler` | Uploaded/local/remote analysis | `RecipeAnalysisService`, `civitai_client`, recipe scanner | Unsupported content types map to HTTP 400; download errors (`RecipeDownloadError`) are not retried; every response includes a `loras` array for client compatibility. |
| `RecipeSharingHandler` | Share + download | `RecipeSharingService`, recipe scanner | Share responses provide a stable download URL and filename; expired shares surface as HTTP 404; downloads stream via `web.FileResponse` with attachment headers. |
## Use case boundaries
The dedicated services encapsulate long-running work so handlers stay thin.
| Use case | Entry point | Dependencies | Guarantees |
| --- | --- | --- | --- |
| `RecipeAnalysisService` | `analyze_uploaded_image`, `analyze_remote_image`, `analyze_local_image`, `analyze_widget_metadata` | `ExifUtils`, `RecipeParserFactory`, downloader factory, optional metadata collector/processor | Normalises missing/invalid payloads into `RecipeValidationError`; generates consistent fingerprint data to keep duplicate detection stable; temporary files are cleaned up after every analysis path. |
| `RecipePersistenceService` | `save_recipe`, `delete_recipe`, `update_recipe`, `reconnect_lora`, `bulk_delete`, `save_recipe_from_widget` | `ExifUtils`, recipe scanner, card preview sizing constants | Writes images/JSON metadata atomically; updates scanner caches and hash indices before returning; recalculates fingerprints whenever LoRA assignments change. |
| `RecipeSharingService` | `share_recipe`, `prepare_download` | `tempfile`, recipe scanner | Copies originals to TTL-managed temp files; metadata lookups re-use the scanner; expired shares trigger cleanup and `RecipeNotFoundError`. |
## Maintaining critical invariants
* **Cache updates** - Mutations (`save`, `delete`, `bulk_delete`, `update`) call
back into the recipe scanner to mutate the in-memory cache and fingerprint
index before returning a response. Tests assert that these methods are invoked
even when stubbing persistence.
* **Fingerprint management** - `RecipePersistenceService` recomputes
fingerprints whenever LoRA metadata changes, and duplicate lookups use those
fingerprints to group recipes. Handlers bubble the resulting IDs so clients
can merge duplicates without an extra fetch (a grouping sketch follows this
list).
* **Metadata synchronisation** - Saving or reconnecting a recipe updates the
JSON sidecar, refreshes embedded metadata via `ExifUtils`, and instructs the
scanner to resort its cache. Sharing relies on this metadata to generate
filenames and ensure downloads stay in sync with on-disk state.
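A minimal sketch of the fingerprint-based duplicate grouping, assuming cached recipes expose `id` and `fingerprint` fields (the real cache shape may differ):

```python
from collections import defaultdict

def group_duplicates(recipes: list[dict]) -> dict[str, list[str]]:
    """Group recipe IDs by fingerprint, keeping only shared fingerprints.

    Illustrative only: the "id" and "fingerprint" keys are assumed names.
    """
    groups: dict[str, list[str]] = defaultdict(list)
    for recipe in recipes:
        fingerprint = recipe.get("fingerprint")
        if fingerprint:
            groups[fingerprint].append(recipe["id"])
    # Only fingerprints shared by more than one recipe count as duplicates.
    return {fp: ids for fp, ids in groups.items() if len(ids) > 1}
```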
## Extending the stack
1. Declare the new endpoint in `ROUTE_DEFINITIONS` with a unique handler name.
2. Implement the coroutine on an existing handler or introduce a new handler
class inside `py/routes/handlers/recipe_handlers.py` when the concern does
not fit existing ones.
3. Wire additional collaborators inside
`BaseRecipeRoutes._create_handler_set` (inject new services or factories) and
expose helper getters on the handler owner if the handler needs to share
utilities.
Integration tests in `tests/routes/test_recipe_routes.py` exercise the listing,
mutation, analysis-error, and sharing paths end-to-end, ensuring the controller
and handler wiring remains valid as new capabilities are added.

View File

@@ -1,46 +0,0 @@
# Custom Priority Tag Format Proposal
To support user-defined priority tags with flexible aliasing across different model types, the configuration will be stored as editable strings. The format balances readability with enough structure for parsing on both the backend and frontend.
## Format Overview
- Each model type is declared on its own line: `model_type: entries`.
- Entries are comma-separated and ordered by priority from highest to lowest.
- An entry may be a single canonical tag (e.g., `realistic`) or a canonical tag with aliases.
- Canonical tags define the final folder name that should be used when matching that entry.
- Aliases are enclosed in parentheses and separated by `|` (vertical bar).
- All matching is case-insensitive; stored canonical names preserve the user-specified casing for folder creation and UI suggestions.
### Grammar
```
priority-config := model-config { "\n" model-config }
model-config := model-type ":" entry-list
model-type := <identifier without spaces>
entry-list := entry { "," entry }
entry := canonical [ "(" alias { "|" alias } ")" ]
canonical := <tag text without parentheses or commas>
alias := <tag text without parentheses, commas, or pipes>
```
Examples:
```
lora: celebrity(celeb|celebrity), stylized, character(char)
checkpoint: realistic(realism|realistic), anime(anime-style|toon)
embedding: face, celeb(celebrity|celeb)
```
## Parsing Notes
- Whitespace around separators is ignored to make manual editing more forgiving.
- Duplicate canonical tags within the same model type collapse to a single entry; the first definition wins.
- Aliases map to their canonical tag. When generating folder names, the canonical form is used.
- Tags that do not match any alias or canonical entry fall back to the first tag in the model's tag list, preserving current behavior.
## Usage
- **Backend:** Convert each model type's string into an ordered list of canonical tags with alias sets. During path generation, iterate by priority order and match tags against both canonical names and their aliases.
- **Frontend:** Surface canonical tags as suggestions, optionally displaying aliases in tooltips or secondary text. Input validation should warn about duplicate aliases within the same model type.
This format allows users to customize priority tag handling per model type while keeping editing simple and avoiding proliferation of folder names through alias normalization.
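A minimal parser sketch for this format, assuming plain string splitting is sufficient; the project's actual parser may differ in detail:

```python
def parse_priority_config(text: str) -> dict[str, list[tuple[str, set[str]]]]:
    """Parse the priority-tag format into per-model-type ordered entries.

    Returns, for each model type, an ordered list of (canonical, aliases)
    pairs. Aliases are lowercased for case-insensitive matching; canonical
    casing is preserved for folder creation and UI suggestions.
    """
    config: dict[str, list[tuple[str, set[str]]]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        model_type, _, entry_list = line.partition(":")
        entries: list[tuple[str, set[str]]] = []
        seen: set[str] = set()
        for entry in entry_list.split(","):
            entry = entry.strip()
            if not entry:
                continue
            canonical, _, alias_part = entry.partition("(")
            canonical = canonical.strip()
            aliases = {
                alias.strip().lower()
                for alias in alias_part.rstrip(")").split("|")
                if alias.strip()
            }
            # Duplicate canonical tags collapse; the first definition wins.
            if canonical.lower() in seen:
                continue
            seen.add(canonical.lower())
            entries.append((canonical, aliases))
        config[model_type.strip()] = entries
    return config

# Example:
# parse_priority_config("lora: celebrity(celeb), stylized")
# -> {"lora": [("celebrity", {"celeb"}), ("stylized", set())]}
```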

View File

@@ -1,28 +0,0 @@
# DOM Widgets Documentation
Documentation for custom DOM widget development in ComfyUI LoRA Manager.
## Files
- **[Value Persistence Best Practices](value-persistence-best-practices.md)** - Essential guide for implementing text input DOM widgets that persist values correctly
## Key Lessons
### Common Anti-Patterns
**Don't**: Create internal state variables
**Don't**: Use v-model for text inputs
**Don't**: Add serializeValue, onSetValue callbacks
**Don't**: Watch props.widget.value
### Best Practices
**Do**: Use DOM element as single source of truth
**Do**: Store DOM reference on widget.inputEl
**Do**: Direct getValue/setValue to DOM
**Do**: Clean up reference on unmount
## Related Documentation
- [DOM Widget Development Guide](../dom_widget_dev_guide.md) - Comprehensive guide for building DOM widgets
- [ComfyUI Built-in Example](../../../../code/ComfyUI_frontend/src/renderer/extensions/vueNodes/widgets/composables/useStringWidget.ts) - Reference implementation

View File

@@ -1,225 +0,0 @@
# DOM Widget Value Persistence - Best Practices
## Overview
DOM widgets require different persistence patterns depending on their complexity. This document covers two patterns:
1. **Simple Text Widgets**: DOM element as source of truth (e.g., textarea, input)
2. **Complex Widgets**: Internal value with `widget.callback` (e.g., LoraPoolWidget, RandomizerWidget)
## Understanding ComfyUI's Built-in Callback Mechanism
When `widget.value` is set (e.g., during workflow load), ComfyUI's `domWidget.ts` triggers this flow:
```typescript
// From ComfyUI_frontend/src/scripts/domWidget.ts:146-149
set value(v: V) {
this.options.setValue?.(v) // 1. Update internal state
this.callback?.(this.value) // 2. Notify listeners for UI updates
}
```
This means:
- `setValue()` handles storing the value
- `widget.callback()` is automatically called to notify the UI
- You don't need custom callback mechanisms like `onSetValue`
---
## Pattern 1: Simple Text Input Widgets
For widgets where the value IS the DOM element's text content (textarea, input fields).
### When to Use
- Single text input/textarea widgets
- Value is a simple string
- No complex state management needed
### Implementation
**main.ts:**
```typescript
const widget = node.addDOMWidget(name, type, container, {
getValue() {
return widget.inputEl?.value ?? ''
},
setValue(v: string) {
if (widget.inputEl) {
widget.inputEl.value = v ?? ''
}
}
})
```
**Vue Component:**
```typescript
onMounted(() => {
if (textareaRef.value) {
props.widget.inputEl = textareaRef.value
}
})
onUnmounted(() => {
if (props.widget.inputEl === textareaRef.value) {
props.widget.inputEl = undefined
}
})
```
### Why This Works
- Single source of truth: the DOM element
- `getValue()` reads directly from DOM
- `setValue()` writes directly to DOM
- No sync issues between multiple state variables
---
## Pattern 2: Complex Widgets
For widgets with structured data (JSON configs, arrays, objects) where the value cannot be stored in a DOM element.
### When to Use
- Value is a complex object/array (e.g., `{ loras: [...], settings: {...} }`)
- Multiple UI elements contribute to the value
- Vue reactive state manages the UI
### Implementation
**main.ts:**
```typescript
let internalValue: MyConfig | undefined
const widget = node.addDOMWidget(name, type, container, {
getValue() {
return internalValue
},
setValue(v: MyConfig) {
internalValue = v
// NO custom onSetValue needed - widget.callback is called automatically
},
serialize: true // Ensure value is saved with workflow
})
```
**Vue Component:**
```typescript
const config = ref<MyConfig>(getDefaultConfig())
onMounted(() => {
// Set up callback for UI updates when widget.value changes externally
// (e.g., workflow load, undo/redo)
props.widget.callback = (newValue: MyConfig) => {
if (newValue) {
config.value = newValue
}
}
// Restore initial value if workflow was already loaded
if (props.widget.value) {
config.value = props.widget.value
}
})
// When UI changes, update widget value
function onConfigChange(newConfig: MyConfig) {
config.value = newConfig
props.widget.value = newConfig // This also triggers callback
}
```
### Why This Works
1. **Clear separation**: `internalValue` stores the data, Vue ref manages the UI
2. **Built-in callback**: ComfyUI calls `widget.callback()` automatically after `setValue()`
3. **Bidirectional sync**:
- External → UI: `setValue()` updates `internalValue`, `callback()` updates Vue ref
- UI → External: User interaction updates Vue ref, which updates `widget.value`
---
## Common Mistakes
### ❌ Creating custom callback mechanisms
```typescript
// Wrong - unnecessary complexity
setValue(v: MyConfig) {
internalValue = v
widget.onSetValue?.(v) // Don't add this - use widget.callback instead
}
```
Use the built-in `widget.callback` instead.
### ❌ Using v-model for simple text inputs in DOM widgets
```html
<!-- Wrong - creates sync issues -->
<textarea v-model="textValue" />
<!-- Right for simple text widgets -->
<textarea ref="textareaRef" @input="onInput" />
```
### ❌ Watching props.widget.value
```typescript
// Wrong - creates race conditions
watch(() => props.widget.value, (newValue) => {
config.value = newValue
})
```
Use `widget.callback` instead - it's called at the right time in the lifecycle.
### ❌ Multiple sources of truth
```typescript
// Wrong - who is the source of truth?
let internalValue = '' // State 1
const textValue = ref('') // State 2
const domElement = textarea // State 3
props.widget.value // State 4
```
Choose ONE source of truth:
- **Simple widgets**: DOM element
- **Complex widgets**: `internalValue` (with Vue ref as derived UI state)
### ❌ Adding serializeValue for simple widgets
```typescript
// Wrong - getValue/setValue handle serialization
props.widget.serializeValue = async () => textValue.value
```
---
## Decision Guide
| Widget Type | Source of Truth | Use `widget.callback` | Example |
|-------------|-----------------|----------------------|---------|
| Simple text input | DOM element (`inputEl`) | Optional | AutocompleteTextWidget |
| Complex config | `internalValue` | Yes, for UI sync | LoraPoolWidget |
| Vue component widget | Vue ref + `internalValue` | Yes | RandomizerWidget |
---
## Testing Checklist
- [ ] Load workflow - value restores correctly
- [ ] Switch workflow - value persists
- [ ] Reload page - value persists
- [ ] UI interaction - value updates
- [ ] Undo/redo - value syncs with UI
- [ ] No console errors
---
## References
- ComfyUI DOMWidget implementation: `ComfyUI_frontend/src/scripts/domWidget.ts`
- Simple text widget example: `ComfyUI_frontend/src/renderer/extensions/vueNodes/widgets/composables/useStringWidget.ts`

View File

@@ -1,546 +0,0 @@
# DOMWidget Development Guide
This document provides a comprehensive guide for developing custom DOMWidgets in ComfyUI using Vanilla JavaScript. DOMWidgets allow you to embed standard HTML elements (div, video, canvas, input, etc.) into ComfyUI nodes while benefitting from the frontend's automatic layout and zoom management.
## 1. Core Concepts
In ComfyUI, a `DOMWidget` extends the default LiteGraph Canvas rendering logic. It maintains an HTML layer on top of the Canvas, making complex interactions and media displays significantly easier to implement than pure Canvas drawing.
### Key APIs
* **`app.registerExtension`**: The entry point for registering extensions.
* **`getCustomWidgets`**: A hook for defining new widget types associated with specific input types.
* **`node.addDOMWidget`**: The core method to add HTML elements to a node.
---
## 2. Basic Structure
A standard custom DOMWidget extension typically follows this structure:
```javascript
import { app } from "../../scripts/app.js";
app.registerExtension({
name: "My.Custom.Extension",
async getCustomWidgets() {
return {
// Define a new widget type named "MY_WIDGET_TYPE"
MY_WIDGET_TYPE(node, inputName, inputData, app) {
// 1. Create the HTML element
const container = document.createElement("div");
container.innerHTML = "Hello <b>DOMWidget</b>!";
// 2. Setup styles (Optional but recommended)
container.style.color = "white";
container.style.backgroundColor = "#222";
container.style.padding = "5px";
// 3. Add the DOMWidget and return the result
const widget = node.addDOMWidget(inputName, "MY_WIDGET_TYPE", container, {
// Configuration options
getValue() {
return container.innerText;
},
setValue(v) {
container.innerText = v;
}
});
// 4. Return in the standard format
return { widget };
}
};
}
});
```
---
## ComfyUI Dual Rendering Modes
ComfyUI frontend supports two rendering modes:
| Mode | Description | DOM Structure |
| :--- | :--- | :--- |
| **Canvas Mode** | Traditional rendering where widgets are rendered on top of canvas using absolute positioning | Uses `.dom-widget` class on containers |
| **Vue DOM Mode** | New rendering mode where nodes and widgets are rendered as Vue components | Uses `.lg-node-widget` class on containers with dynamic IDs (e.g., `v-1-0`) |
### Mode Switching
The frontend switches between modes via `LiteGraph.vueNodesMode` boolean:
- `LiteGraph.vueNodesMode = true` → Vue DOM Mode
- `LiteGraph.vueNodesMode = false` → Canvas Mode
**Key Behavior**: Mode switching triggers DOM re-rendering WITHOUT page reload. Widget elements are destroyed and recreated, so any event listeners or references to old DOM elements become invalid.
### Testing Mode Switches via Chrome DevTools MCP
```javascript
// Trigger render mode change
LiteGraph.vueNodesMode = !LiteGraph.vueNodesMode;
// Force canvas redraw (optional but helps trigger re-render)
if (app.canvas) {
app.canvas.draw(true, true);
}
```
### Development Notes
When implementing widgets that attach event listeners or maintain external references:
1. **Use `node.onRemoved`** to clean up when node is deleted
2. **Detect DOM changes** by checking if widget input element is still in document: `document.body.contains(inputElement)`
3. **Poll for mode changes** by watching `LiteGraph.vueNodesMode` and re-initializing when it changes
4. **Use `loadedGraphNode` hook** for initial setup (guarantees DOM is fully rendered)
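A minimal sketch tying these notes together; `rebuildWidgetDom` is a hypothetical helper standing in for whatever re-creates your widget's element and listeners, and the extension/node names are illustrative:
```javascript
import { app } from "../../scripts/app.js";

app.registerExtension({
    name: "My.ModeAwareWidget", // illustrative name
    loadedGraphNode(node) {
        if (node.comfyClass !== "MyNode") return;

        let lastMode = LiteGraph.vueNodesMode;
        let inputElement = rebuildWidgetDom(node); // hypothetical helper

        // Poll for mode switches; re-init when the old element left the document
        const timer = setInterval(() => {
            const modeChanged = LiteGraph.vueNodesMode !== lastMode;
            if (modeChanged || !document.body.contains(inputElement)) {
                lastMode = LiteGraph.vueNodesMode;
                inputElement = rebuildWidgetDom(node);
            }
        }, 500);

        // Clean up when the node is deleted
        const prevOnRemoved = node.onRemoved;
        node.onRemoved = function () {
            clearInterval(timer);
            prevOnRemoved?.apply(this, arguments);
        };
    }
});
```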
---
## 3. The `addDOMWidget` API
```javascript
node.addDOMWidget(name, type, element, options)
```
### Parameters
1. **`name`**: The internal name of the widget (usually matches the input name).
2. **`type`**: The type identifier for the widget.
3. **`element`**: The actual HTMLElement to embed.
4. **`options`**: (Object) Configuration for lifecycle, sizing, and persistence.
### Common `options` Fields
| Field | Type | Description |
| :--- | :--- | :--- |
| `getValue` | `Function` | Defines how to retrieve the widget's value for serialization. |
| `setValue` | `Function` | Defines how to restore the widget's state from workflow data. |
| `getMinHeight` | `Function` | Returns the minimum height in pixels. |
| `getHeight` | `Function` | Returns the preferred height (supports numbers or percentage strings like `"50%"`). |
| `onResize` | `Function` | Callback triggered when the widget is resized. |
| `hideOnZoom`| `Boolean` | Whether to hide the DOM element when zoomed out to improve performance (default: `true`). |
| `selectOn` | `string[]` | Events on the element that should trigger node selection (default: `['focus', 'click']`). |
---
## 4. Size Control
Custom DOMWidgets must actively inform the parent Node of their size requirements to ensure the Node layout is calculated correctly and connection wires remain aligned.
### 4.1 Core Mechanism
Whether in Canvas Mode or Vue Mode, the underlying logic model (`LGraphNode`) calls the widget's `computeLayoutSize` method to determine dimensions. This logic is used to calculate the Node's total size and the position of input/output slots.
### 4.2 Controlling Height
It is recommended to use the `options` parameter to define height behavior.
**Performance Note:** Providing `getMinHeight` and `getHeight` via `options` allows the system to skip expensive DOM measurements (`getComputedStyle`) during the rendering loop. This significantly improves performance and prevents FPS drops during node resizing.
**Method 1: Using `options` (Recommended)**
```javascript
const widget = node.addDOMWidget("MyWidget", "custom", element, {
// Specify minimum height in pixels
getMinHeight: () => 150,
// Or specify preferred height (pixels or percentage string)
// getHeight: () => "50%",
});
```
**Method 2: Using CSS Variables**
You can also set specific CSS variables on the root element:
```javascript
element.style.setProperty("--comfy-widget-min-height", "150px");
// or --comfy-widget-height
```
### 4.3 Controlling Width
By default, a DOMWidget's width automatically stretches to fit the Node's width (which is determined by the Title or other Input Slots).
If you must **force the Node to be wider** to accommodate your widget, you need to override the widget instance's `computeLayoutSize` method:
```javascript
const widget = node.addDOMWidget("WideWidget", "custom", element);
// Override the default layout calculation
widget.computeLayoutSize = (targetNode) => {
return {
minHeight: 150, // Must return height
minWidth: 300 // Force the Node to be at least 300px wide
};
};
```
### 4.4 Dynamic Resizing
If your widget's content changes dynamically (e.g., expanding sections, loading images, or CSS changes), the DOM element will resize, but the Canvas-rendered Node background and Slots will not automatically follow. You must manually trigger a synchronization.
**The Update Sequence:**
Whenever the **actual rendering height** of your DOM element changes, execute the following "three-step combo":
```javascript
// 1. Calculate the new optimal size for the node based on current widget requirements
const newSize = node.computeSize();
// 2. Apply the new size to the node model (updates bounding box and slot positions)
node.setSize(newSize);
// 3. Mark the canvas as dirty to trigger a redraw in the next animation frame
node.setDirtyCanvas(true, true);
```
**Common Scenarios:**
| Scenario | Actual Height Change? | Update Required? |
| :--- | :--- | :--- |
| **Expand/Collapse content** | **Yes** | ✅ **Yes**. Prevents widget from overflowing node boundaries. |
| **Image/Video finished loading** | **Yes** | ✅ **Yes**. Initial height might be 0 until the media loads. |
| **Changing `minHeight`** | **Maybe** | ❓ **Only if** the change causes the element's actual height to shift. |
| **Changing font size/styles** | **Yes** | ✅ **Yes**. Text reflow often changes the total height. |
| **User dragging node corner** | **Yes** | ❌ **No**. LiteGraph handles this internally. |
---
## 5. State Persistence (Serialization)
### 5.1 Default Behavior
DOMWidgets have **serialization enabled** by default (`serialize` property is `true`).
* **Saving**: ComfyUI attempts to read the widget's value to save into the Workflow file.
* **Loading**: ComfyUI reads the value from the Workflow file and assigns it to the widget.
### 5.2 Custom Serialization
To make persistence work effectively (saving internal DOM state and restoring it), you must implement `getValue` and `setValue` in the `options`:
* **`getValue`**: Returns the state to be saved (Number, String, or Object).
* **`setValue`**: Receives the restored value and updates the DOM element.
**Example:**
```javascript
const inputEl = document.createElement("input");
const widget = node.addDOMWidget("MyInput", "custom", inputEl, {
// 1. Called during Save
getValue: () => {
return inputEl.value;
},
// 2. Called during Load or Copy/Paste
setValue: (value) => {
inputEl.value = value || "";
}
});
// Optional: Listen for changes to update widget.value immediately
inputEl.addEventListener("change", () => {
widget.value = inputEl.value; // Triggers callbacks
});
```
> **⚠️ Important**: For Vue-based DOM widgets with text inputs, follow the [Value Persistence Best Practices](dom-widgets/value-persistence-best-practices.md) to avoid sync issues. Key takeaway: use DOM element as single source of truth, avoid internal state variables and v-model.
### 5.3 The Restoration Mechanism (`configure`)
* **`configure(data)`**: When a Workflow is loaded, `LGraphNode` calls its `configure(data)` method.
* **`setValue` Chain**: During `configure`, the Node iterates over the saved `widgets_values` array and assigns each value (`widget.value = savedValue`). For DOMWidgets, this assignment triggers the `setValue` callback defined in your options.
Therefore, `options.setValue` is the critical hook for restoring widget state.
### 5.4 Disabling Serialization
If your widget is purely for display (e.g., a real-time monitor or generated chart) and doesn't need to save state, disable serialization to reduce workflow file size.
**Note**: You cannot set this via `options`. You must modify the widget instance directly.
```javascript
const widget = node.addDOMWidget("DisplayOnly", "custom", element);
widget.serialize = false; // Explicitly disable
```
---
## 6. Lifecycle & Events
### 6.1 `onResize`
When the Node size changes (e.g., user drags the corner), the widget can receive a notification via `options`:
```javascript
const widget = node.addDOMWidget("ResizingWidget", "custom", element, {
onResize: (w) => {
// 'w' is the widget instance
// Adjust internal DOM layout here if necessary
console.log("Widget resized");
}
});
```
### 6.2 Construction & Mounting
* **Construction**: Occurs immediately when `addDOMWidget` is called.
* **Mounting**:
* **Canvas Mode**: Appended to `.dom-widget-container` via `DomWidget.vue`.
* **Vue Mode**: Appended inside the Node component via `WidgetDOM.vue`.
* **Caution**: When `addDOMWidget` returns, the element may not be in the `document.body` yet. If you need to access layout properties like `getBoundingClientRect`, use `setTimeout` or wait for the first `onResize`.
### 6.3 Cleanup
If you create external references (like `setInterval` or global event listeners), ensure you clean them up using `node.onRemoved`:
```javascript
const prevOnRemoved = node.onRemoved;
node.onRemoved = function() {
    clearInterval(myInterval);
    // Call the original onRemoved if it existed
    prevOnRemoved?.apply(this, arguments);
};
```
---
## 7. Styling & Best Practices
### 7.1 Styling
Since DOMWidgets are placed in absolute positioned containers or managed by Vue, ensure your container handles sizing gracefully:
```javascript
container.style.width = "100%";
container.style.boxSizing = "border-box";
```
### 7.2 Path References
When importing `app`, adjust the path based on your extension's folder depth. Typically:
`import { app } from "../../scripts/app.js";`
### 7.3 Security
If setting `innerHTML` dynamically, ensure the content is sanitized or trusted to prevent XSS attacks.
### 7.4 UI Constraints for ComfyUI Custom Node Widgets
When developing DOMWidgets as internal UI widgets for ComfyUI custom nodes, keep the following constraints in mind:
#### 7.4.1 Minimize Vertical Space
ComfyUI nodes are often displayed in a compact graph view with many nodes visible simultaneously. Avoid excessive vertical spacing that could clutter the workspace.
- Keep layouts compact and efficient
- Use appropriate padding and margins (4-8px typically)
- Stack related controls vertically but avoid unnecessary spacing
#### 7.4.2 Avoid Dynamic Height Changes
Dynamic height changes (expand/collapse sections, showing/hiding content) can cause node layout recalculations and affect connection wire positioning.
- Prefer static layouts over expandable/collapsible sections
- Use tooltips or overlays for additional information instead
- If dynamic height is unavoidable, manually trigger layout updates (see Section 4.4)
#### 7.4.3 Keep UI Simple and Intuitive
As internal widgets for ComfyUI custom nodes, the UI should be accessible to users without technical implementation details.
- Use clear, user-friendly terminology (avoid "frontend/backend roll" in favor of "fixed/always randomize")
- Focus on user intent rather than implementation details
- Avoid complex interactions that may confuse users
#### 7.4.4 Forward Middle Mouse Events to Canvas
By default, when a DOM widget receives pointer events (e.g., mouse clicks, drags), these events are captured by the widget and not forwarded to the ComfyUI canvas. This prevents users from panning the workflow using the middle mouse button when the cursor is over a DOM widget.
To enable workflow panning over your widget, you should forward middle mouse events (button 1) to the canvas using the `forwardMiddleMouseToCanvas` utility function:
```javascript
import { forwardMiddleMouseToCanvas } from "./utils.js";
// In your widget creation function
const container = document.createElement("div");
container.style.width = "100%";
container.style.height = "100%";
// ... other styles ...
// Forward middle mouse events to canvas for panning
forwardMiddleMouseToCanvas(container);
const widget = node.addDOMWidget(name, type, container, { ... });
```
The `forwardMiddleMouseToCanvas` function:
- Forwards `pointerdown` events with button 1 (middle mouse button) to `app.canvas.processMouseDown`
- Forwards `pointermove` events while middle mouse button is pressed to `app.canvas.processMouseMove`
- Forwards `pointerup` events with button 1 to `app.canvas.processMouseUp`
This allows users to pan the workflow canvas even when their mouse cursor is hovering over your DOM widget.
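If you need to replicate this utility, a sketch following the description above might look like the following; the actual implementation in `utils.js` may differ in detail:
```javascript
import { app } from "../../scripts/app.js";

// Forward middle-mouse (button 1) pointer events to the LiteGraph canvas
// so users can pan the workflow while hovering over the widget.
export function forwardMiddleMouseToCanvas(element) {
    let forwarding = false;
    element.addEventListener("pointerdown", (e) => {
        if (e.button === 1) {
            forwarding = true;
            app.canvas.processMouseDown(e);
            e.preventDefault();
        }
    });
    element.addEventListener("pointermove", (e) => {
        if (forwarding) app.canvas.processMouseMove(e);
    });
    element.addEventListener("pointerup", (e) => {
        if (e.button === 1) {
            forwarding = false;
            app.canvas.processMouseUp(e);
        }
    });
}
```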
---
## 8. Event Handling in Vue DOM Render Mode
ComfyUI frontend supports two rendering modes for nodes:
- **Legacy Canvas Mode**: Traditional rendering where widgets are rendered on top of the canvas using absolute positioning
- **Vue DOM Render Mode**: New rendering mode where nodes and widgets are rendered as Vue components
In Vue DOM render mode, event handling works differently. The frontend uses `useCanvasInteractions` composable to manage event forwarding to the canvas. This can cause custom event handlers in your widgets (e.g., mouse wheel for sliders, custom drag operations) to be intercepted by the canvas.
### 8.1 Wheel Event Handling
By default in Vue DOM render mode, wheel events on widgets may be forwarded to the canvas for workflow zoom, overriding your custom wheel handlers (e.g., adjusting slider values with mouse wheel).
To fix this, use the `data-capture-wheel="true"` attribute on elements that should capture wheel events:
```vue
<!-- Vue component template -->
<div class="my-slider" data-capture-wheel="true" @wheel="onWheel">
<!-- Slider content -->
</div>
<script setup lang="ts">
const onWheel = (event: WheelEvent) => {
event.preventDefault()
// Custom wheel handling logic here
}
</script>
```
**How it works:**
- ComfyUI's `useCanvasInteractions.ts` checks `target?.closest('[data-capture-wheel="true"]')` before forwarding wheel events
- If an element (or its ancestor) has this attribute, wheel events are not forwarded to canvas
- Your custom `@wheel` handler will work as expected
**Granular control:**
- Apply `data-capture-wheel="true"` to specific interactive elements (e.g., sliders, scrollable areas)
- Widget container without this attribute will allow workflow zoom when wheel is used elsewhere
- This lets users both adjust widget values with the wheel and zoom the workflow when the wheel is used over the widget's non-interactive areas
**Example from DualRangeSlider.vue:**
```vue
<template>
<div
class="dual-range-slider"
:class="{ disabled, 'is-dragging': dragging !== null }"
data-capture-wheel="true"
@wheel="onWheel"
>
<!-- Slider tracks and handles -->
</div>
</template>
```
### 8.2 Pointer Event Handling
In Vue DOM render mode, pointer events (click, drag, etc.) may also be captured by the canvas system. For custom drag operations:
1. **Use event modifiers to stop propagation:**
```vue
<div
@pointerdown.stop="startDrag"
@pointermove.stop="onDrag"
@pointerup.stop="stopDrag"
>
```
2. **Use pointer capture for reliable drag tracking:**
```javascript
const startDrag = (event: PointerEvent) => {
const target = event.currentTarget as HTMLElement
target.setPointerCapture(event.pointerId)
// ... drag initialization
}
const stopDrag = (event: PointerEvent) => {
const target = event.currentTarget as HTMLElement
target.releasePointerCapture(event.pointerId)
// ... drag cleanup
}
```
3. **Use `touch-action: none` CSS for touch devices:**
```css
.my-draggable {
touch-action: none;
}
```
### 8.3 Compatibility Checklist
Ensure your widget works in both rendering modes:
| Feature | Canvas Mode | Vue DOM Mode | Solution |
|---------|-------------|--------------|----------|
| Mouse wheel on sliders | Works by default | Needs `data-capture-wheel` | Add `data-capture-wheel="true"` to slider elements |
| Custom drag operations | Works with `stopPropagation()` | Needs `stopPropagation()` | Use `.stop` modifier and pointer capture |
| Middle mouse panning | Manual forwarding required | Manual forwarding required | Use `forwardMiddleMouseToCanvas()` |
| Workflow zoom on widget edges | Works by default | Works by default | No action needed |
### 8.4 Testing Recommendations
Test your widget in both rendering modes:
1. Toggle between Canvas Mode and Vue DOM Mode in ComfyUI settings
2. Verify custom interactions (wheel, drag, etc.) work in both modes
3. Verify canvas interactions (zoom, pan) still work when cursor is over non-interactive widget areas
4. Test with touch devices if applicable
---
## 9. Complete Example: Text Counter
This example implements a simple widget that displays the character count of another text widget in the same node.
```javascript
import { app } from "../../scripts/app.js";
app.registerExtension({
name: "Comfy.TextCounter",
getCustomWidgets() {
return {
TEXT_COUNTER(node, inputName) {
const el = document.createElement("div");
Object.assign(el.style, {
background: "#222",
border: "1px solid #444",
padding: "8px",
borderRadius: "4px",
fontSize: "12px",
color: "#eee"
});
const label = document.createElement("span");
label.innerText = "Characters: 0";
el.appendChild(label);
const widget = node.addDOMWidget(inputName, "TEXT_COUNTER", el, {
getValue() { return ""; }, // Nothing to save
setValue(v) { }, // Nothing to restore
getMinHeight() { return 40; }
});
// Disable serialization for this display-only widget
widget.serialize = false;
// Custom method to update UI
widget.updateCount = (text) => {
label.innerText = `Characters: ${text.length}`;
};
return { widget };
}
};
},
nodeCreated(node) {
// Logic to link widgets after the node is initialized
if (node.comfyClass === "MyTextNode") {
const counterWidget = node.widgets.find(w => w.type === "TEXT_COUNTER");
const textWidget = node.widgets.find(w => w.name === "text");
if (counterWidget && textWidget) {
// Hook into the text widget's callback
const oldCallback = textWidget.callback;
textWidget.callback = function(v) {
if (oldCallback) oldCallback.apply(this, arguments);
counterWidget.updateCount(v);
};
}
}
}
});
```

View File

@@ -1,51 +0,0 @@
# Frontend DOM Fixture Strategy
This guide outlines how to reproduce the markup emitted by the Django templates while running Vitest in jsdom. The aim is to make it straightforward to write integration-style unit tests for managers and UI helpers without having to duplicate template fragments inline.
## Loading Template Markup
Vitest executes inside Node, so we can read the same HTML templates that ship with the extension:
1. Use the helper utilities from `tests/frontend/utils/domFixtures.js` to read files under the `templates/` directory.
2. Mount the returned markup into `document.body` (or any custom container) before importing the module under test so its query selectors resolve correctly.
```js
import { renderTemplate } from '../utils/domFixtures.js'; // adjust the relative path to your spec
beforeEach(() => {
renderTemplate('loras.html', {
dataset: { page: 'loras' }
});
});
```
The helper ensures the dataset is applied to the container, which mirrors how Django sets `data-page` in production.
## Working with Partial Components
Many features are implemented as template partials located under `templates/components/`. When a test only needs a fragment (for example, the progress panel or context menu markup), load the component file directly:
```js
const container = renderTemplate('components/progress_panel.html');
const progressPanel = container.querySelector('#progress-panel');
```
This pattern avoids hand-written fixture strings and keeps the tests aligned with the actual markup.
## Resetting Between Tests
The shared Vitest setup clears `document.body` and storage APIs before each test. If a suite adds additional DOM nodes outside of the body or needs to reset custom attributes mid-test, use `resetDom()` exported from `domFixtures.js`.
```js
import { resetDom } from '../utils/domFixtures.js';
afterEach(() => {
resetDom();
});
```
## Future Enhancements
- Provide typed helpers for injecting mock script tags (e.g., replicating ComfyUI globals).
- Compose higher-level fixtures that mimic specific pages (loras, checkpoints, recipes) once those managers receive dedicated suites.

View File

@@ -1,44 +0,0 @@
# LoRA & Checkpoints Filtering/Sorting Test Matrix
This matrix captures the scenarios that Phase 3 frontend tests should cover for the LoRA and Checkpoint managers. It focuses on how search, filter, sort, and duplicate badge toggles interact so future specs can share fixtures and expectations.
## Scope
- **Components**: `PageControls`, `FilterManager`, `SearchManager`, and `ModelDuplicatesManager` wiring invoked through `CheckpointsPageManager` and `LorasPageManager`.
- **Templates**: `templates/loras.html` and `templates/checkpoints.html` along with shared filter panel and toolbar partials.
- **APIs**: Requests issued through `baseModelApi.fetchModels` (via `resetAndReload`/`refreshModels`) and duplicates badge updates.
## Shared Setup Considerations
1. Render full page templates using `renderLorasPage` / `renderCheckpointsPage` helpers before importing modules so DOM queries resolve.
2. Stub storage helpers (`getStorageItem`, `setStorageItem`, `getSessionItem`, `setSessionItem`) to observe persistence behavior without mutating real storage.
3. Mock `sidebarManager` to capture refresh calls triggered after sort/filter actions.
4. Provide fake API implementations exposing `resetAndReload`, `refreshModels`, `fetchFromCivitai`, `toggleBulkMode`, and `clearCustomFilter` so control events remain asynchronous but deterministic.
5. Supply a minimal `ModelDuplicatesManager` mock exposing `toggleDuplicateMode`, `checkDuplicatesCount`, and `updateDuplicatesBadgeAfterRefresh` to validate duplicate badge wiring.
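A sketch of what this shared setup could look like in a spec file; the mocked module paths and the fixture helper location are assumptions, not the actual layout:
```js
import { vi, beforeEach } from 'vitest';
import { renderLorasPage } from '../utils/pageFixtures.js'; // hypothetical path

// Stub storage helpers so persistence can be observed without real storage
vi.mock('../../static/js/utils/storageHelpers.js', () => ({
  getStorageItem: vi.fn(),
  setStorageItem: vi.fn(),
  getSessionItem: vi.fn(),
  setSessionItem: vi.fn(),
}));

// Deterministic fake API surface used by the page controls
const apiStub = {
  resetAndReload: vi.fn().mockResolvedValue(undefined),
  refreshModels: vi.fn().mockResolvedValue(undefined),
  fetchFromCivitai: vi.fn().mockResolvedValue(undefined),
  toggleBulkMode: vi.fn(),
  clearCustomFilter: vi.fn(),
};

beforeEach(() => {
  renderLorasPage();  // mount the full page template before importing modules
  vi.clearAllMocks();
});
```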
## Scenario Matrix
| ID | Feature | Scenario | LoRAs Expectations | Checkpoints Expectations | Notes |
| --- | --- | --- | --- | --- | --- |
| F-01 | Search filter | Typing a query updates `pageState.filters.search`, persists to session, and triggers `resetAndReload` on submit | Validate `SearchManager` writes query and reloads via API stub; confirm LoRA cards pass query downstream | Same as LoRAs | Cover `enter` press and clicking search icon |
| F-02 | Tag filter | Selecting a tag chip cycles include ➜ exclude ➜ clear, updates storage, and reloads results | Tag state stored under `filters.tags[tagName] = 'include'|'exclude'`; `FilterManager.applyFilters` persists and triggers `resetAndReload(true)` | Same; ensure base model tag set is scoped to checkpoints dataset | Include removal path |
| F-03 | Base model filter | Toggling base model checkboxes updates `filters.baseModel`, persists, and reloads | Ensure only LoRA-supported models show; toggle multi-select | Ensure SDXL/Flux base models appear as expected | Capture UI state restored from storage on next init |
| F-04 | Favorites-only | Clicking favorites toggle updates session flag and calls `resetAndReload(true)` | Button gains `.active` class and API called | Same | Verify duplicates badge refresh when active |
| F-05 | Sort selection | Changing sort select saves preference (legacy + new format) and reloads | Confirm `PageControls.saveSortPreference` invoked with option and API called | Same with checkpoints-specific defaults | Cover `convertLegacySortFormat` branch |
| F-06 | Filter persistence | Re-initializing manager loads stored filters/sort and updates DOM | Filters pre-populate chips/checkboxes; favorites state restored | Same | Requires simulating repeated construction |
| F-07 | Combined filters | Applying search + tag + base model yields aggregated query params for fetch | Assert API receives merged filter payload | Same | Validate toast messaging for active filters |
| F-08 | Clearing filters | Using "Clear filters" resets state, storage, and reloads list | `FilterManager.clearFilters` empties `filters`, removes active class, shows toast | Same | Ensure favorites-only toggle unaffected |
| F-09 | Duplicate badge toggle | Pressing "Find duplicates" toggles duplicate mode and updates badge counts post-refresh | `ModelDuplicatesManager.toggleDuplicateMode` invoked and badge refresh called after API rebuild | Same plus checkpoint-specific duplicate badge dataset | Connects to future duplicate-specific specs |
| F-10 | Bulk actions menu | Opening bulk dropdown keeps filters intact and closes on outside click | Validate dropdown class toggling and no unintended reload | Same | Guard against regression when dropdown interacts with filters |
## Automation Coverage Status
- ✅ F-01 Search filter, F-02 Tag filter, F-03 Base model filter, F-04 Favorites-only toggle, F-05 Sort selection, and F-09 Duplicate badge toggle are covered by `tests/frontend/components/pageControls.filtering.test.js` for both LoRA and checkpoint pages.
- ⏳ F-06 Filter persistence, F-07 Combined filters, F-08 Clearing filters, and F-10 Bulk actions remain to be automated alongside upcoming bulk mode refinements.
## Coverage Gaps & Follow-Ups
- Write Vitest suites that exercise the matrix for both managers, sharing fixtures through page helpers to avoid duplication.
- Capture API parameter assertions by inspecting `baseModelApi.fetchModels` mocks rather than relying solely on state mutations.
- Add regression cases for legacy storage migrations (old filter keys) once fixtures exist for older payloads.
- Extend duplicate badge coverage with scenarios where `checkDuplicatesCount` signals zero duplicates versus pending calculations.

View File

@@ -1,33 +0,0 @@
# Frontend Automation Testing Roadmap
This roadmap tracks the planned rollout of automated testing for the ComfyUI LoRA Manager frontend. Each phase builds on the infrastructure introduced in this change set and records progress so future contributors can quickly identify the next tasks.
## Phase Overview
| Phase | Goal | Primary Focus | Status | Notes |
| --- | --- | --- | --- | --- |
| Phase 0 | Establish baseline tooling | Add Node test runner, jsdom environment, and seed smoke tests | ✅ Complete | Vitest + jsdom configured, example state tests committed |
| Phase 1 | Cover state management logic | Unit test selectors, derived data helpers, and storage utilities under `static/js/state` and `static/js/utils` | ✅ Complete | Storage helpers and state selectors now exercised via deterministic suites |
| Phase 2 | Test AppCore orchestration | Simulate page bootstrapping, infinite scroll hooks, and manager registration using JSDOM DOM fixtures | ✅ Complete | AppCore initialization + page feature suites now validate manager wiring, infinite scroll hooks, and onboarding gating |
| Phase 3 | Validate page-specific managers | Add focused suites for `loras`, `checkpoints`, `embeddings`, and `recipes` managers covering filtering, sorting, and bulk actions | ✅ Complete | LoRA/checkpoint suites expanded; embeddings + recipes managers now covered with initialization, filtering, and duplicate workflows |
| Phase 4 | Interaction-level regression tests | Exercise template fragments, modals, and menus to ensure UI wiring remains intact | ✅ Complete | Vitest DOM suites cover NSFW selector, recipe modal editing, and global context menus |
| Phase 5 | Continuous integration & coverage | Integrate frontend tests into CI workflow and track coverage metrics | ✅ Complete | CI workflow runs Vitest and aggregates V8 coverage into `coverage/frontend` via a dedicated script |
## Next Steps Checklist
- [x] Expand unit tests for `storageHelpers` covering migrations and namespace behavior.
- [x] Document DOM fixture strategy for reproducing template structures in tests.
- [x] Prototype AppCore initialization test that verifies manager bootstrapping with stubbed dependencies.
- [x] Add AppCore page feature suite exercising context menu creation and infinite scroll registration via DOM fixtures.
- [x] Extend AppCore orchestration tests to cover manager wiring, bulk menu setup, and onboarding gating scenarios.
- [x] Add interaction regression suites for context menus and recipe modals to complete Phase 4.
- [x] Evaluate integrating coverage reporting once test surface grows (> 20 specs).
- [x] Create shared fixtures for the loras and checkpoints pages once dedicated manager suites are added.
- [x] Draft focused test matrix for loras/checkpoints manager filtering and sorting paths ahead of Phase 3.
- [x] Implement LoRAs manager filtering/sorting specs for scenarios F-01–F-05 & F-09; queue remaining edge cases after duplicate/bulk flows stabilize.
- [x] Implement checkpoints manager filtering/sorting specs for scenarios F-01–F-05 & F-09; cover remaining paths alongside bulk action work.
- [x] Implement checkpoints page manager smoke tests covering initialization and duplicate badge wiring.
- [x] Outline focused checkpoints scenarios (filtering, sorting, duplicate badge toggles) to feed into the shared test matrix.
- [ ] Add duplicate badge regression coverage for zero/pending states after API refreshes.
Maintaining this roadmap alongside code changes will make it easier to append new automated test tasks and update their progress.

View File

@@ -1,28 +0,0 @@
# Library Switching and Preview Routes
Library switching no longer requires restarting the backend. The preview
thumbnails shown in the UI are now served through a dynamic endpoint that
resolves files against the folders registered for the active library at request
time. This allows the multi-library flow to update model roots without touching
the aiohttp router, so previews remain available immediately after a switch.
## How the dynamic preview endpoint works
* `config.get_preview_static_url()` now returns `/api/lm/previews?path=<encoded>`
for any preview path. The raw filesystem location is URL encoded so that it
can be passed through the query string without leaking directory structure in
the route itself.【F:py/config.py†L398-L404】
* `PreviewRoutes` exposes the `/api/lm/previews` handler which validates the
decoded path against the directories registered for the current library. The
request is rejected if it falls outside those roots or if the file does not
exist.【F:py/routes/preview_routes.py†L5-L21】【F:py/routes/handlers/preview_handlers.py†L9-L48】
* `Config` keeps an up-to-date cache of allowed preview roots. Every time a
library is applied the cache is rebuilt using the declared LoRA, checkpoint
and embedding directories (including symlink targets). The validation logic
checks preview requests against this cache.【F:py/config.py†L51-L68】【F:py/config.py†L180-L248】【F:py/config.py†L332-L346】
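A condensed sketch of this validation idea, assuming the allowed-roots cache is reachable from the application object; the real handler in `py/routes/handlers/preview_handlers.py` differs in detail:
```python
import os
from urllib.parse import unquote
from aiohttp import web

async def handle_preview(request: web.Request) -> web.StreamResponse:
    file_path = os.path.realpath(unquote(request.query.get("path", "")))

    # Stand-in for the preview-root cache that Config rebuilds per library
    allowed_roots = request.app["allowed_preview_roots"]
    inside = any(
        file_path.startswith(os.path.realpath(root) + os.sep)
        for root in allowed_roots
    )
    if not inside or not os.path.isfile(file_path):
        raise web.HTTPNotFound()
    return web.FileResponse(file_path)
```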
Both the ComfyUI runtime (`LoraManager.add_routes`) and the standalone launcher
(`StandaloneLoraManager.add_routes`) register the new preview routes instead of
mounting a static directory per root. Switching libraries therefore works
without restarting the application, and preview URLs generated before or after a
switch continue to resolve correctly.【F:py/lora_manager.py†L21-L82】【F:standalone.py†L302-L315】

View File

@@ -1,71 +0,0 @@
# Priority Tags Configuration Guide
This guide explains how to tailor the tag priority order that powers folder naming and tag suggestions in the LoRA Manager. You only need to edit the comma-separated list of entries shown in the **Priority Tags** field for each model type.
## 1. Pick the Model Type
In the **Priority Tags** dialog you will find one tab per model type (LoRA, Checkpoint, Embedding). Select the tab you want to update; changes on one tab do not affect the others.
## 2. Edit the Entry List
Inside the textarea you will see a line similar to:
```
character, concept, style(toon|toon_style)
```
This entire line is the **entry list**. Replace it with your own ordered list.
### Entry Rules
Each entry is separated by a comma, in order from highest to lowest priority:
- **Canonical tag only:** `realistic`
- **Canonical tag with aliases:** `character(char|chars)`
Aliases live inside `()` and are separated with `|`. The canonical name is what appears in folder names and UI suggestions when any of the aliases are detected. Matching is case-insensitive.
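As a sketch of this grammar, entries could be parsed along the following lines; this is illustrative, not the app's actual parser:
```python
import re

# canonical name, optionally followed by (alias|alias|...)
ENTRY_RE = re.compile(r"^\s*([^(,]+?)\s*(?:\(([^)]*)\))?\s*$")

def parse_entry_list(text: str) -> list[tuple[str, set[str]]]:
    """Turn 'character(char|chars), style' into [(canonical, {aliases}), ...]."""
    entries = []
    for raw in text.split(","):
        match = ENTRY_RE.match(raw)
        if not match:
            raise ValueError(f"Invalid entry: {raw!r}")
        canonical = match.group(1).strip()
        aliases = {a.strip() for a in (match.group(2) or "").split("|") if a.strip()}
        entries.append((canonical, aliases))
    return entries
```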
## Use `{first_tag}` in Path Templates
When your path template contains `{first_tag}`, the app picks a folder name based on your priority list and the model's own tags:
- It checks the priority list from top to bottom. If a canonical tag or any of its aliases appear in the model tags, that canonical name becomes the folder name.
- If no priority tags are found but the model has tags, the very first model tag is used.
- If the model has no tags at all, the folder falls back to `no tags`.
### Example
With a template like `/{model_type}/{first_tag}` and the priority entry list `character(char|chars), style(anime|toon)`:
| Model Tags | Folder Name | Why |
| --- | --- | --- |
| `["chars", "female"]` | `character` | `chars` matches the `character` alias, so the canonical wins. |
| `["anime", "portrait"]` | `style` | `anime` hits the `style` entry, so its canonical label is used. |
| `["portrait", "bw"]` | `portrait` | No priority match, so the first model tag is used. |
| `[]` | `no tags` | Nothing to match, so the fallback is applied. |
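The selection rules above boil down to a small priority lookup. Here is an illustrative Python sketch reusing the `(canonical, aliases)` pairs from the parsing sketch earlier; it is not the actual implementation:
```python
def resolve_first_tag(model_tags, priority_entries):
    """priority_entries: ordered (canonical, {aliases}) pairs, highest first."""
    lowered = [t.lower() for t in model_tags]
    for canonical, aliases in priority_entries:
        names = {canonical.lower(), *(a.lower() for a in aliases)}
        if any(tag in names for tag in lowered):
            return canonical          # first priority match wins
    if model_tags:
        return model_tags[0]          # no priority match: first model tag
    return "no tags"                  # no tags at all: fallback

entries = [("character", {"char", "chars"}), ("style", {"anime", "toon"})]
assert resolve_first_tag(["chars", "female"], entries) == "character"
assert resolve_first_tag(["portrait", "bw"], entries) == "portrait"
```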
## 3. Save the Settings
After editing the entry list, press **Enter** to save. Use **Shift+Enter** whenever you need a new line. Clicking outside the field also saves automatically. A success toast confirms the update.
## Examples
| Goal | Entry List |
| --- | --- |
| Prefer people over styles | `character, portraits, style(anime\|toon)` |
| Group sci-fi variants | `sci-fi(scifi\|science_fiction), cyberpunk(cyber\|punk)` |
| Alias shorthand tags | `realistic(real\|realisim), photorealistic(photo_real)` |
## Tips
- Keep canonical names short and meaningful—they become folder names.
- Place the most important categories first; the first match wins.
- Avoid duplicate canonical names within the same list; only the first instance is used.
## Troubleshooting
- **Unexpected folder name?** Check that the canonical name you want is placed before other matches.
- **Alias not working?** Ensure the alias is inside parentheses and separated with `|`, e.g. `character(char|chars)`.
- **Validation error?** Look for missing parentheses or stray commas. Each entry must follow the `canonical(alias|alias)` pattern or just `canonical`.
With these basics you can quickly adapt Priority Tags to match your library's organization style.

View File

@@ -1,69 +0,0 @@
# Danbooru/E621 Tag Categories Reference
Reference for category values used in `danbooru_e621_merged.csv` tag files.
## Category Value Mapping
### Danbooru Categories
| Value | Description |
|-------|-------------|
| 0 | General |
| 1 | Artist |
| 2 | *(unused)* |
| 3 | Copyright |
| 4 | Character |
| 5 | Meta |
### e621 Categories
| Value | Description |
|-------|-------------|
| 6 | *(unused)* |
| 7 | General |
| 8 | Artist |
| 9 | Contributor |
| 10 | Copyright |
| 11 | Character |
| 12 | Species |
| 13 | *(unused)* |
| 14 | Meta |
| 15 | Lore |
## Danbooru Category Colors
| Description | Normal Color | Hover Color |
|-------------|--------------|-------------|
| General | #009be6 | #4bb4ff |
| Artist | #ff8a8b | #ffc3c3 |
| Copyright | #c797ff | #ddc9fb |
| Character | #35c64a | #93e49a |
| Meta | #ead084 | #f7e7c3 |
## CSV Column Structure
Each row in the merged CSV file contains 4 columns:
| Column | Description | Example |
|--------|-------------|---------|
| 1 | Tag name | `1girl`, `highres`, `solo` |
| 2 | Category value (0-15) | `0`, `5`, `7` |
| 3 | Post count | `6008644`, `5256195` |
| 4 | Aliases (comma-separated, quoted) | `"1girls,sole_female"`, empty string |
### Sample Data
```
1girl,0,6008644,"1girls,sole_female"
highres,5,5256195,"high_res,high_resolution,hires"
solo,0,5000954,"alone,female_solo,single,solo_female"
long_hair,0,4350743,"/lh,longhair"
mammal,12,3437444,"cetancodont,cetancodontamorph,feralmammal"
anthro,7,3381927,"adult_anthro,anhtro,antho,anthro_horse"
skirt,0,1557883,
```
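A minimal reader for this layout, assuming every row has exactly the four columns shown and the file has no header row:
```python
import csv

def load_tags(path="danbooru_e621_merged.csv"):
    tags = {}
    with open(path, newline="", encoding="utf-8") as f:
        for name, category, count, aliases in csv.reader(f):
            tags[name] = {
                "category": int(category),   # 0-15, see the mapping above
                "count": int(count),
                "aliases": aliases.split(",") if aliases else [],
            }
    return tags
```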
## Source
- [PR #312: Add danbooru_e621_merged.csv](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/pull/312)
- [DraconicDragon/dbr-e621-lists-archive](https://github.com/DraconicDragon/dbr-e621-lists-archive)

View File

@@ -1,191 +0,0 @@
# Model Type Field Refactor - Remaining Work Checklist
> **Status**: Phases 1-4 complete | **Created**: 2026-01-30
> **Related files**: `py/utils/models.py`, `py/services/model_query.py`, `py/services/checkpoint_scanner.py`, etc.
---
## Overview
This refactor resolves the inconsistent semantics of the `model_type` field. The system has two levels of "type":
1. **Scanner Type** (`scanner_type`): the architecture-level category - `lora`, `checkpoint`, `embedding`
2. **Sub Type** (`sub_type`): the business-level subtype - `lora`/`locon`/`dora`, `checkpoint`/`diffusion_model`, `embedding`
The goal is to use `sub_type` consistently for the subtype, retaining `model_type` as a backward-compatible alias.
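For orientation, a minimal sketch of how the two levels relate at resolution time; the function name matches `resolve_sub_type`, but the body is illustrative (simplified from the notes at the end of this document), not the actual implementation:
```python
def resolve_sub_type(scanner_type: str, civitai_type: str | None,
                     model_root: str | None) -> str:
    if scanner_type == "lora":
        # lora/locon/dora come straight from CivitAI's version_info.model.type
        return (civitai_type or "lora").lower()
    if scanner_type == "checkpoint":
        # diffusion_model is not available from the CivitAI API; it has to be
        # inferred from which model root the file lives under
        if model_root and "diffusion" in model_root:
            return "diffusion_model"
        return "checkpoint"
    return "embedding"
```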
---
## Completed Work ✅
### Phase 1: Backend field renames
- [x] `CheckpointMetadata.model_type` → `sub_type`
- [x] `EmbeddingMetadata.model_type` → `sub_type`
- [x] `model_scanner.py` `_build_cache_entry()` handles both `sub_type` and `model_type`
### Phase 2: Query logic updates
- [x] `model_query.py` gained `resolve_sub_type()` and `normalize_sub_type()`
- [x] ~~Kept the backward-compatible aliases `resolve_civitai_model_type` and `normalize_civitai_model_type`~~ (removed in Phase 5)
- [x] `ModelFilterSet.apply()` updated to use the new resolver functions
### Phase 3: API response updates
- [x] `LoraService.format_response()` returns `sub_type` ~~+ `model_type`~~ (`model_type` removed)
- [x] `CheckpointService.format_response()` returns `sub_type` ~~+ `model_type`~~ (`model_type` removed)
- [x] `EmbeddingService.format_response()` returns `sub_type` ~~+ `model_type`~~ (`model_type` removed)
### Phase 4: Frontend updates
- [x] `constants.js` gained `MODEL_SUBTYPE_DISPLAY_NAMES`
- [x] `MODEL_TYPE_DISPLAY_NAMES` kept as an alias
### Phase 5: Remove deprecated code ✅
- [x] Removed the `model_type` backward-compatibility code from `ModelScanner._build_cache_entry()`
- [x] Removed the `model_type` compatibility handling from `CheckpointScanner`
- [x] Removed the `resolve_civitai_model_type` and `normalize_civitai_model_type` aliases from `model_query.py`
- [x] Updated the frontend `FilterManager.js` to use `sub_type` (already using `MODEL_SUBTYPE_DISPLAY_NAMES`)
- [x] Updated all related tests
---
## Remaining Work ⏳
### Phase 5: Remove deprecated code ✅ **Complete**
All Phase 5 cleanup is finished:
#### 5.1 Removed the `model_type` backward-compatibility code ✅
- Removed the `model_type` assignment from `ModelScanner._build_cache_entry()`
- Only the `sub_type` field is set now
#### 5.2 Removed CheckpointScanner's model_type compatibility handling ✅
- `adjust_metadata()` now only handles `sub_type`
- `adjust_cached_entry()` now only sets `sub_type`
#### 5.3 Removed the backward-compatibility aliases from model_query ✅
- Removed `resolve_civitai_model_type = resolve_sub_type`
- Removed `normalize_civitai_model_type = normalize_sub_type`
#### 5.4 Frontend cleanup ✅
- `FilterManager.js` already uses `MODEL_SUBTYPE_DISPLAY_NAMES` (via the `MODEL_TYPE_DISPLAY_NAMES` alias)
- The API list endpoint now returns only `sub_type`, no longer `model_type`
- `ModelCard.js` now sets `card.dataset.sub_type` (shared across all model types)
- `CheckpointContextMenu.js` now reads `card.dataset.sub_type`
- `MoveManager.js` now handles `cache_entry.sub_type`
- `RecipeModal.js` now reads `checkpoint.sub_type`
---
## Database Migration Assessment
### Current state
- `persistent_model_cache.py` stores the raw CivitAI type in the `civitai_model_type` column
- The `sub_type` in a cache entry is computed dynamically at runtime
- The database schema **needs no immediate change**
### Optional future optimization
```sql
-- Optional: add a sub_type column to the models table
-- (same data as civitai_model_type, but clearer semantics)
ALTER TABLE models ADD COLUMN sub_type TEXT;
-- Data migration
UPDATE models SET sub_type = civitai_model_type WHERE sub_type IS NULL;
```
**Recommendation**: if the `sub_type` column is ever added, do it together with Phase 5.
---
## Test Coverage
### New/updated test files (all passing ✅)
| Test file | Tests | Covers |
|---------|------|---------|
| `tests/utils/test_models_sub_type.py` | 7 | Metadata sub_type field |
| `tests/services/test_model_query_sub_type.py` | 19 | sub_type resolution and filtering |
| `tests/services/test_checkpoint_scanner_sub_type.py` | 6 | CheckpointScanner sub_type |
| `tests/services/test_service_format_response_sub_type.py` | 6 | sub_type included in API responses |
| `tests/services/test_checkpoint_scanner.py` | 1 | Checkpoint cache sub_type |
| `tests/services/test_model_scanner.py` | 1 | adjust_cached_entry hook |
| `tests/services/test_download_manager.py` | 1 | Checkpoint download sub_type |
### Tests still worth adding (optional)
- [ ] Integration test: verify frontend filtering uses the sub_type field
- [ ] Database migration test (if the optional optimization is carried out)
- [ ] Performance test: confirm resolve_sub_type's priority lookup has no significant performance impact
---
## Compatibility Checklist
### Done ✅
- [x] Frontend code fully switched to the `sub_type` field
- [x] API list endpoint no longer returns `model_type`, only `sub_type`
- [x] Backend cache entries dropped `model_type` and keep only `sub_type`
- [x] All tests updated and passing
- [x] Documentation updated
---
## Related Files
### Core files
```
py/utils/models.py
py/utils/constants.py
py/services/model_scanner.py
py/services/model_query.py
py/services/checkpoint_scanner.py
py/services/base_model_service.py
py/services/lora_service.py
py/services/checkpoint_service.py
py/services/embedding_service.py
```
### Frontend files
```
static/js/utils/constants.js
static/js/managers/FilterManager.js
static/js/managers/MoveManager.js
static/js/components/shared/ModelCard.js
static/js/components/ContextMenu/CheckpointContextMenu.js
static/js/components/RecipeModal.js
```
### Test files
```
tests/utils/test_models_sub_type.py
tests/services/test_model_query_sub_type.py
tests/services/test_checkpoint_scanner_sub_type.py
tests/services/test_service_format_response_sub_type.py
```
---
## Risk Assessment
| Risk | Impact | Mitigation |
|-------|------|---------|
| ~~Third-party code depending on `model_type`~~ | ~~High~~ | ~~Keep the alias for at least one major version~~ ✅ removal complete |
| ~~Database schema change~~ | ~~Medium~~ | ~~Defer schema changes; compute at runtime only~~ ✅ no change needed |
| ~~Frontend filtering breaking~~ | ~~Medium~~ | ~~Comprehensive integration test coverage~~ ✅ tests pass |
| CivitAI API changes | Low | Keep the multi-source resolution strategy |
---
## Timeline
- **v1.x**: Phases 1-4 complete, backward compatible
- **v2.0 (current)**: ✅ Phase 5 complete - `model_type` compatibility code removed
  - API list endpoint returns only `sub_type`
  - Cache entries keep only `sub_type`
  - Removed the `resolve_civitai_model_type` and `normalize_civitai_model_type` aliases
---
## Notes
- During the refactor we concluded the `civitai_model_type` column name is acceptable, but semantically it should be read as storing the raw type value returned by the CivitAI API
- A checkpoint's `diffusion_model` sub_type cannot be obtained from the CivitAI API; it must be inferred from the file path (model root)
- A LoRA's sub_type (lora/locon/dora) comes directly from `version_info.model.type` in the CivitAI API

View File

@@ -1,26 +0,0 @@
# Backend Test Coverage Notes
## Pytest Execution
- Command: `python -m pytest`
- Result: All 283 collected tests passed in the current environment.
- Coverage tooling (`pytest-cov`/`coverage`) is unavailable in the offline sandbox, so line-level metrics could not be generated. The earlier attempt to install `pytest-cov` failed because the package index cannot be reached from the container.
## High-Priority Gaps to Address
### 1. Standalone server bootstrapping
* **Source:** [`standalone.py`](../../standalone.py)
* **Why it matters:** The standalone entry point wires together the aiohttp application, static asset routes, model-route registration, and configuration validation. None of these behaviours are covered by automated tests, leaving regressions in bootstrapping logic undetected.
* **Suggested coverage:** Add integration-style tests that instantiate `StandaloneServer`/`StandaloneLoraManager` with temporary settings and assert that routes (HTTP + websocket) are registered, configuration warnings fire for missing paths, and the mock ComfyUI shims behave as expected.
### 2. Model service registration factory
* **Source:** [`py/services/model_service_factory.py`](../../py/services/model_service_factory.py)
* **Why it matters:** The factory coordinates which model services and routes the API exposes, including error handling when unknown model types are requested. No current tests verify registration, memoization of route instances, or the logging path on failures.
* **Suggested coverage:** Unit tests that exercise `register_model_type`, `get_route_instance`, error branches in `get_service_class`/`get_route_class`, and `setup_all_routes` when a route setup raises. Use lightweight fakes to confirm the logger is called and state is cleared via `clear_registrations`.
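A sketch of what such a unit test could look like, assuming a `ModelServiceFactory` class exposing the methods named above (the signatures are guesses, not the actual API):
```python
import pytest
from py.services.model_service_factory import ModelServiceFactory

class FakeService: ...
class FakeRoutes: ...

def test_register_and_resolve_model_type():
    ModelServiceFactory.clear_registrations()
    ModelServiceFactory.register_model_type("lora", FakeService, FakeRoutes)
    assert ModelServiceFactory.get_service_class("lora") is FakeService
    assert ModelServiceFactory.get_route_class("lora") is FakeRoutes

def test_unknown_model_type_raises():
    ModelServiceFactory.clear_registrations()
    with pytest.raises(Exception):
        ModelServiceFactory.get_service_class("unknown")
```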
### 3. Server-side i18n helper
* **Source:** [`py/services/server_i18n.py`](../../py/services/server_i18n.py)
* **Why it matters:** Template rendering relies on the `ServerI18nManager` to load locale JSON, perform key lookups, and format parameters. The fallback logic (dot-notation lookup, English fallbacks, placeholder substitution) is untested, so malformed locale files or regressions in placeholder handling would slip through.
* **Suggested coverage:** Tests that load fixture locale dictionaries, assert `set_locale` fallbacks, verify nested key resolution and placeholder substitution, and ensure missing keys return the original identifier.
## Next Steps
Prioritize creating focused unit tests around these modules, then re-run pytest once coverage tooling is available to confirm the new tests close the identified gaps.

Binary image assets changed: two files added (669 KiB each) and three removed (657-739 KiB); their diffs are not shown.

Several oversized file diffs were suppressed by the viewer, including the regenerated 2575-line package-lock.json.

View File

@@ -1,17 +0,0 @@
{
"name": "comfyui-lora-manager-frontend",
"version": "0.1.0",
"private": true,
"type": "module",
"scripts": {
"test": "npm run test:js && npm run test:vue",
"test:js": "vitest run",
"test:vue": "cd vue-widgets && npx vitest run",
"test:watch": "vitest",
"test:coverage": "node scripts/run_frontend_coverage.js"
},
"devDependencies": {
"jsdom": "^24.0.0",
"vitest": "^1.6.0"
}
}

View File

@@ -1,12 +0,0 @@
"""Project namespace package."""
# pytest's internal compatibility layer still imports ``py.path.local`` from the
# historical ``py`` dependency. Because this project reuses the ``py`` package
# name, we expose a minimal shim so ``py.path.local`` resolves to ``pathlib.Path``
# during test runs without pulling in the external dependency.
from pathlib import Path
from types import SimpleNamespace
path = SimpleNamespace(local=Path)
__all__ = ["path"]

File diff suppressed because it is too large Load Diff

View File

@@ -2,15 +2,7 @@ import asyncio
import sys
import os
import logging
from .utils.logging_config import setup_logging
# Check if we're in standalone mode
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
# Only setup logging prefix if not in standalone mode
if not standalone_mode:
setup_logging()
from pathlib import Path
from server import PromptServer # type: ignore
from .config import config
@@ -19,47 +11,16 @@ from .routes.recipe_routes import RecipeRoutes
from .routes.stats_routes import StatsRoutes
from .routes.update_routes import UpdateRoutes
from .routes.misc_routes import MiscRoutes
from .routes.preview_routes import PreviewRoutes
from .routes.example_images_routes import ExampleImagesRoutes
from .services.service_registry import ServiceRegistry
from .services.settings_manager import get_settings_manager
from .services.settings_manager import settings
from .utils.example_images_migration import ExampleImagesMigration
from .services.websocket_manager import ws_manager
from .services.example_images_cleanup_service import ExampleImagesCleanupService
from .middleware.csp_middleware import relax_csp_for_remote_media
logger = logging.getLogger(__name__)
HEADER_SIZE_LIMIT = 16384
def _sanitize_size_limit(value):
"""Return a non-negative integer size for ``handler_args`` comparisons."""
try:
coerced = int(value)
except (TypeError, ValueError):
return 0
return coerced if coerced >= 0 else 0
class _SettingsProxy:
def __init__(self):
self._manager = None
def _resolve(self):
if self._manager is None:
self._manager = get_settings_manager()
return self._manager
def get(self, *args, **kwargs):
return self._resolve().get(*args, **kwargs)
def __getattr__(self, item):
return getattr(self._resolve(), item)
settings = _SettingsProxy()
# Check if we're in standalone mode
STANDALONE_MODE = 'nodes' not in sys.modules
class LoraManager:
"""Main entry point for LoRA Manager plugin"""
@@ -69,41 +30,6 @@ class LoraManager:
"""Initialize and register all routes using the new refactored architecture"""
app = PromptServer.instance.app
if relax_csp_for_remote_media not in app.middlewares:
# Ensure CSP relaxer executes after ComfyUI's block_external_middleware so it can
# see and extend the restrictive header instead of being overwritten by it.
block_middleware_index = next(
(
idx
for idx, middleware in enumerate(app.middlewares)
if getattr(middleware, "__name__", "") == "block_external_middleware"
),
None,
)
if block_middleware_index is None:
app.middlewares.append(relax_csp_for_remote_media)
else:
app.middlewares.insert(block_middleware_index, relax_csp_for_remote_media)
# Increase allowed header sizes so browsers with large localhost cookie
# jars (multiple UIs on 127.0.0.1) don't trip aiohttp's 8KB default
# limits. Cookies for unrelated apps are still sent to the plugin and
# may otherwise raise LineTooLong errors when the request parser reads
# them. Preserve any previously configured handler arguments while
# ensuring our minimum sizes are applied.
handler_args = getattr(app, "_handler_args", {}) or {}
updated_handler_args = dict(handler_args)
updated_handler_args["max_field_size"] = max(
_sanitize_size_limit(handler_args.get("max_field_size", 0)),
HEADER_SIZE_LIMIT,
)
updated_handler_args["max_line_size"] = max(
_sanitize_size_limit(handler_args.get("max_line_size", 0)),
HEADER_SIZE_LIMIT,
)
app._handler_args = updated_handler_args
# Configure aiohttp access logger to be less verbose
logging.getLogger('aiohttp.access').setLevel(logging.WARNING)
@@ -123,18 +49,103 @@ class LoraManager:
asyncio_logger = logging.getLogger("asyncio")
asyncio_logger.addFilter(ConnectionResetFilter())
added_targets = set() # Track already added target paths
# Add static route for example images if the path exists in settings
example_images_path = settings.get('example_images_path')
logger.info(f"Example images path: {example_images_path}")
if example_images_path and os.path.exists(example_images_path):
app.router.add_static('/example_images_static', example_images_path)
logger.info(f"Added static route for example images: /example_images_static -> {example_images_path}")
# Add static route for locales JSON files
if os.path.exists(config.i18n_path):
app.router.add_static('/locales', config.i18n_path)
logger.info(f"Added static route for locales: /locales -> {config.i18n_path}")
# Add static routes for each lora root
for idx, root in enumerate(config.loras_roots, start=1):
preview_path = f'/loras_static/root{idx}/preview'
real_root = root
if root in config._path_mappings.values():
for target, link in config._path_mappings.items():
if link == root:
real_root = target
break
# Add static route for original path
app.router.add_static(preview_path, real_root)
logger.info(f"Added static route {preview_path} -> {real_root}")
# Record route mapping
config.add_route_mapping(real_root, preview_path)
added_targets.add(real_root)
# Add static routes for each checkpoint root
for idx, root in enumerate(config.base_models_roots, start=1):
preview_path = f'/checkpoints_static/root{idx}/preview'
real_root = root
if root in config._path_mappings.values():
for target, link in config._path_mappings.items():
if link == root:
real_root = target
break
# Add static route for original path
app.router.add_static(preview_path, real_root)
logger.info(f"Added static route {preview_path} -> {real_root}")
# Record route mapping
config.add_route_mapping(real_root, preview_path)
added_targets.add(real_root)
# Add static routes for each embedding root
for idx, root in enumerate(config.embeddings_roots, start=1):
preview_path = f'/embeddings_static/root{idx}/preview'
real_root = root
if root in config._path_mappings.values():
for target, link in config._path_mappings.items():
if link == root:
real_root = target
break
# Add static route for original path
app.router.add_static(preview_path, real_root)
logger.info(f"Added static route {preview_path} -> {real_root}")
# Record route mapping
config.add_route_mapping(real_root, preview_path)
added_targets.add(real_root)
# Add static routes for symlink target paths
link_idx = {
'lora': 1,
'checkpoint': 1,
'embedding': 1
}
for target_path, link_path in config._path_mappings.items():
if target_path not in added_targets:
# Determine if this is a checkpoint, lora, or embedding link based on path
is_checkpoint = any(cp_root in link_path for cp_root in config.base_models_roots)
is_checkpoint = is_checkpoint or any(cp_root in target_path for cp_root in config.base_models_roots)
is_embedding = any(emb_root in link_path for emb_root in config.embeddings_roots)
is_embedding = is_embedding or any(emb_root in target_path for emb_root in config.embeddings_roots)
if is_checkpoint:
route_path = f'/checkpoints_static/link_{link_idx["checkpoint"]}/preview'
link_idx["checkpoint"] += 1
elif is_embedding:
route_path = f'/embeddings_static/link_{link_idx["embedding"]}/preview'
link_idx["embedding"] += 1
else:
route_path = f'/loras_static/link_{link_idx["lora"]}/preview'
link_idx["lora"] += 1
try:
app.router.add_static(route_path, Path(target_path).resolve(strict=False))
logger.info(f"Added static route for link target {route_path} -> {target_path}")
config.add_route_mapping(target_path, route_path)
added_targets.add(target_path)
except Exception as e:
logger.warning(f"Failed to add static route on initialization for {target_path}: {e}")
continue
# Add static route for plugin assets
app.router.add_static('/loras_static', config.static_path)
@@ -150,8 +161,7 @@ class LoraManager:
RecipeRoutes.setup_routes(app)
UpdateRoutes.setup_routes(app)
MiscRoutes.setup_routes(app)
ExampleImagesRoutes.setup_routes(app, ws_manager=ws_manager)
PreviewRoutes.setup_routes(app)
ExampleImagesRoutes.setup_routes(app)
# Setup WebSocket routes that are shared across all model types
app.router.add_get('/ws/fetch-progress', ws_manager.handle_connection)
@@ -164,6 +174,8 @@ class LoraManager:
# Add cleanup
app.on_shutdown.append(cls._cleanup)
logger.info(f"LoRA Manager: Set up routes for {len(ModelServiceFactory.get_registered_types())} model types: {', '.join(ModelServiceFactory.get_registered_types())}")
@classmethod
async def _initialize_services(cls):
"""Initialize all services using the ServiceRegistry"""
@@ -173,9 +185,6 @@ class LoraManager:
# Register DownloadManager with ServiceRegistry
await ServiceRegistry.get_download_manager()
from .services.metadata_service import initialize_metadata_providers
await initialize_metadata_providers()
# Initialize WebSocket manager
await ServiceRegistry.get_websocket_manager()
@@ -189,188 +198,29 @@ class LoraManager:
recipe_scanner = await ServiceRegistry.get_recipe_scanner()
# Create low-priority initialization tasks
init_tasks = [
asyncio.create_task(lora_scanner.initialize_in_background(), name='lora_cache_init'),
asyncio.create_task(checkpoint_scanner.initialize_in_background(), name='checkpoint_cache_init'),
asyncio.create_task(embedding_scanner.initialize_in_background(), name='embedding_cache_init'),
asyncio.create_task(recipe_scanner.initialize_in_background(), name='recipe_cache_init')
]
asyncio.create_task(lora_scanner.initialize_in_background(), name='lora_cache_init')
asyncio.create_task(checkpoint_scanner.initialize_in_background(), name='checkpoint_cache_init')
asyncio.create_task(embedding_scanner.initialize_in_background(), name='embedding_cache_init')
asyncio.create_task(recipe_scanner.initialize_in_background(), name='recipe_cache_init')
await ExampleImagesMigration.check_and_run_migrations()
# Schedule post-initialization tasks to run after scanners complete
asyncio.create_task(
cls._run_post_initialization_tasks(init_tasks),
name='post_init_tasks'
)
logger.debug("LoRA Manager: All services initialized and background tasks scheduled")
logger.info("LoRA Manager: All services initialized and background tasks scheduled")
except Exception as e:
logger.error(f"LoRA Manager: Error initializing services: {e}", exc_info=True)
@classmethod
async def _run_post_initialization_tasks(cls, init_tasks):
"""Run post-initialization tasks after all scanners complete"""
try:
logger.debug("LoRA Manager: Waiting for scanner initialization to complete...")
# Wait for all scanner initialization tasks to complete
await asyncio.gather(*init_tasks, return_exceptions=True)
logger.debug("LoRA Manager: Scanner initialization completed, starting post-initialization tasks...")
# Run post-initialization tasks
post_tasks = [
asyncio.create_task(cls._cleanup_backup_files(), name='cleanup_bak_files'),
# Add more post-initialization tasks here as needed
# asyncio.create_task(cls._another_post_task(), name='another_task'),
]
# Run all post-initialization tasks
results = await asyncio.gather(*post_tasks, return_exceptions=True)
# Log results
for i, result in enumerate(results):
task_name = post_tasks[i].get_name()
if isinstance(result, Exception):
logger.error(f"Post-initialization task '{task_name}' failed: {result}")
else:
logger.debug(f"Post-initialization task '{task_name}' completed successfully")
logger.debug("LoRA Manager: All post-initialization tasks completed")
except Exception as e:
logger.error(f"LoRA Manager: Error in post-initialization tasks: {e}", exc_info=True)
@classmethod
async def _cleanup_backup_files(cls):
"""Clean up .bak files in all model roots"""
try:
logger.debug("Starting cleanup of .bak files in model directories...")
# Collect all model roots
all_roots = set()
all_roots.update(config.loras_roots)
all_roots.update(config.base_models_roots)
all_roots.update(config.embeddings_roots)
total_deleted = 0
total_size_freed = 0
for root_path in all_roots:
if not os.path.exists(root_path):
continue
try:
deleted_count, size_freed = await cls._cleanup_backup_files_in_directory(root_path)
total_deleted += deleted_count
total_size_freed += size_freed
if deleted_count > 0:
logger.debug(f"Cleaned up {deleted_count} .bak files in {root_path} (freed {size_freed / (1024*1024):.2f} MB)")
except Exception as e:
logger.error(f"Error cleaning up .bak files in {root_path}: {e}")
# Yield control periodically
await asyncio.sleep(0.01)
if total_deleted > 0:
logger.debug(f"Backup cleanup completed: removed {total_deleted} .bak files, freed {total_size_freed / (1024*1024):.2f} MB total")
else:
logger.debug("Backup cleanup completed: no .bak files found")
except Exception as e:
logger.error(f"Error during backup file cleanup: {e}", exc_info=True)
@classmethod
async def _cleanup_backup_files_in_directory(cls, directory_path: str):
"""Clean up .bak files in a specific directory recursively
Args:
directory_path: Path to the directory to clean
Returns:
Tuple[int, int]: (number of files deleted, total size freed in bytes)
"""
deleted_count = 0
size_freed = 0
visited_paths = set()
def cleanup_recursive(path):
nonlocal deleted_count, size_freed
try:
real_path = os.path.realpath(path)
if real_path in visited_paths:
return
visited_paths.add(real_path)
with os.scandir(path) as it:
for entry in it:
try:
if entry.is_file(follow_symlinks=True) and entry.name.endswith('.bak'):
file_size = entry.stat().st_size
os.remove(entry.path)
deleted_count += 1
size_freed += file_size
logger.debug(f"Deleted .bak file: {entry.path}")
elif entry.is_dir(follow_symlinks=True):
cleanup_recursive(entry.path)
except Exception as e:
logger.warning(f"Could not delete .bak file {entry.path}: {e}")
except Exception as e:
logger.error(f"Error scanning directory {path} for .bak files: {e}")
# Run the recursive cleanup in a thread pool to avoid blocking
loop = asyncio.get_event_loop()
await loop.run_in_executor(None, cleanup_recursive, directory_path)
return deleted_count, size_freed
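A quick self-contained check of this helper; a sketch assuming `LoraManager` is importable from the plugin package (the import path below is a guess):

```python
import asyncio
import os
import tempfile

from py.lora_manager import LoraManager  # assumed import path

async def demo():
    with tempfile.TemporaryDirectory() as root:
        # Plant a single 1 KiB .bak file in a nested directory
        bak = os.path.join(root, "loras", "model.safetensors.bak")
        os.makedirs(os.path.dirname(bak))
        with open(bak, "wb") as f:
            f.write(b"\x00" * 1024)
        deleted, freed = await LoraManager._cleanup_backup_files_in_directory(root)
        assert (deleted, freed) == (1, 1024)
        assert not os.path.exists(bak)

asyncio.run(demo())
```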
@classmethod
async def _cleanup_example_images_folders(cls):
"""Invoke the example images cleanup service for manual execution."""
try:
service = ExampleImagesCleanupService()
result = await service.cleanup_example_image_folders()
if result.get('success'):
logger.debug(
"Manual example images cleanup completed: moved=%s",
result.get('moved_total'),
)
elif result.get('partial_success'):
logger.warning(
"Manual example images cleanup partially succeeded: moved=%s failures=%s",
result.get('moved_total'),
result.get('move_failures'),
)
else:
logger.debug(
"Manual example images cleanup skipped or failed: %s",
result.get('error', 'no changes'),
)
return result
except Exception as e: # pragma: no cover - defensive guard
logger.error(f"Error during example images cleanup: {e}", exc_info=True)
return {
'success': False,
'error': str(e),
'error_code': 'unexpected_error',
}
@classmethod
async def _cleanup(cls, app):
"""Cleanup resources using ServiceRegistry"""
try:
logger.info("LoRA Manager: Cleaning up services")
# Close CivitaiClient gracefully
civitai_client = await ServiceRegistry.get_service("civitai_client")
if civitai_client:
await civitai_client.close()
logger.info("Closed CivitaiClient connection")
except Exception as e:
logger.error(f"Error during cleanup: {e}", exc_info=True)

View File

@@ -1,10 +1,9 @@
import os
import logging
logger = logging.getLogger(__name__)
import importlib
import sys
# Check if running in standalone mode
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
standalone_mode = 'nodes' not in sys.modules
if not standalone_mode:
from .metadata_hook import MetadataHook
@@ -17,7 +16,7 @@ if not standalone_mode:
# Initialize registry
registry = MetadataRegistry()
logger.info("ComfyUI Metadata Collector initialized")
print("ComfyUI Metadata Collector initialized")
def get_metadata(prompt_id=None):
"""Helper function to get metadata from the registry"""
@@ -26,7 +25,7 @@ if not standalone_mode:
else:
# Standalone mode - provide dummy implementations
def init():
logger.info("ComfyUI Metadata Collector disabled in standalone mode")
print("ComfyUI Metadata Collector disabled in standalone mode")
def get_metadata(prompt_id=None):
"""Dummy implementation for standalone mode"""

View File

@@ -1,10 +1,7 @@
import sys
import inspect
import logging
from .metadata_registry import MetadataRegistry
logger = logging.getLogger(__name__)
class MetadataHook:
"""Install hooks for metadata collection"""
@@ -26,7 +23,7 @@ class MetadataHook:
# If we can't find the execution module, we can't install hooks
if execution is None:
logger.warning("Could not locate ComfyUI execution module, metadata collection disabled")
print("Could not locate ComfyUI execution module, metadata collection disabled")
return
# Detect whether we're using the new async version of ComfyUI
@@ -40,16 +37,16 @@ class MetadataHook:
is_async = inspect.iscoroutinefunction(execution._map_node_over_list)
if is_async:
logger.info("Detected async ComfyUI execution, installing async metadata hooks")
print("Detected async ComfyUI execution, installing async metadata hooks")
MetadataHook._install_async_hooks(execution, map_node_func_name)
else:
logger.info("Detected sync ComfyUI execution, installing sync metadata hooks")
print("Detected sync ComfyUI execution, installing sync metadata hooks")
MetadataHook._install_sync_hooks(execution)
logger.info("Metadata collection hooks installed for runtime values")
print("Metadata collection hooks installed for runtime values")
except Exception as e:
logger.error(f"Error installing metadata hooks: {str(e)}")
print(f"Error installing metadata hooks: {str(e)}")
@staticmethod
def _install_sync_hooks(execution):
@@ -85,7 +82,7 @@ class MetadataHook:
if node_id is not None:
registry.record_node_execution(node_id, class_type, input_data_all, None)
except Exception as e:
logger.error(f"Error collecting metadata (pre-execution): {str(e)}")
print(f"Error collecting metadata (pre-execution): {str(e)}")
# Execute the original function
results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb)
@@ -116,7 +113,7 @@ class MetadataHook:
if node_id is not None:
registry.update_node_execution(node_id, class_type, results)
except Exception as e:
logger.error(f"Error collecting metadata (post-execution): {str(e)}")
print(f"Error collecting metadata (post-execution): {str(e)}")
return results
@@ -149,40 +146,52 @@ class MetadataHook:
# Store the original _async_map_node_over_list function
original_map_node_over_list = getattr(execution, map_node_func_name)
# Wrapped async function, compatible with both stable and nightly
async def async_map_node_over_list_with_metadata(prompt_id, unique_id, obj, input_data_all, func, allow_interrupt=False, execution_block_cb=None, pre_execute_cb=None, *args, **kwargs):
hidden_inputs = kwargs.get('hidden_inputs', None)
# Define the wrapped async function - NOTE: Updated signature with prompt_id and unique_id!
async def async_map_node_over_list_with_metadata(prompt_id, unique_id, obj, input_data_all, func, allow_interrupt=False, execution_block_cb=None, pre_execute_cb=None):
# Only collect metadata when calling the main function of nodes
if func == obj.FUNCTION and hasattr(obj, '__class__'):
try:
# Get the current prompt_id from the registry
registry = MetadataRegistry()
# We now have prompt_id directly from the function parameters
if prompt_id is not None:
# Get node class type
class_type = obj.__class__.__name__
# Use the passed unique_id parameter instead of trying to extract it
node_id = unique_id
# Record inputs before execution
if node_id is not None:
registry.record_node_execution(node_id, class_type, input_data_all, None)
except Exception as e:
logger.error(f"Error collecting metadata (pre-execution): {str(e)}")
print(f"Error collecting metadata (pre-execution): {str(e)}")
# Call original function with all args/kwargs
results = await original_map_node_over_list(
prompt_id, unique_id, obj, input_data_all, func,
allow_interrupt, execution_block_cb, pre_execute_cb, *args, **kwargs
)
# Execute the original async function with ALL parameters in the correct order
results = await original_map_node_over_list(prompt_id, unique_id, obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb)
# After execution, collect outputs for relevant nodes
if func == obj.FUNCTION and hasattr(obj, '__class__'):
try:
# Get the current prompt_id from the registry
registry = MetadataRegistry()
if prompt_id is not None:
# Get node class type
class_type = obj.__class__.__name__
# Use the passed unique_id parameter
node_id = unique_id
# Record outputs after execution
if node_id is not None:
registry.update_node_execution(node_id, class_type, results)
except Exception as e:
logger.error(f"Error collecting metadata (post-execution): {str(e)}")
print(f"Error collecting metadata (post-execution): {str(e)}")
return results
# Also hook the execute function to track the current prompt_id
original_execute = execution.execute
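The essential constraint in the hook above is that the wrapper forwards every positional parameter unchanged, including the leading `prompt_id`/`unique_id` pair introduced by newer ComfyUI builds. The same interception pattern in isolation, using a hypothetical `FakeExecution` stand-in:

```python
import asyncio
import functools

class FakeExecution:
    """Stand-in for ComfyUI's execution module."""
    async def _async_map_node_over_list(self, prompt_id, unique_id, obj,
                                        input_data_all, func, *args, **kwargs):
        return [f"{func} on node {unique_id}"]

execution = FakeExecution()
original = execution._async_map_node_over_list  # bound method

@functools.wraps(original)
async def hooked(prompt_id, unique_id, obj, input_data_all, func, *args, **kwargs):
    # pre-hook: record inputs keyed by (prompt_id, unique_id)
    results = await original(prompt_id, unique_id, obj, input_data_all, func,
                             *args, **kwargs)
    # post-hook: record outputs for the same node
    return results

execution._async_map_node_over_list = hooked  # instance attribute shadows the method
print(asyncio.run(execution._async_map_node_over_list("p1", "7", object(), {}, "FUNCTION")))
```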

View File

@@ -1,9 +1,9 @@
import json
import os
import sys
from .constants import IMAGES
# Check if running in standalone mode
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
standalone_mode = 'nodes' not in sys.modules
from .constants import MODELS, PROMPTS, SAMPLING, LORAS, SIZE, IS_SAMPLER
@@ -39,39 +39,8 @@ class MetadataProcessor:
if node_id in metadata.get(SAMPLING, {}) and metadata[SAMPLING][node_id].get(IS_SAMPLER, False):
candidate_samplers[node_id] = metadata[SAMPLING][node_id]
# If we found candidate samplers, apply primary sampler logic to these candidates only
# PRE-PROCESS: Ensure all candidate samplers have their parameters populated
# This is especially important for SamplerCustomAdvanced which needs tracing
prompt = metadata.get("current_prompt")
for node_id in candidate_samplers:
# If a sampler is missing common parameters like steps or denoise,
# try to populate them using tracing before ranking
sampler_info = candidate_samplers[node_id]
params = sampler_info.get("parameters", {})
if prompt and (params.get("steps") is None or params.get("denoise") is None):
# Create a temporary params dict to use the handler
temp_params = {
"steps": params.get("steps"),
"denoise": params.get("denoise"),
"sampler": params.get("sampler_name"),
"scheduler": params.get("scheduler")
}
# Check if it's SamplerCustomAdvanced
if prompt.original_prompt and node_id in prompt.original_prompt:
if prompt.original_prompt[node_id].get("class_type") == "SamplerCustomAdvanced":
MetadataProcessor.handle_custom_advanced_sampler(metadata, prompt, node_id, temp_params)
# Update the actual parameters with found values
params["steps"] = temp_params.get("steps")
params["denoise"] = temp_params.get("denoise")
if temp_params.get("sampler"):
params["sampler_name"] = temp_params.get("sampler")
if temp_params.get("scheduler"):
params["scheduler"] = temp_params.get("scheduler")
# If we found candidate samplers, apply primary sampler logic to these candidates only
if candidate_samplers:
# Collect potential primary samplers based on different criteria
custom_advanced_samplers = []
advanced_add_noise_samplers = []
@@ -80,6 +49,7 @@ class MetadataProcessor:
high_denoise_id = None
# First, check for SamplerCustomAdvanced among candidates
prompt = metadata.get("current_prompt")
if prompt and prompt.original_prompt:
for node_id in candidate_samplers:
node_info = prompt.original_prompt.get(node_id, {})
@@ -107,16 +77,15 @@ class MetadataProcessor:
# Combine all potential primary samplers
potential_samplers = custom_advanced_samplers + advanced_add_noise_samplers + high_denoise_samplers
# Find the first potential primary sampler (prefer base sampler over refine)
# Use forward search to prioritize the first one in execution order
for i in range(downstream_index):
# Find the most recent potential primary sampler (closest to downstream node)
for i in range(downstream_index - 1, -1, -1):
node_id = execution_order[i]
if node_id in potential_samplers:
return node_id, candidate_samplers[node_id]
# If no potential sampler is found from our criteria, return the first sampler
# If no potential sampler is found from our criteria, return the most recent sampler
if candidate_samplers:
for i in range(downstream_index):
for i in range(downstream_index - 1, -1, -1):
node_id = execution_order[i]
if node_id in candidate_samplers:
return node_id, candidate_samplers[node_id]
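In miniature, the reverse scan above amounts to picking the candidate closest upstream of the downstream node (the values here are placeholders):

```python
execution_order = ["7", "3", "5", "9"]        # node ids in execution order
candidate_samplers = {"3": ..., "5": ...}     # samplers discovered in metadata
downstream_index = 3                          # position of the downstream node

picked = next(
    (execution_order[i] for i in range(downstream_index - 1, -1, -1)
     if execution_order[i] in candidate_samplers),
    None,
)
print(picked)  # "5" - the most recent sampler before the downstream node
```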
@@ -207,11 +176,8 @@ class MetadataProcessor:
found_node_id = input_value[0] # Connected node_id
# If we're looking for a specific node class
if target_class:
if found_node_id not in prompt.original_prompt:
return None
if prompt.original_prompt[found_node_id].get("class_type") == target_class:
return found_node_id
if target_class and prompt.original_prompt[found_node_id].get("class_type") == target_class:
return found_node_id
# If we're not looking for a specific class, update the last valid node
if not target_class:
@@ -219,19 +185,11 @@ class MetadataProcessor:
# Continue tracing through intermediate nodes
current_node_id = found_node_id
# Check if current source node exists
if current_node_id not in prompt.original_prompt:
return found_node_id if not target_class else None
# Determine which input to follow next on the source node
source_node_inputs = prompt.original_prompt[current_node_id].get("inputs", {})
if input_name in source_node_inputs:
current_input = input_name
elif "conditioning" in source_node_inputs:
# For most conditioning nodes, the input we want to follow is named "conditioning"
if "conditioning" in prompt.original_prompt[current_node_id].get("inputs", {}):
current_input = "conditioning"
else:
# If there's no suitable input to follow, return the current node
# If there's no "conditioning" input, return the current node
# if we're not looking for a specific target_class
return found_node_id if not target_class else None
else:
@@ -244,89 +202,12 @@ class MetadataProcessor:
return last_valid_node if not target_class else None
@staticmethod
def trace_model_path(metadata, prompt, start_node_id):
"""
Trace the model connection path upstream to find the checkpoint
"""
if not prompt or not prompt.original_prompt:
return None
current_node_id = start_node_id
depth = 0
max_depth = 50
while depth < max_depth:
# Check if current node is a registered checkpoint in our metadata
# This handles cached nodes correctly because metadata contains info for all nodes in the graph
if current_node_id in metadata.get(MODELS, {}):
if metadata[MODELS][current_node_id].get("type") == "checkpoint":
return current_node_id
if current_node_id not in prompt.original_prompt:
return None
node = prompt.original_prompt[current_node_id]
inputs = node.get("inputs", {})
class_type = node.get("class_type", "")
# Determine which input to follow next
next_input_name = "model"
# Special handling for initial node
if depth == 0:
if class_type == "SamplerCustomAdvanced":
next_input_name = "guider"
# If the specific input doesn't exist, try generic 'model'
if next_input_name not in inputs:
if "model" in inputs:
next_input_name = "model"
elif "basic_pipe" in inputs:
# Handle pipe nodes like FromBasicPipe by following the pipeline
next_input_name = "basic_pipe"
else:
# Dead end - no model input to follow
return None
# Get connected node
input_val = inputs[next_input_name]
if isinstance(input_val, list) and len(input_val) > 0:
current_node_id = input_val[0]
else:
return None
depth += 1
return None
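Stripped of the metadata lookups, the traversal reduces to following `model` links until a checkpoint loader is reached. A simplified sketch; the real method consults the `MODELS` registry rather than `class_type`:

```python
prompt = {
    "3": {"class_type": "KSampler", "inputs": {"model": ["2", 0]}},
    "2": {"class_type": "LoraLoader", "inputs": {"model": ["1", 0]}},
    "1": {"class_type": "CheckpointLoaderSimple", "inputs": {}},
}

def trace_to_checkpoint(node_id, max_depth=50):
    for _ in range(max_depth):
        if prompt[node_id]["class_type"] == "CheckpointLoaderSimple":
            return node_id
        link = prompt[node_id]["inputs"].get("model")
        if not isinstance(link, list):
            return None  # dead end: no model input to follow
        node_id = link[0]
    return None

print(trace_to_checkpoint("3"))  # 1
```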
@staticmethod
def find_primary_checkpoint(metadata, downstream_id=None, primary_sampler_id=None):
"""
Find the primary checkpoint model in the workflow
Parameters:
- metadata: The workflow metadata
- downstream_id: Optional ID of a downstream node to help identify the specific primary sampler
- primary_sampler_id: Optional ID of the primary sampler if already known
"""
def find_primary_checkpoint(metadata):
"""Find the primary checkpoint model in the workflow"""
if not metadata.get(MODELS):
return None
# Method 1: Topology-based tracing (More accurate for complex workflows)
# First, find the primary sampler if not provided
if not primary_sampler_id:
primary_sampler_id, _ = MetadataProcessor.find_primary_sampler(metadata, downstream_id)
if primary_sampler_id:
prompt = metadata.get("current_prompt")
if prompt:
# Trace back from the sampler to find the checkpoint
checkpoint_id = MetadataProcessor.trace_model_path(metadata, prompt, primary_sampler_id)
if checkpoint_id and checkpoint_id in metadata.get(MODELS, {}):
return metadata[MODELS][checkpoint_id].get("name")
# Method 2: Fallback to the first available checkpoint (Original behavior)
# In most simple workflows, there's only one checkpoint, so we can just take the first one
# In most workflows, there's only one checkpoint, so we can just take the first one
for node_id, model_info in metadata.get(MODELS, {}).items():
if model_info.get("type") == "checkpoint":
return model_info.get("name")
@@ -414,7 +295,7 @@ class MetadataProcessor:
"seed": None,
"steps": None,
"cfg_scale": None,
# "guidance": None, # Add guidance parameter
"guidance": None, # Add guidance parameter
"sampler": None,
"scheduler": None,
"checkpoint": None,
@@ -430,8 +311,7 @@ class MetadataProcessor:
primary_sampler_id, primary_sampler = MetadataProcessor.find_primary_sampler(metadata, id)
# Directly get checkpoint from metadata instead of tracing
# Pass primary_sampler_id to avoid redundant calculation
checkpoint = MetadataProcessor.find_primary_checkpoint(metadata, id, primary_sampler_id)
checkpoint = MetadataProcessor.find_primary_checkpoint(metadata)
if checkpoint:
params["checkpoint"] = checkpoint
@@ -459,8 +339,44 @@ class MetadataProcessor:
is_custom_advanced = prompt.original_prompt[primary_sampler_id].get("class_type") == "SamplerCustomAdvanced"
if is_custom_advanced:
# For SamplerCustomAdvanced, use the new handler method
MetadataProcessor.handle_custom_advanced_sampler(metadata, prompt, primary_sampler_id, params)
# For SamplerCustomAdvanced, trace specific inputs
# 1. Trace sigmas input to find BasicScheduler
scheduler_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "sigmas", "BasicScheduler", max_depth=5)
if scheduler_node_id and scheduler_node_id in metadata.get(SAMPLING, {}):
scheduler_params = metadata[SAMPLING][scheduler_node_id].get("parameters", {})
params["steps"] = scheduler_params.get("steps")
params["scheduler"] = scheduler_params.get("scheduler")
# 2. Trace sampler input to find KSamplerSelect
sampler_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "sampler", "KSamplerSelect", max_depth=5)
if sampler_node_id and sampler_node_id in metadata.get(SAMPLING, {}):
sampler_params = metadata[SAMPLING][sampler_node_id].get("parameters", {})
params["sampler"] = sampler_params.get("sampler_name")
# 3. Trace guider input for CFGGuider and CLIPTextEncode
guider_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "guider", max_depth=5)
if guider_node_id and guider_node_id in prompt.original_prompt:
# Check if the guider node is a CFGGuider
if prompt.original_prompt[guider_node_id].get("class_type") == "CFGGuider":
# Extract cfg value from the CFGGuider
if guider_node_id in metadata.get(SAMPLING, {}):
cfg_params = metadata[SAMPLING][guider_node_id].get("parameters", {})
params["cfg_scale"] = cfg_params.get("cfg")
# Find CLIPTextEncode for positive prompt
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "positive", "CLIPTextEncode", max_depth=10)
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")
# Find CLIPTextEncode for negative prompt
negative_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "negative", "CLIPTextEncode", max_depth=10)
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")
else:
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")
else:
# For standard samplers, match conditioning objects to prompts
@@ -485,9 +401,6 @@ class MetadataProcessor:
negative_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "negative", max_depth=10)
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")
# For SamplerCustom, handle any additional parameters
MetadataProcessor.handle_custom_advanced_sampler(metadata, prompt, primary_sampler_id, params)
# Size extraction is same for all sampler types
# Check if the sampler itself has size information (from latent_image)
@@ -541,60 +454,3 @@ class MetadataProcessor:
"""Convert metadata to JSON string"""
params = MetadataProcessor.to_dict(metadata, id)
return json.dumps(params, indent=4)
@staticmethod
def handle_custom_advanced_sampler(metadata, prompt, primary_sampler_id, params):
"""
Handle parameter extraction for SamplerCustomAdvanced nodes
Parameters:
- metadata: The workflow metadata
- prompt: The prompt object containing node connections
- primary_sampler_id: ID of the SamplerCustomAdvanced node
- params: Parameters dictionary to update
"""
if not prompt.original_prompt or primary_sampler_id not in prompt.original_prompt:
return
sampler_inputs = prompt.original_prompt[primary_sampler_id].get("inputs", {})
# 1. Trace sigmas input to find BasicScheduler (only if sigmas input exists)
if "sigmas" in sampler_inputs:
scheduler_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "sigmas", None, max_depth=5)
if scheduler_node_id and scheduler_node_id in metadata.get(SAMPLING, {}):
scheduler_params = metadata[SAMPLING][scheduler_node_id].get("parameters", {})
params["steps"] = scheduler_params.get("steps")
params["scheduler"] = scheduler_params.get("scheduler")
params["denoise"] = scheduler_params.get("denoise")
# 2. Trace sampler input to find KSamplerSelect (only if sampler input exists)
if "sampler" in sampler_inputs:
sampler_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "sampler", "KSamplerSelect", max_depth=5)
if sampler_node_id and sampler_node_id in metadata.get(SAMPLING, {}):
sampler_params = metadata[SAMPLING][sampler_node_id].get("parameters", {})
params["sampler"] = sampler_params.get("sampler_name")
# 3. Trace guider input for CFGGuider and CLIPTextEncode
if "guider" in sampler_inputs:
guider_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "guider", max_depth=5)
if guider_node_id and guider_node_id in prompt.original_prompt:
# Check if the guider node is a CFGGuider
if prompt.original_prompt[guider_node_id].get("class_type") == "CFGGuider":
# Extract cfg value from the CFGGuider
if guider_node_id in metadata.get(SAMPLING, {}):
cfg_params = metadata[SAMPLING][guider_node_id].get("parameters", {})
params["cfg_scale"] = cfg_params.get("cfg")
# Find CLIPTextEncode for positive prompt
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "positive", "CLIPTextEncode", max_depth=10)
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")
# Find CLIPTextEncode for negative prompt
negative_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "negative", "CLIPTextEncode", max_depth=10)
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")
else:
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")

View File

@@ -196,11 +196,9 @@ class MetadataRegistry:
node_metadata[category] = {}
node_metadata[category][node_id] = current_metadata[category][node_id]
# Save new metadata or clear stale cache entries when metadata is empty
# Save to cache if we have any metadata for this node
if any(node_metadata.values()):
self.node_cache[cache_key] = node_metadata
else:
self.node_cache.pop(cache_key, None)
def clear_unused_cache(self):
"""Clean up node_cache entries that are no longer in use"""

View File

@@ -3,18 +3,6 @@ import os
from .constants import MODELS, PROMPTS, SAMPLING, LORAS, SIZE, IMAGES, IS_SAMPLER
def _store_checkpoint_metadata(metadata, node_id, model_name):
"""Store checkpoint model information when available."""
if not model_name:
return
metadata.setdefault(MODELS, {})
metadata[MODELS][node_id] = {
"name": model_name,
"type": "checkpoint",
"node_id": node_id
}
class NodeMetadataExtractor:
"""Base class for node-specific metadata extraction"""
@@ -41,48 +29,12 @@ class CheckpointLoaderExtractor(NodeMetadataExtractor):
return
model_name = inputs.get("ckpt_name")
_store_checkpoint_metadata(metadata, node_id, model_name)
class NunchakuFluxDiTLoaderExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "model_path" not in inputs:
return
model_name = inputs.get("model_path")
_store_checkpoint_metadata(metadata, node_id, model_name)
class NunchakuQwenImageDiTLoaderExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "model_name" not in inputs:
return
model_name = inputs.get("model_name")
_store_checkpoint_metadata(metadata, node_id, model_name)
class GGUFLoaderExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "gguf_name" not in inputs:
return
model_name = inputs.get("gguf_name")
_store_checkpoint_metadata(metadata, node_id, model_name)
class KJNodesModelLoaderExtractor(NodeMetadataExtractor):
"""Extract metadata from KJNodes loaders that expose `model_name`."""
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "model_name" not in inputs:
return
model_name = inputs.get("model_name")
_store_checkpoint_metadata(metadata, node_id, model_name)
if model_name:
metadata[MODELS][node_id] = {
"name": model_name,
"type": "checkpoint",
"node_id": node_id
}
class TSCCheckpointLoaderExtractor(NodeMetadataExtractor):
@staticmethod
@@ -91,7 +43,12 @@ class TSCCheckpointLoaderExtractor(NodeMetadataExtractor):
return
model_name = inputs.get("ckpt_name")
_store_checkpoint_metadata(metadata, node_id, model_name)
if model_name:
metadata[MODELS][node_id] = {
"name": model_name,
"type": "checkpoint",
"node_id": node_id
}
# For loader nodes that have a lora_stack input, like Efficient Loader from Efficient Nodes
active_loras = []
@@ -685,45 +642,31 @@ NODE_EXTRACTORS = {
# Sampling
"KSampler": SamplerExtractor,
"KSamplerAdvanced": KSamplerAdvancedExtractor,
"SamplerCustom": KSamplerAdvancedExtractor,
"SamplerCustomAdvanced": SamplerCustomAdvancedExtractor,
"ClownsharKSampler_Beta": SamplerExtractor,
"TSC_KSampler": TSCKSamplerExtractor, # Efficient Nodes
"TSC_KSamplerAdvanced": TSCKSamplerAdvancedExtractor, # Efficient Nodes
"KSamplerBasicPipe": KSamplerBasicPipeExtractor, # comfyui-impact-pack
"KSamplerAdvancedBasicPipe": KSamplerAdvancedBasicPipeExtractor, # comfyui-impact-pack
"KSampler_inspire_pipe": KSamplerBasicPipeExtractor, # comfyui-inspire-pack
"KSamplerAdvanced_inspire_pipe": KSamplerAdvancedBasicPipeExtractor, # comfyui-inspire-pack
"KSampler_inspire": SamplerExtractor, # comfyui-inspire-pack
# Sampling Selectors
"KSamplerSelect": KSamplerSelectExtractor, # Add KSamplerSelect
"BasicScheduler": BasicSchedulerExtractor, # Add BasicScheduler
"AlignYourStepsScheduler": BasicSchedulerExtractor, # Add AlignYourStepsScheduler
# Loaders
"CheckpointLoaderSimple": CheckpointLoaderExtractor,
"comfyLoader": CheckpointLoaderExtractor, # easy comfyLoader
"CheckpointLoaderSimpleWithImages": CheckpointLoaderExtractor, # CheckpointLoader|pysssss
"TSC_EfficientLoader": TSCCheckpointLoaderExtractor, # Efficient Nodes
"NunchakuFluxDiTLoader": NunchakuFluxDiTLoaderExtractor, # ComfyUI-Nunchaku
"NunchakuQwenImageDiTLoader": NunchakuQwenImageDiTLoaderExtractor, # ComfyUI-Nunchaku
"LoaderGGUF": GGUFLoaderExtractor, # calcuis gguf
"LoaderGGUFAdvanced": GGUFLoaderExtractor, # calcuis gguf
"GGUFLoaderKJ": KJNodesModelLoaderExtractor, # KJNodes
"DiffusionModelLoaderKJ": KJNodesModelLoaderExtractor, # KJNodes
"CheckpointLoaderKJ": CheckpointLoaderExtractor, # KJNodes
"UNETLoader": UNETLoaderExtractor, # Updated to use dedicated extractor
"UnetLoaderGGUF": UNETLoaderExtractor, # Updated to use dedicated extractor
"LoraLoader": LoraLoaderExtractor,
"LoraLoaderLM": LoraLoaderManagerExtractor,
"LoraManagerLoader": LoraLoaderManagerExtractor,
# Conditioning
"CLIPTextEncode": CLIPTextEncodeExtractor,
"PromptLM": CLIPTextEncodeExtractor,
"CLIPTextEncodeFlux": CLIPTextEncodeFluxExtractor, # Add CLIPTextEncodeFlux
"WAS_Text_to_Conditioning": CLIPTextEncodeExtractor,
"AdvancedCLIPTextEncode": CLIPTextEncodeExtractor, # From https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb
"smZ_CLIPTextEncode": CLIPTextEncodeExtractor, # From https://github.com/shiimizu/ComfyUI_smZNodes
"CR_ApplyControlNetStack": CR_ApplyControlNetStackExtractor, # Add CR_ApplyControlNetStack
"PCTextEncode": CLIPTextEncodeExtractor, # From https://github.com/asagi4/comfyui-prompt-control
# Latent
"EmptyLatentImage": ImageSizeExtractor,
# Flux

View File

@@ -1 +0,0 @@
"""Server middleware modules"""

View File

@@ -1,53 +0,0 @@
"""Cache control middleware for ComfyUI server"""
from aiohttp import web
from typing import Callable, Awaitable
# Time in seconds
ONE_HOUR: int = 3600
ONE_DAY: int = 86400
IMG_EXTENSIONS = (
".jpg",
".jpeg",
".png",
".ppm",
".bmp",
".pgm",
".tif",
".tiff",
".webp",
".mp4"
)
@web.middleware
async def cache_control(
request: web.Request, handler: Callable[[web.Request], Awaitable[web.Response]]
) -> web.Response:
"""Cache control middleware that sets appropriate cache headers based on file type and response status"""
response: web.Response = await handler(request)
if (
request.path.endswith(".js")
or request.path.endswith(".css")
or request.path.endswith("index.json")
):
response.headers.setdefault("Cache-Control", "no-cache")
return response
# Early return for non-image files - no cache headers needed
if not request.path.lower().endswith(IMG_EXTENSIONS):
return response
# Handle image files
if response.status == 404:
response.headers.setdefault("Cache-Control", f"public, max-age={ONE_HOUR}")
elif response.status in (200, 201, 202, 203, 204, 205, 206, 301, 308):
# Success responses and permanent redirects - cache for 1 day
response.headers.setdefault("Cache-Control", f"public, max-age={ONE_DAY}")
elif response.status in (302, 303, 307):
# Temporary redirects - no cache
response.headers.setdefault("Cache-Control", "no-cache")
# Note: 304 Not Modified falls through - no cache headers set
return response
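A minimal wiring sketch, assuming `cache_control` is imported from this module (the handler and route are illustrative):

```python
from aiohttp import web

async def preview(request: web.Request) -> web.Response:
    return web.Response(body=b"fake-png-bytes", content_type="image/png")

app = web.Application(middlewares=[cache_control])
app.router.add_get("/loras_static/root1/preview/card.png", preview)

# web.run_app(app, port=8188)
# A 200 response for the .png route now carries:
#   Cache-Control: public, max-age=86400
```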

View File

@@ -1,65 +0,0 @@
"""Middleware helpers for adjusting Content Security Policy headers."""
from typing import Awaitable, Callable, Dict, List
from aiohttp import web
REMOTE_MEDIA_SOURCES = (
"https://image.civitai.com",
"https://img.genur.art",
)
@web.middleware
async def relax_csp_for_remote_media(
request: web.Request, handler: Callable[[web.Request], Awaitable[web.StreamResponse]]
) -> web.StreamResponse:
"""Allow LoRA Manager media previews to load from trusted remote domains.
When ComfyUI is started with ``--disable-api-nodes`` it injects a restrictive
``Content-Security-Policy`` header that blocks remote images and videos. The
LoRA Manager UI legitimately needs to fetch previews from Civitai and Genur,
so this middleware augments the existing CSP to whitelist those hosts while
preserving all other directives.
"""
response: web.StreamResponse = await handler(request)
header_value = response.headers.get("Content-Security-Policy")
if not header_value:
return response
directive_order: List[str] = []
directives: Dict[str, List[str]] = {}
for raw_directive in header_value.split(";"):
directive = raw_directive.strip()
if not directive:
continue
parts = directive.split()
name, values = parts[0], parts[1:]
if name not in directive_order:
directive_order.append(name)
directives[name] = values
def merge_sources(name: str, sources: List[str], defaults: List[str] | None = None) -> None:
existing = directives.get(name, list(defaults or []))
for source in sources:
if source not in existing:
existing.append(source)
directives[name] = existing
if name not in directive_order:
directive_order.append(name)
merge_sources("img-src", list(REMOTE_MEDIA_SOURCES))
merge_sources("media-src", ["'self'", *REMOTE_MEDIA_SOURCES], defaults=["'self'"])
updated_header = "; ".join(
f"{name} {' '.join(directives[name])}".rstrip() for name in directive_order
)
response.headers["Content-Security-Policy"] = f"{updated_header};"
return response
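As a condensed re-statement (not the middleware itself), the parse-and-merge step behaves like this on a representative header:

```python
from typing import Dict, List

HOSTS = ["https://image.civitai.com", "https://img.genur.art"]

def merge_csp(header: str) -> str:
    order: List[str] = []
    directives: Dict[str, List[str]] = {}
    for raw in header.split(";"):
        raw = raw.strip()
        if not raw:
            continue
        name, *values = raw.split()
        if name not in order:
            order.append(name)
        directives[name] = values
    # img-src gains the hosts; media-src is created from 'self' plus the hosts
    for name, defaults in (("img-src", []), ("media-src", ["'self'"])):
        vals = directives.get(name, list(defaults))
        vals += [h for h in HOSTS if h not in vals]
        directives[name] = vals
        if name not in order:
            order.append(name)
    return "; ".join(f"{n} {' '.join(directives[n])}".rstrip() for n in order) + ";"

print(merge_csp("default-src 'self'; img-src 'self' data:"))
```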

View File

@@ -1,15 +1,15 @@
import logging
from server import PromptServer # type: ignore
from ..metadata_collector.metadata_processor import MetadataProcessor
logger = logging.getLogger(__name__)
class DebugMetadataLM:
class DebugMetadata:
NAME = "Debug Metadata (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = "Debug node to verify metadata_processor functionality"
OUTPUT_NODE = True
@classmethod
def INPUT_TYPES(cls):
return {
@@ -25,37 +25,21 @@ class DebugMetadataLM:
FUNCTION = "process_metadata"
def process_metadata(self, images, id):
"""
Process metadata from the execution context and return it for UI display.
The metadata is returned via the 'ui' key in the return dict, which triggers
node.onExecuted on the frontend to update the JsonDisplayWidget.
Args:
images: Input images (required for execution flow)
id: Node's unique ID (hidden)
Returns:
Dict with 'result' (empty tuple) and 'ui' (metadata dict for widget display)
"""
try:
# Get the current execution context's metadata
from ..metadata_collector import get_metadata
metadata = get_metadata()
# Use the MetadataProcessor to convert it to dict
metadata_dict = MetadataProcessor.to_dict(metadata, id)
return {
"result": (),
# ComfyUI expects ui values to be lists; wrap the dict in a list
"ui": {"metadata": [metadata_dict]},
}
# Use the MetadataProcessor to convert it to JSON string
metadata_json = MetadataProcessor.to_json(metadata, id)
# Send metadata to frontend for display
PromptServer.instance.send_sync("metadata_update", {
"id": id,
"metadata": metadata_json
})
except Exception as e:
logger.error(f"Error processing metadata: {e}")
return {
"result": (),
"ui": {"metadata": [{"error": str(e)}]},
}
return ()
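The list-wrapping in the `ui` dict is the load-bearing detail: ComfyUI treats each `ui` value as a per-output list. A reduced illustration:

```python
def process_metadata_stub(metadata_dict):
    # ComfyUI renders "ui" values per output item, so each value must be a list
    return {"result": (), "ui": {"metadata": [metadata_dict]}}

print(process_metadata_stub({"seed": 123, "steps": 20, "cfg_scale": 7.0}))
```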

View File

@@ -1,134 +0,0 @@
"""
Lora Cycler Node - Sequentially cycles through LoRAs from a pool.
This node accepts optional pool_config input to filter available LoRAs, and outputs
a LORA_STACK with one LoRA at a time. Returns UI updates with current/next LoRA info
and tracks the cycle progress, which persists across workflow save/load.
"""
import logging
import os
from ..utils.utils import get_lora_info
logger = logging.getLogger(__name__)
class LoraCyclerLM:
"""Node that sequentially cycles through LoRAs from a pool"""
NAME = "Lora Cycler (LoraManager)"
CATEGORY = "Lora Manager/randomizer"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"cycler_config": ("CYCLER_CONFIG", {}),
},
"optional": {
"pool_config": ("POOL_CONFIG", {}),
},
}
RETURN_TYPES = ("LORA_STACK",)
RETURN_NAMES = ("LORA_STACK",)
FUNCTION = "cycle"
OUTPUT_NODE = False
async def cycle(self, cycler_config, pool_config=None):
"""
Cycle through LoRAs based on configuration and pool filters.
Args:
cycler_config: Dict with cycler settings (current_index, model_strength, clip_strength, sort_by)
pool_config: Optional config from LoRA Pool node for filtering
Returns:
Dictionary with 'result' (LORA_STACK tuple) and 'ui' (for widget display)
"""
from ..services.service_registry import ServiceRegistry
from ..services.lora_service import LoraService
# Extract settings from cycler_config
current_index = cycler_config.get("current_index", 1) # 1-based
model_strength = float(cycler_config.get("model_strength", 1.0))
clip_strength = float(cycler_config.get("clip_strength", 1.0))
sort_by = "filename"
# Dual-index mechanism for batch queue synchronization
execution_index = cycler_config.get("execution_index") # Can be None
# next_index_from_config = cycler_config.get("next_index") # Not used on backend
# Get scanner and service
scanner = await ServiceRegistry.get_lora_scanner()
lora_service = LoraService(scanner)
# Get filtered and sorted LoRA list
lora_list = await lora_service.get_cycler_list(
pool_config=pool_config, sort_by=sort_by
)
total_count = len(lora_list)
if total_count == 0:
logger.warning("[LoraCyclerLM] No LoRAs available in pool")
return {
"result": ([],),
"ui": {
"current_index": [1],
"next_index": [1],
"total_count": [0],
"current_lora_name": [""],
"current_lora_filename": [""],
"error": ["No LoRAs available in pool"],
},
}
# Determine which index to use for this execution
# If execution_index is provided (batch queue case), use it
# Otherwise use current_index (first execution or non-batch case)
if execution_index is not None:
actual_index = execution_index
else:
actual_index = current_index
# Clamp index to valid range (1-based)
clamped_index = max(1, min(actual_index, total_count))
# Get LoRA at current index (convert to 0-based for list access)
current_lora = lora_list[clamped_index - 1]
# Build LORA_STACK with single LoRA
lora_path, _ = get_lora_info(current_lora["file_name"])
if not lora_path:
logger.warning(
f"[LoraCyclerLM] Could not find path for LoRA: {current_lora['file_name']}"
)
lora_stack = []
else:
# Normalize path separators
lora_path = lora_path.replace("/", os.sep)
lora_stack = [(lora_path, model_strength, clip_strength)]
# Calculate next index (wrap to 1 if at end)
next_index = clamped_index + 1
if next_index > total_count:
next_index = 1
# Get next LoRA for UI display (what will be used next generation)
next_lora = lora_list[next_index - 1]
next_display_name = next_lora["file_name"]
return {
"result": (lora_stack,),
"ui": {
"current_index": [clamped_index],
"next_index": [next_index],
"total_count": [total_count],
"current_lora_name": [current_lora["file_name"]],
"current_lora_filename": [current_lora["file_name"]],
"next_lora_name": [next_display_name],
"next_lora_filename": [next_lora["file_name"]],
},
}
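The index arithmetic above reduces to a clamp plus a wrap; a compact equivalent (1-based, matching the node):

```python
def advance(current: int, total: int) -> int:
    """Next 1-based cycle index, wrapping past the end of the pool."""
    clamped = max(1, min(current, total))
    return 1 if clamped + 1 > total else clamped + 1

assert [advance(i, 3) for i in (1, 2, 3, 99)] == [2, 3, 1, 1]
```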

View File

@@ -1,12 +1,12 @@
import logging
import re
from nodes import LoraLoader
from comfy.comfy_types import IO # type: ignore
from ..utils.utils import get_lora_info
from .utils import FlexibleOptionalInputType, any_type, extract_lora_name, get_loras_list, nunchaku_load_lora
logger = logging.getLogger(__name__)
class LoraLoaderLM:
class LoraManagerLoader:
NAME = "Lora Loader (LoraManager)"
CATEGORY = "Lora Manager/loaders"
@@ -16,15 +16,17 @@ class LoraLoaderLM:
"required": {
"model": ("MODEL",),
# "clip": ("CLIP",),
"text": ("AUTOCOMPLETE_TEXT_LORAS", {
"placeholder": "Search LoRAs to add...",
"text": (IO.STRING, {
"multiline": True,
"dynamicPrompts": True,
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
"placeholder": "LoRA syntax input: <lora:name:strength>"
}),
},
"optional": FlexibleOptionalInputType(any_type),
}
RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING")
RETURN_TYPES = ("MODEL", "CLIP", IO.STRING, IO.STRING)
RETURN_NAMES = ("MODEL", "CLIP", "trigger_words", "loaded_loras")
FUNCTION = "load_loras"
@@ -107,143 +109,6 @@ class LoraLoaderLM:
# use ',, ' to separate trigger words for group mode
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Format loaded_loras with support for both formats
formatted_loras = []
for item in loaded_loras:
parts = item.split(":")
lora_name = parts[0]
strength_parts = parts[1].strip().split(",")
if len(strength_parts) > 1:
# Different model and clip strengths
model_str = strength_parts[0].strip()
clip_str = strength_parts[1].strip()
formatted_loras.append(f"<lora:{lora_name}:{model_str}:{clip_str}>")
else:
# Same strength for both
model_str = strength_parts[0].strip()
formatted_loras.append(f"<lora:{lora_name}:{model_str}>")
formatted_loras_text = " ".join(formatted_loras)
return (model, clip, trigger_words_text, formatted_loras_text)
class LoraTextLoaderLM:
NAME = "LoRA Text Loader (LoraManager)"
CATEGORY = "Lora Manager/loaders"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"model": ("MODEL",),
"lora_syntax": ("STRING", {
"forceInput": True,
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation"
}),
},
"optional": {
"clip": ("CLIP",),
"lora_stack": ("LORA_STACK",),
}
}
RETURN_TYPES = ("MODEL", "CLIP", "STRING", "STRING")
RETURN_NAMES = ("MODEL", "CLIP", "trigger_words", "loaded_loras")
FUNCTION = "load_loras_from_text"
def parse_lora_syntax(self, text):
"""Parse LoRA syntax from text input."""
# Pattern to match <lora:name:strength> or <lora:name:model_strength:clip_strength>
pattern = r'<lora:([^:>]+):([^:>]+)(?::([^:>]+))?>'
matches = re.findall(pattern, text, re.IGNORECASE)
loras = []
for match in matches:
lora_name = match[0]
model_strength = float(match[1])
clip_strength = float(match[2]) if match[2] else model_strength
loras.append({
'name': lora_name,
'model_strength': model_strength,
'clip_strength': clip_strength
})
return loras
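For reference, the pattern above on a sample prompt; the three-argument form carries a separate clip strength, and an absent third group falls back to the model strength:

```python
import re

pattern = r'<lora:([^:>]+):([^:>]+)(?::([^:>]+))?>'
text = "portrait <lora:detailEnhancer:0.8> <lora:lineArt:0.6:0.4>"
for name, model_s, clip_s in re.findall(pattern, text, re.IGNORECASE):
    print(name, float(model_s), float(clip_s) if clip_s else float(model_s))
# detailEnhancer 0.8 0.8
# lineArt 0.6 0.4
```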
def load_loras_from_text(self, model, lora_syntax, clip=None, lora_stack=None):
"""Load LoRAs based on text syntax input."""
loaded_loras = []
all_trigger_words = []
# Check if model is a Nunchaku Flux model - simplified approach
is_nunchaku_model = False
try:
model_wrapper = model.model.diffusion_model
# Check if model is a Nunchaku Flux model using only class name
if model_wrapper.__class__.__name__ == "ComfyFluxWrapper":
is_nunchaku_model = True
logger.info("Detected Nunchaku Flux model")
except (AttributeError, TypeError):
# Not a model with the expected structure
pass
# First process lora_stack if available
if lora_stack:
for lora_path, model_strength, clip_strength in lora_stack:
# Apply the LoRA using the appropriate loader
if is_nunchaku_model:
# Use our custom function for Flux models
model = nunchaku_load_lora(model, lora_path, model_strength)
# clip remains unchanged for Nunchaku models
else:
# Use default loader for standard models
model, clip = LoraLoader().load_lora(model, clip, lora_path, model_strength, clip_strength)
# Extract lora name for trigger words lookup
lora_name = extract_lora_name(lora_path)
_, trigger_words = get_lora_info(lora_name)
all_trigger_words.extend(trigger_words)
# Add clip strength to output if different from model strength (except for Nunchaku models)
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
else:
loaded_loras.append(f"{lora_name}: {model_strength}")
# Parse and process LoRAs from text syntax
parsed_loras = self.parse_lora_syntax(lora_syntax)
for lora in parsed_loras:
lora_name = lora['name']
model_strength = lora['model_strength']
clip_strength = lora['clip_strength']
# Get lora path and trigger words
lora_path, trigger_words = get_lora_info(lora_name)
# Apply the LoRA using the appropriate loader
if is_nunchaku_model:
# For Nunchaku models, use our custom function
model = nunchaku_load_lora(model, lora_path, model_strength)
# clip remains unchanged
else:
# Use default loader for standard models
model, clip = LoraLoader().load_lora(model, clip, lora_path, model_strength, clip_strength)
# Include clip strength in output if different from model strength and not a Nunchaku model
if not is_nunchaku_model and abs(model_strength - clip_strength) > 0.001:
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
else:
loaded_loras.append(f"{lora_name}: {model_strength}")
# Add trigger words to collection
all_trigger_words.extend(trigger_words)
# use ',, ' to separate trigger words for group mode
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Format loaded_loras with support for both formats
formatted_loras = []
for item in loaded_loras:

View File

@@ -1,87 +0,0 @@
"""
LoRA Pool Node - Defines filter configuration for LoRA selection.
This node provides a visual filter editor that generates a LORA_POOL_CONFIG
object for use by downstream nodes (like LoRA Randomizer).
"""
import logging
logger = logging.getLogger(__name__)
class LoraPoolLM:
"""
A node that defines LoRA filter criteria through a Vue-based widget.
Outputs a LORA_POOL_CONFIG that can be consumed by:
- Frontend: LoRA Randomizer widget reads connected pool's widget value
- Backend: LoRA Randomizer receives config during workflow execution
"""
NAME = "Lora Pool (LoraManager)"
CATEGORY = "Lora Manager/randomizer"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"pool_config": ("LORA_POOL_CONFIG", {}),
},
"hidden": {
# Hidden input to pass through unique node ID for frontend
"unique_id": "UNIQUE_ID",
},
}
RETURN_TYPES = ("POOL_CONFIG",)
RETURN_NAMES = ("POOL_CONFIG",)
FUNCTION = "process"
OUTPUT_NODE = False
def process(self, pool_config, unique_id=None):
"""
Pass through the pool configuration filters.
The config is generated entirely by the frontend widget.
This function validates and returns only the filters field.
Args:
pool_config: Dict containing filter criteria from widget
unique_id: Node's unique ID (hidden)
Returns:
Tuple containing the filters dict from pool_config
"""
# Validate required structure
if not isinstance(pool_config, dict):
logger.warning("Invalid pool_config type, using empty config")
pool_config = self._default_config()
# Ensure version field exists
if "version" not in pool_config:
pool_config["version"] = 1
# Extract filters field
filters = pool_config.get("filters", self._default_config()["filters"])
# Log for debugging
logger.debug(f"[LoraPoolLM] Processing filters: {filters}")
return (filters,)
@staticmethod
def _default_config():
"""Return default empty configuration."""
return {
"version": 1,
"filters": {
"baseModels": [],
"tags": {"include": [], "exclude": []},
"folders": {"include": [], "exclude": []},
"favoritesOnly": False,
"license": {"noCreditRequired": False, "allowSelling": False},
},
"preview": {"matchCount": 0, "lastUpdated": 0},
}

View File

@@ -1,206 +0,0 @@
"""
Lora Randomizer Node - Randomly selects LoRAs from a pool with configurable settings.
This node accepts optional pool_config input to filter available LoRAs, and outputs
a LORA_STACK with randomly selected LoRAs. Returns UI updates with new random LoRAs
and tracks the last used combination for reuse.
"""
import logging
import random
import os
from ..utils.utils import get_lora_info
from .utils import extract_lora_name
logger = logging.getLogger(__name__)
class LoraRandomizerLM:
"""Node that randomly selects LoRAs from a pool"""
NAME = "Lora Randomizer (LoraManager)"
CATEGORY = "Lora Manager/randomizer"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"randomizer_config": ("RANDOMIZER_CONFIG", {}),
"loras": ("LORAS", {}),
},
"optional": {
"pool_config": ("POOL_CONFIG", {}),
},
}
RETURN_TYPES = ("LORA_STACK",)
RETURN_NAMES = ("LORA_STACK",)
FUNCTION = "randomize"
OUTPUT_NODE = False
def _preprocess_loras_input(self, loras):
"""
Preprocess loras input to handle different widget formats.
Args:
loras: Input from widget, either:
- List of LoRA dicts (expected format)
- Dict with '__value__' key containing the list
Returns:
List of LoRA dicts
"""
if isinstance(loras, dict) and "__value__" in loras:
return loras["__value__"]
return loras
async def randomize(self, randomizer_config, loras, pool_config=None):
"""
Randomize LoRAs based on configuration and pool filters.
Args:
randomizer_config: Dict with randomizer settings (count, strength ranges, roll_mode)
loras: List of LoRA dicts from LORAS widget (includes locked state)
pool_config: Optional config from LoRA Pool node for filtering
Returns:
Dictionary with 'result' (LORA_STACK tuple) and 'ui' (for widget display)
"""
from ..services.service_registry import ServiceRegistry
loras = self._preprocess_loras_input(loras)
roll_mode = randomizer_config.get("roll_mode", "always")
logger.debug(f"[LoraRandomizerLM] roll_mode: {roll_mode}")
# Dual seed mechanism for batch queue synchronization
# execution_seed: seed for generating execution_stack (= previous next_seed)
# next_seed: seed for generating ui_loras (= what will be displayed after execution)
execution_seed = randomizer_config.get("execution_seed", None)
next_seed = randomizer_config.get("next_seed", None)
if roll_mode == "fixed":
ui_loras = loras
execution_loras = loras
else:
scanner = await ServiceRegistry.get_lora_scanner()
# Generate execution_loras from execution_seed (if available)
if execution_seed is not None:
# Use execution_seed to regenerate the same loras that were shown to user
execution_loras = await self._generate_random_loras_for_ui(
scanner, randomizer_config, loras, pool_config, seed=execution_seed
)
else:
# First execution: use loras input (what user sees in the widget)
execution_loras = loras
# Generate ui_loras from next_seed (for display after execution)
ui_loras = await self._generate_random_loras_for_ui(
scanner, randomizer_config, loras, pool_config, seed=next_seed
)
execution_stack = self._build_execution_stack_from_input(execution_loras)
return {
"result": (execution_stack,),
"ui": {"loras": ui_loras, "last_used": execution_loras},
}
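The dual-seed contract only works because the same seed reproduces the same selection; a toy sketch of that invariant (hypothetical `roll` helper):

```python
import random

def roll(pool, count, seed):
    rng = random.Random(seed)   # isolated, deterministic RNG
    return rng.sample(pool, count)

pool = ["a", "b", "c", "d", "e"]
shown = roll(pool, 2, seed=42)      # next_seed: what the widget displayed
executed = roll(pool, 2, seed=42)   # execution_seed: regenerated at run time
assert shown == executed            # queued batch stays in sync with the UI
```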
def _build_execution_stack_from_input(self, loras):
"""
Build LORA_STACK tuple from input loras list for execution.
Args:
loras: List of LoRA dicts with name, strength, clipStrength, active
Returns:
List of tuples (lora_path, model_strength, clip_strength)
"""
lora_stack = []
for lora in loras:
if not lora.get("active", False):
continue
# Get file path
lora_path, trigger_words = get_lora_info(lora["name"])
if not lora_path:
logger.warning(
f"[LoraRandomizerLM] Could not find path for LoRA: {lora['name']}"
)
continue
# Normalize path separators
lora_path = lora_path.replace("/", os.sep)
# Extract strengths (convert to float to prevent string subtraction errors)
model_strength = float(lora.get("strength", 1.0))
clip_strength = float(lora.get("clipStrength", model_strength))
lora_stack.append((lora_path, model_strength, clip_strength))
return lora_stack
async def _generate_random_loras_for_ui(
self, scanner, randomizer_config, input_loras, pool_config=None, seed=None
):
"""
Generate new random loras for UI display.
Args:
scanner: LoraScanner instance
randomizer_config: Dict with randomizer settings
input_loras: Current input loras (for extracting locked loras)
pool_config: Optional pool filters
seed: Optional seed for deterministic randomization
Returns:
List of LoRA dicts for UI display
"""
from ..services.lora_service import LoraService
# Parse randomizer settings (convert numeric values to float to prevent type errors)
count_mode = randomizer_config.get("count_mode", "range")
count_fixed = int(randomizer_config.get("count_fixed", 5))
count_min = int(randomizer_config.get("count_min", 3))
count_max = int(randomizer_config.get("count_max", 7))
model_strength_min = float(randomizer_config.get("model_strength_min", 0.0))
model_strength_max = float(randomizer_config.get("model_strength_max", 1.0))
use_same_clip_strength = randomizer_config.get("use_same_clip_strength", True)
clip_strength_min = float(randomizer_config.get("clip_strength_min", 0.0))
clip_strength_max = float(randomizer_config.get("clip_strength_max", 1.0))
use_recommended_strength = randomizer_config.get(
"use_recommended_strength", False
)
recommended_strength_scale_min = float(
randomizer_config.get("recommended_strength_scale_min", 0.5)
)
recommended_strength_scale_max = float(
randomizer_config.get("recommended_strength_scale_max", 1.0)
)
# Extract locked LoRAs from input
locked_loras = [lora for lora in input_loras if lora.get("locked", False)]
# Use LoraService to generate random LoRAs
lora_service = LoraService(scanner)
result_loras = await lora_service.get_random_loras(
count=count_fixed,
model_strength_min=model_strength_min,
model_strength_max=model_strength_max,
use_same_clip_strength=use_same_clip_strength,
clip_strength_min=clip_strength_min,
clip_strength_max=clip_strength_max,
locked_loras=locked_loras,
pool_config=pool_config,
count_mode=count_mode,
count_min=count_min,
count_max=count_max,
use_recommended_strength=use_recommended_strength,
recommended_strength_scale_min=recommended_strength_scale_min,
recommended_strength_scale_max=recommended_strength_scale_max,
seed=seed,
)
return result_loras

View File

@@ -1,3 +1,4 @@
from comfy.comfy_types import IO # type: ignore
import os
from ..utils.utils import get_lora_info
from .utils import FlexibleOptionalInputType, any_type, extract_lora_name, get_loras_list
@@ -6,7 +7,7 @@ import logging
logger = logging.getLogger(__name__)
class LoraStackerLM:
class LoraStacker:
NAME = "Lora Stacker (LoraManager)"
CATEGORY = "Lora Manager/stackers"
@@ -14,15 +15,17 @@ class LoraStackerLM:
def INPUT_TYPES(cls):
return {
"required": {
"text": ("AUTOCOMPLETE_TEXT_LORAS", {
"placeholder": "Search LoRAs to add...",
"text": (IO.STRING, {
"multiline": True,
"dynamicPrompts": True,
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
"placeholder": "LoRA syntax input: <lora:name:strength>"
}),
},
"optional": FlexibleOptionalInputType(any_type),
}
RETURN_TYPES = ("LORA_STACK", "STRING", "STRING")
RETURN_TYPES = ("LORA_STACK", IO.STRING, IO.STRING)
RETURN_NAMES = ("LORA_STACK", "trigger_words", "active_loras")
FUNCTION = "stack_loras"

View File

@@ -1,58 +0,0 @@
from typing import Any, Optional
class PromptLM:
"""Encodes text (and optional trigger words) into CLIP conditioning."""
NAME = "Prompt (LoraManager)"
CATEGORY = "Lora Manager/conditioning"
DESCRIPTION = (
"Encodes a text prompt using a CLIP model into an embedding that can be used "
"to guide the diffusion model towards generating specific images."
)
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"text": (
"AUTOCOMPLETE_TEXT_PROMPT,STRING",
{
"widgetType": "AUTOCOMPLETE_TEXT_PROMPT",
"placeholder": "Enter prompt... /char, /artist for quick tag search",
"tooltip": "The text to be encoded.",
},
),
"clip": (
'CLIP',
{"tooltip": "The CLIP model used for encoding the text."},
),
},
"optional": {
"trigger_words": (
'STRING',
{
"forceInput": True,
"tooltip": (
"Optional trigger words to prepend to the text before "
"encoding."
)
},
)
},
}
RETURN_TYPES = ('CONDITIONING', 'STRING',)
RETURN_NAMES = ('CONDITIONING', 'PROMPT',)
OUTPUT_TOOLTIPS = (
"A conditioning containing the embedded text used to guide the diffusion model.",
)
FUNCTION = "encode"
def encode(self, text: str, clip: Any, trigger_words: Optional[str] = None):
prompt = text
if trigger_words:
prompt = ", ".join([trigger_words, text])
from nodes import CLIPTextEncode # type: ignore
conditioning = CLIPTextEncode().encode(clip, prompt)[0]
return (conditioning, prompt,)

View File

@@ -1,5 +1,6 @@
import json
import os
import asyncio
import re
import numpy as np
import folder_paths # type: ignore
@@ -8,11 +9,8 @@ from ..metadata_collector.metadata_processor import MetadataProcessor
from ..metadata_collector import get_metadata
from PIL import Image, PngImagePlugin
import piexif
import logging
logger = logging.getLogger(__name__)
class SaveImageLM:
class SaveImage:
NAME = "Save Image (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = "Save images with embedded generation metadata in compatible format"
@@ -276,15 +274,9 @@ class SaveImageLM:
length = int(parts[1])
prompt = prompt[:length]
filename = filename.replace(segment, prompt.strip())
elif key == "model":
model_value = metadata_dict.get('checkpoint')
if isinstance(model_value, (bytes, os.PathLike)):
model_value = str(model_value)
if not isinstance(model_value, str) or not model_value:
model = "model_unavailable"
else:
model = os.path.splitext(os.path.basename(model_value))[0]
elif key == "model" and 'checkpoint' in metadata_dict:
model = metadata_dict.get('checkpoint', '')
model = os.path.splitext(os.path.basename(model))[0]
if len(parts) >= 2:
length = int(parts[1])
model = model[:length]
@@ -388,7 +380,7 @@ class SaveImageLM:
exif_bytes = piexif.dump(exif_dict)
save_kwargs["exif"] = exif_bytes
except Exception as e:
logger.error(f"Error adding EXIF data: {e}")
print(f"Error adding EXIF data: {e}")
img.save(file_path, format="JPEG", **save_kwargs)
elif file_format == "webp":
try:
@@ -406,7 +398,7 @@ class SaveImageLM:
exif_bytes = piexif.dump(exif_dict)
save_kwargs["exif"] = exif_bytes
except Exception as e:
logger.error(f"Error adding EXIF data: {e}")
print(f"Error adding EXIF data: {e}")
img.save(file_path, format="WEBP", **save_kwargs)
@@ -417,7 +409,7 @@ class SaveImageLM:
})
except Exception as e:
logger.error(f"Error saving image: {e}")
print(f"Error saving image: {e}")
return results
@@ -427,15 +419,11 @@ class SaveImageLM:
# Make sure the output directory exists
os.makedirs(self.output_dir, exist_ok=True)
# If images is already a list or array of images, do nothing; otherwise, convert to list
if isinstance(images, (list, np.ndarray)):
pass
else:
# Ensure images is always a list of images
if len(images.shape) == 3: # Single image (height, width, channels)
images = [images]
else: # Multiple images (batch, height, width, channels)
images = [img for img in images]
# Ensure images is always a list of images
if len(images.shape) == 3: # Single image (height, width, channels)
images = [images]
else: # Multiple images (batch, height, width, channels)
images = [img for img in images]
# Save all images
results = self.save_images(
@@ -451,4 +439,4 @@ class SaveImageLM:
add_counter_to_filename
)
return (images,)
return (images,)
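The `%model%` branch in the filename-pattern hunk above resolves the checkpoint name from collected metadata. A hedged, standalone sketch of that substitution — the `%key:length%` convention and the `checkpoint` metadata key come from the hunk, while the helper itself is illustrative:

```python
import os
import re

def substitute_model_token(filename, metadata_dict):
    """Replace %model% or %model:N% with the checkpoint basename, optionally truncated."""
    for segment in re.findall(r"%model(?::\d+)?%", filename):
        parts = segment.strip("%").split(":")
        model_value = metadata_dict.get("checkpoint") or ""
        model = os.path.splitext(os.path.basename(str(model_value)))[0] or "model_unavailable"
        if len(parts) >= 2:
            model = model[: int(parts[1])]
        filename = filename.replace(segment, model)
    return filename

print(substitute_model_token("%model:8%_0001", {"checkpoint": "models/sdxl_base_1.0.safetensors"}))
# sdxl_bas_0001
```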

View File

@@ -1,33 +0,0 @@
class TextLM:
"""A simple text node with autocomplete support."""
NAME = "Text (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = (
"A simple text input node with autocomplete support for tags and styles."
)
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"text": (
"AUTOCOMPLETE_TEXT_PROMPT,STRING",
{
"widgetType": "AUTOCOMPLETE_TEXT_PROMPT",
"placeholder": "Enter text... /char, /artist for quick tag search",
"tooltip": "The text output.",
},
),
},
}
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("STRING",)
OUTPUT_TOOLTIPS = (
"The text output.",
)
FUNCTION = "process"
def process(self, text: str):
return (text,)

View File

@@ -1,41 +1,29 @@
import json
import re
from server import PromptServer # type: ignore
from .utils import FlexibleOptionalInputType, any_type
import logging
logger = logging.getLogger(__name__)
class TriggerWordToggleLM:
class TriggerWordToggle:
NAME = "TriggerWord Toggle (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = "Toggle trigger words on/off"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"group_mode": (
"BOOLEAN",
{
"default": True,
"tooltip": "When enabled, treats each group of trigger words as a single toggleable unit.",
},
),
"default_active": (
"BOOLEAN",
{
"default": True,
"tooltip": "Sets the default initial state (active or inactive) when trigger words are added.",
},
),
"allow_strength_adjustment": (
"BOOLEAN",
{
"default": False,
"tooltip": "Enable mouse wheel adjustment of each trigger word's strength.",
},
),
"group_mode": ("BOOLEAN", {
"default": True,
"tooltip": "When enabled, treats each group of trigger words as a single toggleable unit."
}),
"default_active": ("BOOLEAN", {
"default": True,
"tooltip": "Sets the default initial state (active or inactive) when trigger words are added."
}),
},
"optional": FlexibleOptionalInputType(any_type),
"hidden": {
@@ -47,154 +35,63 @@ class TriggerWordToggleLM:
RETURN_NAMES = ("filtered_trigger_words",)
FUNCTION = "process_trigger_words"
def _get_toggle_data(self, kwargs, key="toggle_trigger_words"):
def _get_toggle_data(self, kwargs, key='toggle_trigger_words'):
"""Helper to extract data from either old or new kwargs format"""
if key not in kwargs:
return None
data = kwargs[key]
# Handle new format: {'key': {'__value__': ...}}
if isinstance(data, dict) and "__value__" in data:
return data["__value__"]
if isinstance(data, dict) and '__value__' in data:
return data['__value__']
# Handle old format: {'key': ...}
else:
return data
def _normalize_trigger_words(self, trigger_words):
"""Normalize trigger words by splitting by both single and double commas, stripping whitespace, and filtering empty strings"""
if not trigger_words or not isinstance(trigger_words, str):
return set()
# Split by double commas first to preserve groups, then by single commas
groups = re.split(r",{2,}", trigger_words)
words = []
for group in groups:
# Split each group by single comma
group_words = [word.strip() for word in group.split(",")]
words.extend(group_words)
# Filter out empty strings and return as set
return set(word for word in words if word)
def process_trigger_words(
self,
id,
group_mode,
default_active,
allow_strength_adjustment=False,
**kwargs,
):
def process_trigger_words(self, id, group_mode, default_active, **kwargs):
# Handle both old and new formats for trigger_words
trigger_words_data = self._get_toggle_data(kwargs, "orinalMessage")
trigger_words = (
trigger_words_data if isinstance(trigger_words_data, str) else ""
)
trigger_words_data = self._get_toggle_data(kwargs, 'orinalMessage')
trigger_words = trigger_words_data if isinstance(trigger_words_data, str) else ""
filtered_triggers = trigger_words
# Check if trigger_words is provided and different from orinalMessage
trigger_words_override = self._get_toggle_data(kwargs, "trigger_words")
if (
trigger_words_override
and isinstance(trigger_words_override, str)
and self._normalize_trigger_words(trigger_words_override) != self._normalize_trigger_words(trigger_words)
):
filtered_triggers = trigger_words_override
return (filtered_triggers,)
# Get toggle data with support for both formats
trigger_data = self._get_toggle_data(kwargs, "toggle_trigger_words")
trigger_data = self._get_toggle_data(kwargs, 'toggle_trigger_words')
if trigger_data:
try:
# Convert to list if it's a JSON string
if isinstance(trigger_data, str):
trigger_data = json.loads(trigger_data)
if isinstance(trigger_data, list):
if group_mode:
if allow_strength_adjustment:
parsed_items = [
self._parse_trigger_item(
item, allow_strength_adjustment
)
for item in trigger_data
]
filtered_groups = [
self._format_word_output(
item["text"],
item["strength"],
allow_strength_adjustment,
)
for item in parsed_items
if item["text"] and item["active"]
]
else:
filtered_groups = [
(item.get("text") or "").strip()
for item in trigger_data
if (item.get("text") or "").strip()
and item.get("active", False)
]
filtered_triggers = (
", ".join(filtered_groups) if filtered_groups else ""
)
# Create dictionaries to track active state of words or groups
active_state = {item['text']: item.get('active', False) for item in trigger_data}
if group_mode:
# Split by two or more consecutive commas to get groups
groups = re.split(r',{2,}', trigger_words)
# Remove leading/trailing whitespace from each group
groups = [group.strip() for group in groups]
# Filter groups: keep those not in toggle_trigger_words or those that are active
filtered_groups = [group for group in groups if group not in active_state or active_state[group]]
if filtered_groups:
filtered_triggers = ', '.join(filtered_groups)
else:
parsed_items = [
self._parse_trigger_item(item, allow_strength_adjustment)
for item in trigger_data
]
filtered_words = [
self._format_word_output(
item["text"],
item["strength"],
allow_strength_adjustment,
)
for item in parsed_items
if item["text"] and item["active"]
]
filtered_triggers = (
", ".join(filtered_words) if filtered_words else ""
)
filtered_triggers = ""
else:
# Fallback to original message parsing if data is not in the expected list format
if group_mode:
groups = re.split(r",{2,}", trigger_words)
groups = [group.strip() for group in groups if group.strip()]
filtered_triggers = ", ".join(groups)
# Original behavior for individual words mode
original_words = [word.strip() for word in trigger_words.split(',')]
# Filter out empty strings
original_words = [word for word in original_words if word]
filtered_words = [word for word in original_words if word not in active_state or active_state[word]]
if filtered_words:
filtered_triggers = ', '.join(filtered_words)
else:
words = [
word.strip()
for word in trigger_words.split(",")
if word.strip()
]
filtered_triggers = ", ".join(words)
filtered_triggers = ""
except Exception as e:
logger.error(f"Error processing trigger words: {e}")
return (filtered_triggers,)
def _parse_trigger_item(self, item, allow_strength_adjustment):
text = (item.get("text") or "").strip()
active = bool(item.get("active", False))
strength = item.get("strength")
strength_match = re.match(r"^\((.+):([\d.]+)\)$", text)
if strength_match:
text = strength_match.group(1).strip()
if strength is None:
try:
strength = float(strength_match.group(2))
except ValueError:
strength = None
return {
"text": text,
"active": active,
"strength": strength if allow_strength_adjustment else None,
}
def _format_word_output(self, base_word, strength, allow_strength_adjustment):
if allow_strength_adjustment and strength is not None:
return f"({base_word}:{strength:.2f})"
return base_word
return (filtered_triggers,)
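The double-comma grouping convention used throughout this node is easiest to see in isolation. This sketch reproduces `_normalize_trigger_words` with the same splitting rules so the behavior can be checked directly:

```python
import re

def normalize_trigger_words(trigger_words):
    """Split on double commas (groups), then single commas; strip and drop empties."""
    if not trigger_words or not isinstance(trigger_words, str):
        return set()
    words = []
    for group in re.split(r",{2,}", trigger_words):
        words.extend(w.strip() for w in group.split(","))
    return {w for w in words if w}

print(normalize_trigger_words("masterpiece, best quality,, 1girl, solo"))
# {'masterpiece', 'best quality', '1girl', 'solo'} (set order varies)
```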

View File

@@ -36,7 +36,6 @@ any_type = AnyType("*")
import os
import logging
import copy
import sys
import folder_paths
logger = logging.getLogger(__name__)
@@ -99,38 +98,22 @@ def to_diffusers(input_lora):
def nunchaku_load_lora(model, lora_name, lora_strength):
"""Load a Flux LoRA for Nunchaku model"""
# Get full path to the LoRA file. Allow both direct paths and registered LoRA names.
lora_path = lora_name if os.path.isfile(lora_name) else folder_paths.get_full_path("loras", lora_name)
if not lora_path or not os.path.isfile(lora_path):
logger.warning("Skipping LoRA '%s' because it could not be found", lora_name)
return model
model_wrapper = model.model.diffusion_model
transformer = model_wrapper.model
# Save the transformer temporarily
model_wrapper.model = None
ret_model = copy.deepcopy(model) # copy everything except the model
ret_model_wrapper = ret_model.model.diffusion_model
# Restore the model and set it for the copy
model_wrapper.model = transformer
ret_model_wrapper.model = transformer
# Get full path to the LoRA file
lora_path = folder_paths.get_full_path("loras", lora_name)
ret_model_wrapper.loras.append((lora_path, lora_strength))
# Try to find copy_with_ctx in the same module as ComfyFluxWrapper
module_name = model_wrapper.__class__.__module__
module = sys.modules.get(module_name)
copy_with_ctx = getattr(module, "copy_with_ctx", None)
if copy_with_ctx is not None:
# New logic using copy_with_ctx from ComfyUI-nunchaku 1.1.0+
ret_model_wrapper, ret_model = copy_with_ctx(model_wrapper)
ret_model_wrapper.loras = [*model_wrapper.loras, (lora_path, lora_strength)]
else:
# Fallback to legacy logic
logger.warning("Please upgrade ComfyUI-nunchaku to 1.1.0 or above for better LoRA support. Falling back to legacy loading logic.")
transformer = model_wrapper.model
# Save the transformer temporarily
model_wrapper.model = None
ret_model = copy.deepcopy(model) # copy everything except the model
ret_model_wrapper = ret_model.model.diffusion_model
# Restore the model and set it for the copy
model_wrapper.model = transformer
ret_model_wrapper.model = transformer
ret_model_wrapper.loras.append((lora_path, lora_strength))
# Convert the LoRA to diffusers format
sd = to_diffusers(lora_path)
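The compatibility probe above — looking up `copy_with_ctx` in the module that defines the wrapper's class and falling back to legacy behavior when it is absent — is a general pattern. A minimal sketch using only the standard library (the class and attribute names here are placeholders):

```python
import sys

def find_module_attr(obj, attr_name):
    """Return attr_name from the module defining obj's class, or None if missing."""
    module = sys.modules.get(obj.__class__.__module__)
    return getattr(module, attr_name, None)

class FakeWrapper:
    pass

# copy_with_ctx is not defined in this module, so the probe returns None
# and a caller would take the legacy code path.
print(find_module_attr(FakeWrapper(), "copy_with_ctx"))  # None
```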

View File

@@ -1,3 +1,4 @@
from comfy.comfy_types import IO # type: ignore
import folder_paths # type: ignore
from ..utils.utils import get_lora_info
from .utils import FlexibleOptionalInputType, any_type, get_loras_list
@@ -5,7 +6,7 @@ import logging
logger = logging.getLogger(__name__)
class WanVideoLoraSelectLM:
class WanVideoLoraSelect:
NAME = "WanVideo Lora Select (LoraManager)"
CATEGORY = "Lora Manager/stackers"
@@ -13,21 +14,22 @@ class WanVideoLoraSelectLM:
def INPUT_TYPES(cls):
return {
"required": {
"low_mem_load": ("BOOLEAN", {"default": False, "tooltip": "Load LORA models with less VRAM usage, slower loading. This affects ALL LoRAs, not just the current ones. No effect if merge_loras is False"}),
"merge_loras": ("BOOLEAN", {"default": True, "tooltip": "Merge LoRAs into the model, otherwise they are loaded on the fly. Always disabled for GGUF and scaled fp8 models. This affects ALL LoRAs, not just the current one"}),
"text": ("AUTOCOMPLETE_TEXT_LORAS", {
"placeholder": "Search LoRAs to add...",
"low_mem_load": ("BOOLEAN", {"default": False, "tooltip": "Load the LORA model with less VRAM usage, slower loading"}),
"text": (IO.STRING, {
"multiline": True,
"dynamicPrompts": True,
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
"placeholder": "LoRA syntax input: <lora:name:strength>"
}),
},
"optional": FlexibleOptionalInputType(any_type),
}
RETURN_TYPES = ("WANVIDLORA", "STRING", "STRING")
RETURN_TYPES = ("WANVIDLORA", IO.STRING, IO.STRING)
RETURN_NAMES = ("lora", "trigger_words", "active_loras")
FUNCTION = "process_loras"
def process_loras(self, text, low_mem_load=False, merge_loras=True, **kwargs):
def process_loras(self, text, low_mem_load=False, **kwargs):
loras_list = []
all_trigger_words = []
active_loras = []
@@ -36,9 +38,6 @@ class WanVideoLoraSelectLM:
prev_lora = kwargs.get('prev_lora', None)
if prev_lora is not None:
loras_list.extend(prev_lora)
if not merge_loras:
low_mem_load = False # Unmerged LoRAs don't need low_mem_load
# Get blocks if available
blocks = kwargs.get('blocks', {})
@@ -66,7 +65,6 @@ class WanVideoLoraSelectLM:
"blocks": selected_blocks,
"layer_filter": layer_filter,
"low_mem_load": low_mem_load,
"merge_loras": merge_loras,
}
# Add to list and collect active loras

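The `merge_loras`/`low_mem_load` interaction in this node reduces to one rule: LoRAs that are not merged never need the low-VRAM loading path. A hedged sketch of the resulting lora item — the field names appear in the hunk above, but the helper and values are illustrative:

```python
def build_lora_item(path, strength, low_mem_load=False, merge_loras=True):
    # Unmerged (loaded on the fly) LoRAs skip the low-VRAM merge path entirely.
    if not merge_loras:
        low_mem_load = False
    return {
        "path": path,
        "strength": strength,
        "low_mem_load": low_mem_load,
        "merge_loras": merge_loras,
    }

item = build_lora_item("loras/wan_style.safetensors", 0.8, low_mem_load=True, merge_loras=False)
print(item["low_mem_load"])  # False
```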
View File

@@ -1,117 +0,0 @@
import folder_paths # type: ignore
from ..utils.utils import get_lora_info
from .utils import any_type
import logging
# Initialize the logger
logger = logging.getLogger(__name__)
# Define the new node class
class WanVideoLoraTextSelectLM:
# Node name shown in the UI
NAME = "WanVideo Lora Select From Text (LoraManager)"
# Category the node belongs to
CATEGORY = "Lora Manager/stackers"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"low_mem_load": ("BOOLEAN", {"default": False, "tooltip": "Load LORA models with less VRAM usage, slower loading. This affects ALL LoRAs, not just the current ones. No effect if merge_loras is False"}),
"merge_lora": ("BOOLEAN", {"default": True, "tooltip": "Merge LoRAs into the model, otherwise they are loaded on the fly. Always disabled for GGUF and scaled fp8 models. This affects ALL LoRAs, not just the current one"}),
"lora_syntax": ("STRING", {
"multiline": True,
"forceInput": True,
"tooltip": "Connect a TEXT output for LoRA syntax: <lora:name:strength>"
}),
},
"optional": {
"prev_lora": ("WANVIDLORA",),
"blocks": ("BLOCKS",)
}
}
RETURN_TYPES = ("WANVIDLORA", "STRING", "STRING")
RETURN_NAMES = ("lora", "trigger_words", "active_loras")
FUNCTION = "process_loras_from_syntax"
def process_loras_from_syntax(self, lora_syntax, low_mem_load=False, merge_lora=True, **kwargs):
text_to_process = lora_syntax
blocks = kwargs.get('blocks', {})
selected_blocks = blocks.get("selected_blocks", {})
layer_filter = blocks.get("layer_filter", "")
loras_list = []
all_trigger_words = []
active_loras = []
prev_lora = kwargs.get('prev_lora', None)
if prev_lora is not None:
loras_list.extend(prev_lora)
if not merge_lora:
low_mem_load = False
parts = text_to_process.split('<lora:')
for part in parts[1:]:
end_index = part.find('>')
if end_index == -1:
continue
content = part[:end_index]
lora_parts = content.split(':')
lora_name_raw = ""
model_strength = 1.0
clip_strength = 1.0
if len(lora_parts) == 2:
lora_name_raw = lora_parts[0].strip()
try:
model_strength = float(lora_parts[1])
clip_strength = model_strength
except (ValueError, IndexError):
logger.warning(f"Invalid strength for LoRA '{lora_name_raw}'. Skipping.")
continue
elif len(lora_parts) >= 3:
lora_name_raw = lora_parts[0].strip()
try:
model_strength = float(lora_parts[1])
clip_strength = float(lora_parts[2])
except (ValueError, IndexError):
logger.warning(f"Invalid strengths for LoRA '{lora_name_raw}'. Skipping.")
continue
else:
continue
lora_path, trigger_words = get_lora_info(lora_name_raw)
lora_item = {
"path": folder_paths.get_full_path("loras", lora_path),
"strength": model_strength,
"name": lora_path.split(".")[0],
"blocks": selected_blocks,
"layer_filter": layer_filter,
"low_mem_load": low_mem_load,
"merge_loras": merge_lora,
}
loras_list.append(lora_item)
active_loras.append((lora_name_raw, model_strength, clip_strength))
all_trigger_words.extend(trigger_words)
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
formatted_loras = []
for name, model_strength, clip_strength in active_loras:
if abs(model_strength - clip_strength) > 0.001:
formatted_loras.append(f"<lora:{name}:{str(model_strength).strip()}:{str(clip_strength).strip()}>")
else:
formatted_loras.append(f"<lora:{name}:{str(model_strength).strip()}>")
active_loras_text = " ".join(formatted_loras)
return (loras_list, trigger_words_text, active_loras_text)
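The active-LoRA formatting at the end of `process_loras_from_syntax` only emits the clip strength when it differs from the model strength. The same logic, extracted into a runnable sketch:

```python
def format_active_loras(active_loras):
    """active_loras: list of (name, model_strength, clip_strength) tuples."""
    formatted = []
    for name, model_strength, clip_strength in active_loras:
        if abs(model_strength - clip_strength) > 0.001:
            formatted.append(f"<lora:{name}:{model_strength}:{clip_strength}>")
        else:
            formatted.append(f"<lora:{name}:{model_strength}>")
    return " ".join(formatted)

print(format_active_loras([("styleA", 0.8, 0.8), ("styleB", 1.0, 0.5)]))
# <lora:styleA:0.8> <lora:styleB:1.0:0.5>
```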

View File

@@ -8,7 +8,6 @@ from typing import Dict, List, Any, Optional, Tuple
from abc import ABC, abstractmethod
from ..config import config
from ..utils.constants import VALID_LORA_TYPES
from ..utils.civitai_utils import rewrite_preview_url
logger = logging.getLogger(__name__)
@@ -37,8 +36,7 @@ class RecipeMetadataParser(ABC):
"""
pass
@staticmethod
async def populate_lora_from_civitai(lora_entry: Dict[str, Any], civitai_info_tuple: Tuple[Dict[str, Any], Optional[str]],
async def populate_lora_from_civitai(self, lora_entry: Dict[str, Any], civitai_info_tuple: Tuple[Dict[str, Any], Optional[str]],
recipe_scanner=None, base_model_counts=None, hash_value=None) -> Optional[Dict[str, Any]]:
"""
Populate a lora entry with information from Civitai API response
@@ -57,7 +55,7 @@ class RecipeMetadataParser(ABC):
# Unpack the tuple to get the actual data
civitai_info, error_msg = civitai_info_tuple if isinstance(civitai_info_tuple, tuple) else (civitai_info_tuple, None)
if not civitai_info or error_msg == "Model not found":
if not civitai_info or civitai_info.get("error") == "Model not found":
# Model not found or deleted
lora_entry['isDeleted'] = True
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'
@@ -80,7 +78,7 @@ class RecipeMetadataParser(ABC):
# Update model name if available
if 'model' in civitai_info and 'name' in civitai_info['model']:
lora_entry['name'] = civitai_info['model']['name']
lora_entry['id'] = civitai_info.get('id')
lora_entry['modelId'] = civitai_info.get('modelId')
@@ -90,10 +88,7 @@ class RecipeMetadataParser(ABC):
# Get thumbnail URL from first image
if 'images' in civitai_info and civitai_info['images']:
image_url = civitai_info['images'][0].get('url')
if image_url:
rewritten_image_url, _ = rewrite_preview_url(image_url, media_type='image')
lora_entry['thumbnailUrl'] = rewritten_image_url or image_url
lora_entry['thumbnailUrl'] = civitai_info['images'][0].get('url', '')
# Get base model
current_base_model = civitai_info.get('baseModel', '')
@@ -124,10 +119,10 @@ class RecipeMetadataParser(ABC):
# Check if exists locally
if recipe_scanner and lora_entry['hash']:
lora_scanner = recipe_scanner._lora_scanner
exists_locally = lora_scanner.has_hash(lora_entry['hash'])
exists_locally = lora_scanner.has_lora_hash(lora_entry['hash'])
if exists_locally:
try:
local_path = lora_scanner.get_path_by_hash(lora_entry['hash'])
local_path = lora_scanner.get_lora_path_by_hash(lora_entry['hash'])
lora_entry['existsLocally'] = True
lora_entry['localPath'] = local_path
lora_entry['file_name'] = os.path.splitext(os.path.basename(local_path))[0]
@@ -149,68 +144,40 @@ class RecipeMetadataParser(ABC):
logger.error(f"Error populating lora from Civitai info: {e}")
return lora_entry
@staticmethod
async def populate_checkpoint_from_civitai(checkpoint: Dict[str, Any], civitai_info: Dict[str, Any]) -> Dict[str, Any]:
async def populate_checkpoint_from_civitai(self, checkpoint: Dict[str, Any], civitai_info: Dict[str, Any]) -> Dict[str, Any]:
"""
Populate checkpoint information from Civitai API response
Args:
checkpoint: The checkpoint entry to populate
civitai_info: The response from Civitai API or a (data, error_msg) tuple
civitai_info: The response from Civitai API
Returns:
The populated checkpoint dict
"""
try:
civitai_data, error_msg = (
(civitai_info, None)
if not isinstance(civitai_info, tuple)
else civitai_info
)
if not civitai_data or error_msg == "Model not found":
if civitai_info and civitai_info.get("error") != "Model not found":
# Update model name if available
if 'model' in civitai_info and 'name' in civitai_info['model']:
checkpoint['name'] = civitai_info['model']['name']
# Update version if available
if 'name' in civitai_info:
checkpoint['version'] = civitai_info.get('name', '')
# Get thumbnail URL from first image
if 'images' in civitai_info and civitai_info['images']:
checkpoint['thumbnailUrl'] = civitai_info['images'][0].get('url', '')
# Get base model
checkpoint['baseModel'] = civitai_info.get('baseModel', '')
# Get download URL
checkpoint['downloadUrl'] = civitai_info.get('downloadUrl', '')
else:
# Model not found or deleted
checkpoint['isDeleted'] = True
return checkpoint
if 'model' in civitai_data and 'name' in civitai_data['model']:
checkpoint['name'] = civitai_data['model']['name']
if 'name' in civitai_data:
checkpoint['version'] = civitai_data.get('name', '')
if 'images' in civitai_data and civitai_data['images']:
image_url = civitai_data['images'][0].get('url')
if image_url:
rewritten_image_url, _ = rewrite_preview_url(image_url, media_type='image')
checkpoint['thumbnailUrl'] = rewritten_image_url or image_url
checkpoint['baseModel'] = civitai_data.get('baseModel', '')
checkpoint['downloadUrl'] = civitai_data.get('downloadUrl', '')
checkpoint['modelId'] = civitai_data.get('modelId', checkpoint.get('modelId', 0))
checkpoint['id'] = civitai_data.get('id', 0)
if 'files' in civitai_data:
model_file = next(
(
file
for file in civitai_data.get('files', [])
if file.get('type') == 'Model'
),
None,
)
if model_file:
checkpoint['size'] = model_file.get('sizeKB', 0) * 1024
sha256 = model_file.get('hashes', {}).get('SHA256')
if sha256:
checkpoint['hash'] = sha256.lower()
file_name = model_file.get('name', '')
if file_name:
checkpoint['file_name'] = os.path.splitext(file_name)[0]
except Exception as e:
logger.error(f"Error populating checkpoint from Civitai info: {e}")

View File

@@ -1,216 +0,0 @@
import logging
import json
import re
import os
from typing import Any, Dict, Optional
from .merger import GenParamsMerger
from .base import RecipeMetadataParser
from ..services.metadata_service import get_default_metadata_provider
logger = logging.getLogger(__name__)
class RecipeEnricher:
"""Service to enrich recipe metadata from multiple sources (Civitai, Embedded, User)."""
@staticmethod
async def enrich_recipe(
recipe: Dict[str, Any],
civitai_client: Any,
request_params: Optional[Dict[str, Any]] = None
) -> bool:
"""
Enrich a recipe dictionary in-place with metadata from Civitai and embedded params.
Args:
recipe: The recipe dictionary to enrich. Must have 'gen_params' initialized.
civitai_client: Authenticated Civitai client instance.
request_params: (Optional) Parameters from a user request (e.g. import).
Returns:
bool: True if the recipe was modified, False otherwise.
"""
updated = False
gen_params = recipe.get("gen_params", {})
# 1. Fetch Civitai Info if available
civitai_meta = None
model_version_id = None
source_url = recipe.get("source_url") or recipe.get("source_path", "")
# Check if it's a Civitai image URL
image_id_match = re.search(r'civitai\.com/images/(\d+)', str(source_url))
if image_id_match:
image_id = image_id_match.group(1)
try:
image_info = await civitai_client.get_image_info(image_id)
if image_info:
# Handle nested meta often found in Civitai API responses
raw_meta = image_info.get("meta")
if isinstance(raw_meta, dict):
if "meta" in raw_meta and isinstance(raw_meta["meta"], dict):
civitai_meta = raw_meta["meta"]
else:
civitai_meta = raw_meta
model_version_id = image_info.get("modelVersionId")
# If not at top level, check resources in meta
if not model_version_id and civitai_meta:
resources = civitai_meta.get("civitaiResources", [])
for res in resources:
if res.get("type") == "checkpoint":
model_version_id = res.get("modelVersionId")
break
except Exception as e:
logger.warning(f"Failed to fetch Civitai image info: {e}")
# 2. Merge Parameters
# Priority: request_params > civitai_meta > embedded (existing gen_params)
new_gen_params = GenParamsMerger.merge(
request_params=request_params,
civitai_meta=civitai_meta,
embedded_metadata=gen_params
)
if new_gen_params != gen_params:
recipe["gen_params"] = new_gen_params
updated = True
# 3. Checkpoint Enrichment
# If we have a checkpoint entry, or we can find one
# Use 'id' (from Civitai version) as a marker that it's been enriched
checkpoint_entry = recipe.get("checkpoint")
has_full_checkpoint = checkpoint_entry and checkpoint_entry.get("name") and checkpoint_entry.get("id")
if not has_full_checkpoint:
# Helper to look up values in priority order
def start_lookup(keys):
for source in [request_params, civitai_meta, gen_params]:
if source:
if isinstance(keys, list):
for k in keys:
if k in source: return source[k]
else:
if keys in source: return source[keys]
return None
target_version_id = model_version_id or start_lookup("modelVersionId")
# Also check existing checkpoint entry
if not target_version_id and checkpoint_entry:
target_version_id = checkpoint_entry.get("modelVersionId") or checkpoint_entry.get("id")
# Check for version ID in resources (which might be a string in gen_params)
if not target_version_id:
# Look in all sources for "Civitai resources"
resources_val = start_lookup(["Civitai resources", "civitai_resources", "resources"])
if resources_val:
target_version_id = RecipeEnricher._extract_version_id_from_resources({"Civitai resources": resources_val})
target_hash = start_lookup(["Model hash", "checkpoint_hash", "hashes"])
if not target_hash and checkpoint_entry:
target_hash = checkpoint_entry.get("hash") or checkpoint_entry.get("model_hash")
# Look for 'Model' which sometimes is the hash or name
model_val = start_lookup("Model")
# Look for Checkpoint name fallback
checkpoint_val = checkpoint_entry.get("name") if checkpoint_entry else None
if not checkpoint_val:
checkpoint_val = start_lookup(["Checkpoint", "checkpoint"])
checkpoint_updated = await RecipeEnricher._resolve_and_populate_checkpoint(
recipe, target_version_id, target_hash, model_val, checkpoint_val
)
if checkpoint_updated:
updated = True
else:
# Checkpoint exists, no need to sync to gen_params anymore.
pass
# base_model resolution moved to _resolve_and_populate_checkpoint to support strict formatting
return updated
@staticmethod
def _extract_version_id_from_resources(gen_params: Dict[str, Any]) -> Optional[Any]:
"""Try to find modelVersionId in Civitai resources parameter."""
civitai_resources_raw = gen_params.get("Civitai resources")
if not civitai_resources_raw:
return None
resources_list = None
if isinstance(civitai_resources_raw, str):
try:
resources_list = json.loads(civitai_resources_raw)
except Exception:
pass
elif isinstance(civitai_resources_raw, list):
resources_list = civitai_resources_raw
if isinstance(resources_list, list):
for res in resources_list:
if res.get("type") == "checkpoint":
return res.get("modelVersionId")
return None
@staticmethod
async def _resolve_and_populate_checkpoint(
recipe: Dict[str, Any],
target_version_id: Optional[Any],
target_hash: Optional[str],
model_val: Optional[str],
checkpoint_val: Optional[str]
) -> bool:
"""Find checkpoint metadata and populate it in the recipe."""
metadata_provider = await get_default_metadata_provider()
civitai_info = None
if target_version_id:
civitai_info = await metadata_provider.get_model_version_info(str(target_version_id))
elif target_hash:
civitai_info = await metadata_provider.get_model_by_hash(target_hash)
else:
# Look for 'Model' which sometimes is the hash or name
if model_val and len(model_val) == 10: # Likely a short hash
civitai_info = await metadata_provider.get_model_by_hash(model_val)
if civitai_info and not (isinstance(civitai_info, tuple) and civitai_info[1] == "Model not found"):
# If we already have a partial checkpoint, use it as base
existing_cp = recipe.get("checkpoint")
if existing_cp is None:
existing_cp = {}
checkpoint_data = await RecipeMetadataParser.populate_checkpoint_from_civitai(existing_cp, civitai_info)
# 1. First, resolve base_model using full data before we format it away
current_base_model = recipe.get("base_model")
resolved_base_model = checkpoint_data.get("baseModel")
if resolved_base_model:
# Update if empty OR if it matches our generic prefix but is less specific
is_generic = not current_base_model or current_base_model.lower() in ["flux", "sdxl", "sd15"]
if is_generic and resolved_base_model != current_base_model:
recipe["base_model"] = resolved_base_model
# 2. Format according to requirements: type, modelId, modelVersionId, modelName, modelVersionName
formatted_checkpoint = {
"type": "checkpoint",
"modelId": checkpoint_data.get("modelId"),
"modelVersionId": checkpoint_data.get("id") or checkpoint_data.get("modelVersionId"),
"modelName": checkpoint_data.get("name"), # In base.py, 'name' is populated from civitai_data['model']['name']
"modelVersionName": checkpoint_data.get("version") # In base.py, 'version' is populated from civitai_data['name']
}
# Remove None values
recipe["checkpoint"] = {k: v for k, v in formatted_checkpoint.items() if v is not None}
return True
else:
# Fallback to name extraction if we don't already have one
existing_cp = recipe.get("checkpoint")
if not existing_cp or not existing_cp.get("modelName"):
cp_name = checkpoint_val
if cp_name:
recipe["checkpoint"] = {
"type": "checkpoint",
"modelName": cp_name
}
return True
return False
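`_extract_version_id_from_resources` tolerates the `Civitai resources` value arriving either as a JSON string or as an already-parsed list. A standalone reproduction of that tolerance, with an illustrative payload:

```python
import json

def extract_checkpoint_version_id(raw):
    """Accept a JSON string or a list of resource dicts; return the checkpoint version id."""
    resources = None
    if isinstance(raw, str):
        try:
            resources = json.loads(raw)
        except json.JSONDecodeError:
            return None
    elif isinstance(raw, list):
        resources = raw
    if isinstance(resources, list):
        for res in resources:
            if res.get("type") == "checkpoint":
                return res.get("modelVersionId")
    return None

payload = '[{"type": "checkpoint", "modelVersionId": 2019115}]'
print(extract_checkpoint_version_id(payload))  # 2019115
```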

View File

@@ -1,98 +0,0 @@
from typing import Any, Dict, Optional
import logging
logger = logging.getLogger(__name__)
class GenParamsMerger:
"""Utility to merge generation parameters from multiple sources with priority."""
BLACKLISTED_KEYS = {
"id", "url", "userId", "username", "createdAt", "updatedAt", "hash", "meta",
"draft", "extra", "width", "height", "process", "quantity", "workflow",
"baseModel", "resources", "disablePoi", "aspectRatio", "Created Date",
"experimental", "civitaiResources", "civitai_resources", "Civitai resources",
"modelVersionId", "modelId", "hashes", "Model", "Model hash", "checkpoint_hash",
"checkpoint", "checksum", "model_checksum"
}
NORMALIZATION_MAPPING = {
# Civitai specific
"cfgScale": "cfg_scale",
"clipSkip": "clip_skip",
"negativePrompt": "negative_prompt",
# Case variations
"Sampler": "sampler",
"Steps": "steps",
"Seed": "seed",
"Size": "size",
"Prompt": "prompt",
"Negative prompt": "negative_prompt",
"Cfg scale": "cfg_scale",
"Clip skip": "clip_skip",
"Denoising strength": "denoising_strength",
}
@staticmethod
def merge(
request_params: Optional[Dict[str, Any]] = None,
civitai_meta: Optional[Dict[str, Any]] = None,
embedded_metadata: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
"""
Merge generation parameters from three sources.
Priority: request_params > civitai_meta > embedded_metadata
Args:
request_params: Params provided directly in the import request
civitai_meta: Params from Civitai Image API 'meta' field
embedded_metadata: Params extracted from image EXIF/embedded metadata
Returns:
Merged parameters dictionary
"""
result = {}
# 1. Start with embedded metadata (lowest priority)
if embedded_metadata:
# If it's a full recipe metadata, we use its gen_params
if "gen_params" in embedded_metadata and isinstance(embedded_metadata["gen_params"], dict):
GenParamsMerger._update_normalized(result, embedded_metadata["gen_params"])
else:
# Otherwise assume the dict itself contains gen_params
GenParamsMerger._update_normalized(result, embedded_metadata)
# 2. Layer Civitai meta (medium priority)
if civitai_meta:
GenParamsMerger._update_normalized(result, civitai_meta)
# 3. Layer request params (highest priority)
if request_params:
GenParamsMerger._update_normalized(result, request_params)
# Filter out blacklisted keys and also the original camelCase keys if they were normalized
final_result = {}
for k, v in result.items():
if k in GenParamsMerger.BLACKLISTED_KEYS:
continue
if k in GenParamsMerger.NORMALIZATION_MAPPING:
continue
final_result[k] = v
return final_result
@staticmethod
def _update_normalized(target: Dict[str, Any], source: Dict[str, Any]) -> None:
"""Update target dict with normalized keys from source."""
for k, v in source.items():
normalized_key = GenParamsMerger.NORMALIZATION_MAPPING.get(k, k)
target[normalized_key] = v
# Keep the original key as well; any key that was renamed via
# NORMALIZATION_MAPPING is dropped by the mapping filter at the end of
# merge(), so storing both forms here is safe. Overwrites are resolved
# by the priority order of the _update_normalized calls.
target[k] = v
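The layering order in `merge` — embedded first, then Civitai meta, then request params — means later sources win, and camelCase keys are folded into snake_case before the final filter. A simplified sketch (the real class also applies `BLACKLISTED_KEYS`, omitted here):

```python
NORMALIZATION = {"cfgScale": "cfg_scale", "Steps": "steps"}

def merge_params(request=None, civitai=None, embedded=None):
    result = {}
    for source in (embedded, civitai, request):  # lowest to highest priority
        if source:
            for k, v in source.items():
                result[NORMALIZATION.get(k, k)] = v
    # Drop any remaining un-normalized originals so only canonical keys survive.
    return {k: v for k, v in result.items() if k not in NORMALIZATION}

print(merge_params(
    request={"steps": 30},
    civitai={"cfgScale": 7.0, "Steps": 25},
    embedded={"cfg_scale": 5.0},
))
# {'cfg_scale': 7.0, 'steps': 30}
```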

View File

@@ -1,13 +1,11 @@
"""Parser for Automatic1111 metadata format."""
import re
import os
import json
import logging
from typing import Dict, Any
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
from ...services.metadata_service import get_default_metadata_provider
logger = logging.getLogger(__name__)
@@ -23,7 +21,6 @@ class AutomaticMetadataParser(RecipeMetadataParser):
CIVITAI_METADATA_REGEX = r', Civitai metadata:\s*(\{.*?\})'
EXTRANETS_REGEX = r'<(lora|hypernet):([^:]+):(-?[0-9.]+)>'
MODEL_HASH_PATTERN = r'Model hash: ([a-zA-Z0-9]+)'
MODEL_NAME_PATTERN = r'Model: ([^,]+)'
VAE_HASH_PATTERN = r'VAE hash: ([a-zA-Z0-9]+)'
def is_metadata_matching(self, user_comment: str) -> bool:
@@ -33,9 +30,6 @@ class AutomaticMetadataParser(RecipeMetadataParser):
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from Automatic1111 format"""
try:
# Get metadata provider instead of using civitai_client directly
metadata_provider = await get_default_metadata_provider()
# Split on Negative prompt if it exists
if "Negative prompt:" in user_comment:
parts = user_comment.split('Negative prompt:', 1)
@@ -117,12 +111,6 @@ class AutomaticMetadataParser(RecipeMetadataParser):
except json.JSONDecodeError:
logger.error("Error parsing hashes JSON")
# Pick up model hash from parsed hashes if available
if "hashes" in metadata and not metadata.get("model_hash"):
model_hash_from_hashes = metadata["hashes"].get("model")
if model_hash_from_hashes:
metadata["model_hash"] = model_hash_from_hashes
# Extract Lora hashes in alternative format
lora_hashes_match = re.search(self.LORA_HASHES_REGEX, params_section)
if not hashes_match and lora_hashes_match:
@@ -145,17 +133,6 @@ class AutomaticMetadataParser(RecipeMetadataParser):
params_section = params_section.replace(lora_hashes_match.group(0), '')
except Exception as e:
logger.error(f"Error parsing Lora hashes: {e}")
# Extract checkpoint model hash/name when provided outside Civitai resources
model_hash_match = re.search(self.MODEL_HASH_PATTERN, params_section)
if model_hash_match:
metadata["model_hash"] = model_hash_match.group(1).strip()
params_section = params_section.replace(model_hash_match.group(0), '')
model_name_match = re.search(self.MODEL_NAME_PATTERN, params_section)
if model_name_match:
metadata["model_name"] = model_name_match.group(1).strip()
params_section = params_section.replace(model_name_match.group(0), '')
# Extract basic parameters
param_pattern = r'([A-Za-z\s]+): ([^,]+)'
@@ -197,82 +174,20 @@ class AutomaticMetadataParser(RecipeMetadataParser):
metadata["gen_params"] = gen_params
# Extract LoRA and checkpoint information
# Extract LoRA information
loras = []
base_model_counts = {}
checkpoint = None
# First use Civitai resources if available (more reliable source)
if metadata.get("civitai_resources"):
for resource in metadata.get("civitai_resources", []):
# --- Added: Parse 'air' field if present ---
air = resource.get("air")
if air:
# Format: urn:air:sdxl:lora:civitai:1221007@1375651
# Or: urn:air:sdxl:checkpoint:civitai:623891@2019115
air_pattern = r"urn:air:[^:]+:(?P<type>[^:]+):civitai:(?P<modelId>\d+)@(?P<modelVersionId>\d+)"
air_match = re.match(air_pattern, air)
if air_match:
air_type = air_match.group("type")
air_modelId = int(air_match.group("modelId"))
air_modelVersionId = int(air_match.group("modelVersionId"))
# checkpoint/lycoris/lora/hypernet
resource["type"] = air_type
resource["modelId"] = air_modelId
resource["modelVersionId"] = air_modelVersionId
# --- End added ---
if resource.get("type") == "checkpoint" and resource.get("modelVersionId"):
version_id = resource.get("modelVersionId")
version_id_str = str(version_id)
checkpoint_entry = {
'id': version_id,
'modelId': resource.get("modelId", 0),
'name': resource.get("modelName", "Unknown Checkpoint"),
'version': resource.get("modelVersionName", resource.get("versionName", "")),
'type': resource.get("type", "checkpoint"),
'existsLocally': False,
'localPath': None,
'file_name': resource.get("modelName", ""),
'hash': resource.get("hash", "") or "",
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
if metadata_provider:
try:
civitai_info = await metadata_provider.get_model_version_info(version_id_str)
checkpoint_entry = await self.populate_checkpoint_from_civitai(
checkpoint_entry,
civitai_info
)
except Exception as e:
logger.error(
"Error fetching Civitai info for checkpoint version %s: %s",
version_id,
e,
)
# Prefer the first checkpoint found
if checkpoint_entry.get("baseModel"):
base_model_value = checkpoint_entry["baseModel"]
base_model_counts[base_model_value] = base_model_counts.get(base_model_value, 0) + 1
if checkpoint is None:
checkpoint = checkpoint_entry
continue
if resource.get("type") in ["lora", "lycoris", "hypernet"] and resource.get("modelVersionId"):
# Initialize lora entry
lora_entry = {
'id': resource.get("modelVersionId", 0),
'modelId': resource.get("modelId", 0),
'name': resource.get("modelName", "Unknown LoRA"),
'version': resource.get("modelVersionName", resource.get("versionName", "")),
'version': resource.get("modelVersionName", ""),
'type': resource.get("type", "lora"),
'weight': round(float(resource.get("weight", 1.0)), 2),
'existsLocally': False,
@@ -284,9 +199,9 @@ class AutomaticMetadataParser(RecipeMetadataParser):
}
# Get additional info from Civitai
if metadata_provider:
if civitai_client:
try:
civitai_info = await metadata_provider.get_model_version_info(resource.get("modelVersionId"))
civitai_info = await civitai_client.get_model_version_info(resource.get("modelVersionId"))
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
@@ -301,52 +216,6 @@ class AutomaticMetadataParser(RecipeMetadataParser):
loras.append(lora_entry)
# Fallback checkpoint parsing from generic "Model" and "Model hash" fields
if checkpoint is None:
model_hash = metadata.get("model_hash")
if not model_hash and metadata.get("hashes"):
model_hash = metadata["hashes"].get("model")
model_name = metadata.get("model_name")
file_name = ""
if model_name:
cleaned_name = re.split(r"[\\\\/]", model_name)[-1]
file_name = os.path.splitext(cleaned_name)[0]
if model_hash or model_name:
checkpoint_entry = {
'id': 0,
'modelId': 0,
'name': model_name or "Unknown Checkpoint",
'version': '',
'type': 'checkpoint',
'hash': model_hash or "",
'existsLocally': False,
'localPath': None,
'file_name': file_name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
if metadata_provider and model_hash:
try:
civitai_info = await metadata_provider.get_model_by_hash(model_hash)
checkpoint_entry = await self.populate_checkpoint_from_civitai(
checkpoint_entry,
civitai_info
)
except Exception as e:
logger.error(f"Error fetching Civitai info for checkpoint hash {model_hash}: {e}")
if checkpoint_entry.get("baseModel"):
base_model_value = checkpoint_entry["baseModel"]
base_model_counts[base_model_value] = base_model_counts.get(base_model_value, 0) + 1
checkpoint = checkpoint_entry
# If no LoRAs from Civitai resources or to supplement, extract from metadata["hashes"]
if not loras or len(loras) == 0:
# Extract lora weights from extranet tags in prompt (for later use)
@@ -385,11 +254,11 @@ class AutomaticMetadataParser(RecipeMetadataParser):
}
# Try to get info from Civitai
if metadata_provider:
if civitai_client:
try:
if lora_hash:
# If we have hash, use it for lookup
civitai_info = await metadata_provider.get_model_by_hash(lora_hash)
civitai_info = await civitai_client.get_model_by_hash(lora_hash)
else:
civitai_info = None
@@ -410,9 +279,7 @@ class AutomaticMetadataParser(RecipeMetadataParser):
# Try to get base model from resources or make educated guess
base_model = None
if checkpoint and checkpoint.get("baseModel"):
base_model = checkpoint.get("baseModel")
elif base_model_counts:
if base_model_counts:
# Use the most common base model from the loras
base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
@@ -429,10 +296,6 @@ class AutomaticMetadataParser(RecipeMetadataParser):
'gen_params': filtered_gen_params,
'from_automatic_metadata': True
}
if checkpoint:
result['checkpoint'] = checkpoint
result['model'] = checkpoint
return result
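The AIR identifier parsing added in this hunk can be exercised on its own; the pattern and the sample URN are copied from the code and comments above:

```python
import re

AIR_PATTERN = r"urn:air:[^:]+:(?P<type>[^:]+):civitai:(?P<modelId>\d+)@(?P<modelVersionId>\d+)"

match = re.match(AIR_PATTERN, "urn:air:sdxl:lora:civitai:1221007@1375651")
print(match.group("type"), int(match.group("modelId")), int(match.group("modelVersionId")))
# lora 1221007 1375651
```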

View File

@@ -5,7 +5,6 @@ import logging
from typing import Dict, Any, Union
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
from ...services.metadata_service import get_default_metadata_provider
logger = logging.getLogger(__name__)
@@ -23,48 +22,13 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
"""
if not metadata or not isinstance(metadata, dict):
return False
def has_markers(payload: Dict[str, Any]) -> bool:
# Check for common CivitAI image metadata fields
civitai_image_fields = (
"resources",
"civitaiResources",
"additionalResources",
"hashes",
"prompt",
"negativePrompt",
"steps",
"sampler",
"cfgScale",
"seed",
"width",
"height",
"Model",
"Model hash"
)
return any(key in payload for key in civitai_image_fields)
# Check the main metadata object
if has_markers(metadata):
return True
# Check for LoRA hash patterns
hashes = metadata.get("hashes")
if isinstance(hashes, dict) and any(str(key).lower().startswith("lora:") for key in hashes):
return True
# Check nested meta object (common in CivitAI image responses)
nested_meta = metadata.get("meta")
if isinstance(nested_meta, dict):
if has_markers(nested_meta):
return True
# Also check for LoRA hash patterns in nested meta
hashes = nested_meta.get("hashes")
if isinstance(hashes, dict) and any(str(key).lower().startswith("lora:") for key in hashes):
return True
return False
# Check for key markers specific to Civitai image metadata
return any([
"resources" in metadata,
"civitaiResources" in metadata,
"additionalResources" in metadata
])
async def parse_metadata(self, metadata, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from Civitai image format
@@ -72,40 +36,16 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
Args:
metadata: The metadata from the image (dict)
recipe_scanner: Optional recipe scanner service
civitai_client: Optional Civitai API client (deprecated, use metadata_provider instead)
civitai_client: Optional Civitai API client
Returns:
Dict containing parsed recipe data
"""
try:
# Get metadata provider instead of using civitai_client directly
metadata_provider = await get_default_metadata_provider()
# Civitai image responses may wrap the actual metadata inside a "meta" key
if (
isinstance(metadata, dict)
and "meta" in metadata
and isinstance(metadata["meta"], dict)
):
inner_meta = metadata["meta"]
if any(
key in inner_meta
for key in (
"resources",
"civitaiResources",
"additionalResources",
"hashes",
"prompt",
"negativePrompt",
)
):
metadata = inner_meta
# Initialize result structure
result = {
'base_model': None,
'loras': [],
'model': None,
'gen_params': {},
'from_civitai_image': True
}
@@ -113,15 +53,6 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# Track already added LoRAs to prevent duplicates
added_loras = {} # key: model_version_id or hash, value: index in result["loras"]
# Extract hash information from hashes field for LoRA matching
lora_hashes = {}
if "hashes" in metadata and isinstance(metadata["hashes"], dict):
for key, hash_value in metadata["hashes"].items():
key_str = str(key)
if key_str.lower().startswith("lora:"):
lora_name = key_str.split(":", 1)[1]
lora_hashes[lora_name] = hash_value
# Extract prompt and negative prompt
if "prompt" in metadata:
result["gen_params"]["prompt"] = metadata["prompt"]
@@ -146,9 +77,9 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# Extract base model information - directly if available
if "baseModel" in metadata:
result["base_model"] = metadata["baseModel"]
elif "Model hash" in metadata and metadata_provider:
elif "Model hash" in metadata and civitai_client:
model_hash = metadata["Model hash"]
model_info, error = await metadata_provider.get_model_by_hash(model_hash)
model_info = await civitai_client.get_model_by_hash(model_hash)
if model_info:
result["base_model"] = model_info.get("baseModel", "")
elif "Model" in metadata and isinstance(metadata.get("resources"), list):
@@ -156,8 +87,8 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
for resource in metadata.get("resources", []):
if resource.get("type") == "model" and resource.get("name") == metadata.get("Model"):
# This is likely the checkpoint model
if metadata_provider and resource.get("hash"):
model_info, error = await metadata_provider.get_model_by_hash(resource.get("hash"))
if civitai_client and resource.get("hash"):
model_info = await civitai_client.get_model_by_hash(resource.get("hash"))
if model_info:
result["base_model"] = model_info.get("baseModel", "")
@@ -170,15 +101,6 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
if resource.get("type", "lora") == "lora":
lora_hash = resource.get("hash", "")
# Try to get hash from the hashes field if not present in resource
if not lora_hash and resource.get("name"):
lora_hash = lora_hashes.get(resource["name"], "")
# Skip LoRAs without proper identification (hash or modelVersionId)
if not lora_hash and not resource.get("modelVersionId"):
logger.debug(f"Skipping LoRA resource '{resource.get('name', 'Unknown')}' - no hash or modelVersionId")
continue
# Skip if we've already added this LoRA by hash
if lora_hash and lora_hash in added_loras:
continue
@@ -199,9 +121,9 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
}
# Try to get info from Civitai if hash is available
if lora_entry['hash'] and metadata_provider:
if lora_entry['hash'] and civitai_client:
try:
civitai_info = await metadata_provider.get_model_by_hash(lora_hash)
civitai_info = await civitai_client.get_model_by_hash(lora_hash)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
@@ -231,48 +153,17 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
# Process civitaiResources array
if "civitaiResources" in metadata and isinstance(metadata["civitaiResources"], list):
for resource in metadata["civitaiResources"]:
# Get resource type and identifier
resource_type = str(resource.get("type") or "").lower()
version_id = str(resource.get("modelVersionId", ""))
if resource_type == "checkpoint":
checkpoint_entry = {
'id': resource.get("modelVersionId", 0),
'modelId': resource.get("modelId", 0),
'name': resource.get("modelName", "Unknown Checkpoint"),
'version': resource.get("modelVersionName", ""),
'type': resource.get("type", "checkpoint"),
'existsLocally': False,
'localPath': None,
'file_name': resource.get("modelName", ""),
'hash': resource.get("hash", "") or "",
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
if version_id and metadata_provider:
try:
civitai_info = await metadata_provider.get_model_version_info(version_id)
checkpoint_entry = await self.populate_checkpoint_from_civitai(
checkpoint_entry,
civitai_info
)
except Exception as e:
logger.error(f"Error fetching Civitai info for checkpoint version {version_id}: {e}")
if result["model"] is None:
result["model"] = checkpoint_entry
# Skip resources that aren't LoRAs or LyCORIS
if resource.get("type") not in ["lora", "lycoris"] and "type" not in resource:
continue
# Get unique identifier for deduplication
version_id = str(resource.get("modelVersionId", ""))
# Skip if we've already added this LoRA
if version_id and version_id in added_loras:
continue
# Initialize lora entry
lora_entry = {
'id': resource.get("modelVersionId", 0),
@@ -288,31 +179,35 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
'downloadUrl': '',
'isDeleted': False
}
# Try to get info from Civitai if modelVersionId is available
if version_id and metadata_provider:
if version_id and civitai_client:
try:
# Use get_model_version_info instead of get_model_version
civitai_info = await metadata_provider.get_model_version_info(version_id)
civitai_info, error = await civitai_client.get_model_version_info(version_id)
if error:
logger.warning(f"Error getting model version info: {error}")
continue
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for model version {version_id}: {e}")
# Track this LoRA in our deduplication dict
if version_id:
added_loras[version_id] = len(result["loras"])
result["loras"].append(lora_entry)
# Process additionalResources array
@@ -351,143 +246,34 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
'isDeleted': False
}
# If we have a version ID and metadata provider, try to get more info
if version_id and metadata_provider:
# If we have a version ID and civitai client, try to get more info
if version_id and civitai_client:
try:
# Use get_model_version_info with the version ID
civitai_info = await metadata_provider.get_model_version_info(version_id)
civitai_info, error = await civitai_client.get_model_version_info(version_id)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts
)
if populated_entry is None:
continue # Skip invalid LoRA types
if error:
logger.warning(f"Error getting model version info: {error}")
else:
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts
)
lora_entry = populated_entry
# Track this LoRA for deduplication
if version_id:
added_loras[version_id] = len(result["loras"])
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
# Track this LoRA for deduplication
if version_id:
added_loras[version_id] = len(result["loras"])
except Exception as e:
logger.error(f"Error fetching Civitai info for model ID {version_id}: {e}")
result["loras"].append(lora_entry)
# If we found LoRA hashes in the metadata but haven't already
# populated entries for them, fall back to creating LoRAs from
# the hashes section. Some Civitai image responses only include
# LoRA information here without explicit resources entries.
for lora_name, lora_hash in lora_hashes.items():
if not lora_hash:
continue
# Skip LoRAs we've already added via resources or other fields
if lora_hash in added_loras:
continue
lora_entry = {
'name': lora_name,
'type': "lora",
'weight': 1.0,
'hash': lora_hash,
'existsLocally': False,
'localPath': None,
'file_name': lora_name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
if metadata_provider:
try:
civitai_info = await metadata_provider.get_model_by_hash(lora_hash)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
lora_hash
)
if populated_entry is None:
continue
lora_entry = populated_entry
if 'id' in lora_entry and lora_entry['id']:
added_loras[str(lora_entry['id'])] = len(result["loras"])
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {lora_hash}: {e}")
added_loras[lora_hash] = len(result["loras"])
result["loras"].append(lora_entry)
# Check for LoRA info in the format "Lora_0 Model hash", "Lora_0 Model name", etc.
lora_index = 0
while f"Lora_{lora_index} Model hash" in metadata and f"Lora_{lora_index} Model name" in metadata:
lora_hash = metadata[f"Lora_{lora_index} Model hash"]
lora_name = metadata[f"Lora_{lora_index} Model name"]
lora_strength_model = float(metadata.get(f"Lora_{lora_index} Strength model", 1.0))
# Skip if we've already added this LoRA by hash
if lora_hash and lora_hash in added_loras:
lora_index += 1
continue
lora_entry = {
'name': lora_name,
'type': "lora",
'weight': lora_strength_model,
'hash': lora_hash,
'existsLocally': False,
'localPath': None,
'file_name': lora_name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Try to get info from Civitai if hash is available
if lora_entry['hash'] and metadata_provider:
try:
civitai_info = await metadata_provider.get_model_by_hash(lora_hash)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
lora_hash
)
if populated_entry is None:
lora_index += 1
continue # Skip invalid LoRA types
lora_entry = populated_entry
# If we have a version ID from Civitai, track it for deduplication
if 'id' in lora_entry and lora_entry['id']:
added_loras[str(lora_entry['id'])] = len(result["loras"])
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {lora_entry['hash']}: {e}")
# Track by hash if we have it
if lora_hash:
added_loras[lora_hash] = len(result["loras"])
result["loras"].append(lora_entry)
lora_index += 1
# If base model wasn't found earlier, use the most common one from LoRAs
if not result["base_model"] and base_model_counts:

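The `lora:`-prefixed keys handled in this parser's `hashes` section are straightforward to extract. A hedged sketch with an invented payload:

```python
def collect_lora_hashes(metadata):
    """Map LoRA name -> hash from hashes keys of the form 'lora:<name>'."""
    hashes = metadata.get("hashes")
    if not isinstance(hashes, dict):
        return {}
    return {
        str(key).split(":", 1)[1]: value
        for key, value in hashes.items()
        if str(key).lower().startswith("lora:")
    }

meta = {"hashes": {"model": "a1b2c3d4e5", "lora:detail_tweaker": "f6e5d4c3b2"}}
print(collect_lora_hashes(meta))  # {'detail_tweaker': 'f6e5d4c3b2'}
```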
View File

@@ -6,7 +6,6 @@ import logging
from typing import Dict, Any
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
from ...services.metadata_service import get_default_metadata_provider
logger = logging.getLogger(__name__)
@@ -27,15 +26,15 @@ class ComfyMetadataParser(RecipeMetadataParser):
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from Civitai ComfyUI metadata format"""
try:
# Get metadata provider instead of using civitai_client directly
metadata_provider = await get_default_metadata_provider()
data = json.loads(user_comment)
loras = []
# Find all LoraLoader nodes
lora_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and v.get('class_type') == 'LoraLoader'}
if not lora_nodes:
return {"error": "No LoRA information found in this ComfyUI workflow", "loras": []}
# Process each LoraLoader node
for node_id, node in lora_nodes.items():
if 'inputs' not in node or 'lora_name' not in node['inputs']:
@@ -74,10 +73,10 @@ class ComfyMetadataParser(RecipeMetadataParser):
'isDeleted': False
}
# Get additional info from Civitai if metadata provider is available
if metadata_provider:
try:
civitai_info_tuple = await metadata_provider.get_model_version_info(model_version_id)
# Populate lora entry with Civitai info
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
@@ -117,9 +116,9 @@ class ComfyMetadataParser(RecipeMetadataParser):
}
# Get additional checkpoint info from Civitai
if metadata_provider:
try:
civitai_info_tuple = await metadata_provider.get_model_version_info(checkpoint_version_id)
civitai_info, _ = civitai_info_tuple if isinstance(civitai_info_tuple, tuple) else (civitai_info_tuple, None)
# Populate checkpoint with Civitai info
checkpoint = await self.populate_checkpoint_from_civitai(checkpoint, civitai_info)
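
To make the node scan above concrete, here is a minimal sketch of the kind of workflow JSON `parse_metadata` expects; the node ids and file names are invented:

```python
import json

user_comment = json.dumps({
    "3": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "myLora.safetensors", "strength_model": 0.8}},
    "4": {"class_type": "KSampler", "inputs": {"steps": 30}},
})

data = json.loads(user_comment)
# Same dict comprehension used by the parser to find LoraLoader nodes
lora_nodes = {k: v for k, v in data.items()
              if isinstance(v, dict) and v.get('class_type') == 'LoraLoader'}
print(list(lora_nodes))  # ['3']
```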

View File

@@ -1,12 +1,10 @@
"""Parser for meta format (Lora_N Model hash) metadata."""
import os
import re
import logging
from typing import Dict, Any
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
from ...services.metadata_service import get_default_metadata_provider
logger = logging.getLogger(__name__)
@@ -20,11 +18,8 @@ class MetaFormatParser(RecipeMetadataParser):
return re.search(self.METADATA_MARKER, user_comment, re.IGNORECASE | re.DOTALL) is not None
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from images with meta format metadata (Lora_N Model hash format)"""
"""Parse metadata from images with meta format metadata"""
try:
# Get metadata provider instead of using civitai_client directly
metadata_provider = await get_default_metadata_provider()
# Extract prompt and negative prompt
parts = user_comment.split('Negative prompt:', 1)
prompt = parts[0].strip()
@@ -127,9 +122,9 @@ class MetaFormatParser(RecipeMetadataParser):
}
# Get info from Civitai by hash if available
if metadata_provider and hash_value:
try:
civitai_info = await metadata_provider.get_model_by_hash(hash_value)
# Populate lora entry with Civitai info
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
@@ -146,53 +141,14 @@ class MetaFormatParser(RecipeMetadataParser):
loras.append(lora_entry)
# Extract checkpoint information from generic Model/Model hash fields
checkpoint = None
model_hash = metadata.get("model_hash")
model_name = metadata.get("model")
if model_hash or model_name:
cleaned_name = None
if model_name:
cleaned_name = re.split(r"[\\/]", model_name)[-1]
cleaned_name = os.path.splitext(cleaned_name)[0]
checkpoint_entry = {
'id': 0,
'modelId': 0,
'name': model_name or "Unknown Checkpoint",
'version': '',
'type': 'checkpoint',
'hash': model_hash or "",
'existsLocally': False,
'localPath': None,
'file_name': cleaned_name or (model_name or ""),
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
if metadata_provider and model_hash:
try:
civitai_info = await metadata_provider.get_model_by_hash(model_hash)
checkpoint_entry = await self.populate_checkpoint_from_civitai(
checkpoint_entry,
civitai_info
)
except Exception as e:
logger.error(f"Error fetching Civitai info for checkpoint hash {model_hash}: {e}")
if checkpoint_entry.get("baseModel"):
base_model_value = checkpoint_entry["baseModel"]
base_model_counts[base_model_value] = base_model_counts.get(base_model_value, 0) + 1
checkpoint = checkpoint_entry
# Extract model information
model = None
if 'model' in metadata:
model = metadata['model']
# Set base_model to the most common one from civitai_info or checkpoint
base_model = checkpoint["baseModel"] if checkpoint and checkpoint.get("baseModel") else None
if not base_model and base_model_counts:
base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
# Extract generation parameters for recipe metadata
@@ -210,8 +166,7 @@ class MetaFormatParser(RecipeMetadataParser):
'loras': loras,
'gen_params': gen_params,
'raw_metadata': metadata,
'from_meta_format': True,
**({'checkpoint': checkpoint, 'model': checkpoint} if checkpoint else {})
}
except Exception as e:
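
The base-model fallback above is a simple majority vote over the counts accumulated while populating entries; in isolation, with sample counts:

```python
# Majority vote over accumulated base-model counts (sample values)
base_model_counts = {"SDXL 1.0": 2, "SD 1.5": 1}
base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
print(base_model)  # SDXL 1.0
```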

View File

@@ -3,11 +3,10 @@
import re
import json
import logging
from typing import Dict, Any, Optional
from ...config import config
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
from ...services.metadata_service import get_default_metadata_provider
logger = logging.getLogger(__name__)
@@ -16,28 +15,6 @@ class RecipeFormatParser(RecipeMetadataParser):
# Regular expression pattern for extracting recipe metadata
METADATA_MARKER = r'Recipe metadata: (\{.*\})'
async def _get_lora_from_version_index(self, recipe_scanner, model_version_id: Any) -> Optional[Dict[str, Any]]:
"""Return a cached LoRA entry by modelVersionId if available."""
if not recipe_scanner or not getattr(recipe_scanner, "_lora_scanner", None):
return None
try:
normalized_id = int(model_version_id)
except (TypeError, ValueError):
return None
try:
cache = await recipe_scanner._lora_scanner.get_cached_data()
except Exception as exc: # pragma: no cover - defensive logging
logger.debug("Unable to load lora cache for version lookup: %s", exc)
return None
if not cache or not getattr(cache, "version_index", None):
return None
return cache.version_index.get(normalized_id)
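
One detail worth noting in the helper above is the defensive id normalization: a `modelVersionId` may arrive as a string, an int, or garbage, so it is coerced before the index lookup. The coercion in isolation:

```python
# The id normalization the version-index lookup relies on
for raw_id in ("123456", 123456, "not-a-number", None):
    try:
        normalized = int(raw_id)
    except (TypeError, ValueError):
        normalized = None  # lookup is skipped for non-numeric ids
    print(raw_id, "->", normalized)
```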
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the metadata format"""
@@ -46,9 +23,6 @@ class RecipeFormatParser(RecipeMetadataParser):
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from images with dedicated recipe metadata format"""
try:
# Get metadata provider instead of using civitai_client directly
metadata_provider = await get_default_metadata_provider()
# Extract recipe metadata from user comment
try:
# Look for recipe metadata section
@@ -75,110 +49,49 @@ class RecipeFormatParser(RecipeMetadataParser):
'type': 'lora',
'weight': lora.get('strength', 1.0),
'file_name': lora.get('file_name', ''),
'hash': lora.get('hash', ''),
'existsLocally': False,
'inLibrary': False,
'localPath': None,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'size': 0
}
# Check if this LoRA exists locally by SHA256 hash
if recipe_scanner:
lora_scanner = recipe_scanner._lora_scanner
if lora.get('hash'):
exists_locally = lora_scanner.has_hash(lora['hash'])
if exists_locally:
lora_cache = await lora_scanner.get_cached_data()
lora_item = next((item for item in lora_cache.raw_data if item['sha256'].lower() == lora['hash'].lower()), None)
if lora_item:
lora_entry['existsLocally'] = True
lora_entry['inLibrary'] = True
lora_entry['localPath'] = lora_item['file_path']
lora_entry['file_name'] = lora_item['file_name']
lora_entry['size'] = lora_item['size']
lora_entry['thumbnailUrl'] = config.get_preview_static_url(lora_item['preview_url'])
else:
lora_entry['existsLocally'] = False
lora_entry['inLibrary'] = False
lora_entry['localPath'] = None
# If we still don't have a local match, try matching by modelVersionId
if not lora_entry['existsLocally'] and lora.get('modelVersionId') is not None:
cached_lora = await self._get_lora_from_version_index(recipe_scanner, lora.get('modelVersionId'))
if cached_lora:
lora_entry['existsLocally'] = True
lora_entry['inLibrary'] = True
lora_entry['localPath'] = cached_lora.get('file_path')
lora_entry['file_name'] = cached_lora.get('file_name') or lora_entry['file_name']
lora_entry['size'] = cached_lora.get('size', lora_entry['size'])
if cached_lora.get('sha256'):
lora_entry['hash'] = cached_lora['sha256']
preview_url = cached_lora.get('preview_url')
if preview_url:
lora_entry['thumbnailUrl'] = config.get_preview_static_url(preview_url)
# Try to get additional info from Civitai if we have a model version ID and still missing locally
if not lora_entry['existsLocally'] and lora.get('modelVersionId') and metadata_provider:
try:
civitai_info_tuple = await metadata_provider.get_model_version_info(lora['modelVersionId'])
# Populate lora entry with Civitai info
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info_tuple,
recipe_scanner,
None, # No need to track base model counts
lora_entry.get('hash', '')
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA: {e}")
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'
loras.append(lora_entry)
logger.info(f"Found {len(loras)} loras in recipe metadata")
# Process checkpoint information if present
checkpoint = None
checkpoint_data = recipe_metadata.get('checkpoint') or {}
if isinstance(checkpoint_data, dict) and checkpoint_data:
version_id = checkpoint_data.get('modelVersionId') or checkpoint_data.get('id')
checkpoint_entry = {
'id': version_id or 0,
'modelId': checkpoint_data.get('modelId', 0),
'name': checkpoint_data.get('name', 'Unknown Checkpoint'),
'version': checkpoint_data.get('version', ''),
'type': checkpoint_data.get('type', 'checkpoint'),
'hash': checkpoint_data.get('hash', ''),
'existsLocally': False,
'localPath': None,
'file_name': checkpoint_data.get('file_name', ''),
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
if metadata_provider:
try:
civitai_info = None
if version_id:
civitai_info = await metadata_provider.get_model_version_info(str(version_id))
elif checkpoint_entry.get('hash'):
civitai_info = await metadata_provider.get_model_by_hash(checkpoint_entry['hash'])
if civitai_info:
checkpoint_entry = await self.populate_checkpoint_from_civitai(checkpoint_entry, civitai_info)
except Exception as e:
logger.error(f"Error fetching Civitai info for checkpoint in recipe metadata: {e}")
checkpoint = checkpoint_entry
# Filter gen_params to only include recognized keys
filtered_gen_params = {}
@@ -188,13 +101,12 @@ class RecipeFormatParser(RecipeMetadataParser):
filtered_gen_params[key] = value
return {
'base_model': checkpoint['baseModel'] if checkpoint and checkpoint.get('baseModel') else recipe_metadata.get('base_model', ''),
'loras': loras,
'gen_params': filtered_gen_params,
'tags': recipe_metadata.get('tags', []),
'title': recipe_metadata.get('title', ''),
'from_recipe_metadata': True,
**({'checkpoint': checkpoint, 'model': checkpoint} if checkpoint else {})
}
except Exception as e:
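
The `METADATA_MARKER` pattern drives both `is_metadata_matching` and the extraction step near the top of `parse_metadata`; a minimal round trip with an invented comment string:

```python
import json
import re

METADATA_MARKER = r'Recipe metadata: (\{.*\})'
comment = 'Steps: 30, Sampler: Euler ... Recipe metadata: {"title": "demo", "loras": []}'

match = re.search(METADATA_MARKER, comment, re.IGNORECASE | re.DOTALL)
if match:
    recipe_metadata = json.loads(match.group(1))
    print(recipe_metadata["title"])  # demo
```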

View File

@@ -1,300 +1,619 @@
from __future__ import annotations
import logging
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, Callable, Dict, Mapping
import asyncio
import json
import logging
from aiohttp import web
from typing import Dict
import jinja2
from aiohttp import web
from ..utils.routes_common import ModelRouteUtils
from ..services.websocket_manager import ws_manager
from ..services.settings_manager import settings
from ..config import config
from ..services.download_coordinator import DownloadCoordinator
from ..services.downloader import get_downloader
from ..services.metadata_service import get_default_metadata_provider, get_metadata_provider
from ..services.metadata_sync_service import MetadataSyncService
from ..services.model_file_service import ModelFileService, ModelMoveService
from ..services.model_lifecycle_service import ModelLifecycleService
from ..services.preview_asset_service import PreviewAssetService
from ..services.server_i18n import server_i18n as default_server_i18n
from ..services.service_registry import ServiceRegistry
from ..services.settings_manager import get_settings_manager
from ..services.tag_update_service import TagUpdateService
from ..services.websocket_manager import ws_manager as default_ws_manager
from ..services.use_cases import (
AutoOrganizeUseCase,
BulkMetadataRefreshUseCase,
DownloadModelUseCase,
)
from ..services.websocket_progress_callback import (
WebSocketBroadcastCallback,
WebSocketProgressCallback,
)
from ..utils.exif_utils import ExifUtils
from ..utils.metadata_manager import MetadataManager
from .model_route_registrar import COMMON_ROUTE_DEFINITIONS, ModelRouteRegistrar
from .handlers.model_handlers import (
ModelAutoOrganizeHandler,
ModelCivitaiHandler,
ModelDownloadHandler,
ModelHandlerSet,
ModelListingHandler,
ModelManagementHandler,
ModelMoveHandler,
ModelPageView,
ModelQueryHandler,
ModelUpdateHandler,
)
if TYPE_CHECKING:
from ..services.model_update_service import ModelUpdateService
logger = logging.getLogger(__name__)
class BaseModelRoutes(ABC):
"""Base route controller for all model types."""
template_name: str | None = None
def __init__(
self,
service=None,
*,
settings_service=None,
ws_manager=default_ws_manager,
server_i18n=default_server_i18n,
metadata_provider_factory=get_default_metadata_provider,
) -> None:
self.service = None
self.model_type = ""
self._settings = settings_service or get_settings_manager()
self._ws_manager = ws_manager
self._server_i18n = server_i18n
self._metadata_provider_factory = metadata_provider_factory
self.template_env = jinja2.Environment(
loader=jinja2.FileSystemLoader(config.templates_path),
autoescape=True,
)
self.model_file_service: ModelFileService | None = None
self.model_move_service: ModelMoveService | None = None
self.model_lifecycle_service: ModelLifecycleService | None = None
self.websocket_progress_callback = WebSocketProgressCallback()
self.metadata_progress_callback = WebSocketBroadcastCallback()
self._handler_set: ModelHandlerSet | None = None
self._handler_mapping: Dict[str, Callable[[web.Request], web.StreamResponse]] | None = None
self._preview_service = PreviewAssetService(
metadata_manager=MetadataManager,
downloader_factory=get_downloader,
exif_utils=ExifUtils,
)
self._metadata_sync_service = MetadataSyncService(
metadata_manager=MetadataManager,
preview_service=self._preview_service,
settings=self._settings,
default_metadata_provider_factory=metadata_provider_factory,
metadata_provider_selector=get_metadata_provider,
)
self._tag_update_service = TagUpdateService(metadata_manager=MetadataManager)
self._download_coordinator = DownloadCoordinator(
ws_manager=self._ws_manager,
download_manager_factory=ServiceRegistry.get_download_manager,
)
self._model_update_service: ModelUpdateService | None = None
if service is not None:
self.attach_service(service)
def set_model_update_service(self, service: "ModelUpdateService") -> None:
"""Attach the model update tracking service."""
self._model_update_service = service
self._handler_set = None
self._handler_mapping = None
def attach_service(self, service) -> None:
"""Attach a model service and rebuild handler dependencies."""
"""Base route controller for all model types"""
def __init__(self, service):
"""Initialize the route controller
Args:
service: Model service instance (LoraService, CheckpointService, etc.)
"""
self.service = service
self.model_type = service.model_type
self.model_file_service = ModelFileService(service.scanner, service.model_type)
self.model_move_service = ModelMoveService(service.scanner, service.model_type)
self.model_lifecycle_service = ModelLifecycleService(
scanner=service.scanner,
metadata_manager=MetadataManager,
metadata_loader=self._metadata_sync_service.load_local_metadata,
recipe_scanner_factory=ServiceRegistry.get_recipe_scanner,
update_service=self._model_update_service,
self.template_env = jinja2.Environment(
loader=jinja2.FileSystemLoader(config.templates_path),
autoescape=True
)
self._handler_set = None
self._handler_mapping = None
def _ensure_handler_mapping(self) -> Mapping[str, Callable[[web.Request], web.StreamResponse]]:
if self._handler_mapping is None:
handler_set = self._create_handler_set()
self._handler_set = handler_set
self._handler_mapping = handler_set.to_route_mapping()
return self._handler_mapping
def _create_handler_set(self) -> ModelHandlerSet:
service = self._ensure_service()
update_service = self._ensure_model_update_service()
page_view = ModelPageView(
template_env=self.template_env,
template_name=self.template_name or "",
service=service,
settings_service=self._settings,
server_i18n=self._server_i18n,
logger=logger,
)
listing = ModelListingHandler(
service=service,
parse_specific_params=self._parse_specific_params,
logger=logger,
)
management = ModelManagementHandler(
service=service,
logger=logger,
metadata_sync=self._metadata_sync_service,
preview_service=self._preview_service,
tag_update_service=self._tag_update_service,
lifecycle_service=self._ensure_lifecycle_service(),
)
query = ModelQueryHandler(service=service, logger=logger)
download_use_case = DownloadModelUseCase(download_coordinator=self._download_coordinator)
download = ModelDownloadHandler(
ws_manager=self._ws_manager,
logger=logger,
download_use_case=download_use_case,
download_coordinator=self._download_coordinator,
)
metadata_refresh_use_case = BulkMetadataRefreshUseCase(
service=service,
metadata_sync=self._metadata_sync_service,
settings_service=self._settings,
logger=logger,
)
civitai = ModelCivitaiHandler(
service=service,
settings_service=self._settings,
ws_manager=self._ws_manager,
logger=logger,
metadata_provider_factory=self._metadata_provider_factory,
validate_model_type=self._validate_civitai_model_type,
expected_model_types=self._get_expected_model_types,
find_model_file=self._find_model_file,
metadata_sync=self._metadata_sync_service,
metadata_refresh_use_case=metadata_refresh_use_case,
metadata_progress_callback=self.metadata_progress_callback,
)
move = ModelMoveHandler(move_service=self._ensure_move_service(), logger=logger)
auto_organize_use_case = AutoOrganizeUseCase(
file_service=self._ensure_file_service(),
lock_provider=self._ws_manager,
)
auto_organize = ModelAutoOrganizeHandler(
use_case=auto_organize_use_case,
progress_callback=self.websocket_progress_callback,
ws_manager=self._ws_manager,
logger=logger,
)
updates = ModelUpdateHandler(
service=service,
update_service=update_service,
metadata_provider_selector=get_metadata_provider,
logger=logger,
)
return ModelHandlerSet(
page_view=page_view,
listing=listing,
management=management,
query=query,
download=download,
civitai=civitai,
move=move,
auto_organize=auto_organize,
updates=updates,
)
@property
def route_handlers(self) -> Mapping[str, Callable[[web.Request], web.StreamResponse]]:
return self._ensure_handler_mapping()
def setup_routes(self, app: web.Application, prefix: str) -> None:
registrar = ModelRouteRegistrar(app)
handler_lookup = {
definition.handler_name: self._make_handler_proxy(definition.handler_name)
for definition in COMMON_ROUTE_DEFINITIONS
}
registrar.register_common_routes(prefix, handler_lookup)
self.setup_specific_routes(registrar, prefix)
def setup_routes(self, app: web.Application, prefix: str):
"""Setup common routes for the model type
Args:
app: aiohttp application
prefix: URL prefix (e.g., 'loras', 'checkpoints')
"""
# Common model management routes
app.router.add_get(f'/api/{prefix}', self.get_models)
app.router.add_post(f'/api/{prefix}/delete', self.delete_model)
app.router.add_post(f'/api/{prefix}/exclude', self.exclude_model)
app.router.add_post(f'/api/{prefix}/fetch-civitai', self.fetch_civitai)
app.router.add_post(f'/api/{prefix}/relink-civitai', self.relink_civitai)
app.router.add_post(f'/api/{prefix}/replace-preview', self.replace_preview)
app.router.add_post(f'/api/{prefix}/save-metadata', self.save_metadata)
app.router.add_post(f'/api/{prefix}/rename', self.rename_model)
app.router.add_post(f'/api/{prefix}/bulk-delete', self.bulk_delete_models)
app.router.add_post(f'/api/{prefix}/verify-duplicates', self.verify_duplicates)
# Common query routes
app.router.add_get(f'/api/{prefix}/top-tags', self.get_top_tags)
app.router.add_get(f'/api/{prefix}/base-models', self.get_base_models)
app.router.add_get(f'/api/{prefix}/scan', self.scan_models)
app.router.add_get(f'/api/{prefix}/roots', self.get_model_roots)
app.router.add_get(f'/api/{prefix}/folders', self.get_folders)
app.router.add_get(f'/api/{prefix}/find-duplicates', self.find_duplicate_models)
app.router.add_get(f'/api/{prefix}/find-filename-conflicts', self.find_filename_conflicts)
# Common Download management
app.router.add_post(f'/api/download-model', self.download_model)
app.router.add_get(f'/api/download-model-get', self.download_model_get)
app.router.add_get(f'/api/cancel-download-get', self.cancel_download_get)
app.router.add_get(f'/api/download-progress/{{download_id}}', self.get_download_progress)
# CivitAI integration routes
app.router.add_post(f'/api/{prefix}/fetch-all-civitai', self.fetch_all_civitai)
# app.router.add_get(f'/api/civitai/versions/{{model_id}}', self.get_civitai_versions)
# Add generic page route
app.router.add_get(f'/{prefix}', self.handle_models_page)
# Setup model-specific routes
self.setup_specific_routes(app, prefix)
@abstractmethod
def setup_specific_routes(self, registrar: ModelRouteRegistrar, prefix: str) -> None:
"""Setup model-specific routes."""
raise NotImplementedError
def _parse_specific_params(self, request: web.Request) -> Dict:
"""Parse model-specific parameters - to be overridden by subclasses."""
return {}
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type - to be overridden by subclasses."""
return True
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages - to be overridden by subclasses."""
return "any model type"
def _find_model_file(self, files):
"""Find the appropriate model file from the files list - can be overridden by subclasses."""
return next((file for file in files if file.get("type") == "Model" and file.get("primary") is True), None)
def get_handler(self, name: str) -> Callable[[web.Request], web.StreamResponse]:
"""Expose handlers for subclasses or tests."""
return self._ensure_handler_mapping()[name]
def _ensure_service(self):
if self.service is None:
raise RuntimeError("Model service has not been attached")
return self.service
def _ensure_file_service(self) -> ModelFileService:
if self.model_file_service is None:
service = self._ensure_service()
self.model_file_service = ModelFileService(service.scanner, service.model_type)
return self.model_file_service
def _ensure_move_service(self) -> ModelMoveService:
if self.model_move_service is None:
service = self._ensure_service()
self.model_move_service = ModelMoveService(service.scanner, service.model_type)
return self.model_move_service
def _ensure_lifecycle_service(self) -> ModelLifecycleService:
if self.model_lifecycle_service is None:
service = self._ensure_service()
self.model_lifecycle_service = ModelLifecycleService(
scanner=service.scanner,
metadata_manager=MetadataManager,
metadata_loader=self._metadata_sync_service.load_local_metadata,
recipe_scanner_factory=ServiceRegistry.get_recipe_scanner,
def setup_specific_routes(self, app: web.Application, prefix: str):
"""Setup model-specific routes - to be implemented by subclasses"""
pass
async def handle_models_page(self, request: web.Request) -> web.Response:
"""
Generic handler for model pages (e.g., /loras, /checkpoints).
Subclasses should set self.template_env and template_name.
"""
try:
# Check if the scanner is initializing
is_initializing = (
self.service.scanner._cache is None or
(hasattr(self.service.scanner, 'is_initializing') and callable(self.service.scanner.is_initializing) and self.service.scanner.is_initializing()) or
(hasattr(self.service.scanner, '_is_initializing') and self.service.scanner._is_initializing)
)
return self.model_lifecycle_service
def _make_handler_proxy(self, name: str) -> Callable[[web.Request], web.StreamResponse]:
async def proxy(request: web.Request) -> web.StreamResponse:
template_name = getattr(self, "template_name", None)
if not self.template_env or not template_name:
return web.Response(text="Template environment or template name not set", status=500)
if is_initializing:
rendered = self.template_env.get_template(template_name).render(
folders=[],
is_initializing=True,
settings=settings,
request=request
)
else:
try:
cache = await self.service.scanner.get_cached_data(force_refresh=False)
rendered = self.template_env.get_template(template_name).render(
folders=getattr(cache, "folders", []),
is_initializing=False,
settings=settings,
request=request
)
except Exception as cache_error:
logger.error(f"Error loading cache data: {cache_error}")
rendered = self.template_env.get_template(template_name).render(
folders=[],
is_initializing=True,
settings=settings,
request=request
)
return web.Response(
text=rendered,
content_type='text/html'
)
except Exception as e:
logger.error(f"Error handling models page: {e}", exc_info=True)
return web.Response(
text="Error loading models page",
status=500
)
async def get_models(self, request: web.Request) -> web.Response:
"""Get paginated model data"""
try:
# Parse common query parameters
params = self._parse_common_params(request)
# Get data from service
result = await self.service.get_paginated_data(**params)
# Format response items
formatted_result = {
'items': [await self.service.format_response(item) for item in result['items']],
'total': result['total'],
'page': result['page'],
'page_size': result['page_size'],
'total_pages': result['total_pages']
}
return web.json_response(formatted_result)
except Exception as e:
logger.error(f"Error in get_{self.model_type}s: {e}", exc_info=True)
return web.json_response({"error": str(e)}, status=500)
def _parse_common_params(self, request: web.Request) -> Dict:
"""Parse common query parameters"""
# Parse basic pagination and sorting
page = int(request.query.get('page', '1'))
page_size = min(int(request.query.get('page_size', '20')), 100)
sort_by = request.query.get('sort_by', 'name')
folder = request.query.get('folder', None)
search = request.query.get('search', None)
fuzzy_search = request.query.get('fuzzy_search', 'false').lower() == 'true'
# Parse filter arrays
base_models = request.query.getall('base_model', [])
tags = request.query.getall('tag', [])
favorites_only = request.query.get('favorites_only', 'false').lower() == 'true'
# Parse search options
search_options = {
'filename': request.query.get('search_filename', 'true').lower() == 'true',
'modelname': request.query.get('search_modelname', 'true').lower() == 'true',
'tags': request.query.get('search_tags', 'false').lower() == 'true',
'recursive': request.query.get('recursive', 'false').lower() == 'true',
}
# Parse hash filters if provided
hash_filters = {}
if 'hash' in request.query:
hash_filters['single_hash'] = request.query['hash']
elif 'hashes' in request.query:
try:
handler = self.get_handler(name)
except RuntimeError:
return web.json_response({"success": False, "error": "Service not ready"}, status=503)
return await handler(request)
return proxy
def _ensure_model_update_service(self) -> "ModelUpdateService":
if self._model_update_service is None:
raise RuntimeError("Model update service has not been attached")
return self._model_update_service
hash_list = json.loads(request.query['hashes'])
if isinstance(hash_list, list):
hash_filters['multiple_hashes'] = hash_list
except (json.JSONDecodeError, TypeError):
pass
return {
'page': page,
'page_size': page_size,
'sort_by': sort_by,
'folder': folder,
'search': search,
'fuzzy_search': fuzzy_search,
'base_models': base_models,
'tags': tags,
'search_options': search_options,
'hash_filters': hash_filters,
'favorites_only': favorites_only,
# Add model-specific parameters
**self._parse_specific_params(request)
}
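
As a quick illustration of what `_parse_common_params` derives from a raw query string, here is a standalone sketch using only the standard library (no aiohttp request object):

```python
from urllib.parse import parse_qsl

query = "page=2&page_size=50&base_model=SDXL+1.0&tag=style&fuzzy_search=true"
pairs = parse_qsl(query)   # preserves repeated keys like base_model/tag
single = dict(pairs)

page = int(single.get("page", "1"))
page_size = min(int(single.get("page_size", "20")), 100)
base_models = [v for k, v in pairs if k == "base_model"]
fuzzy_search = single.get("fuzzy_search", "false").lower() == "true"
print(page, page_size, base_models, fuzzy_search)  # 2 50 ['SDXL 1.0'] True
```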
def _parse_specific_params(self, request: web.Request) -> Dict:
"""Parse model-specific parameters - to be overridden by subclasses"""
return {}
# Common route handlers
async def delete_model(self, request: web.Request) -> web.Response:
"""Handle model deletion request"""
return await ModelRouteUtils.handle_delete_model(request, self.service.scanner)
async def exclude_model(self, request: web.Request) -> web.Response:
"""Handle model exclusion request"""
return await ModelRouteUtils.handle_exclude_model(request, self.service.scanner)
async def fetch_civitai(self, request: web.Request) -> web.Response:
"""Handle CivitAI metadata fetch request"""
response = await ModelRouteUtils.handle_fetch_civitai(request, self.service.scanner)
# If successful, format the metadata before returning
if response.status == 200:
data = json.loads(response.body.decode('utf-8'))
if data.get("success") and data.get("metadata"):
formatted_metadata = await self.service.format_response(data["metadata"])
return web.json_response({
"success": True,
"metadata": formatted_metadata
})
return response
async def relink_civitai(self, request: web.Request) -> web.Response:
"""Handle CivitAI metadata re-linking request"""
return await ModelRouteUtils.handle_relink_civitai(request, self.service.scanner)
async def replace_preview(self, request: web.Request) -> web.Response:
"""Handle preview image replacement"""
return await ModelRouteUtils.handle_replace_preview(request, self.service.scanner)
async def save_metadata(self, request: web.Request) -> web.Response:
"""Handle saving metadata updates"""
return await ModelRouteUtils.handle_save_metadata(request, self.service.scanner)
async def rename_model(self, request: web.Request) -> web.Response:
"""Handle renaming a model file and its associated files"""
return await ModelRouteUtils.handle_rename_model(request, self.service.scanner)
async def bulk_delete_models(self, request: web.Request) -> web.Response:
"""Handle bulk deletion of models"""
return await ModelRouteUtils.handle_bulk_delete_models(request, self.service.scanner)
async def verify_duplicates(self, request: web.Request) -> web.Response:
"""Handle verification of duplicate model hashes"""
return await ModelRouteUtils.handle_verify_duplicates(request, self.service.scanner)
async def get_top_tags(self, request: web.Request) -> web.Response:
"""Handle request for top tags sorted by frequency"""
try:
limit = int(request.query.get('limit', '20'))
if limit < 1 or limit > 100:
limit = 20
top_tags = await self.service.get_top_tags(limit)
return web.json_response({
'success': True,
'tags': top_tags
})
except Exception as e:
logger.error(f"Error getting top tags: {str(e)}", exc_info=True)
return web.json_response({
'success': False,
'error': 'Internal server error'
}, status=500)
async def get_base_models(self, request: web.Request) -> web.Response:
"""Get base models used in models"""
try:
limit = int(request.query.get('limit', '20'))
if limit < 1 or limit > 100:
limit = 20
base_models = await self.service.get_base_models(limit)
return web.json_response({
'success': True,
'base_models': base_models
})
except Exception as e:
logger.error(f"Error retrieving base models: {e}")
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def scan_models(self, request: web.Request) -> web.Response:
"""Force a rescan of model files"""
try:
full_rebuild = request.query.get('full_rebuild', 'false').lower() == 'true'
await self.service.scan_models(force_refresh=True, rebuild_cache=full_rebuild)
return web.json_response({
"status": "success",
"message": f"{self.model_type.capitalize()} scan completed"
})
except Exception as e:
logger.error(f"Error in scan_{self.model_type}s: {e}", exc_info=True)
return web.json_response({"error": str(e)}, status=500)
async def get_model_roots(self, request: web.Request) -> web.Response:
"""Return the model root directories"""
try:
roots = self.service.get_model_roots()
return web.json_response({
"success": True,
"roots": roots
})
except Exception as e:
logger.error(f"Error getting {self.model_type} roots: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def get_folders(self, request: web.Request) -> web.Response:
"""Get all folders in the cache"""
try:
cache = await self.service.scanner.get_cached_data()
return web.json_response({
'folders': cache.folders
})
except Exception as e:
logger.error(f"Error getting folders: {e}")
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def find_duplicate_models(self, request: web.Request) -> web.Response:
"""Find models with duplicate SHA256 hashes"""
try:
# Get duplicate hashes from service
duplicates = self.service.find_duplicate_hashes()
# Format the response
result = []
cache = await self.service.scanner.get_cached_data()
for sha256, paths in duplicates.items():
group = {
"hash": sha256,
"models": []
}
# Find matching models for each path
for path in paths:
model = next((m for m in cache.raw_data if m['file_path'] == path), None)
if model:
group["models"].append(await self.service.format_response(model))
# Add the primary model too
primary_path = self.service.get_path_by_hash(sha256)
if primary_path and primary_path not in paths:
primary_model = next((m for m in cache.raw_data if m['file_path'] == primary_path), None)
if primary_model:
group["models"].insert(0, await self.service.format_response(primary_model))
if len(group["models"]) > 1: # Only include if we found multiple models
result.append(group)
return web.json_response({
"success": True,
"duplicates": result,
"count": len(result)
})
except Exception as e:
logger.error(f"Error finding duplicate {self.model_type}s: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
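
The grouping above assumes `find_duplicate_hashes()` yields a hash-to-paths mapping roughly like the following (hash and paths invented):

```python
# Hypothetical shape of the duplicates mapping consumed by the handler above
duplicates = {
    "a1b2c3d4": ["/loras/style_v1.safetensors",
                 "/loras/backup/style_v1.safetensors"],
}
for sha256, paths in duplicates.items():
    print(sha256, len(paths))  # a1b2c3d4 2
```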
async def find_filename_conflicts(self, request: web.Request) -> web.Response:
"""Find models with conflicting filenames"""
try:
# Get duplicate filenames from service
duplicates = self.service.find_duplicate_filenames()
# Format the response
result = []
cache = await self.service.scanner.get_cached_data()
for filename, paths in duplicates.items():
group = {
"filename": filename,
"models": []
}
# Find matching models for each path
for path in paths:
model = next((m for m in cache.raw_data if m['file_path'] == path), None)
if model:
group["models"].append(await self.service.format_response(model))
# Find the model from the main index too
hash_val = self.service.scanner._hash_index.get_hash_by_filename(filename)
if hash_val:
main_path = self.service.get_path_by_hash(hash_val)
if main_path and main_path not in paths:
main_model = next((m for m in cache.raw_data if m['file_path'] == main_path), None)
if main_model:
group["models"].insert(0, await self.service.format_response(main_model))
if group["models"]:
result.append(group)
return web.json_response({
"success": True,
"conflicts": result,
"count": len(result)
})
except Exception as e:
logger.error(f"Error finding filename conflicts for {self.model_type}s: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
# Download management methods
async def download_model(self, request: web.Request) -> web.Response:
"""Handle model download request"""
return await ModelRouteUtils.handle_download_model(request)
async def download_model_get(self, request: web.Request) -> web.Response:
"""Handle model download request via GET method"""
try:
# Extract query parameters
model_id = request.query.get('model_id')
if not model_id:
return web.Response(
status=400,
text="Missing required parameter: Please provide 'model_id'"
)
# Get optional parameters
model_version_id = request.query.get('model_version_id')
download_id = request.query.get('download_id')
use_default_paths = request.query.get('use_default_paths', 'false').lower() == 'true'
# Create a data dictionary that mimics what would be received from a POST request
data = {
'model_id': model_id
}
# Add optional parameters only if they are provided
if model_version_id:
data['model_version_id'] = model_version_id
if download_id:
data['download_id'] = download_id
data['use_default_paths'] = use_default_paths
# Create a mock request object with the data
future = asyncio.get_event_loop().create_future()
future.set_result(data)
mock_request = type('MockRequest', (), {
'json': lambda self=None: future
})()
# Call the existing download handler
return await ModelRouteUtils.handle_download_model(mock_request)
except Exception as e:
error_message = str(e)
logger.error(f"Error downloading model via GET: {error_message}", exc_info=True)
return web.Response(status=500, text=error_message)
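
The GET handler above reuses the POST handler by faking an aiohttp request whose `json()` returns a pre-resolved future. The trick in isolation:

```python
import asyncio

async def demo():
    data = {"model_id": "12345", "use_default_paths": False}
    future = asyncio.get_event_loop().create_future()
    future.set_result(data)
    # A throwaway class whose json() hands back the resolved future
    mock_request = type('MockRequest', (), {'json': lambda self=None: future})()
    payload = await mock_request.json()  # awaiting the future yields the data
    print(payload["model_id"])  # 12345

asyncio.run(demo())
```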
async def cancel_download_get(self, request: web.Request) -> web.Response:
"""Handle GET request for cancelling a download by download_id"""
try:
download_id = request.query.get('download_id')
if not download_id:
return web.json_response({
'success': False,
'error': 'Download ID is required'
}, status=400)
# Create a mock request with match_info for compatibility
mock_request = type('MockRequest', (), {
'match_info': {'download_id': download_id}
})()
return await ModelRouteUtils.handle_cancel_download(mock_request)
except Exception as e:
logger.error(f"Error cancelling download via GET: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_download_progress(self, request: web.Request) -> web.Response:
"""Handle request for download progress by download_id"""
try:
# Get download_id from URL path
download_id = request.match_info.get('download_id')
if not download_id:
return web.json_response({
'success': False,
'error': 'Download ID is required'
}, status=400)
progress_data = ws_manager.get_download_progress(download_id)
if progress_data is None:
return web.json_response({
'success': False,
'error': 'Download ID not found'
}, status=404)
return web.json_response({
'success': True,
'progress': progress_data.get('progress', 0)
})
except Exception as e:
logger.error(f"Error getting download progress: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def fetch_all_civitai(self, request: web.Request) -> web.Response:
"""Fetch CivitAI metadata for all models in the background"""
try:
cache = await self.service.scanner.get_cached_data()
total = len(cache.raw_data)
processed = 0
success = 0
needs_resort = False
# Prepare models to process
to_process = [
model for model in cache.raw_data
if model.get('sha256') and (not model.get('civitai') or 'id' not in model.get('civitai')) and model.get('from_civitai', True)
]
total_to_process = len(to_process)
# Send initial progress
await ws_manager.broadcast({
'status': 'started',
'total': total_to_process,
'processed': 0,
'success': 0
})
# Process each model
for model in to_process:
try:
original_name = model.get('model_name')
if await ModelRouteUtils.fetch_and_update_model(
sha256=model['sha256'],
file_path=model['file_path'],
model_data=model,
update_cache_func=self.service.scanner.update_single_model_cache
):
success += 1
if original_name != model.get('model_name'):
needs_resort = True
processed += 1
# Send progress update
await ws_manager.broadcast({
'status': 'processing',
'total': total_to_process,
'processed': processed,
'success': success,
'current_name': model.get('model_name', 'Unknown')
})
except Exception as e:
logger.error(f"Error fetching CivitAI data for {model['file_path']}: {e}")
if needs_resort:
await cache.resort()
# Send completion message
await ws_manager.broadcast({
'status': 'completed',
'total': total_to_process,
'processed': processed,
'success': success
})
return web.json_response({
"success": True,
"message": f"Successfully updated {success} of {processed} processed {self.model_type}s (total: {total})"
})
except Exception as e:
# Send error message
await ws_manager.broadcast({
'status': 'error',
'error': str(e)
})
logger.error(f"Error in fetch_all_civitai for {self.model_type}s: {e}")
return web.Response(text=str(e), status=500)
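
Putting the broadcasts above together, a client subscribed to the websocket sees messages of roughly these shapes over the course of one run (counts are sample values):

```python
# Message shapes broadcast by fetch_all_civitai, reconstructed from the calls above
started    = {"status": "started",    "total": 42, "processed": 0,  "success": 0}
processing = {"status": "processing", "total": 42, "processed": 10, "success": 9,
              "current_name": "myLora"}
completed  = {"status": "completed",  "total": 42, "processed": 42, "success": 40}
```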
async def get_civitai_versions(self, request: web.Request) -> web.Response:
"""Get available versions for a Civitai model with local availability info"""
# This will be implemented by subclasses as they need CivitAI client access
return web.json_response({
"error": "Not implemented in base class"
}, status=501)

View File

@@ -1,200 +0,0 @@
"""Base infrastructure shared across recipe routes."""
from __future__ import annotations
import logging
import os
from typing import Callable, Mapping
import jinja2
from aiohttp import web
from ..config import config
from ..recipes import RecipeParserFactory
from ..services.downloader import get_downloader
from ..services.recipes import (
RecipeAnalysisService,
RecipePersistenceService,
RecipeSharingService,
)
from ..services.server_i18n import server_i18n
from ..services.service_registry import ServiceRegistry
from ..services.settings_manager import get_settings_manager
from ..utils.constants import CARD_PREVIEW_WIDTH
from ..utils.exif_utils import ExifUtils
from .handlers.recipe_handlers import (
RecipeAnalysisHandler,
RecipeHandlerSet,
RecipeListingHandler,
RecipeManagementHandler,
RecipePageView,
RecipeQueryHandler,
RecipeSharingHandler,
)
from .recipe_route_registrar import ROUTE_DEFINITIONS
logger = logging.getLogger(__name__)
class BaseRecipeRoutes:
"""Common dependency and startup wiring for recipe routes."""
_HANDLER_NAMES: tuple[str, ...] = tuple(
definition.handler_name for definition in ROUTE_DEFINITIONS
)
template_name: str = "recipes.html"
def __init__(self) -> None:
self.recipe_scanner = None
self.lora_scanner = None
self.civitai_client = None
self.settings = get_settings_manager()
self.server_i18n = server_i18n
self.template_env = jinja2.Environment(
loader=jinja2.FileSystemLoader(config.templates_path),
autoescape=True,
)
self._i18n_registered = False
self._startup_hooks_registered = False
self._handler_set: RecipeHandlerSet | None = None
self._handler_mapping: dict[str, Callable] | None = None
async def attach_dependencies(self, app: web.Application | None = None) -> None:
"""Resolve shared services from the registry."""
await self._ensure_services()
self._ensure_i18n_filter()
async def ensure_dependencies_ready(self) -> None:
"""Ensure dependencies are available for request handlers."""
if self.recipe_scanner is None or self.civitai_client is None:
await self.attach_dependencies()
def register_startup_hooks(self, app: web.Application) -> None:
"""Register startup hooks once for dependency wiring."""
if self._startup_hooks_registered:
return
app.on_startup.append(self.attach_dependencies)
self._startup_hooks_registered = True
def to_route_mapping(self) -> Mapping[str, Callable]:
"""Return a mapping of handler name to coroutine for registrar binding."""
if self._handler_mapping is None:
handler_set = self._create_handler_set()
self._handler_set = handler_set
self._handler_mapping = handler_set.to_route_mapping()
return self._handler_mapping
# Internal helpers -------------------------------------------------
async def _ensure_services(self) -> None:
if self.recipe_scanner is None:
self.recipe_scanner = await ServiceRegistry.get_recipe_scanner()
self.lora_scanner = getattr(self.recipe_scanner, "_lora_scanner", None)
if self.civitai_client is None:
self.civitai_client = await ServiceRegistry.get_civitai_client()
def _ensure_i18n_filter(self) -> None:
if not self._i18n_registered:
self.template_env.filters["t"] = self.server_i18n.create_template_filter()
self._i18n_registered = True
def get_handler_owner(self):
"""Return the object supplying bound handler coroutines."""
if self._handler_set is None:
self._handler_set = self._create_handler_set()
return self._handler_set
def _create_handler_set(self) -> RecipeHandlerSet:
recipe_scanner_getter = lambda: self.recipe_scanner
civitai_client_getter = lambda: self.civitai_client
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
if not standalone_mode:
from ..metadata_collector import get_metadata # type: ignore[import-not-found]
from ..metadata_collector.metadata_processor import ( # type: ignore[import-not-found]
MetadataProcessor,
)
from ..metadata_collector.metadata_registry import ( # type: ignore[import-not-found]
MetadataRegistry,
)
else: # pragma: no cover - optional dependency path
get_metadata = None # type: ignore[assignment]
MetadataProcessor = None # type: ignore[assignment]
MetadataRegistry = None # type: ignore[assignment]
analysis_service = RecipeAnalysisService(
exif_utils=ExifUtils,
recipe_parser_factory=RecipeParserFactory,
downloader_factory=get_downloader,
metadata_collector=get_metadata,
metadata_processor_cls=MetadataProcessor,
metadata_registry_cls=MetadataRegistry,
standalone_mode=standalone_mode,
logger=logger,
)
persistence_service = RecipePersistenceService(
exif_utils=ExifUtils,
card_preview_width=CARD_PREVIEW_WIDTH,
logger=logger,
)
sharing_service = RecipeSharingService(logger=logger)
page_view = RecipePageView(
ensure_dependencies_ready=self.ensure_dependencies_ready,
settings_service=self.settings,
server_i18n=self.server_i18n,
template_env=self.template_env,
template_name=self.template_name,
recipe_scanner_getter=recipe_scanner_getter,
logger=logger,
)
listing = RecipeListingHandler(
ensure_dependencies_ready=self.ensure_dependencies_ready,
recipe_scanner_getter=recipe_scanner_getter,
logger=logger,
)
query = RecipeQueryHandler(
ensure_dependencies_ready=self.ensure_dependencies_ready,
recipe_scanner_getter=recipe_scanner_getter,
format_recipe_file_url=listing.format_recipe_file_url,
logger=logger,
)
management = RecipeManagementHandler(
ensure_dependencies_ready=self.ensure_dependencies_ready,
recipe_scanner_getter=recipe_scanner_getter,
logger=logger,
persistence_service=persistence_service,
analysis_service=analysis_service,
downloader_factory=get_downloader,
civitai_client_getter=civitai_client_getter,
)
analysis = RecipeAnalysisHandler(
ensure_dependencies_ready=self.ensure_dependencies_ready,
recipe_scanner_getter=recipe_scanner_getter,
civitai_client_getter=civitai_client_getter,
logger=logger,
analysis_service=analysis_service,
)
sharing = RecipeSharingHandler(
ensure_dependencies_ready=self.ensure_dependencies_ready,
recipe_scanner_getter=recipe_scanner_getter,
logger=logger,
sharing_service=sharing_service,
)
return RecipeHandlerSet(
page_view=page_view,
listing=listing,
query=query,
management=management,
analysis=analysis,
sharing=sharing,
)
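
The `recipe_scanner_getter` / `civitai_client_getter` lambdas above exist because the handlers are constructed before the services are resolved at startup; capturing a closure defers the lookup to call time. The pattern in isolation:

```python
class Routes:
    def __init__(self):
        self.client = None  # resolved later, e.g. in attach_dependencies()

    def make_getter(self):
        return lambda: self.client  # evaluated only when a handler actually runs

routes = Routes()
getter = routes.make_getter()
routes.client = "civitai-client"
print(getter())  # civitai-client
```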

View File

@@ -1,12 +1,9 @@
import logging
from typing import Dict
from aiohttp import web
from .base_model_routes import BaseModelRoutes
from .model_route_registrar import ModelRouteRegistrar
from ..services.checkpoint_service import CheckpointService
from ..services.service_registry import ServiceRegistry
from ..config import config
logger = logging.getLogger(__name__)
@@ -15,18 +12,19 @@ class CheckpointRoutes(BaseModelRoutes):
def __init__(self):
"""Initialize Checkpoint routes with Checkpoint service"""
super().__init__()
# Service will be initialized later via setup_routes
self.service = None
self.civitai_client = None
self.template_name = "checkpoints.html"
async def initialize_services(self):
"""Initialize services from ServiceRegistry"""
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
update_service = await ServiceRegistry.get_model_update_service()
self.service = CheckpointService(checkpoint_scanner, update_service=update_service)
self.set_model_update_service(update_service)
# Attach service dependencies
self.attach_service(self.service)
self.service = CheckpointService(checkpoint_scanner)
self.civitai_client = await ServiceRegistry.get_civitai_client()
# Initialize parent with the service
super().__init__(self.service)
def setup_routes(self, app: web.Application):
"""Setup Checkpoint routes"""
@@ -36,35 +34,13 @@ class CheckpointRoutes(BaseModelRoutes):
# Setup common routes with 'checkpoints' prefix (includes page route)
super().setup_routes(app, 'checkpoints')
def setup_specific_routes(self, registrar: ModelRouteRegistrar, prefix: str):
def setup_specific_routes(self, app: web.Application, prefix: str):
"""Setup Checkpoint-specific routes"""
# Checkpoint-specific CivitAI integration
app.router.add_get(f'/api/{prefix}/civitai/versions/{{model_id}}', self.get_civitai_versions_checkpoint)
# Checkpoint info by name
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/info/{name}', prefix, self.get_checkpoint_info)
# Checkpoint roots and Unet roots
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/checkpoints_roots', prefix, self.get_checkpoints_roots)
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/unet_roots', prefix, self.get_unet_roots)
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type for Checkpoint"""
return model_type.lower() == 'checkpoint'
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages"""
return "Checkpoint"
def _parse_specific_params(self, request: web.Request) -> Dict:
"""Parse Checkpoint-specific parameters"""
params: Dict = {}
if 'checkpoint_hash' in request.query:
params['hash_filters'] = {'single_hash': request.query['checkpoint_hash'].lower()}
elif 'checkpoint_hashes' in request.query:
params['hash_filters'] = {
'multiple_hashes': [h.lower() for h in request.query['checkpoint_hashes'].split(',')]
}
return params
app.router.add_get(f'/api/{prefix}/info/{{name}}', self.get_checkpoint_info)
async def get_checkpoint_info(self, request: web.Request) -> web.Response:
"""Get detailed information for a specific checkpoint by name"""
@@ -81,32 +57,49 @@ class CheckpointRoutes(BaseModelRoutes):
logger.error(f"Error in get_checkpoint_info: {e}", exc_info=True)
return web.json_response({"error": str(e)}, status=500)
async def get_checkpoints_roots(self, request: web.Request) -> web.Response:
"""Return the list of checkpoint roots from config"""
async def get_civitai_versions_checkpoint(self, request: web.Request) -> web.Response:
"""Get available versions for a Civitai checkpoint model with local availability info"""
try:
roots = config.checkpoints_roots
return web.json_response({
"success": True,
"roots": roots
})
model_id = request.match_info['model_id']
response = await self.civitai_client.get_model_versions(model_id)
if not response or not response.get('modelVersions'):
return web.Response(status=404, text="Model not found")
versions = response.get('modelVersions', [])
model_type = response.get('type', '')
# Check model type - should be Checkpoint
if model_type.lower() != 'checkpoint':
return web.json_response({
'error': f"Model type mismatch. Expected Checkpoint, got {model_type}"
}, status=400)
# Check local availability for each version
for version in versions:
# Find the primary model file (type="Model" and primary=true) in the files list
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model' and file.get('primary') == True), None)
# If no primary file found, try to find any model file
if not model_file:
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model'), None)
if model_file:
sha256 = model_file.get('hashes', {}).get('SHA256')
if sha256:
# Set existsLocally and localPath at the version level
version['existsLocally'] = self.service.has_hash(sha256)
if version['existsLocally']:
version['localPath'] = self.service.get_path_by_hash(sha256)
# Also set the model file size at the version level for easier access
version['modelSizeKB'] = model_file.get('sizeKB')
else:
# No model file found in this version
version['existsLocally'] = False
return web.json_response(versions)
except Exception as e:
logger.error(f"Error getting checkpoint roots: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def get_unet_roots(self, request: web.Request) -> web.Response:
"""Return the list of unet roots from config"""
try:
roots = config.unet_roots
return web.json_response({
"success": True,
"roots": roots
})
except Exception as e:
logger.error(f"Error getting unet roots: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
logger.error(f"Error fetching checkpoint model versions: {e}")
return web.Response(status=500, text=str(e))
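
The versions endpoint returns the CivitAI payload with local availability stamped onto each element; one annotated entry might look like this (all values invented):

```python
# Illustrative element of the JSON array returned by the versions endpoint
version = {
    "id": 123456,
    "files": [{"type": "Model", "primary": True,
               "hashes": {"SHA256": "ABCDEF..."}, "sizeKB": 2048.0}],
    "existsLocally": True,                                   # stamped by the handler
    "localPath": "/models/checkpoints/example_v1.safetensors",
    "modelSizeKB": 2048.0,                                   # copied up for easy access
}
```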

View File

@@ -2,7 +2,6 @@ import logging
from aiohttp import web
from .base_model_routes import BaseModelRoutes
from .model_route_registrar import ModelRouteRegistrar
from ..services.embedding_service import EmbeddingService
from ..services.service_registry import ServiceRegistry
@@ -13,18 +12,19 @@ class EmbeddingRoutes(BaseModelRoutes):
def __init__(self):
"""Initialize Embedding routes with Embedding service"""
super().__init__()
# Service will be initialized later via setup_routes
self.service = None
self.civitai_client = None
self.template_name = "embeddings.html"
async def initialize_services(self):
"""Initialize services from ServiceRegistry"""
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
update_service = await ServiceRegistry.get_model_update_service()
self.service = EmbeddingService(embedding_scanner, update_service=update_service)
self.set_model_update_service(update_service)
# Attach service dependencies
self.attach_service(self.service)
self.service = EmbeddingService(embedding_scanner)
self.civitai_client = await ServiceRegistry.get_civitai_client()
# Initialize parent with the service
super().__init__(self.service)
def setup_routes(self, app: web.Application):
"""Setup Embedding routes"""
@@ -34,18 +34,13 @@ class EmbeddingRoutes(BaseModelRoutes):
# Setup common routes with 'embeddings' prefix (includes page route)
super().setup_routes(app, 'embeddings')
def setup_specific_routes(self, registrar: ModelRouteRegistrar, prefix: str):
def setup_specific_routes(self, app: web.Application, prefix: str):
"""Setup Embedding-specific routes"""
# Embedding-specific CivitAI integration
app.router.add_get(f'/api/{prefix}/civitai/versions/{{model_id}}', self.get_civitai_versions_embedding)
# Embedding info by name
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/info/{name}', prefix, self.get_embedding_info)
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type for Embedding"""
return model_type.lower() == 'textualinversion'
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages"""
return "TextualInversion"
app.router.add_get(f'/api/{prefix}/info/{{name}}', self.get_embedding_info)
async def get_embedding_info(self, request: web.Request) -> web.Response:
"""Get detailed information for a specific embedding by name"""
@@ -61,3 +56,50 @@ class EmbeddingRoutes(BaseModelRoutes):
except Exception as e:
logger.error(f"Error in get_embedding_info: {e}", exc_info=True)
return web.json_response({"error": str(e)}, status=500)
async def get_civitai_versions_embedding(self, request: web.Request) -> web.Response:
"""Get available versions for a Civitai embedding model with local availability info"""
try:
model_id = request.match_info['model_id']
response = await self.civitai_client.get_model_versions(model_id)
if not response or not response.get('modelVersions'):
return web.Response(status=404, text="Model not found")
versions = response.get('modelVersions', [])
model_type = response.get('type', '')
# Check model type - should be TextualInversion (Embedding)
if model_type.lower() not in ['textualinversion', 'embedding']:
return web.json_response({
'error': f"Model type mismatch. Expected TextualInversion/Embedding, got {model_type}"
}, status=400)
# Check local availability for each version
for version in versions:
# Find the primary model file (type="Model" and primary=true) in the files list
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model' and file.get('primary') == True), None)
# If no primary file found, try to find any model file
if not model_file:
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model'), None)
if model_file:
sha256 = model_file.get('hashes', {}).get('SHA256')
if sha256:
# Set existsLocally and localPath at the version level
version['existsLocally'] = self.service.has_hash(sha256)
if version['existsLocally']:
version['localPath'] = self.service.get_path_by_hash(sha256)
# Also set the model file size at the version level for easier access
version['modelSizeKB'] = model_file.get('sizeKB')
else:
# No model file found in this version
version['existsLocally'] = False
return web.json_response(versions)
except Exception as e:
logger.error(f"Error fetching embedding model versions: {e}")
return web.Response(status=500, text=str(e))
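Each version in the returned array is annotated in place with local-availability fields. The shape a client might receive looks roughly like this — values are illustrative, but the keys follow the handler code above:

```python
# Illustrative (not captured API output): one element of the JSON array
# returned by the embedding versions endpoint after annotation.
annotated_version = {
    "id": 123456,
    "files": [
        {
            "type": "Model",
            "primary": True,
            "sizeKB": 144320.5,
            "hashes": {"SHA256": "ABCD1234..."},
        },
    ],
    # Fields added by the route handler:
    "existsLocally": True,
    "localPath": "/data/embeddings/example.pt",
    "modelSizeKB": 144320.5,
}
```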

View File

@@ -1,65 +0,0 @@
"""Route registrar for example image endpoints."""
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable, Iterable, Mapping
from aiohttp import web
@dataclass(frozen=True)
class RouteDefinition:
"""Declarative configuration for a HTTP route."""
method: str
path: str
handler_name: str
ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("POST", "/api/lm/download-example-images", "download_example_images"),
RouteDefinition("POST", "/api/lm/import-example-images", "import_example_images"),
RouteDefinition("GET", "/api/lm/example-images-status", "get_example_images_status"),
RouteDefinition("POST", "/api/lm/pause-example-images", "pause_example_images"),
RouteDefinition("POST", "/api/lm/resume-example-images", "resume_example_images"),
RouteDefinition("POST", "/api/lm/stop-example-images", "stop_example_images"),
RouteDefinition("POST", "/api/lm/open-example-images-folder", "open_example_images_folder"),
RouteDefinition("GET", "/api/lm/example-image-files", "get_example_image_files"),
RouteDefinition("GET", "/api/lm/has-example-images", "has_example_images"),
RouteDefinition("POST", "/api/lm/delete-example-image", "delete_example_image"),
RouteDefinition("POST", "/api/lm/force-download-example-images", "force_download_example_images"),
RouteDefinition("POST", "/api/lm/cleanup-example-image-folders", "cleanup_example_image_folders"),
RouteDefinition("POST", "/api/lm/example-images/set-nsfw-level", "set_example_image_nsfw_level"),
RouteDefinition("POST", "/api/lm/check-example-images-needed", "check_example_images_needed"),
)
class ExampleImagesRouteRegistrar:
"""Bind declarative example image routes to an aiohttp router."""
_METHOD_MAP = {
"GET": "add_get",
"POST": "add_post",
"PUT": "add_put",
"DELETE": "add_delete",
}
def __init__(self, app: web.Application) -> None:
self._app = app
def register_routes(
self,
handler_lookup: Mapping[str, Callable[[web.Request], object]],
*,
definitions: Iterable[RouteDefinition] = ROUTE_DEFINITIONS,
) -> None:
"""Register each route definition using the supplied handlers."""
for definition in definitions:
handler = handler_lookup[definition.handler_name]
self._bind_route(definition.method, definition.path, handler)
def _bind_route(self, method: str, path: str, handler: Callable[[web.Request], object]) -> None:
add_method_name = self._METHOD_MAP[method.upper()]
add_method = getattr(self._app.router, add_method_name)
add_method(path, handler)
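The registrar keeps routing declarative: handlers are looked up by name, so a typo in a `RouteDefinition` fails fast with a `KeyError` at registration time rather than a 404 at request time. A minimal sketch of how it might be exercised, assuming `ExampleImagesRouteRegistrar` and `RouteDefinition` above are importable (the stub handler is hypothetical):

```python
from aiohttp import web

async def download_example_images(request: web.Request) -> web.Response:
    # Hypothetical stub standing in for the real handler.
    return web.json_response({"success": True})

app = web.Application()
registrar = ExampleImagesRouteRegistrar(app)
registrar.register_routes(
    {"download_example_images": download_example_images},
    definitions=(
        RouteDefinition("POST", "/api/lm/download-example-images",
                        "download_example_images"),
    ),
)
```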

View File

@@ -1,88 +1,67 @@
from __future__ import annotations
import logging
from typing import Callable, Mapping
from aiohttp import web
from .example_images_route_registrar import ExampleImagesRouteRegistrar
from .handlers.example_images_handlers import (
ExampleImagesDownloadHandler,
ExampleImagesFileHandler,
ExampleImagesHandlerSet,
ExampleImagesManagementHandler,
)
from ..services.use_cases.example_images import (
DownloadExampleImagesUseCase,
ImportExampleImagesUseCase,
)
from ..utils.example_images_download_manager import (
DownloadManager,
get_default_download_manager,
)
from ..utils.example_images_file_manager import ExampleImagesFileManager
from ..utils.example_images_download_manager import DownloadManager
from ..utils.example_images_processor import ExampleImagesProcessor
from ..services.example_images_cleanup_service import ExampleImagesCleanupService
from ..utils.example_images_file_manager import ExampleImagesFileManager
logger = logging.getLogger(__name__)
class ExampleImagesRoutes:
"""Route controller for example image endpoints."""
"""Routes for example images related functionality"""
@staticmethod
def setup_routes(app):
"""Register example images routes"""
app.router.add_post('/api/download-example-images', ExampleImagesRoutes.download_example_images)
app.router.add_post('/api/import-example-images', ExampleImagesRoutes.import_example_images)
app.router.add_get('/api/example-images-status', ExampleImagesRoutes.get_example_images_status)
app.router.add_post('/api/pause-example-images', ExampleImagesRoutes.pause_example_images)
app.router.add_post('/api/resume-example-images', ExampleImagesRoutes.resume_example_images)
app.router.add_post('/api/open-example-images-folder', ExampleImagesRoutes.open_example_images_folder)
app.router.add_get('/api/example-image-files', ExampleImagesRoutes.get_example_image_files)
app.router.add_get('/api/has-example-images', ExampleImagesRoutes.has_example_images)
app.router.add_post('/api/delete-example-image', ExampleImagesRoutes.delete_example_image)
def __init__(
self,
*,
ws_manager,
download_manager: DownloadManager | None = None,
processor=ExampleImagesProcessor,
file_manager=ExampleImagesFileManager,
cleanup_service: ExampleImagesCleanupService | None = None,
) -> None:
if ws_manager is None:
raise ValueError("ws_manager is required")
self._download_manager = download_manager or get_default_download_manager(ws_manager)
self._processor = processor
self._file_manager = file_manager
self._cleanup_service = cleanup_service or ExampleImagesCleanupService()
self._handler_set: ExampleImagesHandlerSet | None = None
self._handler_mapping: Mapping[str, Callable[[web.Request], web.StreamResponse]] | None = None
@staticmethod
async def download_example_images(request):
"""Download example images for models from Civitai"""
return await DownloadManager.start_download(request)
@classmethod
def setup_routes(cls, app: web.Application, *, ws_manager) -> None:
"""Register routes on the given aiohttp application using default wiring."""
@staticmethod
async def get_example_images_status(request):
"""Get the current status of example images download"""
return await DownloadManager.get_status(request)
controller = cls(ws_manager=ws_manager)
controller.register(app)
@staticmethod
async def pause_example_images(request):
"""Pause the example images download"""
return await DownloadManager.pause_download(request)
def register(self, app: web.Application) -> None:
"""Bind the controller's handlers to the aiohttp router."""
@staticmethod
async def resume_example_images(request):
"""Resume the example images download"""
return await DownloadManager.resume_download(request)
@staticmethod
async def open_example_images_folder(request):
"""Open the example images folder for a specific model"""
return await ExampleImagesFileManager.open_folder(request)
registrar = ExampleImagesRouteRegistrar(app)
registrar.register_routes(self.to_route_mapping())
@staticmethod
async def get_example_image_files(request):
"""Get list of example image files for a specific model"""
return await ExampleImagesFileManager.get_files(request)
def to_route_mapping(self) -> Mapping[str, Callable[[web.Request], web.StreamResponse]]:
"""Return the registrar-compatible mapping of handler names to callables."""
@staticmethod
async def import_example_images(request):
"""Import local example images for a model"""
return await ExampleImagesProcessor.import_images(request)
@staticmethod
async def has_example_images(request):
"""Check if example images folder exists and is not empty for a model"""
return await ExampleImagesFileManager.has_images(request)
if self._handler_mapping is None:
handler_set = self._build_handler_set()
self._handler_set = handler_set
self._handler_mapping = handler_set.to_route_mapping()
return self._handler_mapping
def _build_handler_set(self) -> ExampleImagesHandlerSet:
logger.debug("Building ExampleImagesHandlerSet with %s, %s, %s", self._download_manager, self._processor, self._file_manager)
download_use_case = DownloadExampleImagesUseCase(download_manager=self._download_manager)
download_handler = ExampleImagesDownloadHandler(download_use_case, self._download_manager)
import_use_case = ImportExampleImagesUseCase(processor=self._processor)
management_handler = ExampleImagesManagementHandler(
import_use_case,
self._processor,
self._cleanup_service,
)
file_handler = ExampleImagesFileHandler(self._file_manager)
return ExampleImagesHandlerSet(
download=download_handler,
management=management_handler,
files=file_handler,
)
@staticmethod
async def delete_example_image(request):
"""Delete a custom example image for a model"""
return await ExampleImagesProcessor.delete_custom_image(request)
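After the refactor, the controller can be wired with defaults via the classmethod or constructed directly with injected collaborators, which is what makes the handler set testable. A rough sketch, with `MagicMock` stand-ins for the real dependencies (in a real test you would inject the download manager to bypass the default factory):

```python
from unittest.mock import MagicMock
from aiohttp import web

ws_manager = MagicMock()            # stand-in for the app's websocket manager

# Default wiring, as the application bootstrap would do it:
app = web.Application()
ExampleImagesRoutes.setup_routes(app, ws_manager=ws_manager)

# Manual wiring with an injected download manager, e.g. in a unit test:
test_app = web.Application()
controller = ExampleImagesRoutes(
    ws_manager=ws_manager,
    download_manager=MagicMock(),   # avoids get_default_download_manager
)
controller.register(test_app)
```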

View File

@@ -1,185 +0,0 @@
"""Handler set for example image routes."""
from __future__ import annotations
from dataclasses import dataclass
from typing import Callable, Mapping
from aiohttp import web
from ...services.use_cases.example_images import (
DownloadExampleImagesConfigurationError,
DownloadExampleImagesInProgressError,
DownloadExampleImagesUseCase,
ImportExampleImagesUseCase,
ImportExampleImagesValidationError,
)
from ...utils.example_images_download_manager import (
DownloadConfigurationError,
DownloadInProgressError,
DownloadNotRunningError,
ExampleImagesDownloadError,
)
from ...utils.example_images_processor import ExampleImagesImportError
class ExampleImagesDownloadHandler:
"""HTTP adapters for download-related example image endpoints."""
def __init__(
self,
download_use_case: DownloadExampleImagesUseCase,
download_manager,
) -> None:
self._download_use_case = download_use_case
self._download_manager = download_manager
async def download_example_images(self, request: web.Request) -> web.StreamResponse:
try:
payload = await request.json()
result = await self._download_use_case.execute(payload)
return web.json_response(result)
except DownloadExampleImagesInProgressError as exc:
response = {
'success': False,
'error': str(exc),
'status': exc.progress,
}
return web.json_response(response, status=400)
except DownloadExampleImagesConfigurationError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=400)
except ExampleImagesDownloadError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=500)
async def get_example_images_status(self, request: web.Request) -> web.StreamResponse:
result = await self._download_manager.get_status(request)
return web.json_response(result)
async def pause_example_images(self, request: web.Request) -> web.StreamResponse:
try:
result = await self._download_manager.pause_download(request)
return web.json_response(result)
except DownloadNotRunningError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=400)
async def resume_example_images(self, request: web.Request) -> web.StreamResponse:
try:
result = await self._download_manager.resume_download(request)
return web.json_response(result)
except DownloadNotRunningError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=400)
async def stop_example_images(self, request: web.Request) -> web.StreamResponse:
try:
result = await self._download_manager.stop_download(request)
return web.json_response(result)
except DownloadNotRunningError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=400)
async def force_download_example_images(self, request: web.Request) -> web.StreamResponse:
try:
payload = await request.json()
result = await self._download_manager.start_force_download(payload)
return web.json_response(result)
except DownloadInProgressError as exc:
response = {
'success': False,
'error': str(exc),
'status': exc.progress_snapshot,
}
return web.json_response(response, status=400)
except DownloadConfigurationError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=400)
except ExampleImagesDownloadError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=500)
async def check_example_images_needed(self, request: web.Request) -> web.StreamResponse:
"""Lightweight check to see if any models need example images downloaded."""
try:
payload = await request.json()
model_types = payload.get('model_types', ['lora', 'checkpoint', 'embedding'])
result = await self._download_manager.check_pending_models(model_types)
return web.json_response(result)
except Exception as exc:
return web.json_response(
{'success': False, 'error': str(exc)},
status=500
)
class ExampleImagesManagementHandler:
"""HTTP adapters for import/delete endpoints."""
def __init__(self, import_use_case: ImportExampleImagesUseCase, processor, cleanup_service) -> None:
self._import_use_case = import_use_case
self._processor = processor
self._cleanup_service = cleanup_service
async def import_example_images(self, request: web.Request) -> web.StreamResponse:
try:
result = await self._import_use_case.execute(request)
return web.json_response(result)
except ImportExampleImagesValidationError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=400)
except ExampleImagesImportError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=500)
async def delete_example_image(self, request: web.Request) -> web.StreamResponse:
return await self._processor.delete_custom_image(request)
async def set_example_image_nsfw_level(self, request: web.Request) -> web.StreamResponse:
return await self._processor.set_example_image_nsfw_level(request)
async def cleanup_example_image_folders(self, request: web.Request) -> web.StreamResponse:
result = await self._cleanup_service.cleanup_example_image_folders()
if result.get('success') or result.get('partial_success'):
return web.json_response(result, status=200)
error_code = result.get('error_code')
status = 400 if error_code in {'path_not_configured', 'path_not_found'} else 500
return web.json_response(result, status=status)
class ExampleImagesFileHandler:
"""HTTP adapters for filesystem-centric endpoints."""
def __init__(self, file_manager) -> None:
self._file_manager = file_manager
async def open_example_images_folder(self, request: web.Request) -> web.StreamResponse:
return await self._file_manager.open_folder(request)
async def get_example_image_files(self, request: web.Request) -> web.StreamResponse:
return await self._file_manager.get_files(request)
async def has_example_images(self, request: web.Request) -> web.StreamResponse:
return await self._file_manager.has_images(request)
@dataclass(frozen=True)
class ExampleImagesHandlerSet:
"""Aggregate of handlers exposed to the registrar."""
download: ExampleImagesDownloadHandler
management: ExampleImagesManagementHandler
files: ExampleImagesFileHandler
def to_route_mapping(self) -> Mapping[str, Callable[[web.Request], web.StreamResponse]]:
"""Flatten handler methods into the registrar mapping."""
return {
"download_example_images": self.download.download_example_images,
"get_example_images_status": self.download.get_example_images_status,
"pause_example_images": self.download.pause_example_images,
"resume_example_images": self.download.resume_example_images,
"stop_example_images": self.download.stop_example_images,
"force_download_example_images": self.download.force_download_example_images,
"check_example_images_needed": self.download.check_example_images_needed,
"import_example_images": self.management.import_example_images,
"delete_example_image": self.management.delete_example_image,
"set_example_image_nsfw_level": self.management.set_example_image_nsfw_level,
"cleanup_example_image_folders": self.management.cleanup_example_image_folders,
"open_example_images_folder": self.files.open_example_images_folder,
"get_example_image_files": self.files.get_example_image_files,
"has_example_images": self.files.has_example_images,
}
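Each handler converts domain exceptions into status codes — validation and configuration problems become 400s, download failures 500s — so the registrar never needs error logic of its own. A rough test sketch of that contract, assuming `DownloadExampleImagesConfigurationError` accepts a message string (the mocks are stand-ins for the real collaborators):

```python
import asyncio
from unittest.mock import AsyncMock

async def demo() -> None:
    failing_use_case = AsyncMock()
    failing_use_case.execute.side_effect = DownloadExampleImagesConfigurationError(
        "example images path is not configured"
    )
    handler = ExampleImagesDownloadHandler(failing_use_case, AsyncMock())

    request = AsyncMock()
    request.json.return_value = {"model_types": ["lora"]}

    response = await handler.download_example_images(request)
    assert response.status == 400  # configuration errors map to HTTP 400

asyncio.run(demo())
```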

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,55 +0,0 @@
"""Handlers responsible for serving preview assets dynamically."""
from __future__ import annotations
import logging
import urllib.parse
from pathlib import Path
from aiohttp import web
from ...config import config as global_config
logger = logging.getLogger(__name__)
class PreviewHandler:
"""Serve preview assets for the active library at request time."""
def __init__(self, *, config=global_config) -> None:
self._config = config
async def serve_preview(self, request: web.Request) -> web.StreamResponse:
"""Return the preview file referenced by the encoded ``path`` query."""
raw_path = request.query.get("path", "")
if not raw_path:
raise web.HTTPBadRequest(text="Missing 'path' query parameter")
try:
decoded_path = urllib.parse.unquote(raw_path)
except Exception as exc: # pragma: no cover - defensive guard
logger.debug("Failed to decode preview path %s: %s", raw_path, exc)
raise web.HTTPBadRequest(text="Invalid preview path encoding") from exc
normalized = decoded_path.replace("\\", "/")
if not self._config.is_preview_path_allowed(normalized):
raise web.HTTPForbidden(text="Preview path is not within an allowed directory")
candidate = Path(normalized)
try:
resolved = candidate.expanduser().resolve(strict=False)
except Exception as exc:
logger.debug("Failed to resolve preview path %s: %s", normalized, exc)
raise web.HTTPBadRequest(text="Unable to resolve preview path") from exc
if not resolved.is_file():
logger.debug("Preview file not found at %s", str(resolved))
raise web.HTTPNotFound(text="Preview file not found")
# aiohttp's FileResponse handles range requests and content headers for us.
return web.FileResponse(path=resolved, chunk_size=256 * 1024)
__all__ = ["PreviewHandler"]

File diff suppressed because it is too large

View File

@@ -5,351 +5,479 @@ from typing import Dict
from server import PromptServer # type: ignore
from .base_model_routes import BaseModelRoutes
from .model_route_registrar import ModelRouteRegistrar
from ..services.lora_service import LoraService
from ..services.service_registry import ServiceRegistry
from ..utils.routes_common import ModelRouteUtils
from ..utils.utils import get_lora_info
logger = logging.getLogger(__name__)
class LoraRoutes(BaseModelRoutes):
"""LoRA-specific route controller"""
def __init__(self):
"""Initialize LoRA routes with LoRA service"""
super().__init__()
# Service will be initialized later via setup_routes
self.service = None
self.civitai_client = None
self.template_name = "loras.html"
async def initialize_services(self):
"""Initialize services from ServiceRegistry"""
lora_scanner = await ServiceRegistry.get_lora_scanner()
update_service = await ServiceRegistry.get_model_update_service()
self.service = LoraService(lora_scanner, update_service=update_service)
self.set_model_update_service(update_service)
# Attach service dependencies
self.attach_service(self.service)
self.service = LoraService(lora_scanner)
self.civitai_client = await ServiceRegistry.get_civitai_client()
# Initialize parent with the service
super().__init__(self.service)
def setup_routes(self, app: web.Application):
"""Setup LoRA routes"""
# Schedule service initialization on app startup
app.on_startup.append(lambda _: self.initialize_services())
# Setup common routes with 'loras' prefix (includes page route)
super().setup_routes(app, "loras")
def setup_specific_routes(self, registrar: ModelRouteRegistrar, prefix: str):
super().setup_routes(app, 'loras')
def setup_specific_routes(self, app: web.Application, prefix: str):
"""Setup LoRA-specific routes"""
# LoRA-specific query routes
registrar.add_prefixed_route(
"GET", "/api/lm/{prefix}/letter-counts", prefix, self.get_letter_counts
)
registrar.add_prefixed_route(
"GET",
"/api/lm/{prefix}/get-trigger-words",
prefix,
self.get_lora_trigger_words,
)
registrar.add_prefixed_route(
"GET",
"/api/lm/{prefix}/usage-tips-by-path",
prefix,
self.get_lora_usage_tips_by_path,
)
# Randomizer routes
registrar.add_prefixed_route(
"POST", "/api/lm/{prefix}/random-sample", prefix, self.get_random_loras
)
# Cycler routes
registrar.add_prefixed_route(
"POST", "/api/lm/{prefix}/cycler-list", prefix, self.get_cycler_list
)
app.router.add_get(f'/api/{prefix}/letter-counts', self.get_letter_counts)
app.router.add_get(f'/api/{prefix}/get-notes', self.get_lora_notes)
app.router.add_get(f'/api/{prefix}/get-trigger-words', self.get_lora_trigger_words)
app.router.add_get(f'/api/{prefix}/preview-url', self.get_lora_preview_url)
app.router.add_get(f'/api/{prefix}/civitai-url', self.get_lora_civitai_url)
app.router.add_get(f'/api/{prefix}/model-description', self.get_lora_model_description)
# LoRA-specific management routes
app.router.add_post(f'/api/{prefix}/move_model', self.move_model)
app.router.add_post(f'/api/{prefix}/move_models_bulk', self.move_models_bulk)
# CivitAI integration with LoRA-specific validation
app.router.add_get(f'/api/{prefix}/civitai/versions/{{model_id}}', self.get_civitai_versions_lora)
app.router.add_get(f'/api/{prefix}/civitai/model/version/{{modelVersionId}}', self.get_civitai_model_by_version)
app.router.add_get(f'/api/{prefix}/civitai/model/hash/{{hash}}', self.get_civitai_model_by_hash)
# ComfyUI integration
registrar.add_prefixed_route(
"POST", "/api/lm/{prefix}/get_trigger_words", prefix, self.get_trigger_words
)
app.router.add_post(f'/api/{prefix}/get_trigger_words', self.get_trigger_words)
def _parse_specific_params(self, request: web.Request) -> Dict:
"""Parse LoRA-specific parameters"""
params = {}
# LoRA-specific parameters
if "first_letter" in request.query:
params["first_letter"] = request.query.get("first_letter")
if 'first_letter' in request.query:
params['first_letter'] = request.query.get('first_letter')
# Handle fuzzy search parameter name variation
if request.query.get("fuzzy") == "true":
params["fuzzy_search"] = True
if request.query.get('fuzzy') == 'true':
params['fuzzy_search'] = True
# Handle additional filter parameters for LoRAs
if "lora_hash" in request.query:
if not params.get("hash_filters"):
params["hash_filters"] = {}
params["hash_filters"]["single_hash"] = request.query["lora_hash"].lower()
elif "lora_hashes" in request.query:
if not params.get("hash_filters"):
params["hash_filters"] = {}
params["hash_filters"]["multiple_hashes"] = [
h.lower() for h in request.query["lora_hashes"].split(",")
]
if 'lora_hash' in request.query:
if not params.get('hash_filters'):
params['hash_filters'] = {}
params['hash_filters']['single_hash'] = request.query['lora_hash'].lower()
elif 'lora_hashes' in request.query:
if not params.get('hash_filters'):
params['hash_filters'] = {}
params['hash_filters']['multiple_hashes'] = [h.lower() for h in request.query['lora_hashes'].split(',')]
return params
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type for LoRA"""
from ..utils.constants import VALID_LORA_TYPES
return model_type.lower() in VALID_LORA_TYPES
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages"""
return "LORA, LoCon, or DORA"
# LoRA-specific route handlers
async def get_letter_counts(self, request: web.Request) -> web.Response:
"""Get count of LoRAs for each letter of the alphabet"""
try:
letter_counts = await self.service.get_letter_counts()
return web.json_response({"success": True, "letter_counts": letter_counts})
return web.json_response({
'success': True,
'letter_counts': letter_counts
})
except Exception as e:
logger.error(f"Error getting letter counts: {e}")
return web.json_response({"success": False, "error": str(e)}, status=500)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_lora_notes(self, request: web.Request) -> web.Response:
"""Get notes for a specific LoRA file"""
try:
lora_name = request.query.get("name")
lora_name = request.query.get('name')
if not lora_name:
return web.Response(text="Lora file name is required", status=400)
return web.Response(text='Lora file name is required', status=400)
notes = await self.service.get_lora_notes(lora_name)
if notes is not None:
return web.json_response({"success": True, "notes": notes})
return web.json_response({
'success': True,
'notes': notes
})
else:
return web.json_response(
{"success": False, "error": "LoRA not found in cache"}, status=404
)
return web.json_response({
'success': False,
'error': 'LoRA not found in cache'
}, status=404)
except Exception as e:
logger.error(f"Error getting lora notes: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_lora_trigger_words(self, request: web.Request) -> web.Response:
"""Get trigger words for a specific LoRA file"""
try:
lora_name = request.query.get("name")
lora_name = request.query.get('name')
if not lora_name:
return web.Response(text="Lora file name is required", status=400)
return web.Response(text='Lora file name is required', status=400)
trigger_words = await self.service.get_lora_trigger_words(lora_name)
return web.json_response({"success": True, "trigger_words": trigger_words})
return web.json_response({
'success': True,
'trigger_words': trigger_words
})
except Exception as e:
logger.error(f"Error getting lora trigger words: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
async def get_lora_usage_tips_by_path(self, request: web.Request) -> web.Response:
"""Get usage tips for a LoRA by its relative path"""
try:
relative_path = request.query.get("relative_path")
if not relative_path:
return web.Response(text="Relative path is required", status=400)
usage_tips = await self.service.get_lora_usage_tips_by_relative_path(
relative_path
)
return web.json_response({"success": True, "usage_tips": usage_tips or ""})
except Exception as e:
logger.error(f"Error getting lora usage tips by path: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_lora_preview_url(self, request: web.Request) -> web.Response:
"""Get the static preview URL for a LoRA file"""
try:
lora_name = request.query.get("name")
lora_name = request.query.get('name')
if not lora_name:
return web.Response(text="Lora file name is required", status=400)
return web.Response(text='Lora file name is required', status=400)
preview_url = await self.service.get_lora_preview_url(lora_name)
if preview_url:
return web.json_response({"success": True, "preview_url": preview_url})
return web.json_response({
'success': True,
'preview_url': preview_url
})
else:
return web.json_response(
{
"success": False,
"error": "No preview URL found for the specified lora",
},
status=404,
)
return web.json_response({
'success': False,
'error': 'No preview URL found for the specified lora'
}, status=404)
except Exception as e:
logger.error(f"Error getting lora preview URL: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_lora_civitai_url(self, request: web.Request) -> web.Response:
"""Get the Civitai URL for a LoRA file"""
try:
lora_name = request.query.get("name")
lora_name = request.query.get('name')
if not lora_name:
return web.Response(text="Lora file name is required", status=400)
return web.Response(text='Lora file name is required', status=400)
result = await self.service.get_lora_civitai_url(lora_name)
if result["civitai_url"]:
return web.json_response({"success": True, **result})
if result['civitai_url']:
return web.json_response({
'success': True,
**result
})
else:
return web.json_response(
{
"success": False,
"error": "No Civitai data found for the specified lora",
},
status=404,
)
return web.json_response({
'success': False,
'error': 'No Civitai data found for the specified lora'
}, status=404)
except Exception as e:
logger.error(f"Error getting lora Civitai URL: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
async def get_random_loras(self, request: web.Request) -> web.Response:
"""Get random LoRAs based on filters and strength ranges"""
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
# CivitAI integration methods
async def get_civitai_versions_lora(self, request: web.Request) -> web.Response:
"""Get available versions for a Civitai LoRA model with local availability info"""
try:
json_data = await request.json()
model_id = request.match_info['model_id']
response = await self.civitai_client.get_model_versions(model_id)
if not response or not response.get('modelVersions'):
return web.Response(status=404, text="Model not found")
versions = response.get('modelVersions', [])
model_type = response.get('type', '')
# Check model type - should be LORA, LoCon, or DORA
from ..utils.constants import VALID_LORA_TYPES
if model_type.lower() not in VALID_LORA_TYPES:
return web.json_response({
'error': f"Model type mismatch. Expected LORA or LoCon, got {model_type}"
}, status=400)
# Check local availability for each version
for version in versions:
# Find the model file (type="Model") in the files list
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model'), None)
if model_file:
sha256 = model_file.get('hashes', {}).get('SHA256')
if sha256:
# Set existsLocally and localPath at the version level
version['existsLocally'] = self.service.has_hash(sha256)
if version['existsLocally']:
version['localPath'] = self.service.get_path_by_hash(sha256)
# Also set the model file size at the version level for easier access
version['modelSizeKB'] = model_file.get('sizeKB')
else:
# No model file found in this version
version['existsLocally'] = False
return web.json_response(versions)
except Exception as e:
logger.error(f"Error fetching LoRA model versions: {e}")
return web.Response(status=500, text=str(e))
async def get_civitai_model_by_version(self, request: web.Request) -> web.Response:
"""Get CivitAI model details by model version ID"""
try:
model_version_id = request.match_info.get('modelVersionId')
# Get model details from Civitai API
model, error_msg = await self.civitai_client.get_model_version_info(model_version_id)
if not model:
# Log warning for failed model retrieval
logger.warning(f"Failed to fetch model version {model_version_id}: {error_msg}")
# Determine status code based on error message
status_code = 404 if error_msg and "not found" in error_msg.lower() else 500
return web.json_response({
"success": False,
"error": error_msg or "Failed to fetch model information"
}, status=status_code)
return web.json_response(model)
except Exception as e:
logger.error(f"Error fetching model details: {e}")
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def get_civitai_model_by_hash(self, request: web.Request) -> web.Response:
"""Get CivitAI model details by hash"""
try:
hash = request.match_info.get('hash')
model = await self.civitai_client.get_model_by_hash(hash)
return web.json_response(model)
except Exception as e:
logger.error(f"Error fetching model details by hash: {e}")
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
# Model management methods
async def move_model(self, request: web.Request) -> web.Response:
"""Handle model move request"""
try:
data = await request.json()
file_path = data.get('file_path') # full path of the model file
target_path = data.get('target_path') # folder path to move the model to
if not file_path or not target_path:
return web.Response(text='File path and target path are required', status=400)
# Parse parameters
count = json_data.get("count", 5)
count_min = json_data.get("count_min")
count_max = json_data.get("count_max")
model_strength_min = float(json_data.get("model_strength_min", 0.0))
model_strength_max = float(json_data.get("model_strength_max", 1.0))
use_same_clip_strength = json_data.get("use_same_clip_strength", True)
clip_strength_min = float(json_data.get("clip_strength_min", 0.0))
clip_strength_max = float(json_data.get("clip_strength_max", 1.0))
locked_loras = json_data.get("locked_loras", [])
pool_config = json_data.get("pool_config")
use_recommended_strength = json_data.get("use_recommended_strength", False)
recommended_strength_scale_min = float(
json_data.get("recommended_strength_scale_min", 0.5)
)
recommended_strength_scale_max = float(
json_data.get("recommended_strength_scale_max", 1.0)
)
# Check if source and destination are the same
import os
source_dir = os.path.dirname(file_path)
if os.path.normpath(source_dir) == os.path.normpath(target_path):
logger.info(f"Source and target directories are the same: {source_dir}")
return web.json_response({'success': True, 'message': 'Source and target directories are the same'})
# Determine target count
if count_min is not None and count_max is not None:
import random
# Check if target file already exists
file_name = os.path.basename(file_path)
target_file_path = os.path.join(target_path, file_name).replace(os.sep, '/')
if os.path.exists(target_file_path):
return web.json_response({
'success': False,
'error': f"Target file already exists: {target_file_path}"
}, status=409) # 409 Conflict
target_count = random.randint(count_min, count_max)
# Call scanner to handle the move operation
success = await self.service.scanner.move_model(file_path, target_path)
if success:
return web.json_response({'success': True, 'new_file_path': target_file_path})
else:
target_count = count
# Validate parameters
if target_count < 1 or target_count > 100:
return web.json_response(
{"success": False, "error": "Count must be between 1 and 100"},
status=400,
)
if model_strength_min < -10 or model_strength_max > 10:
return web.json_response(
{
"success": False,
"error": "Model strength must be between -10 and 10",
},
status=400,
)
# Get random LoRAs from service
result_loras = await self.service.get_random_loras(
count=target_count,
model_strength_min=model_strength_min,
model_strength_max=model_strength_max,
use_same_clip_strength=use_same_clip_strength,
clip_strength_min=clip_strength_min,
clip_strength_max=clip_strength_max,
locked_loras=locked_loras,
pool_config=pool_config,
use_recommended_strength=use_recommended_strength,
recommended_strength_scale_min=recommended_strength_scale_min,
recommended_strength_scale_max=recommended_strength_scale_max,
)
return web.json_response(
{"success": True, "loras": result_loras, "count": len(result_loras)}
)
except ValueError as e:
logger.error(f"Invalid parameter for random LoRAs: {e}")
return web.json_response({"success": False, "error": str(e)}, status=400)
return web.Response(text='Failed to move model', status=500)
except Exception as e:
logger.error(f"Error getting random LoRAs: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
async def get_cycler_list(self, request: web.Request) -> web.Response:
"""Get filtered and sorted LoRA list for cycler widget"""
logger.error(f"Error moving model: {e}", exc_info=True)
return web.Response(text=str(e), status=500)
async def move_models_bulk(self, request: web.Request) -> web.Response:
"""Handle bulk model move request"""
try:
json_data = await request.json()
# Parse parameters
pool_config = json_data.get("pool_config")
sort_by = json_data.get("sort_by", "filename")
# Get cycler list from service
lora_list = await self.service.get_cycler_list(
pool_config=pool_config,
sort_by=sort_by
)
return web.json_response(
{"success": True, "loras": lora_list, "count": len(lora_list)}
)
data = await request.json()
file_paths = data.get('file_paths', []) # list of full paths of the model files
target_path = data.get('target_path') # folder path to move the models to
if not file_paths or not target_path:
return web.Response(text='File paths and target path are required', status=400)
results = []
import os
for file_path in file_paths:
# Check if source and destination are the same
source_dir = os.path.dirname(file_path)
if os.path.normpath(source_dir) == os.path.normpath(target_path):
results.append({
"path": file_path,
"success": True,
"message": "Source and target directories are the same"
})
continue
# Check if target file already exists
file_name = os.path.basename(file_path)
target_file_path = os.path.join(target_path, file_name).replace(os.sep, '/')
if os.path.exists(target_file_path):
results.append({
"path": file_path,
"success": False,
"message": f"Target file already exists: {target_file_path}"
})
continue
# Try to move the model
success = await self.service.scanner.move_model(file_path, target_path)
results.append({
"path": file_path,
"success": success,
"message": "Success" if success else "Failed to move model"
})
# Count successes and failures
success_count = sum(1 for r in results if r["success"])
failure_count = len(results) - success_count
return web.json_response({
'success': True,
'message': f'Moved {success_count} of {len(file_paths)} models',
'results': results,
'success_count': success_count,
'failure_count': failure_count
})
except Exception as e:
logger.error(f"Error getting cycler list: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
logger.error(f"Error moving models in bulk: {e}", exc_info=True)
return web.Response(text=str(e), status=500)
async def get_lora_model_description(self, request: web.Request) -> web.Response:
"""Get model description for a Lora model"""
try:
# Get parameters
model_id = request.query.get('model_id')
file_path = request.query.get('file_path')
if not model_id:
return web.json_response({
'success': False,
'error': 'Model ID is required'
}, status=400)
# Check if we already have the description stored in metadata
description = None
tags = []
creator = {}
if file_path:
import os
from ..utils.metadata_manager import MetadataManager
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
description = metadata.get('modelDescription')
tags = metadata.get('tags', [])
creator = metadata.get('creator', {})
# If description is not in metadata, fetch from CivitAI
if not description:
logger.info(f"Fetching model metadata for model ID: {model_id}")
model_metadata, _ = await self.civitai_client.get_model_metadata(model_id)
if model_metadata:
description = model_metadata.get('description')
tags = model_metadata.get('tags', [])
creator = model_metadata.get('creator', {})
# Save the metadata to file if we have a file path and got metadata
if file_path:
try:
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
metadata['modelDescription'] = description
metadata['tags'] = tags
# Ensure the civitai dict exists
if 'civitai' not in metadata:
metadata['civitai'] = {}
# Store creator in the civitai nested structure
metadata['civitai']['creator'] = creator
await MetadataManager.save_metadata(file_path, metadata, True)
except Exception as e:
logger.error(f"Error saving model metadata: {e}")
return web.json_response({
'success': True,
'description': description or "<p>No model description available.</p>",
'tags': tags,
'creator': creator
})
except Exception as e:
logger.error(f"Error getting model metadata: {e}")
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_trigger_words(self, request: web.Request) -> web.Response:
"""Get trigger words for specified LoRA models"""
try:
json_data = await request.json()
lora_names = json_data.get("lora_names", [])
node_ids = json_data.get("node_ids", [])
all_trigger_words = []
for lora_name in lora_names:
_, trigger_words = get_lora_info(lora_name)
all_trigger_words.extend(trigger_words)
# Format the trigger words
trigger_words_text = (
",, ".join(all_trigger_words) if all_trigger_words else ""
)
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Send update to all connected trigger word toggle nodes
for entry in node_ids:
node_identifier = entry
graph_identifier = None
if isinstance(entry, dict):
node_identifier = entry.get("node_id")
graph_identifier = entry.get("graph_id")
try:
parsed_node_id = int(node_identifier)
except (TypeError, ValueError):
parsed_node_id = node_identifier
payload = {"id": parsed_node_id, "message": trigger_words_text}
if graph_identifier is not None:
payload["graph_id"] = str(graph_identifier)
PromptServer.instance.send_sync("trigger_word_update", payload)
for node_id in node_ids:
PromptServer.instance.send_sync("trigger_word_update", {
"id": node_id,
"message": trigger_words_text
})
return web.json_response({"success": True})
except Exception as e:
logger.error(f"Error getting trigger words: {e}")
return web.json_response({"success": False, "error": str(e)}, status=500)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
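Note that the new loop accepts two shapes for `node_ids` entries, where the old code assumed bare ids. An illustrative body for the ComfyUI integration endpoint:

```python
# Illustrative request body for POST /api/lm/loras/get_trigger_words
payload = {
    "lora_names": ["characterA", "styleB"],
    "node_ids": [
        42,                                     # legacy bare node id
        {"node_id": "7", "graph_id": "sub-1"},  # new shape; id coerced to int,
                                                # graph_id forwarded as a string
    ],
}
```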

View File

@@ -1,74 +0,0 @@
"""Route registrar for miscellaneous endpoints.
This module mirrors the model route registrar architecture so that
miscellaneous endpoints share a consistent registration flow.
"""
from dataclasses import dataclass
from typing import Callable, Iterable, Mapping
from aiohttp import web
@dataclass(frozen=True)
class RouteDefinition:
"""Declarative definition for a HTTP route."""
method: str
path: str
handler_name: str
MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("GET", "/api/lm/settings", "get_settings"),
RouteDefinition("POST", "/api/lm/settings", "update_settings"),
RouteDefinition("GET", "/api/lm/priority-tags", "get_priority_tags"),
RouteDefinition("GET", "/api/lm/settings/libraries", "get_settings_libraries"),
RouteDefinition("POST", "/api/lm/settings/libraries/activate", "activate_library"),
RouteDefinition("GET", "/api/lm/health-check", "health_check"),
RouteDefinition("POST", "/api/lm/open-file-location", "open_file_location"),
RouteDefinition("POST", "/api/lm/update-usage-stats", "update_usage_stats"),
RouteDefinition("GET", "/api/lm/get-usage-stats", "get_usage_stats"),
RouteDefinition("POST", "/api/lm/update-lora-code", "update_lora_code"),
RouteDefinition("GET", "/api/lm/trained-words", "get_trained_words"),
RouteDefinition("GET", "/api/lm/model-example-files", "get_model_example_files"),
RouteDefinition("POST", "/api/lm/register-nodes", "register_nodes"),
RouteDefinition("POST", "/api/lm/update-node-widget", "update_node_widget"),
RouteDefinition("GET", "/api/lm/get-registry", "get_registry"),
RouteDefinition("GET", "/api/lm/check-model-exists", "check_model_exists"),
RouteDefinition("GET", "/api/lm/civitai/user-models", "get_civitai_user_models"),
RouteDefinition("POST", "/api/lm/download-metadata-archive", "download_metadata_archive"),
RouteDefinition("POST", "/api/lm/remove-metadata-archive", "remove_metadata_archive"),
RouteDefinition("GET", "/api/lm/metadata-archive-status", "get_metadata_archive_status"),
RouteDefinition("GET", "/api/lm/model-versions-status", "get_model_versions_status"),
RouteDefinition("POST", "/api/lm/settings/open-location", "open_settings_location"),
RouteDefinition("GET", "/api/lm/custom-words/search", "search_custom_words"),
)
class MiscRouteRegistrar:
"""Bind miscellaneous route definitions to an aiohttp router."""
_METHOD_MAP = {
"GET": "add_get",
"POST": "add_post",
"PUT": "add_put",
"DELETE": "add_delete",
}
def __init__(self, app: web.Application) -> None:
self._app = app
def register_routes(
self,
handler_lookup: Mapping[str, Callable[[web.Request], object]],
*,
definitions: Iterable[RouteDefinition] = MISC_ROUTE_DEFINITIONS,
) -> None:
for definition in definitions:
self._bind(definition.method, definition.path, handler_lookup[definition.handler_name])
def _bind(self, method: str, path: str, handler: Callable) -> None:
add_method_name = self._METHOD_MAP[method.upper()]
add_method = getattr(self._app.router, add_method_name)
add_method(path, handler)

View File

@@ -1,138 +1,707 @@
"""Route controller for miscellaneous endpoints."""
from __future__ import annotations
import json
import logging
import os
from typing import Awaitable, Callable, Mapping
import sys
import threading
import asyncio
from server import PromptServer # type: ignore
from aiohttp import web
from server import PromptServer # type: ignore
from ..services.metadata_service import (
get_metadata_archive_manager,
get_metadata_provider,
update_metadata_providers,
)
from ..services.settings_manager import get_settings_manager
from ..services.downloader import get_downloader
from ..services.settings_manager import settings
from ..utils.usage_stats import UsageStats
from .handlers.misc_handlers import (
CustomWordsHandler,
FileSystemHandler,
HealthCheckHandler,
LoraCodeHandler,
MetadataArchiveHandler,
MiscHandlerSet,
ModelExampleFilesHandler,
ModelLibraryHandler,
NodeRegistry,
NodeRegistryHandler,
SettingsHandler,
TrainedWordsHandler,
UsageStatsHandler,
build_service_registry_adapter,
)
from .misc_route_registrar import MiscRouteRegistrar
from ..utils.lora_metadata import extract_trained_words
from ..config import config
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS, NODE_TYPES, DEFAULT_NODE_COLOR
from ..services.service_registry import ServiceRegistry
import re
logger = logging.getLogger(__name__)
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get(
"HF_HUB_DISABLE_TELEMETRY", "0"
) == "0"
standalone_mode = 'nodes' not in sys.modules
# Node registry for tracking active workflow nodes
class NodeRegistry:
"""Thread-safe registry for tracking Lora nodes in active workflows"""
def __init__(self):
self._lock = threading.RLock()
self._nodes = {} # node_id -> node_info
self._registry_updated = threading.Event()
def register_nodes(self, nodes):
"""Register multiple nodes at once, replacing existing registry"""
with self._lock:
# Clear existing registry
self._nodes.clear()
# Register all new nodes
for node in nodes:
node_id = node['node_id']
node_type = node.get('type', '')
# Convert node type name to integer
type_id = NODE_TYPES.get(node_type, 0) # 0 for unknown types
# Handle null bgcolor with default color
bgcolor = node.get('bgcolor')
if bgcolor is None:
bgcolor = DEFAULT_NODE_COLOR
self._nodes[node_id] = {
'id': node_id,
'bgcolor': bgcolor,
'title': node.get('title'),
'type': type_id,
'type_name': node_type
}
logger.debug(f"Registered {len(nodes)} nodes in registry")
# Signal that registry has been updated
self._registry_updated.set()
def get_registry(self):
"""Get current registry information"""
with self._lock:
return {
'nodes': dict(self._nodes), # Return a copy
'node_count': len(self._nodes)
}
def clear_registry(self):
"""Clear the entire registry"""
with self._lock:
self._nodes.clear()
logger.info("Node registry cleared")
def wait_for_update(self, timeout=1.0):
"""Wait for registry update with timeout"""
self._registry_updated.clear()
return self._registry_updated.wait(timeout)
# Global registry instance
node_registry = NodeRegistry()
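A minimal usage sketch of the registry (the node dict is illustrative; per the code above, an unknown `type` maps to id 0 and a null `bgcolor` falls back to `DEFAULT_NODE_COLOR`):

```python
node_registry.register_nodes([
    {"node_id": 12, "type": "Lora Loader", "title": "Main stack", "bgcolor": None},
])
snapshot = node_registry.get_registry()
assert snapshot["node_count"] == 1
# Default color applied (assuming DEFAULT_NODE_COLOR is a concrete value):
assert snapshot["nodes"][12]["bgcolor"] is not None
```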
class MiscRoutes:
"""Route controller that mirrors the model route architecture."""
"""Miscellaneous routes for various utility functions"""
@staticmethod
def setup_routes(app):
"""Register miscellaneous routes"""
app.router.add_post('/api/settings', MiscRoutes.update_settings)
# Add new route for clearing cache
app.router.add_post('/api/clear-cache', MiscRoutes.clear_cache)
def __init__(
self,
*,
settings_service=None,
usage_stats_factory: Callable[[], UsageStats] = UsageStats,
prompt_server: type[PromptServer] = PromptServer,
service_registry_adapter=build_service_registry_adapter(),
metadata_provider_factory=get_metadata_provider,
metadata_archive_manager_factory=get_metadata_archive_manager,
metadata_provider_updater=update_metadata_providers,
downloader_factory=get_downloader,
registrar_factory=MiscRouteRegistrar,
handler_set_factory=MiscHandlerSet,
node_registry: NodeRegistry | None = None,
standalone_mode_flag: bool = standalone_mode,
) -> None:
self._settings = settings_service or get_settings_manager()
self._usage_stats_factory = usage_stats_factory
self._prompt_server = prompt_server
self._service_registry_adapter = service_registry_adapter
self._metadata_provider_factory = metadata_provider_factory
self._metadata_archive_manager_factory = metadata_archive_manager_factory
self._metadata_provider_updater = metadata_provider_updater
self._downloader_factory = downloader_factory
self._registrar_factory = registrar_factory
self._handler_set_factory = handler_set_factory
self._node_registry = node_registry or NodeRegistry()
self._standalone_mode = standalone_mode_flag
app.router.add_get('/api/health-check', lambda request: web.json_response({'status': 'ok'}))
self._handler_mapping: Mapping[str, Callable[[web.Request], Awaitable[web.StreamResponse]]] | None = None
# Usage stats routes
app.router.add_post('/api/update-usage-stats', MiscRoutes.update_usage_stats)
app.router.add_get('/api/get-usage-stats', MiscRoutes.get_usage_stats)
# Lora code update endpoint
app.router.add_post('/api/update-lora-code', MiscRoutes.update_lora_code)
# Add new route for getting trained words
app.router.add_get('/api/trained-words', MiscRoutes.get_trained_words)
# Add new route for getting model example files
app.router.add_get('/api/model-example-files', MiscRoutes.get_model_example_files)
# Node registry endpoints
app.router.add_post('/api/register-nodes', MiscRoutes.register_nodes)
app.router.add_get('/api/get-registry', MiscRoutes.get_registry)
# Add new route for checking if a model exists in the library
app.router.add_get('/api/check-model-exists', MiscRoutes.check_model_exists)
@staticmethod
def setup_routes(app: web.Application) -> None:
"""Entry point used by the application bootstrap."""
controller = MiscRoutes()
controller.bind(app)
async def clear_cache(request):
"""Clear all cache files from the cache folder"""
try:
# Get the cache folder path (relative to project directory)
project_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
cache_folder = os.path.join(project_dir, 'cache')
# Check if cache folder exists
if not os.path.exists(cache_folder):
logger.info("Cache folder does not exist, nothing to clear")
return web.json_response({'success': True, 'message': 'No cache folder found'})
# Get list of cache files before deleting for reporting
cache_files = [f for f in os.listdir(cache_folder) if os.path.isfile(os.path.join(cache_folder, f))]
deleted_files = []
# Delete each .msgpack file in the cache folder
for filename in cache_files:
if filename.endswith('.msgpack'):
file_path = os.path.join(cache_folder, filename)
try:
os.remove(file_path)
deleted_files.append(filename)
logger.info(f"Deleted cache file: {filename}")
except Exception as e:
logger.error(f"Failed to delete {filename}: {e}")
return web.json_response({
'success': False,
'error': f"Failed to delete {filename}: {str(e)}"
}, status=500)
return web.json_response({
'success': True,
'message': f"Successfully cleared {len(deleted_files)} cache files",
'deleted_files': deleted_files
})
except Exception as e:
logger.error(f"Error clearing cache files: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
def bind(self, app: web.Application) -> None:
registrar = self._registrar_factory(app)
registrar.register_routes(self._ensure_handler_mapping())
@staticmethod
async def update_settings(request):
"""Update application settings"""
try:
data = await request.json()
# Validate and update settings
for key, value in data.items():
# Special handling for example_images_path - verify path exists
if key == 'example_images_path' and value:
if not os.path.exists(value):
return web.json_response({
'success': False,
'error': f"Path does not exist: {value}"
})
# Path changed - server restart required for new path to take effect
old_path = settings.get('example_images_path')
if old_path != value:
logger.info(f"Example images path changed to {value} - server restart required")
# Special handling for base_model_path_mappings - parse JSON string
if key == 'base_model_path_mappings' and value:
try:
value = json.loads(value)
except json.JSONDecodeError:
return web.json_response({
'success': False,
'error': f"Invalid JSON format for base_model_path_mappings: {value}"
})
# Save to settings
settings.set(key, value)
return web.json_response({'success': True})
except Exception as e:
logger.error(f"Error updating settings: {e}", exc_info=True)
return web.Response(status=500, text=str(e))
@staticmethod
async def update_usage_stats(request):
"""
Update usage statistics based on a prompt_id
Expects a JSON body with:
{
"prompt_id": "string"
}
"""
try:
# Parse the request body
data = await request.json()
prompt_id = data.get('prompt_id')
if not prompt_id:
return web.json_response({
'success': False,
'error': 'Missing prompt_id'
}, status=400)
# Call the UsageStats to process this prompt_id synchronously
usage_stats = UsageStats()
await usage_stats.process_execution(prompt_id)
return web.json_response({
'success': True
})
except Exception as e:
logger.error(f"Failed to update usage stats: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def get_usage_stats(request):
"""Get current usage statistics"""
try:
usage_stats = UsageStats()
stats = await usage_stats.get_stats()
# Add version information to help clients handle format changes
stats_response = {
'success': True,
'data': stats,
'format_version': 2 # Indicate this is the new format with history
}
return web.json_response(stats_response)
except Exception as e:
logger.error(f"Failed to get usage stats: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def update_lora_code(request):
"""
Update Lora code in ComfyUI nodes
Expects a JSON body with:
{
"node_ids": [123, 456], # Optional - List of node IDs to update (for browser mode)
"lora_code": "<lora:modelname:1.0>", # The Lora code to send
"mode": "append" # or "replace" - whether to append or replace existing code
}
"""
try:
# Parse the request body
data = await request.json()
node_ids = data.get('node_ids')
lora_code = data.get('lora_code', '')
mode = data.get('mode', 'append')
if not lora_code:
return web.json_response({
'success': False,
'error': 'Missing lora_code parameter'
}, status=400)
results = []
# Desktop mode: no specific node_ids provided
if node_ids is None:
try:
# Send broadcast message with id=-1 to all Lora Loader nodes
PromptServer.instance.send_sync("lora_code_update", {
"id": -1,
"lora_code": lora_code,
"mode": mode
})
results.append({
'node_id': 'broadcast',
'success': True
})
except Exception as e:
logger.error(f"Error broadcasting lora code: {e}")
results.append({
'node_id': 'broadcast',
'success': False,
'error': str(e)
})
else:
# Browser mode: send to specific nodes
for node_id in node_ids:
try:
# Send the message to the frontend
PromptServer.instance.send_sync("lora_code_update", {
"id": node_id,
"lora_code": lora_code,
"mode": mode
})
results.append({
'node_id': node_id,
'success': True
})
except Exception as e:
logger.error(f"Error sending lora code to node {node_id}: {e}")
results.append({
'node_id': node_id,
'success': False,
'error': str(e)
})
return web.json_response({
'success': True,
'results': results
})
except Exception as e:
logger.error(f"Failed to update lora code: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
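# Illustrative bodies for both modes (a sketch; "foo" is a placeholder model
# name):
#   Browser mode: {"node_ids": [12, 17], "lora_code": "<lora:foo:0.8>", "mode": "append"}
#   Desktop mode: {"lora_code": "<lora:foo:0.8>", "mode": "replace"}
# Omitting node_ids takes the broadcast path, which emits a single
# "lora_code_update" message with id=-1 so every Lora Loader node applies it.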
def _ensure_handler_mapping(self) -> Mapping[str, Callable[[web.Request], Awaitable[web.StreamResponse]]]:
if self._handler_mapping is None:
handler_set = self._create_handler_set()
self._handler_mapping = handler_set.to_route_mapping()
return self._handler_mapping
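# A sketch of how the cached mapping might be consumed when wiring routes
# (the aiohttp app, the "routes" instance name, and the HTTP method are
# assumptions; the actual wiring code is not part of this excerpt):
#   for path, handler in routes._ensure_handler_mapping().items():
#       app.router.add_route("*", path, handler)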
@staticmethod
async def get_trained_words(request):
"""
Get trained words from a safetensors file, sorted by frequency
Expects a query parameter:
file_path: Path to the safetensors file
"""
try:
# Get file path from query parameters
file_path = request.query.get('file_path')
if not file_path:
return web.json_response({
'success': False,
'error': 'Missing file_path parameter'
}, status=400)
# Check if file exists and is a safetensors file
if not os.path.exists(file_path):
return web.json_response({
'success': False,
'error': f"File not found: {file_path}"
}, status=404)
if not file_path.lower().endswith('.safetensors'):
return web.json_response({
'success': False,
'error': 'File is not a safetensors file'
}, status=400)
# Extract trained words and class_tokens
trained_words, class_tokens = await extract_trained_words(file_path)
# Return result with both trained words and class tokens
return web.json_response({
'success': True,
'trained_words': trained_words,
'class_tokens': class_tokens
})
except Exception as e:
logger.error(f"Failed to get trained words: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
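# Illustrative query (a sketch; the route path and file path are placeholders):
#   GET <trained-words route>?file_path=/models/loras/foo.safetensors
# extract_trained_words() reads the safetensors metadata and yields the
# frequency-sorted trained words plus any class tokens.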
def _create_handler_set(self) -> MiscHandlerSet:
health = HealthCheckHandler()
settings_handler = SettingsHandler(
settings_service=self._settings,
metadata_provider_updater=self._metadata_provider_updater,
downloader_factory=self._downloader_factory,
)
usage_stats = UsageStatsHandler(usage_stats_factory=self._usage_stats_factory)
lora_code = LoraCodeHandler(prompt_server=self._prompt_server)
trained_words = TrainedWordsHandler()
model_examples = ModelExampleFilesHandler()
metadata_archive = MetadataArchiveHandler(
metadata_archive_manager_factory=self._metadata_archive_manager_factory,
settings_service=self._settings,
metadata_provider_updater=self._metadata_provider_updater,
)
filesystem = FileSystemHandler(settings_service=self._settings)
node_registry_handler = NodeRegistryHandler(
node_registry=self._node_registry,
prompt_server=self._prompt_server,
standalone_mode=self._standalone_mode,
)
model_library = ModelLibraryHandler(
service_registry=self._service_registry_adapter,
metadata_provider_factory=self._metadata_provider_factory,
)
custom_words = CustomWordsHandler()
return self._handler_set_factory(
health=health,
settings=settings_handler,
usage_stats=usage_stats,
lora_code=lora_code,
trained_words=trained_words,
model_examples=model_examples,
node_registry=node_registry_handler,
model_library=model_library,
metadata_archive=metadata_archive,
filesystem=filesystem,
custom_words=custom_words,
)
@staticmethod
async def get_model_example_files(request):
"""
Get list of example image files for a specific model based on file path
Expects:
- file_path in query parameters
Returns:
- List of image files with their paths as static URLs
"""
try:
# Get the model file path from query parameters
file_path = request.query.get('file_path')
if not file_path:
return web.json_response({
'success': False,
'error': 'Missing file_path parameter'
}, status=400)
# Extract directory and base filename
model_dir = os.path.dirname(file_path)
model_filename = os.path.basename(file_path)
model_name = os.path.splitext(model_filename)[0]
# Check if the directory exists
if not os.path.exists(model_dir):
return web.json_response({
'success': False,
'error': 'Model directory not found',
'files': []
}, status=404)
# Look for files matching the pattern modelname.example.<index>.<ext>
files = []
pattern = f"{model_name}.example."
for file in os.listdir(model_dir):
file_lower = file.lower()
if file_lower.startswith(pattern.lower()):
file_full_path = os.path.join(model_dir, file)
if os.path.isfile(file_full_path):
# Check if the file is a supported media file
file_ext = os.path.splitext(file)[1].lower()
if (file_ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or
file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']):
# Extract the index from the filename
try:
# Extract the part after '.example.' and before file extension
index_part = file[len(pattern):].split('.')[0]
# Try to parse it as an integer
index = int(index_part)
except (ValueError, IndexError):
# If we can't parse the index, use infinity to sort at the end
index = float('inf')
# Convert file path to static URL
static_url = config.get_preview_static_url(file_full_path)
files.append({
'name': file,
'path': static_url,
'extension': file_ext,
'is_video': file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos'],
'index': index
})
# Sort files by their index for consistent ordering
files.sort(key=lambda x: x['index'])
# Remove the index field as it's only used for sorting
for file in files:
file.pop('index', None)
return web.json_response({
'success': True,
'files': files
})
except Exception as e:
logger.error(f"Failed to get model example files: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
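# Illustrative layout on disk (file names are hypothetical): given
# /models/loras/foo.safetensors, this handler would collect foo.example.0.png,
# foo.example.1.mp4, ..., sort them by the numeric index, and return each one
# as a static preview URL produced by config.get_preview_static_url().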
@staticmethod
async def register_nodes(request):
"""
Register multiple Lora nodes at once
Expects a JSON body with:
{
"nodes": [
{
"node_id": 123,
"bgcolor": "#535",
"title": "Lora Loader (LoraManager)"
},
...
]
}
"""
try:
data = await request.json()
# Validate required fields
nodes = data.get('nodes', [])
if not isinstance(nodes, list):
return web.json_response({
'success': False,
'error': 'nodes must be a list'
}, status=400)
# Validate each node
for i, node in enumerate(nodes):
if not isinstance(node, dict):
return web.json_response({
'success': False,
'error': f'Node {i} must be an object'
}, status=400)
node_id = node.get('node_id')
if node_id is None:
return web.json_response({
'success': False,
'error': f'Node {i} missing node_id parameter'
}, status=400)
# Validate node_id is an integer
try:
node['node_id'] = int(node_id)
except (ValueError, TypeError):
return web.json_response({
'success': False,
'error': f'Node {i} node_id must be an integer'
}, status=400)
# Register all nodes
node_registry.register_nodes(nodes)
return web.json_response({
'success': True,
'message': f'{len(nodes)} nodes registered successfully'
})
except Exception as e:
logger.error(f"Failed to register nodes: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
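# Illustrative body (mirrors the docstring above; bgcolor and title are
# passed through as-is, while node_id is coerced to int before registration):
#   {"nodes": [{"node_id": 123, "bgcolor": "#535", "title": "Lora Loader (LoraManager)"}]}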
@staticmethod
async def get_registry(request):
"""Get current node registry information by refreshing from frontend"""
try:
# Check if running in standalone mode
if standalone_mode:
logger.warning("Registry refresh not available in standalone mode")
return web.json_response({
'success': False,
'error': 'Standalone Mode Active',
'message': 'Cannot interact with ComfyUI in standalone mode.'
}, status=503)
# Send message to frontend to refresh registry
try:
PromptServer.instance.send_sync("lora_registry_refresh", {})
logger.debug("Sent registry refresh request to frontend")
except Exception as e:
logger.error(f"Failed to send registry refresh message: {e}")
return web.json_response({
'success': False,
'error': 'Communication Error',
'message': f'Failed to communicate with ComfyUI frontend: {str(e)}'
}, status=500)
# Wait for registry update with timeout
def wait_for_registry():
return node_registry.wait_for_update(timeout=1.0)
# Run the wait in a thread to avoid blocking the event loop
loop = asyncio.get_running_loop()
registry_updated = await loop.run_in_executor(None, wait_for_registry)
if not registry_updated:
logger.warning("Registry refresh timeout after 1 second")
return web.json_response({
'success': False,
'error': 'Timeout Error',
'message': 'Registry refresh timeout - ComfyUI frontend may not be responsive'
}, status=408)
# Get updated registry
registry_info = node_registry.get_registry()
return web.json_response({
'success': True,
'data': registry_info
})
except Exception as e:
logger.error(f"Failed to get registry: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': 'Internal Error',
'message': str(e)
}, status=500)
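# The refresh handshake above: send "lora_registry_refresh" to the frontend,
# then wait on node_registry.wait_for_update() in a worker thread so the
# event loop stays responsive; a 1-second timeout maps to HTTP 408.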
@staticmethod
async def check_model_exists(request):
"""
Check if a model with specified modelId and optionally modelVersionId exists in the library
Expects query parameters:
- modelId: int - Civitai model ID (required)
- modelVersionId: int - Civitai model version ID (optional)
Returns:
- If modelVersionId is provided: JSON with a boolean 'exists' field
- If modelVersionId is not provided: JSON with a list of modelVersionIds that exist in the library
"""
try:
# Get the modelId and modelVersionId from query parameters
model_id_str = request.query.get('modelId')
model_version_id_str = request.query.get('modelVersionId')
# Validate modelId parameter (required)
if not model_id_str:
return web.json_response({
'success': False,
'error': 'Missing required parameter: modelId'
}, status=400)
try:
# Convert modelId to integer
model_id = int(model_id_str)
except ValueError:
return web.json_response({
'success': False,
'error': 'Parameter modelId must be an integer'
}, status=400)
# Get all scanners
lora_scanner = await ServiceRegistry.get_lora_scanner()
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
# If modelVersionId is provided, check for specific version
if model_version_id_str:
try:
model_version_id = int(model_version_id_str)
except ValueError:
return web.json_response({
'success': False,
'error': 'Parameter modelVersionId must be an integer'
}, status=400)
# Check lora scanner first
exists = False
model_type = None
if await lora_scanner.check_model_version_exists(model_id, model_version_id):
exists = True
model_type = 'lora'
elif checkpoint_scanner and await checkpoint_scanner.check_model_version_exists(model_id, model_version_id):
exists = True
model_type = 'checkpoint'
elif embedding_scanner and await embedding_scanner.check_model_version_exists(model_id, model_version_id):
exists = True
model_type = 'embedding'
return web.json_response({
'success': True,
'exists': exists,
'modelType': model_type if exists else None
})
# If modelVersionId is not provided, return all version IDs for the model
else:
lora_versions = await lora_scanner.get_model_versions_by_id(model_id)
checkpoint_versions = []
embedding_versions = []
# Prefer lora first, then checkpoint, then embedding
if not lora_versions and checkpoint_scanner:
checkpoint_versions = await checkpoint_scanner.get_model_versions_by_id(model_id)
if not lora_versions and not checkpoint_versions and embedding_scanner:
embedding_versions = await embedding_scanner.get_model_versions_by_id(model_id)
model_type = None
versions = []
if lora_versions:
model_type = 'lora'
versions = lora_versions
elif checkpoint_versions:
model_type = 'checkpoint'
versions = checkpoint_versions
elif embedding_versions:
model_type = 'embedding'
versions = embedding_versions
return web.json_response({
'success': True,
'modelId': model_id,
'modelType': model_type,
'versions': versions
})
except Exception as e:
logger.error(f"Failed to check model existence: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
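# Illustrative calls (a sketch; the IDs are placeholders and the route path
# is registered elsewhere):
#   GET <check-model-exists route>?modelId=4384&modelVersionId=128713
#     -> {"success": true, "exists": true, "modelType": "lora"}
#   GET <check-model-exists route>?modelId=4384
#     -> {"success": true, "modelId": 4384, "modelType": "lora", "versions": [...]}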

__all__ = ["MiscRoutes"]