Compare commits


35 Commits

Author SHA1 Message Date
Will Miao
36e3e62e70 feat: add filter presets and update version to v0.9.15
- Added filter presets feature allowing users to save and quickly switch between filter combinations
- Fixed various bugs to improve overall stability
- Updated project version from 0.9.14 to 0.9.15 in pyproject.toml
2026-02-04 09:12:37 +08:00
Will Miao
7bcf4e4491 feat(config): discover deep symlinks dynamically when accessing previews 2026-02-04 00:16:59 +08:00
Will Miao
c12aefa82a fix(recipes): detect duplicates for remote imports using modelVersionId and Civitai URL, #750
- Use modelVersionId as fallback for all loras in fingerprint calculation (not just deleted)
- Add URL-based duplicate detection using source_path field
- Combine both fingerprint and URL-based duplicate detection in API response
- Fix _download_remote_media return type and unbound variable issue
2026-02-03 21:32:15 +08:00
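The fingerprint fallback described in that commit could be sketched as follows. This is an illustrative sketch only: the field names (`sha256`, `modelVersionId`) and the dict shape are assumptions, not the project's actual data model.

```python
import hashlib

def lora_fingerprint(lora: dict) -> str:
    # Sketch: prefer the file hash when present, and fall back to the
    # Civitai modelVersionId for every lora (not just deleted ones),
    # so remote imports without a local file still get a stable key.
    key = lora.get("sha256") or str(lora.get("modelVersionId", ""))
    return hashlib.sha256(key.encode("utf-8")).hexdigest()
```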
Will Miao
990a3527e4 feat(ui): improve filter preset delete button visibility and layout
- Hide delete button by default and show on hover for inactive presets
- Show delete button on active presets only when hovering over the preset
- Add ellipsis truncation for long preset names to prevent layout breakage
- Remove checkmark icon from active preset names for cleaner visual design
2026-02-03 20:05:39 +08:00
Will Miao
655d3cab71 fix(config): prioritize checkpoints over unet when paths overlap, #799
When checkpoints and unet folders point to the same physical location
(via symlinks), prioritize checkpoints for backward compatibility.

This prevents the 'Failed to load Checkpoint root' error that users
experience when they have incorrectly configured their ComfyUI paths.

Changes:
- Detect overlapping real paths between checkpoints and unet
- Log warning to inform users of the configuration issue
- Remove overlapping paths from unet_map, keeping checkpoints

Fixes #<issue-number>
2026-02-03 18:27:42 +08:00
Will Miao
358e658459 fix(trigger_word_toggle): add trigger word normalization method
Introduce a new private method `_normalize_trigger_words` to handle consistent splitting and cleaning of trigger word strings. This method splits input by both single and double commas, strips whitespace, and filters out empty strings, returning a set of normalized words. It is now used in `process_trigger_words` to compare trigger word overrides, ensuring accurate detection of changes by comparing normalized sets instead of raw strings.
2026-02-03 15:42:09 +08:00
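The normalization the commit describes — split on single or double commas, strip whitespace, drop empties, return a set — could be sketched like this (a hypothetical standalone version, not the project's actual `_normalize_trigger_words`):

```python
import re

def normalize_trigger_words(raw: str) -> set[str]:
    # Split on one or two consecutive commas, trim whitespace,
    # and filter out empty fragments, yielding a comparable set.
    parts = re.split(r",{1,2}", raw or "")
    return {p.strip() for p in parts if p.strip()}
```

Comparing such sets (rather than raw strings) makes override detection insensitive to spacing and delimiter style.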
Will Miao
f28c32f2b1 feat(lora-cycler): increase repeat input width for better usability
The width of the repeat input field in the LoRA cycler settings view has been increased from 40px to 50px. This change improves usability by providing more space for user input, making the control easier to interact with and reducing visual crowding.
2026-02-03 09:40:55 +08:00
Will Miao
f5dbd6b8e8 fix(metadata): auto-disable archive_db setting when database file is missing 2026-02-03 08:36:27 +08:00
Will Miao
2c026a2646 fix(metadata-sync): persist db_checked flag for deleted models
When a deleted model is checked against the SQLite archive and not found, the `db_checked` flag was set in memory but never saved to disk. This occurred because the save operation was only triggered when `civitai_api_not_found` was True, which is not the case for deleted models (since the CivitAI API is not attempted). As a result, deleted models would be rechecked on every refresh instead of being skipped.

Changes:
- Introduce a `needs_save` flag to track when metadata state is updated
- Save metadata whenever `db_checked` is set to True, regardless of API status
- Ensure `last_checked_at` is set for SQLite-only attempts
- Add regression test to verify the fix
2026-02-03 07:34:41 +08:00
Will Miao
bd83f7520e chore: bump version to 0.9.14 2026-02-02 23:17:35 +08:00
Will Miao
b9a4e7a09b docs(release): add v0.9.14 release notes
- Add LoRA Cycler node with iteration support
- Enhance Prompt node with tag autocomplete (Danbooru + e621)
- Add command system (/char, /artist, /ac, /noac) for tag operations
- Reference Lora Cycler and Lora Manager Basic template workflows
- Bug fixes and stability improvements
2026-02-02 23:09:06 +08:00
Will Miao
c30e57ede8 fix(recipes): add data-folder attribute to RecipeCard for correct drag-drop path calculation 2026-02-02 22:18:13 +08:00
Will Miao
0dba1b336d feat(template): update prompt node usage in basic template workflow 2026-02-02 21:58:51 +08:00
Will Miao
820afe9319 feat(recipe_scanner): ensure cache initialization and improve type safety
- Initialize RecipeCache in scan_recipes to prevent None reference errors
- Import PersistedRecipeData directly instead of using string annotation
- Remove redundant import inside _reconcile_recipe_cache method
2026-02-02 21:57:44 +08:00
Will Miao
5a97f4bc75 feat(recipe_scanner): optimize recipe lookup performance
Refactor recipe lookup logic to improve efficiency from O(n²) to O(n + m):
- Build recipe_by_id dictionary for O(1) recipe ID lookups
- Simplify persisted_by_path construction using recipe_id extraction
- Add fallback lookup by recipe_id when path lookup fails
- Maintain same functionality while reducing computational complexity
2026-02-02 19:37:06 +08:00
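The O(n + m) restructuring described above can be illustrated with a small sketch. The record shapes (`id`, `recipe_id`, `path`) are assumptions for illustration, not the project's actual schema:

```python
def build_lookup(recipes: list[dict], persisted_entries: list[dict]) -> dict:
    # O(n): index recipes by id once, replacing the per-entry linear scan
    recipe_by_id = {r["id"]: r for r in recipes}
    # O(m): resolve each persisted entry with an O(1) dict lookup
    persisted_by_path = {}
    for entry in persisted_entries:
        recipe = recipe_by_id.get(entry.get("recipe_id"))
        if recipe is not None:
            persisted_by_path[entry["path"]] = recipe
    return persisted_by_path
```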
Will Miao
94da404cc5 fix: skip confirmed not-found models in bulk metadata refresh
When enable_metadata_archive_db=True, the previous filter logic would
repeatedly try to fetch metadata for models that were already confirmed
to not exist on CivitAI (from_civitai=False, civitai_deleted=True).

The fix adds a skip condition to exclude models that:
1. Are confirmed not from CivitAI (from_civitai=False)
2. Are marked as deleted/not found on CivitAI (civitai_deleted=True)
3. Either have no archive DB enabled, or have already been checked (db_checked=True)

This prevents unnecessary API calls to CivArchive for user-trained models
or models from non-CivitAI sources.

Fixes repeated "Error fetching version of CivArchive model by hash" logs
for models that will never be found on CivitAI/CivArchive.
2026-02-02 13:27:18 +08:00
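The three-part skip condition above can be expressed as a single predicate. This is a sketch of the logic as described; the metadata keys mirror the flags named in the commit but may not match the project's exact field access:

```python
def should_skip(meta: dict, archive_db_enabled: bool) -> bool:
    # 1 + 2: confirmed absent from CivitAI
    confirmed_not_found = (
        meta.get("from_civitai") is False
        and meta.get("civitai_deleted") is True
    )
    # 3: no archive DB to consult, or the archive was already checked
    already_checked = (not archive_db_enabled) or meta.get("db_checked") is True
    return confirmed_not_found and already_checked
```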
Will Miao
1da476d858 feat(example-images): add check pending models endpoint and improve async handling
- Add /api/example-images/check-pending endpoint to quickly check models needing downloads
- Improve DownloadManager.start_download() to return immediately without blocking
- Add _handle_download_task_done callback for proper error handling and progress saving
- Add check_pending_models() method for lightweight pre-download validation
- Update frontend ExampleImagesManager to use new check-pending endpoint
- Add comprehensive tests for new functionality
2026-02-02 12:31:07 +08:00
Will Miao
1daaff6bd4 feat: add LoRa Manager E2E testing skill documentation
Introduce comprehensive documentation for the new `lora-manager-e2e` skill, which provides end-to-end testing workflows for LoRa Manager. The skill enables automated validation of standalone mode, including server management, UI interaction via Chrome DevTools MCP, and frontend-to-backend integration testing.

Key additions:
- Detailed skill description and prerequisites
- Quick start workflow for server setup and browser debugging
- Common E2E test patterns for page load verification, server restart, and API testing
- Example test flows demonstrating step-by-step validation procedures
- Scripts and MCP command examples for practical implementation

This documentation supports automated testing of LoRa Manager's web interface and backend functionality, ensuring reliable end-to-end validation of features.
2026-02-02 12:15:58 +08:00
Will Miao
e252e44403 refactor(logging): replace print statements with logger for consistency 2026-02-02 10:47:17 +08:00
Will Miao
778ad8abd2 feat(cache): add cache health monitoring and validation system, see #730
- Add cache entry validator service for data integrity checks
- Add cache health monitor service for periodic health checks
- Enhance model cache and scanner with validation support
- Update websocket manager for health status broadcasting
- Add initialization banner service for cache health alerts
- Add comprehensive test coverage for new services
- Update translations across all locales
- Refactor sync translation keys script
2026-02-02 08:30:59 +08:00
Will Miao
68cf381b50 feat(autocomplete): improve tag search to use last token for multi-word prompts
- Modify custom words search to extract last space-separated token from search term
- Add `_getLastSpaceToken` helper method for token extraction
- Update selection replacement logic to only replace last token in multi-word prompts
- Enables searching "hello 1gi" to find "1girl" and replace only "1gi" with "1girl"
- Maintains full command replacement for command mode (e.g., "/char miku")
2026-02-01 22:09:21 +08:00
Will Miao
337f73e711 fix(slider): fix floating point precision issues in SingleSlider and DualRangeSlider
JavaScript floating point arithmetic causes values like 1.1 to become
1.1000000000000014. Add precision limiting to 2 decimal places in
snapToStep function for both sliders.
2026-02-01 21:03:04 +08:00
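The precision fix described above — snap to the nearest step, then clamp to 2 decimal places — is shown here in Python for illustration (the actual `snapToStep` is frontend JavaScript, and the parameter names here are assumptions):

```python
def snap_to_step(value: float, step: float, minimum: float = 0.0) -> float:
    # Snap to the nearest multiple of step, then limit to 2 decimal
    # places to suppress artifacts like 1.1000000000000014 produced
    # by repeated floating-point addition.
    snapped = minimum + round((value - minimum) / step) * step
    return round(snapped, 2)
```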
Will Miao
04ba966a6e feat: Add LoRA selector modal to Cycler widget
- Add LoraListModal component with search and preview tooltip
- Make 'Next LoRA' name clickable to open selector modal
- Integrate PreviewTooltip with custom resolver for Vue widgets
- Disable selector when prompts are queued (consistent with pause button)
- Fix tooltip z-index to display above modal backdrop

Fixes issue: users couldn't easily identify which index corresponds
to specific LoRA in large lists
2026-02-01 20:58:30 +08:00
Will Miao
71c8cf84e0 refactor(LoraCyclerWidget): UI/UX improvements
- Replace REP badge with segmented progress bar for repeat indicator
- Reorganize Starting Index & Repeat controls into aligned groups
- Change repeat format from '× [count] times' to '[count] ×' for better alignment
- Remove unnecessary refresh button and related logic
2026-02-01 20:00:30 +08:00
Will Miao
db1aec94e5 refactor(logging): replace print statements with logger in metadata_collector 2026-02-01 15:41:41 +08:00
Will Miao
553e1868e1 perf(config): limit symlink scan to first level for faster startup
Replace recursive directory traversal with first-level-only symlink scanning
to fix severe performance issues on large model collections (220K+ files).

- Rename _scan_directory_links to _scan_first_level_symlinks
- Only scan symlinks directly under each root directory
- Skip traversal of normal subdirectories entirely
- Update tests to reflect first-level behavior
- Add test_deep_symlink_not_scanned to document intentional limitation

Startup time reduced from 15+ minutes to seconds for affected users.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-01 12:42:35 +08:00
Will Miao
938ceb49b2 feat(autocomplete): add toggle commands for autocomplete setting
- Add `/ac` and `/noac` commands to toggle prompt tag autocomplete on/off
- Commands only appear when relevant (e.g., `/ac` shows when autocomplete is off)
- Show toast notification when toggling setting
- Use ComfyUI's setting API with fallback to legacy API
- Clear autocomplete token after toggling to provide clean UX
2026-02-01 12:34:38 +08:00
Will Miao
c0f03b79a8 feat(settings): change model card footer action default to replace_preview 2026-02-01 07:38:04 +08:00
Will Miao
a492638133 feat(lora-cycler): disable pause button when prompts are queued
- Add `hasQueuedPrompts` reactive flag to track queued executions
- Pass `is-pause-disabled` prop to settings view to disable pause button
- Update pause button title to indicate why it's disabled
- Remove server queue clearing logic from pause toggle handler
- Clear `hasQueuedPrompts` flag when manually changing index or resetting
- Set `hasQueuedPrompts` to true when adding prompts to execution queue
- Update flag when processing queued executions to reflect current queue state
2026-02-01 01:12:39 +08:00
Will Miao
e17d6c8ebf feat(testing): enhance test configuration and add Vue component tests
- Update package.json test script to run both JS and Vue tests
- Simplify LoraCyclerLM output by removing redundant lora name fallback
- Extend Vitest config to include TypeScript test files
- Add Vue testing dependencies and setup for component testing
- Implement comprehensive test suite for BatchQueueSimulator component
- Add test setup file with global mocks for ComfyUI modules
2026-02-01 00:59:50 +08:00
Will Miao
ffcfe5ea3e fix(metadata): rename model_type to sub_type and add embedding subtype, see #797
- Change `model_type` field to `sub_type` for checkpoint models to improve naming consistency
- Add `sub_type="embedding"` for embedding models to properly categorize model subtypes
- Maintain backward compatibility with existing metadata structure
2026-01-31 22:54:53 +08:00
Will Miao
719e18adb6 feat(media): add media type hint support for file extension detection, fixes #795 and fixes #751
- Add optional `media_type_hint` parameter to `_get_file_extension_from_content_or_headers` method
- When `media_type_hint` is "video" and no extension can be determined from content/headers/URL, default to `.mp4`
- Pass image metadata type as hint in both `process_example_images` and `process_example_images_batch` methods
- Add unit tests to verify media type hint behavior and priority
2026-01-31 19:39:37 +08:00
Will Miao
92d471daf5 feat(ui): hide model sub-type in compact density mode, see #793
Add CSS rules to hide the model sub-type and separator elements when the compact-density class is applied. This change saves visual space in compact mode by removing less critical information, improving the layout for dense interfaces.
2026-01-31 11:17:49 +08:00
Will Miao
66babf9ee1 feat(lora-cycler): reset execution state on manual index change
Reset the execution state when the user manually changes the LoRA index, so that the next execution starts from the user-set index. This prevents stale execution state from interfering with user-initiated index changes.
2026-01-31 09:04:26 +08:00
Will Miao
60df2df324 feat: add new Flux Klein models, ZImageBase, and LTXV2 to constants, see #792
- Add Flux.2 Klein 9B, 9B-base, 4B, and 4B-base models to BASE_MODELS, BASE_MODEL_ABBREVIATIONS, and Flux Models category
- Include ZImageBase model and its abbreviation
- Add LTXV2 video model to BASE_MODELS, BASE_MODEL_ABBREVIATIONS, and Video Models category
- Update model categories to reflect new additions
2026-01-31 07:57:21 +08:00
119 changed files with 11170 additions and 1965 deletions


@@ -0,0 +1,201 @@
---
name: lora-manager-e2e
description: End-to-end testing and validation for LoRa Manager features. Use when performing automated E2E validation of LoRa Manager standalone mode, including starting/restarting the server, using Chrome DevTools MCP to interact with the web UI at http://127.0.0.1:8188/loras, and verifying frontend-to-backend functionality. Covers workflow validation, UI interaction testing, and integration testing between the standalone Python backend and the browser frontend.
---
# LoRa Manager E2E Testing
This skill provides workflows and utilities for end-to-end testing of LoRa Manager using Chrome DevTools MCP.
## Prerequisites
- LoRa Manager project cloned and dependencies installed (`pip install -r requirements.txt`)
- Chrome browser available for debugging
- Chrome DevTools MCP connected
## Quick Start Workflow
### 1. Start LoRa Manager Standalone
```bash
# Use the provided script to start the server
python .agents/skills/lora-manager-e2e/scripts/start_server.py --port 8188
```
Or manually:
```bash
cd /home/miao/workspace/ComfyUI/custom_nodes/ComfyUI-Lora-Manager
python standalone.py --port 8188
```
Wait for server ready message before proceeding.
### 2. Open Chrome Debug Mode
```bash
# Chrome with remote debugging on port 9222
google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-lora-manager http://127.0.0.1:8188/loras
```
### 3. Connect Chrome DevTools MCP
Ensure the MCP server is connected to Chrome at `http://localhost:9222`.
### 4. Navigate and Interact
Use Chrome DevTools MCP tools to:
- Take snapshots: `take_snapshot`
- Click elements: `click`
- Fill forms: `fill` or `fill_form`
- Evaluate scripts: `evaluate_script`
- Wait for elements: `wait_for`
## Common E2E Test Patterns
### Pattern: Full Page Load Verification
```python
# Navigate to LoRA list page
navigate_page(type="url", url="http://127.0.0.1:8188/loras")
# Wait for page to load
wait_for(text="LoRAs", timeout=10000)
# Take snapshot to verify UI state
snapshot = take_snapshot()
```
### Pattern: Restart Server for Configuration Changes
```python
# Stop the current server (if running), then start it with the new
# configuration from a shell:
#   python .agents/skills/lora-manager-e2e/scripts/start_server.py --port 8188 --restart
# Then refresh the browser and wait for the page
navigate_page(type="reload", ignoreCache=True)
wait_for(text="LoRAs", timeout=15000)
```
### Pattern: Verify Backend API via Frontend
```python
# Execute script in browser to call backend API
result = evaluate_script(function="""
async () => {
const response = await fetch('/loras/api/list');
const data = await response.json();
return { count: data.length, firstItem: data[0]?.name };
}
""")
```
### Pattern: Form Submission Flow
```python
# Fill a form (e.g., search or filter)
fill_form(elements=[
{"uid": "search-input", "value": "character"},
])
# Click submit button
click(uid="search-button")
# Wait for results
wait_for(text="Results", timeout=5000)
# Verify results via snapshot
snapshot = take_snapshot()
```
### Pattern: Modal Dialog Interaction
```python
# Open modal (e.g., add LoRA)
click(uid="add-lora-button")
# Wait for modal to appear
wait_for(text="Add LoRA", timeout=3000)
# Fill modal form
fill_form(elements=[
{"uid": "lora-name", "value": "Test LoRA"},
{"uid": "lora-path", "value": "/path/to/lora.safetensors"},
])
# Submit
click(uid="modal-submit-button")
# Wait for success message or close
wait_for(text="Success", timeout=5000)
```
## Available Scripts
### scripts/start_server.py
Starts or restarts the LoRa Manager standalone server.
```bash
python scripts/start_server.py [--port PORT] [--restart] [--wait]
```
Options:
- `--port`: Server port (default: 8188)
- `--restart`: Kill existing server before starting
- `--wait`: Wait for server to be ready before exiting
### scripts/wait_for_server.py
Polls server until ready or timeout.
```bash
python scripts/wait_for_server.py [--port PORT] [--timeout SECONDS]
```
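The polling loop inside `wait_for_server.py` might look roughly like this — a sketch only; the real script's URL, flags, and readiness check may differ:

```python
import time
import urllib.request

def wait_for_server(port: int = 8188, timeout: float = 30.0) -> bool:
    # Poll the LoRA list page until it responds or the timeout elapses.
    deadline = time.monotonic() + timeout
    url = f"http://127.0.0.1:{port}/loras"
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status < 500:
                    return True
        except OSError:
            pass  # server not up yet (connection refused / timeout)
        time.sleep(0.5)
    return False
```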
## Test Scenarios Reference
See [references/test-scenarios.md](references/test-scenarios.md) for detailed test scenarios including:
- LoRA list display and filtering
- Model metadata editing
- Recipe creation and management
- Settings configuration
- Import/export functionality
## Network Request Verification
Use `list_network_requests` and `get_network_request` to verify API calls:
```python
# List recent XHR/fetch requests
requests = list_network_requests(resourceTypes=["xhr", "fetch"])
# Get details of specific request
details = get_network_request(reqid=123)
```
## Console Message Monitoring
```python
# Check for errors or warnings
messages = list_console_messages(types=["error", "warn"])
```
## Performance Testing
```python
# Start performance trace
performance_start_trace(reload=True, autoStop=False)
# Perform actions...
# Stop and analyze
results = performance_stop_trace()
```
## Cleanup
Always ensure proper cleanup after tests:
1. Stop the standalone server
2. Close browser pages (keep at least one open)
3. Clear temporary data if needed


@@ -0,0 +1,324 @@
# Chrome DevTools MCP Cheatsheet for LoRa Manager
Quick reference for common MCP commands used in LoRa Manager E2E testing.
## Navigation
```python
# Navigate to LoRA list page
navigate_page(type="url", url="http://127.0.0.1:8188/loras")
# Reload page with cache clear
navigate_page(type="reload", ignoreCache=True)
# Go back/forward
navigate_page(type="back")
navigate_page(type="forward")
```
## Waiting
```python
# Wait for text to appear
wait_for(text="LoRAs", timeout=10000)
# Wait for specific element (via evaluate_script)
evaluate_script(function="""
() => {
return new Promise((resolve) => {
const check = () => {
if (document.querySelector('.lora-card')) {
resolve(true);
} else {
setTimeout(check, 100);
}
};
check();
});
}
""")
```
## Taking Snapshots
```python
# Full page snapshot
snapshot = take_snapshot()
# Verbose snapshot (more details)
snapshot = take_snapshot(verbose=True)
# Save to file
take_snapshot(filePath="test-snapshots/page-load.json")
```
## Element Interaction
```python
# Click element
click(uid="element-uid-from-snapshot")
# Double click
click(uid="element-uid", dblClick=True)
# Fill input
fill(uid="search-input", value="test query")
# Fill multiple inputs
fill_form(elements=[
{"uid": "input-1", "value": "value 1"},
{"uid": "input-2", "value": "value 2"},
])
# Hover
hover(uid="lora-card-1")
# Upload file
upload_file(uid="file-input", filePath="/path/to/file.safetensors")
```
## Keyboard Input
```python
# Press key
press_key(key="Enter")
press_key(key="Escape")
press_key(key="Tab")
# Keyboard shortcuts
press_key(key="Control+A") # Select all
press_key(key="Control+F") # Find
```
## JavaScript Evaluation
```python
# Simple evaluation
result = evaluate_script(function="() => document.title")
# Async evaluation
result = evaluate_script(function="""
async () => {
const response = await fetch('/loras/api/list');
return await response.json();
}
""")
# Check element existence
exists = evaluate_script(function="""
() => document.querySelector('.lora-card') !== null
""")
# Get element count
count = evaluate_script(function="""
() => document.querySelectorAll('.lora-card').length
""")
```
## Network Monitoring
```python
# List all network requests
requests = list_network_requests()
# Filter by resource type
xhr_requests = list_network_requests(resourceTypes=["xhr", "fetch"])
# Get specific request details
details = get_network_request(reqid=123)
# Include preserved requests from previous navigations
all_requests = list_network_requests(includePreservedRequests=True)
```
## Console Monitoring
```python
# List all console messages
messages = list_console_messages()
# Filter by type
errors = list_console_messages(types=["error", "warn"])
# Include preserved messages
all_messages = list_console_messages(includePreservedMessages=True)
# Get specific message
details = get_console_message(msgid=1)
```
## Performance Testing
```python
# Start trace with page reload
performance_start_trace(reload=True, autoStop=False)
# Start trace without reload
performance_start_trace(reload=False, autoStop=True, filePath="trace.json.gz")
# Stop trace
results = performance_stop_trace()
# Stop and save
performance_stop_trace(filePath="trace-results.json.gz")
# Analyze specific insight
insight = performance_analyze_insight(
insightSetId="results.insightSets[0].id",
insightName="LCPBreakdown"
)
```
## Page Management
```python
# List open pages
pages = list_pages()
# Select a page
select_page(pageId=0, bringToFront=True)
# Create new page
new_page(url="http://127.0.0.1:8188/loras")
# Close page (keep at least one open!)
close_page(pageId=1)
# Resize page
resize_page(width=1920, height=1080)
```
## Screenshots
```python
# Full page screenshot
take_screenshot(fullPage=True)
# Viewport screenshot
take_screenshot()
# Element screenshot
take_screenshot(uid="lora-card-1")
# Save to file
take_screenshot(filePath="screenshots/page.png", format="png")
# JPEG with quality
take_screenshot(filePath="screenshots/page.jpg", format="jpeg", quality=90)
```
## Dialog Handling
```python
# Accept dialog
handle_dialog(action="accept")
# Accept with text input
handle_dialog(action="accept", promptText="user input")
# Dismiss dialog
handle_dialog(action="dismiss")
```
## Device Emulation
```python
# Mobile viewport
emulate(viewport={"width": 375, "height": 667, "isMobile": True, "hasTouch": True})
# Tablet viewport
emulate(viewport={"width": 768, "height": 1024, "isMobile": True, "hasTouch": True})
# Desktop viewport
emulate(viewport={"width": 1920, "height": 1080})
# Network throttling
emulate(networkConditions="Slow 3G")
emulate(networkConditions="Fast 4G")
# CPU throttling
emulate(cpuThrottlingRate=4) # 4x slowdown
# Geolocation
emulate(geolocation={"latitude": 37.7749, "longitude": -122.4194})
# User agent
emulate(userAgent="Mozilla/5.0 (Custom)")
# Reset emulation
emulate(viewport=None, networkConditions="No emulation", userAgent=None)
```
## Drag and Drop
```python
# Drag element to another
drag(from_uid="draggable-item", to_uid="drop-zone")
```
## Common LoRa Manager Test Patterns
### Verify LoRA Cards Loaded
```python
navigate_page(type="url", url="http://127.0.0.1:8188/loras")
wait_for(text="LoRAs", timeout=10000)
# Check if cards loaded
result = evaluate_script(function="""
() => {
const cards = document.querySelectorAll('.lora-card');
return {
count: cards.length,
hasData: cards.length > 0
};
}
""")
```
### Search and Verify Results
```python
fill(uid="search-input", value="character")
press_key(key="Enter")
wait_for(timeout=2000) # Wait for debounce
# Check results
result = evaluate_script(function="""
() => {
const cards = document.querySelectorAll('.lora-card');
const names = Array.from(cards).map(c => c.dataset.name || c.textContent);
return { count: cards.length, names };
}
""")
```
### Check API Response
```python
# Trigger API call
evaluate_script(function="""
() => window.loraApiCallPromise = fetch('/loras/api/list').then(r => r.json())
""")
# Wait and get result
import time
time.sleep(1)
result = evaluate_script(function="""
async () => await window.loraApiCallPromise
""")
```
### Monitor Console for Errors
```python
# Before test: clear console (navigate reloads)
navigate_page(type="reload")
# ... perform actions ...
# Check for errors
errors = list_console_messages(types=["error"])
assert len(errors) == 0, f"Console errors: {errors}"
```


@@ -0,0 +1,272 @@
# LoRa Manager E2E Test Scenarios
This document provides detailed test scenarios for end-to-end validation of LoRa Manager features.
## Table of Contents
1. [LoRA List Page](#lora-list-page)
2. [Model Details](#model-details)
3. [Recipes](#recipes)
4. [Settings](#settings)
5. [Import/Export](#importexport)
---
## LoRA List Page
### Scenario: Page Load and Display
**Objective**: Verify the LoRA list page loads correctly and displays models.
**Steps**:
1. Navigate to `http://127.0.0.1:8188/loras`
2. Wait for page title "LoRAs" to appear
3. Take snapshot to verify:
- Header with "LoRAs" title is visible
- Search/filter controls are present
- Grid/list view toggle exists
- LoRA cards are displayed (if models exist)
- Pagination controls (if applicable)
**Expected Result**: Page loads without errors, UI elements are present.
### Scenario: Search Functionality
**Objective**: Verify search filters LoRA models correctly.
**Steps**:
1. Ensure at least one LoRA exists with known name (e.g., "test-character")
2. Navigate to LoRA list page
3. Enter search term in search box: "test"
4. Press Enter or click search button
5. Wait for results to update
**Expected Result**: Only LoRAs matching search term are displayed.
**Verification Script**:
```python
# After search, verify filtered results
evaluate_script(function="""
() => {
const cards = document.querySelectorAll('.lora-card');
const names = Array.from(cards).map(c => c.dataset.name);
return { count: cards.length, names };
}
""")
```
### Scenario: Filter by Tags
**Objective**: Verify tag filtering works correctly.
**Steps**:
1. Navigate to LoRA list page
2. Click on a tag (e.g., "character", "style")
3. Wait for filtered results
**Expected Result**: Only LoRAs with selected tag are displayed.
### Scenario: View Mode Toggle
**Objective**: Verify grid/list view toggle works.
**Steps**:
1. Navigate to LoRA list page
2. Click list view button
3. Verify list layout
4. Click grid view button
5. Verify grid layout
**Expected Result**: View mode changes correctly, layout updates.
---
## Model Details
### Scenario: Open Model Details
**Objective**: Verify clicking a LoRA opens its details.
**Steps**:
1. Navigate to LoRA list page
2. Click on a LoRA card
3. Wait for details panel/modal to open
**Expected Result**: Details panel shows:
- Model name
- Preview image
- Metadata (trigger words, tags, etc.)
- Action buttons (edit, delete, etc.)
### Scenario: Edit Model Metadata
**Objective**: Verify metadata editing works end-to-end.
**Steps**:
1. Open a LoRA's details
2. Click "Edit" button
3. Modify trigger words field
4. Add/remove tags
5. Save changes
6. Refresh page
7. Reopen the same LoRA
**Expected Result**: Changes persist after refresh.
### Scenario: Delete Model
**Objective**: Verify model deletion works.
**Steps**:
1. Open a LoRA's details
2. Click "Delete" button
3. Confirm deletion in dialog
4. Wait for removal
**Expected Result**: Model removed from list, success message shown.
---
## Recipes
### Scenario: Recipe List Display
**Objective**: Verify recipes page loads and displays recipes.
**Steps**:
1. Navigate to `http://127.0.0.1:8188/recipes`
2. Wait for "Recipes" title
3. Take snapshot
**Expected Result**: Recipe list displayed with cards/items.
### Scenario: Create New Recipe
**Objective**: Verify recipe creation workflow.
**Steps**:
1. Navigate to recipes page
2. Click "New Recipe" button
3. Fill recipe form:
- Name: "Test Recipe"
- Description: "E2E test recipe"
- Add LoRA models
4. Save recipe
5. Verify recipe appears in list
**Expected Result**: New recipe created and displayed.
### Scenario: Apply Recipe
**Objective**: Verify applying a recipe to ComfyUI.
**Steps**:
1. Open a recipe
2. Click "Apply" or "Load in ComfyUI"
3. Verify action completes
**Expected Result**: Recipe applied successfully.
---
## Settings
### Scenario: Settings Page Load
**Objective**: Verify settings page displays correctly.
**Steps**:
1. Navigate to `http://127.0.0.1:8188/settings`
2. Wait for "Settings" title
3. Take snapshot
**Expected Result**: Settings form with various options displayed.
### Scenario: Change Setting and Restart
**Objective**: Verify settings persist after restart.
**Steps**:
1. Navigate to settings page
2. Change a setting (e.g., default view mode)
3. Save settings
4. Restart server: `python scripts/start_server.py --restart --wait`
5. Refresh browser page
6. Navigate to settings
**Expected Result**: Changed setting value persists.
---
## Import/Export
### Scenario: Export Models List
**Objective**: Verify export functionality.
**Steps**:
1. Navigate to LoRA list
2. Click "Export" button
3. Select format (JSON/CSV)
4. Download file
**Expected Result**: File downloaded with correct data.
### Scenario: Import Models
**Objective**: Verify import functionality.
**Steps**:
1. Prepare import file
2. Navigate to import page
3. Upload file
4. Verify import results
**Expected Result**: Models imported successfully, confirmation shown.
---
## API Integration Tests
### Scenario: Verify API Endpoints
**Objective**: Verify backend API responds correctly.
**Test via browser console**:
```javascript
// List LoRAs
fetch('/loras/api/list').then(r => r.json()).then(console.log)
// Get LoRA details
fetch('/loras/api/detail/<id>').then(r => r.json()).then(console.log)
// Search LoRAs
fetch('/loras/api/search?q=test').then(r => r.json()).then(console.log)
```
**Expected Result**: APIs return valid JSON with expected structure.
---
## Console Error Monitoring
During all tests, monitor browser console for errors:
```python
# Check for JavaScript errors
messages = list_console_messages(types=["error"])
assert len(messages) == 0, f"Console errors found: {messages}"
```
## Network Request Verification
Verify key API calls are made:
```python
# List XHR requests
requests = list_network_requests(resourceTypes=["xhr", "fetch"])
# Look for specific endpoints
lora_list_requests = [r for r in requests if "/api/list" in r.get("url", "")]
assert len(lora_list_requests) > 0, "LoRA list API not called"
```


@@ -0,0 +1,193 @@
#!/usr/bin/env python3
"""
Example E2E test demonstrating LoRa Manager testing workflow.
This script shows how to:
1. Start the standalone server
2. Use Chrome DevTools MCP to interact with the UI
3. Verify functionality end-to-end
Note: This is a template. Actual execution requires Chrome DevTools MCP.
"""
import subprocess
import sys
import time
def run_test():
"""Run example E2E test flow."""
print("=" * 60)
    print("LoRA Manager E2E Test Example")
print("=" * 60)
# Step 1: Start server
    print("\n[1/5] Starting LoRA Manager standalone server...")
result = subprocess.run(
[sys.executable, "start_server.py", "--port", "8188", "--wait", "--timeout", "30"],
capture_output=True,
text=True
)
if result.returncode != 0:
print(f"Failed to start server: {result.stderr}")
return 1
print("Server ready!")
# Step 2: Open Chrome (manual step - show command)
print("\n[2/5] Open Chrome with debug mode:")
print("google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-lora-manager http://127.0.0.1:8188/loras")
print("(In actual test, this would be automated via MCP)")
# Step 3: Navigate and verify page load
print("\n[3/5] Page Load Verification:")
print("""
MCP Commands to execute:
1. navigate_page(type="url", url="http://127.0.0.1:8188/loras")
2. wait_for(text="LoRAs", timeout=10000)
3. snapshot = take_snapshot()
""")
# Step 4: Test search functionality
print("\n[4/5] Search Functionality Test:")
    print('''
    MCP Commands to execute:
    1. fill(uid="search-input", value="test")
    2. press_key(key="Enter")
    3. wait_for(text="Results", timeout=5000)
    4. result = evaluate_script(function="""
       () => {
           const cards = document.querySelectorAll('.lora-card');
           return { count: cards.length };
       }
       """)
    ''')
# Step 5: Verify API
print("\n[5/5] API Verification:")
    print('''
    MCP Commands to execute:
    1. api_result = evaluate_script(function="""
       async () => {
           const response = await fetch('/loras/api/list');
           const data = await response.json();
           return { count: data.length, status: response.status };
       }
       """)
    2. Verify api_result['status'] == 200
    ''')
print("\n" + "=" * 60)
print("Test flow completed!")
print("=" * 60)
return 0
def example_restart_flow():
"""Example: Testing configuration change that requires restart."""
print("\n" + "=" * 60)
print("Example: Server Restart Flow")
print("=" * 60)
print("""
Scenario: Change setting and verify after restart
Steps:
1. Navigate to settings page
- navigate_page(type="url", url="http://127.0.0.1:8188/settings")
2. Change a setting (e.g., theme)
- fill(uid="theme-select", value="dark")
- click(uid="save-settings-button")
3. Restart server
- subprocess.run([python, "start_server.py", "--restart", "--wait"])
4. Refresh browser
- navigate_page(type="reload", ignoreCache=True)
- wait_for(text="LoRAs", timeout=15000)
5. Verify setting persisted
- navigate_page(type="url", url="http://127.0.0.1:8188/settings")
- theme = evaluate_script(function="() => document.querySelector('#theme-select').value")
- assert theme == "dark"
""")
def example_modal_interaction():
"""Example: Testing modal dialog interaction."""
print("\n" + "=" * 60)
print("Example: Modal Dialog Interaction")
print("=" * 60)
print("""
Scenario: Add new LoRA via modal
Steps:
1. Open modal
- click(uid="add-lora-button")
- wait_for(text="Add LoRA", timeout=3000)
2. Fill form
- fill_form(elements=[
{"uid": "lora-name", "value": "Test Character"},
{"uid": "lora-path", "value": "/models/test.safetensors"},
])
3. Submit
- click(uid="modal-submit-button")
4. Verify success
- wait_for(text="Successfully added", timeout=5000)
- snapshot = take_snapshot()
""")
def example_network_monitoring():
"""Example: Network request monitoring."""
print("\n" + "=" * 60)
print("Example: Network Request Monitoring")
print("=" * 60)
print("""
Scenario: Verify API calls during user interaction
Steps:
1. Clear network log (implicit on navigation)
- navigate_page(type="url", url="http://127.0.0.1:8188/loras")
2. Perform action that triggers API call
- fill(uid="search-input", value="character")
- press_key(key="Enter")
3. List network requests
- requests = list_network_requests(resourceTypes=["xhr", "fetch"])
4. Find search API call
- search_requests = [r for r in requests if "/api/search" in r.get("url", "")]
- assert len(search_requests) > 0, "Search API was not called"
5. Get request details
- if search_requests:
details = get_network_request(reqid=search_requests[0]["reqid"])
- Verify request method, response status, etc.
""")
if __name__ == "__main__":
    print("LoRA Manager E2E Test Examples\n")
print("This script demonstrates E2E testing patterns.\n")
print("Note: Actual execution requires Chrome DevTools MCP connection.\n")
run_test()
example_restart_flow()
example_modal_interaction()
example_network_monitoring()
print("\n" + "=" * 60)
print("All examples shown!")
print("=" * 60)


@@ -0,0 +1,169 @@
#!/usr/bin/env python3
"""
Start or restart LoRA Manager standalone server for E2E testing.
"""
import argparse
import subprocess
import sys
import time
import socket
import signal
import os
def find_server_process(port: int) -> list[int]:
"""Find PIDs of processes listening on the given port."""
try:
result = subprocess.run(
["lsof", "-ti", f":{port}"],
capture_output=True,
text=True,
check=False
)
if result.returncode == 0 and result.stdout.strip():
return [int(pid) for pid in result.stdout.strip().split("\n") if pid]
except FileNotFoundError:
# lsof not available, try netstat
try:
result = subprocess.run(
["netstat", "-tlnp"],
capture_output=True,
text=True,
check=False
)
pids = []
for line in result.stdout.split("\n"):
if f":{port}" in line:
parts = line.split()
for part in parts:
if "/" in part:
try:
pid = int(part.split("/")[0])
pids.append(pid)
except ValueError:
pass
return pids
except FileNotFoundError:
pass
return []
def kill_server(port: int) -> None:
"""Kill processes using the specified port."""
pids = find_server_process(port)
for pid in pids:
try:
os.kill(pid, signal.SIGTERM)
print(f"Sent SIGTERM to process {pid}")
except ProcessLookupError:
pass
# Wait for processes to terminate
time.sleep(1)
# Force kill if still running
pids = find_server_process(port)
for pid in pids:
try:
os.kill(pid, signal.SIGKILL)
print(f"Sent SIGKILL to process {pid}")
except ProcessLookupError:
pass
def is_server_ready(port: int, timeout: float = 0.5) -> bool:
"""Check if server is accepting connections."""
try:
with socket.create_connection(("127.0.0.1", port), timeout=timeout):
return True
except (socket.timeout, ConnectionRefusedError, OSError):
return False
def wait_for_server(port: int, timeout: int = 30) -> bool:
"""Wait for server to become ready."""
start = time.time()
while time.time() - start < timeout:
if is_server_ready(port):
return True
time.sleep(0.5)
return False
def main() -> int:
parser = argparse.ArgumentParser(
        description="Start LoRA Manager standalone server for E2E testing"
)
parser.add_argument(
"--port",
type=int,
default=8188,
help="Server port (default: 8188)"
)
parser.add_argument(
"--restart",
action="store_true",
help="Kill existing server before starting"
)
parser.add_argument(
"--wait",
action="store_true",
help="Wait for server to be ready before exiting"
)
parser.add_argument(
"--timeout",
type=int,
default=30,
help="Timeout for waiting (default: 30)"
)
args = parser.parse_args()
# Get project root (parent of .agents directory)
script_dir = os.path.dirname(os.path.abspath(__file__))
skill_dir = os.path.dirname(script_dir)
project_root = os.path.dirname(os.path.dirname(os.path.dirname(skill_dir)))
# Restart if requested
if args.restart:
print(f"Killing existing server on port {args.port}...")
kill_server(args.port)
time.sleep(1)
# Check if already running
if is_server_ready(args.port):
print(f"Server already running on port {args.port}")
return 0
# Start server
    print(f"Starting LoRA Manager standalone server on port {args.port}...")
cmd = [sys.executable, "standalone.py", "--port", str(args.port)]
# Start in background
process = subprocess.Popen(
cmd,
cwd=project_root,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
start_new_session=True
)
print(f"Server process started with PID {process.pid}")
# Wait for ready if requested
if args.wait:
print(f"Waiting for server to be ready (timeout: {args.timeout}s)...")
if wait_for_server(args.port, args.timeout):
print(f"Server ready at http://127.0.0.1:{args.port}/loras")
return 0
else:
            print("Timeout waiting for server")
return 1
print(f"Server starting at http://127.0.0.1:{args.port}/loras")
return 0
if __name__ == "__main__":
sys.exit(main())


@@ -0,0 +1,61 @@
#!/usr/bin/env python3
"""
Wait for LoRA Manager server to become ready.
"""
import argparse
import socket
import sys
import time
def is_server_ready(port: int, timeout: float = 0.5) -> bool:
"""Check if server is accepting connections."""
try:
with socket.create_connection(("127.0.0.1", port), timeout=timeout):
return True
except (socket.timeout, ConnectionRefusedError, OSError):
return False
def wait_for_server(port: int, timeout: int = 30) -> bool:
"""Wait for server to become ready."""
start = time.time()
while time.time() - start < timeout:
if is_server_ready(port):
return True
time.sleep(0.5)
return False
def main() -> int:
parser = argparse.ArgumentParser(
        description="Wait for LoRA Manager server to become ready"
)
parser.add_argument(
"--port",
type=int,
default=8188,
help="Server port (default: 8188)"
)
parser.add_argument(
"--timeout",
type=int,
default=30,
help="Timeout in seconds (default: 30)"
)
args = parser.parse_args()
print(f"Waiting for server on port {args.port} (timeout: {args.timeout}s)...")
if wait_for_server(args.port, args.timeout):
print(f"Server ready at http://127.0.0.1:{args.port}/loras")
return 0
else:
print(f"Timeout: Server not ready after {args.timeout}s")
return 1
if __name__ == "__main__":
sys.exit(main())


@@ -34,6 +34,15 @@ Enhance your Civitai browsing experience with our companion browser extension! S
## Release Notes
### v0.9.15
* **Filter Presets** - Save filter combinations as presets for quick switching and reapplication.
* **Bug Fixes** - Fixed various bugs for improved stability.
### v0.9.14
* **LoRA Cycler Node** - Introduced a new LoRA Cycler node that enables iteration through specified LoRAs, with support for repeat counts and pause-iteration functionality. Refer to the new "Lora Cycler" template workflow for a concrete example.
* **Enhanced Prompt Node with Tag Autocomplete** - Enhanced the Prompt node with comprehensive tag autocomplete based on merged Danbooru + e621 tags. Supports tag search and autocomplete functionality. Implemented a command system with shortcuts like `/char` or `/artist` for category-specific tag searching. Added `/ac` or `/noac` commands to quickly enable or disable autocomplete. Refer to the "Lora Manager Basic" template workflow in ComfyUI -> Templates -> ComfyUI-Lora-Manager for detailed tips.
* **Bug Fixes & Stability** - Addressed multiple bugs and improved overall stability.
### v0.9.12
* **LoRA Randomizer System** - Introduced a comprehensive LoRA randomization system featuring LoRA Pool and LoRA Randomizer nodes for flexible and dynamic generation workflows.
* **LoRA Randomizer Template** - Refer to the new "LoRA Randomizer" template workflow for detailed examples of flexible randomization modes, lock & reuse options, and other features.

File diff suppressed because one or more lines are too long


@@ -179,7 +179,6 @@
"recipes": "Rezepte",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "Statistiken"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "LoRAs suchen...",
"recipes": "Rezepte suchen...",
"checkpoints": "Checkpoints suchen...",
"embeddings": "Embeddings suchen...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "Embeddings suchen..."
},
"options": "Suchoptionen",
"searchIn": "Suchen in:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Embedding-Modelle"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Stammverzeichnis",
"collapseAll": "Alle Ordner einklappen",
@@ -1116,10 +1104,6 @@
"title": "Statistiken werden initialisiert",
"message": "Modelldaten für Statistiken werden verarbeitet. Dies kann einige Minuten dauern..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Tipps & Tricks",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "Rezept zum Workflow hinzugefügt",
"recipeReplaced": "Rezept im Workflow ersetzt",
"recipeFailedToSend": "Fehler beim Senden des Rezepts an den Workflow",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "Keine kompatiblen Knoten im aktuellen Workflow verfügbar",
"noTargetNodeSelected": "Kein Zielknoten ausgewählt"
},
"nodeSelector": {
"recipe": "Rezept",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "Ersetzen",
"append": "Anhängen",
"selectTargetNode": "Zielknoten auswählen",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "Cache-Korruption erkannt"
},
"degraded": {
"title": "Cache-Probleme erkannt"
},
"content": "{invalid} von {total} Cache-Einträgen sind ungültig ({rate}). Dies kann zu fehlenden Modellen oder Fehlern führen. Ein Neuaufbau des Caches wird empfohlen.",
"rebuildCache": "Cache neu aufbauen",
"dismiss": "Verwerfen",
"rebuilding": "Cache wird neu aufgebaut...",
"rebuildFailed": "Fehler beim Neuaufbau des Caches: {error}",
"retry": "Wiederholen"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "Recipes",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "Misc",
"statistics": "Stats"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "Search LoRAs...",
"recipes": "Search recipes...",
"checkpoints": "Search checkpoints...",
"embeddings": "Search embeddings...",
"misc": "Search VAE/Upscaler models..."
"embeddings": "Search embeddings..."
},
"options": "Search Options",
"searchIn": "Search In:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Embedding Models"
},
"misc": {
"title": "VAE & Upscaler Models",
"modelTypes": {
"vae": "VAE",
"upscaler": "Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Root",
"collapseAll": "Collapse All Folders",
@@ -1116,10 +1104,6 @@
"title": "Initializing Statistics",
"message": "Processing model data for statistics. This may take a few minutes..."
},
"misc": {
"title": "Initializing Misc Model Manager",
"message": "Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Tips & Tricks",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "Recipe appended to workflow",
"recipeReplaced": "Recipe replaced in workflow",
"recipeFailedToSend": "Failed to send recipe to workflow",
"vaeUpdated": "VAE updated in workflow",
"vaeFailed": "Failed to update VAE in workflow",
"upscalerUpdated": "Upscaler updated in workflow",
"upscalerFailed": "Failed to update upscaler in workflow",
"noMatchingNodes": "No compatible nodes available in the current workflow",
"noTargetNodeSelected": "No target node selected"
},
"nodeSelector": {
"recipe": "Recipe",
"lora": "LoRA",
"vae": "VAE",
"upscaler": "Upscaler",
"replace": "Replace",
"append": "Append",
"selectTargetNode": "Select target node",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "Cache Corruption Detected"
},
"degraded": {
"title": "Cache Issues Detected"
},
"content": "{invalid} of {total} cache entries are invalid ({rate}). This may cause missing models or errors. Rebuilding the cache is recommended.",
"rebuildCache": "Rebuild Cache",
"dismiss": "Dismiss",
"rebuilding": "Rebuilding cache...",
"rebuildFailed": "Failed to rebuild cache: {error}",
"retry": "Retry"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "Recetas",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "Estadísticas"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "Buscar LoRAs...",
"recipes": "Buscar recetas...",
"checkpoints": "Buscar checkpoints...",
"embeddings": "Buscar embeddings...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "Buscar embeddings..."
},
"options": "Opciones de búsqueda",
"searchIn": "Buscar en:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Modelos embedding"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Raíz",
"collapseAll": "Colapsar todas las carpetas",
@@ -1116,10 +1104,6 @@
"title": "Inicializando estadísticas",
"message": "Procesando datos del modelo para estadísticas. Esto puede tomar unos minutos..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Consejos y trucos",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "Receta añadida al flujo de trabajo",
"recipeReplaced": "Receta reemplazada en el flujo de trabajo",
"recipeFailedToSend": "Error al enviar receta al flujo de trabajo",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "No hay nodos compatibles disponibles en el flujo de trabajo actual",
"noTargetNodeSelected": "No se ha seleccionado ningún nodo de destino"
},
"nodeSelector": {
"recipe": "Receta",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "Reemplazar",
"append": "Añadir",
"selectTargetNode": "Seleccionar nodo de destino",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "Corrupción de caché detectada"
},
"degraded": {
"title": "Problemas de caché detectados"
},
"content": "{invalid} de {total} entradas de caché son inválidas ({rate}). Esto puede causar modelos faltantes o errores. Se recomienda reconstruir la caché.",
"rebuildCache": "Reconstruir caché",
"dismiss": "Descartar",
"rebuilding": "Reconstruyendo caché...",
"rebuildFailed": "Error al reconstruir la caché: {error}",
"retry": "Reintentar"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "Recipes",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "Statistiques"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "Rechercher des LoRAs...",
"recipes": "Rechercher des recipes...",
"checkpoints": "Rechercher des checkpoints...",
"embeddings": "Rechercher des embeddings...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "Rechercher des embeddings..."
},
"options": "Options de recherche",
"searchIn": "Rechercher dans :",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Modèles Embedding"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Racine",
"collapseAll": "Réduire tous les dossiers",
@@ -1116,10 +1104,6 @@
"title": "Initialisation des statistiques",
"message": "Traitement des données de modèle pour les statistiques. Cela peut prendre quelques minutes..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Astuces et conseils",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "Recipe ajoutée au workflow",
"recipeReplaced": "Recipe remplacée dans le workflow",
"recipeFailedToSend": "Échec de l'envoi de la recipe au workflow",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "Aucun nœud compatible disponible dans le workflow actuel",
"noTargetNodeSelected": "Aucun nœud cible sélectionné"
},
"nodeSelector": {
"recipe": "Recipe",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "Remplacer",
"append": "Ajouter",
"selectTargetNode": "Sélectionner le nœud cible",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "Corruption du cache détectée"
},
"degraded": {
"title": "Problèmes de cache détectés"
},
"content": "{invalid} des {total} entrées de cache sont invalides ({rate}). Cela peut provoquer des modèles manquants ou des erreurs. Il est recommandé de reconstruire le cache.",
"rebuildCache": "Reconstruire le cache",
"dismiss": "Ignorer",
"rebuilding": "Reconstruction du cache...",
"rebuildFailed": "Échec de la reconstruction du cache : {error}",
"retry": "Réessayer"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "מתכונים",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "סטטיסטיקה"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "חפש LoRAs...",
"recipes": "חפש מתכונים...",
"checkpoints": "חפש checkpoints...",
"embeddings": "חפש embeddings...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "חפש embeddings..."
},
"options": "אפשרויות חיפוש",
"searchIn": "חפש ב:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "מודלי Embedding"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "שורש",
"collapseAll": "כווץ את כל התיקיות",
@@ -1116,10 +1104,6 @@
"title": "מאתחל סטטיסטיקה",
"message": "מעבד נתוני מודלים עבור סטטיסטיקה. זה עשוי לקחת מספר דקות..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "טיפים וטריקים",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "מתכון נוסף ל-workflow",
"recipeReplaced": "מתכון הוחלף ב-workflow",
"recipeFailedToSend": "שליחת מתכון ל-workflow נכשלה",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "אין צמתים תואמים זמינים ב-workflow הנוכחי",
"noTargetNodeSelected": "לא נבחר צומת יעד"
},
"nodeSelector": {
"recipe": "מתכון",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "החלף",
"append": "הוסף",
"selectTargetNode": "בחר צומת יעד",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "זוהתה שחיתות במטמון"
},
"degraded": {
"title": "זוהו בעיות במטמון"
},
"content": "{invalid} מתוך {total} רשומות מטמון אינן תקינות ({rate}). זה עלול לגרום לדגמים חסרים או לשגיאות. מומלץ לבנות מחדש את המטמון.",
"rebuildCache": "בניית מטמון מחדש",
"dismiss": "ביטול",
"rebuilding": "בונה מחדש את המטמון...",
"rebuildFailed": "נכשלה בניית המטמון מחדש: {error}",
"retry": "נסה שוב"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "レシピ",
"checkpoints": "Checkpoint",
"embeddings": "Embedding",
"misc": "[TODO: Translate] Misc",
"statistics": "統計"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "LoRAを検索...",
"recipes": "レシピを検索...",
"checkpoints": "checkpointを検索...",
"embeddings": "embeddingを検索...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "embeddingを検索..."
},
"options": "検索オプション",
"searchIn": "検索対象:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Embeddingモデル"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "ルート",
"collapseAll": "すべてのフォルダを折りたたむ",
@@ -1116,10 +1104,6 @@
"title": "統計を初期化中",
"message": "統計用のモデルデータを処理中。数分かかる場合があります..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "ヒント&コツ",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "レシピがワークフローに追加されました",
"recipeReplaced": "レシピがワークフローで置換されました",
"recipeFailedToSend": "レシピをワークフローに送信できませんでした",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "現在のワークフローには互換性のあるノードがありません",
"noTargetNodeSelected": "ターゲットノードが選択されていません"
},
"nodeSelector": {
"recipe": "レシピ",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "置換",
"append": "追加",
"selectTargetNode": "ターゲットノードを選択",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "キャッシュの破損が検出されました"
},
"degraded": {
"title": "キャッシュの問題が検出されました"
},
"content": "{total}個のキャッシュエントリのうち{invalid}個が無効です({rate})。モデルが見つからない原因になったり、エラーが発生する可能性があります。キャッシュの再構築を推奨します。",
"rebuildCache": "キャッシュを再構築",
"dismiss": "閉じる",
"rebuilding": "キャッシュを再構築中...",
"rebuildFailed": "キャッシュの再構築に失敗しました: {error}",
"retry": "再試行"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "레시피",
"checkpoints": "Checkpoint",
"embeddings": "Embedding",
"misc": "[TODO: Translate] Misc",
"statistics": "통계"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "LoRA 검색...",
"recipes": "레시피 검색...",
"checkpoints": "Checkpoint 검색...",
"embeddings": "Embedding 검색...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "Embedding 검색..."
},
"options": "검색 옵션",
"searchIn": "검색 범위:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Embedding 모델"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "루트",
"collapseAll": "모든 폴더 접기",
@@ -1116,10 +1104,6 @@
"title": "통계 초기화 중",
"message": "통계를 위한 모델 데이터를 처리하고 있습니다. 몇 분이 걸릴 수 있습니다..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "팁 & 요령",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "레시피가 워크플로에 추가되었습니다",
"recipeReplaced": "레시피가 워크플로에서 교체되었습니다",
"recipeFailedToSend": "레시피를 워크플로로 전송하지 못했습니다",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "현재 워크플로에서 호환되는 노드가 없습니다",
"noTargetNodeSelected": "대상 노드가 선택되지 않았습니다"
},
"nodeSelector": {
"recipe": "레시피",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "교체",
"append": "추가",
"selectTargetNode": "대상 노드 선택",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "캐시 손상이 감지되었습니다"
},
"degraded": {
"title": "캐시 문제가 감지되었습니다"
},
"content": "{total}개의 캐시 항목 중 {invalid}개가 유효하지 않습니다 ({rate}). 모델 누락이나 오류가 발생할 수 있습니다. 캐시를 재구축하는 것이 좋습니다.",
"rebuildCache": "캐시 재구축",
"dismiss": "무시",
"rebuilding": "캐시 재구축 중...",
"rebuildFailed": "캐시 재구축 실패: {error}",
"retry": "다시 시도"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "Рецепты",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "Статистика"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "Поиск LoRAs...",
"recipes": "Поиск рецептов...",
"checkpoints": "Поиск checkpoints...",
"embeddings": "Поиск embeddings...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "Поиск embeddings..."
},
"options": "Опции поиска",
"searchIn": "Искать в:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Модели Embedding"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Корень",
"collapseAll": "Свернуть все папки",
@@ -1116,10 +1104,6 @@
"title": "Инициализация статистики",
"message": "Обработка данных моделей для статистики. Это может занять несколько минут..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Советы и хитрости",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "Рецепт добавлен в workflow",
"recipeReplaced": "Рецепт заменён в workflow",
"recipeFailedToSend": "Не удалось отправить рецепт в workflow",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "В текущем workflow нет совместимых узлов",
"noTargetNodeSelected": "Целевой узел не выбран"
},
"nodeSelector": {
"recipe": "Рецепт",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "Заменить",
"append": "Добавить",
"selectTargetNode": "Выберите целевой узел",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "Обнаружено повреждение кэша"
},
"degraded": {
"title": "Обнаружены проблемы с кэшем"
},
"content": "{invalid} из {total} записей кэша недействительны ({rate}). Это может привести к отсутствию моделей или ошибкам. Рекомендуется перестроить кэш.",
"rebuildCache": "Перестроить кэш",
"dismiss": "Отклонить",
"rebuilding": "Перестроение кэша...",
"rebuildFailed": "Не удалось перестроить кэш: {error}",
"retry": "Повторить"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "配方",
"checkpoints": "Checkpoint",
"embeddings": "Embedding",
"misc": "[TODO: Translate] Misc",
"statistics": "统计"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "搜索 LoRA...",
"recipes": "搜索配方...",
"checkpoints": "搜索 Checkpoint...",
"embeddings": "搜索 Embedding...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "搜索 Embedding..."
},
"options": "搜索选项",
"searchIn": "搜索范围:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Embedding 模型"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "根目录",
"collapseAll": "折叠所有文件夹",
@@ -1116,10 +1104,6 @@
"title": "初始化统计",
"message": "正在处理模型数据以生成统计信息。这可能需要几分钟..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "技巧与提示",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "配方已追加到工作流",
"recipeReplaced": "配方已替换到工作流",
"recipeFailedToSend": "发送配方到工作流失败",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "当前工作流中没有兼容的节点",
"noTargetNodeSelected": "未选择目标节点"
},
"nodeSelector": {
"recipe": "配方",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "替换",
"append": "追加",
"selectTargetNode": "选择目标节点",
@@ -1594,6 +1572,20 @@
"content": "来爱发电为Lora Manager项目发电支持项目持续开发的同时获取浏览器插件验证码按季支付更优惠支付宝/微信方便支付。感谢支持!🚀",
"supportCta": "为LM发电",
"learnMore": "浏览器插件教程"
},
"cacheHealth": {
"corrupted": {
"title": "检测到缓存损坏"
},
"degraded": {
"title": "检测到缓存问题"
},
"content": "{total} 个缓存条目中有 {invalid} 个无效({rate})。这可能导致模型丢失或错误。建议重建缓存。",
"rebuildCache": "重建缓存",
"dismiss": "忽略",
"rebuilding": "正在重建缓存...",
"rebuildFailed": "重建缓存失败:{error}",
"retry": "重试"
}
}
}


@@ -179,7 +179,6 @@
"recipes": "配方",
"checkpoints": "Checkpoint",
"embeddings": "Embedding",
"misc": "[TODO: Translate] Misc",
"statistics": "統計"
},
"search": {
@@ -188,8 +187,7 @@
"loras": "搜尋 LoRA...",
"recipes": "搜尋配方...",
"checkpoints": "搜尋 checkpoint...",
"embeddings": "搜尋 embedding...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
"embeddings": "搜尋 embedding..."
},
"options": "搜尋選項",
"searchIn": "搜尋範圍:",
@@ -690,16 +688,6 @@
"embeddings": {
"title": "Embedding 模型"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "根目錄",
"collapseAll": "全部摺疊資料夾",
@@ -1116,10 +1104,6 @@
"title": "初始化統計",
"message": "正在處理模型資料以產生統計,可能需要幾分鐘..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "小技巧",
"civitai": {
@@ -1179,18 +1163,12 @@
"recipeAdded": "配方已附加到工作流",
"recipeReplaced": "配方已取代於工作流",
"recipeFailedToSend": "傳送配方到工作流失敗",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "目前工作流程中沒有相容的節點",
"noTargetNodeSelected": "未選擇目標節點"
},
"nodeSelector": {
"recipe": "配方",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "取代",
"append": "附加",
"selectTargetNode": "選擇目標節點",
@@ -1594,6 +1572,20 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "檢測到快取損壞"
},
"degraded": {
"title": "檢測到快取問題"
},
"content": "{total} 個快取項目中有 {invalid} 個無效({rate})。這可能會導致模型遺失或錯誤。建議重建快取。",
"rebuildCache": "重建快取",
"dismiss": "關閉",
"rebuilding": "重建快取中...",
"rebuildFailed": "重建快取失敗:{error}",
"retry": "重試"
}
}
}


@@ -4,7 +4,9 @@
"private": true,
"type": "module",
"scripts": {
"test": "vitest run",
"test": "npm run test:js && npm run test:vue",
"test:js": "vitest run",
"test:vue": "cd vue-widgets && npx vitest run",
"test:watch": "vitest",
"test:coverage": "node scripts/run_frontend_coverage.js"
},


@@ -89,11 +89,8 @@ class Config:
self.checkpoints_roots = None
self.unet_roots = None
self.embeddings_roots = None
self.vae_roots = None
self.upscaler_roots = None
self.base_models_roots = self._init_checkpoint_paths()
self.embeddings_roots = self._init_embedding_paths()
self.misc_roots = self._init_misc_paths()
# Scan symbolic links during initialization
self._initialize_symlink_mappings()
@@ -154,8 +151,6 @@ class Config:
'checkpoints': list(self.checkpoints_roots or []),
'unet': list(self.unet_roots or []),
'embeddings': list(self.embeddings_roots or []),
'vae': list(self.vae_roots or []),
'upscale_models': list(self.upscaler_roots or []),
}
normalized_target_paths = _normalize_folder_paths_for_comparison(target_folder_paths)
@@ -255,7 +250,6 @@ class Config:
roots.extend(self.loras_roots or [])
roots.extend(self.base_models_roots or [])
roots.extend(self.embeddings_roots or [])
roots.extend(self.misc_roots or [])
return roots
def _build_symlink_fingerprint(self) -> Dict[str, object]:
@@ -447,82 +441,53 @@ class Config:
logger.info("Failed to write symlink cache %s: %s", cache_path, exc)
def _scan_symbolic_links(self):
"""Scan all symbolic links in LoRA, Checkpoint, and Embedding root directories"""
"""Scan symbolic links in LoRA, Checkpoint, and Embedding root directories.
Only scans the first level of each root directory to avoid performance
issues with large file systems. Detects symlinks and Windows junctions
at the root level only (not nested symlinks in subdirectories).
"""
start = time.perf_counter()
# Reset mappings before rescanning to avoid stale entries
self._path_mappings.clear()
self._seed_root_symlink_mappings()
visited_dirs: Set[str] = set()
for root in self._symlink_roots():
self._scan_directory_links(root, visited_dirs)
self._scan_first_level_symlinks(root)
logger.debug(
"Symlink scan finished in %.2f ms with %d mappings",
(time.perf_counter() - start) * 1000,
len(self._path_mappings),
)
def _scan_directory_links(self, root: str, visited_dirs: Set[str]):
"""Iteratively scan directory symlinks to avoid deep recursion."""
def _scan_first_level_symlinks(self, root: str):
"""Scan only the first level of a directory for symlinks.
This avoids traversing the entire directory tree which can be extremely
slow for large model collections. Only symlinks directly under the root
are detected.
"""
try:
# Note: We only use realpath for the initial root if it's not already resolved
# to ensure we have a valid entry point.
root_real = self._normalize_path(os.path.realpath(root))
except OSError:
root_real = self._normalize_path(root)
with os.scandir(root) as it:
for entry in it:
try:
# Only detect symlinks including Windows junctions
# Skip normal directories to avoid deep traversal
if not self._entry_is_symlink(entry):
continue
if root_real in visited_dirs:
return
# Resolve the symlink target
target_path = os.path.realpath(entry.path)
if not os.path.isdir(target_path):
continue
visited_dirs.add(root_real)
# Stack entries: (display_path, real_resolved_path)
stack: List[Tuple[str, str]] = [(root, root_real)]
while stack:
current_display, current_real = stack.pop()
try:
with os.scandir(current_display) as it:
for entry in it:
try:
# 1. Detect symlinks including Windows junctions
is_link = self._entry_is_symlink(entry)
if is_link:
# Only resolve realpath when we actually find a link
target_path = os.path.realpath(entry.path)
if not os.path.isdir(target_path):
continue
normalized_target = self._normalize_path(target_path)
self.add_path_mapping(entry.path, target_path)
if normalized_target in visited_dirs:
continue
visited_dirs.add(normalized_target)
stack.append((target_path, normalized_target))
continue
# 2. Process normal directories
if not entry.is_dir(follow_symlinks=False):
continue
# For normal directories, we avoid realpath() call by
# incrementally building the real path relative to current_real.
# This is safe because 'entry' is NOT a symlink.
entry_real = self._normalize_path(os.path.join(current_real, entry.name))
if entry_real in visited_dirs:
continue
visited_dirs.add(entry_real)
stack.append((entry.path, entry_real))
except Exception as inner_exc:
logger.debug(
"Error processing directory entry %s: %s", entry.path, inner_exc
)
except Exception as e:
logger.error(f"Error scanning links in {current_display}: {e}")
self.add_path_mapping(entry.path, target_path)
except Exception as inner_exc:
logger.debug(
"Error processing directory entry %s: %s", entry.path, inner_exc
)
except Exception as e:
logger.error(f"Error scanning links in {root}: {e}")
@@ -605,8 +570,6 @@ class Config:
preview_roots.update(self._expand_preview_root(root))
for root in self.embeddings_roots or []:
preview_roots.update(self._expand_preview_root(root))
for root in self.misc_roots or []:
preview_roots.update(self._expand_preview_root(root))
for target, link in self._path_mappings.items():
preview_roots.update(self._expand_preview_root(target))
@@ -614,12 +577,11 @@ class Config:
self._preview_root_paths = {path for path in preview_roots if path.is_absolute()}
logger.debug(
"Preview roots rebuilt: %d paths from %d lora roots, %d checkpoint roots, %d embedding roots, %d misc roots, %d symlink mappings",
"Preview roots rebuilt: %d paths from %d lora roots, %d checkpoint roots, %d embedding roots, %d symlink mappings",
len(self._preview_root_paths),
len(self.loras_roots or []),
len(self.base_models_roots or []),
len(self.embeddings_roots or []),
len(self.misc_roots or []),
len(self._path_mappings),
)
@@ -683,6 +645,23 @@ class Config:
checkpoint_map = self._dedupe_existing_paths(checkpoint_paths)
unet_map = self._dedupe_existing_paths(unet_paths)
# Detect when checkpoints and unet share the same physical location
# This is a configuration issue that can cause duplicate model entries
overlapping_real_paths = set(checkpoint_map.keys()) & set(unet_map.keys())
if overlapping_real_paths:
logger.warning(
"Detected overlapping paths between 'checkpoints' and 'diffusion_models' (unet). "
"They should not point to the same physical folder as they are different model types. "
"Please fix your ComfyUI path configuration to separate these folders. "
"Falling back to 'checkpoints' for backward compatibility. "
"Overlapping real paths: %s",
[checkpoint_map.get(rp, rp) for rp in overlapping_real_paths]
)
# Remove overlapping paths from unet_map to prioritize checkpoints
for rp in overlapping_real_paths:
if rp in unet_map:
del unet_map[rp]
merged_map: Dict[str, str] = {}
for real_path, original in {**checkpoint_map, **unet_map}.items():
if real_path not in merged_map:
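The overlap handling in this hunk reduces to a small pure function: given the two `{real_path: original_path}` maps produced by `_dedupe_existing_paths`, drop unet entries whose real path also appears under checkpoints. A minimal standalone sketch (function name hypothetical, not the project's API):

```python
def resolve_overlaps(checkpoint_map, unet_map):
    """Prune entries from unet_map whose resolved real path also appears
    in checkpoint_map, prioritizing checkpoints for backward compatibility.
    Returns the pruned unet map and the set of overlapping real paths."""
    overlaps = set(checkpoint_map) & set(unet_map)
    pruned = {rp: orig for rp, orig in unet_map.items() if rp not in overlaps}
    return pruned, overlaps
```

Callers would log a configuration warning whenever `overlaps` is non-empty, mirroring the behavior added above.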
@@ -778,49 +757,6 @@ class Config:
logger.warning(f"Error initializing embedding paths: {e}")
return []
def _init_misc_paths(self) -> List[str]:
"""Initialize and validate misc (VAE and upscaler) paths from ComfyUI settings"""
try:
raw_vae_paths = folder_paths.get_folder_paths("vae")
raw_upscaler_paths = folder_paths.get_folder_paths("upscale_models")
unique_paths = self._prepare_misc_paths(raw_vae_paths, raw_upscaler_paths)
logger.info("Found misc roots:" + ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]"))
if not unique_paths:
logger.warning("No valid VAE or upscaler folders found in ComfyUI configuration")
return []
return unique_paths
except Exception as e:
logger.warning(f"Error initializing misc paths: {e}")
return []
def _prepare_misc_paths(
self, vae_paths: Iterable[str], upscaler_paths: Iterable[str]
) -> List[str]:
vae_map = self._dedupe_existing_paths(vae_paths)
upscaler_map = self._dedupe_existing_paths(upscaler_paths)
merged_map: Dict[str, str] = {}
for real_path, original in {**vae_map, **upscaler_map}.items():
if real_path not in merged_map:
merged_map[real_path] = original
unique_paths = sorted(merged_map.values(), key=lambda p: p.lower())
vae_values = set(vae_map.values())
upscaler_values = set(upscaler_map.values())
self.vae_roots = [p for p in unique_paths if p in vae_values]
self.upscaler_roots = [p for p in unique_paths if p in upscaler_values]
for original_path in unique_paths:
real_path = os.path.normpath(os.path.realpath(original_path)).replace(os.sep, '/')
if real_path != original_path:
self.add_path_mapping(original_path, real_path)
return unique_paths
def get_preview_static_url(self, preview_path: str) -> str:
if not preview_path:
return ""
@@ -830,7 +766,23 @@ class Config:
return f'/api/lm/previews?path={encoded_path}'
def is_preview_path_allowed(self, preview_path: str) -> bool:
"""Return ``True`` if ``preview_path`` is within an allowed directory."""
"""Return ``True`` if ``preview_path`` is within an allowed directory.
If the path is initially rejected, attempts to discover deep symlinks
that were not scanned during initialization. If a symlink is found,
updates the in-memory path mappings and retries the check.
"""
if self._is_path_in_allowed_roots(preview_path):
return True
if self._try_discover_deep_symlink(preview_path):
return self._is_path_in_allowed_roots(preview_path)
return False
def _is_path_in_allowed_roots(self, preview_path: str) -> bool:
"""Check if preview_path is within allowed preview roots without modification."""
if not preview_path:
return False
@@ -840,29 +792,72 @@ class Config:
except Exception:
return False
# Use os.path.normcase for case-insensitive comparison on Windows.
# On Windows, Path.relative_to() is case-sensitive for drive letters,
# causing paths like 'a:/folder' to not match 'A:/folder'.
candidate_str = os.path.normcase(str(candidate))
for root in self._preview_root_paths:
root_str = os.path.normcase(str(root))
# Check if candidate is equal to or under the root directory
if candidate_str == root_str or candidate_str.startswith(root_str + os.sep):
return True
if self._preview_root_paths:
logger.debug(
"Preview path rejected: %s (candidate=%s, num_roots=%d, first_root=%s)",
preview_path,
candidate_str,
len(self._preview_root_paths),
os.path.normcase(str(next(iter(self._preview_root_paths)))),
)
else:
logger.debug(
"Preview path rejected (no roots configured): %s",
preview_path,
)
logger.debug(
"Path not in allowed roots: %s (candidate=%s, num_roots=%d)",
preview_path,
candidate_str,
len(self._preview_root_paths),
)
return False
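The case-normalized containment check above can be sketched as a standalone helper (hypothetical name; the handler itself compares already-resolved paths, and the `normpath` call here is an added assumption for the sketch):

```python
import os

def path_within_roots(candidate: str, roots) -> bool:
    """Return True when candidate equals a root or sits beneath one.
    os.path.normcase lowercases on Windows, so 'a:/x' matches 'A:/x'
    there; on POSIX it is a no-op. Appending os.sep before the prefix
    test prevents '/models/loras-evil' from matching '/models/loras'."""
    cand = os.path.normcase(os.path.normpath(candidate))
    for root in roots:
        root_str = os.path.normcase(os.path.normpath(root))
        if cand == root_str or cand.startswith(root_str + os.sep):
            return True
    return False
```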
def _try_discover_deep_symlink(self, preview_path: str) -> bool:
"""Attempt to discover a deep symlink that contains the preview_path.
Walks up from the preview path to the root directories, checking each
parent directory for symlinks. If a symlink is found, updates the
in-memory path mappings and preview roots.
Only updates in-memory state (self._path_mappings and self._preview_root_paths),
does not modify the persistent cache file.
Returns:
True if a symlink was discovered and mappings updated, False otherwise.
"""
if not preview_path:
return False
try:
candidate = Path(preview_path).expanduser()
except Exception:
return False
current = candidate
while True:
try:
if self._is_link(str(current)):
try:
target = os.path.realpath(str(current))
normalized_target = self._normalize_path(target)
normalized_link = self._normalize_path(str(current))
self._path_mappings[normalized_target] = normalized_link
self._preview_root_paths.update(self._expand_preview_root(normalized_target))
self._preview_root_paths.update(self._expand_preview_root(normalized_link))
logger.debug(
"Discovered deep symlink: %s -> %s (preview path: %s)",
normalized_link,
normalized_target,
preview_path
)
return True
except OSError:
pass
except OSError:
pass
parent = current.parent
if parent == current:
break
current = parent
return False
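The walk-up loop above can be condensed into a standalone sketch (helper name hypothetical; the real method uses `self._is_link`, which may also cover Windows junctions, and additionally updates the preview-root mappings):

```python
import os
from pathlib import Path

def find_symlink_ancestor(path: str):
    """Walk up from ``path`` toward the filesystem root and return the
    first ancestor that is a symlink, as (link_path, resolved_target),
    or None if no symlinked ancestor exists."""
    current = Path(path).expanduser()
    while True:
        if os.path.islink(current):
            return str(current), os.path.realpath(current)
        parent = current.parent
        if parent == current:  # reached the filesystem root
            return None
        current = parent
```

Note the path itself need not exist: `os.path.islink` simply returns False for missing components, so the loop skips them and keeps climbing.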


@@ -184,17 +184,15 @@ class LoraManager:
lora_scanner = await ServiceRegistry.get_lora_scanner()
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
misc_scanner = await ServiceRegistry.get_misc_scanner()
# Initialize recipe scanner if needed
recipe_scanner = await ServiceRegistry.get_recipe_scanner()
# Create low-priority initialization tasks
init_tasks = [
asyncio.create_task(lora_scanner.initialize_in_background(), name='lora_cache_init'),
asyncio.create_task(checkpoint_scanner.initialize_in_background(), name='checkpoint_cache_init'),
asyncio.create_task(embedding_scanner.initialize_in_background(), name='embedding_cache_init'),
asyncio.create_task(misc_scanner.initialize_in_background(), name='misc_cache_init'),
asyncio.create_task(recipe_scanner.initialize_in_background(), name='recipe_cache_init')
]
@@ -254,9 +252,8 @@ class LoraManager:
# Collect all model roots
all_roots = set()
all_roots.update(config.loras_roots)
all_roots.update(config.base_models_roots)
all_roots.update(config.embeddings_roots)
all_roots.update(config.misc_roots or [])
total_deleted = 0
total_size_freed = 0


@@ -1,4 +1,7 @@
import os
import logging
logger = logging.getLogger(__name__)
# Check if running in standalone mode
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
@@ -14,7 +17,7 @@ if not standalone_mode:
# Initialize registry
registry = MetadataRegistry()
print("ComfyUI Metadata Collector initialized")
logger.info("ComfyUI Metadata Collector initialized")
def get_metadata(prompt_id=None):
"""Helper function to get metadata from the registry"""
@@ -23,7 +26,7 @@ if not standalone_mode:
else:
# Standalone mode - provide dummy implementations
def init():
print("ComfyUI Metadata Collector disabled in standalone mode")
logger.info("ComfyUI Metadata Collector disabled in standalone mode")
def get_metadata(prompt_id=None):
"""Dummy implementation for standalone mode"""


@@ -1,7 +1,10 @@
import sys
import inspect
import logging
from .metadata_registry import MetadataRegistry
logger = logging.getLogger(__name__)
class MetadataHook:
"""Install hooks for metadata collection"""
@@ -23,7 +26,7 @@ class MetadataHook:
# If we can't find the execution module, we can't install hooks
if execution is None:
print("Could not locate ComfyUI execution module, metadata collection disabled")
logger.warning("Could not locate ComfyUI execution module, metadata collection disabled")
return
# Detect whether we're using the new async version of ComfyUI
@@ -37,16 +40,16 @@ class MetadataHook:
is_async = inspect.iscoroutinefunction(execution._map_node_over_list)
if is_async:
print("Detected async ComfyUI execution, installing async metadata hooks")
logger.info("Detected async ComfyUI execution, installing async metadata hooks")
MetadataHook._install_async_hooks(execution, map_node_func_name)
else:
print("Detected sync ComfyUI execution, installing sync metadata hooks")
logger.info("Detected sync ComfyUI execution, installing sync metadata hooks")
MetadataHook._install_sync_hooks(execution)
print("Metadata collection hooks installed for runtime values")
logger.info("Metadata collection hooks installed for runtime values")
except Exception as e:
print(f"Error installing metadata hooks: {str(e)}")
logger.error(f"Error installing metadata hooks: {str(e)}")
@staticmethod
def _install_sync_hooks(execution):
@@ -82,7 +85,7 @@ class MetadataHook:
if node_id is not None:
registry.record_node_execution(node_id, class_type, input_data_all, None)
except Exception as e:
print(f"Error collecting metadata (pre-execution): {str(e)}")
logger.error(f"Error collecting metadata (pre-execution): {str(e)}")
# Execute the original function
results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb)
@@ -113,7 +116,7 @@ class MetadataHook:
if node_id is not None:
registry.update_node_execution(node_id, class_type, results)
except Exception as e:
print(f"Error collecting metadata (post-execution): {str(e)}")
logger.error(f"Error collecting metadata (post-execution): {str(e)}")
return results
@@ -159,7 +162,7 @@ class MetadataHook:
if node_id is not None:
registry.record_node_execution(node_id, class_type, input_data_all, None)
except Exception as e:
print(f"Error collecting metadata (pre-execution): {str(e)}")
logger.error(f"Error collecting metadata (pre-execution): {str(e)}")
# Call original function with all args/kwargs
results = await original_map_node_over_list(
@@ -176,7 +179,7 @@ class MetadataHook:
if node_id is not None:
registry.update_node_execution(node_id, class_type, results)
except Exception as e:
print(f"Error collecting metadata (post-execution): {str(e)}")
logger.error(f"Error collecting metadata (post-execution): {str(e)}")
return results


@@ -126,9 +126,7 @@ class LoraCyclerLM:
"current_index": [clamped_index],
"next_index": [next_index],
"total_count": [total_count],
"current_lora_name": [
current_lora.get("model_name", current_lora["file_name"])
],
"current_lora_name": [current_lora["file_name"]],
"current_lora_filename": [current_lora["file_name"]],
"next_lora_name": [next_display_name],
"next_lora_filename": [next_lora["file_name"]],


@@ -8,6 +8,9 @@ from ..metadata_collector.metadata_processor import MetadataProcessor
from ..metadata_collector import get_metadata
from PIL import Image, PngImagePlugin
import piexif
import logging
logger = logging.getLogger(__name__)
class SaveImageLM:
NAME = "Save Image (LoraManager)"
@@ -385,7 +388,7 @@ class SaveImageLM:
exif_bytes = piexif.dump(exif_dict)
save_kwargs["exif"] = exif_bytes
except Exception as e:
print(f"Error adding EXIF data: {e}")
logger.error(f"Error adding EXIF data: {e}")
img.save(file_path, format="JPEG", **save_kwargs)
elif file_format == "webp":
try:
@@ -403,7 +406,7 @@ class SaveImageLM:
exif_bytes = piexif.dump(exif_dict)
save_kwargs["exif"] = exif_bytes
except Exception as e:
print(f"Error adding EXIF data: {e}")
logger.error(f"Error adding EXIF data: {e}")
img.save(file_path, format="WEBP", **save_kwargs)
@@ -414,7 +417,7 @@ class SaveImageLM:
})
except Exception as e:
print(f"Error saving image: {e}")
logger.error(f"Error saving image: {e}")
return results


@@ -60,6 +60,22 @@ class TriggerWordToggleLM:
else:
return data
def _normalize_trigger_words(self, trigger_words):
"""Normalize trigger words by splitting by both single and double commas, stripping whitespace, and filtering empty strings"""
if not trigger_words or not isinstance(trigger_words, str):
return set()
# Split by double commas first to preserve groups, then by single commas
groups = re.split(r",{2,}", trigger_words)
words = []
for group in groups:
# Split each group by single comma
group_words = [word.strip() for word in group.split(",")]
words.extend(group_words)
# Filter out empty strings and return as set
return set(word for word in words if word)
def process_trigger_words(
self,
id,
@@ -81,7 +97,7 @@ class TriggerWordToggleLM:
if (
trigger_words_override
and isinstance(trigger_words_override, str)
and trigger_words_override != trigger_words
and self._normalize_trigger_words(trigger_words_override) != self._normalize_trigger_words(trigger_words)
):
filtered_triggers = trigger_words_override
return (filtered_triggers,)
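The effect of the new comparison is that an override differing only in whitespace, ordering, or stray commas no longer counts as a user edit. A minimal standalone sketch of the normalization this hunk adds:

```python
import re

def normalize_trigger_words(trigger_words):
    """Split on double-comma group separators first, then single commas;
    strip whitespace and drop empty entries. Returning a set makes the
    comparison order-insensitive."""
    if not trigger_words or not isinstance(trigger_words, str):
        return set()
    words = []
    for group in re.split(r",{2,}", trigger_words):
        words.extend(word.strip() for word in group.split(","))
    return {word for word in words if word}
```

With this, `normalize_trigger_words("a, b,, c") == normalize_trigger_words("c,b,a")`, so only a substantive change to the word list triggers the override branch.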


@@ -30,6 +30,7 @@ ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("POST", "/api/lm/force-download-example-images", "force_download_example_images"),
RouteDefinition("POST", "/api/lm/cleanup-example-image-folders", "cleanup_example_image_folders"),
RouteDefinition("POST", "/api/lm/example-images/set-nsfw-level", "set_example_image_nsfw_level"),
RouteDefinition("POST", "/api/lm/check-example-images-needed", "check_example_images_needed"),
)


@@ -92,6 +92,19 @@ class ExampleImagesDownloadHandler:
except ExampleImagesDownloadError as exc:
return web.json_response({'success': False, 'error': str(exc)}, status=500)
async def check_example_images_needed(self, request: web.Request) -> web.StreamResponse:
"""Lightweight check to see if any models need example images downloaded."""
try:
payload = await request.json()
model_types = payload.get('model_types', ['lora', 'checkpoint', 'embedding'])
result = await self._download_manager.check_pending_models(model_types)
return web.json_response(result)
except Exception as exc:
return web.json_response(
{'success': False, 'error': str(exc)},
status=500
)
class ExampleImagesManagementHandler:
"""HTTP adapters for import/delete endpoints."""
@@ -161,6 +174,7 @@ class ExampleImagesHandlerSet:
"resume_example_images": self.download.resume_example_images,
"stop_example_images": self.download.stop_example_images,
"force_download_example_images": self.download.force_download_example_images,
"check_example_images_needed": self.download.check_example_images_needed,
"import_example_images": self.management.import_example_images,
"delete_example_image": self.management.delete_example_image,
"set_example_image_nsfw_level": self.management.set_example_image_nsfw_level,


@@ -33,6 +33,10 @@ class PreviewHandler:
raise web.HTTPBadRequest(text="Invalid preview path encoding") from exc
normalized = decoded_path.replace("\\", "/")
if not self._config.is_preview_path_allowed(normalized):
raise web.HTTPForbidden(text="Preview path is not within an allowed directory")
candidate = Path(normalized)
try:
resolved = candidate.expanduser().resolve(strict=False)
@@ -40,12 +44,8 @@ class PreviewHandler:
logger.debug("Failed to resolve preview path %s: %s", normalized, exc)
raise web.HTTPBadRequest(text="Unable to resolve preview path") from exc
resolved_str = str(resolved)
if not self._config.is_preview_path_allowed(resolved_str):
raise web.HTTPForbidden(text="Preview path is not within an allowed directory")
if not resolved.is_file():
logger.debug("Preview file not found at %s", resolved_str)
logger.debug("Preview file not found at %s", str(resolved))
raise web.HTTPNotFound(text="Preview file not found")
# aiohttp's FileResponse handles range requests and content headers for us.


@@ -412,10 +412,11 @@ class RecipeQueryHandler:
if recipe_scanner is None:
raise RuntimeError("Recipe scanner unavailable")
duplicate_groups = await recipe_scanner.find_all_duplicate_recipes()
fingerprint_groups = await recipe_scanner.find_all_duplicate_recipes()
url_groups = await recipe_scanner.find_duplicate_recipes_by_source()
response_data = []
for fingerprint, recipe_ids in duplicate_groups.items():
for fingerprint, recipe_ids in fingerprint_groups.items():
if len(recipe_ids) <= 1:
continue
@@ -439,12 +440,44 @@ class RecipeQueryHandler:
recipes.sort(key=lambda entry: entry.get("modified", 0), reverse=True)
response_data.append(
{
"type": "fingerprint",
"fingerprint": fingerprint,
"count": len(recipes),
"recipes": recipes,
}
)
for url, recipe_ids in url_groups.items():
if len(recipe_ids) <= 1:
continue
recipes = []
for recipe_id in recipe_ids:
recipe = await recipe_scanner.get_recipe_by_id(recipe_id)
if recipe:
recipes.append(
{
"id": recipe.get("id"),
"title": recipe.get("title"),
"file_url": recipe.get("file_url")
or self._format_recipe_file_url(recipe.get("file_path", "")),
"modified": recipe.get("modified"),
"created_date": recipe.get("created_date"),
"lora_count": len(recipe.get("loras", [])),
}
)
if len(recipes) >= 2:
recipes.sort(key=lambda entry: entry.get("modified", 0), reverse=True)
response_data.append(
{
"type": "source_url",
"fingerprint": url,
"count": len(recipes),
"recipes": recipes,
}
)
response_data.sort(key=lambda entry: entry["count"], reverse=True)
return web.json_response({"success": True, "duplicate_groups": response_data})
except Exception as exc:
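The merging of fingerprint-based and URL-based groups above can be sketched as one pure function over the two grouping dicts (names hypothetical; the real handler additionally resolves each recipe id into full metadata before responding):

```python
def build_duplicate_groups(fingerprint_groups, url_groups):
    """Combine {fingerprint: [ids]} and {source_url: [ids]} groupings into
    one response list, tagging each group with its detection type and
    sorting by group size, largest first. Singleton groups are skipped."""
    groups = []
    for kind, mapping in (("fingerprint", fingerprint_groups),
                          ("source_url", url_groups)):
        for key, recipe_ids in mapping.items():
            if len(recipe_ids) <= 1:
                continue
            groups.append({
                "type": kind,
                "fingerprint": key,
                "count": len(recipe_ids),
                "recipes": list(recipe_ids),
            })
    groups.sort(key=lambda g: g["count"], reverse=True)
    return groups
```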
@@ -1021,7 +1054,7 @@ class RecipeManagementHandler:
"exclude": False,
}
async def _download_remote_media(self, image_url: str) -> tuple[bytes, str]:
async def _download_remote_media(self, image_url: str) -> tuple[bytes, str, Any]:
civitai_client = self._civitai_client_getter()
downloader = await self._downloader_factory()
temp_path = None
@@ -1029,6 +1062,7 @@ class RecipeManagementHandler:
with tempfile.NamedTemporaryFile(delete=False) as temp_file:
temp_path = temp_file.name
download_url = image_url
image_info = None
civitai_match = re.match(r"https://civitai\.com/images/(\d+)", image_url)
if civitai_match:
if civitai_client is None:


@@ -1,112 +0,0 @@
import logging
from typing import Dict
from aiohttp import web
from .base_model_routes import BaseModelRoutes
from .model_route_registrar import ModelRouteRegistrar
from ..services.misc_service import MiscService
from ..services.service_registry import ServiceRegistry
from ..config import config
logger = logging.getLogger(__name__)
class MiscModelRoutes(BaseModelRoutes):
"""Misc-specific route controller (VAE, Upscaler)"""
def __init__(self):
"""Initialize Misc routes with Misc service"""
super().__init__()
self.template_name = "misc.html"
async def initialize_services(self):
"""Initialize services from ServiceRegistry"""
misc_scanner = await ServiceRegistry.get_misc_scanner()
update_service = await ServiceRegistry.get_model_update_service()
self.service = MiscService(misc_scanner, update_service=update_service)
self.set_model_update_service(update_service)
# Attach service dependencies
self.attach_service(self.service)
def setup_routes(self, app: web.Application):
"""Setup Misc routes"""
# Schedule service initialization on app startup
app.on_startup.append(lambda _: self.initialize_services())
# Setup common routes with 'misc' prefix (includes page route)
super().setup_routes(app, 'misc')
def setup_specific_routes(self, registrar: ModelRouteRegistrar, prefix: str):
"""Setup Misc-specific routes"""
# Misc info by name
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/info/{name}', prefix, self.get_misc_info)
# VAE roots and Upscaler roots
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/vae_roots', prefix, self.get_vae_roots)
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/upscaler_roots', prefix, self.get_upscaler_roots)
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type for Misc (VAE or Upscaler)"""
return model_type.lower() in ['vae', 'upscaler']
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages"""
return "VAE or Upscaler"
def _parse_specific_params(self, request: web.Request) -> Dict:
"""Parse Misc-specific parameters"""
params: Dict = {}
if 'misc_hash' in request.query:
params['hash_filters'] = {'single_hash': request.query['misc_hash'].lower()}
elif 'misc_hashes' in request.query:
params['hash_filters'] = {
'multiple_hashes': [h.lower() for h in request.query['misc_hashes'].split(',')]
}
return params
async def get_misc_info(self, request: web.Request) -> web.Response:
"""Get detailed information for a specific misc model by name"""
try:
name = request.match_info.get('name', '')
misc_info = await self.service.get_model_info_by_name(name)
if misc_info:
return web.json_response(misc_info)
else:
return web.json_response({"error": "Misc model not found"}, status=404)
except Exception as e:
logger.error(f"Error in get_misc_info: {e}", exc_info=True)
return web.json_response({"error": str(e)}, status=500)
async def get_vae_roots(self, request: web.Request) -> web.Response:
"""Return the list of VAE roots from config"""
try:
roots = config.vae_roots
return web.json_response({
"success": True,
"roots": roots
})
except Exception as e:
logger.error(f"Error getting VAE roots: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def get_upscaler_roots(self, request: web.Request) -> web.Response:
"""Return the list of upscaler roots from config"""
try:
roots = config.upscaler_roots
return web.json_response({
"success": True,
"roots": roots
})
except Exception as e:
logger.error(f"Error getting upscaler roots: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)


@@ -0,0 +1,259 @@
"""
Cache Entry Validator
Validates and repairs cache entries to prevent runtime errors from
missing or invalid critical fields.
"""
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional, Tuple
import logging
import os
logger = logging.getLogger(__name__)
@dataclass
class ValidationResult:
"""Result of validating a single cache entry."""
is_valid: bool
repaired: bool
errors: List[str] = field(default_factory=list)
entry: Optional[Dict[str, Any]] = None
class CacheEntryValidator:
"""
Validates and repairs cache entry core fields.
Critical fields that cause runtime errors when missing:
- file_path: KeyError in multiple locations
- sha256: KeyError/AttributeError in hash operations
Medium severity fields that may cause sorting/display issues:
- size: KeyError during sorting
- modified: KeyError during sorting
- model_name: AttributeError on .lower() calls
Low severity fields:
- tags: KeyError/TypeError in recipe operations
"""
# Field definitions: (default_value, is_required)
CORE_FIELDS: Dict[str, Tuple[Any, bool]] = {
'file_path': ('', True),
'sha256': ('', True),
'file_name': ('', False),
'model_name': ('', False),
'folder': ('', False),
'size': (0, False),
'modified': (0.0, False),
'tags': ([], False),
'preview_url': ('', False),
'base_model': ('', False),
'from_civitai': (True, False),
'favorite': (False, False),
'exclude': (False, False),
'db_checked': (False, False),
'preview_nsfw_level': (0, False),
'notes': ('', False),
'usage_tips': ('', False),
}
@classmethod
def validate(cls, entry: Dict[str, Any], *, auto_repair: bool = True) -> ValidationResult:
"""
Validate a single cache entry.
Args:
entry: The cache entry dictionary to validate
auto_repair: If True, attempt to repair missing/invalid fields
Returns:
ValidationResult with validation status and optionally repaired entry
"""
if entry is None:
return ValidationResult(
is_valid=False,
repaired=False,
errors=['Entry is None'],
entry=None
)
if not isinstance(entry, dict):
return ValidationResult(
is_valid=False,
repaired=False,
errors=[f'Entry is not a dict: {type(entry).__name__}'],
entry=None
)
errors: List[str] = []
repaired = False
working_entry = dict(entry) if auto_repair else entry
for field_name, (default_value, is_required) in cls.CORE_FIELDS.items():
value = working_entry.get(field_name)
# Check if field is missing or None
if value is None:
if is_required:
errors.append(f"Required field '{field_name}' is missing or None")
if auto_repair:
working_entry[field_name] = cls._get_default_copy(default_value)
repaired = True
continue
# Validate field type and value
field_error = cls._validate_field(field_name, value, default_value)
if field_error:
errors.append(field_error)
if auto_repair:
working_entry[field_name] = cls._get_default_copy(default_value)
repaired = True
# Special validation: file_path must not be empty for required field
file_path = working_entry.get('file_path', '')
if not file_path or (isinstance(file_path, str) and not file_path.strip()):
errors.append("Required field 'file_path' is empty")
# Cannot repair empty file_path - entry is invalid
return ValidationResult(
is_valid=False,
repaired=repaired,
errors=errors,
entry=working_entry if auto_repair else None
)
# Special validation: sha256 must not be empty for required field
sha256 = working_entry.get('sha256', '')
if not sha256 or (isinstance(sha256, str) and not sha256.strip()):
errors.append("Required field 'sha256' is empty")
# Cannot repair empty sha256 - entry is invalid
return ValidationResult(
is_valid=False,
repaired=repaired,
errors=errors,
entry=working_entry if auto_repair else None
)
# Normalize sha256 to lowercase if needed
if isinstance(sha256, str):
normalized_sha = sha256.lower().strip()
if normalized_sha != sha256:
working_entry['sha256'] = normalized_sha
repaired = True
# Determine if entry is valid
# Entry is valid if no critical required field errors remain after repair
# Critical fields are file_path and sha256
CRITICAL_REQUIRED_FIELDS = {'file_path', 'sha256'}
has_critical_errors = any(
"Required field" in error and
any(f"'{field}'" in error for field in CRITICAL_REQUIRED_FIELDS)
for error in errors
)
is_valid = not has_critical_errors
return ValidationResult(
is_valid=is_valid,
repaired=repaired,
errors=errors,
entry=working_entry if auto_repair else entry
)
@classmethod
def validate_batch(
cls,
entries: List[Dict[str, Any]],
*,
auto_repair: bool = True
) -> Tuple[List[Dict[str, Any]], List[Dict[str, Any]]]:
"""
Validate a batch of cache entries.
Args:
entries: List of cache entry dictionaries to validate
auto_repair: If True, attempt to repair missing/invalid fields
Returns:
Tuple of (valid_entries, invalid_entries)
"""
if not entries:
return [], []
valid_entries: List[Dict[str, Any]] = []
invalid_entries: List[Dict[str, Any]] = []
for entry in entries:
result = cls.validate(entry, auto_repair=auto_repair)
if result.is_valid:
# Use repaired entry if available, otherwise original
valid_entries.append(result.entry if result.entry else entry)
else:
invalid_entries.append(entry)
# Log invalid entries for debugging
file_path = entry.get('file_path', '<unknown>') if isinstance(entry, dict) else '<not a dict>'
logger.warning(
f"Invalid cache entry for '{file_path}': {', '.join(result.errors)}"
)
return valid_entries, invalid_entries
@classmethod
def _validate_field(cls, field_name: str, value: Any, default_value: Any) -> Optional[str]:
"""
Validate a specific field value.
Returns an error message if invalid, None if valid.
"""
expected_type = type(default_value)
# Special handling for numeric types
if expected_type == int:
if not isinstance(value, (int, float)):
return f"Field '{field_name}' should be numeric, got {type(value).__name__}"
elif expected_type == float:
if not isinstance(value, (int, float)):
return f"Field '{field_name}' should be numeric, got {type(value).__name__}"
elif expected_type == bool:
# Be lenient with boolean fields - accept truthy/falsy values
pass
elif expected_type == str:
if not isinstance(value, str):
return f"Field '{field_name}' should be string, got {type(value).__name__}"
elif expected_type == list:
if not isinstance(value, (list, tuple)):
return f"Field '{field_name}' should be list, got {type(value).__name__}"
return None
@classmethod
def _get_default_copy(cls, default_value: Any) -> Any:
"""Get a copy of the default value to avoid shared mutable state."""
if isinstance(default_value, list):
return list(default_value)
if isinstance(default_value, dict):
return dict(default_value)
return default_value
@classmethod
def get_file_path_safe(cls, entry: Dict[str, Any], default: str = '') -> str:
"""Safely get file_path from an entry."""
if not isinstance(entry, dict):
return default
value = entry.get('file_path')
if isinstance(value, str):
return value
return default
@classmethod
def get_sha256_safe(cls, entry: Dict[str, Any], default: str = '') -> str:
"""Safely get sha256 from an entry."""
if not isinstance(entry, dict):
return default
value = entry.get('sha256')
if isinstance(value, str):
return value.lower()
return default
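For reference, the validator's core rules can be exercised with a standalone sketch. This is a simplified stand-in for `CacheEntryValidator`, not the class above; the field names and defaults below are illustrative:

```python
from typing import Any, Dict, Tuple

REQUIRED = ('file_path', 'sha256')                    # unrepairable when empty
DEFAULTS = {'size': 0, 'tags': [], 'model_name': ''}  # repairable with defaults

def validate_entry(entry: Dict[str, Any]) -> Tuple[bool, Dict[str, Any]]:
    repaired = dict(entry)
    for name in REQUIRED:
        value = repaired.get(name)
        if not isinstance(value, str) or not value.strip():
            return False, repaired  # identity fields cannot be synthesized
    repaired['sha256'] = repaired['sha256'].strip().lower()
    for name, default in DEFAULTS.items():
        if repaired.get(name) is None:
            # copy mutable defaults so entries never share state
            repaired[name] = list(default) if isinstance(default, list) else default
    return True, repaired

ok, fixed = validate_entry({'file_path': '/models/a.safetensors', 'sha256': 'ABCD'})
# ok is True; sha256 normalized to 'abcd'; tags repaired to []
bad, _ = validate_entry({'file_path': '', 'sha256': 'abcd'})
# bad is False: an empty file_path cannot be repaired
```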


@@ -0,0 +1,201 @@
"""
Cache Health Monitor
Monitors cache health status and determines when user intervention is needed.
"""
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, List, Optional
import logging
from .cache_entry_validator import CacheEntryValidator, ValidationResult
logger = logging.getLogger(__name__)
class CacheHealthStatus(Enum):
"""Health status of the cache."""
HEALTHY = "healthy"
DEGRADED = "degraded"
CORRUPTED = "corrupted"
@dataclass
class HealthReport:
"""Report of cache health check."""
status: CacheHealthStatus
total_entries: int
valid_entries: int
invalid_entries: int
repaired_entries: int
invalid_paths: List[str] = field(default_factory=list)
message: str = ""
@property
def corruption_rate(self) -> float:
"""Calculate the percentage of invalid entries."""
if self.total_entries <= 0:
return 0.0
return self.invalid_entries / self.total_entries
def to_dict(self) -> Dict[str, Any]:
"""Convert to dictionary for JSON serialization."""
return {
'status': self.status.value,
'total_entries': self.total_entries,
'valid_entries': self.valid_entries,
'invalid_entries': self.invalid_entries,
'repaired_entries': self.repaired_entries,
'corruption_rate': f"{self.corruption_rate:.1%}",
'invalid_paths': self.invalid_paths[:10], # Limit to first 10
'message': self.message,
}
class CacheHealthMonitor:
"""
Monitors cache health and determines appropriate status.
Thresholds:
- HEALTHY: no invalid entries
- DEGRADED: any invalid entries below the corrupted threshold (auto-repaired; rebuild recommended)
- CORRUPTED: >=5% invalid entries (significant data loss likely)
"""
# Threshold percentages
DEGRADED_THRESHOLD = 0.01 # 1% - show warning
CORRUPTED_THRESHOLD = 0.05 # 5% - critical warning
def __init__(
self,
*,
degraded_threshold: float = DEGRADED_THRESHOLD,
corrupted_threshold: float = CORRUPTED_THRESHOLD
):
"""
Initialize the health monitor.
Args:
degraded_threshold: Corruption rate threshold for DEGRADED status
corrupted_threshold: Corruption rate threshold for CORRUPTED status
"""
self.degraded_threshold = degraded_threshold
self.corrupted_threshold = corrupted_threshold
def check_health(
self,
entries: List[Dict[str, Any]],
*,
auto_repair: bool = True
) -> HealthReport:
"""
Check the health of cache entries.
Args:
entries: List of cache entry dictionaries to check
auto_repair: If True, attempt to repair entries during validation
Returns:
HealthReport with status and statistics
"""
if not entries:
return HealthReport(
status=CacheHealthStatus.HEALTHY,
total_entries=0,
valid_entries=0,
invalid_entries=0,
repaired_entries=0,
message="Cache is empty"
)
total_entries = len(entries)
valid_entries: List[Dict[str, Any]] = []
invalid_entries: List[Dict[str, Any]] = []
repaired_count = 0
invalid_paths: List[str] = []
for entry in entries:
result = CacheEntryValidator.validate(entry, auto_repair=auto_repair)
if result.is_valid:
valid_entries.append(result.entry if result.entry else entry)
if result.repaired:
repaired_count += 1
else:
invalid_entries.append(entry)
# Extract file path for reporting
file_path = CacheEntryValidator.get_file_path_safe(entry, '<unknown>')
invalid_paths.append(file_path)
invalid_count = len(invalid_entries)
valid_count = len(valid_entries)
# Determine status based on corruption rate
corruption_rate = invalid_count / total_entries if total_entries > 0 else 0.0
if invalid_count == 0:
status = CacheHealthStatus.HEALTHY
message = "Cache is healthy"
elif corruption_rate >= self.corrupted_threshold:
status = CacheHealthStatus.CORRUPTED
message = (
f"Cache is corrupted: {invalid_count} invalid entries "
f"({corruption_rate:.1%}). Rebuild recommended."
)
elif corruption_rate >= self.degraded_threshold or invalid_count > 0:
status = CacheHealthStatus.DEGRADED
message = (
f"Cache has {invalid_count} invalid entries "
f"({corruption_rate:.1%}). Consider rebuilding cache."
)
else:
# This shouldn't happen, but handle gracefully
status = CacheHealthStatus.HEALTHY
message = "Cache is healthy"
# Log the health check result
if status != CacheHealthStatus.HEALTHY:
logger.warning(
f"Cache health check: {status.value} - "
f"{invalid_count}/{total_entries} invalid, "
f"{repaired_count} repaired"
)
if invalid_paths:
logger.debug(f"Invalid entry paths: {invalid_paths[:5]}")
return HealthReport(
status=status,
total_entries=total_entries,
valid_entries=valid_count,
invalid_entries=invalid_count,
repaired_entries=repaired_count,
invalid_paths=invalid_paths,
message=message
)
def should_notify_user(self, report: HealthReport) -> bool:
"""
Determine if the user should be notified about cache health.
Args:
report: The health report to evaluate
Returns:
True if user should be notified
"""
return report.status != CacheHealthStatus.HEALTHY
def get_notification_severity(self, report: HealthReport) -> str:
"""
Get the severity level for user notification.
Args:
report: The health report to evaluate
Returns:
Severity string: 'warning' or 'error'
"""
if report.status == CacheHealthStatus.CORRUPTED:
return 'error'
return 'warning'
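The status decision above reduces to a small pure function; a standalone sketch mirroring the monitor's thresholds (not the actual class):

```python
DEGRADED_THRESHOLD = 0.01   # 1% - show warning
CORRUPTED_THRESHOLD = 0.05  # 5% - critical warning

def health_status(total: int, invalid: int) -> str:
    # Mirrors check_health(): any invalid entry degrades the cache,
    # and at or above the corrupted threshold it is reported corrupted.
    rate = invalid / total if total > 0 else 0.0
    if invalid == 0:
        return 'healthy'
    if rate >= CORRUPTED_THRESHOLD:
        return 'corrupted'
    return 'degraded'

statuses = [health_status(1000, 0), health_status(1000, 3), health_status(100, 10)]
# -> ['healthy', 'degraded', 'corrupted']
```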


@@ -9,7 +9,7 @@ from collections import OrderedDict
import uuid
from typing import Dict, List, Optional, Set, Tuple
from urllib.parse import urlparse
from ..utils.models import LoraMetadata, CheckpointMetadata, EmbeddingMetadata, MiscMetadata
from ..utils.models import LoraMetadata, CheckpointMetadata, EmbeddingMetadata
from ..utils.constants import CARD_PREVIEW_WIDTH, DIFFUSION_MODEL_BASE_MODELS, VALID_LORA_TYPES
from ..utils.civitai_utils import rewrite_preview_url
from ..utils.preview_selection import select_preview_media
@@ -60,10 +60,6 @@ class DownloadManager:
"""Get the checkpoint scanner from registry"""
return await ServiceRegistry.get_checkpoint_scanner()
async def _get_misc_scanner(self):
"""Get the misc scanner from registry"""
return await ServiceRegistry.get_misc_scanner()
async def download_from_civitai(
self,
model_id: int = None,
@@ -279,7 +275,6 @@ class DownloadManager:
lora_scanner = await self._get_lora_scanner()
checkpoint_scanner = await self._get_checkpoint_scanner()
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
misc_scanner = await self._get_misc_scanner()
# Check lora scanner first
if await lora_scanner.check_model_version_exists(model_version_id):
@@ -304,13 +299,6 @@ class DownloadManager:
"error": "Model version already exists in embedding library",
}
# Check misc scanner (VAE, Upscaler)
if await misc_scanner.check_model_version_exists(model_version_id):
return {
"success": False,
"error": "Model version already exists in misc library",
}
# Use CivArchive provider directly when source is 'civarchive'
# This prioritizes CivArchive metadata (with mirror availability info) over Civitai
if source == "civarchive":
@@ -349,10 +337,6 @@ class DownloadManager:
model_type = "lora"
elif model_type_from_info == "textualinversion":
model_type = "embedding"
elif model_type_from_info == "vae":
model_type = "misc"
elif model_type_from_info == "upscaler":
model_type = "misc"
else:
return {
"success": False,
@@ -395,14 +379,6 @@ class DownloadManager:
"success": False,
"error": "Model version already exists in embedding library",
}
elif model_type == "misc":
# Check misc scanner (VAE, Upscaler)
misc_scanner = await self._get_misc_scanner()
if await misc_scanner.check_model_version_exists(version_id):
return {
"success": False,
"error": "Model version already exists in misc library",
}
# Handle use_default_paths
if use_default_paths:
@@ -437,26 +413,6 @@ class DownloadManager:
"error": "Default embedding root path not set in settings",
}
save_dir = default_path
elif model_type == "misc":
from ..config import config
civitai_type = version_info.get("model", {}).get("type", "").lower()
if civitai_type == "vae":
default_paths = config.vae_roots
error_msg = "VAE root path not configured"
elif civitai_type == "upscaler":
default_paths = config.upscaler_roots
error_msg = "Upscaler root path not configured"
else:
default_paths = config.misc_roots
error_msg = "Misc root path not configured"
if not default_paths:
return {
"success": False,
"error": error_msg,
}
save_dir = default_paths[0] if default_paths else ""
# Calculate relative path using template
relative_path = self._calculate_relative_path(version_info, model_type)
@@ -559,11 +515,6 @@ class DownloadManager:
version_info, file_info, save_path
)
logger.info(f"Creating EmbeddingMetadata for {file_name}")
elif model_type == "misc":
metadata = MiscMetadata.from_civitai_info(
version_info, file_info, save_path
)
logger.info(f"Creating MiscMetadata for {file_name}")
# 6. Start download process
result = await self._execute_download(
@@ -669,8 +620,6 @@ class DownloadManager:
scanner = await self._get_checkpoint_scanner()
elif model_type == "embedding":
scanner = await ServiceRegistry.get_embedding_scanner()
elif model_type == "misc":
scanner = await self._get_misc_scanner()
except Exception as exc:
logger.debug("Failed to acquire scanner for %s models: %s", model_type, exc)
@@ -1067,9 +1016,6 @@ class DownloadManager:
elif model_type == "embedding":
scanner = await ServiceRegistry.get_embedding_scanner()
logger.info(f"Updating embedding cache for {actual_file_paths[0]}")
elif model_type == "misc":
scanner = await self._get_misc_scanner()
logger.info(f"Updating misc cache for {actual_file_paths[0]}")
adjust_cached_entry = (
getattr(scanner, "adjust_cached_entry", None)
@@ -1179,14 +1125,6 @@ class DownloadManager:
".pkl",
".sft",
}
if model_type == "misc":
return {
".ckpt",
".pt",
".bin",
".pth",
".safetensors",
}
return {".safetensors"}
async def _extract_model_files_from_archive(

View File

@@ -30,36 +30,36 @@ class LoraScanner(ModelScanner):
async def diagnose_hash_index(self):
"""Diagnostic method to verify hash index functionality"""
print("\n\n*** DIAGNOSING LORA HASH INDEX ***\n\n", file=sys.stderr)
logger.debug("\n\n*** DIAGNOSING LORA HASH INDEX ***\n\n")
# First check if the hash index has any entries
if hasattr(self, '_hash_index'):
index_entries = len(self._hash_index._hash_to_path)
print(f"Hash index has {index_entries} entries", file=sys.stderr)
logger.debug(f"Hash index has {index_entries} entries")
# Print a few example entries if available
if index_entries > 0:
print("\nSample hash index entries:", file=sys.stderr)
logger.debug("\nSample hash index entries:")
count = 0
for hash_val, path in self._hash_index._hash_to_path.items():
if count < 5: # Just show the first 5
print(f"Hash: {hash_val[:8]}... -> Path: {path}", file=sys.stderr)
logger.debug(f"Hash: {hash_val[:8]}... -> Path: {path}")
count += 1
else:
break
else:
print("Hash index not initialized", file=sys.stderr)
logger.debug("Hash index not initialized")
# Try looking up by a known hash for testing
if not hasattr(self, '_hash_index') or not self._hash_index._hash_to_path:
print("No hash entries to test lookup with", file=sys.stderr)
logger.debug("No hash entries to test lookup with")
return
test_hash = next(iter(self._hash_index._hash_to_path.keys()))
test_path = self._hash_index.get_path(test_hash)
print(f"\nTest lookup by hash: {test_hash[:8]}... -> {test_path}", file=sys.stderr)
logger.debug(f"\nTest lookup by hash: {test_hash[:8]}... -> {test_path}")
# Also test reverse lookup
test_hash_result = self._hash_index.get_hash(test_path)
print(f"Test reverse lookup: {test_path} -> {test_hash_result[:8]}...\n\n", file=sys.stderr)
logger.debug(f"Test reverse lookup: {test_path} -> {test_hash_result[:8]}...\n\n")


@@ -44,6 +44,8 @@ async def initialize_metadata_providers():
logger.debug(f"SQLite metadata provider registered with database: {db_path}")
else:
logger.warning("Metadata archive database is enabled but database file not found")
logger.info("Automatically disabling enable_metadata_archive_db setting")
settings_manager.set('enable_metadata_archive_db', False)
except Exception as e:
logger.error(f"Failed to initialize SQLite metadata provider: {e}")


@@ -243,17 +243,27 @@ class MetadataSyncService:
last_error = error or last_error
if civitai_metadata is None or metadata_provider is None:
# Track if we need to save metadata
needs_save = False
if sqlite_attempted:
model_data["db_checked"] = True
needs_save = True
if civitai_api_not_found:
model_data["from_civitai"] = False
model_data["civitai_deleted"] = True
model_data["db_checked"] = sqlite_attempted or (enable_archive and model_data.get("db_checked", False))
model_data["last_checked_at"] = datetime.now().timestamp()
needs_save = True
# Save metadata if any state was updated
if needs_save:
data_to_save = model_data.copy()
data_to_save.pop("folder", None)
# Update last_checked_at for sqlite-only attempts if not already set
if "last_checked_at" not in data_to_save:
data_to_save["last_checked_at"] = datetime.now().timestamp()
await self._metadata_manager.save_metadata(file_path, data_to_save)
default_error = (


@@ -1,55 +0,0 @@
import logging
from typing import Any, Dict, List, Optional
from ..utils.models import MiscMetadata
from ..config import config
from .model_scanner import ModelScanner
from .model_hash_index import ModelHashIndex
logger = logging.getLogger(__name__)
class MiscScanner(ModelScanner):
"""Service for scanning and managing misc files (VAE, Upscaler)"""
def __init__(self):
# Define supported file extensions (combined from VAE and upscaler)
file_extensions = {'.safetensors', '.pt', '.bin', '.ckpt', '.pth'}
super().__init__(
model_type="misc",
model_class=MiscMetadata,
file_extensions=file_extensions,
hash_index=ModelHashIndex()
)
def _resolve_sub_type(self, root_path: Optional[str]) -> Optional[str]:
"""Resolve the sub-type based on the root path."""
if not root_path:
return None
if config.vae_roots and root_path in config.vae_roots:
return "vae"
if config.upscaler_roots and root_path in config.upscaler_roots:
return "upscaler"
return None
def adjust_metadata(self, metadata, file_path, root_path):
"""Adjust metadata during scanning to set sub_type."""
sub_type = self._resolve_sub_type(root_path)
if sub_type:
metadata.sub_type = sub_type
return metadata
def adjust_cached_entry(self, entry: Dict[str, Any]) -> Dict[str, Any]:
"""Adjust entries loaded from the persisted cache to ensure sub_type is set."""
sub_type = self._resolve_sub_type(
self._find_root_for_file(entry.get("file_path"))
)
if sub_type:
entry["sub_type"] = sub_type
return entry
def get_model_roots(self) -> List[str]:
"""Get misc root directories (VAE and upscaler)"""
return config.misc_roots


@@ -1,55 +0,0 @@
import os
import logging
from typing import Dict
from .base_model_service import BaseModelService
from ..utils.models import MiscMetadata
from ..config import config
logger = logging.getLogger(__name__)
class MiscService(BaseModelService):
"""Misc-specific service implementation (VAE, Upscaler)"""
def __init__(self, scanner, update_service=None):
"""Initialize Misc service
Args:
scanner: Misc scanner instance
update_service: Optional service for remote update tracking.
"""
super().__init__("misc", scanner, MiscMetadata, update_service=update_service)
async def format_response(self, misc_data: Dict) -> Dict:
"""Format Misc data for API response"""
# Get sub_type from cache entry (new canonical field)
sub_type = misc_data.get("sub_type", "vae")
return {
"model_name": misc_data["model_name"],
"file_name": misc_data["file_name"],
"preview_url": config.get_preview_static_url(misc_data.get("preview_url", "")),
"preview_nsfw_level": misc_data.get("preview_nsfw_level", 0),
"base_model": misc_data.get("base_model", ""),
"folder": misc_data["folder"],
"sha256": misc_data.get("sha256", ""),
"file_path": misc_data["file_path"].replace(os.sep, "/"),
"file_size": misc_data.get("size", 0),
"modified": misc_data.get("modified", ""),
"tags": misc_data.get("tags", []),
"from_civitai": misc_data.get("from_civitai", True),
"usage_count": misc_data.get("usage_count", 0),
"notes": misc_data.get("notes", ""),
"sub_type": sub_type,
"favorite": misc_data.get("favorite", False),
"update_available": bool(misc_data.get("update_available", False)),
"civitai": self.filter_civitai_data(misc_data.get("civitai", {}), minimal=True)
}
def find_duplicate_hashes(self) -> Dict:
"""Find Misc models with duplicate SHA256 hashes"""
return self.scanner._hash_index.get_duplicate_hashes()
def find_duplicate_filenames(self) -> Dict:
"""Find Misc models with conflicting filenames"""
return self.scanner._hash_index.get_duplicate_filenames()


@@ -5,7 +5,6 @@ import logging
logger = logging.getLogger(__name__)
from typing import Any, Dict, List, Optional, Tuple
from dataclasses import dataclass, field
from operator import itemgetter
from natsort import natsorted
# Supported sort modes: (sort_key, order)
@@ -229,17 +228,17 @@ class ModelCache:
reverse=reverse
)
elif sort_key == 'date':
# Sort by modified timestamp
# Sort by modified timestamp (use .get() with default to handle missing fields)
result = sorted(
data,
key=itemgetter('modified'),
key=lambda x: x.get('modified', 0.0),
reverse=reverse
)
elif sort_key == 'size':
# Sort by file size
# Sort by file size (use .get() with default to handle missing fields)
result = sorted(
data,
key=itemgetter('size'),
key=lambda x: x.get('size', 0),
reverse=reverse
)
elif sort_key == 'usage':
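The motivation for the change can be shown in isolation: entries missing the field raise `KeyError` under `itemgetter`, but sort with a default under `.get` (the entries below are hypothetical):

```python
from operator import itemgetter

entries = [
    {'file_path': 'a', 'modified': 200.0},
    {'file_path': 'b'},                     # 'modified' missing from cache entry
    {'file_path': 'c', 'modified': 100.0},
]

try:
    sorted(entries, key=itemgetter('modified'), reverse=True)
except KeyError:
    pass  # the old key function fails on the incomplete entry

result = sorted(entries, key=lambda x: x.get('modified', 0.0), reverse=True)
order = [e['file_path'] for e in result]  # incomplete entry sorts last: ['a', 'c', 'b']
```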


@@ -676,10 +676,12 @@ class ModelMetadataProviderManager:
def _get_provider(self, provider_name: str = None) -> ModelMetadataProvider:
"""Get provider by name or default provider"""
if provider_name and provider_name in self.providers:
if provider_name:
if provider_name not in self.providers:
raise ValueError(f"Provider '{provider_name}' is not registered")
return self.providers[provider_name]
if self.default_provider is None:
raise ValueError("No default provider set and no valid provider specified")
return self.providers[self.default_provider]
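The stricter lookup can be sketched standalone (the registry and provider names here are hypothetical): a named-but-unregistered provider now raises instead of silently falling back to the default:

```python
providers = {'sqlite': object()}  # stand-in registry
default_provider = 'sqlite'

def get_provider(name=None):
    if name:
        if name not in providers:
            # previously this fell through to the default provider silently
            raise ValueError(f"Provider '{name}' is not registered")
        return providers[name]
    if default_provider is None:
        raise ValueError("No default provider set and no valid provider specified")
    return providers[default_provider]

try:
    get_provider('civitai')  # unregistered name
    raised = False
except ValueError:
    raised = True
```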


@@ -20,6 +20,8 @@ from .service_registry import ServiceRegistry
from .websocket_manager import ws_manager
from .persistent_model_cache import get_persistent_cache
from .settings_manager import get_settings_manager
from .cache_entry_validator import CacheEntryValidator
from .cache_health_monitor import CacheHealthMonitor, CacheHealthStatus
logger = logging.getLogger(__name__)
@@ -468,6 +470,39 @@ class ModelScanner:
for tag in adjusted_item.get('tags') or []:
tags_count[tag] = tags_count.get(tag, 0) + 1
# Validate cache entries and check health
valid_entries, invalid_entries = CacheEntryValidator.validate_batch(
adjusted_raw_data, auto_repair=True
)
if invalid_entries:
monitor = CacheHealthMonitor()
report = monitor.check_health(adjusted_raw_data, auto_repair=True)
if report.status != CacheHealthStatus.HEALTHY:
# Broadcast health warning to frontend
await ws_manager.broadcast_cache_health_warning(report, page_type)
logger.warning(
f"{self.model_type.capitalize()} Scanner: Cache health issue detected - "
f"{report.invalid_entries} invalid entries, {report.repaired_entries} repaired"
)
# Use only valid entries
adjusted_raw_data = valid_entries
# Rebuild tags count from valid entries only
tags_count = {}
for item in adjusted_raw_data:
for tag in item.get('tags') or []:
tags_count[tag] = tags_count.get(tag, 0) + 1
# Remove invalid entries from hash index
for invalid_entry in invalid_entries:
file_path = CacheEntryValidator.get_file_path_safe(invalid_entry)
sha256 = CacheEntryValidator.get_sha256_safe(invalid_entry)
if file_path:
hash_index.remove_by_path(file_path, sha256)
scan_result = CacheBuildResult(
raw_data=adjusted_raw_data,
hash_index=hash_index,
@@ -651,7 +686,6 @@ class ModelScanner:
async def _initialize_cache(self) -> None:
"""Initialize or refresh the cache"""
print("init start", flush=True)
self._is_initializing = True # Set flag
try:
start_time = time.time()
@@ -665,7 +699,6 @@ class ModelScanner:
scan_result = await self._gather_model_data()
await self._apply_scan_result(scan_result)
await self._save_persistent_cache(scan_result)
print("init end", flush=True)
logger.info(
f"{self.model_type.capitalize()} Scanner: Cache initialization completed in {time.time() - start_time:.2f} seconds, "
@@ -776,6 +809,18 @@ class ModelScanner:
model_data = self.adjust_cached_entry(dict(model_data))
if not model_data:
continue
# Validate the new entry before adding
validation_result = CacheEntryValidator.validate(
model_data, auto_repair=True
)
if not validation_result.is_valid:
logger.warning(
f"Skipping invalid entry during reconcile: {path}"
)
continue
model_data = validation_result.entry
self._ensure_license_flags(model_data)
# Add to cache
self._cache.raw_data.append(model_data)
@@ -1090,6 +1135,17 @@ class ModelScanner:
processed_files += 1
if result:
# Validate the entry before adding
validation_result = CacheEntryValidator.validate(
result, auto_repair=True
)
if not validation_result.is_valid:
logger.warning(
f"Skipping invalid scan result: {file_path}"
)
continue
result = validation_result.entry
self._ensure_license_flags(result)
raw_data.append(result)


@@ -118,24 +118,19 @@ class ModelServiceFactory:
def register_default_model_types():
"""Register the default model types (LoRA, Checkpoint, Embedding, and Misc)"""
"""Register the default model types (LoRA, Checkpoint, and Embedding)"""
from ..services.lora_service import LoraService
from ..services.checkpoint_service import CheckpointService
from ..services.embedding_service import EmbeddingService
from ..services.misc_service import MiscService
from ..routes.lora_routes import LoraRoutes
from ..routes.checkpoint_routes import CheckpointRoutes
from ..routes.embedding_routes import EmbeddingRoutes
from ..routes.misc_model_routes import MiscModelRoutes
# Register LoRA model type
ModelServiceFactory.register_model_type('lora', LoraService, LoraRoutes)
# Register Checkpoint model type
ModelServiceFactory.register_model_type('checkpoint', CheckpointService, CheckpointRoutes)
# Register Embedding model type
ModelServiceFactory.register_model_type('embedding', EmbeddingService, EmbeddingRoutes)
# Register Misc model type (VAE, Upscaler)
ModelServiceFactory.register_model_type('misc', MiscService, MiscModelRoutes)
ModelServiceFactory.register_model_type('embedding', EmbeddingService, EmbeddingRoutes)


@@ -9,7 +9,7 @@ from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Tuple
from ..config import config
from .recipe_cache import RecipeCache
from .recipe_fts_index import RecipeFTSIndex
from .persistent_recipe_cache import PersistentRecipeCache, get_persistent_recipe_cache
from .persistent_recipe_cache import PersistentRecipeCache, get_persistent_recipe_cache, PersistedRecipeData
from .service_registry import ServiceRegistry
from .lora_scanner import LoraScanner
from .metadata_service import get_default_metadata_provider
@@ -431,6 +431,16 @@ class RecipeScanner:
4. Persist results for next startup
"""
try:
# Ensure cache exists to avoid None reference errors
if self._cache is None:
self._cache = RecipeCache(
raw_data=[],
sorted_by_name=[],
sorted_by_date=[],
folders=[],
folder_tree={},
)
# Create a new event loop for this thread
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
@@ -492,7 +502,7 @@ class RecipeScanner:
def _reconcile_recipe_cache(
self,
persisted: "PersistedRecipeData",
persisted: PersistedRecipeData,
recipes_dir: str,
) -> Tuple[List[Dict], bool, Dict[str, str]]:
"""Reconcile persisted cache with current filesystem state.
@@ -504,8 +514,6 @@ class RecipeScanner:
Returns:
Tuple of (recipes list, changed flag, json_paths dict).
"""
from .persistent_recipe_cache import PersistedRecipeData
recipes: List[Dict] = []
json_paths: Dict[str, str] = {}
changed = False
@@ -522,32 +530,37 @@ class RecipeScanner:
except OSError:
continue
# Build lookup of persisted recipes by json_path
persisted_by_path: Dict[str, Dict] = {}
for recipe in persisted.raw_data:
recipe_id = str(recipe.get('id', ''))
if recipe_id:
# Find the json_path from file_stats
for json_path, (mtime, size) in persisted.file_stats.items():
if os.path.basename(json_path).startswith(recipe_id):
persisted_by_path[json_path] = recipe
break
# Also index by recipe ID for faster lookups
persisted_by_id: Dict[str, Dict] = {
# Build recipe_id -> recipe lookup (O(n) instead of O(n²))
recipe_by_id: Dict[str, Dict] = {
str(r.get('id', '')): r for r in persisted.raw_data if r.get('id')
}
# Build json_path -> recipe lookup from file_stats (O(m))
persisted_by_path: Dict[str, Dict] = {}
for json_path in persisted.file_stats.keys():
basename = os.path.basename(json_path)
if basename.lower().endswith('.recipe.json'):
recipe_id = basename[:-len('.recipe.json')]
if recipe_id in recipe_by_id:
persisted_by_path[json_path] = recipe_by_id[recipe_id]
# Process current files
for file_path, (current_mtime, current_size) in current_files.items():
cached_stats = persisted.file_stats.get(file_path)
# Extract recipe_id from current file for fallback lookup
basename = os.path.basename(file_path)
recipe_id_from_file = basename[:-len('.recipe.json')] if basename.lower().endswith('.recipe.json') else None
if cached_stats:
cached_mtime, cached_size = cached_stats
# Check if file is unchanged
if abs(current_mtime - cached_mtime) < 1.0 and current_size == cached_size:
# Use cached data
# Try direct path lookup first
cached_recipe = persisted_by_path.get(file_path)
# Fallback to recipe_id lookup if path lookup fails
if not cached_recipe and recipe_id_from_file:
cached_recipe = recipe_by_id.get(recipe_id_from_file)
if cached_recipe:
recipe_id = str(cached_recipe.get('id', ''))
# Track folder from file path
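The unchanged-file test above (equal size, mtime within one second) can be sketched in isolation; `file_unchanged` is a hypothetical helper, and the 1-second tolerance mirrors the comparison in the diff, which exists because filesystems differ in mtime precision:

```python
import os


def file_unchanged(path, cached_mtime, cached_size, tolerance=1.0):
    """Treat a file as unchanged when its size matches exactly and its
    mtime differs from the cached value by less than `tolerance` seconds."""
    try:
        stat = os.stat(path)
    except OSError:
        # Missing or unreadable files always count as changed
        return False
    return abs(stat.st_mtime - cached_mtime) < tolerance and stat.st_size == cached_size
```

An exact mtime comparison would cause spurious cache misses after copies between filesystems with different timestamp resolution.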
@@ -2218,3 +2231,26 @@ class RecipeScanner:
duplicate_groups = {k: v for k, v in fingerprint_groups.items() if len(v) > 1}
return duplicate_groups
async def find_duplicate_recipes_by_source(self) -> dict:
"""Find all recipe duplicates based on source_path (Civitai image URLs)
Returns:
Dictionary where keys are source URLs and values are lists of recipe IDs
"""
cache = await self.get_cached_data()
url_groups = {}
for recipe in cache.raw_data:
source_url = recipe.get('source_path', '').strip()
if not source_url:
continue
if source_url not in url_groups:
url_groups[source_url] = []
url_groups[source_url].append(recipe.get('id'))
duplicate_groups = {k: v for k, v in url_groups.items() if len(v) > 1}
return duplicate_groups
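The URL-based grouping above reduces to a small standalone function; a minimal sketch (the recipe dicts here are hypothetical):

```python
def group_duplicates_by_source(recipes):
    """Group recipe IDs by source_path URL; keep only groups with >1 entry."""
    url_groups = {}
    for recipe in recipes:
        source_url = (recipe.get('source_path') or '').strip()
        if not source_url:
            continue  # recipes without a source URL cannot be URL-duplicates
        url_groups.setdefault(source_url, []).append(recipe.get('id'))
    return {url: ids for url, ids in url_groups.items() if len(ids) > 1}
```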


@@ -233,44 +233,23 @@ class ServiceRegistry:
async def get_embedding_scanner(cls):
"""Get or create Embedding scanner instance"""
service_name = "embedding_scanner"
if service_name in cls._services:
return cls._services[service_name]
async with cls._get_lock(service_name):
# Double-check after acquiring lock
if service_name in cls._services:
return cls._services[service_name]
# Import here to avoid circular imports
from .embedding_scanner import EmbeddingScanner
scanner = await EmbeddingScanner.get_instance()
cls._services[service_name] = scanner
logger.debug(f"Created and registered {service_name}")
return scanner
@classmethod
async def get_misc_scanner(cls):
"""Get or create Misc scanner instance (VAE, Upscaler)"""
service_name = "misc_scanner"
if service_name in cls._services:
return cls._services[service_name]
async with cls._get_lock(service_name):
# Double-check after acquiring lock
if service_name in cls._services:
return cls._services[service_name]
# Import here to avoid circular imports
from .misc_scanner import MiscScanner
scanner = await MiscScanner.get_instance()
cls._services[service_name] = scanner
logger.debug(f"Created and registered {service_name}")
return scanner
@classmethod
def clear_services(cls):
"""Clear all registered services - mainly for testing"""


@@ -63,7 +63,7 @@ DEFAULT_SETTINGS: Dict[str, Any] = {
"compact_mode": False,
"priority_tags": DEFAULT_PRIORITY_TAG_CONFIG.copy(),
"model_name_display": "model_name",
"model_card_footer_action": "example_images",
"model_card_footer_action": "replace_preview",
"update_flag_strategy": "same_base",
"auto_organize_exclusions": [],
}


@@ -48,9 +48,14 @@ class BulkMetadataRefreshUseCase:
for model in cache.raw_data
if model.get("sha256")
and (not model.get("civitai") or not model["civitai"].get("id"))
and (
(enable_metadata_archive_db and not model.get("db_checked", False))
or (not enable_metadata_archive_db and model.get("from_civitai") is True)
and not (
# Skip models confirmed not on CivitAI when no retry source remains
model.get("from_civitai") is False
and model.get("civitai_deleted") is True
and (
not enable_metadata_archive_db
or model.get("db_checked", False)
)
)
]
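The rewritten filter condition is easier to verify as a standalone predicate; a sketch of the new inclusion rule (field names taken from the diff, the helper name is hypothetical):

```python
def needs_metadata_refresh(model, enable_metadata_archive_db):
    """A model qualifies for a refresh attempt unless it is confirmed
    absent from CivitAI and no retry source (archive DB) remains."""
    if not model.get("sha256"):
        return False
    civitai = model.get("civitai")
    if civitai and civitai.get("id"):
        return False  # already has CivitAI metadata
    confirmed_missing = (
        model.get("from_civitai") is False
        and model.get("civitai_deleted") is True
        and (not enable_metadata_archive_db or model.get("db_checked", False))
    )
    return not confirmed_missing
```

Note the asymmetry: with the archive DB enabled, a deleted model is still retried until `db_checked` is set.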


@@ -255,6 +255,42 @@ class WebSocketManager:
self._download_progress.pop(download_id, None)
logger.debug(f"Cleaned up old download progress for {download_id}")
async def broadcast_cache_health_warning(self, report: 'HealthReport', page_type: str = None):
"""
Broadcast cache health warning to frontend.
Args:
report: HealthReport instance from CacheHealthMonitor
page_type: The page type (loras, checkpoints, embeddings)
"""
from .cache_health_monitor import CacheHealthStatus
# Only broadcast if there are issues
if report.status == CacheHealthStatus.HEALTHY:
return
payload = {
'type': 'cache_health_warning',
'status': report.status.value,
'message': report.message,
'pageType': page_type,
'details': {
'total': report.total_entries,
'valid': report.valid_entries,
'invalid': report.invalid_entries,
'repaired': report.repaired_entries,
'corruption_rate': f"{report.corruption_rate:.1%}",
'invalid_paths': report.invalid_paths[:5], # Limit to first 5
}
}
logger.info(
f"Broadcasting cache health warning: {report.status.value} "
f"({report.invalid_entries} invalid entries)"
)
await self.broadcast(payload)
def get_connected_clients_count(self) -> int:
"""Get number of connected clients"""
return len(self._websockets)


@@ -49,7 +49,6 @@ SUPPORTED_MEDIA_EXTENSIONS = {
VALID_LORA_SUB_TYPES = ["lora", "locon", "dora"]
VALID_CHECKPOINT_SUB_TYPES = ["checkpoint", "diffusion_model"]
VALID_EMBEDDING_SUB_TYPES = ["embedding"]
VALID_MISC_SUB_TYPES = ["vae", "upscaler"]
# Backward compatibility alias
VALID_LORA_TYPES = VALID_LORA_SUB_TYPES
@@ -95,7 +94,6 @@ DEFAULT_PRIORITY_TAG_CONFIG = {
"lora": ", ".join(CIVITAI_MODEL_TAGS),
"checkpoint": ", ".join(CIVITAI_MODEL_TAGS),
"embedding": ", ".join(CIVITAI_MODEL_TAGS),
"misc": ", ".join(CIVITAI_MODEL_TAGS),
}
# baseModel values from CivitAI that should be treated as diffusion models (unet)


@@ -216,6 +216,11 @@ class DownloadManager:
self._progress["failed_models"] = set()
self._is_downloading = True
snapshot = self._progress.snapshot()
# Create the download task without awaiting it
# This ensures the HTTP response is returned immediately
# while the actual processing happens in the background
self._download_task = asyncio.create_task(
self._download_all_example_images(
output_dir,
@@ -227,7 +232,10 @@ class DownloadManager:
)
)
snapshot = self._progress.snapshot()
# Add a callback to handle task completion/errors
self._download_task.add_done_callback(
lambda t: self._handle_download_task_done(t, output_dir)
)
except ExampleImagesDownloadError:
# Re-raise our own exception types without wrapping
self._is_downloading = False
@@ -241,10 +249,25 @@ class DownloadManager:
)
raise ExampleImagesDownloadError(str(e)) from e
await self._broadcast_progress(status="running")
# Broadcast progress in the background without blocking the response
# This ensures the HTTP response is returned immediately
asyncio.create_task(self._broadcast_progress(status="running"))
return {"success": True, "message": "Download started", "status": snapshot}
def _handle_download_task_done(self, task: asyncio.Task, output_dir: str) -> None:
"""Handle download task completion, including saving progress on error."""
try:
# This will re-raise any exception from the task
task.result()
except Exception as e:
logger.error(f"Download task failed with error: {e}", exc_info=True)
# Ensure progress is saved even on failure
try:
self._save_progress(output_dir)
except Exception as save_error:
logger.error(f"Failed to save progress after task failure: {save_error}")
async def get_status(self, request):
"""Get the current status of example images download."""
@@ -254,6 +277,130 @@ class DownloadManager:
"status": self._progress.snapshot(),
}
async def check_pending_models(self, model_types: list[str]) -> dict:
"""Quickly check how many models need example images downloaded.
This is a lightweight check that avoids the overhead of starting
a full download task when no work is needed.
Returns:
dict with keys:
- total_models: Total number of models across specified types
- pending_count: Number of models needing example images
- processed_count: Number of already processed models
- failed_count: Number of models marked as failed
- needs_download: True if there are pending models to process
"""
from ..services.service_registry import ServiceRegistry
if self._is_downloading:
return {
"success": True,
"is_downloading": True,
"total_models": 0,
"pending_count": 0,
"processed_count": 0,
"failed_count": 0,
"needs_download": False,
"message": "Download already in progress",
}
try:
# Get scanners
scanners = []
if "lora" in model_types:
lora_scanner = await ServiceRegistry.get_lora_scanner()
scanners.append(("lora", lora_scanner))
if "checkpoint" in model_types:
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
scanners.append(("checkpoint", checkpoint_scanner))
if "embedding" in model_types:
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
scanners.append(("embedding", embedding_scanner))
# Load progress file to check processed models
settings_manager = get_settings_manager()
active_library = settings_manager.get_active_library_name()
output_dir = self._resolve_output_dir(active_library)
processed_models: set[str] = set()
failed_models: set[str] = set()
if output_dir:
progress_file = os.path.join(output_dir, ".download_progress.json")
if os.path.exists(progress_file):
try:
with open(progress_file, "r", encoding="utf-8") as f:
saved_progress = json.load(f)
processed_models = set(saved_progress.get("processed_models", []))
failed_models = set(saved_progress.get("failed_models", []))
except Exception:
pass # Ignore progress file errors for quick check
# Count models
total_models = 0
models_with_hash = 0
for scanner_type, scanner in scanners:
cache = await scanner.get_cached_data()
if cache and cache.raw_data:
for model in cache.raw_data:
total_models += 1
if model.get("sha256"):
models_with_hash += 1
# Calculate pending count
# A model is pending if it has a hash and is not in processed_models
# We also exclude failed_models unless force mode would be used
pending_count = models_with_hash - len(processed_models.intersection(
{m.get("sha256", "").lower() for scanner_type, scanner in scanners
for m in (await scanner.get_cached_data()).raw_data if m.get("sha256")}
))
# More accurate pending count: check which models actually need processing
pending_hashes = set()
for scanner_type, scanner in scanners:
cache = await scanner.get_cached_data()
if cache and cache.raw_data:
for model in cache.raw_data:
raw_hash = model.get("sha256")
if not raw_hash:
continue
model_hash = raw_hash.lower()
if model_hash not in processed_models:
# Check if model folder exists with files
model_dir = ExampleImagePathResolver.get_model_folder(
model_hash, active_library
)
if not _model_directory_has_files(model_dir):
pending_hashes.add(model_hash)
pending_count = len(pending_hashes)
return {
"success": True,
"is_downloading": False,
"total_models": total_models,
"pending_count": pending_count,
"processed_count": len(processed_models),
"failed_count": len(failed_models),
"needs_download": pending_count > 0,
}
except Exception as e:
logger.error(f"Error checking pending models: {e}", exc_info=True)
return {
"success": False,
"error": str(e),
"total_models": 0,
"pending_count": 0,
"processed_count": 0,
"failed_count": 0,
"needs_download": False,
}
async def pause_download(self, request):
"""Pause the example images download."""


@@ -43,8 +43,15 @@ class ExampleImagesProcessor:
return media_url
@staticmethod
def _get_file_extension_from_content_or_headers(content, headers, fallback_url=None):
"""Determine file extension from content magic bytes or headers"""
def _get_file_extension_from_content_or_headers(content, headers, fallback_url=None, media_type_hint=None):
"""Determine file extension from content magic bytes or headers
Args:
content: File content bytes
headers: HTTP response headers
fallback_url: Original URL for extension extraction
media_type_hint: Optional media type hint from metadata (e.g., "video" or "image")
"""
# Check magic bytes for common formats
if content:
if content.startswith(b'\xFF\xD8\xFF'):
@@ -82,6 +89,10 @@ class ExampleImagesProcessor:
if ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or ext in SUPPORTED_MEDIA_EXTENSIONS['videos']:
return ext
# Use media type hint from metadata if available
if media_type_hint == "video":
return '.mp4'
# Default fallback
return '.jpg'
@@ -136,7 +147,7 @@ class ExampleImagesProcessor:
if success:
# Determine file extension from content or headers
media_ext = ExampleImagesProcessor._get_file_extension_from_content_or_headers(
content, headers, original_url
content, headers, original_url, image.get("type")
)
# Check if the detected file type is supported
@@ -219,7 +230,7 @@ class ExampleImagesProcessor:
if success:
# Determine file extension from content or headers
media_ext = ExampleImagesProcessor._get_file_extension_from_content_or_headers(
content, headers, original_url
content, headers, original_url, image.get("type")
)
# Check if the detected file type is supported
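The detection order used above — magic bytes first, then the new `media_type_hint`, then a `.jpg` default — condensed into a hypothetical standalone sketch covering a subset of the formats:

```python
def sniff_extension(content, media_type_hint=None):
    """Minimal magic-byte sniffing sketch; falls back to the metadata
    media-type hint, then to '.jpg'."""
    if content:
        if content.startswith(b'\xFF\xD8\xFF'):
            return '.jpg'
        if content.startswith(b'\x89PNG\r\n\x1a\n'):
            return '.png'
        if content[:4] == b'RIFF' and content[8:12] == b'WEBP':
            return '.webp'
        if len(content) > 11 and content[4:8] == b'ftyp':
            return '.mp4'  # ISO BMFF container (mp4/mov family)
    if media_type_hint == "video":
        # New in this change: metadata hint rescues videos whose bytes
        # and headers were inconclusive
        return '.mp4'
    return '.jpg'
```

Threading `image.get("type")` through as the hint is what prevents videos with opaque headers from being saved as `.jpg`.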


@@ -17,7 +17,7 @@ async def extract_lora_metadata(file_path: str) -> Dict:
base_model = determine_base_model(metadata.get("ss_base_model_version"))
return {"base_model": base_model}
except Exception as e:
print(f"Error reading metadata from {file_path}: {str(e)}")
logger.error(f"Error reading metadata from {file_path}: {str(e)}")
return {"base_model": "Unknown"}
async def extract_checkpoint_metadata(file_path: str) -> dict:


@@ -223,7 +223,7 @@ class MetadataManager:
preview_url=normalize_path(preview_url),
tags=[],
modelDescription="",
model_type="checkpoint",
sub_type="checkpoint",
from_civitai=True
)
elif model_class.__name__ == "EmbeddingMetadata":
@@ -238,6 +238,7 @@ class MetadataManager:
preview_url=normalize_path(preview_url),
tags=[],
modelDescription="",
sub_type="embedding",
from_civitai=True
)
else: # Default to LoraMetadata


@@ -219,7 +219,7 @@ class EmbeddingMetadata(BaseModelMetadata):
file_name = file_info['name']
base_model = determine_base_model(version_info.get('baseModel', ''))
sub_type = version_info.get('type', 'embedding')
# Extract tags and description if available
tags = []
description = ""
@@ -228,53 +228,7 @@ class EmbeddingMetadata(BaseModelMetadata):
tags = version_info['model']['tags']
if 'description' in version_info['model']:
description = version_info['model']['description']
return cls(
file_name=os.path.splitext(file_name)[0],
model_name=version_info.get('model').get('name', os.path.splitext(file_name)[0]),
file_path=save_path.replace(os.sep, '/'),
size=file_info.get('sizeKB', 0) * 1024,
modified=datetime.now().timestamp(),
sha256=file_info['hashes'].get('SHA256', '').lower(),
base_model=base_model,
preview_url=None, # Will be updated after preview download
preview_nsfw_level=0,
from_civitai=True,
civitai=version_info,
sub_type=sub_type,
tags=tags,
modelDescription=description
)
@dataclass
class MiscMetadata(BaseModelMetadata):
"""Represents the metadata structure for a Misc model (VAE, Upscaler)"""
sub_type: str = "vae"
@classmethod
def from_civitai_info(cls, version_info: Dict, file_info: Dict, save_path: str) -> 'MiscMetadata':
"""Create MiscMetadata instance from Civitai version info"""
file_name = file_info['name']
base_model = determine_base_model(version_info.get('baseModel', ''))
# Determine sub_type from CivitAI model type
civitai_type = version_info.get('model', {}).get('type', '').lower()
if civitai_type == 'vae':
sub_type = 'vae'
elif civitai_type == 'upscaler':
sub_type = 'upscaler'
else:
sub_type = 'vae' # Default to vae
# Extract tags and description if available
tags = []
description = ""
if 'model' in version_info:
if 'tags' in version_info['model']:
tags = version_info['model']['tags']
if 'description' in version_info['model']:
description = version_info['model']['description']
return cls(
file_name=os.path.splitext(file_name)[0],
model_name=version_info.get('model').get('name', os.path.splitext(file_name)[0]),


@@ -138,19 +138,15 @@ def calculate_recipe_fingerprint(loras):
if not loras:
return ""
# Filter valid entries and extract hash and strength
valid_loras = []
for lora in loras:
# Skip excluded loras
if lora.get("exclude", False):
continue
# Get the hash - use modelVersionId as fallback if hash is empty
hash_value = lora.get("hash", "").lower()
if not hash_value and lora.get("isDeleted", False) and lora.get("modelVersionId"):
if not hash_value and lora.get("modelVersionId"):
hash_value = str(lora.get("modelVersionId"))
# Skip entries without a valid hash
if not hash_value:
continue
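The hash-resolution step now applies the `modelVersionId` fallback to every lora, not only deleted ones; a sketch of that step (the `strength` default of 1.0 is an assumption for illustration):

```python
def lora_fingerprint_parts(loras):
    """Resolve each lora to (hash_or_version_id, strength) pairs:
    prefer the file hash, fall back to modelVersionId, drop entries
    that have neither or are explicitly excluded."""
    parts = []
    for lora in loras:
        if lora.get("exclude", False):
            continue
        hash_value = lora.get("hash", "").lower()
        if not hash_value and lora.get("modelVersionId"):
            hash_value = str(lora.get("modelVersionId"))
        if not hash_value:
            continue  # no stable identifier for this entry
        parts.append((hash_value, lora.get("strength", 1.0)))
    return parts
```

Dropping the `isDeleted` condition is what lets remote imports (which often carry only a version ID) participate in duplicate detection.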


@@ -1,7 +1,7 @@
[project]
name = "comfyui-lora-manager"
description = "Revolutionize your workflow with the ultimate LoRA companion for ComfyUI!"
version = "0.9.13"
version = "0.9.15"
license = {file = "LICENSE"}
dependencies = [
"aiohttp",

scripts/sync_translation_keys.py Normal file → Executable file

@@ -113,6 +113,12 @@
max-width: 110px;
}
/* Compact mode: hide sub-type to save space */
.compact-density .model-sub-type,
.compact-density .model-separator {
display: none;
}
.compact-density .card-actions i {
font-size: 0.95em;
padding: 3px;


@@ -512,6 +512,10 @@
.filter-preset.active .preset-delete-btn {
color: white;
opacity: 0;
}
.filter-preset:hover.active .preset-delete-btn {
opacity: 0.8;
}
@@ -529,13 +533,16 @@
align-items: center;
gap: 6px;
white-space: nowrap;
max-width: 120px; /* Prevent long names from breaking layout */
overflow: hidden;
text-overflow: ellipsis;
}
.preset-delete-btn {
background: none;
border: none;
color: var(--text-color);
opacity: 0.5;
opacity: 0; /* Hidden by default */
cursor: pointer;
padding: 4px;
display: flex;
@@ -546,6 +553,10 @@
margin-left: auto;
}
.filter-preset:hover .preset-delete-btn {
opacity: 0.5; /* Show on hover */
}
.preset-delete-btn:hover {
opacity: 1;
color: var(--lora-error, #e74c3c);


@@ -9,8 +9,7 @@ import { state } from '../state/index.js';
export const MODEL_TYPES = {
LORA: 'loras',
CHECKPOINT: 'checkpoints',
EMBEDDING: 'embeddings',
MISC: 'misc'
EMBEDDING: 'embeddings' // Future model type
};
// Base API configuration for each model type
@@ -41,15 +40,6 @@ export const MODEL_CONFIG = {
supportsBulkOperations: true,
supportsMove: true,
templateName: 'embeddings.html'
},
[MODEL_TYPES.MISC]: {
displayName: 'Misc',
singularName: 'misc',
defaultPageSize: 100,
supportsLetterFilter: false,
supportsBulkOperations: true,
supportsMove: true,
templateName: 'misc.html'
}
};
@@ -143,11 +133,6 @@ export const MODEL_SPECIFIC_ENDPOINTS = {
},
[MODEL_TYPES.EMBEDDING]: {
metadata: `/api/lm/${MODEL_TYPES.EMBEDDING}/metadata`,
},
[MODEL_TYPES.MISC]: {
metadata: `/api/lm/${MODEL_TYPES.MISC}/metadata`,
vae_roots: `/api/lm/${MODEL_TYPES.MISC}/vae_roots`,
upscaler_roots: `/api/lm/${MODEL_TYPES.MISC}/upscaler_roots`,
}
};


@@ -1,62 +0,0 @@
import { BaseModelApiClient } from './baseModelApi.js';
import { getSessionItem } from '../utils/storageHelpers.js';
export class MiscApiClient extends BaseModelApiClient {
_addModelSpecificParams(params, pageState) {
const filterMiscHash = getSessionItem('recipe_to_misc_filterHash');
const filterMiscHashes = getSessionItem('recipe_to_misc_filterHashes');
if (filterMiscHash) {
params.append('misc_hash', filterMiscHash);
} else if (filterMiscHashes) {
try {
if (Array.isArray(filterMiscHashes) && filterMiscHashes.length > 0) {
params.append('misc_hashes', filterMiscHashes.join(','));
}
} catch (error) {
console.error('Error parsing misc hashes from session storage:', error);
}
}
if (pageState.subType) {
params.append('sub_type', pageState.subType);
}
}
async getMiscInfo(filePath) {
try {
const response = await fetch(this.apiConfig.endpoints.specific.info, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ file_path: filePath })
});
if (!response.ok) throw new Error('Failed to fetch misc info');
return await response.json();
} catch (error) {
console.error('Error fetching misc info:', error);
throw error;
}
}
async getVaeRoots() {
try {
const response = await fetch(this.apiConfig.endpoints.specific.vae_roots, { method: 'GET' });
if (!response.ok) throw new Error('Failed to fetch VAE roots');
return await response.json();
} catch (error) {
console.error('Error fetching VAE roots:', error);
throw error;
}
}
async getUpscalerRoots() {
try {
const response = await fetch(this.apiConfig.endpoints.specific.upscaler_roots, { method: 'GET' });
if (!response.ok) throw new Error('Failed to fetch upscaler roots');
return await response.json();
} catch (error) {
console.error('Error fetching upscaler roots:', error);
throw error;
}
}
}


@@ -1,7 +1,6 @@
import { LoraApiClient } from './loraApi.js';
import { CheckpointApiClient } from './checkpointApi.js';
import { EmbeddingApiClient } from './embeddingApi.js';
import { MiscApiClient } from './miscApi.js';
import { MODEL_TYPES, isValidModelType } from './apiConfig.js';
import { state } from '../state/index.js';
@@ -13,8 +12,6 @@ export function createModelApiClient(modelType) {
return new CheckpointApiClient(MODEL_TYPES.CHECKPOINT);
case MODEL_TYPES.EMBEDDING:
return new EmbeddingApiClient(MODEL_TYPES.EMBEDDING);
case MODEL_TYPES.MISC:
return new MiscApiClient(MODEL_TYPES.MISC);
default:
throw new Error(`Unsupported model type: ${modelType}`);
}


@@ -1,85 +0,0 @@
import { BaseContextMenu } from './BaseContextMenu.js';
import { ModelContextMenuMixin } from './ModelContextMenuMixin.js';
import { getModelApiClient, resetAndReload } from '../../api/modelApiFactory.js';
import { showDeleteModal, showExcludeModal } from '../../utils/modalUtils.js';
import { moveManager } from '../../managers/MoveManager.js';
import { i18n } from '../../i18n/index.js';
export class MiscContextMenu extends BaseContextMenu {
constructor() {
super('miscContextMenu', '.model-card');
this.nsfwSelector = document.getElementById('nsfwLevelSelector');
this.modelType = 'misc';
this.resetAndReload = resetAndReload;
this.initNSFWSelector();
}
// Implementation needed by the mixin
async saveModelMetadata(filePath, data) {
return getModelApiClient().saveModelMetadata(filePath, data);
}
showMenu(x, y, card) {
super.showMenu(x, y, card);
// Update the "Move to other root" label based on current model type
const moveOtherItem = this.menu.querySelector('[data-action="move-other"]');
if (moveOtherItem) {
const currentType = card.dataset.sub_type || 'vae';
const otherType = currentType === 'vae' ? 'upscaler' : 'vae';
const typeLabel = i18n.t(`misc.modelTypes.${otherType}`);
moveOtherItem.innerHTML = `<i class="fas fa-exchange-alt"></i> ${i18n.t('misc.contextMenu.moveToOtherTypeFolder', { otherType: typeLabel })}`;
}
}
handleMenuAction(action) {
// First try to handle with common actions
if (ModelContextMenuMixin.handleCommonMenuActions.call(this, action)) {
return;
}
const apiClient = getModelApiClient();
// Otherwise handle misc-specific actions
switch (action) {
case 'details':
// Show misc details
this.currentCard.click();
break;
case 'replace-preview':
// Add new action for replacing preview images
apiClient.replaceModelPreview(this.currentCard.dataset.filepath);
break;
case 'delete':
showDeleteModal(this.currentCard.dataset.filepath);
break;
case 'copyname':
// Copy misc model name
if (this.currentCard.querySelector('.fa-copy')) {
this.currentCard.querySelector('.fa-copy').click();
}
break;
case 'refresh-metadata':
// Refresh metadata from CivitAI
apiClient.refreshSingleModelMetadata(this.currentCard.dataset.filepath);
break;
case 'move':
moveManager.showMoveModal(this.currentCard.dataset.filepath, this.currentCard.dataset.sub_type);
break;
case 'move-other':
{
const currentType = this.currentCard.dataset.sub_type || 'vae';
const otherType = currentType === 'vae' ? 'upscaler' : 'vae';
moveManager.showMoveModal(this.currentCard.dataset.filepath, otherType);
}
break;
case 'exclude':
showExcludeModal(this.currentCard.dataset.filepath);
break;
}
}
}
// Mix in shared methods
Object.assign(MiscContextMenu.prototype, ModelContextMenuMixin);


@@ -2,7 +2,6 @@ export { LoraContextMenu } from './LoraContextMenu.js';
export { RecipeContextMenu } from './RecipeContextMenu.js';
export { CheckpointContextMenu } from './CheckpointContextMenu.js';
export { EmbeddingContextMenu } from './EmbeddingContextMenu.js';
export { MiscContextMenu } from './MiscContextMenu.js';
export { GlobalContextMenu } from './GlobalContextMenu.js';
export { ModelContextMenuMixin } from './ModelContextMenuMixin.js';
@@ -10,7 +9,6 @@ import { LoraContextMenu } from './LoraContextMenu.js';
import { RecipeContextMenu } from './RecipeContextMenu.js';
import { CheckpointContextMenu } from './CheckpointContextMenu.js';
import { EmbeddingContextMenu } from './EmbeddingContextMenu.js';
import { MiscContextMenu } from './MiscContextMenu.js';
import { GlobalContextMenu } from './GlobalContextMenu.js';
// Factory method to create page-specific context menu instances
@@ -24,8 +22,6 @@ export function createPageContextMenu(pageType) {
return new CheckpointContextMenu();
case 'embeddings':
return new EmbeddingContextMenu();
case 'misc':
return new MiscContextMenu();
default:
return null;
}


@@ -32,7 +32,6 @@ export class HeaderManager {
if (path.includes('/checkpoints')) return 'checkpoints';
if (path.includes('/embeddings')) return 'embeddings';
if (path.includes('/statistics')) return 'statistics';
if (path.includes('/misc')) return 'misc';
if (path.includes('/loras')) return 'loras';
return 'unknown';
}


@@ -26,6 +26,7 @@ class RecipeCard {
card.dataset.nsfwLevel = this.recipe.preview_nsfw_level || 0;
card.dataset.created = this.recipe.created_date;
card.dataset.id = this.recipe.id || '';
card.dataset.folder = this.recipe.folder || '';
// Get base model with fallback
const baseModelLabel = (this.recipe.base_model || '').trim() || 'Unknown';


@@ -1,119 +0,0 @@
// MiscControls.js - Specific implementation for the Misc (VAE/Upscaler) page
import { PageControls } from './PageControls.js';
import { getModelApiClient, resetAndReload } from '../../api/modelApiFactory.js';
import { getSessionItem, removeSessionItem } from '../../utils/storageHelpers.js';
import { downloadManager } from '../../managers/DownloadManager.js';
/**
* MiscControls class - Extends PageControls for Misc-specific functionality
*/
export class MiscControls extends PageControls {
constructor() {
// Initialize with 'misc' page type
super('misc');
// Register API methods specific to the Misc page
this.registerMiscAPI();
// Check for custom filters (e.g., from recipe navigation)
this.checkCustomFilters();
}
/**
* Register Misc-specific API methods
*/
registerMiscAPI() {
const miscAPI = {
// Core API functions
loadMoreModels: async (resetPage = false, updateFolders = false) => {
return await getModelApiClient().loadMoreWithVirtualScroll(resetPage, updateFolders);
},
resetAndReload: async (updateFolders = false) => {
return await resetAndReload(updateFolders);
},
refreshModels: async (fullRebuild = false) => {
return await getModelApiClient().refreshModels(fullRebuild);
},
// Add fetch from Civitai functionality for misc models
fetchFromCivitai: async () => {
return await getModelApiClient().fetchCivitaiMetadata();
},
// Add show download modal functionality
showDownloadModal: () => {
downloadManager.showDownloadModal();
},
toggleBulkMode: () => {
if (window.bulkManager) {
window.bulkManager.toggleBulkMode();
} else {
console.error('Bulk manager not available');
}
},
clearCustomFilter: async () => {
await this.clearCustomFilter();
}
};
// Register the API
this.registerAPI(miscAPI);
}
/**
* Check for custom filters sent from other pages (e.g., recipe modal)
*/
checkCustomFilters() {
const filterMiscHash = getSessionItem('recipe_to_misc_filterHash');
const filterRecipeName = getSessionItem('filterMiscRecipeName');
if (filterMiscHash && filterRecipeName) {
const indicator = document.getElementById('customFilterIndicator');
const filterText = indicator?.querySelector('.customFilterText');
if (indicator && filterText) {
indicator.classList.remove('hidden');
const displayText = `Viewing misc model from: ${filterRecipeName}`;
filterText.textContent = this._truncateText(displayText, 30);
filterText.setAttribute('title', displayText);
const filterElement = indicator.querySelector('.filter-active');
if (filterElement) {
filterElement.classList.add('animate');
setTimeout(() => filterElement.classList.remove('animate'), 600);
}
}
}
}
/**
* Clear misc custom filter and reload
*/
async clearCustomFilter() {
removeSessionItem('recipe_to_misc_filterHash');
removeSessionItem('recipe_to_misc_filterHashes');
removeSessionItem('filterMiscRecipeName');
const indicator = document.getElementById('customFilterIndicator');
if (indicator) {
indicator.classList.add('hidden');
}
await resetAndReload();
}
/**
* Helper to truncate text with ellipsis
* @param {string} text
* @param {number} maxLength
* @returns {string}
*/
_truncateText(text, maxLength) {
return text.length > maxLength ? `${text.substring(0, maxLength - 3)}...` : text;
}
}


@@ -3,14 +3,13 @@ import { PageControls } from './PageControls.js';
import { LorasControls } from './LorasControls.js';
import { CheckpointsControls } from './CheckpointsControls.js';
import { EmbeddingsControls } from './EmbeddingsControls.js';
import { MiscControls } from './MiscControls.js';
// Export the classes
export { PageControls, LorasControls, CheckpointsControls, EmbeddingsControls, MiscControls };
export { PageControls, LorasControls, CheckpointsControls, EmbeddingsControls };
/**
* Factory function to create the appropriate controls based on page type
* @param {string} pageType - The type of page ('loras', 'checkpoints', 'embeddings', or 'misc')
* @param {string} pageType - The type of page ('loras', 'checkpoints', or 'embeddings')
* @returns {PageControls} - The appropriate controls instance
*/
export function createPageControls(pageType) {
@@ -20,8 +19,6 @@ export function createPageControls(pageType) {
return new CheckpointsControls();
} else if (pageType === 'embeddings') {
return new EmbeddingsControls();
} else if (pageType === 'misc') {
return new MiscControls();
} else {
console.error(`Unknown page type: ${pageType}`);
return null;


@@ -198,6 +198,12 @@ class InitializationManager {
handleProgressUpdate(data) {
if (!data) return;
console.log('Received progress update:', data);
// Handle cache health warning messages
if (data.type === 'cache_health_warning') {
this.handleCacheHealthWarning(data);
return;
}
// Check if this update is for our page type
if (data.pageType && data.pageType !== this.pageType) {
@@ -466,6 +472,29 @@ class InitializationManager {
}
}
/**
* Handle cache health warning messages from WebSocket
*/
handleCacheHealthWarning(data) {
console.log('Cache health warning received:', data);
// Import bannerService dynamically to avoid circular dependencies
import('../managers/BannerService.js').then(({ bannerService }) => {
// Initialize banner service if not already done
if (!bannerService.initialized) {
bannerService.initialize().then(() => {
bannerService.registerCacheHealthBanner(data);
}).catch(err => {
console.error('Failed to initialize banner service:', err);
});
} else {
bannerService.registerCacheHealthBanner(data);
}
}).catch(err => {
console.error('Failed to load banner service:', err);
});
}
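The handler above lazy-loads `BannerService` via dynamic `import()` (avoiding a circular dependency) and initializes it at most once. The init-once guard can be sketched synchronously — the service shape is hypothetical, and the real code awaits an async `initialize()`:

```javascript
// Init-once delivery guard: initialize the service on first use only, then
// hand the warning over. initialize() is made synchronous in this sketch;
// the real handler awaits it and reaches the service through import().
function deliverWarning(service, data) {
  if (!service.initialized) {
    service.initialize();
  }
  service.registerCacheHealthBanner(data);
}
```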
/**
* Clean up resources when the component is destroyed
*/


@@ -214,52 +214,6 @@ function handleSendToWorkflow(card, replaceMode, modelType) {
missingNodesMessage,
missingTargetMessage,
});
} else if (modelType === MODEL_TYPES.MISC) {
const modelPath = card.dataset.filepath;
if (!modelPath) {
const message = translate('modelCard.sendToWorkflow.missingPath', {}, 'Unable to determine model path for this card');
showToast(message, {}, 'error');
return;
}
const subtype = (card.dataset.sub_type || 'vae').toLowerCase();
const isVae = subtype === 'vae';
const widgetName = isVae ? 'vae_name' : 'model_name';
const actionTypeText = translate(
isVae ? 'uiHelpers.nodeSelector.vae' : 'uiHelpers.nodeSelector.upscaler',
{},
isVae ? 'VAE' : 'Upscaler'
);
const successMessage = translate(
isVae ? 'uiHelpers.workflow.vaeUpdated' : 'uiHelpers.workflow.upscalerUpdated',
{},
isVae ? 'VAE updated in workflow' : 'Upscaler updated in workflow'
);
const failureMessage = translate(
isVae ? 'uiHelpers.workflow.vaeFailed' : 'uiHelpers.workflow.upscalerFailed',
{},
isVae ? 'Failed to update VAE node' : 'Failed to update upscaler node'
);
const missingNodesMessage = translate(
'uiHelpers.workflow.noMatchingNodes',
{},
'No compatible nodes available in the current workflow'
);
const missingTargetMessage = translate(
'uiHelpers.workflow.noTargetNodeSelected',
{},
'No target node selected'
);
sendModelPathToWorkflow(modelPath, {
widgetName,
collectionType: MODEL_TYPES.MISC,
actionTypeText,
successMessage,
failureMessage,
missingNodesMessage,
missingTargetMessage,
});
} else {
showToast('modelCard.sendToWorkflow.checkpointNotImplemented', {}, 'info');
}
@@ -276,10 +230,6 @@ function handleCopyAction(card, modelType) {
} else if (modelType === MODEL_TYPES.EMBEDDING) {
const embeddingName = card.dataset.file_name;
copyToClipboard(embeddingName, 'Embedding name copied');
} else if (modelType === MODEL_TYPES.MISC) {
const miscName = card.dataset.file_name;
const message = translate('modelCard.actions.miscNameCopied', {}, 'Model name copied');
copyToClipboard(miscName, message);
}
}


@@ -99,7 +99,7 @@ export class AppCore {
initializePageFeatures() {
const pageType = this.getPageType();
if (['loras', 'recipes', 'checkpoints', 'embeddings', 'misc'].includes(pageType)) {
if (['loras', 'recipes', 'checkpoints', 'embeddings'].includes(pageType)) {
this.initializeContextMenus(pageType);
initializeInfiniteScroll(pageType);
}


@@ -4,9 +4,11 @@ import {
removeStorageItem
} from '../utils/storageHelpers.js';
import { translate } from '../utils/i18nHelpers.js';
import { state } from '../state/index.js'
import { state } from '../state/index.js';
import { getModelApiClient } from '../api/modelApiFactory.js';
const COMMUNITY_SUPPORT_BANNER_ID = 'community-support';
const CACHE_HEALTH_BANNER_ID = 'cache-health-warning';
const COMMUNITY_SUPPORT_BANNER_DELAY_MS = 5 * 24 * 60 * 60 * 1000; // 5 days
const COMMUNITY_SUPPORT_FIRST_SEEN_AT_KEY = 'community_support_banner_first_seen_at';
const COMMUNITY_SUPPORT_VERSION_KEY = 'community_support_banner_state_version';
@@ -293,6 +295,177 @@ class BannerService {
location.reload();
}
/**
* Register a cache health warning banner
* @param {Object} healthData - Health data from WebSocket
*/
registerCacheHealthBanner(healthData) {
if (!healthData || healthData.status === 'healthy') {
return;
}
// Remove existing cache health banner if any
this.removeBannerElement(CACHE_HEALTH_BANNER_ID);
const isCorrupted = healthData.status === 'corrupted';
const titleKey = isCorrupted
? 'banners.cacheHealth.corrupted.title'
: 'banners.cacheHealth.degraded.title';
const defaultTitle = isCorrupted
? 'Cache Corruption Detected'
: 'Cache Issues Detected';
const title = translate(titleKey, {}, defaultTitle);
const contentKey = 'banners.cacheHealth.content';
const defaultContent = '{invalid} of {total} cache entries are invalid ({rate}). This may cause missing models or errors. Rebuilding the cache is recommended.';
const content = translate(contentKey, {
invalid: healthData.details?.invalid || 0,
total: healthData.details?.total || 0,
rate: healthData.details?.corruption_rate || '0%'
}, defaultContent);
this.registerBanner(CACHE_HEALTH_BANNER_ID, {
id: CACHE_HEALTH_BANNER_ID,
title: title,
content: content,
pageType: healthData.pageType,
actions: [
{
text: translate('banners.cacheHealth.rebuildCache', {}, 'Rebuild Cache'),
icon: 'fas fa-sync-alt',
action: 'rebuild-cache',
type: 'primary'
},
{
text: translate('banners.cacheHealth.dismiss', {}, 'Dismiss'),
icon: 'fas fa-times',
action: 'dismiss',
type: 'secondary'
}
],
dismissible: true,
priority: 10, // High priority
onRegister: (bannerElement) => {
// Attach click handlers for actions
const rebuildBtn = bannerElement.querySelector('[data-action="rebuild-cache"]');
const dismissBtn = bannerElement.querySelector('[data-action="dismiss"]');
if (rebuildBtn) {
rebuildBtn.addEventListener('click', (e) => {
e.preventDefault();
this.handleRebuildCache(bannerElement, healthData.pageType);
});
}
if (dismissBtn) {
dismissBtn.addEventListener('click', (e) => {
e.preventDefault();
this.dismissBanner(CACHE_HEALTH_BANNER_ID);
});
}
}
});
}
/**
* Handle rebuild cache action from banner
* @param {HTMLElement} bannerElement - The banner element
* @param {string} pageType - The page type (loras, checkpoints, embeddings)
*/
async handleRebuildCache(bannerElement, pageType) {
const currentPageType = pageType || this.getCurrentPageType();
try {
const apiClient = getModelApiClient(currentPageType);
// Update banner to show rebuilding status
const actionsContainer = bannerElement.querySelector('.banner-actions');
if (actionsContainer) {
actionsContainer.innerHTML = `
<span class="banner-loading">
<i class="fas fa-spinner fa-spin"></i>
<span>${translate('banners.cacheHealth.rebuilding', {}, 'Rebuilding cache...')}</span>
</span>
`;
}
await apiClient.refreshModels(true);
// Remove banner on success without marking as dismissed
this.removeBannerElement(CACHE_HEALTH_BANNER_ID);
} catch (error) {
console.error('Cache rebuild failed:', error);
const actionsContainer = bannerElement.querySelector('.banner-actions');
if (actionsContainer) {
actionsContainer.innerHTML = `
<span class="banner-error">
<i class="fas fa-exclamation-triangle"></i>
<span>${translate('banners.cacheHealth.rebuildFailed', {}, 'Rebuild failed. Please try again.')}</span>
</span>
<a href="#" class="banner-action banner-action-primary" data-action="rebuild-cache">
<i class="fas fa-sync-alt"></i>
<span>${translate('banners.cacheHealth.retry', {}, 'Retry')}</span>
</a>
`;
// Re-attach click handler
const retryBtn = actionsContainer.querySelector('[data-action="rebuild-cache"]');
if (retryBtn) {
retryBtn.addEventListener('click', (e) => {
e.preventDefault();
this.handleRebuildCache(bannerElement, pageType);
});
}
}
}
}
/**
* Get the current page type from the URL
* @returns {string} Page type (loras, checkpoints, embeddings, recipes)
*/
getCurrentPageType() {
const path = window.location.pathname;
if (path.includes('/checkpoints')) return 'checkpoints';
if (path.includes('/embeddings')) return 'embeddings';
if (path.includes('/recipes')) return 'recipes';
return 'loras';
}
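`getCurrentPageType` above reads `window.location`; the same sniffing, made testable by taking the pathname as a parameter:

```javascript
// Pathname-based page detection mirroring getCurrentPageType above, but
// parameterized so it runs outside a browser. 'loras' is the fallback
// when no other page segment matches.
function pageTypeFromPath(path) {
  if (path.includes('/checkpoints')) return 'checkpoints';
  if (path.includes('/embeddings')) return 'embeddings';
  if (path.includes('/recipes')) return 'recipes';
  return 'loras';
}
```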
/**
* Get the rebuild cache endpoint for the given page type
* @param {string} pageType - The page type
* @returns {string} The API endpoint URL
*/
getRebuildEndpoint(pageType) {
const endpoints = {
'loras': '/api/lm/loras/reload?rebuild=true',
'checkpoints': '/api/lm/checkpoints/reload?rebuild=true',
'embeddings': '/api/lm/embeddings/reload?rebuild=true'
};
return endpoints[pageType] || endpoints['loras'];
}
/**
* Remove a banner element from DOM without marking as dismissed
* @param {string} bannerId - Banner ID to remove
*/
removeBannerElement(bannerId) {
const bannerElement = document.querySelector(`[data-banner-id="${bannerId}"]`);
if (bannerElement) {
bannerElement.style.animation = 'banner-slide-up 0.3s ease-in-out forwards';
setTimeout(() => {
bannerElement.remove();
this.updateContainerVisibility();
}, 300);
}
// Also remove from banners map
this.banners.delete(bannerId);
}
prepareCommunitySupportBanner() {
if (this.isBannerDismissed(COMMUNITY_SUPPORT_BANNER_ID)) {
return;


@@ -64,17 +64,6 @@ export class BulkManager {
deleteAll: true,
setContentRating: true
},
[MODEL_TYPES.MISC]: {
addTags: true,
sendToWorkflow: false,
copyAll: false,
refreshAll: true,
checkUpdates: true,
moveAll: true,
autoOrganize: true,
deleteAll: true,
setContentRating: true
},
recipes: {
addTags: false,
sendToWorkflow: false,


@@ -21,7 +21,7 @@ export class ExampleImagesManager {
// Auto download properties
this.autoDownloadInterval = null;
this.lastAutoDownloadCheck = 0;
this.autoDownloadCheckInterval = 10 * 60 * 1000; // 10 minutes in milliseconds
this.autoDownloadCheckInterval = 30 * 60 * 1000; // 30 minutes in milliseconds
this.pageInitTime = Date.now(); // Track when page was initialized
// Initialize download path field and check download status
@@ -808,19 +808,58 @@ export class ExampleImagesManager {
return;
}
this.lastAutoDownloadCheck = now;
if (!this.canAutoDownload()) {
console.log('Auto download conditions not met, skipping check');
return;
}
try {
console.log('Performing auto download check...');
console.log('Performing auto download pre-check...');
// Step 1: Lightweight pre-check to see if any work is needed
const checkResponse = await fetch('/api/lm/check-example-images-needed', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
model_types: ['lora', 'checkpoint', 'embedding']
})
});
if (!checkResponse.ok) {
console.warn('Auto download pre-check HTTP error:', checkResponse.status);
return;
}
const checkData = await checkResponse.json();
if (!checkData.success) {
console.warn('Auto download pre-check failed:', checkData.error);
return;
}
// Update the check timestamp only after successful pre-check
this.lastAutoDownloadCheck = now;
// If download already in progress, skip
if (checkData.is_downloading) {
console.log('Download already in progress, skipping auto check');
return;
}
// If no models need downloading, skip
if (!checkData.needs_download || checkData.pending_count === 0) {
console.log(`Auto download pre-check complete: ${checkData.processed_count}/${checkData.total_models} models already processed, no work needed`);
return;
}
console.log(`Auto download pre-check: ${checkData.pending_count} models need processing, starting download...`);
// Step 2: Start the actual download (fire-and-forget)
const optimize = state.global.settings.optimize_example_images;
const response = await fetch('/api/lm/download-example-images', {
fetch('/api/lm/download-example-images', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
@@ -830,18 +869,29 @@ export class ExampleImagesManager {
model_types: ['lora', 'checkpoint', 'embedding'],
auto_mode: true // Flag to indicate this is an automatic download
})
}).then(response => {
if (!response.ok) {
console.warn('Auto download start HTTP error:', response.status);
return null;
}
return response.json();
}).then(data => {
if (data && !data.success) {
console.warn('Auto download start failed:', data.error);
// If already in progress, push back the next check to avoid hammering the API
if (data.error && data.error.includes('already in progress')) {
console.log('Download already in progress, backing off next check');
this.lastAutoDownloadCheck = now + (5 * 60 * 1000); // Back off for 5 extra minutes
}
} else if (data && data.success) {
console.log('Auto download started:', data.message || 'Download started');
}
}).catch(error => {
console.error('Auto download start error:', error);
});
const data = await response.json();
if (!data.success) {
console.warn('Auto download check failed:', data.error);
// If already in progress, push back the next check to avoid hammering the API
if (data.error && data.error.includes('already in progress')) {
console.log('Download already in progress, backing off next check');
this.lastAutoDownloadCheck = now + (5 * 60 * 1000); // Back off for 5 extra minutes
}
}
// Immediately return without waiting for the download fetch to complete
// This keeps the UI responsive
} catch (error) {
console.error('Auto download check error:', error);
}
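The back-off rule applied above — push the next auto-check five extra minutes out when the server reports a download already in progress — can be isolated into a pure function (the name is hypothetical; the real code mutates `this.lastAutoDownloadCheck` inline):

```javascript
// Back-off rule from the auto-download flow above: an "already in progress"
// error delays the next check by 5 extra minutes; any other outcome leaves
// the last-check timestamp at `now`. All times are epoch milliseconds.
const EXTRA_BACKOFF_MS = 5 * 60 * 1000;

function nextLastCheck(now, error) {
  if (error && error.includes('already in progress')) {
    return now + EXTRA_BACKOFF_MS;
  }
  return now;
}
```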


@@ -751,12 +751,7 @@ export class FilterPresetManager {
const presetName = document.createElement('span');
presetName.className = 'preset-name';
if (isActive) {
presetName.innerHTML = `<i class="fas fa-check"></i> ${preset.name}`;
} else {
presetName.textContent = preset.name;
}
presetName.textContent = preset.name;
presetName.title = translate('header.filter.presetClickTooltip', { name: preset.name }, `Click to apply preset "${preset.name}"`);
const deleteBtn = document.createElement('button');
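Beyond dropping the checkmark icon, the switch from `innerHTML` to `textContent` above means preset names are never parsed as markup. A minimal illustration, with a plain object standing in for a DOM element:

```javascript
// Assigning via textContent renders the name literally, so a preset named
// something like "<img src=x onerror=...>" cannot inject HTML into the
// preset list the way an innerHTML assignment could.
function renderPresetName(element, name) {
  element.textContent = name;
}
```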


@@ -1,51 +0,0 @@
import { appCore } from './core.js';
import { confirmDelete, closeDeleteModal, confirmExclude, closeExcludeModal } from './utils/modalUtils.js';
import { createPageControls } from './components/controls/index.js';
import { ModelDuplicatesManager } from './components/ModelDuplicatesManager.js';
import { MODEL_TYPES } from './api/apiConfig.js';
// Initialize the Misc (VAE/Upscaler) page
export class MiscPageManager {
constructor() {
// Initialize page controls
this.pageControls = createPageControls(MODEL_TYPES.MISC);
// Initialize the ModelDuplicatesManager
this.duplicatesManager = new ModelDuplicatesManager(this, MODEL_TYPES.MISC);
// Expose only necessary functions to global scope
this._exposeRequiredGlobalFunctions();
}
_exposeRequiredGlobalFunctions() {
// Minimal set of functions that need to remain global
window.confirmDelete = confirmDelete;
window.closeDeleteModal = closeDeleteModal;
window.confirmExclude = confirmExclude;
window.closeExcludeModal = closeExcludeModal;
// Expose duplicates manager
window.modelDuplicatesManager = this.duplicatesManager;
}
async initialize() {
// Initialize common page features (including context menus)
appCore.initializePageFeatures();
console.log('Misc Manager initialized');
}
}
export async function initializeMiscPage() {
// Initialize core application
await appCore.initialize();
// Initialize misc page
const miscPage = new MiscPageManager();
await miscPage.initialize();
return miscPage;
}
// Initialize everything when DOM is ready
document.addEventListener('DOMContentLoaded', initializeMiscPage);


@@ -177,35 +177,6 @@ export const state = {
showFavoritesOnly: false,
showUpdateAvailableOnly: false,
duplicatesMode: false,
},
[MODEL_TYPES.MISC]: {
currentPage: 1,
isLoading: false,
hasMore: true,
sortBy: 'name',
activeFolder: getStorageItem(`${MODEL_TYPES.MISC}_activeFolder`),
previewVersions: new Map(),
searchManager: null,
searchOptions: {
filename: true,
modelname: true,
creator: false,
recursive: getStorageItem(`${MODEL_TYPES.MISC}_recursiveSearch`, true),
},
filters: {
baseModel: [],
tags: {},
license: {},
modelTypes: []
},
bulkMode: false,
selectedModels: new Set(),
metadataCache: new Map(),
showFavoritesOnly: false,
showUpdateAvailableOnly: false,
duplicatesMode: false,
subType: 'vae'
}
},


@@ -27,6 +27,10 @@ export const BASE_MODELS = {
FLUX_1_KREA: "Flux.1 Krea",
FLUX_1_KONTEXT: "Flux.1 Kontext",
FLUX_2_D: "Flux.2 D",
FLUX_2_KLEIN_9B: "Flux.2 Klein 9B",
FLUX_2_KLEIN_9B_BASE: "Flux.2 Klein 9B-base",
FLUX_2_KLEIN_4B: "Flux.2 Klein 4B",
FLUX_2_KLEIN_4B_BASE: "Flux.2 Klein 4B-base",
AURAFLOW: "AuraFlow",
CHROMA: "Chroma",
PIXART_A: "PixArt a",
@@ -40,10 +44,12 @@ export const BASE_MODELS = {
HIDREAM: "HiDream",
QWEN: "Qwen",
ZIMAGE_TURBO: "ZImageTurbo",
ZIMAGE_BASE: "ZImageBase",
// Video models
SVD: "SVD",
LTXV: "LTXV",
LTXV2: "LTXV2",
WAN_VIDEO: "Wan Video",
WAN_VIDEO_1_3B_T2V: "Wan Video 1.3B t2v",
WAN_VIDEO_14B_T2V: "Wan Video 14B t2v",
@@ -68,9 +74,6 @@ export const MODEL_SUBTYPE_DISPLAY_NAMES = {
diffusion_model: "Diffusion Model",
// Embedding sub-types
embedding: "Embedding",
// Misc sub-types
vae: "VAE",
upscaler: "Upscaler",
};
// Backward compatibility alias
@@ -84,8 +87,6 @@ export const MODEL_SUBTYPE_ABBREVIATIONS = {
checkpoint: "CKPT",
diffusion_model: "DM",
embedding: "EMB",
vae: "VAE",
upscaler: "UP",
};
export function getSubTypeAbbreviation(subType) {
@@ -125,6 +126,10 @@ export const BASE_MODEL_ABBREVIATIONS = {
[BASE_MODELS.FLUX_1_KREA]: 'F1KR',
[BASE_MODELS.FLUX_1_KONTEXT]: 'F1KX',
[BASE_MODELS.FLUX_2_D]: 'F2D',
[BASE_MODELS.FLUX_2_KLEIN_9B]: 'FK9',
[BASE_MODELS.FLUX_2_KLEIN_9B_BASE]: 'FK9B',
[BASE_MODELS.FLUX_2_KLEIN_4B]: 'FK4',
[BASE_MODELS.FLUX_2_KLEIN_4B_BASE]: 'FK4B',
// Other diffusion models
[BASE_MODELS.AURAFLOW]: 'AF',
@@ -140,10 +145,12 @@ export const BASE_MODEL_ABBREVIATIONS = {
[BASE_MODELS.HIDREAM]: 'HID',
[BASE_MODELS.QWEN]: 'QWEN',
[BASE_MODELS.ZIMAGE_TURBO]: 'ZIT',
[BASE_MODELS.ZIMAGE_BASE]: 'ZIB',
// Video models
[BASE_MODELS.SVD]: 'SVD',
[BASE_MODELS.LTXV]: 'LTXV',
[BASE_MODELS.LTXV2]: 'LTV2',
[BASE_MODELS.WAN_VIDEO]: 'WAN',
[BASE_MODELS.WAN_VIDEO_1_3B_T2V]: 'WAN',
[BASE_MODELS.WAN_VIDEO_14B_T2V]: 'WAN',
@@ -333,16 +340,16 @@ export const BASE_MODEL_CATEGORIES = {
'Stable Diffusion 3.x': [BASE_MODELS.SD_3, BASE_MODELS.SD_3_5, BASE_MODELS.SD_3_5_MEDIUM, BASE_MODELS.SD_3_5_LARGE, BASE_MODELS.SD_3_5_LARGE_TURBO],
'SDXL': [BASE_MODELS.SDXL, BASE_MODELS.SDXL_LIGHTNING, BASE_MODELS.SDXL_HYPER],
'Video Models': [
BASE_MODELS.SVD, BASE_MODELS.LTXV, BASE_MODELS.HUNYUAN_VIDEO, BASE_MODELS.WAN_VIDEO,
BASE_MODELS.SVD, BASE_MODELS.LTXV, BASE_MODELS.LTXV2, BASE_MODELS.HUNYUAN_VIDEO, BASE_MODELS.WAN_VIDEO,
BASE_MODELS.WAN_VIDEO_1_3B_T2V, BASE_MODELS.WAN_VIDEO_14B_T2V,
BASE_MODELS.WAN_VIDEO_14B_I2V_480P, BASE_MODELS.WAN_VIDEO_14B_I2V_720P,
BASE_MODELS.WAN_VIDEO_2_2_TI2V_5B, BASE_MODELS.WAN_VIDEO_2_2_T2V_A14B,
BASE_MODELS.WAN_VIDEO_2_2_I2V_A14B
],
'Flux Models': [BASE_MODELS.FLUX_1_D, BASE_MODELS.FLUX_1_S, BASE_MODELS.FLUX_1_KONTEXT, BASE_MODELS.FLUX_1_KREA, BASE_MODELS.FLUX_2_D],
'Flux Models': [BASE_MODELS.FLUX_1_D, BASE_MODELS.FLUX_1_S, BASE_MODELS.FLUX_1_KONTEXT, BASE_MODELS.FLUX_1_KREA, BASE_MODELS.FLUX_2_D, BASE_MODELS.FLUX_2_KLEIN_9B, BASE_MODELS.FLUX_2_KLEIN_9B_BASE, BASE_MODELS.FLUX_2_KLEIN_4B, BASE_MODELS.FLUX_2_KLEIN_4B_BASE],
'Other Models': [
BASE_MODELS.ILLUSTRIOUS, BASE_MODELS.PONY, BASE_MODELS.HIDREAM,
BASE_MODELS.QWEN, BASE_MODELS.AURAFLOW, BASE_MODELS.CHROMA, BASE_MODELS.ZIMAGE_TURBO,
BASE_MODELS.QWEN, BASE_MODELS.AURAFLOW, BASE_MODELS.CHROMA, BASE_MODELS.ZIMAGE_TURBO, BASE_MODELS.ZIMAGE_BASE,
BASE_MODELS.PIXART_A, BASE_MODELS.PIXART_E, BASE_MODELS.HUNYUAN_1,
BASE_MODELS.LUMINA, BASE_MODELS.KOLORS, BASE_MODELS.NOOBAI,
BASE_MODELS.UNKNOWN

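The category tables above drive filter grouping in the UI; a trimmed sketch of the lookup, with only two categories reproduced (the fallback-to-`'Other Models'` behaviour is an assumption for this sketch — the source lists `UNKNOWN` in that category explicitly):

```javascript
// Resolve a base-model display name to its filter category; models absent
// from every list fall into 'Other Models'. Tables trimmed for brevity.
const BASE_MODEL_CATEGORIES = {
  'Flux Models': ['Flux.2 D', 'Flux.2 Klein 9B', 'Flux.2 Klein 4B'],
  'Video Models': ['LTXV', 'LTXV2', 'Wan Video'],
};

function categoryOf(baseModel) {
  for (const [category, models] of Object.entries(BASE_MODEL_CATEGORIES)) {
    if (models.includes(baseModel)) return category;
  }
  return 'Other Models';
}
```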

@@ -28,9 +28,10 @@ async function getCardCreator(pageType) {
// Function to get the appropriate data fetcher based on page type
async function getDataFetcher(pageType) {
if (pageType === 'loras' || pageType === 'embeddings' || pageType === 'checkpoints' || pageType === 'misc') {
if (pageType === 'loras' || pageType === 'embeddings' || pageType === 'checkpoints') {
return (page = 1, pageSize = 100) => getModelApiClient().fetchModelsPage(page, pageSize);
} else if (pageType === 'recipes') {
// Import the recipeApi module and use the fetchRecipesPage function
const { fetchRecipesPage } = await import('../api/recipeApi.js');
return fetchRecipesPage;
}


@@ -13,8 +13,6 @@
{% set current_page = 'checkpoints' %}
{% elif current_path.startswith('/embeddings') %}
{% set current_page = 'embeddings' %}
{% elif current_path.startswith('/misc') %}
{% set current_page = 'misc' %}
{% elif current_path.startswith('/statistics') %}
{% set current_page = 'statistics' %}
{% else %}
@@ -40,10 +38,6 @@
id="embeddingsNavItem">
<i class="fas fa-code"></i> <span>{{ t('header.navigation.embeddings') }}</span>
</a>
<a href="/misc" class="nav-item{% if current_path.startswith('/misc') %} active{% endif %}"
id="miscNavItem">
<i class="fas fa-puzzle-piece"></i> <span>{{ t('header.navigation.misc') }}</span>
</a>
<a href="/statistics" class="nav-item{% if current_path.startswith('/statistics') %} active{% endif %}"
id="statisticsNavItem">
<i class="fas fa-chart-bar"></i> <span>{{ t('header.navigation.statistics') }}</span>
@@ -122,11 +116,6 @@
<div class="search-option-tag active" data-option="modelname">{{ t('header.search.filters.modelname') }}</div>
<div class="search-option-tag active" data-option="tags">{{ t('header.search.filters.tags') }}</div>
<div class="search-option-tag" data-option="creator">{{ t('header.search.filters.creator') }}</div>
{% elif request.path == '/misc' %}
<div class="search-option-tag active" data-option="filename">{{ t('header.search.filters.filename') }}</div>
<div class="search-option-tag active" data-option="modelname">{{ t('header.search.filters.modelname') }}</div>
<div class="search-option-tag active" data-option="tags">{{ t('header.search.filters.tags') }}</div>
<div class="search-option-tag" data-option="creator">{{ t('header.search.filters.creator') }}</div>
{% else %}
<!-- Default options for LoRAs page -->
<div class="search-option-tag active" data-option="filename">{{ t('header.search.filters.filename') }}</div>
@@ -167,7 +156,7 @@
<div class="tags-loading">{{ t('common.status.loading') }}</div>
</div>
</div>
{% if current_page == 'loras' or current_page == 'checkpoints' or current_page == 'misc' %}
{% if current_page == 'loras' or current_page == 'checkpoints' %}
<div class="filter-section">
<h4>{{ t('header.filter.modelTypes') }}</h4>
<div class="filter-tags" id="modelTypeTags">


@@ -1,45 +0,0 @@
{% extends "base.html" %}
{% block title %}{{ t('misc.title') }}{% endblock %}
{% block page_id %}misc{% endblock %}
{% block init_title %}{{ t('initialization.misc.title') }}{% endblock %}
{% block init_message %}{{ t('initialization.misc.message') }}{% endblock %}
{% block init_check_url %}/api/lm/misc/list?page=1&page_size=1{% endblock %}
{% block additional_components %}
<div id="miscContextMenu" class="context-menu" style="display: none;">
<div class="context-menu-item" data-action="refresh-metadata"><i class="fas fa-sync"></i> {{ t('loras.contextMenu.refreshMetadata') }}</div>
<div class="context-menu-item" data-action="relink-civitai"><i class="fas fa-link"></i> {{ t('loras.contextMenu.relinkCivitai') }}</div>
<div class="context-menu-item" data-action="copyname"><i class="fas fa-copy"></i> {{ t('loras.contextMenu.copyFilename') }}</div>
<div class="context-menu-item" data-action="preview"><i class="fas fa-folder-open"></i> {{ t('loras.contextMenu.openExamples') }}</div>
<div class="context-menu-item" data-action="download-examples"><i class="fas fa-download"></i> {{ t('loras.contextMenu.downloadExamples') }}</div>
<div class="context-menu-item" data-action="replace-preview"><i class="fas fa-image"></i> {{ t('loras.contextMenu.replacePreview') }}</div>
<div class="context-menu-item" data-action="set-nsfw"><i class="fas fa-exclamation-triangle"></i> {{ t('loras.contextMenu.setContentRating') }}</div>
<div class="context-menu-separator"></div>
<div class="context-menu-item" data-action="move"><i class="fas fa-folder-open"></i> {{ t('loras.contextMenu.moveToFolder') }}</div>
<div class="context-menu-item" data-action="move-other"><i class="fas fa-exchange-alt"></i> {{ t('misc.contextMenu.moveToOtherTypeFolder', {otherType: '...'}) }}</div>
<div class="context-menu-item" data-action="exclude"><i class="fas fa-eye-slash"></i> {{ t('loras.contextMenu.excludeModel') }}</div>
<div class="context-menu-item delete-item" data-action="delete"><i class="fas fa-trash"></i> {{ t('loras.contextMenu.deleteModel') }}</div>
</div>
{% endblock %}
{% block content %}
{% include 'components/controls.html' %}
{% include 'components/duplicates_banner.html' %}
{% include 'components/folder_sidebar.html' %}
<!-- Misc cards container -->
<div class="card-grid" id="modelGrid">
<!-- Cards will be dynamically inserted here -->
</div>
{% endblock %}
{% block overlay %}
<div class="bulk-mode-overlay"></div>
{% endblock %}
{% block main_script %}
<script type="module" src="/loras_static/js/misc.js?v={{ version }}"></script>
{% endblock %}


@@ -0,0 +1,160 @@
"""Tests for checkpoint path overlap detection."""
import logging
import os
import pytest
from py import config as config_module
def _normalize(path: str) -> str:
return os.path.normpath(path).replace(os.sep, "/")
class TestCheckpointPathOverlap:
"""Test detection of overlapping paths between checkpoints and unet."""
def test_overlapping_paths_prioritizes_checkpoints(
self, monkeypatch: pytest.MonkeyPatch, tmp_path, caplog
):
"""Test that overlapping paths prioritize checkpoints for backward compatibility."""
# Create a shared physical folder
shared_dir = tmp_path / "shared_models"
shared_dir.mkdir()
# Create two symlinks pointing to the same physical folder
checkpoints_link = tmp_path / "checkpoints"
unet_link = tmp_path / "unet"
checkpoints_link.symlink_to(shared_dir, target_is_directory=True)
unet_link.symlink_to(shared_dir, target_is_directory=True)
# Create Config instance with overlapping paths
with caplog.at_level(logging.WARNING, logger=config_module.logger.name):
config = config_module.Config.__new__(config_module.Config)
config._path_mappings = {}
config._preview_root_paths = set()
config._cached_fingerprint = None
# Call the method under test
result = config._prepare_checkpoint_paths(
[str(checkpoints_link)], [str(unet_link)]
)
# Verify warning was logged
warning_messages = [
record.message
for record in caplog.records
if record.levelname == "WARNING"
and "overlapping paths" in record.message.lower()
]
assert len(warning_messages) == 1
assert "checkpoints" in warning_messages[0].lower()
assert "diffusion_models" in warning_messages[0].lower() or "unet" in warning_messages[0].lower()
# Verify warning mentions backward compatibility fallback
assert "falling back" in warning_messages[0].lower() or "backward compatibility" in warning_messages[0].lower()
# Verify only one path is returned (deduplication still works)
assert len(result) == 1
# Prioritizes checkpoints path for backward compatibility
assert _normalize(result[0]) == _normalize(str(checkpoints_link))
# Verify checkpoints_roots has the path (prioritized)
assert len(config.checkpoints_roots) == 1
assert _normalize(config.checkpoints_roots[0]) == _normalize(str(checkpoints_link))
# Verify unet_roots is empty (overlapping paths removed)
assert config.unet_roots == []
def test_non_overlapping_paths_no_warning(
self, monkeypatch: pytest.MonkeyPatch, tmp_path, caplog
):
"""Test that non-overlapping paths do not trigger a warning."""
# Create separate physical folders
checkpoints_dir = tmp_path / "checkpoints"
checkpoints_dir.mkdir()
unet_dir = tmp_path / "unet"
unet_dir.mkdir()
# Create Config instance with separate paths
with caplog.at_level(logging.WARNING, logger=config_module.logger.name):
config = config_module.Config.__new__(config_module.Config)
config._path_mappings = {}
config._preview_root_paths = set()
config._cached_fingerprint = None
result = config._prepare_checkpoint_paths(
[str(checkpoints_dir)], [str(unet_dir)]
)
# Verify no overlapping paths warning was logged
warning_messages = [
record.message
for record in caplog.records
if record.levelname == "WARNING"
and "overlapping paths" in record.message.lower()
]
assert len(warning_messages) == 0
# Verify both paths are returned
assert len(result) == 2
normalized_result = [_normalize(p) for p in result]
assert _normalize(str(checkpoints_dir)) in normalized_result
assert _normalize(str(unet_dir)) in normalized_result
# Verify both roots are properly set
assert len(config.checkpoints_roots) == 1
assert len(config.unet_roots) == 1
def test_partial_overlap_prioritizes_checkpoints(
self, monkeypatch: pytest.MonkeyPatch, tmp_path, caplog
):
"""Test partial overlap - overlapping paths prioritize checkpoints."""
# Create folders
shared_dir = tmp_path / "shared"
shared_dir.mkdir()
separate_checkpoint = tmp_path / "separate_ckpt"
separate_checkpoint.mkdir()
separate_unet = tmp_path / "separate_unet"
separate_unet.mkdir()
# Create symlinks - one shared, others separate
shared_link = tmp_path / "shared_link"
shared_link.symlink_to(shared_dir, target_is_directory=True)
with caplog.at_level(logging.WARNING, logger=config_module.logger.name):
config = config_module.Config.__new__(config_module.Config)
config._path_mappings = {}
config._preview_root_paths = set()
config._cached_fingerprint = None
# One checkpoint path overlaps with one unet path
result = config._prepare_checkpoint_paths(
[str(shared_link), str(separate_checkpoint)],
[str(shared_link), str(separate_unet)]
)
# Verify warning was logged for the overlapping path
warning_messages = [
record.message
for record in caplog.records
if record.levelname == "WARNING"
and "overlapping paths" in record.message.lower()
]
assert len(warning_messages) == 1
# Verify 3 unique paths (shared counted once as checkpoint, plus separate ones)
assert len(result) == 3
# Verify the overlapping path appears in warning message
assert str(shared_link.name) in warning_messages[0] or str(shared_dir.name) in warning_messages[0]
# Verify checkpoints_roots includes both checkpoint paths (including the shared one)
assert len(config.checkpoints_roots) == 2
checkpoint_normalized = [_normalize(p) for p in config.checkpoints_roots]
assert _normalize(str(shared_link)) in checkpoint_normalized
assert _normalize(str(separate_checkpoint)) in checkpoint_normalized
# Verify unet_roots only includes the non-overlapping unet path
assert len(config.unet_roots) == 1
assert _normalize(config.unet_roots[0]) == _normalize(str(separate_unet))


@@ -230,8 +230,58 @@ def test_new_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
assert normalized_external in second_cfg._path_mappings
def test_removed_deep_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Removing a deep symlink should trigger cache invalidation."""
def test_removed_first_level_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Removing a first-level symlink should trigger cache invalidation."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create first-level symlink (directly under loras root)
external_dir = tmp_path / "external"
external_dir.mkdir()
symlink = loras_dir / "external_models"
symlink.symlink_to(external_dir, target_is_directory=True)
# Initial scan finds the symlink
first_cfg = config_module.Config()
normalized_external = _normalize(str(external_dir))
assert normalized_external in first_cfg._path_mappings
# Remove the symlink
symlink.unlink()
# Second config should detect invalid cached mapping and rescan
second_cfg = config_module.Config()
assert normalized_external not in second_cfg._path_mappings
def test_retargeted_first_level_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Changing a first-level symlink's target should trigger cache invalidation."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create first-level symlink
target_v1 = tmp_path / "external_v1"
target_v1.mkdir()
target_v2 = tmp_path / "external_v2"
target_v2.mkdir()
symlink = loras_dir / "external_models"
symlink.symlink_to(target_v1, target_is_directory=True)
# Initial scan
first_cfg = config_module.Config()
assert _normalize(str(target_v1)) in first_cfg._path_mappings
# Retarget the symlink
symlink.unlink()
symlink.symlink_to(target_v2, target_is_directory=True)
# Second config should detect changed target and rescan
second_cfg = config_module.Config()
assert _normalize(str(target_v2)) in second_cfg._path_mappings
assert _normalize(str(target_v1)) not in second_cfg._path_mappings
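The invalidation the removed/retargeted tests above rely on reduces to comparing a cached name-to-target map against a fresh scan. A minimal sketch with illustrative names, not the project's actual cache format:

```python
def detect_symlink_changes(cached: dict, current: dict) -> set:
    """Return cached targets whose mapping must be invalidated.

    A target is stale if its symlink was removed or now points elsewhere;
    newly created symlinks are simply picked up by the next scan.
    """
    stale = set()
    for name, target in cached.items():
        if current.get(name) != target:
            stale.add(target)
    return stale
```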
def test_deep_symlink_not_scanned(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Deep symlinks (below first level) are not scanned to avoid performance issues."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create nested structure with deep symlink
@@ -242,46 +292,140 @@ def test_removed_deep_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, t
deep_symlink = subdir / "styles"
deep_symlink.symlink_to(external_dir, target_is_directory=True)
# Initial scan finds the deep symlink
first_cfg = config_module.Config()
# Config should not detect deep symlinks (only first-level)
cfg = config_module.Config()
normalized_external = _normalize(str(external_dir))
assert normalized_external in first_cfg._path_mappings
# Remove the deep symlink
deep_symlink.unlink()
# Second config should detect invalid cached mapping and rescan
second_cfg = config_module.Config()
assert normalized_external not in second_cfg._path_mappings
assert normalized_external not in cfg._path_mappings
def test_retargeted_deep_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Changing a deep symlink's target should trigger cache invalidation."""
def test_deep_symlink_discovered_on_preview_access(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Deep symlinks are discovered dynamically when preview is accessed."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create nested structure
# Create nested structure with deep symlink at second level
subdir = loras_dir / "anime"
subdir.mkdir()
target_v1 = tmp_path / "external_v1"
target_v1.mkdir()
target_v2 = tmp_path / "external_v2"
target_v2.mkdir()
external_dir = tmp_path / "external"
external_dir.mkdir()
deep_symlink = subdir / "styles"
deep_symlink.symlink_to(target_v1, target_is_directory=True)
deep_symlink.symlink_to(external_dir, target_is_directory=True)
# Initial scan
first_cfg = config_module.Config()
assert _normalize(str(target_v1)) in first_cfg._path_mappings
# Create preview file under deep symlink
preview_file = deep_symlink / "model.preview.jpeg"
preview_file.write_bytes(b"preview")
# Config should not initially detect deep symlinks
cfg = config_module.Config()
normalized_external = _normalize(str(external_dir))
normalized_deep_link = _normalize(str(deep_symlink))
assert normalized_external not in cfg._path_mappings
# First preview access triggers symlink discovery automatically and returns True
is_allowed = cfg.is_preview_path_allowed(str(preview_file))
# After discovery, preview should be allowed
assert is_allowed
assert normalized_external in cfg._path_mappings
assert cfg._path_mappings[normalized_external] == normalized_deep_link
# Verify preview path is now allowed without triggering discovery again
assert cfg.is_preview_path_allowed(str(preview_file))
def test_deep_symlink_at_third_level(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Deep symlinks at third level are also discovered dynamically."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create nested structure with deep symlink at third level
level1 = loras_dir / "category"
level1.mkdir()
level2 = level1 / "subcategory"
level2.mkdir()
external_dir = tmp_path / "external_deep"
external_dir.mkdir()
deep_symlink = level2 / "deep"
deep_symlink.symlink_to(external_dir, target_is_directory=True)
# Create preview file under deep symlink
preview_file = deep_symlink / "preview.webp"
preview_file.write_bytes(b"test")
cfg = config_module.Config()
# First preview access triggers symlink discovery at third level
is_allowed = cfg.is_preview_path_allowed(str(preview_file))
assert is_allowed
normalized_external = _normalize(str(external_dir))
normalized_deep_link = _normalize(str(deep_symlink))
assert normalized_external in cfg._path_mappings
assert cfg._path_mappings[normalized_external] == normalized_deep_link
def test_deep_symlink_points_outside_roots(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Deep symlinks can point to locations outside configured roots."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create nested structure with deep symlink pointing outside roots
subdir = loras_dir / "shared"
subdir.mkdir()
outside_root = tmp_path / "storage"
outside_root.mkdir()
deep_symlink = subdir / "models"
deep_symlink.symlink_to(outside_root, target_is_directory=True)
# Create preview file under deep symlink (outside original roots)
preview_file = deep_symlink / "external.png"
preview_file.write_bytes(b"external")
cfg = config_module.Config()
# Preview access triggers symlink discovery
is_allowed = cfg.is_preview_path_allowed(str(preview_file))
# After discovery, preview should be allowed even though target is outside roots
assert is_allowed
normalized_outside = _normalize(str(outside_root))
assert normalized_outside in cfg._path_mappings
def test_normal_path_unaffected_by_discovery(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Normal paths (no symlinks) are not affected by symlink discovery logic."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create normal file structure (no symlinks)
preview_file = loras_dir / "normal.preview.jpeg"
preview_file.write_bytes(b"normal")
cfg = config_module.Config()
# Normal paths work without any discovery
assert cfg.is_preview_path_allowed(str(preview_file))
assert len(cfg._path_mappings) == 0
def test_first_level_symlink_still_works(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""First-level symlinks continue to work as before."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create first-level symlink
external_dir = tmp_path / "first_level_external"
external_dir.mkdir()
first_symlink = loras_dir / "first_level"
first_symlink.symlink_to(external_dir, target_is_directory=True)
# Create preview file under first-level symlink
preview_file = first_symlink / "model.png"
preview_file.write_bytes(b"first_level")
cfg = config_module.Config()
# First-level symlinks are scanned during initialization
normalized_external = _normalize(str(external_dir))
assert normalized_external in cfg._path_mappings
assert cfg.is_preview_path_allowed(str(preview_file))
# Retarget the symlink
deep_symlink.unlink()
deep_symlink.symlink_to(target_v2, target_is_directory=True)
# Second config should detect changed target and rescan
second_cfg = config_module.Config()
assert _normalize(str(target_v2)) in second_cfg._path_mappings
assert _normalize(str(target_v1)) not in second_cfg._path_mappings
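The dynamic discovery exercised by the deep-symlink tests can be sketched as an upward walk from the requested preview path, stopping at a configured root. `discover_deep_symlink` is a hypothetical helper; the real Config keeps this state in `_path_mappings`:

```python
import os

def discover_deep_symlink(path: str, roots: list, mappings: dict) -> bool:
    """Walk up from a preview path looking for a symlinked ancestor.

    If one is found before reaching a configured root, record
    normalized target -> normalized link in `mappings` and report success.
    This is the lazy counterpart to the first-level scan: deep symlinks
    are only resolved when a preview under them is actually requested.
    """
    roots_abs = {os.path.normpath(os.path.abspath(r)) for r in roots}
    current = os.path.dirname(os.path.abspath(path))
    while True:
        if os.path.islink(current):
            target = os.path.normpath(os.path.realpath(current)).replace(os.sep, "/")
            mappings[target] = os.path.normpath(current).replace(os.sep, "/")
            return True
        parent = os.path.dirname(current)
        if current in roots_abs or parent == current:
            return False  # reached a root (or filesystem top) without a link
        current = parent
```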
def test_legacy_symlink_cache_automatic_cleanup(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Test that legacy symlink cache is automatically cleaned up after migration."""
settings_dir = tmp_path / "settings"


@@ -47,6 +47,8 @@ class StubDownloadManager:
self.resume_error: Exception | None = None
self.stop_error: Exception | None = None
self.force_error: Exception | None = None
self.check_pending_result: dict[str, Any] | None = None
self.check_pending_calls: list[list[str]] = []
async def get_status(self, request: web.Request) -> dict[str, Any]:
return {"success": True, "status": "idle"}
@@ -75,6 +77,20 @@ class StubDownloadManager:
raise self.force_error
return {"success": True, "payload": payload}
async def check_pending_models(self, model_types: list[str]) -> dict[str, Any]:
self.check_pending_calls.append(model_types)
if self.check_pending_result is not None:
return self.check_pending_result
return {
"success": True,
"is_downloading": False,
"total_models": 100,
"pending_count": 10,
"processed_count": 90,
"failed_count": 0,
"needs_download": True,
}
class StubImportUseCase:
def __init__(self) -> None:
@@ -236,3 +252,123 @@ async def test_import_route_returns_validation_errors():
assert response.status == 400
body = await _json(response)
assert body == {"success": False, "error": "bad payload"}
async def test_check_example_images_needed_returns_pending_counts():
"""Test that check_example_images_needed endpoint returns pending model counts."""
async with registrar_app() as harness:
harness.download_manager.check_pending_result = {
"success": True,
"is_downloading": False,
"total_models": 5500,
"pending_count": 12,
"processed_count": 5488,
"failed_count": 45,
"needs_download": True,
}
response = await harness.client.post(
"/api/lm/check-example-images-needed",
json={"model_types": ["lora", "checkpoint"]},
)
assert response.status == 200
body = await _json(response)
assert body["success"] is True
assert body["total_models"] == 5500
assert body["pending_count"] == 12
assert body["processed_count"] == 5488
assert body["failed_count"] == 45
assert body["needs_download"] is True
assert body["is_downloading"] is False
# Verify the manager was called with correct model types
assert harness.download_manager.check_pending_calls == [["lora", "checkpoint"]]
async def test_check_example_images_needed_handles_download_in_progress():
"""Test that check_example_images_needed returns correct status when download is running."""
async with registrar_app() as harness:
harness.download_manager.check_pending_result = {
"success": True,
"is_downloading": True,
"total_models": 0,
"pending_count": 0,
"processed_count": 0,
"failed_count": 0,
"needs_download": False,
"message": "Download already in progress",
}
response = await harness.client.post(
"/api/lm/check-example-images-needed",
json={"model_types": ["lora"]},
)
assert response.status == 200
body = await _json(response)
assert body["success"] is True
assert body["is_downloading"] is True
assert body["needs_download"] is False
async def test_check_example_images_needed_handles_no_pending_models():
"""Test that check_example_images_needed returns correct status when no work is needed."""
async with registrar_app() as harness:
harness.download_manager.check_pending_result = {
"success": True,
"is_downloading": False,
"total_models": 5500,
"pending_count": 0,
"processed_count": 5500,
"failed_count": 0,
"needs_download": False,
}
response = await harness.client.post(
"/api/lm/check-example-images-needed",
json={"model_types": ["lora", "checkpoint", "embedding"]},
)
assert response.status == 200
body = await _json(response)
assert body["success"] is True
assert body["pending_count"] == 0
assert body["needs_download"] is False
assert body["processed_count"] == 5500
async def test_check_example_images_needed_uses_default_model_types():
"""Test that check_example_images_needed uses default model types when not specified."""
async with registrar_app() as harness:
response = await harness.client.post(
"/api/lm/check-example-images-needed",
json={}, # No model_types specified
)
assert response.status == 200
# Should use default model types
assert harness.download_manager.check_pending_calls == [["lora", "checkpoint", "embedding"]]
async def test_check_example_images_needed_returns_error_on_exception():
"""Test that check_example_images_needed returns 500 on internal error."""
async with registrar_app() as harness:
# Replace the stub's method so the handler call raises an exception
original_method = harness.download_manager.check_pending_models
async def failing_check(_model_types):
raise RuntimeError("Database connection failed")
harness.download_manager.check_pending_models = failing_check
response = await harness.client.post(
"/api/lm/check-example-images-needed",
json={"model_types": ["lora"]},
)
assert response.status == 500
body = await _json(response)
assert body["success"] is False
assert "Database connection failed" in body["error"]
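Stripped of the aiohttp plumbing, the handler contract these tests pin down is roughly the following. This is a sketch with illustrative names; the real route wraps this logic in request/response objects:

```python
DEFAULT_MODEL_TYPES = ["lora", "checkpoint", "embedding"]

async def check_example_images_needed(payload: dict, download_manager):
    """Default the model types, delegate to the manager, and surface
    unexpected errors as a 500 with the message in the body."""
    model_types = payload.get("model_types") or DEFAULT_MODEL_TYPES
    try:
        result = await download_manager.check_pending_models(model_types)
        return 200, result
    except Exception as exc:  # reported to the client, not swallowed
        return 500, {"success": False, "error": str(exc)}
```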


@@ -502,6 +502,7 @@ def test_handler_set_route_mapping_includes_all_handlers() -> None:
"resume_example_images",
"stop_example_images",
"force_download_example_images",
"check_example_images_needed",
"import_example_images",
"delete_example_image",
"set_example_image_nsfw_level",


@@ -188,3 +188,91 @@ def test_is_preview_path_allowed_rejects_prefix_without_separator(tmp_path):
# The sibling path should NOT be allowed even though it shares a prefix
assert not config.is_preview_path_allowed(str(sibling_file)), \
f"Path in '{sibling_root}' should NOT be allowed when root is '{library_root}'"
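The prefix-without-separator rejection tested here guards against the classic `startswith` pitfall: `/models/library-extra` shares a string prefix with the root `/models/library` but is not under it. A minimal sketch of the guard (illustrative helper name):

```python
import os

def path_is_under_root(path: str, root: str) -> bool:
    """True only if `path` equals `root` or lies strictly below it.

    Appending the separator before the prefix check prevents sibling
    directories that merely share a string prefix from matching.
    """
    norm_path = os.path.normpath(path).replace(os.sep, "/")
    norm_root = os.path.normpath(root).replace(os.sep, "/").rstrip("/")
    return norm_path == norm_root or norm_path.startswith(norm_root + "/")
```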
async def test_preview_handler_serves_from_deep_symlink(tmp_path):
"""Test that previews under deep symlinks are served correctly."""
library_root = tmp_path / "library"
library_root.mkdir()
# Create nested structure with deep symlink at second level
subdir = library_root / "anime"
subdir.mkdir()
external_dir = tmp_path / "external"
external_dir.mkdir()
deep_symlink = subdir / "styles"
deep_symlink.symlink_to(external_dir, target_is_directory=True)
# Create preview file under deep symlink
preview_file = deep_symlink / "model.preview.webp"
preview_file.write_bytes(b"preview_content")
config = Config()
config.apply_library_settings(
{
"folder_paths": {
"loras": [str(library_root)],
"checkpoints": [],
"unet": [],
"embeddings": [],
}
}
)
handler = PreviewHandler(config=config)
encoded_path = urllib.parse.quote(str(preview_file), safe="")
request = make_mocked_request("GET", f"/api/lm/previews?path={encoded_path}")
response = await handler.serve_preview(request)
assert isinstance(response, web.FileResponse)
assert response.status == 200
assert Path(response._path) == preview_file.resolve()
async def test_deep_symlink_discovered_on_first_access(tmp_path):
"""Test that deep symlinks are discovered on first preview access."""
library_root = tmp_path / "library"
library_root.mkdir()
# Create nested structure with deep symlink at second level
subdir = library_root / "category"
subdir.mkdir()
external_dir = tmp_path / "storage"
external_dir.mkdir()
deep_symlink = subdir / "models"
deep_symlink.symlink_to(external_dir, target_is_directory=True)
# Create preview file under deep symlink
preview_file = deep_symlink / "test.png"
preview_file.write_bytes(b"test_image")
config = Config()
config.apply_library_settings(
{
"folder_paths": {
"loras": [str(library_root)],
"checkpoints": [],
"unet": [],
"embeddings": [],
}
}
)
# Deep symlink should not be in mappings initially
normalized_external = os.path.normpath(str(external_dir)).replace(os.sep, '/')
assert normalized_external not in config._path_mappings
handler = PreviewHandler(config=config)
encoded_path = urllib.parse.quote(str(preview_file), safe="")
request = make_mocked_request("GET", f"/api/lm/previews?path={encoded_path}")
# First access should trigger symlink discovery and serve the preview
response = await handler.serve_preview(request)
assert isinstance(response, web.FileResponse)
assert response.status == 200
assert Path(response._path) == preview_file.resolve()
# Deep symlink should now be in mappings
assert normalized_external in config._path_mappings


@@ -0,0 +1,283 @@
"""
Unit tests for CacheEntryValidator
"""
import pytest
from py.services.cache_entry_validator import (
CacheEntryValidator,
ValidationResult,
)
class TestCacheEntryValidator:
"""Tests for CacheEntryValidator class"""
def test_validate_valid_entry(self):
"""Test validation of a valid cache entry"""
entry = {
'file_path': '/models/test.safetensors',
'sha256': 'abc123def456',
'file_name': 'test.safetensors',
'model_name': 'Test Model',
'size': 1024,
'modified': 1234567890.0,
'tags': ['tag1', 'tag2'],
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is True
assert result.repaired is False
assert len(result.errors) == 0
assert result.entry == entry
def test_validate_missing_required_field_sha256(self):
"""Test validation fails when required sha256 field is missing"""
entry = {
'file_path': '/models/test.safetensors',
# sha256 missing
'file_name': 'test.safetensors',
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is False
assert result.repaired is False
assert any('sha256' in error for error in result.errors)
def test_validate_missing_required_field_file_path(self):
"""Test validation fails when required file_path field is missing"""
entry = {
# file_path missing
'sha256': 'abc123def456',
'file_name': 'test.safetensors',
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is False
assert result.repaired is False
assert any('file_path' in error for error in result.errors)
def test_validate_empty_required_field_sha256(self):
"""Test validation fails when sha256 is empty string"""
entry = {
'file_path': '/models/test.safetensors',
'sha256': '', # Empty string
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is False
assert result.repaired is False
assert any('sha256' in error for error in result.errors)
def test_validate_empty_required_field_file_path(self):
"""Test validation fails when file_path is empty string"""
entry = {
'file_path': '', # Empty string
'sha256': 'abc123def456',
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is False
assert result.repaired is False
assert any('file_path' in error for error in result.errors)
def test_validate_none_required_field(self):
"""Test validation fails when required field is None"""
entry = {
'file_path': None,
'sha256': 'abc123def456',
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is False
assert result.repaired is False
assert any('file_path' in error for error in result.errors)
def test_validate_none_entry(self):
"""Test validation handles None entry"""
result = CacheEntryValidator.validate(None, auto_repair=False)
assert result.is_valid is False
assert result.repaired is False
assert any('None' in error for error in result.errors)
assert result.entry is None
def test_validate_non_dict_entry(self):
"""Test validation handles non-dict entry"""
result = CacheEntryValidator.validate("not a dict", auto_repair=False)
assert result.is_valid is False
assert result.repaired is False
assert any('not a dict' in error for error in result.errors)
assert result.entry is None
def test_auto_repair_missing_non_required_field(self):
"""Test auto-repair adds missing non-required fields"""
entry = {
'file_path': '/models/test.safetensors',
'sha256': 'abc123def456',
# file_name, model_name, tags missing
}
result = CacheEntryValidator.validate(entry, auto_repair=True)
assert result.is_valid is True
assert result.repaired is True
assert result.entry['file_name'] == ''
assert result.entry['model_name'] == ''
assert result.entry['tags'] == []
def test_auto_repair_wrong_type_field(self):
"""Test auto-repair fixes fields with wrong type"""
entry = {
'file_path': '/models/test.safetensors',
'sha256': 'abc123def456',
'size': 'not a number', # Should be int
'tags': 'not a list', # Should be list
}
result = CacheEntryValidator.validate(entry, auto_repair=True)
assert result.is_valid is True
assert result.repaired is True
assert result.entry['size'] == 0 # Default value
assert result.entry['tags'] == [] # Default value
def test_normalize_sha256_lowercase(self):
"""Test sha256 is normalized to lowercase"""
entry = {
'file_path': '/models/test.safetensors',
'sha256': 'ABC123DEF456', # Uppercase
}
result = CacheEntryValidator.validate(entry, auto_repair=True)
assert result.is_valid is True
assert result.entry['sha256'] == 'abc123def456'
def test_validate_batch_all_valid(self):
"""Test batch validation with all valid entries"""
entries = [
{
'file_path': '/models/test1.safetensors',
'sha256': 'abc123',
},
{
'file_path': '/models/test2.safetensors',
'sha256': 'def456',
},
]
valid, invalid = CacheEntryValidator.validate_batch(entries, auto_repair=False)
assert len(valid) == 2
assert len(invalid) == 0
def test_validate_batch_mixed_validity(self):
"""Test batch validation with mixed valid/invalid entries"""
entries = [
{
'file_path': '/models/test1.safetensors',
'sha256': 'abc123',
},
{
'file_path': '/models/test2.safetensors',
# sha256 missing - invalid
},
{
'file_path': '/models/test3.safetensors',
'sha256': 'def456',
},
]
valid, invalid = CacheEntryValidator.validate_batch(entries, auto_repair=False)
assert len(valid) == 2
assert len(invalid) == 1
# invalid list contains the actual invalid entries (not by index)
assert invalid[0]['file_path'] == '/models/test2.safetensors'
def test_validate_batch_empty_list(self):
"""Test batch validation with empty list"""
valid, invalid = CacheEntryValidator.validate_batch([], auto_repair=False)
assert len(valid) == 0
assert len(invalid) == 0
def test_get_file_path_safe(self):
"""Test safe file_path extraction"""
entry = {'file_path': '/models/test.safetensors', 'sha256': 'abc123'}
assert CacheEntryValidator.get_file_path_safe(entry) == '/models/test.safetensors'
def test_get_file_path_safe_missing(self):
"""Test safe file_path extraction when missing"""
entry = {'sha256': 'abc123'}
assert CacheEntryValidator.get_file_path_safe(entry) == ''
def test_get_file_path_safe_not_dict(self):
"""Test safe file_path extraction from non-dict"""
assert CacheEntryValidator.get_file_path_safe(None) == ''
assert CacheEntryValidator.get_file_path_safe('string') == ''
def test_get_sha256_safe(self):
"""Test safe sha256 extraction"""
entry = {'file_path': '/models/test.safetensors', 'sha256': 'ABC123'}
assert CacheEntryValidator.get_sha256_safe(entry) == 'abc123'
def test_get_sha256_safe_missing(self):
"""Test safe sha256 extraction when missing"""
entry = {'file_path': '/models/test.safetensors'}
assert CacheEntryValidator.get_sha256_safe(entry) == ''
def test_get_sha256_safe_not_dict(self):
"""Test safe sha256 extraction from non-dict"""
assert CacheEntryValidator.get_sha256_safe(None) == ''
assert CacheEntryValidator.get_sha256_safe('string') == ''
def test_validate_with_all_optional_fields(self):
"""Test validation with all optional fields present"""
entry = {
'file_path': '/models/test.safetensors',
'sha256': 'abc123',
'file_name': 'test.safetensors',
'model_name': 'Test Model',
'folder': 'test_folder',
'size': 1024,
'modified': 1234567890.0,
'tags': ['tag1', 'tag2'],
'preview_url': 'http://example.com/preview.jpg',
'base_model': 'SD1.5',
'from_civitai': True,
'favorite': True,
'exclude': False,
'db_checked': True,
'preview_nsfw_level': 1,
'notes': 'Test notes',
'usage_tips': 'Test tips',
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is True
assert result.repaired is False
assert result.entry == entry
def test_validate_numeric_field_accepts_float_for_int(self):
"""Test that numeric fields accept float for int type"""
entry = {
'file_path': '/models/test.safetensors',
'sha256': 'abc123',
'size': 1024.5, # Float for int field
'modified': 1234567890.0,
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is True
assert result.repaired is False
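A minimal validator satisfying the core behaviors above (required non-empty `file_path`/`sha256`, graceful handling of non-dict input, lowercase sha256 normalization) might look like this. The names mirror the tests, but the body is an assumption, not the project's implementation, and it omits the optional-field repair table:

```python
from dataclasses import dataclass, field
from typing import Optional

REQUIRED_FIELDS = ("file_path", "sha256")

@dataclass
class ValidationResult:
    is_valid: bool
    repaired: bool = False
    errors: list = field(default_factory=list)
    entry: Optional[dict] = None

def validate_entry(entry, auto_repair: bool = False) -> ValidationResult:
    """Validate required fields; optionally normalize sha256 to lowercase."""
    if not isinstance(entry, dict):
        kind = "None" if entry is None else "not a dict"
        return ValidationResult(False, errors=[f"entry is {kind}"])
    errors = [f"missing or empty required field: {name}"
              for name in REQUIRED_FIELDS if not entry.get(name)]
    if errors:
        return ValidationResult(False, errors=errors, entry=entry)
    repaired = False
    if auto_repair and entry["sha256"] != entry["sha256"].lower():
        entry["sha256"] = entry["sha256"].lower()
        repaired = True
    return ValidationResult(True, repaired=repaired, entry=entry)
```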


@@ -0,0 +1,364 @@
"""
Unit tests for CacheHealthMonitor
"""
import pytest
from py.services.cache_health_monitor import (
CacheHealthMonitor,
CacheHealthStatus,
HealthReport,
)
class TestCacheHealthMonitor:
"""Tests for CacheHealthMonitor class"""
def test_check_health_all_valid_entries(self):
"""Test health check with 100% valid entries"""
monitor = CacheHealthMonitor()
entries = [
{
'file_path': f'/models/test{i}.safetensors',
'sha256': f'hash{i}',
}
for i in range(100)
]
report = monitor.check_health(entries, auto_repair=False)
assert report.status == CacheHealthStatus.HEALTHY
assert report.total_entries == 100
assert report.valid_entries == 100
assert report.invalid_entries == 0
assert report.repaired_entries == 0
assert report.corruption_rate == 0.0
assert report.message == "Cache is healthy"
def test_check_health_degraded_cache(self):
"""Test health check with 1-5% invalid entries (degraded)"""
monitor = CacheHealthMonitor()
# Create 100 entries, 2 invalid (2%)
entries = [
{
'file_path': f'/models/test{i}.safetensors',
'sha256': f'hash{i}',
}
for i in range(98)
]
# Add 2 invalid entries
entries.append({'file_path': '/models/invalid1.safetensors'}) # Missing sha256
entries.append({'file_path': '/models/invalid2.safetensors'}) # Missing sha256
report = monitor.check_health(entries, auto_repair=False)
assert report.status == CacheHealthStatus.DEGRADED
assert report.total_entries == 100
assert report.valid_entries == 98
assert report.invalid_entries == 2
assert report.corruption_rate == 0.02
# Message describes the issue without necessarily containing the word "degraded"
assert 'invalid entries' in report.message.lower()
def test_check_health_corrupted_cache(self):
"""Test health check with >5% invalid entries (corrupted)"""
monitor = CacheHealthMonitor()
# Create 100 entries, 10 invalid (10%)
entries = [
{
'file_path': f'/models/test{i}.safetensors',
'sha256': f'hash{i}',
}
for i in range(90)
]
# Add 10 invalid entries
for i in range(10):
entries.append({'file_path': f'/models/invalid{i}.safetensors'})
report = monitor.check_health(entries, auto_repair=False)
assert report.status == CacheHealthStatus.CORRUPTED
assert report.total_entries == 100
assert report.valid_entries == 90
assert report.invalid_entries == 10
assert report.corruption_rate == 0.10
assert 'corrupted' in report.message.lower()
def test_check_health_empty_cache(self):
"""Test health check with empty cache"""
monitor = CacheHealthMonitor()
report = monitor.check_health([], auto_repair=False)
assert report.status == CacheHealthStatus.HEALTHY
assert report.total_entries == 0
assert report.valid_entries == 0
assert report.invalid_entries == 0
assert report.corruption_rate == 0.0
assert report.message == "Cache is empty"
def test_check_health_single_invalid_entry(self):
"""Test health check with 1 invalid entry out of 1 (100% corruption)"""
monitor = CacheHealthMonitor()
entries = [{'file_path': '/models/invalid.safetensors'}]
report = monitor.check_health(entries, auto_repair=False)
assert report.status == CacheHealthStatus.CORRUPTED
assert report.total_entries == 1
assert report.valid_entries == 0
assert report.invalid_entries == 1
assert report.corruption_rate == 1.0
def test_check_health_boundary_degraded_threshold(self):
"""Test health check at degraded threshold (1%)"""
monitor = CacheHealthMonitor(degraded_threshold=0.01)
# 100 entries, 1 invalid (exactly 1%)
entries = [
{
'file_path': f'/models/test{i}.safetensors',
'sha256': f'hash{i}',
}
for i in range(99)
]
entries.append({'file_path': '/models/invalid.safetensors'})
report = monitor.check_health(entries, auto_repair=False)
assert report.status == CacheHealthStatus.DEGRADED
assert report.corruption_rate == 0.01
def test_check_health_boundary_corrupted_threshold(self):
"""Test health check at corrupted threshold (5%)"""
monitor = CacheHealthMonitor(corrupted_threshold=0.05)
# 100 entries, 5 invalid (exactly 5%)
entries = [
{
'file_path': f'/models/test{i}.safetensors',
'sha256': f'hash{i}',
}
for i in range(95)
]
for i in range(5):
entries.append({'file_path': f'/models/invalid{i}.safetensors'})
report = monitor.check_health(entries, auto_repair=False)
assert report.status == CacheHealthStatus.CORRUPTED
assert report.corruption_rate == 0.05
def test_check_health_below_degraded_threshold(self):
"""Test health check below degraded threshold (0%)"""
monitor = CacheHealthMonitor(degraded_threshold=0.01)
# All entries valid
entries = [
{
'file_path': f'/models/test{i}.safetensors',
'sha256': f'hash{i}',
}
for i in range(100)
]
report = monitor.check_health(entries, auto_repair=False)
assert report.status == CacheHealthStatus.HEALTHY
assert report.corruption_rate == 0.0
def test_check_health_auto_repair(self):
"""Test health check with auto_repair enabled"""
monitor = CacheHealthMonitor()
# 1 entry with all fields (won't be repaired), 1 entry with missing non-required fields (will be repaired)
complete_entry = {
'file_path': '/models/test1.safetensors',
'sha256': 'hash1',
'file_name': 'test1.safetensors',
'model_name': 'Model 1',
'folder': '',
'size': 0,
'modified': 0.0,
'tags': ['tag1'],
'preview_url': '',
'base_model': '',
'from_civitai': True,
'favorite': False,
'exclude': False,
'db_checked': False,
'preview_nsfw_level': 0,
'notes': '',
'usage_tips': '',
}
incomplete_entry = {
'file_path': '/models/test2.safetensors',
'sha256': 'hash2',
# Missing many optional fields (will be repaired)
}
entries = [complete_entry, incomplete_entry]
report = monitor.check_health(entries, auto_repair=True)
assert report.status == CacheHealthStatus.HEALTHY
assert report.total_entries == 2
assert report.valid_entries == 2
assert report.invalid_entries == 0
assert report.repaired_entries == 1
def test_should_notify_user_healthy(self):
"""Test should_notify_user for healthy cache"""
monitor = CacheHealthMonitor()
report = HealthReport(
status=CacheHealthStatus.HEALTHY,
total_entries=100,
valid_entries=100,
invalid_entries=0,
repaired_entries=0,
message="Cache is healthy"
)
assert monitor.should_notify_user(report) is False
def test_should_notify_user_degraded(self):
"""Test should_notify_user for degraded cache"""
monitor = CacheHealthMonitor()
report = HealthReport(
status=CacheHealthStatus.DEGRADED,
total_entries=100,
valid_entries=98,
invalid_entries=2,
repaired_entries=0,
message="Cache is degraded"
)
assert monitor.should_notify_user(report) is True
def test_should_notify_user_corrupted(self):
"""Test should_notify_user for corrupted cache"""
monitor = CacheHealthMonitor()
report = HealthReport(
status=CacheHealthStatus.CORRUPTED,
total_entries=100,
valid_entries=90,
invalid_entries=10,
repaired_entries=0,
message="Cache is corrupted"
)
assert monitor.should_notify_user(report) is True
def test_get_notification_severity_degraded(self):
"""Test get_notification_severity for degraded cache"""
monitor = CacheHealthMonitor()
report = HealthReport(
status=CacheHealthStatus.DEGRADED,
total_entries=100,
valid_entries=98,
invalid_entries=2,
repaired_entries=0,
message="Cache is degraded"
)
assert monitor.get_notification_severity(report) == 'warning'
def test_get_notification_severity_corrupted(self):
"""Test get_notification_severity for corrupted cache"""
monitor = CacheHealthMonitor()
report = HealthReport(
status=CacheHealthStatus.CORRUPTED,
total_entries=100,
valid_entries=90,
invalid_entries=10,
repaired_entries=0,
message="Cache is corrupted"
)
assert monitor.get_notification_severity(report) == 'error'
def test_report_to_dict(self):
"""Test HealthReport to_dict conversion"""
report = HealthReport(
status=CacheHealthStatus.DEGRADED,
total_entries=100,
valid_entries=98,
invalid_entries=2,
repaired_entries=1,
invalid_paths=['/path1', '/path2'],
message="Cache issues detected"
)
result = report.to_dict()
assert result['status'] == 'degraded'
assert result['total_entries'] == 100
assert result['valid_entries'] == 98
assert result['invalid_entries'] == 2
assert result['repaired_entries'] == 1
assert result['corruption_rate'] == '2.0%'
assert len(result['invalid_paths']) == 2
assert result['message'] == "Cache issues detected"
def test_report_corruption_rate_zero_division(self):
"""Test corruption_rate calculation with zero entries"""
report = HealthReport(
status=CacheHealthStatus.HEALTHY,
total_entries=0,
valid_entries=0,
invalid_entries=0,
repaired_entries=0,
message="Cache is empty"
)
assert report.corruption_rate == 0.0
def test_check_health_collects_invalid_paths(self):
"""Test health check collects invalid entry paths"""
monitor = CacheHealthMonitor()
entries = [
{
'file_path': '/models/valid.safetensors',
'sha256': 'hash1',
},
{
'file_path': '/models/invalid1.safetensors',
},
{
'file_path': '/models/invalid2.safetensors',
},
]
report = monitor.check_health(entries, auto_repair=False)
assert len(report.invalid_paths) == 2
assert '/models/invalid1.safetensors' in report.invalid_paths
assert '/models/invalid2.safetensors' in report.invalid_paths
def test_report_to_dict_limits_invalid_paths(self):
"""Test that to_dict limits invalid_paths to first 10"""
report = HealthReport(
status=CacheHealthStatus.CORRUPTED,
total_entries=15,
valid_entries=0,
invalid_entries=15,
repaired_entries=0,
invalid_paths=[f'/path{i}' for i in range(15)],
message="Cache corrupted"
)
result = report.to_dict()
assert len(result['invalid_paths']) == 10
assert result['invalid_paths'][0] == '/path0'
assert result['invalid_paths'][-1] == '/path9'
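The assertions above fully determine the report's serialized shape: a lowercase status value, a percentage-formatted corruption rate that tolerates an empty cache, and an `invalid_paths` list capped at 10 entries. A minimal sketch of a `HealthReport` consistent with these tests (field names and the cap are taken from the assertions; the real class in `py.services.cache_health_monitor` may differ):

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class CacheHealthStatus(Enum):
    HEALTHY = "healthy"
    DEGRADED = "degraded"
    CORRUPTED = "corrupted"


@dataclass
class HealthReport:
    status: CacheHealthStatus
    total_entries: int
    valid_entries: int
    invalid_entries: int
    repaired_entries: int
    message: str
    invalid_paths: List[str] = field(default_factory=list)

    @property
    def corruption_rate(self) -> float:
        # Guard against zero division for an empty cache
        if self.total_entries == 0:
            return 0.0
        return self.invalid_entries / self.total_entries * 100

    def to_dict(self) -> dict:
        return {
            "status": self.status.value,
            "total_entries": self.total_entries,
            "valid_entries": self.valid_entries,
            "invalid_entries": self.invalid_entries,
            "repaired_entries": self.repaired_entries,
            "corruption_rate": f"{self.corruption_rate:.1f}%",
            # Cap the path list so serialized reports stay small
            "invalid_paths": self.invalid_paths[:10],
            "message": self.message,
        }
```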


@@ -0,0 +1,368 @@
"""Tests for the check_pending_models lightweight pre-check functionality."""
from __future__ import annotations
import json
from types import SimpleNamespace
import pytest
from py.services.settings_manager import get_settings_manager
from py.utils import example_images_download_manager as download_module
class StubScanner:
"""Scanner double returning predetermined cache contents."""
def __init__(self, models: list[dict]) -> None:
self._cache = SimpleNamespace(raw_data=models)
async def get_cached_data(self):
return self._cache
def _patch_scanners(
monkeypatch: pytest.MonkeyPatch,
lora_scanner: StubScanner | None = None,
checkpoint_scanner: StubScanner | None = None,
embedding_scanner: StubScanner | None = None,
) -> None:
"""Patch ServiceRegistry to return stub scanners."""
async def _get_lora_scanner(cls):
return lora_scanner or StubScanner([])
async def _get_checkpoint_scanner(cls):
return checkpoint_scanner or StubScanner([])
async def _get_embedding_scanner(cls):
return embedding_scanner or StubScanner([])
monkeypatch.setattr(
download_module.ServiceRegistry,
"get_lora_scanner",
classmethod(_get_lora_scanner),
)
monkeypatch.setattr(
download_module.ServiceRegistry,
"get_checkpoint_scanner",
classmethod(_get_checkpoint_scanner),
)
monkeypatch.setattr(
download_module.ServiceRegistry,
"get_embedding_scanner",
classmethod(_get_embedding_scanner),
)
class RecordingWebSocketManager:
"""Collects broadcast payloads for assertions."""
def __init__(self) -> None:
self.payloads: list[dict] = []
async def broadcast(self, payload: dict) -> None:
self.payloads.append(payload)
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_returns_zero_when_all_processed(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that check_pending_models returns 0 pending when all models are processed."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
# Create processed models
processed_hashes = ["a" * 64, "b" * 64, "c" * 64]
models = [
{"sha256": h, "model_name": f"Model {i}"}
for i, h in enumerate(processed_hashes)
]
# Create progress file with all models processed
progress_file = tmp_path / ".download_progress.json"
progress_file.write_text(
json.dumps({"processed_models": processed_hashes, "failed_models": []}),
encoding="utf-8",
)
# Create model directories with files (simulating completed downloads)
for h in processed_hashes:
model_dir = tmp_path / h
model_dir.mkdir()
(model_dir / "image_0.png").write_text("data")
_patch_scanners(monkeypatch, lora_scanner=StubScanner(models))
result = await manager.check_pending_models(["lora"])
assert result["success"] is True
assert result["is_downloading"] is False
assert result["total_models"] == 3
assert result["pending_count"] == 0
assert result["processed_count"] == 3
assert result["needs_download"] is False
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_finds_unprocessed_models(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that check_pending_models correctly identifies unprocessed models."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
# Create models - some processed, some not
processed_hash = "a" * 64
unprocessed_hash = "b" * 64
models = [
{"sha256": processed_hash, "model_name": "Processed Model"},
{"sha256": unprocessed_hash, "model_name": "Unprocessed Model"},
]
# Create progress file with only one model processed
progress_file = tmp_path / ".download_progress.json"
progress_file.write_text(
json.dumps({"processed_models": [processed_hash], "failed_models": []}),
encoding="utf-8",
)
# Create directory only for processed model
processed_dir = tmp_path / processed_hash
processed_dir.mkdir()
(processed_dir / "image_0.png").write_text("data")
_patch_scanners(monkeypatch, lora_scanner=StubScanner(models))
result = await manager.check_pending_models(["lora"])
assert result["success"] is True
assert result["total_models"] == 2
assert result["pending_count"] == 1
assert result["processed_count"] == 1
assert result["needs_download"] is True
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_skips_models_without_hash(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that models without sha256 are not counted as pending."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
# Models - one with hash, one without
models = [
{"sha256": "a" * 64, "model_name": "Hashed Model"},
{"sha256": None, "model_name": "No Hash Model"},
{"model_name": "Missing Hash Model"}, # No sha256 key at all
]
_patch_scanners(monkeypatch, lora_scanner=StubScanner(models))
result = await manager.check_pending_models(["lora"])
assert result["success"] is True
assert result["total_models"] == 3
assert result["pending_count"] == 1 # Only the one with hash
assert result["needs_download"] is True
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_handles_multiple_model_types(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that check_pending_models aggregates counts across multiple model types."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
lora_models = [
{"sha256": "a" * 64, "model_name": "Lora 1"},
{"sha256": "b" * 64, "model_name": "Lora 2"},
]
checkpoint_models = [
{"sha256": "c" * 64, "model_name": "Checkpoint 1"},
]
embedding_models = [
{"sha256": "d" * 64, "model_name": "Embedding 1"},
{"sha256": "e" * 64, "model_name": "Embedding 2"},
{"sha256": "f" * 64, "model_name": "Embedding 3"},
]
_patch_scanners(
monkeypatch,
lora_scanner=StubScanner(lora_models),
checkpoint_scanner=StubScanner(checkpoint_models),
embedding_scanner=StubScanner(embedding_models),
)
result = await manager.check_pending_models(["lora", "checkpoint", "embedding"])
assert result["success"] is True
assert result["total_models"] == 6 # 2 + 1 + 3
assert result["pending_count"] == 6 # All unprocessed
assert result["needs_download"] is True
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_returns_error_when_download_in_progress(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that check_pending_models returns special response when download is running."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
# Simulate download in progress
manager._is_downloading = True
result = await manager.check_pending_models(["lora"])
assert result["success"] is True
assert result["is_downloading"] is True
assert result["needs_download"] is False
assert result["pending_count"] == 0
assert "already in progress" in result["message"].lower()
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_handles_empty_library(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that check_pending_models handles empty model library."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
_patch_scanners(monkeypatch, lora_scanner=StubScanner([]))
result = await manager.check_pending_models(["lora"])
assert result["success"] is True
assert result["total_models"] == 0
assert result["pending_count"] == 0
assert result["processed_count"] == 0
assert result["needs_download"] is False
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_reads_failed_models(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that check_pending_models correctly reports failed model count."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
models = [{"sha256": "a" * 64, "model_name": "Model"}]
# Create progress file with failed models
progress_file = tmp_path / ".download_progress.json"
progress_file.write_text(
json.dumps({"processed_models": [], "failed_models": ["a" * 64, "b" * 64]}),
encoding="utf-8",
)
_patch_scanners(monkeypatch, lora_scanner=StubScanner(models))
result = await manager.check_pending_models(["lora"])
assert result["success"] is True
assert result["failed_count"] == 2
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_handles_missing_progress_file(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that check_pending_models works correctly when no progress file exists."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
models = [
{"sha256": "a" * 64, "model_name": "Model 1"},
{"sha256": "b" * 64, "model_name": "Model 2"},
]
_patch_scanners(monkeypatch, lora_scanner=StubScanner(models))
# No progress file created
result = await manager.check_pending_models(["lora"])
assert result["success"] is True
assert result["total_models"] == 2
assert result["pending_count"] == 2 # All pending since no progress
assert result["processed_count"] == 0
assert result["failed_count"] == 0
assert result["needs_download"] is True
@pytest.mark.asyncio
@pytest.mark.usefixtures("tmp_path")
async def test_check_pending_models_handles_corrupted_progress_file(
monkeypatch: pytest.MonkeyPatch,
tmp_path,
settings_manager,
):
"""Test that check_pending_models handles corrupted progress file gracefully."""
ws_manager = RecordingWebSocketManager()
manager = download_module.DownloadManager(ws_manager=ws_manager)
monkeypatch.setitem(settings_manager.settings, "example_images_path", str(tmp_path))
models = [{"sha256": "a" * 64, "model_name": "Model"}]
# Create corrupted progress file
progress_file = tmp_path / ".download_progress.json"
progress_file.write_text("not valid json", encoding="utf-8")
_patch_scanners(monkeypatch, lora_scanner=StubScanner(models))
result = await manager.check_pending_models(["lora"])
# Should still succeed, treating all as unprocessed
assert result["success"] is True
assert result["total_models"] == 1
assert result["pending_count"] == 1
@pytest.fixture
def settings_manager():
return get_settings_manager()
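The behaviour these tests pin down (hash-based pending detection, tolerant parsing of `.download_progress.json`) can be sketched as a hypothetical standalone helper; the name `summarize_pending` and its return shape are illustrative, not the actual `DownloadManager` API:

```python
import json
from pathlib import Path
from typing import Iterable


def summarize_pending(models: Iterable[dict], progress_file: Path) -> dict:
    """Count pending example-image downloads from cached model data."""
    processed: set = set()
    failed: set = set()
    try:
        data = json.loads(progress_file.read_text(encoding="utf-8"))
        processed = set(data.get("processed_models", []))
        failed = set(data.get("failed_models", []))
    except (OSError, json.JSONDecodeError):
        pass  # Missing or corrupted progress file: treat every model as unprocessed
    models = list(models)
    # Models without a sha256 can never be matched to a download, so skip them
    hashed = [m["sha256"] for m in models if m.get("sha256")]
    pending = [h for h in hashed if h not in processed]
    return {
        "total_models": len(models),
        "pending_count": len(pending),
        "processed_count": len(hashed) - len(pending),
        "failed_count": len(failed),
        "needs_download": bool(pending),
    }
```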


@@ -0,0 +1,110 @@
"""Test for duplicate detection by source URL."""
import pytest
from unittest.mock import AsyncMock, MagicMock
@pytest.mark.asyncio
async def test_find_duplicate_recipes_by_source():
"""Test that duplicate recipes are detected by source URL."""
from py.services.recipe_scanner import RecipeScanner
scanner = MagicMock(spec=RecipeScanner)
scanner.get_cached_data = AsyncMock()
cache = MagicMock()
cache.raw_data = [
{
'id': '8705c972-ef08-47f3-8ac3-9ac3b8ff4c0b',
'source_path': 'https://civitai.com/images/119165946',
'title': 'Recipe 1'
},
{
'id': '52e636ce-ea9f-4f64-a6a9-c704bd715889',
'source_path': 'https://civitai.com/images/119165946',
'title': 'Recipe 2'
},
{
'id': '00000000-0000-0000-0000-000000000001',
'source_path': 'https://civitai.com/images/999999999',
'title': 'Recipe 3'
},
{
'id': '00000000-0000-0000-0000-000000000002',
'source_path': '',
'title': 'Recipe 4 (no source)'
},
]
scanner.get_cached_data.return_value = cache
# Call the actual method on the mocked scanner
from py.services.recipe_scanner import RecipeScanner as RealRecipeScanner
result = await RealRecipeScanner.find_duplicate_recipes_by_source(scanner)
assert len(result) == 1
assert 'https://civitai.com/images/119165946' in result
assert len(result['https://civitai.com/images/119165946']) == 2
assert '8705c972-ef08-47f3-8ac3-9ac3b8ff4c0b' in result['https://civitai.com/images/119165946']
assert '52e636ce-ea9f-4f64-a6a9-c704bd715889' in result['https://civitai.com/images/119165946']
@pytest.mark.asyncio
async def test_find_duplicate_recipes_by_source_empty():
"""Test that empty result is returned when no duplicates found."""
from py.services.recipe_scanner import RecipeScanner
scanner = MagicMock(spec=RecipeScanner)
scanner.get_cached_data = AsyncMock()
cache = MagicMock()
cache.raw_data = [
{
'id': '8705c972-ef08-47f3-8ac3-9ac3b8ff4c0b',
'source_path': 'https://civitai.com/images/119165946',
'title': 'Recipe 1'
},
{
'id': '00000000-0000-0000-0000-000000000002',
'source_path': '',
'title': 'Recipe 2 (no source)'
},
]
scanner.get_cached_data.return_value = cache
from py.services.recipe_scanner import RecipeScanner as RealRecipeScanner
result = await RealRecipeScanner.find_duplicate_recipes_by_source(scanner)
assert len(result) == 0
@pytest.mark.asyncio
async def test_find_duplicate_recipes_by_source_trimming_whitespace():
"""Test that whitespace is trimmed from source URLs."""
from py.services.recipe_scanner import RecipeScanner
scanner = MagicMock(spec=RecipeScanner)
scanner.get_cached_data = AsyncMock()
cache = MagicMock()
cache.raw_data = [
{
'id': '8705c972-ef08-47f3-8ac3-9ac3b8ff4c0b',
'source_path': 'https://civitai.com/images/119165946',
'title': 'Recipe 1'
},
{
'id': '52e636ce-ea9f-4f64-a6a9-c704bd715889',
'source_path': ' https://civitai.com/images/119165946 ',
'title': 'Recipe 2'
},
]
scanner.get_cached_data.return_value = cache
from py.services.recipe_scanner import RecipeScanner as RealRecipeScanner
result = await RealRecipeScanner.find_duplicate_recipes_by_source(scanner)
assert len(result) == 1
assert 'https://civitai.com/images/119165946' in result
assert len(result['https://civitai.com/images/119165946']) == 2
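The grouping these tests describe (trim whitespace, ignore empty sources, report only URLs with two or more recipes) can be sketched as a plain function over the cached recipe dicts; the helper name is hypothetical and the real `RecipeScanner.find_duplicate_recipes_by_source` reads from its own cache:

```python
def group_duplicates_by_source(recipes: list) -> dict:
    """Group recipe IDs by their trimmed source URL, keeping only duplicates."""
    groups: dict = {}
    for recipe in recipes:
        source = (recipe.get("source_path") or "").strip()
        if not source:
            continue  # Recipes without a source URL can never collide
        groups.setdefault(source, []).append(recipe["id"])
    # Only URLs that map to two or more recipes count as duplicates
    return {url: ids for url, ids in groups.items() if len(ids) > 1}
```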


@@ -482,6 +482,81 @@ async def test_relink_metadata_raises_when_version_missing():
model_version_id=None,
)
@pytest.mark.asyncio
async def test_fetch_and_update_model_persists_db_checked_when_sqlite_fails(tmp_path):
"""
Regression test: When a deleted model is checked against sqlite and not found,
db_checked=True must be persisted to disk so the model is skipped in future refreshes.
Previously, db_checked was set in memory but never saved because the save_metadata
call was inside the `if civitai_api_not_found:` block, which is False for deleted
models (since the default CivitAI API is never tried).
"""
default_provider = SimpleNamespace(
get_model_by_hash=AsyncMock(),
get_model_version=AsyncMock(),
)
civarchive_provider = SimpleNamespace(
get_model_by_hash=AsyncMock(return_value=(None, "Model not found")),
get_model_version=AsyncMock(),
)
sqlite_provider = SimpleNamespace(
get_model_by_hash=AsyncMock(return_value=(None, "Model not found")),
get_model_version=AsyncMock(),
)
async def select_provider(name: str):
if name == "civarchive_api":
return civarchive_provider
if name == "sqlite":
return sqlite_provider
return default_provider
provider_selector = AsyncMock(side_effect=select_provider)
helpers = build_service(
settings_values={"enable_metadata_archive_db": True},
default_provider=default_provider,
provider_selector=provider_selector,
)
model_path = tmp_path / "model.safetensors"
model_data = {
"civitai_deleted": True,
"db_checked": False,
"from_civitai": False,
"file_path": str(model_path),
"model_name": "Deleted Model",
}
update_cache = AsyncMock()
ok, error = await helpers.service.fetch_and_update_model(
sha256="deadbeef",
file_path=str(model_path),
model_data=model_data,
update_cache_func=update_cache,
)
# The call should fail because neither provider found metadata
assert not ok
assert error is not None
assert "Model not found" in error or "not found in metadata archive DB" in error
# Both providers should have been tried
assert civarchive_provider.get_model_by_hash.await_count == 1
assert sqlite_provider.get_model_by_hash.await_count == 1
# db_checked should be True in memory
assert model_data["db_checked"] is True
# CRITICAL: metadata should have been saved to disk with db_checked=True
helpers.metadata_manager.save_metadata.assert_awaited_once()
saved_call = helpers.metadata_manager.save_metadata.await_args
saved_data = saved_call.args[1]
assert saved_data["db_checked"] is True
assert "folder" not in saved_data # folder should be stripped
assert "last_checked_at" in saved_data # timestamp should be set
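The fix this regression test guards can be sketched as a hypothetical helper that persists the flag on the archive-DB failure path as well, instead of only inside the `if civitai_api_not_found:` block; `persist_db_checked` and its stripping of the transient `folder` field are illustrative stand-ins for the real metadata-manager call:

```python
import time


async def persist_db_checked(metadata_manager, file_path: str, model_data: dict) -> None:
    """Persist db_checked=True even when only the archive-DB lookup failed."""
    model_data["db_checked"] = True
    model_data["last_checked_at"] = time.time()
    # The "folder" field is derived at scan time and should not be written to disk
    to_save = {k: v for k, v in model_data.items() if k != "folder"}
    await metadata_manager.save_metadata(file_path, to_save)
```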
@pytest.mark.asyncio
async def test_fetch_and_update_model_does_not_overwrite_api_metadata_with_archive(tmp_path):
helpers = build_service()


@@ -0,0 +1,167 @@
"""
Integration tests for cache validation in ModelScanner
"""
import pytest
import asyncio
from py.services.model_scanner import ModelScanner
from py.services.cache_entry_validator import CacheEntryValidator
from py.services.cache_health_monitor import CacheHealthMonitor, CacheHealthStatus
@pytest.mark.asyncio
async def test_model_scanner_validates_cache_entries(tmp_path_factory):
"""Test that ModelScanner validates cache entries during initialization"""
# Create temporary test data
tmp_dir = tmp_path_factory.mktemp("test_loras")
# Create test files
test_file = tmp_dir / "test_model.safetensors"
test_file.write_bytes(b"fake model data" * 100)
# Mock model scanner (we can't easily instantiate a full scanner in tests)
# Instead, test the validation logic directly
entries = [
{
'file_path': str(test_file),
'sha256': 'abc123def456',
'file_name': 'test_model.safetensors',
},
{
'file_path': str(tmp_dir / 'invalid.safetensors'),
# Missing sha256 - invalid
},
]
valid, invalid = CacheEntryValidator.validate_batch(entries, auto_repair=True)
assert len(valid) == 1
assert len(invalid) == 1
assert valid[0]['sha256'] == 'abc123def456'
@pytest.mark.asyncio
async def test_model_scanner_detects_degraded_cache():
"""Test that ModelScanner detects degraded cache health"""
# Create 100 entries with 2% corruption
entries = [
{
'file_path': f'/models/test{i}.safetensors',
'sha256': f'hash{i}',
}
for i in range(98)
]
# Add 2 invalid entries
entries.append({'file_path': '/models/invalid1.safetensors'})
entries.append({'file_path': '/models/invalid2.safetensors'})
monitor = CacheHealthMonitor()
report = monitor.check_health(entries, auto_repair=True)
assert report.status == CacheHealthStatus.DEGRADED
assert report.invalid_entries == 2
assert report.valid_entries == 98
@pytest.mark.asyncio
async def test_model_scanner_detects_corrupted_cache():
"""Test that ModelScanner detects corrupted cache health"""
# Create 100 entries with 10% corruption
entries = [
{
'file_path': f'/models/test{i}.safetensors',
'sha256': f'hash{i}',
}
for i in range(90)
]
# Add 10 invalid entries
for i in range(10):
entries.append({'file_path': f'/models/invalid{i}.safetensors'})
monitor = CacheHealthMonitor()
report = monitor.check_health(entries, auto_repair=True)
assert report.status == CacheHealthStatus.CORRUPTED
assert report.invalid_entries == 10
assert report.valid_entries == 90
@pytest.mark.asyncio
async def test_model_scanner_removes_invalid_from_hash_index():
"""Test that ModelScanner removes invalid entries from hash index"""
from py.services.model_hash_index import ModelHashIndex
# Create a hash index with some entries
hash_index = ModelHashIndex()
valid_entry = {
'file_path': '/models/valid.safetensors',
'sha256': 'abc123',
}
invalid_entry = {
'file_path': '/models/invalid.safetensors',
'sha256': '', # Empty sha256
}
# Add entries to hash index
hash_index.add_entry(valid_entry['sha256'], valid_entry['file_path'])
hash_index.add_entry(invalid_entry['sha256'], invalid_entry['file_path'])
# Verify both entries are in the index (using get_hash method)
assert hash_index.get_hash(valid_entry['file_path']) == valid_entry['sha256']
# Invalid entry won't be added due to empty sha256
assert hash_index.get_hash(invalid_entry['file_path']) is None
    # Removing the invalid entry is a no-op here (it was never indexed),
    # but it still exercises the removal path
hash_index.remove_by_path(
CacheEntryValidator.get_file_path_safe(invalid_entry),
CacheEntryValidator.get_sha256_safe(invalid_entry)
)
# Verify valid entry remains
assert hash_index.get_hash(valid_entry['file_path']) == valid_entry['sha256']
def test_cache_entry_validator_handles_various_field_types():
"""Test that validator handles various field types correctly"""
# Test with different field types
entry = {
'file_path': '/models/test.safetensors',
'sha256': 'abc123',
'size': 1024, # int
'modified': 1234567890.0, # float
'favorite': True, # bool
'tags': ['tag1', 'tag2'], # list
'exclude': False, # bool
}
result = CacheEntryValidator.validate(entry, auto_repair=False)
assert result.is_valid is True
assert result.repaired is False
def test_cache_health_report_serialization():
"""Test that HealthReport can be serialized to dict"""
from py.services.cache_health_monitor import HealthReport
report = HealthReport(
status=CacheHealthStatus.DEGRADED,
total_entries=100,
valid_entries=98,
invalid_entries=2,
repaired_entries=1,
invalid_paths=['/path1', '/path2'],
message="Cache issues detected"
)
result = report.to_dict()
assert result['status'] == 'degraded'
assert result['total_entries'] == 100
assert result['valid_entries'] == 98
assert result['invalid_entries'] == 2
assert result['repaired_entries'] == 1
assert result['corruption_rate'] == '2.0%'
assert len(result['invalid_paths']) == 2
assert result['message'] == "Cache issues detected"
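The validation rule these tests rely on reduces to a required-field check: an entry needs both a `file_path` and a non-empty `sha256` to be usable. A minimal sketch of a batch validator consistent with that (the `auto_repair` flag is accepted for signature parity, but repair itself is outside this sketch; the real `CacheEntryValidator.validate_batch` does more):

```python
def validate_batch(entries: list, auto_repair: bool = False) -> tuple:
    """Split cache entries into (valid, invalid) based on required fields."""
    valid: list = []
    invalid: list = []
    for entry in entries:
        # An entry is usable only with a file path and a non-empty sha256
        if entry.get("file_path") and entry.get("sha256"):
            valid.append(entry)
        else:
            invalid.append(entry)
    return valid, invalid
```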


@@ -242,6 +242,148 @@ async def test_bulk_metadata_refresh_reports_errors() -> None:
assert progress.events[-1]["error"] == "boom"
async def test_bulk_metadata_refresh_skips_confirmed_not_found_models(
monkeypatch: pytest.MonkeyPatch,
) -> None:
"""Models marked as from_civitai=False and civitai_deleted=True should be skipped."""
scanner = MockScanner()
scanner._cache.raw_data = [
{
"file_path": "model1.safetensors",
"sha256": "hash1",
"from_civitai": False,
"civitai_deleted": True,
"model_name": "NotOnCivitAI",
},
{
"file_path": "model2.safetensors",
"sha256": "hash2",
"from_civitai": True,
"model_name": "OnCivitAI",
},
]
service = MockModelService(scanner)
metadata_sync = StubMetadataSync()
settings = StubSettings(enable_metadata_archive_db=False)
progress = ProgressCollector()
async def fake_hydrate(model_data: Dict[str, Any]) -> Dict[str, Any]:
# Preserve the original data (simulating no metadata file on disk)
return model_data
monkeypatch.setattr(MetadataManager, "hydrate_model_data", staticmethod(fake_hydrate))
use_case = BulkMetadataRefreshUseCase(
service=service,
metadata_sync=metadata_sync,
settings_service=settings,
logger=logging.getLogger("test"),
)
result = await use_case.execute_with_error_handling(progress_callback=progress)
assert result["success"] is True
# Only model2 should be processed (model1 is skipped)
assert result["processed"] == 1
assert result["updated"] == 1
assert len(metadata_sync.calls) == 1
assert metadata_sync.calls[0]["file_path"] == "model2.safetensors"
async def test_bulk_metadata_refresh_skips_when_archive_checked(
monkeypatch: pytest.MonkeyPatch,
) -> None:
"""Models with db_checked=True should be skipped even if archive DB is enabled."""
scanner = MockScanner()
scanner._cache.raw_data = [
{
"file_path": "model1.safetensors",
"sha256": "hash1",
"from_civitai": False,
"civitai_deleted": True,
"db_checked": True,
"model_name": "ArchiveChecked",
},
{
"file_path": "model2.safetensors",
"sha256": "hash2",
"from_civitai": False,
"civitai_deleted": True,
"db_checked": False,
"model_name": "ArchiveNotChecked",
},
]
service = MockModelService(scanner)
metadata_sync = StubMetadataSync()
settings = StubSettings(enable_metadata_archive_db=True)
progress = ProgressCollector()
async def fake_hydrate(model_data: Dict[str, Any]) -> Dict[str, Any]:
return model_data
monkeypatch.setattr(MetadataManager, "hydrate_model_data", staticmethod(fake_hydrate))
use_case = BulkMetadataRefreshUseCase(
service=service,
metadata_sync=metadata_sync,
settings_service=settings,
logger=logging.getLogger("test"),
)
result = await use_case.execute_with_error_handling(progress_callback=progress)
assert result["success"] is True
# Only model2 should be processed (model1 has db_checked=True)
assert result["processed"] == 1
assert result["updated"] == 1
assert len(metadata_sync.calls) == 1
assert metadata_sync.calls[0]["file_path"] == "model2.safetensors"
async def test_bulk_metadata_refresh_processes_never_fetched_models(
monkeypatch: pytest.MonkeyPatch,
) -> None:
"""Models that have never been fetched (from_civitai=None) should be processed."""
scanner = MockScanner()
scanner._cache.raw_data = [
{
"file_path": "model1.safetensors",
"sha256": "hash1",
"from_civitai": None,
"model_name": "NeverFetched",
},
{
"file_path": "model2.safetensors",
"sha256": "hash2",
"model_name": "NoFromCivitaiField",
},
]
service = MockModelService(scanner)
metadata_sync = StubMetadataSync()
settings = StubSettings(enable_metadata_archive_db=False)
progress = ProgressCollector()
async def fake_hydrate(model_data: Dict[str, Any]) -> Dict[str, Any]:
return model_data
monkeypatch.setattr(MetadataManager, "hydrate_model_data", staticmethod(fake_hydrate))
use_case = BulkMetadataRefreshUseCase(
service=service,
metadata_sync=metadata_sync,
settings_service=settings,
logger=logging.getLogger("test"),
)
result = await use_case.execute_with_error_handling(progress_callback=progress)
assert result["success"] is True
# Both models should be processed
assert result["processed"] == 2
assert result["updated"] == 2
assert len(metadata_sync.calls) == 2
async def test_download_model_use_case_raises_validation_error() -> None:
coordinator = StubDownloadCoordinator(error="validation")
use_case = DownloadModelUseCase(download_coordinator=coordinator)


@@ -75,6 +75,31 @@ def test_get_file_extension_defaults_to_jpg() -> None:
assert ext == ".jpg"
def test_get_file_extension_from_media_type_hint_video() -> None:
"""Test that media_type_hint='video' returns .mp4 when other methods fail"""
ext = processor_module.ExampleImagesProcessor._get_file_extension_from_content_or_headers(
b"", {}, "https://c.genur.art/536be3c9-e506-4365-b078-bfbc5df9ceec", "video"
)
assert ext == ".mp4"
def test_get_file_extension_from_media_type_hint_image() -> None:
"""Test that media_type_hint='image' falls back to .jpg"""
ext = processor_module.ExampleImagesProcessor._get_file_extension_from_content_or_headers(
b"", {}, "https://example.com/no-extension", "image"
)
assert ext == ".jpg"
def test_get_file_extension_media_type_hint_low_priority() -> None:
"""Test that media_type_hint is only used as last resort (after URL extension)"""
# URL has extension, should use that instead of media_type_hint
ext = processor_module.ExampleImagesProcessor._get_file_extension_from_content_or_headers(
b"", {}, "https://example.com/video.mp4", "image"
)
assert ext == ".mp4"
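The priority order these tests establish, with the `media_type_hint` used only as a last resort after the URL extension, can be sketched as a simplified chain; `guess_extension` is an illustrative stand-in, and the real `_get_file_extension_from_content_or_headers` also sniffs content magic bytes and response headers between the two steps shown:

```python
import os
from urllib.parse import urlparse


def guess_extension(content: bytes, headers: dict, url: str,
                    media_type_hint: str = None) -> str:
    """Pick a file extension: URL path first, media-type hint last."""
    # 1) An extension in the URL path always wins
    ext = os.path.splitext(urlparse(url).path)[1].lower()
    if ext:
        return ext
    # (The real implementation checks magic bytes and Content-Type here.)
    # 2) Last resort: fall back on the caller's media-type hint
    if media_type_hint == "video":
        return ".mp4"
    return ".jpg"  # Safe default for images
```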
class StubScanner:
def __init__(self, models: list[Dict[str, Any]]) -> None:
self._cache = SimpleNamespace(raw_data=models)


@@ -0,0 +1,100 @@
"""Test for modelVersionId fallback in fingerprint calculation."""
import pytest
from py.utils.utils import calculate_recipe_fingerprint
def test_calculate_fingerprint_with_model_version_id_fallback():
"""Test that fingerprint uses modelVersionId when hash is empty, even when not deleted."""
loras = [
{
"hash": "",
"strength": 1.0,
"modelVersionId": 2639467,
"isDeleted": False,
"exclude": False
}
]
fingerprint = calculate_recipe_fingerprint(loras)
assert fingerprint == "2639467:1.0"
def test_calculate_fingerprint_with_multiple_model_version_ids():
"""Test fingerprint with multiple loras using modelVersionId fallback."""
loras = [
{
"hash": "",
"strength": 1.0,
"modelVersionId": 2639467,
"isDeleted": False,
"exclude": False
},
{
"hash": "",
"strength": 0.8,
"modelVersionId": 1234567,
"isDeleted": False,
"exclude": False
}
]
fingerprint = calculate_recipe_fingerprint(loras)
assert fingerprint == "1234567:0.8|2639467:1.0"
def test_calculate_fingerprint_with_deleted_lora():
"""Test that deleted loras with modelVersionId are still included."""
loras = [
{
"hash": "",
"strength": 1.0,
"modelVersionId": 2639467,
"isDeleted": True,
"exclude": False
}
]
fingerprint = calculate_recipe_fingerprint(loras)
assert fingerprint == "2639467:1.0"
def test_calculate_fingerprint_with_excluded_lora():
"""Test that excluded loras are skipped even with modelVersionId."""
loras = [
{
"hash": "",
"strength": 1.0,
"modelVersionId": 2639467,
"isDeleted": False,
"exclude": True
}
]
fingerprint = calculate_recipe_fingerprint(loras)
assert fingerprint == ""
def test_calculate_fingerprint_prefers_hash_over_version_id():
"""Test that hash is used even when modelVersionId is present."""
loras = [
{
"hash": "abc123",
"strength": 1.0,
"modelVersionId": 2639467,
"isDeleted": False,
"exclude": False
}
]
fingerprint = calculate_recipe_fingerprint(loras)
assert fingerprint == "abc123:1.0"
def test_calculate_fingerprint_without_hash_or_version_id():
"""Test that loras without hash or modelVersionId are skipped."""
loras = [
{
"hash": "",
"strength": 1.0,
"modelVersionId": 0,
"isDeleted": False,
"exclude": False
}
]
fingerprint = calculate_recipe_fingerprint(loras)
assert fingerprint == ""
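The assertions above pin down the fingerprint format: sorted `key:strength` pairs joined with `|`, where the hash is preferred and `modelVersionId` backs up a missing hash for any lora, deleted or not. A minimal sketch consistent with them (the name differs from the real `calculate_recipe_fingerprint`, which may normalize hashes and strengths further):

```python
def recipe_fingerprint(loras: list) -> str:
    """Build a sorted 'key:strength' fingerprint with modelVersionId fallback."""
    parts = []
    for lora in loras:
        if lora.get("exclude"):
            continue  # Excluded loras never contribute to the fingerprint
        key = (lora.get("hash") or "").strip()
        if not key:
            # Fall back to modelVersionId for any lora without a hash;
            # 0 or None means there is no usable identity, so skip it
            version_id = lora.get("modelVersionId")
            key = str(version_id) if version_id else ""
        if not key:
            continue
        parts.append(f"{key}:{lora.get('strength', 1.0)}")
    return "|".join(sorted(parts))
```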


@@ -6,7 +6,8 @@ export default defineConfig({
globals: true,
setupFiles: ['tests/frontend/setup.js'],
include: [
'tests/frontend/**/*.test.js'
'tests/frontend/**/*.test.js',
'tests/frontend/**/*.test.ts'
],
coverage: {
enabled: process.env.VITEST_COVERAGE === 'true',

File diff suppressed because it is too large.


@@ -12,9 +12,13 @@
"@comfyorg/comfyui-frontend-types": "^1.35.4",
"@types/node": "^22.10.1",
"@vitejs/plugin-vue": "^5.2.3",
"@vitest/coverage-v8": "^3.2.4",
"@vue/test-utils": "^2.4.6",
"jsdom": "^26.0.0",
"typescript": "^5.7.2",
"vite": "^6.3.5",
"vite-plugin-css-injected-by-js": "^3.5.2",
"vitest": "^3.0.0",
"vue-tsc": "^2.1.10"
},
"scripts": {
@@ -24,6 +28,9 @@
"typecheck": "vue-tsc --noEmit",
"clean": "rm -rf ../web/comfyui/vue-widgets",
"rebuild": "npm run clean && npm run build",
"prepare": "npm run build"
"prepare": "npm run build",
"test": "vitest run",
"test:watch": "vitest",
"test:coverage": "vitest run --coverage"
}
}


@@ -10,11 +10,28 @@
:use-custom-clip-range="state.useCustomClipRange.value"
:is-clip-strength-disabled="state.isClipStrengthDisabled.value"
:is-loading="state.isLoading.value"
:repeat-count="state.repeatCount.value"
:repeat-used="state.displayRepeatUsed.value"
:is-paused="state.isPaused.value"
:is-pause-disabled="hasQueuedPrompts"
:is-workflow-executing="state.isWorkflowExecuting.value"
:executing-repeat-step="state.executingRepeatStep.value"
@update:current-index="handleIndexUpdate"
@update:model-strength="state.modelStrength.value = $event"
@update:clip-strength="state.clipStrength.value = $event"
@update:use-custom-clip-range="handleUseCustomClipRangeChange"
@refresh="handleRefresh"
@update:repeat-count="handleRepeatCountChange"
@toggle-pause="handleTogglePause"
@reset-index="handleResetIndex"
@open-lora-selector="isModalOpen = true"
/>
<LoraListModal
:visible="isModalOpen"
:lora-list="cachedLoraList"
:current-index="state.currentIndex.value"
@close="isModalOpen = false"
@select="handleModalSelect"
/>
</div>
</template>
@@ -22,8 +39,9 @@
<script setup lang="ts">
import { onMounted, ref } from 'vue'
import LoraCyclerSettingsView from './lora-cycler/LoraCyclerSettingsView.vue'
import LoraListModal from './lora-cycler/LoraListModal.vue'
import { useLoraCyclerState } from '../composables/useLoraCyclerState'
- import type { ComponentWidget, CyclerConfig, LoraPoolConfig } from '../composables/types'
+ import type { ComponentWidget, CyclerConfig, LoraPoolConfig, LoraItem } from '../composables/types'
type CyclerWidget = ComponentWidget<CyclerConfig>
@@ -31,6 +49,7 @@ type CyclerWidget = ComponentWidget<CyclerConfig>
const props = defineProps<{
widget: CyclerWidget
node: { id: number; inputs?: any[]; widgets?: any[]; graph?: any }
api?: any // ComfyUI API for execution events
}>()
// State management
@@ -39,12 +58,50 @@ const state = useLoraCyclerState(props.widget)
// Symbol to track if the widget has been executed at least once
const HAS_EXECUTED = Symbol('HAS_EXECUTED')
// Execution context queue for batch queue synchronization
// In batch queue mode, all beforeQueued calls happen BEFORE any onExecuted calls,
// so we need to snapshot the state at queue time and replay it during execution
interface ExecutionContext {
isPaused: boolean
repeatUsed: number
repeatCount: number
shouldAdvanceDisplay: boolean
displayRepeatUsed: number // Value to show in UI after completion
}
const executionQueue: ExecutionContext[] = []
// Reactive flag to track if there are queued prompts (for disabling pause button)
const hasQueuedPrompts = ref(false)
// Track pending executions for batch queue support (deferred UI updates)
// Uses FIFO order since executions are processed in the order they were queued
interface PendingExecution {
repeatUsed: number
repeatCount: number
shouldAdvanceDisplay: boolean
displayRepeatUsed: number // Value to show in UI after completion
output?: {
nextIndex: number
nextLoraName: string
nextLoraFilename: string
currentLoraName: string
currentLoraFilename: string
}
}
const pendingExecutions: PendingExecution[] = []
// Track last known pool config hash
const lastPoolConfigHash = ref('')
// Track if component is mounted
const isMounted = ref(false)
// Modal state
const isModalOpen = ref(false)
// Cache for LoRA list (used by modal)
const cachedLoraList = ref<LoraItem[]>([])
// Get pool config from connected node
const getPoolConfig = (): LoraPoolConfig | null => {
// Check if getPoolConfig method exists on node (added by main.ts)
@@ -54,27 +111,47 @@ const getPoolConfig = (): LoraPoolConfig | null => {
return null
}
// Update display from LoRA list and index
const updateDisplayFromLoraList = (loraList: LoraItem[], index: number) => {
if (loraList.length > 0 && index > 0 && index <= loraList.length) {
const currentLora = loraList[index - 1]
if (currentLora) {
state.currentLoraName.value = currentLora.file_name
state.currentLoraFilename.value = currentLora.file_name
}
}
}
// Handle index update from user
const handleIndexUpdate = async (newIndex: number) => {
// Reset execution state when user manually changes index
// This ensures the next execution starts from the user-set index
;(props.widget as any)[HAS_EXECUTED] = false
state.executionIndex.value = null
state.nextIndex.value = null
// Clear execution queue since user is manually changing state
executionQueue.length = 0
hasQueuedPrompts.value = false
state.setIndex(newIndex)
// Refresh list to update current LoRA display
try {
const poolConfig = getPoolConfig()
const loraList = await state.fetchCyclerList(poolConfig)
- if (loraList.length > 0 && newIndex > 0 && newIndex <= loraList.length) {
- const currentLora = loraList[newIndex - 1]
- if (currentLora) {
- state.currentLoraName.value = currentLora.file_name
- state.currentLoraFilename.value = currentLora.file_name
- }
- }
cachedLoraList.value = loraList
updateDisplayFromLoraList(loraList, newIndex)
} catch (error) {
console.error('[LoraCyclerWidget] Error updating index:', error)
}
}
// Handle LoRA selection from modal
const handleModalSelect = (index: number) => {
handleIndexUpdate(index)
}
// Handle use custom clip range toggle
const handleUseCustomClipRangeChange = (newValue: boolean) => {
state.useCustomClipRange.value = newValue
@@ -84,13 +161,41 @@ const handleUseCustomClipRangeChange = (newValue: boolean) => {
}
}
- // Handle refresh button click
- const handleRefresh = async () => {
// Handle repeat count change
const handleRepeatCountChange = (newValue: number) => {
state.repeatCount.value = newValue
// Reset repeatUsed when changing repeat count
state.repeatUsed.value = 0
state.displayRepeatUsed.value = 0
}
// Handle pause toggle
const handleTogglePause = () => {
state.togglePause()
}
// Handle reset index
const handleResetIndex = async () => {
// Reset execution state
;(props.widget as any)[HAS_EXECUTED] = false
state.executionIndex.value = null
state.nextIndex.value = null
// Clear execution queue since user is resetting state
executionQueue.length = 0
hasQueuedPrompts.value = false
// Reset index and repeat state
state.resetIndex()
// Refresh list to update current LoRA display
try {
const poolConfig = getPoolConfig()
await state.refreshList(poolConfig)
const loraList = await state.fetchCyclerList(poolConfig)
cachedLoraList.value = loraList
updateDisplayFromLoraList(loraList, 1)
} catch (error) {
- console.error('[LoraCyclerWidget] Error refreshing:', error)
+ console.error('[LoraCyclerWidget] Error resetting index:', error)
}
}
@@ -106,6 +211,9 @@ const checkPoolConfigChanges = async () => {
lastPoolConfigHash.value = newHash
try {
await state.refreshList(poolConfig)
// Update cached list when pool config changes
const loraList = await state.fetchCyclerList(poolConfig)
cachedLoraList.value = loraList
} catch (error) {
console.error('[LoraCyclerWidget] Error on pool config change:', error)
}
@@ -129,17 +237,68 @@ onMounted(async () => {
// Add beforeQueued hook to handle index shifting for batch queue synchronization
// This ensures each execution uses a different LoRA in the cycle
// Now with support for repeat count and pause features
//
// IMPORTANT: In batch queue mode, ALL beforeQueued calls happen BEFORE any execution.
// We push an "execution context" snapshot to a queue so that onExecuted can use the
// correct state values that were captured at queue time (not the live state).
;(props.widget as any).beforeQueued = () => {
if (state.isPaused.value) {
// When paused: use current index, don't advance, don't count toward repeat limit
// Push context indicating this execution should NOT advance display
executionQueue.push({
isPaused: true,
repeatUsed: state.repeatUsed.value,
repeatCount: state.repeatCount.value,
shouldAdvanceDisplay: false,
displayRepeatUsed: state.displayRepeatUsed.value // Keep current display value when paused
})
hasQueuedPrompts.value = true
// CRITICAL: Clear execution_index when paused to force backend to use current_index
// This ensures paused executions use the same LoRA regardless of any
// execution_index set by previous non-paused beforeQueued calls
const pausedConfig = state.buildConfig()
pausedConfig.execution_index = null
props.widget.value = pausedConfig
return
}
if ((props.widget as any)[HAS_EXECUTED]) {
- // After first execution: shift indices (previous next_index becomes execution_index)
- state.generateNextIndex()
// After first execution: check repeat logic
if (state.repeatUsed.value < state.repeatCount.value) {
// Still repeating: increment repeatUsed, use same index
state.repeatUsed.value++
} else {
// Repeat complete: reset repeatUsed to 1, advance to next index
state.repeatUsed.value = 1
state.generateNextIndex()
}
} else {
- // First execution: just initialize next_index (execution_index stays null)
- // This means first execution uses current_index from widget
+ // First execution: initialize
state.repeatUsed.value = 1
state.initializeNextIndex()
;(props.widget as any)[HAS_EXECUTED] = true
}
// Determine if this execution should advance the display
// (only when repeat cycle is complete for this queued item)
const shouldAdvanceDisplay = state.repeatUsed.value >= state.repeatCount.value
// Calculate the display value to show after this execution completes
// When advancing to a new LoRA: reset to 0 (fresh start for new LoRA)
// When repeating same LoRA: show current repeat step
const displayRepeatUsed = shouldAdvanceDisplay ? 0 : state.repeatUsed.value
// Push execution context snapshot to queue
executionQueue.push({
isPaused: false,
repeatUsed: state.repeatUsed.value,
repeatCount: state.repeatCount.value,
shouldAdvanceDisplay,
displayRepeatUsed
})
hasQueuedPrompts.value = true
// Update the widget value so the indices are included in the serialized config
props.widget.value = state.buildConfig()
}
@@ -152,40 +311,71 @@ onMounted(async () => {
const poolConfig = getPoolConfig()
lastPoolConfigHash.value = state.hashPoolConfig(poolConfig)
await state.refreshList(poolConfig)
// Cache the initial LoRA list for modal
const loraList = await state.fetchCyclerList(poolConfig)
cachedLoraList.value = loraList
} catch (error) {
console.error('[LoraCyclerWidget] Error on initial load:', error)
}
// Override onExecuted to handle backend UI updates
// This defers the UI update until workflow completes (via API events)
const originalOnExecuted = (props.node as any).onExecuted?.bind(props.node)
;(props.node as any).onExecuted = function(output: any) {
console.log("[LoraCyclerWidget] Node executed with output:", output)
// Update state from backend response (values are wrapped in arrays)
- if (output?.next_index !== undefined) {
- const val = Array.isArray(output.next_index) ? output.next_index[0] : output.next_index
- state.currentIndex.value = val
- }
// Pop execution context from queue (FIFO order)
const context = executionQueue.shift()
hasQueuedPrompts.value = executionQueue.length > 0
// Determine if we should advance the display index
const shouldAdvanceDisplay = context
? context.shouldAdvanceDisplay
: (!state.isPaused.value && state.repeatUsed.value >= state.repeatCount.value)
// Extract output values
const nextIndex = output?.next_index !== undefined
? (Array.isArray(output.next_index) ? output.next_index[0] : output.next_index)
: state.currentIndex.value
const nextLoraName = output?.next_lora_name !== undefined
? (Array.isArray(output.next_lora_name) ? output.next_lora_name[0] : output.next_lora_name)
: ''
const nextLoraFilename = output?.next_lora_filename !== undefined
? (Array.isArray(output.next_lora_filename) ? output.next_lora_filename[0] : output.next_lora_filename)
: ''
const currentLoraName = output?.current_lora_name !== undefined
? (Array.isArray(output.current_lora_name) ? output.current_lora_name[0] : output.current_lora_name)
: ''
const currentLoraFilename = output?.current_lora_filename !== undefined
? (Array.isArray(output.current_lora_filename) ? output.current_lora_filename[0] : output.current_lora_filename)
: ''
// Update total count immediately (doesn't need to wait for workflow completion)
if (output?.total_count !== undefined) {
const val = Array.isArray(output.total_count) ? output.total_count[0] : output.total_count
state.totalCount.value = val
}
- if (output?.current_lora_name !== undefined) {
- const val = Array.isArray(output.current_lora_name) ? output.current_lora_name[0] : output.current_lora_name
- state.currentLoraName.value = val
- }
- if (output?.current_lora_filename !== undefined) {
- const val = Array.isArray(output.current_lora_filename) ? output.current_lora_filename[0] : output.current_lora_filename
- state.currentLoraFilename.value = val
- }
- if (output?.next_lora_name !== undefined) {
- const val = Array.isArray(output.next_lora_name) ? output.next_lora_name[0] : output.next_lora_name
- state.currentLoraName.value = val
- }
- if (output?.next_lora_filename !== undefined) {
- const val = Array.isArray(output.next_lora_filename) ? output.next_lora_filename[0] : output.next_lora_filename
- state.currentLoraFilename.value = val
// Store pending update (will be applied on workflow completion)
if (context) {
pendingExecutions.push({
repeatUsed: context.repeatUsed,
repeatCount: context.repeatCount,
shouldAdvanceDisplay,
displayRepeatUsed: context.displayRepeatUsed,
output: {
nextIndex,
nextLoraName,
nextLoraFilename,
currentLoraName,
currentLoraFilename
}
})
// Update visual feedback state (don't update displayRepeatUsed yet - wait for workflow completion)
state.executingRepeatStep.value = context.repeatUsed
state.isWorkflowExecuting.value = true
}
// Call original onExecuted if it exists
@@ -194,11 +384,69 @@ onMounted(async () => {
}
}
// Set up execution tracking via API events
if (props.api) {
// Handle workflow completion events using FIFO order
// Note: The 'executing' event doesn't contain prompt_id (only node ID as string),
// so we use FIFO order instead of prompt_id matching since executions are processed
// in the order they were queued
const handleExecutionComplete = () => {
// Process the first pending execution (FIFO order)
if (pendingExecutions.length === 0) {
return
}
const pending = pendingExecutions.shift()!
// Apply UI update now that workflow is complete
// Update repeat display (deferred like index updates)
state.displayRepeatUsed.value = pending.displayRepeatUsed
if (pending.output) {
if (pending.shouldAdvanceDisplay) {
state.currentIndex.value = pending.output.nextIndex
state.currentLoraName.value = pending.output.nextLoraName
state.currentLoraFilename.value = pending.output.nextLoraFilename
} else {
// When not advancing, show current LoRA info
state.currentLoraName.value = pending.output.currentLoraName
state.currentLoraFilename.value = pending.output.currentLoraFilename
}
}
// Reset visual feedback if no more pending
if (pendingExecutions.length === 0) {
state.isWorkflowExecuting.value = false
state.executingRepeatStep.value = 0
}
}
props.api.addEventListener('execution_success', handleExecutionComplete)
props.api.addEventListener('execution_error', handleExecutionComplete)
props.api.addEventListener('execution_interrupted', handleExecutionComplete)
// Store cleanup function for API listeners
const apiCleanup = () => {
props.api.removeEventListener('execution_success', handleExecutionComplete)
props.api.removeEventListener('execution_error', handleExecutionComplete)
props.api.removeEventListener('execution_interrupted', handleExecutionComplete)
}
// Extend existing cleanup
const existingCleanup = (props.widget as any).onRemoveCleanup
;(props.widget as any).onRemoveCleanup = () => {
existingCleanup?.()
apiCleanup()
}
}
// Watch for connection changes by polling (since ComfyUI doesn't provide connection events)
const checkInterval = setInterval(checkPoolConfigChanges, 1000)
// Cleanup on unmount (handled by Vue's effect scope)
const existingCleanupForInterval = (props.widget as any).onRemoveCleanup
;(props.widget as any).onRemoveCleanup = () => {
existingCleanupForInterval?.()
clearInterval(checkInterval)
}
})

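The one-second `setInterval` at the end of `onMounted` is polling-based change detection: since ComfyUI provides no connection events, the widget hashes the upstream pool config each tick and refreshes only when the hash changes. A self-contained sketch of the hash-and-compare idea, with hypothetical names (`PoolConfig`, `makeChangeDetector`) and `JSON.stringify` standing in for the real `state.hashPoolConfig`:

```typescript
type PoolConfig = { loras: string[] } | null

// Cheap stand-in for a real hash function: serialize the config.
function hashPoolConfig(config: PoolConfig): string {
  return config ? JSON.stringify(config) : ''
}

// Returns a poll function; onChange fires only when the hash differs
// from the last observed one, so repeated polls of an unchanged
// config are no-ops.
function makeChangeDetector(onChange: (c: PoolConfig) => void) {
  let lastHash = ''
  return (config: PoolConfig) => {
    const h = hashPoolConfig(config)
    if (h !== lastHash) {
      lastHash = h
      onChange(config)
    }
  }
}
```

In the widget this would be driven by something like `setInterval(() => check(getPoolConfig()), 1000)`, with the interval cleared in the `onRemoveCleanup` chain as the diff above does.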
View File

@@ -6,57 +6,111 @@
<!-- Progress Display -->
<div class="setting-section progress-section">
<div class="progress-display">
<div class="progress-info">
<span class="progress-label">Next LoRA:</span>
<span class="progress-name" :title="currentLoraFilename">{{ currentLoraName || 'None' }}</span>
<div class="progress-display" :class="{ executing: isWorkflowExecuting }">
<div
class="progress-info"
:class="{ disabled: isPauseDisabled }"
@click="handleOpenSelector"
>
<span class="progress-label">{{ isWorkflowExecuting ? 'Using LoRA:' : 'Next LoRA:' }}</span>
<span class="progress-name clickable" :class="{ disabled: isPauseDisabled }" :title="currentLoraFilename">
{{ currentLoraName || 'None' }}
<svg class="selector-icon" viewBox="0 0 24 24" fill="currentColor">
<path d="M7 10l5 5 5-5z"/>
</svg>
</span>
</div>
<div class="progress-counter">
<span class="progress-index">{{ currentIndex }}</span>
<span class="progress-separator">/</span>
<span class="progress-total">{{ totalCount }}</span>
- <button
- class="refresh-button"
- :disabled="isLoading"
- @click="$emit('refresh')"
- title="Refresh list"
- >
- <svg
- class="refresh-icon"
- :class="{ spinning: isLoading }"
- viewBox="0 0 24 24"
- fill="none"
- stroke="currentColor"
- stroke-width="2"
- stroke-linecap="round"
- stroke-linejoin="round"
- >
- <path d="M21 12a9 9 0 1 1-6.219-8.56"/>
- <path d="M21 3v5h-5"/>
- </svg>
- </button>
<!-- Repeat progress indicator (only shown when repeatCount > 1) -->
<div v-if="repeatCount > 1" class="repeat-progress">
<div class="repeat-progress-track">
<div
class="repeat-progress-fill"
:style="{ width: `${(repeatUsed / repeatCount) * 100}%` }"
:class="{ 'is-complete': repeatUsed >= repeatCount }"
></div>
</div>
<span class="repeat-progress-text">{{ repeatUsed }}/{{ repeatCount }}</span>
</div>
</div>
</div>
</div>
- <!-- Starting Index -->
+ <!-- Starting Index with Advanced Controls -->
<div class="setting-section">
<label class="setting-label">Starting Index</label>
<div class="index-input-container">
<input
type="number"
class="index-input"
:min="1"
:max="totalCount || 1"
:value="currentIndex"
:disabled="totalCount === 0"
@input="onIndexInput"
@blur="onIndexBlur"
@pointerdown.stop
@pointermove.stop
@pointerup.stop
/>
<span class="index-hint">1 - {{ totalCount || 1 }}</span>
<div class="index-controls-row">
<!-- Left: Index group -->
<div class="control-group">
<label class="control-group-label">Starting Index</label>
<div class="control-group-content">
<input
type="number"
class="index-input"
:min="1"
:max="totalCount || 1"
:value="currentIndex"
:disabled="totalCount === 0"
@input="onIndexInput"
@blur="onIndexBlur"
@pointerdown.stop
@pointermove.stop
@pointerup.stop
/>
<span class="index-hint">/ {{ totalCount || 1 }}</span>
</div>
</div>
<!-- Right: Repeat group -->
<div class="control-group">
<label class="control-group-label">Repeat</label>
<div class="control-group-content">
<input
type="number"
class="repeat-input"
min="1"
max="99"
:value="repeatCount"
@input="onRepeatInput"
@blur="onRepeatBlur"
@pointerdown.stop
@pointermove.stop
@pointerup.stop
title="Each LoRA will be used this many times before moving to the next"
/>
<span class="repeat-suffix">×</span>
</div>
</div>
<!-- Action buttons -->
<div class="action-buttons">
<button
class="control-btn"
:class="{ active: isPaused }"
:disabled="isPauseDisabled"
@click="$emit('toggle-pause')"
:title="isPauseDisabled ? 'Cannot pause while prompts are queued' : (isPaused ? 'Continue iteration' : 'Pause iteration')"
>
<svg v-if="isPaused" viewBox="0 0 24 24" fill="currentColor" class="control-icon">
<path d="M8 5v14l11-7z"/>
</svg>
<svg v-else viewBox="0 0 24 24" fill="currentColor" class="control-icon">
<path d="M6 4h4v16H6zm8 0h4v16h-4z"/>
</svg>
</button>
<button
class="control-btn"
@click="$emit('reset-index')"
title="Reset to index 1"
>
<svg viewBox="0 0 24 24" fill="currentColor" class="control-icon">
<path d="M12 5V1L7 6l5 5V7c3.31 0 6 2.69 6 6s-2.69 6-6 6-6-2.69-6-6H4c0 4.42 3.58 8 8 8s8-3.58 8-8-3.58-8-8-8z"/>
</svg>
</button>
</div>
</div>
</div>
@@ -122,7 +176,12 @@ const props = defineProps<{
clipStrength: number
useCustomClipRange: boolean
isClipStrengthDisabled: boolean
isLoading: boolean
repeatCount: number
repeatUsed: number
isPaused: boolean
isPauseDisabled: boolean
isWorkflowExecuting: boolean
executingRepeatStep: number
}>()
const emit = defineEmits<{
@@ -130,11 +189,22 @@ const emit = defineEmits<{
'update:modelStrength': [value: number]
'update:clipStrength': [value: number]
'update:useCustomClipRange': [value: boolean]
- 'refresh': []
'update:repeatCount': [value: number]
'toggle-pause': []
'reset-index': []
'open-lora-selector': []
}>()
// Temporary value for input while typing
const tempIndex = ref<string>('')
const tempRepeat = ref<string>('')
const handleOpenSelector = () => {
if (props.isPauseDisabled) {
return
}
emit('open-lora-selector')
}
const onIndexInput = (event: Event) => {
const input = event.target as HTMLInputElement
@@ -154,6 +224,25 @@ const onIndexBlur = (event: Event) => {
}
tempIndex.value = ''
}
const onRepeatInput = (event: Event) => {
const input = event.target as HTMLInputElement
tempRepeat.value = input.value
}
const onRepeatBlur = (event: Event) => {
const input = event.target as HTMLInputElement
const value = parseInt(input.value, 10)
if (!isNaN(value)) {
const clampedValue = Math.max(1, Math.min(value, 99))
emit('update:repeatCount', clampedValue)
input.value = clampedValue.toString()
} else {
input.value = props.repeatCount.toString()
}
tempRepeat.value = ''
}
</script>
<style scoped>
@@ -203,6 +292,17 @@ const onIndexBlur = (event: Event) => {
display: flex;
justify-content: space-between;
align-items: center;
transition: border-color 0.3s ease;
}
.progress-display.executing {
border-color: rgba(66, 153, 225, 0.5);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { border-color: rgba(66, 153, 225, 0.3); }
50% { border-color: rgba(66, 153, 225, 0.7); }
}
.progress-info {
@@ -230,6 +330,42 @@ const onIndexBlur = (event: Event) => {
white-space: nowrap;
}
.progress-name.clickable {
cursor: pointer;
padding: 2px 6px;
margin: -2px -6px;
border-radius: 4px;
transition: all 0.2s;
display: inline-flex;
align-items: center;
gap: 4px;
}
.progress-name.clickable:hover:not(.disabled) {
background: rgba(66, 153, 225, 0.2);
color: rgba(191, 219, 254, 1);
}
.progress-name.clickable.disabled {
cursor: not-allowed;
opacity: 0.5;
}
.progress-info.disabled {
cursor: not-allowed;
}
.selector-icon {
width: 16px;
height: 16px;
opacity: 0.5;
flex-shrink: 0;
}
.progress-name.clickable:hover .selector-icon {
opacity: 0.8;
}
.progress-counter {
display: flex;
align-items: center;
@@ -243,6 +379,9 @@ const onIndexBlur = (event: Event) => {
font-weight: 600;
color: rgba(66, 153, 225, 1);
font-family: 'SF Mono', 'Roboto Mono', monospace;
min-width: 4ch;
text-align: right;
font-variant-numeric: tabular-nums;
}
.progress-separator {
@@ -256,69 +395,92 @@ const onIndexBlur = (event: Event) => {
font-weight: 500;
color: rgba(226, 232, 240, 0.6);
font-family: 'SF Mono', 'Roboto Mono', monospace;
min-width: 4ch;
text-align: left;
font-variant-numeric: tabular-nums;
}
- .refresh-button {
/* Repeat Progress */
.repeat-progress {
display: flex;
align-items: center;
justify-content: center;
width: 24px;
height: 24px;
gap: 6px;
margin-left: 8px;
padding: 0;
background: transparent;
border: 1px solid rgba(255, 255, 255, 0.1);
padding: 2px 6px;
background: rgba(26, 32, 44, 0.6);
border: 1px solid rgba(226, 232, 240, 0.1);
border-radius: 4px;
color: rgba(226, 232, 240, 0.6);
cursor: pointer;
transition: all 0.2s;
}
- .refresh-button:hover:not(:disabled) {
- background: rgba(66, 153, 225, 0.2);
- border-color: rgba(66, 153, 225, 0.4);
- color: rgba(191, 219, 254, 1);
.repeat-progress-track {
width: 32px;
height: 4px;
background: rgba(226, 232, 240, 0.15);
border-radius: 2px;
overflow: hidden;
}
- .refresh-button:disabled {
- opacity: 0.4;
- cursor: not-allowed;
.repeat-progress-fill {
height: 100%;
background: linear-gradient(90deg, #f59e0b, #fbbf24);
border-radius: 2px;
transition: width 0.3s ease;
}
- .refresh-icon {
- width: 14px;
- height: 14px;
.repeat-progress-fill.is-complete {
background: linear-gradient(90deg, #10b981, #34d399);
}
- .refresh-icon.spinning {
- animation: spin 1s linear infinite;
.repeat-progress-text {
font-size: 10px;
font-family: 'SF Mono', 'Roboto Mono', monospace;
color: rgba(253, 230, 138, 0.9);
min-width: 3ch;
font-variant-numeric: tabular-nums;
}
- @keyframes spin {
- from {
- transform: rotate(0deg);
- }
- to {
- transform: rotate(360deg);
- }
- }
- /* Index Input */
- .index-input-container {
+ /* Index Controls Row - Grouped Layout */
+ .index-controls-row {
display: flex;
- align-items: center;
- gap: 8px;
+ align-items: flex-end;
+ gap: 16px;
}
/* Control Group */
.control-group {
display: flex;
flex-direction: column;
gap: 6px;
}
.control-group-label {
font-size: 11px;
font-weight: 500;
color: rgba(226, 232, 240, 0.5);
text-transform: uppercase;
letter-spacing: 0.03em;
line-height: 1;
}
.control-group-content {
display: flex;
align-items: baseline;
gap: 4px;
height: 32px;
}
.index-input {
- width: 80px;
- padding: 6px 10px;
+ width: 50px;
+ height: 32px;
+ padding: 0 8px;
background: rgba(26, 32, 44, 0.9);
border: 1px solid rgba(226, 232, 240, 0.2);
border-radius: 6px;
color: #e4e4e7;
font-size: 13px;
font-family: 'SF Mono', 'Roboto Mono', monospace;
line-height: 32px;
box-sizing: border-box;
}
.index-input:focus {
@@ -332,8 +494,89 @@ const onIndexBlur = (event: Event) => {
}
.index-hint {
- font-size: 11px;
+ font-size: 12px;
color: rgba(226, 232, 240, 0.4);
font-variant-numeric: tabular-nums;
line-height: 32px;
}
/* Repeat Controls */
.repeat-input {
width: 50px;
height: 32px;
padding: 0 6px;
background: rgba(26, 32, 44, 0.9);
border: 1px solid rgba(226, 232, 240, 0.2);
border-radius: 6px;
color: #e4e4e7;
font-size: 13px;
font-family: 'SF Mono', 'Roboto Mono', monospace;
text-align: center;
line-height: 32px;
box-sizing: border-box;
}
.repeat-input:focus {
outline: none;
border-color: rgba(66, 153, 225, 0.6);
}
.repeat-suffix {
font-size: 13px;
color: rgba(226, 232, 240, 0.4);
font-weight: 500;
line-height: 32px;
}
/* Action Buttons */
.action-buttons {
display: flex;
align-items: center;
gap: 6px;
margin-left: auto;
}
/* Control Buttons */
.control-btn {
display: flex;
align-items: center;
justify-content: center;
width: 24px;
height: 24px;
padding: 0;
background: transparent;
border: 1px solid rgba(255, 255, 255, 0.1);
border-radius: 4px;
color: rgba(226, 232, 240, 0.6);
cursor: pointer;
transition: all 0.2s;
}
.control-btn:hover:not(:disabled) {
background: rgba(66, 153, 225, 0.2);
border-color: rgba(66, 153, 225, 0.4);
color: rgba(191, 219, 254, 1);
}
.control-btn:disabled {
opacity: 0.4;
cursor: not-allowed;
}
.control-btn.active {
background: rgba(245, 158, 11, 0.2);
border-color: rgba(245, 158, 11, 0.5);
color: rgba(253, 230, 138, 1);
}
.control-btn.active:hover {
background: rgba(245, 158, 11, 0.3);
border-color: rgba(245, 158, 11, 0.6);
}
.control-icon {
width: 14px;
height: 14px;
}
/* Slider Container */

Some files were not shown because too many files have changed in this diff