Mirror of https://github.com/willmiao/ComfyUI-Lora-Manager.git (synced 2026-03-22 05:32:12 -03:00)

Compare commits: 1 commit (0a340d397c)
@@ -1,201 +0,0 @@
---
name: lora-manager-e2e
description: End-to-end testing and validation for LoRA Manager features. Use when performing automated E2E validation of LoRA Manager standalone mode, including starting/restarting the server, using Chrome DevTools MCP to interact with the web UI at http://127.0.0.1:8188/loras, and verifying frontend-to-backend functionality. Covers workflow validation, UI interaction testing, and integration testing between the standalone Python backend and the browser frontend.
---

# LoRA Manager E2E Testing

This skill provides workflows and utilities for end-to-end testing of LoRA Manager using Chrome DevTools MCP.

## Prerequisites

- LoRA Manager project cloned and dependencies installed (`pip install -r requirements.txt`)
- Chrome browser available for debugging
- Chrome DevTools MCP connected

## Quick Start Workflow

### 1. Start LoRA Manager Standalone

```bash
# Use the provided script to start the server
python .agents/skills/lora-manager-e2e/scripts/start_server.py --port 8188
```

Or manually:

```bash
cd /home/miao/workspace/ComfyUI/custom_nodes/ComfyUI-Lora-Manager
python standalone.py --port 8188
```

Wait for the server-ready message before proceeding.
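If you want to block until the server is actually accepting connections rather than watching the log, a minimal sketch is to poll the port, which is the same idea `scripts/wait_for_server.py` implements:

```python
# Minimal readiness poll (same approach as scripts/wait_for_server.py)
import socket
import time

def wait_ready(port: int = 8188, timeout: float = 30.0) -> bool:
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection(("127.0.0.1", port), timeout=0.5):
                return True  # port is accepting connections
        except OSError:
            time.sleep(0.5)  # not up yet, retry shortly
    return False
```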
### 2. Open Chrome Debug Mode

```bash
# Chrome with remote debugging on port 9222
google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-lora-manager http://127.0.0.1:8188/loras
```

### 3. Connect Chrome DevTools MCP

Ensure the MCP server is connected to Chrome at `http://localhost:9222`.
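A quick sanity check before wiring up the MCP connection is to confirm the debug endpoint answers. This uses Chrome's standard `/json/version` remote-debugging endpoint and does not depend on any MCP tooling:

```python
# Verify Chrome's remote-debugging endpoint is reachable
import json
import urllib.request

with urllib.request.urlopen("http://localhost:9222/json/version", timeout=5) as resp:
    info = json.load(resp)
print(info.get("Browser"))  # e.g. "Chrome/140.x" when the debug port is live
```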
### 4. Navigate and Interact

Use Chrome DevTools MCP tools to:

- Take snapshots: `take_snapshot`
- Click elements: `click`
- Fill forms: `fill` or `fill_form`
- Evaluate scripts: `evaluate_script`
- Wait for elements: `wait_for`

## Common E2E Test Patterns

### Pattern: Full Page Load Verification

```python
# Navigate to LoRA list page
navigate_page(type="url", url="http://127.0.0.1:8188/loras")

# Wait for page to load
wait_for(text="LoRAs", timeout=10000)

# Take snapshot to verify UI state
snapshot = take_snapshot()
```

### Pattern: Restart Server for Configuration Changes

```bash
# Stop the current server (if running) and start with the new configuration
python .agents/skills/lora-manager-e2e/scripts/start_server.py --port 8188 --restart
```

```python
# Wait and refresh the browser
navigate_page(type="reload", ignoreCache=True)
wait_for(text="LoRAs", timeout=15000)
```

### Pattern: Verify Backend API via Frontend

```python
# Execute script in browser to call backend API
result = evaluate_script(function="""
async () => {
  const response = await fetch('/loras/api/list');
  const data = await response.json();
  return { count: data.length, firstItem: data[0]?.name };
}
""")
```

### Pattern: Form Submission Flow

```python
# Fill a form (e.g., search or filter)
fill_form(elements=[
    {"uid": "search-input", "value": "character"},
])

# Click submit button
click(uid="search-button")

# Wait for results
wait_for(text="Results", timeout=5000)

# Verify results via snapshot
snapshot = take_snapshot()
```

### Pattern: Modal Dialog Interaction

```python
# Open modal (e.g., add LoRA)
click(uid="add-lora-button")

# Wait for modal to appear
wait_for(text="Add LoRA", timeout=3000)

# Fill modal form
fill_form(elements=[
    {"uid": "lora-name", "value": "Test LoRA"},
    {"uid": "lora-path", "value": "/path/to/lora.safetensors"},
])

# Submit
click(uid="modal-submit-button")

# Wait for success message or close
wait_for(text="Success", timeout=5000)
```

## Available Scripts

### scripts/start_server.py

Starts or restarts the LoRA Manager standalone server.

```bash
python scripts/start_server.py [--port PORT] [--restart] [--wait] [--timeout SECONDS]
```

Options:

- `--port`: Server port (default: 8188)
- `--restart`: Kill any existing server before starting
- `--wait`: Wait for the server to be ready before exiting
- `--timeout`: Seconds to wait when `--wait` is given (default: 30)

### scripts/wait_for_server.py

Polls the server until it is ready or the timeout expires.

```bash
python scripts/wait_for_server.py [--port PORT] [--timeout SECONDS]
```

## Test Scenarios Reference

See [references/test-scenarios.md](references/test-scenarios.md) for detailed test scenarios, including:

- LoRA list display and filtering
- Model metadata editing
- Recipe creation and management
- Settings configuration
- Import/export functionality

## Network Request Verification

Use `list_network_requests` and `get_network_request` to verify API calls:

```python
# List recent XHR/fetch requests
requests = list_network_requests(resourceTypes=["xhr", "fetch"])

# Get details of a specific request
details = get_network_request(reqid=123)
```

## Console Message Monitoring

```python
# Check for errors or warnings
messages = list_console_messages(types=["error", "warn"])
```

## Performance Testing

```python
# Start performance trace
performance_start_trace(reload=True, autoStop=False)

# Perform actions...

# Stop and analyze
results = performance_stop_trace()
```

## Cleanup

Always ensure proper cleanup after tests:

1. Stop the standalone server (see the sketch below)
2. Close browser pages (keep at least one open)
3. Clear temporary data if needed
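A minimal sketch for step 1, mirroring the port-based kill logic that `scripts/start_server.py --restart` already uses (assumes `lsof` is available):

```python
# Stop whatever is listening on the test port with SIGTERM
import os
import signal
import subprocess

out = subprocess.run(["lsof", "-ti", ":8188"], capture_output=True, text=True)
for pid in out.stdout.split():
    os.kill(int(pid), signal.SIGTERM)
```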
@@ -1,324 +0,0 @@
# Chrome DevTools MCP Cheatsheet for LoRA Manager

Quick reference for common MCP commands used in LoRA Manager E2E testing.

## Navigation

```python
# Navigate to LoRA list page
navigate_page(type="url", url="http://127.0.0.1:8188/loras")

# Reload page with cache clear
navigate_page(type="reload", ignoreCache=True)

# Go back/forward
navigate_page(type="back")
navigate_page(type="forward")
```

## Waiting

```python
# Wait for text to appear
wait_for(text="LoRAs", timeout=10000)

# Wait for a specific element (via evaluate_script); resolves false on timeout
# so the call cannot hang indefinitely if the element never appears
evaluate_script(function="""
() => {
  return new Promise((resolve) => {
    const deadline = Date.now() + 10000;
    const check = () => {
      if (document.querySelector('.lora-card')) {
        resolve(true);
      } else if (Date.now() > deadline) {
        resolve(false);
      } else {
        setTimeout(check, 100);
      }
    };
    check();
  });
}
""")
```
## Taking Snapshots

```python
# Full page snapshot
snapshot = take_snapshot()

# Verbose snapshot (more details)
snapshot = take_snapshot(verbose=True)

# Save to file
take_snapshot(filePath="test-snapshots/page-load.json")
```

## Element Interaction

```python
# Click element
click(uid="element-uid-from-snapshot")

# Double click
click(uid="element-uid", dblClick=True)

# Fill input
fill(uid="search-input", value="test query")

# Fill multiple inputs
fill_form(elements=[
    {"uid": "input-1", "value": "value 1"},
    {"uid": "input-2", "value": "value 2"},
])

# Hover
hover(uid="lora-card-1")

# Upload file
upload_file(uid="file-input", filePath="/path/to/file.safetensors")
```

## Keyboard Input

```python
# Press key
press_key(key="Enter")
press_key(key="Escape")
press_key(key="Tab")

# Keyboard shortcuts
press_key(key="Control+A")  # Select all
press_key(key="Control+F")  # Find
```

## JavaScript Evaluation

```python
# Simple evaluation
result = evaluate_script(function="() => document.title")

# Async evaluation
result = evaluate_script(function="""
async () => {
  const response = await fetch('/loras/api/list');
  return await response.json();
}
""")

# Check element existence
exists = evaluate_script(function="""
() => document.querySelector('.lora-card') !== null
""")

# Get element count
count = evaluate_script(function="""
() => document.querySelectorAll('.lora-card').length
""")
```

## Network Monitoring

```python
# List all network requests
requests = list_network_requests()

# Filter by resource type
xhr_requests = list_network_requests(resourceTypes=["xhr", "fetch"])

# Get specific request details
details = get_network_request(reqid=123)

# Include preserved requests from previous navigations
all_requests = list_network_requests(includePreservedRequests=True)
```

## Console Monitoring

```python
# List all console messages
messages = list_console_messages()

# Filter by type
errors = list_console_messages(types=["error", "warn"])

# Include preserved messages
all_messages = list_console_messages(includePreservedMessages=True)

# Get specific message
details = get_console_message(msgid=1)
```

## Performance Testing

```python
# Start trace with page reload
performance_start_trace(reload=True, autoStop=False)

# Start trace without reload
performance_start_trace(reload=False, autoStop=True, filePath="trace.json.gz")

# Stop trace
results = performance_stop_trace()

# Stop and save
performance_stop_trace(filePath="trace-results.json.gz")

# Analyze specific insight
insight = performance_analyze_insight(
    insightSetId="results.insightSets[0].id",
    insightName="LCPBreakdown"
)
```

## Page Management

```python
# List open pages
pages = list_pages()

# Select a page
select_page(pageId=0, bringToFront=True)

# Create new page
new_page(url="http://127.0.0.1:8188/loras")

# Close page (keep at least one open!)
close_page(pageId=1)

# Resize page
resize_page(width=1920, height=1080)
```

## Screenshots

```python
# Full page screenshot
take_screenshot(fullPage=True)

# Viewport screenshot
take_screenshot()

# Element screenshot
take_screenshot(uid="lora-card-1")

# Save to file
take_screenshot(filePath="screenshots/page.png", format="png")

# JPEG with quality
take_screenshot(filePath="screenshots/page.jpg", format="jpeg", quality=90)
```

## Dialog Handling

```python
# Accept dialog
handle_dialog(action="accept")

# Accept with text input
handle_dialog(action="accept", promptText="user input")

# Dismiss dialog
handle_dialog(action="dismiss")
```

## Device Emulation

```python
# Mobile viewport
emulate(viewport={"width": 375, "height": 667, "isMobile": True, "hasTouch": True})

# Tablet viewport
emulate(viewport={"width": 768, "height": 1024, "isMobile": True, "hasTouch": True})

# Desktop viewport
emulate(viewport={"width": 1920, "height": 1080})

# Network throttling
emulate(networkConditions="Slow 3G")
emulate(networkConditions="Fast 4G")

# CPU throttling
emulate(cpuThrottlingRate=4)  # 4x slowdown

# Geolocation
emulate(geolocation={"latitude": 37.7749, "longitude": -122.4194})

# User agent
emulate(userAgent="Mozilla/5.0 (Custom)")

# Reset emulation
emulate(viewport=None, networkConditions="No emulation", userAgent=None)
```

## Drag and Drop

```python
# Drag element to another
drag(from_uid="draggable-item", to_uid="drop-zone")
```

## Common LoRA Manager Test Patterns

### Verify LoRA Cards Loaded

```python
navigate_page(type="url", url="http://127.0.0.1:8188/loras")
wait_for(text="LoRAs", timeout=10000)

# Check if cards loaded
result = evaluate_script(function="""
() => {
  const cards = document.querySelectorAll('.lora-card');
  return {
    count: cards.length,
    hasData: cards.length > 0
  };
}
""")
```

### Search and Verify Results

```python
fill(uid="search-input", value="character")
press_key(key="Enter")
wait_for(timeout=2000)  # Wait for debounce

# Check results
result = evaluate_script(function="""
() => {
  const cards = document.querySelectorAll('.lora-card');
  const names = Array.from(cards).map(c => c.dataset.name || c.textContent);
  return { count: cards.length, names };
}
""")
```

### Check API Response

```python
# Trigger API call
evaluate_script(function="""
() => window.loraApiCallPromise = fetch('/loras/api/list').then(r => r.json())
""")

# Wait and get result
import time
time.sleep(1)

result = evaluate_script(function="""
async () => await window.loraApiCallPromise
""")
```

### Monitor Console for Errors

```python
# Before test: clear console (navigate reloads)
navigate_page(type="reload")

# ... perform actions ...

# Check for errors
errors = list_console_messages(types=["error"])
assert len(errors) == 0, f"Console errors: {errors}"
```
@@ -1,272 +0,0 @@
# LoRA Manager E2E Test Scenarios

This document provides detailed test scenarios for end-to-end validation of LoRA Manager features.

## Table of Contents

1. [LoRA List Page](#lora-list-page)
2. [Model Details](#model-details)
3. [Recipes](#recipes)
4. [Settings](#settings)
5. [Import/Export](#importexport)

---

## LoRA List Page

### Scenario: Page Load and Display

**Objective**: Verify the LoRA list page loads correctly and displays models.

**Steps**:
1. Navigate to `http://127.0.0.1:8188/loras`
2. Wait for page title "LoRAs" to appear
3. Take snapshot to verify:
   - Header with "LoRAs" title is visible
   - Search/filter controls are present
   - Grid/list view toggle exists
   - LoRA cards are displayed (if models exist)
   - Pagination controls (if applicable)

**Expected Result**: Page loads without errors, UI elements are present.

### Scenario: Search Functionality

**Objective**: Verify search filters LoRA models correctly.

**Steps**:
1. Ensure at least one LoRA exists with a known name (e.g., "test-character")
2. Navigate to the LoRA list page
3. Enter a search term in the search box: "test"
4. Press Enter or click the search button
5. Wait for results to update

**Expected Result**: Only LoRAs matching the search term are displayed.

**Verification Script**:
```python
# After search, verify filtered results
evaluate_script(function="""
() => {
  const cards = document.querySelectorAll('.lora-card');
  const names = Array.from(cards).map(c => c.dataset.name);
  return { count: cards.length, names };
}
""")
```

### Scenario: Filter by Tags

**Objective**: Verify tag filtering works correctly.

**Steps**:
1. Navigate to the LoRA list page
2. Click on a tag (e.g., "character", "style")
3. Wait for filtered results

**Expected Result**: Only LoRAs with the selected tag are displayed.

### Scenario: View Mode Toggle

**Objective**: Verify the grid/list view toggle works.

**Steps**:
1. Navigate to the LoRA list page
2. Click the list view button
3. Verify list layout
4. Click the grid view button
5. Verify grid layout

**Expected Result**: View mode changes correctly, layout updates.

---

## Model Details

### Scenario: Open Model Details

**Objective**: Verify clicking a LoRA opens its details.

**Steps**:
1. Navigate to the LoRA list page
2. Click on a LoRA card
3. Wait for the details panel/modal to open

**Expected Result**: Details panel shows:
- Model name
- Preview image
- Metadata (trigger words, tags, etc.)
- Action buttons (edit, delete, etc.)

### Scenario: Edit Model Metadata

**Objective**: Verify metadata editing works end-to-end.

**Steps**:
1. Open a LoRA's details
2. Click the "Edit" button
3. Modify the trigger words field
4. Add/remove tags
5. Save changes
6. Refresh the page
7. Reopen the same LoRA

**Expected Result**: Changes persist after refresh.
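A minimal MCP sketch of the persistence check in steps 6-7 (the `uid` value and the trigger-words selector are placeholders; take real ones from a fresh `take_snapshot()` of your UI):

```python
# Reload without cache and reopen the same card
navigate_page(type="reload", ignoreCache=True)
wait_for(text="LoRAs", timeout=10000)
click(uid="lora-card-1")  # hypothetical uid of the edited card

# Read the trigger words back out of the details panel
result = evaluate_script(function="""
() => document.querySelector('.trigger-words')?.textContent ?? null
""")
# Compare result against the value saved in step 5
```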
### Scenario: Delete Model

**Objective**: Verify model deletion works.

**Steps**:
1. Open a LoRA's details
2. Click "Delete" button
3. Confirm deletion in dialog
4. Wait for removal

**Expected Result**: Model removed from list, success message shown.
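If the confirmation is a native browser dialog, `handle_dialog` covers step 3; if it is an in-page modal, click its confirm button instead. A hedged sketch (uids are placeholders resolved from a snapshot):

```python
click(uid="delete-button")      # hypothetical uid
handle_dialog(action="accept")  # only needed for a native confirm() dialog
wait_for(text="deleted", timeout=5000)

# Confirm the card is gone from the list
count = evaluate_script(function="""
() => document.querySelectorAll('.lora-card').length
""")
```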
---

## Recipes

### Scenario: Recipe List Display

**Objective**: Verify recipes page loads and displays recipes.

**Steps**:
1. Navigate to `http://127.0.0.1:8188/recipes`
2. Wait for "Recipes" title
3. Take snapshot

**Expected Result**: Recipe list displayed with cards/items.

### Scenario: Create New Recipe

**Objective**: Verify recipe creation workflow.

**Steps**:
1. Navigate to recipes page
2. Click "New Recipe" button
3. Fill recipe form:
   - Name: "Test Recipe"
   - Description: "E2E test recipe"
   - Add LoRA models
4. Save recipe
5. Verify recipe appears in list

**Expected Result**: New recipe created and displayed.
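A hedged MCP sketch of this flow (all `uid` values are placeholders; resolve real ones from `take_snapshot()` after each step):

```python
navigate_page(type="url", url="http://127.0.0.1:8188/recipes")
wait_for(text="Recipes", timeout=10000)

click(uid="new-recipe-button")  # hypothetical uid
fill_form(elements=[
    {"uid": "recipe-name", "value": "Test Recipe"},             # hypothetical uid
    {"uid": "recipe-description", "value": "E2E test recipe"},  # hypothetical uid
])
click(uid="save-recipe-button")             # hypothetical uid
wait_for(text="Test Recipe", timeout=5000)  # new recipe shows up in the list
```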
### Scenario: Apply Recipe

**Objective**: Verify applying a recipe to ComfyUI.

**Steps**:
1. Open a recipe
2. Click "Apply" or "Load in ComfyUI"
3. Verify action completes

**Expected Result**: Recipe applied successfully.

---

## Settings

### Scenario: Settings Page Load

**Objective**: Verify settings page displays correctly.

**Steps**:
1. Navigate to `http://127.0.0.1:8188/settings`
2. Wait for "Settings" title
3. Take snapshot

**Expected Result**: Settings form with various options displayed.

### Scenario: Change Setting and Restart

**Objective**: Verify settings persist after restart.

**Steps**:
1. Navigate to settings page
2. Change a setting (e.g., default view mode)
3. Save settings
4. Restart server: `python scripts/start_server.py --restart --wait`
5. Refresh browser page
6. Navigate to settings

**Expected Result**: Changed setting value persists.

---

## Import/Export

### Scenario: Export Models List

**Objective**: Verify export functionality.

**Steps**:
1. Navigate to LoRA list
2. Click "Export" button
3. Select format (JSON/CSV)
4. Download file

**Expected Result**: File downloaded with correct data.
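One way to confirm the export actually fired, without inspecting the filesystem, is to look for the export call in the network log. The URL filter below is an assumption; match on whatever endpoint your export button calls:

```python
requests = list_network_requests(resourceTypes=["xhr", "fetch"])
export_requests = [r for r in requests if "export" in r.get("url", "")]
assert len(export_requests) > 0, "Export request was not issued"
```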
### Scenario: Import Models

**Objective**: Verify import functionality.

**Steps**:
1. Prepare import file
2. Navigate to import page
3. Upload file
4. Verify import results

**Expected Result**: Models imported successfully, confirmation shown.
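A hedged sketch of the upload step using the MCP `upload_file` tool from the cheatsheet (the `uid`, file path, and confirmation text are placeholders):

```python
upload_file(uid="import-file-input", filePath="/tmp/lora-export.json")  # hypothetical uid/path
wait_for(text="imported", timeout=10000)  # adjust to the confirmation text your UI shows
```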
---

## API Integration Tests

### Scenario: Verify API Endpoints

**Objective**: Verify backend API responds correctly.

**Test via browser console**:
```javascript
// List LoRAs
fetch('/loras/api/list').then(r => r.json()).then(console.log)

// Get LoRA details
fetch('/loras/api/detail/<id>').then(r => r.json()).then(console.log)

// Search LoRAs
fetch('/loras/api/search?q=test').then(r => r.json()).then(console.log)
```

**Expected Result**: APIs return valid JSON with expected structure.

---

## Console Error Monitoring

During all tests, monitor the browser console for errors:

```python
# Check for JavaScript errors
messages = list_console_messages(types=["error"])
assert len(messages) == 0, f"Console errors found: {messages}"
```

## Network Request Verification

Verify key API calls are made:

```python
# List XHR requests
requests = list_network_requests(resourceTypes=["xhr", "fetch"])

# Look for specific endpoints
lora_list_requests = [r for r in requests if "/api/list" in r.get("url", "")]
assert len(lora_list_requests) > 0, "LoRA list API not called"
```
@@ -1,193 +0,0 @@
#!/usr/bin/env python3
"""
Example E2E test demonstrating LoRa Manager testing workflow.

This script shows how to:
1. Start the standalone server
2. Use Chrome DevTools MCP to interact with the UI
3. Verify functionality end-to-end

Note: This is a template. Actual execution requires Chrome DevTools MCP.
"""

import subprocess
import sys


def run_test():
    """Run example E2E test flow."""

    print("=" * 60)
    print("LoRa Manager E2E Test Example")
    print("=" * 60)

    # Step 1: Start server (assumes this script is run from the scripts/ directory)
    print("\n[1/5] Starting LoRa Manager standalone server...")
    result = subprocess.run(
        [sys.executable, "start_server.py", "--port", "8188", "--wait", "--timeout", "30"],
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        print(f"Failed to start server: {result.stderr}")
        return 1
    print("Server ready!")

    # Step 2: Open Chrome (manual step - show command)
    print("\n[2/5] Open Chrome with debug mode:")
    print("google-chrome --remote-debugging-port=9222 --user-data-dir=/tmp/chrome-lora-manager http://127.0.0.1:8188/loras")
    print("(In actual test, this would be automated via MCP)")

    # Step 3: Navigate and verify page load
    print("\n[3/5] Page Load Verification:")
    print("""
    MCP Commands to execute:
    1. navigate_page(type="url", url="http://127.0.0.1:8188/loras")
    2. wait_for(text="LoRAs", timeout=10000)
    3. snapshot = take_snapshot()
    """)

    # Step 4: Test search functionality
    # (single-quoted block so the embedded triple double-quotes stay literal)
    print("\n[4/5] Search Functionality Test:")
    print('''
    MCP Commands to execute:
    1. fill(uid="search-input", value="test")
    2. press_key(key="Enter")
    3. wait_for(text="Results", timeout=5000)
    4. result = evaluate_script(function="""
       () => {
         const cards = document.querySelectorAll('.lora-card');
         return { count: cards.length };
       }
       """)
    ''')

    # Step 5: Verify API
    print("\n[5/5] API Verification:")
    print('''
    MCP Commands to execute:
    1. api_result = evaluate_script(function="""
       async () => {
         const response = await fetch('/loras/api/list');
         const data = await response.json();
         return { count: data.length, status: response.status };
       }
       """)
    2. Verify api_result['status'] == 200
    ''')

    print("\n" + "=" * 60)
    print("Test flow completed!")
    print("=" * 60)

    return 0


def example_restart_flow():
    """Example: Testing configuration change that requires restart."""

    print("\n" + "=" * 60)
    print("Example: Server Restart Flow")
    print("=" * 60)

    print("""
    Scenario: Change setting and verify after restart

    Steps:
    1. Navigate to settings page
       - navigate_page(type="url", url="http://127.0.0.1:8188/settings")

    2. Change a setting (e.g., theme)
       - fill(uid="theme-select", value="dark")
       - click(uid="save-settings-button")

    3. Restart server
       - subprocess.run([python, "start_server.py", "--restart", "--wait"])

    4. Refresh browser
       - navigate_page(type="reload", ignoreCache=True)
       - wait_for(text="LoRAs", timeout=15000)

    5. Verify setting persisted
       - navigate_page(type="url", url="http://127.0.0.1:8188/settings")
       - theme = evaluate_script(function="() => document.querySelector('#theme-select').value")
       - assert theme == "dark"
    """)


def example_modal_interaction():
    """Example: Testing modal dialog interaction."""

    print("\n" + "=" * 60)
    print("Example: Modal Dialog Interaction")
    print("=" * 60)

    print("""
    Scenario: Add new LoRA via modal

    Steps:
    1. Open modal
       - click(uid="add-lora-button")
       - wait_for(text="Add LoRA", timeout=3000)

    2. Fill form
       - fill_form(elements=[
             {"uid": "lora-name", "value": "Test Character"},
             {"uid": "lora-path", "value": "/models/test.safetensors"},
         ])

    3. Submit
       - click(uid="modal-submit-button")

    4. Verify success
       - wait_for(text="Successfully added", timeout=5000)
       - snapshot = take_snapshot()
    """)


def example_network_monitoring():
    """Example: Network request monitoring."""

    print("\n" + "=" * 60)
    print("Example: Network Request Monitoring")
    print("=" * 60)

    print("""
    Scenario: Verify API calls during user interaction

    Steps:
    1. Clear network log (implicit on navigation)
       - navigate_page(type="url", url="http://127.0.0.1:8188/loras")

    2. Perform action that triggers API call
       - fill(uid="search-input", value="character")
       - press_key(key="Enter")

    3. List network requests
       - requests = list_network_requests(resourceTypes=["xhr", "fetch"])

    4. Find search API call
       - search_requests = [r for r in requests if "/api/search" in r.get("url", "")]
       - assert len(search_requests) > 0, "Search API was not called"

    5. Get request details
       - if search_requests:
             details = get_network_request(reqid=search_requests[0]["reqid"])
       - Verify request method, response status, etc.
    """)


if __name__ == "__main__":
    print("LoRa Manager E2E Test Examples\n")
    print("This script demonstrates E2E testing patterns.\n")
    print("Note: Actual execution requires Chrome DevTools MCP connection.\n")

    run_test()
    example_restart_flow()
    example_modal_interaction()
    example_network_monitoring()

    print("\n" + "=" * 60)
    print("All examples shown!")
    print("=" * 60)
@@ -1,169 +0,0 @@
#!/usr/bin/env python3
"""
Start or restart LoRa Manager standalone server for E2E testing.
"""

import argparse
import subprocess
import sys
import time
import socket
import signal
import os


def find_server_process(port: int) -> list[int]:
    """Find PIDs of processes listening on the given port."""
    try:
        result = subprocess.run(
            ["lsof", "-ti", f":{port}"],
            capture_output=True,
            text=True,
            check=False
        )
        if result.returncode == 0 and result.stdout.strip():
            return [int(pid) for pid in result.stdout.strip().split("\n") if pid]
    except FileNotFoundError:
        # lsof not available, try netstat
        try:
            result = subprocess.run(
                ["netstat", "-tlnp"],
                capture_output=True,
                text=True,
                check=False
            )
            pids = []
            for line in result.stdout.split("\n"):
                if f":{port}" in line:
                    parts = line.split()
                    for part in parts:
                        if "/" in part:
                            try:
                                pid = int(part.split("/")[0])
                                pids.append(pid)
                            except ValueError:
                                pass
            return pids
        except FileNotFoundError:
            pass
    return []


def kill_server(port: int) -> None:
    """Kill processes using the specified port."""
    pids = find_server_process(port)
    for pid in pids:
        try:
            os.kill(pid, signal.SIGTERM)
            print(f"Sent SIGTERM to process {pid}")
        except ProcessLookupError:
            pass

    # Wait for processes to terminate
    time.sleep(1)

    # Force kill if still running
    pids = find_server_process(port)
    for pid in pids:
        try:
            os.kill(pid, signal.SIGKILL)
            print(f"Sent SIGKILL to process {pid}")
        except ProcessLookupError:
            pass


def is_server_ready(port: int, timeout: float = 0.5) -> bool:
    """Check if server is accepting connections."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except (socket.timeout, ConnectionRefusedError, OSError):
        return False


def wait_for_server(port: int, timeout: int = 30) -> bool:
    """Wait for server to become ready."""
    start = time.time()
    while time.time() - start < timeout:
        if is_server_ready(port):
            return True
        time.sleep(0.5)
    return False


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Start LoRa Manager standalone server for E2E testing"
    )
    parser.add_argument(
        "--port",
        type=int,
        default=8188,
        help="Server port (default: 8188)"
    )
    parser.add_argument(
        "--restart",
        action="store_true",
        help="Kill existing server before starting"
    )
    parser.add_argument(
        "--wait",
        action="store_true",
        help="Wait for server to be ready before exiting"
    )
    parser.add_argument(
        "--timeout",
        type=int,
        default=30,
        help="Timeout for waiting (default: 30)"
    )

    args = parser.parse_args()

    # Get project root (parent of .agents directory)
    script_dir = os.path.dirname(os.path.abspath(__file__))
    skill_dir = os.path.dirname(script_dir)
    project_root = os.path.dirname(os.path.dirname(os.path.dirname(skill_dir)))

    # Restart if requested
    if args.restart:
        print(f"Killing existing server on port {args.port}...")
        kill_server(args.port)
        time.sleep(1)

    # Check if already running
    if is_server_ready(args.port):
        print(f"Server already running on port {args.port}")
        return 0

    # Start server
    print(f"Starting LoRa Manager standalone server on port {args.port}...")
    cmd = [sys.executable, "standalone.py", "--port", str(args.port)]

    # Start in background
    process = subprocess.Popen(
        cmd,
        cwd=project_root,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        start_new_session=True
    )

    print(f"Server process started with PID {process.pid}")

    # Wait for ready if requested
    if args.wait:
        print(f"Waiting for server to be ready (timeout: {args.timeout}s)...")
        if wait_for_server(args.port, args.timeout):
            print(f"Server ready at http://127.0.0.1:{args.port}/loras")
            return 0
        else:
            print("Timeout waiting for server")
            return 1

    print(f"Server starting at http://127.0.0.1:{args.port}/loras")
    return 0


if __name__ == "__main__":
    sys.exit(main())
@@ -1,61 +0,0 @@
#!/usr/bin/env python3
"""
Wait for LoRa Manager server to become ready.
"""

import argparse
import socket
import sys
import time


def is_server_ready(port: int, timeout: float = 0.5) -> bool:
    """Check if server is accepting connections."""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except (socket.timeout, ConnectionRefusedError, OSError):
        return False


def wait_for_server(port: int, timeout: int = 30) -> bool:
    """Wait for server to become ready."""
    start = time.time()
    while time.time() - start < timeout:
        if is_server_ready(port):
            return True
        time.sleep(0.5)
    return False


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Wait for LoRa Manager server to become ready"
    )
    parser.add_argument(
        "--port",
        type=int,
        default=8188,
        help="Server port (default: 8188)"
    )
    parser.add_argument(
        "--timeout",
        type=int,
        default=30,
        help="Timeout in seconds (default: 30)"
    )

    args = parser.parse_args()

    print(f"Waiting for server on port {args.port} (timeout: {args.timeout}s)...")

    if wait_for_server(args.port, args.timeout):
        print(f"Server ready at http://127.0.0.1:{args.port}/loras")
        return 0
    else:
        print(f"Timeout: Server not ready after {args.timeout}s")
        return 1


if __name__ == "__main__":
    sys.exit(main())
.gitignore (vendored): 3 lines changed
@@ -19,6 +19,3 @@ model_cache/
vue-widgets/node_modules/
vue-widgets/.vite/
vue-widgets/dist/

# Hypothesis test cache
.hypothesis/
AGENTS.md: 181 lines changed
@@ -25,127 +25,168 @@ pytest tests/test_recipes.py::test_function_name
|
||||
|
||||
# Run backend tests with coverage
|
||||
COVERAGE_FILE=coverage/backend/.coverage pytest \
|
||||
--cov=py --cov=standalone \
|
||||
--cov=py \
|
||||
--cov=standalone \
|
||||
--cov-report=term-missing \
|
||||
--cov-report=html:coverage/backend/html \
|
||||
--cov-report=xml:coverage/backend/coverage.xml
|
||||
--cov-report=xml:coverage/backend/coverage.xml \
|
||||
--cov-report=json:coverage/backend/coverage.json
|
||||
```
|
||||
|
||||
### Frontend Development (Standalone Web UI)
|
||||
### Frontend Development
|
||||
|
||||
```bash
|
||||
# Install frontend dependencies
|
||||
npm install
|
||||
npm test # Run all tests (JS + Vue)
|
||||
npm run test:js # Run JS tests only
|
||||
npm run test:watch # Watch mode
|
||||
npm run test:coverage # Generate coverage report
|
||||
```
|
||||
|
||||
### Vue Widget Development
|
||||
# Run frontend tests
|
||||
npm test
|
||||
|
||||
```bash
|
||||
cd vue-widgets
|
||||
npm install
|
||||
npm run dev # Build in watch mode
|
||||
npm run build # Build production bundle
|
||||
npm run typecheck # Run TypeScript type checking
|
||||
npm test # Run Vue widget tests
|
||||
npm run test:watch # Watch mode
|
||||
npm run test:coverage # Generate coverage report
|
||||
# Run frontend tests in watch mode
|
||||
npm run test:watch
|
||||
|
||||
# Run frontend tests with coverage
|
||||
npm run test:coverage
|
||||
```
|
||||
|
||||
## Python Code Style
|
||||
|
||||
### Imports & Formatting
|
||||
### Imports
|
||||
|
||||
- Use `from __future__ import annotations` for forward references
|
||||
- Group imports: standard library, third-party, local (blank line separated)
|
||||
- Absolute imports within `py/`: `from ..services import X`
|
||||
- PEP 8 with 4-space indentation, type hints required
|
||||
- Use `from __future__ import annotations` for forward references in type hints
|
||||
- Group imports: standard library, third-party, local (separated by blank lines)
|
||||
- Use absolute imports within `py/` package: `from ..services import X`
|
||||
- Mock ComfyUI dependencies in tests using `tests/conftest.py` patterns
|
||||
|
||||
### Formatting & Types
|
||||
|
||||
- PEP 8 with 4-space indentation
|
||||
- Type hints required for function signatures and class attributes
|
||||
- Use `TYPE_CHECKING` guard for type-checking-only imports
|
||||
- Prefer dataclasses for simple data containers
|
||||
- Use `Optional[T]` for nullable types, `Union[T, None]` only when necessary
|
||||
|
||||
### Naming Conventions
|
||||
|
||||
- Files: `snake_case.py`, Classes: `PascalCase`, Functions/vars: `snake_case`
|
||||
- Constants: `UPPER_SNAKE_CASE`, Private: `_protected`, `__mangled`
|
||||
- Files: `snake_case.py` (e.g., `model_scanner.py`, `lora_service.py`)
|
||||
- Classes: `PascalCase` (e.g., `ModelScanner`, `LoraService`)
|
||||
- Functions/variables: `snake_case` (e.g., `get_instance`, `model_type`)
|
||||
- Constants: `UPPER_SNAKE_CASE` (e.g., `VALID_LORA_TYPES`)
|
||||
- Private members: `_single_underscore` (protected), `__double_underscore` (name-mangled)
|
||||
|
||||
### Error Handling & Async
|
||||
### Error Handling
|
||||
|
||||
- Use `logging.getLogger(__name__)`, define custom exceptions in `py/services/errors.py`
|
||||
- `async def` for I/O, `@pytest.mark.asyncio` for async tests
|
||||
- Singleton with `asyncio.Lock`: see `ModelScanner.get_instance()`
|
||||
- Return `aiohttp.web.json_response` or `web.Response`
|
||||
- Use `logging.getLogger(__name__)` for module-level loggers
|
||||
- Define custom exceptions in `py/services/errors.py`
|
||||
- Use `asyncio.Lock` for thread-safe singleton patterns
|
||||
- Raise specific exceptions with descriptive messages
|
||||
- Log errors at appropriate levels (DEBUG, INFO, WARNING, ERROR, CRITICAL)
|
||||
|
||||
### Testing
|
||||
### Async Patterns
|
||||
|
||||
- `pytest` with `--import-mode=importlib`
|
||||
- Fixtures in `tests/conftest.py`, use `tmp_path_factory` for isolation
|
||||
- Mark tests needing real paths: `@pytest.mark.no_settings_dir_isolation`
|
||||
- Mock ComfyUI dependencies via conftest patterns
|
||||
- Use `async def` for I/O-bound operations
|
||||
- Mark async tests with `@pytest.mark.asyncio`
|
||||
- Use `async with` for context managers
|
||||
- Singleton pattern with class-level locks: see `ModelScanner.get_instance()`
|
||||
- Use `aiohttp.web.Response` for HTTP responses
|
||||
|
||||
## JavaScript/TypeScript Code Style
|
||||
### Testing Patterns
|
||||
|
||||
- Use `pytest` with `--import-mode=importlib`
|
||||
- Fixtures in `tests/conftest.py` handle ComfyUI mocking
|
||||
- Use `@pytest.mark.no_settings_dir_isolation` for tests needing real paths
|
||||
- Test files: `tests/test_*.py`
|
||||
- Use `tmp_path_factory` for temporary directory isolation
|
||||
|
||||
## JavaScript Code Style
|
||||
|
||||
### Imports & Modules
|
||||
|
||||
- ES modules: `import { app } from "../../scripts/app.js"` for ComfyUI
|
||||
- Vue: `import { ref, computed } from 'vue'`, type imports: `import type { Foo }`
|
||||
- Export named functions: `export function foo() {}`
|
||||
- ES modules with `import`/`export`
|
||||
- Use `import { app } from "../../scripts/app.js"` for ComfyUI integration
|
||||
- Export named functions/classes: `export function foo() {}`
|
||||
- Widget files use `*_widget.js` suffix
|
||||
|
||||
### Naming & Formatting
|
||||
|
||||
- camelCase for functions/vars/props, PascalCase for classes
|
||||
- Constants: `UPPER_SNAKE_CASE`, Files: `snake_case.js` or `kebab-case.js`
|
||||
- camelCase for functions, variables, object properties
|
||||
- PascalCase for classes/constructors
|
||||
- Constants: `UPPER_SNAKE_CASE` (e.g., `CONVERTED_TYPE`)
|
||||
- Files: `snake_case.js` or `kebab-case.js`
|
||||
- 2-space indentation preferred (follow existing file conventions)
|
||||
- Vue Single File Components: `<script setup lang="ts">` preferred
|
||||
|
||||
### Widget Development
|
||||
|
||||
- ComfyUI: `app.registerExtension()`, `node.addDOMWidget(name, type, element, options)`
|
||||
- Event handlers via `addEventListener` or widget callbacks
|
||||
- Shared utilities: `web/comfyui/utils.js`
|
||||
|
||||
### Vue Composables Pattern
|
||||
|
||||
- Use composition API: `useXxxState(widget)`, return reactive refs and methods
|
||||
- Guard restoration loops with flag: `let isRestoring = false`
|
||||
- Build config from state: `const buildConfig = (): Config => { ... }`
|
||||
- Use `app.registerExtension()` to register ComfyUI extensions
|
||||
- Use `node.addDOMWidget(name, type, element, options)` for custom widgets
|
||||
- Event handlers attached via `addEventListener` or widget callbacks
|
||||
- See `web/comfyui/utils.js` for shared utilities
|
||||
|
||||
## Architecture Patterns
|
||||
|
||||
### Service Layer
|
||||
|
||||
- `ServiceRegistry` singleton for DI, services use `get_instance()` classmethod
|
||||
- Use `ServiceRegistry` singleton for dependency injection
|
||||
- Services follow singleton pattern via `get_instance()` class method
|
||||
- Separate scanners (discovery) from services (business logic)
|
||||
- Handlers in `py/routes/handlers/` are pure functions with deps as params
|
||||
- Handlers in `py/routes/handlers/` implement route logic
|
||||
|
||||
### Model Types & Routes
|
||||
### Model Types
|
||||
|
||||
- `BaseModelService` base for LoRA, Checkpoint, Embedding
|
||||
- `ModelScanner` for file discovery, hash deduplication
|
||||
- `PersistentModelCache` (SQLite) for persistence
|
||||
- Route registrars: `ModelRouteRegistrar`, endpoints: `/loras/*`, `/checkpoints/*`, `/embeddings/*`
|
||||
- WebSocket via `WebSocketManager` for real-time updates
|
||||
- BaseModelService is abstract base for LoRA, Checkpoint, Embedding services
|
||||
- ModelScanner provides file discovery and hash-based deduplication
|
||||
- Persistent cache in SQLite via `PersistentModelCache`
|
||||
- Metadata sync from CivitAI/CivArchive via `MetadataSyncService`
|
||||
|
||||
### Routes & Handlers
|
||||
|
||||
- Route registrars organize endpoints by domain: `ModelRouteRegistrar`, etc.
|
||||
- Handlers are pure functions taking dependencies as parameters
|
||||
- Use `WebSocketManager` for real-time progress updates
|
||||
- Return `aiohttp.web.json_response` or `web.Response`
|
||||
|
||||
### Recipe System
|
||||
|
||||
- Base: `py/recipes/base.py`, Enrichment: `RecipeEnrichmentService`
|
||||
- Parsers: `py/recipes/parsers/`
|
||||
- Base metadata in `py/recipes/base.py`
|
||||
- Enrichment adds model metadata: `RecipeEnrichmentService`
|
||||
- Parsers for different formats in `py/recipes/parsers/`
|
||||
|
||||
## Important Notes
|
||||
|
||||
- ALWAYS use English for comments (per copilot-instructions.md)
|
||||
- Dual mode: ComfyUI plugin (folder_paths) vs standalone (settings.json)
|
||||
- Always use English for comments (per copilot-instructions.md)
|
||||
- Dual mode: ComfyUI plugin (uses folder_paths) vs standalone (reads settings.json)
|
||||
- Detection: `os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"`
|
||||
- Settings auto-saved in user directory or portable mode
|
||||
- WebSocket broadcasts for real-time updates (downloads, scans)
|
||||
- Symlink handling requires normalized paths
|
||||
- API endpoints follow `/loras/*`, `/checkpoints/*`, `/embeddings/*` patterns
|
||||
- Run `python scripts/sync_translation_keys.py` after UI string updates
|
||||
- Symlinks require normalized paths
|
||||
|
||||
## Frontend UI Architecture
|
||||
|
||||
### 1. Standalone Web UI
|
||||
This project has two distinct UI systems:
|
||||
|
||||
### 1. Standalone Lora Manager Web UI
|
||||
- Location: `./static/` and `./templates/`
|
||||
- Tech: Vanilla JS + CSS, served by standalone server
|
||||
- Tests via npm in root directory
|
||||
- Purpose: Full-featured web application for managing LoRA models
|
||||
- Tech stack: Vanilla JS + CSS, served by the standalone server
|
||||
- Development: Uses npm for frontend testing (`npm test`, `npm run test:watch`, etc.)
|
||||
|
||||
### 2. ComfyUI Custom Node Widgets
|
||||
- Location: `./web/comfyui/` (Vanilla JS) + `./vue-widgets/` (Vue)
|
||||
- Primary styles: `./web/comfyui/lm_styles.css` (NOT `./static/css/`)
|
||||
- Vue builds to `./web/comfyui/vue-widgets/`, typecheck via `vue-tsc`
|
||||
- Location: `./web/comfyui/`
|
||||
- Purpose: Widgets and UI logic that ComfyUI loads as custom node extensions
|
||||
- Tech stack: Vanilla JS + Vue.js widgets (in `./vue-widgets/` and built to `./web/comfyui/vue-widgets/`)
|
||||
- Widget styling: Primary styles in `./web/comfyui/lm_styles.css` (NOT `./static/css/`)
|
||||
- Development: No npm build step for these widgets (Vue widgets use build system)
|
||||
|
||||
### Widget Development Guidelines
|
||||
- Use `app.registerExtension()` to register ComfyUI extensions (ComfyUI integration layer)
|
||||
- Use `node.addDOMWidget()` for custom DOM widgets
|
||||
- Widget styles should follow the patterns in `./web/comfyui/lm_styles.css`
|
||||
- Selected state: `rgba(66, 153, 225, 0.3)` background, `rgba(66, 153, 225, 0.6)` border
|
||||
- Hover state: `rgba(66, 153, 225, 0.2)` background
|
||||
- Color palette matches the Lora Manager accent color (blue #4299e1)
|
||||
- Use oklch() for color values when possible (defined in `./static/css/base.css`)
|
||||
- Vue widget components are in `./vue-widgets/src/components/` and built to `./web/comfyui/vue-widgets/`
|
||||
- When modifying widget styles, check `./web/comfyui/lm_styles.css` for consistency with other ComfyUI widgets
|
||||
|
||||
|
||||
CLAUDE.md: 258 lines changed
@@ -8,22 +8,17 @@ ComfyUI LoRA Manager is a comprehensive LoRA management system for ComfyUI that
|
||||
|
||||
## Development Commands
|
||||
|
||||
### Backend
|
||||
|
||||
### Backend Development
|
||||
```bash
|
||||
# Install dependencies
|
||||
pip install -r requirements.txt
|
||||
|
||||
# Install development dependencies (for testing)
|
||||
pip install -r requirements-dev.txt
|
||||
|
||||
# Run standalone server (port 8188 by default)
|
||||
python standalone.py --port 8188
|
||||
|
||||
# Run all backend tests
|
||||
pytest
|
||||
|
||||
# Run specific test file or function
|
||||
pytest tests/test_recipes.py
|
||||
pytest tests/test_recipes.py::test_function_name
|
||||
|
||||
# Run backend tests with coverage
|
||||
COVERAGE_FILE=coverage/backend/.coverage pytest \
|
||||
--cov=py \
|
||||
@@ -32,158 +27,185 @@ COVERAGE_FILE=coverage/backend/.coverage pytest \
|
||||
--cov-report=html:coverage/backend/html \
|
||||
--cov-report=xml:coverage/backend/coverage.xml \
|
||||
--cov-report=json:coverage/backend/coverage.json
|
||||
|
||||
# Run specific test file
|
||||
pytest tests/test_recipes.py
|
||||
```
|
||||
|
||||
### Frontend
|
||||
|
||||
There are three test suites run by `npm test`: vanilla JS tests (vitest at root) and Vue widget tests (`vue-widgets/` vitest).
|
||||
|
||||
### Frontend Development
|
||||
```bash
|
||||
# Install frontend dependencies
|
||||
npm install
|
||||
cd vue-widgets && npm install && cd ..
|
||||
|
||||
# Run all frontend tests (JS + Vue)
|
||||
# Run frontend tests
|
||||
npm test
|
||||
|
||||
# Run only vanilla JS tests
|
||||
npm run test:js
|
||||
|
||||
# Run only Vue widget tests
|
||||
npm run test:vue
|
||||
|
||||
# Watch mode (JS tests only)
|
||||
# Run frontend tests in watch mode
|
||||
npm run test:watch
|
||||
|
||||
# Frontend coverage
|
||||
# Run frontend tests with coverage
|
||||
npm run test:coverage
|
||||
|
||||
# Build Vue widgets (output to web/comfyui/vue-widgets/)
|
||||
cd vue-widgets && npm run build
|
||||
|
||||
# Vue widget dev mode (watch + rebuild)
|
||||
cd vue-widgets && npm run dev
|
||||
|
||||
# Typecheck Vue widgets
|
||||
cd vue-widgets && npm run typecheck
|
||||
```
|
||||
|
||||
### Localization
|
||||
|
||||
```bash
|
||||
# Sync translation keys after UI string updates
|
||||
python scripts/sync_translation_keys.py
|
||||
```
|
||||
|
||||
Locale files are in `locales/` (en, zh-CN, zh-TW, ja, ko, fr, de, es, ru, he).
|
||||
|
||||
## Architecture
|
||||
|
||||
### Dual Mode Operation
|
||||
### Backend Structure (Python)
|
||||
|
||||
The system runs in two modes:
|
||||
- **ComfyUI plugin mode**: Integrates with ComfyUI's PromptServer, uses `folder_paths` for model discovery
|
||||
- **Standalone mode**: `standalone.py` mocks ComfyUI dependencies, reads paths from `settings.json`
|
||||
- Detection: `os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"`
|
||||
**Core Entry Points:**
|
||||
- `__init__.py` - ComfyUI plugin entry point, registers nodes and routes
|
||||
- `standalone.py` - Standalone server that mocks ComfyUI dependencies
|
||||
- `py/lora_manager.py` - Main LoraManager class that registers HTTP routes
|
||||
|
||||
### Backend (Python)
|
||||
**Service Layer** (`py/services/`):
|
||||
- `ServiceRegistry` - Singleton service registry for dependency management
|
||||
- `ModelServiceFactory` - Factory for creating model services (LoRA, Checkpoint, Embedding)
|
||||
- Scanner services (`lora_scanner.py`, `checkpoint_scanner.py`, `embedding_scanner.py`) - Model file discovery and indexing
|
||||
- `model_scanner.py` - Base scanner with hash-based deduplication and metadata extraction
|
||||
- `persistent_model_cache.py` - SQLite-based cache for model metadata
|
||||
- `metadata_sync_service.py` - Syncs metadata from CivitAI/CivArchive APIs
|
||||
- `civitai_client.py` / `civarchive_client.py` - API clients for external services
|
||||
- `downloader.py` / `download_manager.py` - Model download orchestration
|
||||
- `recipe_scanner.py` - Recipe file management and image association
|
||||
- `settings_manager.py` - Application settings with migration support
|
||||
- `websocket_manager.py` - WebSocket broadcasting for real-time updates
|
||||
- `use_cases/` - Business logic orchestration (auto-organize, bulk refresh, downloads)
|
||||
|
||||
**Entry points:**
|
||||
- `__init__.py` — ComfyUI plugin entry: registers nodes via `NODE_CLASS_MAPPINGS`, sets `WEB_DIRECTORY`, calls `LoraManager.add_routes()`
|
||||
- `standalone.py` — Standalone server: mocks `folder_paths` and node modules, starts aiohttp server
|
||||
- `py/lora_manager.py` — Main `LoraManager` class that registers all HTTP routes
|
||||
**Routes Layer** (`py/routes/`):
|
||||
- Route registrars organize endpoints by domain (models, recipes, previews, example images, updates)
|
||||
- `handlers/` - Request handlers implementing business logic
|
||||
- Routes use aiohttp and integrate with ComfyUI's PromptServer
|
||||
|
||||
**Service layer** (`py/services/`):
|
||||
- `ServiceRegistry` singleton for dependency injection; services follow `get_instance()` singleton pattern
|
||||
- `BaseModelService` abstract base → `LoraService`, `CheckpointService`, `EmbeddingService`
|
||||
- `ModelScanner` base → `LoraScanner`, `CheckpointScanner`, `EmbeddingScanner` for file discovery with hash-based deduplication
|
||||
- `PersistentModelCache` — SQLite-based metadata cache
|
||||
- `MetadataSyncService` — Background sync from CivitAI/CivArchive APIs
|
||||
- `SettingsManager` — Settings with schema migration support
|
||||
- `WebSocketManager` — Real-time progress broadcasting
|
||||
- `ModelServiceFactory` — Creates the right service for each model type
|
||||
- Use cases in `py/services/use_cases/` orchestrate complex business logic (auto-organize, bulk refresh, downloads)
|
||||
**Recipe System** (`py/recipes/`):
|
||||
- `base.py` - Base recipe metadata structure
|
||||
- `enrichment.py` - Enriches recipes with model metadata
|
||||
- `merger.py` - Merges recipe data from multiple sources
|
||||
- `parsers/` - Parsers for different recipe formats (PNG, JSON, workflow)
|
||||
|
||||
**Routes** (`py/routes/`):
|
||||
- Route registrars organize endpoints by domain: `ModelRouteRegistrar`, `RecipeRouteRegistrar`, etc.
|
||||
- Request handlers in `py/routes/handlers/` implement route logic
|
||||
- API endpoints follow `/loras/*`, `/checkpoints/*`, `/embeddings/*` patterns
|
||||
- All routes use aiohttp, return `web.json_response` or `web.Response`
|
||||
|
||||
**Recipe system** (`py/recipes/`):
|
||||
- `base.py` — Recipe metadata structure
|
||||
- `enrichment.py` — Enriches recipes with model metadata
|
||||
- `parsers/` — Parsers for PNG metadata, JSON, and workflow formats
|
||||
|
||||
**Custom nodes** (`py/nodes/`):
|
||||
- Each node class has a `NAME` class attribute used as key in `NODE_CLASS_MAPPINGS`
|
||||
- Standard ComfyUI node pattern: `INPUT_TYPES()` classmethod, `RETURN_TYPES`, `FUNCTION`
|
||||
- All nodes registered in `__init__.py`
|
||||
**Custom Nodes** (`py/nodes/`):
|
||||
- `lora_loader.py` - LoRA loader nodes with preset support
|
||||
- `save_image.py` - Enhanced save image with pattern-based filenames
|
||||
- `trigger_word_toggle.py` - Toggle trigger words in prompts
|
||||
- `lora_stacker.py` - Stack multiple LoRAs
|
||||
- `prompt.py` - Prompt node with autocomplete
|
||||
- `wanvideo_lora_select.py` - WanVideo-specific LoRA selection
|
||||
|
||||
**Configuration** (`py/config.py`):
|
||||
- Manages folder paths for models, handles symlink mappings
|
||||
- Manages folder paths for models, checkpoints, embeddings
|
||||
- Handles symlink mappings for complex directory structures
|
||||
- Auto-saves paths to settings.json in ComfyUI mode
|
||||
|
||||
### Frontend — Two Distinct UI Systems
|
||||
### Frontend Structure (JavaScript)
|
||||
|
||||
#### 1. Standalone Manager Web UI
|
||||
- **Location:** `static/` (JS/CSS) and `templates/` (HTML)
|
||||
- **Tech:** Vanilla JS + CSS, served by standalone server
|
||||
- **Structure:** `static/js/core.js` (shared), `loras.js`, `checkpoints.js`, `embeddings.js`, `recipes.js`, `statistics.js`
|
||||
- **Tests:** `tests/frontend/**/*.test.js` (vitest + jsdom)
|
||||
#### 2. ComfyUI Custom Node Widgets

- **Vanilla JS widgets:** `web/comfyui/*.js` — ES modules extending ComfyUI's LiteGraph-based UI
  - `loras_widget.js` / `loras_widget_events.js` — Main LoRA selection widget with preview and its event handling
  - `autocomplete.js` — Trigger word and embedding autocomplete
  - `preview_tooltip.js` — Model card preview tooltips
  - `top_menu_extension.js` — Adds the "Launch LoRA Manager" menu item
  - `trigger_word_highlight.js` — Syntax highlighting for trigger words
  - `utils.js` — Shared utilities and API helpers
  - Widget styling in `web/comfyui/lm_styles.css` (NOT `static/css/`)
- **Vue widgets:** `vue-widgets/src/` → built to `web/comfyui/vue-widgets/`
  - Vue 3 + TypeScript + PrimeVue + vue-i18n
  - Vite build with CSS-injected-by-JS plugin
  - Components: `LoraPoolWidget`, `LoraRandomizerWidget`, `LoraCyclerWidget`, `AutocompleteTextWidget`
  - Auto-built on ComfyUI startup via `py/vue_widget_builder.py`
  - Tests: `vue-widgets/tests/**/*.test.ts` (vitest)
**Widget registration pattern:**

- Widgets use `app.registerExtension()` and `getCustomWidgets` hooks
- `node.addDOMWidget(name, type, element, options)` embeds HTML in LiteGraph nodes
- See `docs/dom_widget_dev_guide.md` for the complete DOMWidget development guide
**Web Source** (`web-src/`):

- Modern frontend components (if migrating from `static/`)
- `components/` - Reusable UI components
- `styles/` - CSS styling
### Key Patterns

**Dual Mode Operation:**

- ComfyUI plugin mode: Integrates with ComfyUI's PromptServer, uses `folder_paths`
- Standalone mode: Mocks ComfyUI dependencies via `standalone.py`, reads paths from settings.json
- Detection: `os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"` (see the sketch below)
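A small illustration of how code can branch on that flag; the standalone fallback helper is hypothetical, while `folder_paths.get_folder_paths` is ComfyUI's own registry call:

```python
import os

STANDALONE_MODE = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1"

if STANDALONE_MODE:
    # Standalone: paths come from settings.json; ComfyUI modules are mocked by standalone.py.
    from .standalone_paths import get_model_roots  # hypothetical helper for this sketch
else:
    # Plugin mode: defer to ComfyUI's folder registry.
    import folder_paths

    def get_model_roots():
        return folder_paths.get_folder_paths("loras")
```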
**Settings Management:**

- Settings stored in the user directory (via `platformdirs`) or portable mode (in repo)
- Migration system tracks the settings schema version
- Template in `settings.json.example` defines defaults
**Model Scanning Flow:**

1. Scanner walks folder paths, computes file hashes
2. Hash-based deduplication prevents duplicate processing
3. Metadata extracted from safetensors headers
4. Persistent cache stores results in SQLite
5. Background sync fetches CivitAI/CivArchive metadata
6. WebSocket broadcasts updates to connected clients
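A rough sketch of steps 1–3, assuming the standard safetensors layout (an 8-byte little-endian header length followed by a JSON header); the in-memory cache here stands in for the real SQLite-backed cache:

```python
import hashlib
import json
import struct
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def read_safetensors_metadata(path: Path) -> dict:
    # safetensors files start with an 8-byte little-endian length, then a JSON header.
    with path.open("rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return header.get("__metadata__", {})

def scan(roots: list[Path], cache: dict) -> None:
    for root in roots:
        for file in root.rglob("*.safetensors"):
            digest = sha256_of(file)           # step 1: walk and hash
            if digest in cache:                # step 2: hash-based deduplication
                continue
            cache[digest] = {
                "path": str(file),
                "metadata": read_safetensors_metadata(file),  # step 3
            }
```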
**Recipe System:**

- Recipes store LoRA combinations with parameters
- Supports import from workflow JSON, PNG metadata
- Images associated with recipes via sibling file detection
- Enrichment adds model metadata for display
**Frontend-Backend Communication:**

- REST API for CRUD operations
- WebSocket for real-time progress updates (downloads, scans)
- API endpoints follow the `/loras/*` pattern
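For illustration, a minimal aiohttp WebSocket endpoint that pushes progress events to connected clients; the payload shape and the broadcaster class are assumptions for this sketch, not the project's actual `WebSocketManager` API:

```python
from aiohttp import web

class ProgressBroadcaster:
    def __init__(self) -> None:
        self.clients: set[web.WebSocketResponse] = set()

    async def handle(self, request: web.Request) -> web.WebSocketResponse:
        ws = web.WebSocketResponse()
        await ws.prepare(request)
        self.clients.add(ws)
        try:
            async for _ in ws:        # keep the connection open, drain incoming messages
                pass
        finally:
            self.clients.discard(ws)
        return ws

    async def broadcast(self, payload: dict) -> None:
        for ws in list(self.clients):
            if not ws.closed:
                await ws.send_json(payload)

# usage: await broadcaster.broadcast({"type": "download_progress", "percent": 42})
```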
## Code Style

**Python:**

- PEP 8, 4-space indentation, English comments only (per copilot-instructions.md)
- snake_case for files, functions, variables; PascalCase for classes
- Type hints preferred; use `from __future__ import annotations` for forward references
- Use the `TYPE_CHECKING` guard for type-checking-only imports
- Loggers via `logging.getLogger(__name__)`
- Custom exceptions in `py/services/errors.py`
- Async patterns: `async def` for I/O, `@pytest.mark.asyncio` for async tests
- Singleton pattern with a class-level `asyncio.Lock` (see `ModelScanner.get_instance()` and the sketch below)
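A minimal sketch of that singleton idiom; it mirrors the described pattern rather than the actual `ModelScanner` implementation:

```python
import asyncio

class ExampleScanner:
    _instance = None
    _lock = asyncio.Lock()  # class-level lock guards first-time construction

    @classmethod
    async def get_instance(cls) -> "ExampleScanner":
        if cls._instance is None:
            async with cls._lock:
                if cls._instance is None:       # double-checked under the lock
                    instance = cls()
                    await instance._initialize()
                    cls._instance = instance
        return cls._instance

    async def _initialize(self) -> None:
        # Expensive async setup (cache load, etc.) happens exactly once.
        await asyncio.sleep(0)
```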
**JavaScript:**

- ES modules, camelCase functions/variables, PascalCase classes
- ComfyUI widget files use the `*_widget.js` suffix
- Prefer vanilla JS for `web/comfyui/` widgets, avoid framework dependencies (except the Vue widgets)
## Testing

**Backend (pytest):**

- Config in `pytest.ini`: `--import-mode=importlib`, testpaths=`tests`
- Test files: `tests/test_*.py`
- Fixtures in `tests/conftest.py` handle ComfyUI dependency mocking (via `standalone.py` patterns)
- Markers: `@pytest.mark.asyncio` for async tests, `@pytest.mark.no_settings_dir_isolation` for real paths
- Uses `tmp_path_factory` for directory isolation
**Frontend (vitest):**

- Vanilla JS tests: `tests/frontend/**/*.test.js` with jsdom (setup in `tests/frontend/setup.js`)
- Vue widget tests: `vue-widgets/tests/**/*.test.ts` with jsdom + @vue/test-utils
- Coverage via `npm run test:coverage`
## Key Integration Points

- **Settings:** Stored in the user directory (via `platformdirs`) or portable mode (`"use_portable_settings": true`)
- **CivitAI/CivArchive:** API clients for metadata sync and model downloads; CivitAI API key in settings
- **Symlink handling:** Config scans symlinks to map virtual→physical paths; fingerprinting prevents redundant rescans
- **WebSocket:** Broadcasts real-time progress for downloads, scans, and metadata sync
- **Model scanning flow:** Walk folders → compute hashes → deduplicate → extract safetensors metadata → cache in SQLite → background CivitAI sync → WebSocket broadcast
**Settings Location:**

- ComfyUI mode: Auto-saves folder paths to the user settings directory
- Standalone mode: Use `settings.json` (copy from `settings.json.example`)
- Portable mode: Set `"use_portable_settings": true` in settings.json (see the sketch below)
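For illustration, settings-path resolution along those lines might look like this; the app name and the exact portable-flag check are assumptions, not the real implementation:

```python
import json
from pathlib import Path
from platformdirs import user_config_dir

REPO_ROOT = Path(__file__).resolve().parent

def settings_path() -> Path:
    portable = REPO_ROOT / "settings.json"
    if portable.exists():
        data = json.loads(portable.read_text(encoding="utf-8"))
        if data.get("use_portable_settings"):
            return portable  # portable mode: keep settings next to the code
    # default: per-user settings directory ("lora-manager" app name is illustrative)
    return Path(user_config_dir("lora-manager")) / "settings.json"
```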
**API Integration:**

- CivitAI API key required for downloads (add it in settings)
- CivArchive API used as a fallback for deleted models
- Metadata archive database available for offline metadata
**Symlink Handling:**

- Config scans symlinks to map virtual paths to physical locations
- Preview validation uses normalized preview root paths
- Fingerprinting prevents redundant symlink rescans
**ComfyUI Node Development:**

- Nodes defined in `py/nodes/`, registered in `__init__.py`
- Frontend widgets in `web/comfyui/`, matched by node type
- Use the `WEB_DIRECTORY = "./web/comfyui"` convention
**Recipe Image Association:**

- Recipes scan for sibling images in the same directory
- Supports repair/migration of recipe image paths
- See `py/services/recipe_scanner.py` for implementation details
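For illustration, one way sibling detection could work; the extension list and same-stem naming convention are assumptions for this sketch, not the actual `recipe_scanner` logic:

```python
from pathlib import Path

IMAGE_EXTENSIONS = (".png", ".jpg", ".jpeg", ".webp")

def find_sibling_image(recipe_file: Path) -> Path | None:
    """Return an image next to the recipe that shares its stem, if any."""
    for ext in IMAGE_EXTENSIONS:
        candidate = recipe_file.with_suffix(ext)
        if candidate.exists():
            return candidate
    return None
```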
README.md
@@ -34,23 +34,6 @@ Enhance your Civitai browsing experience with our companion browser extension! S
## Release Notes

### v0.9.16

* **Duplicate Detection Enhancement** - The model duplicates mode now respects filter configurations, making it easier to find duplicate groups within specific filtered results.
* **Tag Logic Toggle** - Added an OR/AND toggle for include-tags filtering in the filters panel, providing more flexible tag-based model searches.
* **Metadata Refresh Skip Paths** - New setting to exclude specific paths from metadata refresh operations. Models under these paths will be skipped when fetching metadata from remote sources.
* **Dynamic Trigger Words in Prompt Node** - The Prompt node now supports a dynamic number of trigger word inputs for greater flexibility.
* **Early Access Updates** - Model updates now display Early Access information, with a new setting to ignore Early Access updates if desired.
* **LM Civitai Extension Integration** - Added integration with the LM Civitai Extension. Clicking the download button in model updates now sends downloads to the extension's download queue for seamless one-click downloads.

### v0.9.15

* **Filter Presets** - Save filter combinations as presets for quick switching and reapplication.
* **Bug Fixes** - Fixed various bugs for improved stability.

### v0.9.14

* **LoRA Cycler Node** - Introduced a new LoRA Cycler node that enables iteration through specified LoRAs with support for repeat counts and pause-iteration functionality. Refer to the new "Lora Cycler" template workflow for a concrete example.
* **Enhanced Prompt Node with Tag Autocomplete** - Enhanced the Prompt node with comprehensive tag autocomplete based on merged Danbooru + e621 tags. Supports tag search and autocomplete functionality. Implemented a command system with shortcuts like `/char` or `/artist` for category-specific tag searching. Added `/ac` or `/noac` commands to quickly enable or disable autocomplete. Refer to the "Lora Manager Basic" template workflow in ComfyUI -> Templates -> ComfyUI-Lora-Manager for detailed tips.
* **Bug Fixes & Stability** - Addressed multiple bugs and improved overall stability.

### v0.9.12

* **LoRA Randomizer System** - Introduced a comprehensive LoRA randomization system featuring LoRA Pool and LoRA Randomizer nodes for flexible and dynamic generation workflows.
* **LoRA Randomizer Template** - Refer to the new "LoRA Randomizer" template workflow for detailed examples of flexible randomization modes, lock & reuse options, and other features.
@@ -1,27 +1,31 @@
|
||||
## Overview
|
||||
|
||||
The **LoRA Manager Civitai Extension** is a Browser extension designed to work seamlessly with [LoRA Manager](https://github.com/willmiao/ComfyUI-Lora-Manager) to significantly enhance your browsing experience on [Civitai](https://civitai.com). With this extension, you can:
|
||||
The **LoRA Manager Civitai Extension** is a Browser extension designed to work seamlessly with [LoRA Manager](https://github.com/willmiao/ComfyUI-Lora-Manager) to significantly enhance your browsing experience on [Civitai](https://civitai.com).
|
||||
It also supports browsing on [CivArchive](https://civarchive.com/) (formerly CivitaiArchive).
|
||||
|
||||
With this extension, you can:
|
||||
|
||||
✅ Instantly see which models are already present in your local library
|
||||
✅ Download new models with a single click
|
||||
✅ Manage downloads efficiently with queue and parallel download support
|
||||
✅ Keep your downloaded models automatically organized according to your custom settings
|
||||
|
||||

|
||||
|
||||
**Update:** It now also supports browsing on [CivArchive](https://civarchive.com/) (formerly CivitaiArchive).
|
||||
|
||||

|
||||

|
||||
|
||||
---
|
||||
|
||||
## Why Supporter Access?
|
||||
## Why Are All Features for Supporters Only?
|
||||
|
||||
LoRA Manager is built with love for the Stable Diffusion and ComfyUI communities. Your support makes it possible for me to keep improving and maintaining the tool full-time.
|
||||
I love building tools for the Stable Diffusion and ComfyUI communities, and LoRA Manager is a passion project that I've poured countless hours into. When I created this companion extension, my hope was to offer its core features for free, as a thank-you to all of you.
|
||||
|
||||
Supporter-exclusive features help ensure the long-term sustainability of LoRA Manager, allowing continuous updates, new features, and better performance for everyone.
|
||||
Unfortunately, I've reached a point where I need to be realistic. The level of support from the free model has been far lower than what's needed to justify the continuous development and maintenance for both projects. It was a difficult decision, but I've chosen to make the extension's features exclusive to supporters.
|
||||
|
||||
Every contribution directly fuels development and keeps the core LoRA Manager free and open-source. In addition to monthly supporters, one-time donation supporters will also receive a license key, with the duration scaling according to the contribution amount. Thank you for helping keep this project alive and growing. ❤️
|
||||
This change is crucial for me to be able to continue dedicating my time to improving the free and open-source LoRA Manager, which I'm committed to keeping available for everyone.
|
||||
|
||||
Your support does more than just unlock a few features—it allows me to keep innovating and ensures the core LoRA Manager project thrives. I'm incredibly grateful for your understanding and any support you can offer. ❤️
|
||||
|
||||
(_For those who previously supported me on Ko-fi with a one-time donation, I'll be sending out license keys individually as a thank-you._)
|
||||
|
||||
|
||||
---
|
||||
@@ -86,27 +90,20 @@ Clicking the download button adds the corresponding model version to the downloa
|
||||
|
||||
On a specific model page, visual indicators also appear on version buttons, showing which versions are already in your local library.
|
||||
|
||||
**Starting from v0.4.8**, model pages use a dedicated download button for better compatibility. When switching to a specific version by clicking a version button:
|
||||
When switching to a specific version by clicking a version button:
|
||||
|
||||
- The new **dedicated download button** directly triggers download via **LoRA Manager**
|
||||
- The **original download button** remains unchanged for standard browser downloads
|
||||
- Clicking the download button will open a dropdown:
|
||||
- Download via **LoRA Manager**
|
||||
- Download via **Original Download** (browser download)
|
||||
|
||||
You can check **Remember my choice** to set your preferred default. You can change this setting anytime in the extension's settings.
|
||||
|
||||

|
||||
|
||||
### Hide Models Already in Library (Beta)
|
||||
|
||||
**New in v0.4.8**: A new **Hide models already in library (Beta)** option makes it easier to focus on models you haven't added yet. It can be enabled from Settings, or toggled quickly using **Ctrl + Shift + H** (macOS: **Command + Shift + H**).
|
||||
|
||||
### Resources on Image Pages — now shows in-library indicators for image resources plus one-click recipe import
|
||||
|
||||
- **One-Click Import Civitai Image as Recipe** — Import any Civitai image as a recipe with a single click in the Resources Used panel.
|
||||
- **Auto-Queue Missing Assets** — In Settings you can decide if LoRAs or checkpoints referenced by that image should automatically be added to your download queue.
|
||||
- **More Accurate Metadata** — Importing directly from the page is faster than copying inside LM and keeps on-site tags and other metadata perfectly aligned.
|
||||
### Resources on Image Pages (2025-08-05) — now shows in-library indicators for image resources. ‘Import image as recipe’ coming soon!
|
||||
|
||||

|
||||
|
||||
[](https://github.com/user-attachments/assets/41fd4240-c949-4f83-bde7-8f3124c09494)
|
||||
|
||||
---
|
||||
|
||||
## Model Download Location & LoRA Manager Settings
|
||||
@@ -173,11 +170,11 @@ _Thanks to user **Temikus** for sharing this solution!_
|
||||
The extension will evolve alongside **LoRA Manager** improvements. Planned features include:
|
||||
|
||||
- [x] Support for **additional model types** (e.g., embeddings)
|
||||
- [x] One-click **Recipe Import**
|
||||
- [x] Display of in-library status for all resources in the **Resources Used** section of the image page
|
||||
- [ ] One-click **Recipe Import**
|
||||
- [x] Display of in-library status for all resources in the **Resources Used** section of the image page
|
||||
- [x] One-click **Auto-organize Models**
|
||||
- [x] **Hide models already in library (Beta)** - Focus on models you haven't added yet
|
||||
|
||||
**Stay tuned — and thank you for your support!**
|
||||
|
||||
---
|
||||
|
||||
|
||||
@@ -1,678 +0,0 @@
|
||||
# Backend Testing Improvement Plan
|
||||
|
||||
**Status:** Phase 4 Complete ✅
|
||||
**Created:** 2026-02-11
|
||||
**Updated:** 2026-02-11
|
||||
**Priority:** P0 - Critical
|
||||
|
||||
---
|
||||
|
||||
## Executive Summary
|
||||
|
||||
This document outlines a comprehensive plan to improve the quality, coverage, and maintainability of the LoRA Manager backend test suite. Recent critical bugs (missing `_handle_download_task_done` and `get_status` methods) were not caught by existing tests, highlighting significant gaps in the testing strategy.
|
||||
|
||||
## Current State Assessment
|
||||
|
||||
### Test Statistics
|
||||
- **Total Python Test Files:** 80+
|
||||
- **Total JavaScript Test Files:** 29
|
||||
- **Test Lines of Code:** ~15,000
|
||||
- **Current Pass Rate:** 100% (but missing critical edge cases)
|
||||
|
||||
### Key Findings
|
||||
1. **Coverage Gaps:** Critical modules have no direct tests
|
||||
2. **Mocking Issues:** Over-mocking hides real bugs
|
||||
3. **Integration Deficit:** Missing end-to-end tests
|
||||
4. **Async Inconsistency:** Multiple patterns for async tests
|
||||
5. **Maintenance Burden:** Large, complex test files with duplication
|
||||
|
||||
---
|
||||
|
||||
## Phase 2 Completion Summary (2026-02-11)
|
||||
|
||||
### Completed Items
|
||||
|
||||
1. **Integration Test Framework** ✅
|
||||
- Created `tests/integration/` directory structure
|
||||
- Added `tests/integration/conftest.py` with shared fixtures
|
||||
- Added `tests/integration/__init__.py` for package organization
|
||||
|
||||
2. **Download Flow Integration Tests** ✅
|
||||
- Created `tests/integration/test_download_flow.py` with 7 tests
|
||||
- Tests cover:
|
||||
- Download with mocked network (2 tests)
|
||||
- Progress broadcast verification (1 test)
|
||||
- Error handling (1 test)
|
||||
- Cancellation flow (1 test)
|
||||
- Concurrent download management (1 test)
|
||||
- Route endpoint validation (1 test)
|
||||
|
||||
3. **Recipe Flow Integration Tests** ✅
|
||||
- Created `tests/integration/test_recipe_flow.py` with 9 tests
|
||||
- Tests cover:
|
||||
- Recipe save and retrieve flow (1 test)
|
||||
- Recipe update flow (1 test)
|
||||
- Recipe delete flow (1 test)
|
||||
- Recipe model extraction (1 test)
|
||||
- Generation parameters handling (1 test)
|
||||
- Concurrent recipe reads (1 test)
|
||||
- Concurrent read/write operations (1 test)
|
||||
- Recipe list endpoint (1 test)
|
||||
- Recipe metadata parsing (1 test)
|
||||
|
||||
4. **ModelLifecycleService Coverage** ✅
|
||||
- Added 12 new tests to `tests/services/test_model_lifecycle_service.py`
|
||||
- Tests cover:
|
||||
- `exclude_model` functionality (3 tests)
|
||||
- `bulk_delete_models` functionality (2 tests)
|
||||
- Error path tests (5 tests)
|
||||
- `_extract_model_id_from_payload` utility (3 tests)
|
||||
- Total: 18 tests (up from 6)
|
||||
|
||||
5. **PersistentRecipeCache Concurrent Access** ✅
|
||||
- Added 5 new concurrent access tests to `tests/test_persistent_recipe_cache.py`
|
||||
- Tests cover:
|
||||
- Concurrent reads without corruption (1 test)
|
||||
- Concurrent write and read operations (1 test)
|
||||
- Concurrent updates to same recipe (1 test)
|
||||
- Schema initialization thread safety (1 test)
|
||||
- Concurrent save and remove operations (1 test)
|
||||
- Total: 17 tests (up from 12)
|
||||
|
||||
### Test Results
|
||||
- **Integration Tests:** 16/16 passing
|
||||
- **ModelLifecycleService Tests:** 18/18 passing
|
||||
- **PersistentRecipeCache Tests:** 17/17 passing
|
||||
- **Total New Tests Added:** 28 tests
|
||||
|
||||
---
|
||||
|
||||
## Phase 1 Completion Summary (2026-02-11)
|
||||
|
||||
### Completed Items
|
||||
|
||||
1. **pytest-asyncio Integration** ✅
|
||||
- Added `pytest-asyncio>=0.21.0` to `requirements-dev.txt`
|
||||
- Updated `pytest.ini` with `asyncio_mode = auto` and `asyncio_default_fixture_loop_scope = function`
|
||||
- Removed custom `pytest_pyfunc_call` handler from `tests/conftest.py`
|
||||
- Added `@pytest.mark.asyncio` decorator to 21 async test functions in `tests/services/test_download_manager.py`
|
||||
|
||||
2. **Error Path Tests** ✅
|
||||
- Created `tests/services/test_downloader_error_paths.py` with 19 new tests
|
||||
- Tests cover:
|
||||
- DownloadStreamControl state management (6 tests)
|
||||
- Downloader configuration and initialization (4 tests)
|
||||
- DownloadProgress dataclass (1 test)
|
||||
- Custom exceptions (2 tests)
|
||||
- Authentication headers (3 tests)
|
||||
- Session management (3 tests)
|
||||
|
||||
3. **Test Results**
|
||||
- All 45 tests pass (26 in test_download_manager.py + 19 in test_downloader_error_paths.py)
|
||||
- No regressions introduced
|
||||
|
||||
### Notes
|
||||
- Over-mocking fix in `test_download_manager.py` deferred to Phase 2 as it requires significant refactoring
|
||||
- Error path tests focus on unit-level testing of downloader components rather than complex integration scenarios
|
||||
|
||||
---
|
||||
|
||||
## Phase 1: Critical Fixes (P0) - Week 1-2
|
||||
|
||||
### 1.1 Fix Over-Mocking Issues
|
||||
|
||||
**Problem:** Tests mock the methods they purport to test, hiding real bugs.
|
||||
|
||||
**Affected Files:**
|
||||
- `tests/services/test_download_manager.py` - Mocks `_execute_download`
|
||||
- `tests/utils/test_example_images_download_manager_unit.py` - Mocks callbacks
|
||||
- `tests/routes/test_base_model_routes_smoke.py` - Uses fake service stubs
|
||||
|
||||
**Actions:**
|
||||
1. Refactor `test_download_manager.py` to test actual download logic
|
||||
2. Replace method-level mocks with dependency injection
|
||||
3. Add integration tests that verify real behavior
|
||||
|
||||
**Example Fix:**
|
||||
```python
|
||||
# BEFORE (Bad - mocks method under test)
|
||||
async def fake_execute_download(self, **kwargs):
|
||||
return {"success": True}
|
||||
monkeypatch.setattr(DownloadManager, "_execute_download", fake_execute_download)
|
||||
|
||||
# AFTER (Good - tests actual logic with injected dependencies)
|
||||
async def test_download_executes_with_real_logic(
|
||||
tmp_path, mock_downloader, mock_websocket
|
||||
):
|
||||
manager = DownloadManager(
|
||||
downloader=mock_downloader,
|
||||
ws_manager=mock_websocket
|
||||
)
|
||||
result = await manager._execute_download(urls=["http://test.com/file.safetensors"])
|
||||
assert result.success is True
|
||||
assert mock_downloader.download_calls == 1
|
||||
```
|
||||
|
||||
### 1.2 Add Missing Error Path Tests
|
||||
|
||||
**Problem:** Error handling code is not tested, leading to production failures.
|
||||
|
||||
**Required Tests:**
|
||||
|
||||
| Error Type | Module | Priority |
|
||||
|------------|--------|----------|
|
||||
| Network timeout | `downloader.py` | P0 |
|
||||
| Disk full | `download_manager.py` | P0 |
|
||||
| Permission denied | `example_images_download_manager.py` | P0 |
|
||||
| Session refresh failure | `downloader.py` | P1 |
|
||||
| Partial file cleanup | `download_manager.py` | P1 |
|
||||
|
||||
**Implementation:**
|
||||
```python
|
||||
@pytest.mark.asyncio
|
||||
async def test_download_handles_network_timeout(tmp_path):
|
||||
"""Verify download retries on timeout and eventually fails gracefully."""
|
||||
# Arrange
|
||||
downloader = Downloader()
|
||||
mock_session = AsyncMock()
|
||||
mock_session.get.side_effect = asyncio.TimeoutError()
|
||||
|
||||
# Act
|
||||
success, message = await downloader.download_file(
|
||||
url="http://test.com/file.safetensors",
|
||||
target_path=tmp_path / "test.safetensors",
|
||||
session=mock_session
|
||||
)
|
||||
|
||||
# Assert
|
||||
assert success is False
|
||||
assert "timeout" in message.lower()
|
||||
assert mock_session.get.call_count == MAX_RETRIES
|
||||
```
|
||||
|
||||
### 1.3 Standardize Async Test Patterns
|
||||
|
||||
**Problem:** Inconsistent async test patterns across codebase.
|
||||
|
||||
**Current State:**
|
||||
- Some use `@pytest.mark.asyncio`
|
||||
- Some rely on custom `pytest_pyfunc_call` in conftest.py
|
||||
- Some use bare async functions
|
||||
|
||||
**Solution:**
|
||||
1. Add `pytest-asyncio` to requirements-dev.txt
|
||||
2. Update `pytest.ini`:
|
||||
```ini
|
||||
[pytest]
|
||||
asyncio_mode = auto
|
||||
asyncio_default_fixture_loop_scope = function
|
||||
```
|
||||
3. Remove custom `pytest_pyfunc_call` handler from conftest.py
|
||||
4. Bulk update all async tests to use `@pytest.mark.asyncio`
|
||||
|
||||
**Migration Script:**
|
||||
```bash
|
||||
# Find all async test functions missing decorator
|
||||
rg "^async def test_" tests/ --type py -A1 | grep -B1 "@pytest.mark" | grep "async def"
|
||||
|
||||
# Add decorator (manual review required)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 2: Integration & Coverage (P1) - Week 3-4
|
||||
|
||||
### 2.1 Add Critical Module Tests
|
||||
|
||||
**Priority 1: `py/services/model_lifecycle_service.py`**
|
||||
```python
|
||||
# tests/services/test_model_lifecycle_service.py
|
||||
class TestModelLifecycleService:
|
||||
async def test_create_model_registers_in_cache(self):
|
||||
"""Verify new model is registered in both cache and database."""
|
||||
|
||||
async def test_delete_model_cleans_up_files_and_cache(self):
|
||||
"""Verify deletion removes files and updates all indexes."""
|
||||
|
||||
async def test_update_model_metadata_propagates_changes(self):
|
||||
"""Verify metadata updates reach all subscribers."""
|
||||
```
|
||||
|
||||
**Priority 2: `py/services/persistent_recipe_cache.py`**
|
||||
```python
|
||||
# tests/services/test_persistent_recipe_cache.py
|
||||
class TestPersistentRecipeCache:
|
||||
def test_initialization_creates_schema(self):
|
||||
"""Verify SQLite schema is created on first use."""
|
||||
|
||||
async def test_save_recipe_persists_to_sqlite(self):
|
||||
"""Verify recipe data is saved correctly."""
|
||||
|
||||
async def test_concurrent_access_does_not_corrupt_database(self):
|
||||
"""Verify thread safety under concurrent writes."""
|
||||
```
|
||||
|
||||
**Priority 3: Route Handler Tests**
|
||||
- `py/routes/handlers/preview_handlers.py`
|
||||
- `py/routes/handlers/misc_handlers.py`
|
||||
- `py/routes/handlers/model_handlers.py`
|
||||
|
||||
### 2.2 Add End-to-End Integration Tests
|
||||
|
||||
**Download Flow Integration Test:**
|
||||
```python
|
||||
# tests/integration/test_download_flow.py
|
||||
@pytest.mark.integration
|
||||
@pytest.mark.asyncio
|
||||
async def test_complete_download_flow(tmp_path, test_server):
|
||||
"""
|
||||
Integration test covering:
|
||||
1. Route receives download request
|
||||
2. DownloadCoordinator schedules it
|
||||
3. DownloadManager executes actual download
|
||||
4. Downloader makes HTTP request (to test server)
|
||||
5. Progress is broadcast via WebSocket
|
||||
6. File is saved and cache updated
|
||||
"""
|
||||
# Setup test server with known file
|
||||
test_file = tmp_path / "test_model.safetensors"
|
||||
test_file.write_bytes(b"fake model data")
|
||||
|
||||
# Start download
|
||||
async with aiohttp.ClientSession() as session:
|
||||
response = await session.post(
|
||||
"http://localhost:8188/api/lm/download",
|
||||
json={"urls": [f"http://localhost:{test_server.port}/test_model.safetensors"]}
|
||||
)
|
||||
assert response.status == 200
|
||||
|
||||
# Verify file downloaded
|
||||
downloaded = tmp_path / "downloads" / "test_model.safetensors"
|
||||
assert downloaded.exists()
|
||||
assert downloaded.read_bytes() == b"fake model data"
|
||||
|
||||
# Verify WebSocket progress updates
|
||||
assert len(ws_manager.broadcasts) > 0
|
||||
assert any(b["status"] == "completed" for b in ws_manager.broadcasts)
|
||||
```
|
||||
|
||||
**Recipe Flow Integration Test:**
|
||||
```python
|
||||
# tests/integration/test_recipe_flow.py
|
||||
@pytest.mark.integration
|
||||
@pytest.mark.asyncio
|
||||
async def test_recipe_analysis_and_save_flow(tmp_path):
|
||||
"""
|
||||
Integration test covering:
|
||||
1. Import recipe from image
|
||||
2. Parse metadata and extract models
|
||||
3. Save to cache and database
|
||||
4. Retrieve and display
|
||||
"""
|
||||
```
|
||||
|
||||
### 2.3 Strengthen Assertions
|
||||
|
||||
**Replace loose assertions:**
|
||||
```python
|
||||
# BEFORE
|
||||
assert "mismatch" in message.lower()
|
||||
|
||||
# AFTER
|
||||
assert message == "File size mismatch. Expected: 1000 bytes, Got: 500 bytes"
|
||||
assert not target_path.exists()
|
||||
assert not Path(str(target_path) + ".part").exists()
|
||||
assert len(downloader.retry_history) == 3
|
||||
```
|
||||
|
||||
**Add state verification:**
|
||||
```python
|
||||
# BEFORE
|
||||
assert result is True
|
||||
|
||||
# AFTER
|
||||
assert result is True
|
||||
assert model["status"] == "downloaded"
|
||||
assert model["file_path"].exists()
|
||||
assert cache.get_by_hash(model["sha256"]) is not None
|
||||
assert len(ws_manager.payloads) >= 2 # Started + completed
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 4 Completion Summary (2026-02-11)
|
||||
|
||||
### Completed Items
|
||||
|
||||
1. **Property-Based Tests (Hypothesis)** ✅
|
||||
- Created `tests/utils/test_utils_hypothesis.py` with 19 property-based tests
|
||||
- Tests cover:
|
||||
- `sanitize_folder_name` idempotency and invalid character handling (4 tests)
|
||||
- `_sanitize_library_name` idempotency and safe character filtering (2 tests)
|
||||
- `normalize_path` idempotency and forward slash usage (2 tests)
|
||||
- `fuzzy_match` edge cases and threshold behavior (3 tests)
|
||||
- `determine_base_model` return type guarantees (2 tests)
|
||||
- `get_preview_extension` return type validation (2 tests)
|
||||
- `calculate_recipe_fingerprint` determinism and ordering (4 tests)
|
||||
- Fixed Hypothesis plugin compatibility issue by creating a `MockModule` class in `conftest.py` that is hashable (unlike `types.SimpleNamespace`); a sketch of the idea follows this list
|
||||
|
||||
2. **Snapshot Tests (Syrupy)** ✅
|
||||
- Created `tests/routes/test_api_snapshots.py` with 7 snapshot tests
|
||||
- Tests cover:
|
||||
- SettingsHandler response formats (2 tests)
|
||||
- NodeRegistryHandler response formats (2 tests)
|
||||
- Utility function output verification (2 tests)
|
||||
- ModelLibraryHandler empty response format (1 test)
|
||||
- All snapshots generated and tests passing (7/7)
|
||||
|
||||
3. **Performance Benchmarks** ✅
|
||||
- Created `tests/performance/test_cache_performance.py` with 11 benchmark tests
|
||||
- Tests cover:
|
||||
- Hash index lookup performance (100, 1K, 10K models) - 3 tests
|
||||
- Hash index add entry performance (100, 10K existing) - 2 tests
|
||||
- Fuzzy matching performance (short text, long text, many words) - 3 tests
|
||||
- Recipe fingerprint calculation (5, 50, 200 LoRAs) - 3 tests
|
||||
- All benchmarks passing with performance metrics (11/11)
|
||||
|
||||
4. **Package Dependencies** ✅
|
||||
- Added `hypothesis>=6.0` to `requirements-dev.txt`
|
||||
- Added `syrupy>=5.0` to `requirements-dev.txt`
|
||||
- Added `pytest-benchmark>=5.0` to `requirements-dev.txt`
|
||||
|
||||
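For context, a minimal sketch of a hashable module stand-in of that kind; it illustrates the idea and is not the project's actual `MockModule`:

```python
class MockModule:
    """Attribute bag that stands in for an unavailable module.

    A plain class instance keeps the default identity-based __hash__, so it is
    hashable; types.SimpleNamespace defines __eq__ without __hash__ and is not,
    which is what tripped up the Hypothesis plugin machinery.
    """

    def __init__(self, **attrs):
        self.__dict__.update(attrs)

    def __getattr__(self, name):
        raise AttributeError(f"mocked module has no attribute {name!r}")
```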
### Test Results
|
||||
- **Property-Based Tests:** 19/19 passing
|
||||
- **Snapshot Tests:** 7/7 passing
|
||||
- **Performance Benchmarks:** 11/11 passing
|
||||
- **Total New Tests Added:** 37 tests
|
||||
- **Full Test Suite:** 947/947 passing
|
||||
|
||||
---
|
||||
|
||||
## Phase 3 Completion Summary (2026-02-11)
|
||||
|
||||
### Completed Items
|
||||
|
||||
1. **Centralized Test Fixtures** ✅
|
||||
- Added `mock_downloader` fixture to `tests/conftest.py`
|
||||
- Configurable mock with `should_fail` and `return_value` attributes
|
||||
- Records all download calls for verification
|
||||
- Added `mock_websocket_manager` fixture to `tests/conftest.py`
|
||||
- Recording WebSocket manager that captures all broadcast payloads
|
||||
- Includes helper method `get_payloads_by_type()` for filtering
|
||||
- Added `reset_singletons` autouse fixture to `tests/conftest.py`
|
||||
- Resets DownloadManager, ServiceRegistry, ModelScanner, and SettingsManager
|
||||
- Ensures test isolation and prevents singleton pollution
|
||||
|
||||
2. **Split Large Test Files** ✅
|
||||
- Split `tests/services/test_download_manager.py` (1422 lines) into:
|
||||
- `test_download_manager_basic.py` - Core functionality (12 tests)
|
||||
- `test_download_manager_error.py` - Error handling and execution (15 tests)
|
||||
- `test_download_manager_concurrent.py` - Advanced scenarios (6 tests)
|
||||
- Split `tests/utils/test_cache_paths.py` (530 lines) into:
|
||||
- `test_cache_paths_resolution.py` - Path resolution and CacheType tests (11 tests)
|
||||
- `test_cache_paths_validation.py` - Legacy path validation and cleanup (9 tests)
|
||||
- `test_cache_paths_migration.py` - Migration scenarios and auto-cleanup (9 tests)
|
||||
|
||||
3. **Complex Test Refactoring** ✅
|
||||
- Reviewed `test_example_images_download_manager_unit.py`
|
||||
- Existing async event-based patterns are appropriate for testing concurrent behavior
|
||||
- No refactoring needed - tests follow consistent patterns and are maintainable
|
||||
|
||||
### Test Results
|
||||
- **Download Manager Tests:** 33/33 passing across 3 files
|
||||
- **Cache Paths Tests:** 29/29 passing across 3 files
|
||||
- **Total Tests Maintained:** All existing tests preserved and organized
|
||||
|
||||
---
|
||||
|
||||
## Phase 3: Architecture & Maintainability (P2) - Week 5-6
|
||||
|
||||
### 3.1 Centralize Test Fixtures
|
||||
|
||||
**Create `tests/conftest.py` improvements:**
|
||||
|
||||
```python
|
||||
# tests/conftest.py additions
|
||||
|
||||
@pytest.fixture
|
||||
def mock_downloader():
|
||||
"""Provide a configurable mock downloader."""
|
||||
class MockDownloader:
|
||||
def __init__(self):
|
||||
self.download_calls = []
|
||||
self.should_fail = False
|
||||
|
||||
async def download_file(self, url, target_path, **kwargs):
|
||||
self.download_calls.append({"url": url, "target_path": target_path})
|
||||
if self.should_fail:
|
||||
return False, "Download failed"
|
||||
return True, str(target_path)
|
||||
|
||||
return MockDownloader()
|
||||
|
||||
@pytest.fixture
|
||||
def mock_websocket_manager():
|
||||
"""Provide a recording WebSocket manager."""
|
||||
class RecordingWebSocketManager:
|
||||
def __init__(self):
|
||||
self.payloads = []
|
||||
|
||||
async def broadcast(self, payload):
|
||||
self.payloads.append(payload)
|
||||
|
||||
return RecordingWebSocketManager()
|
||||
|
||||
@pytest.fixture
|
||||
def mock_scanner():
|
||||
"""Provide a mock model scanner with configurable cache."""
|
||||
# ... existing MockScanner but improved ...
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def reset_singletons():
|
||||
"""Reset all singletons before each test."""
|
||||
# Centralized singleton reset
|
||||
DownloadManager._instance = None
|
||||
ServiceRegistry.clear_services()
|
||||
ModelScanner._instances.clear()
|
||||
yield
|
||||
# Cleanup
|
||||
DownloadManager._instance = None
|
||||
ServiceRegistry.clear_services()
|
||||
ModelScanner._instances.clear()
|
||||
```
|
||||
|
||||
### 3.2 Split Large Test Files
|
||||
|
||||
**Target Files:**
|
||||
- `tests/services/test_download_manager.py` (1000+ lines) → Split into:
|
||||
- `test_download_manager_basic.py` - Core functionality
|
||||
- `test_download_manager_error.py` - Error handling
|
||||
- `test_download_manager_concurrent.py` - Concurrent operations
|
||||
|
||||
- `tests/utils/test_cache_paths.py` (529 lines) → Split into:
|
||||
- `test_cache_paths_resolution.py`
|
||||
- `test_cache_paths_validation.py`
|
||||
- `test_cache_paths_migration.py`
|
||||
|
||||
### 3.3 Refactor Complex Tests
|
||||
|
||||
**Example: Simplify test setup in `test_example_images_download_manager_unit.py`**
|
||||
|
||||
**Current (Complex):**
|
||||
```python
|
||||
async def test_start_download_bootstraps_progress_and_task(
|
||||
monkeypatch: pytest.MonkeyPatch, tmp_path
|
||||
):
|
||||
# 40+ lines of setup
|
||||
started = asyncio.Event()
|
||||
release = asyncio.Event()
|
||||
|
||||
async def fake_download(self, ...):
|
||||
started.set()
|
||||
await release.wait()
|
||||
# ... more logic ...
|
||||
```
|
||||
|
||||
**Improved (Using fixtures):**
|
||||
```python
|
||||
async def test_start_download_bootstraps_progress_and_task(
|
||||
download_manager_with_fake_backend, release_event
|
||||
):
|
||||
# Setup in fixtures, test is clean
|
||||
manager = download_manager_with_fake_backend
|
||||
result = await manager.start_download({"model_types": ["lora"]})
|
||||
assert result["success"] is True
|
||||
assert manager._is_downloading is True
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Phase 4: Advanced Testing (P3) - Week 7-8
|
||||
|
||||
### 4.1 Add Property-Based Tests (Hypothesis)
|
||||
|
||||
**Install:** `pip install hypothesis`
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
# tests/utils/test_hash_utils_hypothesis.py
|
||||
from hypothesis import given, strategies as st
|
||||
|
||||
@given(st.text(min_size=1, max_size=100))
|
||||
def test_hash_normalization_idempotent(name):
|
||||
"""Hash normalization should be idempotent."""
|
||||
normalized = normalize_hash(name)
|
||||
assert normalize_hash(normalized) == normalized
|
||||
|
||||
@given(st.lists(st.dictionaries(st.text(), st.text()), min_size=0, max_size=1000))
|
||||
def test_model_cache_handles_any_model_list(models):
|
||||
"""Cache should handle any list of models without crashing."""
|
||||
cache = ModelCache()
|
||||
cache.raw_data = models
|
||||
# Should not raise
|
||||
list(cache.iter_models())
|
||||
```
|
||||
|
||||
### 4.2 Add Snapshot Tests (Syrupy)
|
||||
|
||||
**Install:** `pip install syrupy`
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
# tests/routes/test_api_snapshots.py
|
||||
import pytest
|
||||
|
||||
@pytest.mark.asyncio
|
||||
async def test_lora_list_response_format(snapshot, client):
|
||||
"""Verify API response format matches snapshot."""
|
||||
response = await client.get("/api/lm/loras")
|
||||
data = await response.json()
|
||||
assert data == snapshot # Syrupy handles this
|
||||
```
|
||||
|
||||
### 4.3 Add Performance Benchmarks
|
||||
|
||||
**Install:** `pip install pytest-benchmark`
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
# tests/performance/test_cache_performance.py
|
||||
import pytest
|
||||
|
||||
def test_cache_lookup_performance(benchmark):
|
||||
"""Benchmark cache lookup with 10,000 models."""
|
||||
cache = create_cache_with_n_models(10000)
|
||||
|
||||
result = benchmark(lambda: cache.get_by_hash("abc123"))
|
||||
# Benchmark automatically collects timing stats
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Implementation Checklist
|
||||
|
||||
### Week 1-2: Critical Fixes
|
||||
- [x] Fix over-mocking in `test_download_manager.py` (Skipped - requires major refactoring, see Phase 2)
|
||||
- [x] Add network timeout tests (Added `test_downloader_error_paths.py` with 19 error path tests)
|
||||
- [x] Add disk full error tests (Covered in error path tests)
|
||||
- [x] Add permission denied tests (Covered in error path tests)
|
||||
- [x] Install and configure pytest-asyncio (Added to requirements-dev.txt and pytest.ini)
|
||||
- [x] Remove custom pytest_pyfunc_call handler (Removed from conftest.py)
|
||||
- [x] Add `@pytest.mark.asyncio` to all async tests (Added to 21 async test functions in test_download_manager.py)
|
||||
|
||||
### Week 3-4: Integration & Coverage
|
||||
- [x] Create `test_model_lifecycle_service.py` tests (12 new tests added)
|
||||
- [x] Create `test_persistent_recipe_cache.py` tests (5 new concurrent access tests added)
|
||||
- [x] Create `tests/integration/` directory (created with conftest.py)
|
||||
- [x] Add download flow integration test (7 tests added)
|
||||
- [x] Add recipe flow integration test (9 tests added)
|
||||
- [x] Add route handler tests for preview_handlers.py (already exists in test_preview_routes.py)
|
||||
- [x] Strengthen assertions across integration tests (comprehensive assertions added)
|
||||
|
||||
### Week 5-6: Architecture
|
||||
- [x] Add centralized fixtures to conftest.py
|
||||
- [x] Split `test_download_manager.py` into 3 files
|
||||
- [x] Split `test_cache_paths.py` into 3 files
|
||||
- [x] Refactor complex test setups (reviewed - no changes needed)
|
||||
- [x] Remove duplicate singleton reset fixtures (consolidated in conftest.py)
|
||||
|
||||
### Week 7-8: Advanced Testing
|
||||
- [x] Install hypothesis (Added to requirements-dev.txt)
|
||||
- [x] Add 10 property-based tests (Created 19 tests in test_utils_hypothesis.py)
|
||||
- [x] Install syrupy (Added to requirements-dev.txt)
|
||||
- [x] Add 5 snapshot tests (Created 7 tests in test_api_snapshots.py)
|
||||
- [x] Install pytest-benchmark (Added to requirements-dev.txt)
|
||||
- [x] Add 3 performance benchmarks (Created 11 tests in test_cache_performance.py)
|
||||
|
||||
---
|
||||
|
||||
## Success Metrics
|
||||
|
||||
### Quantitative
|
||||
- **Code Coverage:** Increase from ~70% to >90%
|
||||
- **Test Count:** Increase from 400+ to 600+
|
||||
- **Assertion Strength:** Replace 50+ weak assertions
|
||||
- **Integration Test Ratio:** Increase from 5% to 20%
|
||||
|
||||
### Qualitative
|
||||
- **Bug Escape Rate:** Reduce by 80%
|
||||
- **Test Maintenance Time:** Reduce by 50%
|
||||
- **Time to Write New Tests:** Reduce by 30%
|
||||
- **CI Pipeline Speed:** Maintain <5 minutes
|
||||
|
||||
---
|
||||
|
||||
## Risk Mitigation
|
||||
|
||||
| Risk | Mitigation |
|
||||
|------|------------|
|
||||
| Breaking existing tests | Run full test suite after each change |
|
||||
| Increased CI time | Optimize tests, parallelize execution |
|
||||
| Developer resistance | Provide training, pair programming |
|
||||
| Maintenance burden | Document patterns, provide templates |
|
||||
| Coverage gaps | Use coverage.py in CI, fail on <90% |
|
||||
|
||||
---
|
||||
|
||||
## Related Documents
|
||||
|
||||
- `docs/testing/frontend-testing-roadmap.md` - Frontend testing plan
|
||||
- `docs/AGENTS.md` - Development guidelines
|
||||
- `pytest.ini` - Test configuration
|
||||
- `tests/conftest.py` - Shared fixtures
|
||||
|
||||
---
|
||||
|
||||
## Approval
|
||||
|
||||
| Role | Name | Date | Signature |
|
||||
|------|------|------|-----------|
|
||||
| Tech Lead | | | |
|
||||
| QA Lead | | | |
|
||||
| Product Owner | | | |
|
||||
|
||||
---
|
||||
|
||||
**Next Review Date:** 2026-02-25
|
||||
|
||||
**Document Owner:** Backend Team
|
||||
@@ -131,8 +131,7 @@
|
||||
},
|
||||
"badges": {
|
||||
"update": "Update",
|
||||
"updateAvailable": "Update verfügbar",
|
||||
"skipRefresh": "Metadaten-Aktualisierung übersprungen"
|
||||
"updateAvailable": "Update verfügbar"
|
||||
},
|
||||
"usage": {
|
||||
"timesUsed": "Verwendungsanzahl"
|
||||
@@ -180,6 +179,7 @@
|
||||
"recipes": "Rezepte",
|
||||
"checkpoints": "Checkpoints",
|
||||
"embeddings": "Embeddings",
|
||||
"misc": "[TODO: Translate] Misc",
|
||||
"statistics": "Statistiken"
|
||||
},
|
||||
"search": {
|
||||
@@ -188,7 +188,8 @@
|
||||
"loras": "LoRAs suchen...",
|
||||
"recipes": "Rezepte suchen...",
|
||||
"checkpoints": "Checkpoints suchen...",
|
||||
"embeddings": "Embeddings suchen..."
|
||||
"embeddings": "Embeddings suchen...",
|
||||
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
|
||||
},
|
||||
"options": "Suchoptionen",
|
||||
"searchIn": "Suchen in:",
|
||||
@@ -224,11 +225,7 @@
|
||||
"noCreditRequired": "Kein Credit erforderlich",
|
||||
"allowSellingGeneratedContent": "Verkauf erlaubt",
|
||||
"noTags": "Keine Tags",
|
||||
"clearAll": "Alle Filter löschen",
|
||||
"any": "Beliebig",
|
||||
"all": "Alle",
|
||||
"tagLogicAny": "Jedes Tag abgleichen (ODER)",
|
||||
"tagLogicAll": "Alle Tags abgleichen (UND)"
|
||||
"clearAll": "Alle Filter löschen"
|
||||
},
|
||||
"theme": {
|
||||
"toggle": "Theme wechseln",
|
||||
@@ -292,15 +289,6 @@
|
||||
"saveFailed": "Fehler beim Speichern der Ausschlüsse: {message}"
|
||||
}
|
||||
},
|
||||
"metadataRefreshSkipPaths": {
|
||||
"label": "Metadaten-Aktualisierung: Übersprungene Pfade",
|
||||
"placeholder": "Beispiel: temp, archived/old, test_models",
|
||||
"help": "Modelle in diesen Verzeichnispfaden bei der Massenaktualisierung der Metadaten (\"Alle Metadaten abrufen\") überspringen. Geben Sie Ordnerpfade relativ zum Modell-Stammverzeichnis ein, getrennt durch Kommas.",
|
||||
"validation": {
|
||||
"noPaths": "Geben Sie mindestens einen durch Kommas getrennten Pfad ein.",
|
||||
"saveFailed": "Übersprungene Pfade konnten nicht gespeichert werden: {message}"
|
||||
}
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "Anzeige-Dichte",
|
||||
"displayDensityOptions": {
|
||||
@@ -426,10 +414,6 @@
|
||||
"any": "Jede verfügbare Aktualisierung markieren"
|
||||
}
|
||||
},
|
||||
"hideEarlyAccessUpdates": {
|
||||
"label": "Früher Zugriff Updates ausblenden",
|
||||
"help": "Nur Early-Access-Updates"
|
||||
},
|
||||
"misc": {
|
||||
"includeTriggerWords": "Trigger Words in LoRA-Syntax einschließen",
|
||||
"includeTriggerWordsHelp": "Trainierte Trigger Words beim Kopieren der LoRA-Syntax in die Zwischenablage einschließen"
|
||||
@@ -541,12 +525,8 @@
|
||||
"checkUpdates": "Auswahl auf Updates prüfen",
|
||||
"moveAll": "Alle in Ordner verschieben",
|
||||
"autoOrganize": "Automatisch organisieren",
|
||||
"skipMetadataRefresh": "Metadaten-Aktualisierung für ausgewählte Modelle überspringen",
|
||||
"resumeMetadataRefresh": "Metadaten-Aktualisierung für ausgewählte Modelle fortsetzen",
|
||||
"deleteAll": "Alle Modelle löschen",
|
||||
"clear": "Auswahl löschen",
|
||||
"skipMetadataRefreshCount": "Überspringen({count} Modelle)",
|
||||
"resumeMetadataRefreshCount": "Fortsetzen({count} Modelle)",
|
||||
"autoOrganizeProgress": {
|
||||
"initializing": "Automatische Organisation wird initialisiert...",
|
||||
"starting": "Automatische Organisation für {type} wird gestartet...",
|
||||
@@ -710,6 +690,16 @@
|
||||
"embeddings": {
|
||||
"title": "Embedding-Modelle"
|
||||
},
|
||||
"misc": {
|
||||
"title": "[TODO: Translate] VAE & Upscaler Models",
|
||||
"modelTypes": {
|
||||
"vae": "[TODO: Translate] VAE",
|
||||
"upscaler": "[TODO: Translate] Upscaler"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
|
||||
}
|
||||
},
|
||||
"sidebar": {
|
||||
"modelRoot": "Stammverzeichnis",
|
||||
"collapseAll": "Alle Ordner einklappen",
|
||||
@@ -1035,19 +1025,12 @@
|
||||
},
|
||||
"labels": {
|
||||
"unnamed": "Unbenannte Version",
|
||||
"noDetails": "Keine zusätzlichen Details",
|
||||
"earlyAccess": "EA"
|
||||
},
|
||||
"eaTime": {
|
||||
"endingSoon": "bald endend",
|
||||
"hours": "in {count}h",
|
||||
"days": "in {count}d"
|
||||
"noDetails": "Keine zusätzlichen Details"
|
||||
},
|
||||
"badges": {
|
||||
"current": "Aktuelle Version",
|
||||
"inLibrary": "In der Bibliothek",
|
||||
"newer": "Neuere Version",
|
||||
"earlyAccess": "Früher Zugriff",
|
||||
"ignored": "Ignoriert"
|
||||
},
|
||||
"actions": {
|
||||
@@ -1055,7 +1038,6 @@
|
||||
"delete": "Löschen",
|
||||
"ignore": "Ignorieren",
|
||||
"unignore": "Ignorierung aufheben",
|
||||
"earlyAccessTooltip": "Erfordert Early-Access-Kauf",
|
||||
"resumeModelUpdates": "Aktualisierungen für dieses Modell fortsetzen",
|
||||
"ignoreModelUpdates": "Aktualisierungen für dieses Modell ignorieren",
|
||||
"viewLocalVersions": "Alle lokalen Versionen anzeigen",
|
||||
@@ -1134,6 +1116,10 @@
|
||||
"title": "Statistiken werden initialisiert",
|
||||
"message": "Modelldaten für Statistiken werden verarbeitet. Dies kann einige Minuten dauern..."
|
||||
},
|
||||
"misc": {
|
||||
"title": "[TODO: Translate] Initializing Misc Model Manager",
|
||||
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
|
||||
},
|
||||
"tips": {
|
||||
"title": "Tipps & Tricks",
|
||||
"civitai": {
|
||||
@@ -1193,12 +1179,18 @@
|
||||
"recipeAdded": "Rezept zum Workflow hinzugefügt",
|
||||
"recipeReplaced": "Rezept im Workflow ersetzt",
|
||||
"recipeFailedToSend": "Fehler beim Senden des Rezepts an den Workflow",
|
||||
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
|
||||
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
|
||||
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
|
||||
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
|
||||
"noMatchingNodes": "Keine kompatiblen Knoten im aktuellen Workflow verfügbar",
|
||||
"noTargetNodeSelected": "Kein Zielknoten ausgewählt"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "Rezept",
|
||||
"lora": "LoRA",
|
||||
"vae": "[TODO: Translate] VAE",
|
||||
"upscaler": "[TODO: Translate] Upscaler",
|
||||
"replace": "Ersetzen",
|
||||
"append": "Anhängen",
|
||||
"selectTargetNode": "Zielknoten auswählen",
|
||||
@@ -1405,11 +1397,6 @@
|
||||
"bulkBaseModelUpdateSuccess": "Basis-Modell erfolgreich für {count} Modell(e) aktualisiert",
|
||||
"bulkBaseModelUpdatePartial": "{success} Modelle aktualisiert, {failed} fehlgeschlagen",
|
||||
"bulkBaseModelUpdateFailed": "Aktualisierung des Basis-Modells für ausgewählte Modelle fehlgeschlagen",
|
||||
"skipMetadataRefreshUpdating": "Aktualisiere Metadaten-Aktualisierungs-Flag für {count} Modell(e)...",
|
||||
"skipMetadataRefreshSet": "Metadaten-Aktualisierung für {count} Modell(e) übersprungen",
|
||||
"skipMetadataRefreshCleared": "Metadaten-Aktualisierung für {count} Modell(e) fortgesetzt",
|
||||
"skipMetadataRefreshPartial": "{success} Modell(e) aktualisiert, {failed} fehlgeschlagen",
|
||||
"skipMetadataRefreshFailed": "Fehler beim Aktualisieren des Metadaten-Aktualisierungs-Flags für ausgewählte Modelle",
|
||||
"bulkContentRatingUpdating": "Inhaltsbewertung wird für {count} Modell(e) aktualisiert...",
|
||||
"bulkContentRatingSet": "Inhaltsbewertung auf {level} für {count} Modell(e) gesetzt",
|
||||
"bulkContentRatingPartial": "Inhaltsbewertung auf {level} für {success} Modell(e) gesetzt, {failed} fehlgeschlagen",
|
||||
@@ -1497,7 +1484,6 @@
|
||||
"folderTreeFailed": "Fehler beim Laden des Ordnerbaums",
|
||||
"folderTreeError": "Fehler beim Laden des Ordnerbaums",
|
||||
"imagesImported": "Beispielbilder erfolgreich importiert",
|
||||
"imagesPartial": "{success} Bild(er) importiert, {failed} fehlgeschlagen",
|
||||
"importFailed": "Fehler beim Importieren der Beispielbilder: {message}"
|
||||
},
|
||||
"triggerWords": {
|
||||
@@ -1608,20 +1594,6 @@
|
||||
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
|
||||
"supportCta": "Support on Ko-fi",
|
||||
"learnMore": "LM Civitai Extension Tutorial"
|
||||
},
|
||||
"cacheHealth": {
|
||||
"corrupted": {
|
||||
"title": "Cache-Korruption erkannt"
|
||||
},
|
||||
"degraded": {
|
||||
"title": "Cache-Probleme erkannt"
|
||||
},
|
||||
"content": "{invalid} von {total} Cache-Einträgen sind ungültig ({rate}). Dies kann zu fehlenden Modellen oder Fehlern führen. Ein Neuaufbau des Caches wird empfohlen.",
|
||||
"rebuildCache": "Cache neu aufbauen",
|
||||
"dismiss": "Verwerfen",
|
||||
"rebuilding": "Cache wird neu aufgebaut...",
|
||||
"rebuildFailed": "Fehler beim Neuaufbau des Caches: {error}",
|
||||
"retry": "Wiederholen"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -131,8 +131,7 @@
|
||||
},
|
||||
"badges": {
|
||||
"update": "Update",
|
||||
"updateAvailable": "Update available",
|
||||
"skipRefresh": "Metadata refresh skipped"
|
||||
"updateAvailable": "Update available"
|
||||
},
|
||||
"usage": {
|
||||
"timesUsed": "Times used"
|
||||
@@ -180,6 +179,7 @@
|
||||
"recipes": "Recipes",
|
||||
"checkpoints": "Checkpoints",
|
||||
"embeddings": "Embeddings",
|
||||
"misc": "Misc",
|
||||
"statistics": "Stats"
|
||||
},
|
||||
"search": {
|
||||
@@ -188,7 +188,8 @@
|
||||
"loras": "Search LoRAs...",
|
||||
"recipes": "Search recipes...",
|
||||
"checkpoints": "Search checkpoints...",
|
||||
"embeddings": "Search embeddings..."
|
||||
"embeddings": "Search embeddings...",
|
||||
"misc": "Search VAE/Upscaler models..."
|
||||
},
|
||||
"options": "Search Options",
|
||||
"searchIn": "Search In:",
|
||||
@@ -224,11 +225,7 @@
|
||||
"noCreditRequired": "No Credit Required",
|
||||
"allowSellingGeneratedContent": "Allow Selling",
|
||||
"noTags": "No tags",
|
||||
"clearAll": "Clear All Filters",
|
||||
"any": "Any",
|
||||
"all": "All",
|
||||
"tagLogicAny": "Match any tag (OR)",
|
||||
"tagLogicAll": "Match all tags (AND)"
|
||||
"clearAll": "Clear All Filters"
|
||||
},
|
||||
"theme": {
|
||||
"toggle": "Toggle theme",
|
||||
@@ -292,15 +289,6 @@
|
||||
"saveFailed": "Unable to save exclusions: {message}"
|
||||
}
|
||||
},
|
||||
"metadataRefreshSkipPaths": {
|
||||
"label": "Metadata refresh skip paths",
|
||||
"placeholder": "Example: temp, archived/old, test_models",
|
"help": "Skip models in these directory paths during bulk metadata refresh (\"Fetch All Metadata\"). Enter folder paths relative to your model root directory, separated by commas.",
"validation": {
"noPaths": "Enter at least one path separated by commas.",
"saveFailed": "Unable to save skip paths: {message}"
}
},
"layoutSettings": {
"displayDensity": "Display Density",
"displayDensityOptions": {
@@ -426,10 +414,6 @@
"any": "Flag any available update"
}
},
"hideEarlyAccessUpdates": {
"label": "Hide Early Access Updates",
"help": "When enabled, models with only early access updates will not show 'Update available' badge"
},
"misc": {
"includeTriggerWords": "Include Trigger Words in LoRA Syntax",
"includeTriggerWordsHelp": "Include trained trigger words when copying LoRA syntax to clipboard"
@@ -541,12 +525,8 @@
"checkUpdates": "Check Updates for Selected",
"moveAll": "Move Selected to Folder",
"autoOrganize": "Auto-Organize Selected",
"skipMetadataRefresh": "Skip Metadata Refresh for Selected",
"resumeMetadataRefresh": "Resume Metadata Refresh for Selected",
"deleteAll": "Delete Selected Models",
"clear": "Clear Selection",
"skipMetadataRefreshCount": "Skip ({count} models)",
"resumeMetadataRefreshCount": "Resume ({count} models)",
"autoOrganizeProgress": {
"initializing": "Initializing auto-organize...",
"starting": "Starting auto-organize for {type}...",
@@ -710,6 +690,16 @@
"embeddings": {
"title": "Embedding Models"
},
"misc": {
"title": "VAE & Upscaler Models",
"modelTypes": {
"vae": "VAE",
"upscaler": "Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Root",
"collapseAll": "Collapse All Folders",
@@ -1035,19 +1025,12 @@
},
"labels": {
"unnamed": "Untitled Version",
"noDetails": "No additional details",
"earlyAccess": "EA"
},
"eaTime": {
"endingSoon": "ending soon",
"hours": "in {count}h",
"days": "in {count}d"
"noDetails": "No additional details"
},
"badges": {
"current": "Current Version",
"inLibrary": "In Library",
"newer": "Newer Version",
"earlyAccess": "Early Access",
"ignored": "Ignored"
},
"actions": {
@@ -1055,7 +1038,6 @@
"delete": "Delete",
"ignore": "Ignore",
"unignore": "Unignore",
"earlyAccessTooltip": "Requires early access purchase",
"resumeModelUpdates": "Resume updates for this model",
"ignoreModelUpdates": "Ignore updates for this model",
"viewLocalVersions": "View all local versions",
@@ -1134,6 +1116,10 @@
"title": "Initializing Statistics",
"message": "Processing model data for statistics. This may take a few minutes..."
},
"misc": {
"title": "Initializing Misc Model Manager",
"message": "Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Tips & Tricks",
"civitai": {
@@ -1193,12 +1179,18 @@
"recipeAdded": "Recipe appended to workflow",
"recipeReplaced": "Recipe replaced in workflow",
"recipeFailedToSend": "Failed to send recipe to workflow",
"vaeUpdated": "VAE updated in workflow",
"vaeFailed": "Failed to update VAE in workflow",
"upscalerUpdated": "Upscaler updated in workflow",
"upscalerFailed": "Failed to update upscaler in workflow",
"noMatchingNodes": "No compatible nodes available in the current workflow",
"noTargetNodeSelected": "No target node selected"
},
"nodeSelector": {
"recipe": "Recipe",
"lora": "LoRA",
"vae": "VAE",
"upscaler": "Upscaler",
"replace": "Replace",
"append": "Append",
"selectTargetNode": "Select target node",
@@ -1405,11 +1397,6 @@
"bulkBaseModelUpdateSuccess": "Successfully updated base model for {count} model(s)",
"bulkBaseModelUpdatePartial": "Updated {success} model(s), failed {failed} model(s)",
"bulkBaseModelUpdateFailed": "Failed to update base model for selected models",
"skipMetadataRefreshUpdating": "Updating metadata refresh flag for {count} model(s)...",
"skipMetadataRefreshSet": "Metadata refresh skipped for {count} model(s)",
"skipMetadataRefreshCleared": "Metadata refresh resumed for {count} model(s)",
"skipMetadataRefreshPartial": "Updated {success} model(s), {failed} failed",
"skipMetadataRefreshFailed": "Failed to update metadata refresh flag for selected models",
"bulkContentRatingUpdating": "Updating content rating for {count} model(s)...",
"bulkContentRatingSet": "Set content rating to {level} for {count} model(s)",
"bulkContentRatingPartial": "Set content rating to {level} for {success} model(s), {failed} failed",
@@ -1497,7 +1484,6 @@
"folderTreeFailed": "Failed to load folder tree",
"folderTreeError": "Error loading folder tree",
"imagesImported": "Example images imported successfully",
"imagesPartial": "{success} image(s) imported, {failed} failed",
"importFailed": "Failed to import example images: {message}"
},
"triggerWords": {
@@ -1608,20 +1594,6 @@
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
"supportCta": "Support on Ko-fi",
"learnMore": "LM Civitai Extension Tutorial"
},
"cacheHealth": {
"corrupted": {
"title": "Cache Corruption Detected"
},
"degraded": {
"title": "Cache Issues Detected"
},
"content": "{invalid} of {total} cache entries are invalid ({rate}). This may cause missing models or errors. Rebuilding the cache is recommended.",
"rebuildCache": "Rebuild Cache",
"dismiss": "Dismiss",
"rebuilding": "Rebuilding cache...",
"rebuildFailed": "Failed to rebuild cache: {error}",
"retry": "Retry"
}
}
}
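Across the non-English catalogs below, many of the keys touched by this diff still contain the literal "[TODO: Translate]" marker. A small validation script can flag them before a release; this is a minimal sketch, assuming the catalogs are stored as one JSON file per language in a `locales/` directory (the actual directory name and file layout are not shown in this diff):

```python
#!/usr/bin/env python3
"""Report untranslated "[TODO: Translate]" values in locale JSON catalogs.

Minimal sketch only: the locales/ directory and the per-language *.json
layout are assumptions for illustration, not paths confirmed by this diff.
"""
import json
import sys
from pathlib import Path

TODO_MARKER = "[TODO: Translate]"


def iter_strings(node, prefix=""):
    """Yield (dotted.key, value) for every string in a nested dict."""
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{prefix}.{key}" if prefix else key
            yield from iter_strings(value, child)
    elif isinstance(node, str):
        yield prefix, node


def main(locale_dir="locales"):
    exit_code = 0
    for path in sorted(Path(locale_dir).glob("*.json")):
        data = json.loads(path.read_text(encoding="utf-8"))
        todos = [key for key, value in iter_strings(data) if TODO_MARKER in value]
        if todos:
            exit_code = 1
            print(f"{path.name}: {len(todos)} untranslated key(s)")
            for key in todos:
                print(f"  - {key}")
    return exit_code


if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```

Run against the catalogs in this diff, it would report, for example, the untranslated "misc" (VAE & Upscaler) titles and the workflow VAE/upscaler toasts in every non-English file.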
Spanish locale:
The same set of hunks as in the English file above, with Spanish values. They cover the "skipRefresh" badge ("Actualización de metadatos omitida"), the "metadataRefreshSkipPaths" settings group ("Rutas a omitir en la actualización de metadatos", with placeholder, help, and validation strings), "hideEarlyAccessUpdates" ("Ocultar actualizaciones de acceso temprano"), the bulk skip/resume metadata-refresh actions and their toast messages, the tag-logic filter options, the version-dialog "eaTime" labels ("terminando pronto", "en {count}h", "en {count}d") and ignore/unignore actions, and the cache-health panel ("Corrupción de caché detectada", "Reconstruir caché"). The keys for the new "misc" (VAE & Upscaler) area — page title, model-type labels, context menu, navigation and search entries, loading messages, workflow VAE/upscaler toasts, and node-selector labels — still carry the literal "[TODO: Translate]" marker.
French locale:
The same hunks with French values: the "skipRefresh" badge ("Actualisation des métadonnées ignorée"), the "metadataRefreshSkipPaths" group ("Chemins à ignorer pour l'actualisation des métadonnées"), "hideEarlyAccessUpdates" ("Masquer les mises à jour en accès anticipé"), the bulk skip/resume metadata-refresh actions and toasts, the tag-logic filter options, the "eaTime" labels ("se termine bientôt", "dans {count}h", "dans {count}j") and ignore/unignore actions, and the cache-health panel ("Corruption du cache détectée", "Reconstruire le cache"). The same "misc" (VAE & Upscaler) keys remain "[TODO: Translate]" as in the Spanish file.
Hebrew locale:
The same hunks with Hebrew values: the "skipRefresh" badge ("רענון המטא-נתונים דולג"), the "metadataRefreshSkipPaths" group ("נתיבים לדילוג ברענון מטא-נתונים"), "hideEarlyAccessUpdates" ("הסתר עדכוני גישה מוקדמת"), the bulk skip/resume metadata-refresh actions and toasts, the tag-logic filter options, the "eaTime" labels ("מסתיים בקרוב", "בעוד {count} שעות", "בעוד {count} ימים") and ignore/unignore actions, and the cache-health panel ("זוהתה שחיתות במטמון", "בניית מטמון מחדש"). The "misc" (VAE & Upscaler) keys remain "[TODO: Translate]".
Japanese locale:
The same hunks with Japanese values: the "skipRefresh" badge ("メタデータの更新がスキップされました"), the "metadataRefreshSkipPaths" group ("メタデータ更新スキップパス"), "hideEarlyAccessUpdates" ("早期アクセス更新を非表示"), the bulk skip/resume metadata-refresh actions and toasts, the tag-logic filter options, the "eaTime" labels ("まもなく終了", "{count}時間後", "{count}日後") and ignore/unignore actions, and the cache-health panel ("キャッシュの破損が検出されました", "キャッシュを再構築"). The "misc" (VAE & Upscaler) keys remain "[TODO: Translate]".
Korean locale:
The same hunks with Korean values: the "skipRefresh" badge ("메타데이터 새로고침 건너뜀"), the "metadataRefreshSkipPaths" group ("메타데이터 새로고침 건너뛰기 경로"), "hideEarlyAccessUpdates" ("얼리 액세스 업데이트 숨기기"), the bulk skip/resume metadata-refresh actions and toasts, the tag-logic filter options, the "eaTime" labels ("곧 종료", "{count}시간 후", "{count}일 후") and ignore/unignore actions, and the cache-health panel ("캐시 손상이 감지되었습니다", "캐시 재구축"). The "misc" (VAE & Upscaler) keys remain "[TODO: Translate]".
Russian locale:
The same hunks with Russian values: the "skipRefresh" badge ("Обновление метаданных пропущено"), the "metadataRefreshSkipPaths" group ("Пути для пропуска обновления метаданных"), "hideEarlyAccessUpdates" ("Скрыть обновления раннего доступа"), the bulk skip/resume metadata-refresh actions and toasts, the tag-logic filter options, the "eaTime" labels ("скоро заканчивается", "через {count}ч", "через {count}д") and ignore/unignore actions, and the cache-health panel ("Обнаружено повреждение кэша", "Перестроить кэш"). The "misc" (VAE & Upscaler) keys remain "[TODO: Translate]".
Simplified Chinese locale:
The same hunks with Simplified Chinese values: the "skipRefresh" badge ("元数据刷新已跳过"), the "metadataRefreshSkipPaths" group ("元数据刷新跳过路径"), "hideEarlyAccessUpdates" ("隐藏抢先体验更新"), the bulk skip/resume metadata-refresh actions and toasts, the tag-logic filter options, the "eaTime" labels ("即将结束", "{count}小时后", "{count}天后") and ignore/unignore actions, and the cache-health panel ("检测到缓存损坏", "重建缓存"). The "misc" (VAE & Upscaler) keys remain "[TODO: Translate]". Unlike the other catalogs, its support block is China-specific, pointing to 爱发电 instead of Ko-fi ("supportCta": "为LM发电", "learnMore": "浏览器插件教程").
@@ -131,8 +131,7 @@
|
||||
},
|
||||
"badges": {
|
||||
"update": "更新",
|
||||
"updateAvailable": "有可用更新",
|
||||
"skipRefresh": "元數據更新已跳過"
|
||||
"updateAvailable": "有可用更新"
|
||||
},
|
||||
"usage": {
|
||||
"timesUsed": "使用次數"
|
||||
@@ -180,6 +179,7 @@
|
||||
"recipes": "配方",
|
||||
"checkpoints": "Checkpoint",
|
||||
"embeddings": "Embedding",
|
||||
"misc": "[TODO: Translate] Misc",
|
||||
"statistics": "統計"
|
||||
},
|
||||
"search": {
|
||||
@@ -188,7 +188,8 @@
|
||||
"loras": "搜尋 LoRA...",
|
||||
"recipes": "搜尋配方...",
|
||||
"checkpoints": "搜尋 checkpoint...",
|
||||
"embeddings": "搜尋 embedding..."
|
||||
"embeddings": "搜尋 embedding...",
|
||||
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
|
||||
},
|
||||
"options": "搜尋選項",
|
||||
"searchIn": "搜尋範圍:",
|
||||
@@ -224,11 +225,7 @@
|
||||
"noCreditRequired": "無需署名",
|
||||
"allowSellingGeneratedContent": "允許銷售",
|
||||
"noTags": "無標籤",
|
||||
"clearAll": "清除所有篩選",
|
||||
"any": "任一",
|
||||
"all": "全部",
|
||||
"tagLogicAny": "符合任一票籤 (或)",
|
||||
"tagLogicAll": "符合所有標籤 (與)"
|
||||
"clearAll": "清除所有篩選"
|
||||
},
|
||||
"theme": {
|
||||
"toggle": "切換主題",
|
||||
@@ -292,15 +289,6 @@
|
||||
"saveFailed": "無法儲存排除項目:{message}"
|
||||
}
|
||||
},
|
||||
"metadataRefreshSkipPaths": {
|
||||
"label": "中繼資料重新整理跳過路徑",
|
||||
"placeholder": "範例:temp, archived/old, test_models",
|
||||
"help": "批次重新整理中繼資料(「擷取所有中繼資料」)時跳過這些目錄路徑中的模型。輸入相對於模型根目錄的資料夾路徑,以逗號分隔。",
|
||||
"validation": {
|
||||
"noPaths": "請輸入至少一個路徑,以逗號分隔。",
|
||||
"saveFailed": "無法儲存跳過路徑:{message}"
|
||||
}
|
||||
},
|
||||
"layoutSettings": {
|
||||
"displayDensity": "顯示密度",
|
||||
"displayDensityOptions": {
|
||||
@@ -426,10 +414,6 @@
|
||||
"any": "顯示任何可用更新"
|
||||
}
|
||||
},
|
||||
"hideEarlyAccessUpdates": {
|
||||
"label": "隱藏搶先體驗更新",
|
||||
"help": "搶先體驗更新"
|
||||
},
|
||||
"misc": {
|
||||
"includeTriggerWords": "在 LoRA 語法中包含觸發詞",
|
||||
"includeTriggerWordsHelp": "複製 LoRA 語法到剪貼簿時包含訓練觸發詞"
|
||||
@@ -541,12 +525,8 @@
|
||||
"checkUpdates": "檢查所選更新",
|
||||
"moveAll": "全部移動到資料夾",
|
||||
"autoOrganize": "自動整理所選模型",
|
||||
"skipMetadataRefresh": "跳過所選模型的元數據更新",
|
||||
"resumeMetadataRefresh": "恢復所選模型的元數據更新",
|
||||
"deleteAll": "刪除全部模型",
|
||||
"clear": "清除選取",
|
||||
"skipMetadataRefreshCount": "跳過({count} 個模型)",
|
||||
"resumeMetadataRefreshCount": "恢復({count} 個模型)",
|
||||
"autoOrganizeProgress": {
|
||||
"initializing": "正在初始化自動整理...",
|
||||
"starting": "正在開始自動整理 {type}...",
|
||||
@@ -710,6 +690,16 @@
|
||||
"embeddings": {
|
||||
"title": "Embedding 模型"
|
||||
},
|
||||
"misc": {
|
||||
"title": "[TODO: Translate] VAE & Upscaler Models",
|
||||
"modelTypes": {
|
||||
"vae": "[TODO: Translate] VAE",
|
||||
"upscaler": "[TODO: Translate] Upscaler"
|
||||
},
|
||||
"contextMenu": {
|
||||
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
|
||||
}
|
||||
},
|
||||
"sidebar": {
|
||||
"modelRoot": "根目錄",
|
||||
"collapseAll": "全部摺疊資料夾",
|
||||
@@ -1035,19 +1025,12 @@
|
||||
},
|
||||
"labels": {
|
||||
"unnamed": "未命名版本",
|
||||
"noDetails": "沒有其他資訊",
|
||||
"earlyAccess": "EA"
|
||||
},
|
||||
"eaTime": {
|
||||
"endingSoon": "即將結束",
|
||||
"hours": "{count}小時後",
|
||||
"days": "{count}天後"
|
||||
"noDetails": "沒有其他資訊"
|
||||
},
|
||||
"badges": {
|
||||
"current": "目前版本",
|
||||
"inLibrary": "已在庫中",
|
||||
"newer": "較新版本",
|
||||
"earlyAccess": "搶先體驗",
|
||||
"ignored": "已忽略"
|
||||
},
|
||||
"actions": {
|
||||
@@ -1055,7 +1038,6 @@
|
||||
"delete": "刪除",
|
||||
"ignore": "忽略",
|
||||
"unignore": "取消忽略",
|
||||
"earlyAccessTooltip": "需要購買搶先體驗",
|
||||
"resumeModelUpdates": "恢復追蹤此模型的更新",
|
||||
"ignoreModelUpdates": "忽略此模型的更新",
|
||||
"viewLocalVersions": "檢視所有本地版本",
|
||||
@@ -1134,6 +1116,10 @@
|
||||
"title": "初始化統計",
|
||||
"message": "正在處理模型資料以產生統計,可能需要幾分鐘..."
|
||||
},
|
||||
"misc": {
|
||||
"title": "[TODO: Translate] Initializing Misc Model Manager",
|
||||
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
|
||||
},
|
||||
"tips": {
|
||||
"title": "小技巧",
|
||||
"civitai": {
|
||||
@@ -1193,12 +1179,18 @@
|
||||
"recipeAdded": "配方已附加到工作流",
|
||||
"recipeReplaced": "配方已取代於工作流",
|
||||
"recipeFailedToSend": "傳送配方到工作流失敗",
|
||||
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
|
||||
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
|
||||
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
|
||||
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
|
||||
"noMatchingNodes": "目前工作流程中沒有相容的節點",
|
||||
"noTargetNodeSelected": "未選擇目標節點"
|
||||
},
|
||||
"nodeSelector": {
|
||||
"recipe": "配方",
|
||||
"lora": "LoRA",
|
||||
"vae": "[TODO: Translate] VAE",
|
||||
"upscaler": "[TODO: Translate] Upscaler",
|
||||
"replace": "取代",
|
||||
"append": "附加",
|
||||
"selectTargetNode": "選擇目標節點",
|
||||
@@ -1405,11 +1397,6 @@
|
||||
"bulkBaseModelUpdateSuccess": "已成功為 {count} 個模型更新基礎模型",
|
||||
"bulkBaseModelUpdatePartial": "已更新 {success} 個模型,{failed} 個模型失敗",
|
||||
"bulkBaseModelUpdateFailed": "更新所選模型的基礎模型失敗",
|
||||
"skipMetadataRefreshUpdating": "正在更新 {count} 個模型的元數據更新標記...",
|
||||
"skipMetadataRefreshSet": "已為 {count} 個模型跳過元數據更新",
|
||||
"skipMetadataRefreshCleared": "已為 {count} 個模型恢復元數據更新",
|
||||
"skipMetadataRefreshPartial": "已更新 {success} 個模型,{failed} 個失敗",
|
||||
"skipMetadataRefreshFailed": "無法更新所選模型的元數據更新標記",
|
||||
"bulkContentRatingUpdating": "正在為 {count} 個模型更新內容分級...",
|
||||
"bulkContentRatingSet": "已將 {count} 個模型的內容分級設定為 {level}",
|
||||
"bulkContentRatingPartial": "已將 {success} 個模型的內容分級設定為 {level},{failed} 個失敗",
|
||||
@@ -1497,7 +1484,6 @@
|
||||
"folderTreeFailed": "載入資料夾樹狀結構失敗",
|
||||
"folderTreeError": "載入資料夾樹狀結構錯誤",
|
||||
"imagesImported": "範例圖片匯入成功",
|
||||
"imagesPartial": "成功匯入 {success} 張圖片,{failed} 張失敗",
|
||||
"importFailed": "匯入範例圖片失敗:{message}"
|
||||
},
|
||||
"triggerWords": {
|
||||
@@ -1608,20 +1594,6 @@
|
||||
"content": "LoRA Manager is a passion project maintained full-time by a solo developer. Your support on Ko-fi helps cover development costs, keeps new updates coming, and unlocks a license key for the LM Civitai Extension as a thank-you gift. Every contribution truly makes a difference.",
|
||||
"supportCta": "Support on Ko-fi",
|
||||
"learnMore": "LM Civitai Extension Tutorial"
|
||||
},
|
||||
"cacheHealth": {
|
||||
"corrupted": {
|
||||
"title": "檢測到快取損壞"
|
||||
},
|
||||
"degraded": {
|
||||
"title": "檢測到快取問題"
|
||||
},
|
||||
"content": "{total} 個快取項目中有 {invalid} 個無效({rate})。這可能會導致模型遺失或錯誤。建議重建快取。",
|
||||
"rebuildCache": "重建快取",
|
||||
"dismiss": "關閉",
|
||||
"rebuilding": "重建快取中...",
|
||||
"rebuildFailed": "重建快取失敗:{error}",
|
||||
"retry": "重試"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -4,9 +4,7 @@
|
||||
"private": true,
|
||||
"type": "module",
|
||||
"scripts": {
|
||||
"test": "npm run test:js && npm run test:vue",
|
||||
"test:js": "vitest run",
|
||||
"test:vue": "cd vue-widgets && npx vitest run",
|
||||
"test": "vitest run",
|
||||
"test:watch": "vitest",
|
||||
"test:coverage": "node scripts/run_frontend_coverage.js"
|
||||
},
|
||||
|
||||
259
py/config.py
@@ -89,8 +89,11 @@ class Config:
|
||||
self.checkpoints_roots = None
|
||||
self.unet_roots = None
|
||||
self.embeddings_roots = None
|
||||
self.vae_roots = None
|
||||
self.upscaler_roots = None
|
||||
self.base_models_roots = self._init_checkpoint_paths()
|
||||
self.embeddings_roots = self._init_embedding_paths()
|
||||
self.misc_roots = self._init_misc_paths()
|
||||
# Scan symbolic links during initialization
|
||||
self._initialize_symlink_mappings()
|
||||
|
||||
@@ -151,6 +154,8 @@ class Config:
|
||||
'checkpoints': list(self.checkpoints_roots or []),
|
||||
'unet': list(self.unet_roots or []),
|
||||
'embeddings': list(self.embeddings_roots or []),
|
||||
'vae': list(self.vae_roots or []),
|
||||
'upscale_models': list(self.upscaler_roots or []),
|
||||
}
|
||||
|
||||
normalized_target_paths = _normalize_folder_paths_for_comparison(target_folder_paths)
|
||||
@@ -250,6 +255,7 @@ class Config:
|
||||
roots.extend(self.loras_roots or [])
|
||||
roots.extend(self.base_models_roots or [])
|
||||
roots.extend(self.embeddings_roots or [])
|
||||
roots.extend(self.misc_roots or [])
|
||||
return roots
|
||||
|
||||
def _build_symlink_fingerprint(self) -> Dict[str, object]:
|
||||
@@ -441,53 +447,82 @@ class Config:
|
||||
logger.info("Failed to write symlink cache %s: %s", cache_path, exc)
|
||||
|
||||
def _scan_symbolic_links(self):
|
||||
"""Scan symbolic links in LoRA, Checkpoint, and Embedding root directories.
|
||||
|
||||
Only scans the first level of each root directory to avoid performance
|
||||
issues with large file systems. Detects symlinks and Windows junctions
|
||||
at the root level only (not nested symlinks in subdirectories).
|
||||
"""
|
||||
"""Scan all symbolic links in LoRA, Checkpoint, and Embedding root directories"""
|
||||
start = time.perf_counter()
|
||||
|
||||
# Reset mappings before rescanning to avoid stale entries
|
||||
self._path_mappings.clear()
|
||||
self._seed_root_symlink_mappings()
|
||||
visited_dirs: Set[str] = set()
|
||||
for root in self._symlink_roots():
|
||||
self._scan_first_level_symlinks(root)
|
||||
self._scan_directory_links(root, visited_dirs)
|
||||
logger.debug(
|
||||
"Symlink scan finished in %.2f ms with %d mappings",
|
||||
(time.perf_counter() - start) * 1000,
|
||||
len(self._path_mappings),
|
||||
)
|
||||
|
||||
def _scan_first_level_symlinks(self, root: str):
|
||||
"""Scan only the first level of a directory for symlinks.
|
||||
|
||||
This avoids traversing the entire directory tree which can be extremely
|
||||
slow for large model collections. Only symlinks directly under the root
|
||||
are detected.
|
||||
"""
|
||||
def _scan_directory_links(self, root: str, visited_dirs: Set[str]):
|
||||
"""Iteratively scan directory symlinks to avoid deep recursion."""
|
||||
try:
|
||||
with os.scandir(root) as it:
|
||||
for entry in it:
|
||||
try:
|
||||
# Only detect symlinks including Windows junctions
|
||||
# Skip normal directories to avoid deep traversal
|
||||
if not self._entry_is_symlink(entry):
|
||||
continue
|
||||
# Note: We only use realpath for the initial root if it's not already resolved
|
||||
# to ensure we have a valid entry point.
|
||||
root_real = self._normalize_path(os.path.realpath(root))
|
||||
except OSError:
|
||||
root_real = self._normalize_path(root)
|
||||
|
||||
# Resolve the symlink target
|
||||
target_path = os.path.realpath(entry.path)
|
||||
if not os.path.isdir(target_path):
|
||||
continue
|
||||
if root_real in visited_dirs:
|
||||
return
|
||||
|
||||
self.add_path_mapping(entry.path, target_path)
|
||||
except Exception as inner_exc:
|
||||
logger.debug(
|
||||
"Error processing directory entry %s: %s", entry.path, inner_exc
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Error scanning links in {root}: {e}")
|
||||
visited_dirs.add(root_real)
|
||||
# Stack entries: (display_path, real_resolved_path)
|
||||
stack: List[Tuple[str, str]] = [(root, root_real)]
|
||||
|
||||
while stack:
|
||||
current_display, current_real = stack.pop()
|
||||
try:
|
||||
with os.scandir(current_display) as it:
|
||||
for entry in it:
|
||||
try:
|
||||
# 1. Detect symlinks including Windows junctions
|
||||
is_link = self._entry_is_symlink(entry)
|
||||
|
||||
if is_link:
|
||||
# Only resolve realpath when we actually find a link
|
||||
target_path = os.path.realpath(entry.path)
|
||||
if not os.path.isdir(target_path):
|
||||
continue
|
||||
|
||||
normalized_target = self._normalize_path(target_path)
|
||||
self.add_path_mapping(entry.path, target_path)
|
||||
|
||||
if normalized_target in visited_dirs:
|
||||
continue
|
||||
|
||||
visited_dirs.add(normalized_target)
|
||||
stack.append((target_path, normalized_target))
|
||||
continue
|
||||
|
||||
# 2. Process normal directories
|
||||
if not entry.is_dir(follow_symlinks=False):
|
||||
continue
|
||||
|
||||
# For normal directories, we avoid realpath() call by
|
||||
# incrementally building the real path relative to current_real.
|
||||
# This is safe because 'entry' is NOT a symlink.
|
||||
entry_real = self._normalize_path(os.path.join(current_real, entry.name))
|
||||
|
||||
if entry_real in visited_dirs:
|
||||
continue
|
||||
|
||||
visited_dirs.add(entry_real)
|
||||
stack.append((entry.path, entry_real))
|
||||
except Exception as inner_exc:
|
||||
logger.debug(
|
||||
"Error processing directory entry %s: %s", entry.path, inner_exc
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Error scanning links in {current_display}: {e}")
|
||||
|
||||
|
||||
|
||||
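The rewritten scan above replaces per-root recursion with an explicit stack keyed by resolved real paths, so nested links are followed once and loops are avoided. A minimal standalone sketch of that idea, assuming plain POSIX symlinks (Windows junction handling and the Config path-mapping bookkeeping are omitted):

```python
# Hedged sketch of an iterative directory-link scan; not the project's exact code.
import os
from typing import Dict, List, Set, Tuple

def scan_directory_links(root: str) -> Dict[str, str]:
    """Map each discovered link path to its resolved target without recursion."""
    mappings: Dict[str, str] = {}
    visited: Set[str] = set()
    root_real = os.path.normpath(os.path.realpath(root)).replace(os.sep, "/")
    visited.add(root_real)
    stack: List[Tuple[str, str]] = [(root, root_real)]  # (display path, resolved path)

    while stack:
        display, real = stack.pop()
        try:
            with os.scandir(display) as it:
                for entry in it:
                    if entry.is_symlink():
                        target = os.path.realpath(entry.path)
                        if not os.path.isdir(target):
                            continue
                        norm_target = os.path.normpath(target).replace(os.sep, "/")
                        mappings[entry.path] = target
                        if norm_target not in visited:
                            visited.add(norm_target)
                            stack.append((target, norm_target))
                    elif entry.is_dir(follow_symlinks=False):
                        # Not a link, so the resolved path can be built incrementally.
                        child = os.path.normpath(os.path.join(real, entry.name)).replace(os.sep, "/")
                        if child not in visited:
                            visited.add(child)
                            stack.append((entry.path, child))
        except OSError:
            continue
    return mappings
```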
@@ -570,6 +605,8 @@ class Config:
|
||||
preview_roots.update(self._expand_preview_root(root))
|
||||
for root in self.embeddings_roots or []:
|
||||
preview_roots.update(self._expand_preview_root(root))
|
||||
for root in self.misc_roots or []:
|
||||
preview_roots.update(self._expand_preview_root(root))
|
||||
|
||||
for target, link in self._path_mappings.items():
|
||||
preview_roots.update(self._expand_preview_root(target))
|
||||
@@ -577,11 +614,12 @@ class Config:
|
||||
|
||||
self._preview_root_paths = {path for path in preview_roots if path.is_absolute()}
|
||||
logger.debug(
|
||||
"Preview roots rebuilt: %d paths from %d lora roots, %d checkpoint roots, %d embedding roots, %d symlink mappings",
|
||||
"Preview roots rebuilt: %d paths from %d lora roots, %d checkpoint roots, %d embedding roots, %d misc roots, %d symlink mappings",
|
||||
len(self._preview_root_paths),
|
||||
len(self.loras_roots or []),
|
||||
len(self.base_models_roots or []),
|
||||
len(self.embeddings_roots or []),
|
||||
len(self.misc_roots or []),
|
||||
len(self._path_mappings),
|
||||
)
|
||||
|
||||
@@ -645,23 +683,6 @@ class Config:
|
||||
checkpoint_map = self._dedupe_existing_paths(checkpoint_paths)
|
||||
unet_map = self._dedupe_existing_paths(unet_paths)
|
||||
|
||||
# Detect when checkpoints and unet share the same physical location
|
||||
# This is a configuration issue that can cause duplicate model entries
|
||||
overlapping_real_paths = set(checkpoint_map.keys()) & set(unet_map.keys())
|
||||
if overlapping_real_paths:
|
||||
logger.warning(
|
||||
"Detected overlapping paths between 'checkpoints' and 'diffusion_models' (unet). "
|
||||
"They should not point to the same physical folder as they are different model types. "
|
||||
"Please fix your ComfyUI path configuration to separate these folders. "
|
||||
"Falling back to 'checkpoints' for backward compatibility. "
|
||||
"Overlapping real paths: %s",
|
||||
[checkpoint_map.get(rp, rp) for rp in overlapping_real_paths]
|
||||
)
|
||||
# Remove overlapping paths from unet_map to prioritize checkpoints
|
||||
for rp in overlapping_real_paths:
|
||||
if rp in unet_map:
|
||||
del unet_map[rp]
|
||||
|
||||
merged_map: Dict[str, str] = {}
|
||||
for real_path, original in {**checkpoint_map, **unet_map}.items():
|
||||
if real_path not in merged_map:
|
||||
@@ -757,6 +778,49 @@ class Config:
|
||||
logger.warning(f"Error initializing embedding paths: {e}")
|
||||
return []
|
||||
|
||||
def _init_misc_paths(self) -> List[str]:
|
||||
"""Initialize and validate misc (VAE and upscaler) paths from ComfyUI settings"""
|
||||
try:
|
||||
raw_vae_paths = folder_paths.get_folder_paths("vae")
|
||||
raw_upscaler_paths = folder_paths.get_folder_paths("upscale_models")
|
||||
unique_paths = self._prepare_misc_paths(raw_vae_paths, raw_upscaler_paths)
|
||||
|
||||
logger.info("Found misc roots:" + ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]"))
|
||||
|
||||
if not unique_paths:
|
||||
logger.warning("No valid VAE or upscaler folders found in ComfyUI configuration")
|
||||
return []
|
||||
|
||||
return unique_paths
|
||||
except Exception as e:
|
||||
logger.warning(f"Error initializing misc paths: {e}")
|
||||
return []
|
||||
|
||||
def _prepare_misc_paths(
|
||||
self, vae_paths: Iterable[str], upscaler_paths: Iterable[str]
|
||||
) -> List[str]:
|
||||
vae_map = self._dedupe_existing_paths(vae_paths)
|
||||
upscaler_map = self._dedupe_existing_paths(upscaler_paths)
|
||||
|
||||
merged_map: Dict[str, str] = {}
|
||||
for real_path, original in {**vae_map, **upscaler_map}.items():
|
||||
if real_path not in merged_map:
|
||||
merged_map[real_path] = original
|
||||
|
||||
unique_paths = sorted(merged_map.values(), key=lambda p: p.lower())
|
||||
|
||||
vae_values = set(vae_map.values())
|
||||
upscaler_values = set(upscaler_map.values())
|
||||
self.vae_roots = [p for p in unique_paths if p in vae_values]
|
||||
self.upscaler_roots = [p for p in unique_paths if p in upscaler_values]
|
||||
|
||||
for original_path in unique_paths:
|
||||
real_path = os.path.normpath(os.path.realpath(original_path)).replace(os.sep, '/')
|
||||
if real_path != original_path:
|
||||
self.add_path_mapping(original_path, real_path)
|
||||
|
||||
return unique_paths
|
||||
|
||||
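`_prepare_misc_paths` above merges the VAE and upscaler folders into one deduplicated, case-insensitively sorted list while remembering which originals belong to each type. A small hedged sketch of the merge step with hypothetical folder names (the real helper also drops non-existent folders and records symlink path mappings):

```python
import os
from typing import Dict, Iterable, List

def dedupe_by_realpath(paths: Iterable[str]) -> Dict[str, str]:
    """Map resolved real path -> first original path that points at it."""
    result: Dict[str, str] = {}
    for p in paths:
        real = os.path.normpath(os.path.realpath(p)).replace(os.sep, "/")
        result.setdefault(real, p)
    return result

def merge_misc_roots(vae_paths: Iterable[str], upscaler_paths: Iterable[str]) -> List[str]:
    merged = {**dedupe_by_realpath(vae_paths), **dedupe_by_realpath(upscaler_paths)}
    return sorted(merged.values(), key=str.lower)

# Hypothetical folders; a folder shared by both types appears only once.
print(merge_misc_roots(["/models/vae", "/models/shared"],
                       ["/models/upscale_models", "/models/shared"]))
```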
def get_preview_static_url(self, preview_path: str) -> str:
|
||||
if not preview_path:
|
||||
return ""
|
||||
@@ -766,23 +830,7 @@ class Config:
|
||||
return f'/api/lm/previews?path={encoded_path}'
|
||||
|
||||
def is_preview_path_allowed(self, preview_path: str) -> bool:
|
||||
"""Return ``True`` if ``preview_path`` is within an allowed directory.
|
||||
|
||||
If the path is initially rejected, attempts to discover deep symlinks
|
||||
that were not scanned during initialization. If a symlink is found,
|
||||
updates the in-memory path mappings and retries the check.
|
||||
"""
|
||||
|
||||
if self._is_path_in_allowed_roots(preview_path):
|
||||
return True
|
||||
|
||||
if self._try_discover_deep_symlink(preview_path):
|
||||
return self._is_path_in_allowed_roots(preview_path)
|
||||
|
||||
return False
|
||||
|
||||
def _is_path_in_allowed_roots(self, preview_path: str) -> bool:
|
||||
"""Check if preview_path is within allowed preview roots without modification."""
|
||||
"""Return ``True`` if ``preview_path`` is within an allowed directory."""
|
||||
|
||||
if not preview_path:
|
||||
return False
|
||||
@@ -792,72 +840,29 @@ class Config:
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
# Use os.path.normcase for case-insensitive comparison on Windows.
|
||||
# On Windows, Path.relative_to() is case-sensitive for drive letters,
|
||||
# causing paths like 'a:/folder' to not match 'A:/folder'.
|
||||
candidate_str = os.path.normcase(str(candidate))
|
||||
for root in self._preview_root_paths:
|
||||
root_str = os.path.normcase(str(root))
|
||||
# Check if candidate is equal to or under the root directory
|
||||
if candidate_str == root_str or candidate_str.startswith(root_str + os.sep):
|
||||
return True
|
||||
|
||||
logger.debug(
|
||||
"Path not in allowed roots: %s (candidate=%s, num_roots=%d)",
|
||||
preview_path,
|
||||
candidate_str,
|
||||
len(self._preview_root_paths),
|
||||
)
|
||||
|
||||
return False
|
||||
|
||||
def _try_discover_deep_symlink(self, preview_path: str) -> bool:
|
||||
"""Attempt to discover a deep symlink that contains the preview_path.
|
||||
|
||||
Walks up from the preview path to the root directories, checking each
|
||||
parent directory for symlinks. If a symlink is found, updates the
|
||||
in-memory path mappings and preview roots.
|
||||
|
||||
Only updates in-memory state (self._path_mappings and self._preview_root_paths),
|
||||
does not modify the persistent cache file.
|
||||
|
||||
Returns:
|
||||
True if a symlink was discovered and mappings updated, False otherwise.
|
||||
"""
|
||||
if not preview_path:
|
||||
return False
|
||||
|
||||
try:
|
||||
candidate = Path(preview_path).expanduser()
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
current = candidate
|
||||
while True:
|
||||
try:
|
||||
if self._is_link(str(current)):
|
||||
try:
|
||||
target = os.path.realpath(str(current))
|
||||
normalized_target = self._normalize_path(target)
|
||||
normalized_link = self._normalize_path(str(current))
|
||||
|
||||
self._path_mappings[normalized_target] = normalized_link
|
||||
self._preview_root_paths.update(self._expand_preview_root(normalized_target))
|
||||
self._preview_root_paths.update(self._expand_preview_root(normalized_link))
|
||||
|
||||
logger.debug(
|
||||
"Discovered deep symlink: %s -> %s (preview path: %s)",
|
||||
normalized_link,
|
||||
normalized_target,
|
||||
preview_path
|
||||
)
|
||||
|
||||
return True
|
||||
except OSError:
|
||||
pass
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
parent = current.parent
|
||||
if parent == current:
|
||||
break
|
||||
current = parent
|
||||
if self._preview_root_paths:
|
||||
logger.debug(
|
||||
"Preview path rejected: %s (candidate=%s, num_roots=%d, first_root=%s)",
|
||||
preview_path,
|
||||
candidate_str,
|
||||
len(self._preview_root_paths),
|
||||
os.path.normcase(str(next(iter(self._preview_root_paths)))),
|
||||
)
|
||||
else:
|
||||
logger.debug(
|
||||
"Preview path rejected (no roots configured): %s",
|
||||
preview_path,
|
||||
)
|
||||
|
||||
return False
|
||||
|
||||
|
||||
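When a preview path is rejected, the code above walks from the file upward toward the filesystem root looking for a link that the first-level scan missed. A hedged, standalone version of just that walk-up (the real method also updates `self._path_mappings` and the preview roots):

```python
import os
from pathlib import Path
from typing import Optional

def find_symlink_ancestor(path: str) -> Optional[str]:
    """Return the nearest ancestor (or the path itself) that is a symlink, else None."""
    current = Path(path).expanduser()
    while True:
        try:
            if os.path.islink(str(current)):
                return str(current)
        except OSError:
            pass
        parent = current.parent
        if parent == current:  # reached the filesystem root
            return None
        current = parent

# Example: if /data/previews is a symlink, any file below it reports that link.
print(find_symlink_ancestor("/data/previews/model/preview.png"))
```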
@@ -184,15 +184,17 @@ class LoraManager:
|
||||
lora_scanner = await ServiceRegistry.get_lora_scanner()
|
||||
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
|
||||
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
|
||||
|
||||
misc_scanner = await ServiceRegistry.get_misc_scanner()
|
||||
|
||||
# Initialize recipe scanner if needed
|
||||
recipe_scanner = await ServiceRegistry.get_recipe_scanner()
|
||||
|
||||
|
||||
# Create low-priority initialization tasks
|
||||
init_tasks = [
|
||||
asyncio.create_task(lora_scanner.initialize_in_background(), name='lora_cache_init'),
|
||||
asyncio.create_task(checkpoint_scanner.initialize_in_background(), name='checkpoint_cache_init'),
|
||||
asyncio.create_task(embedding_scanner.initialize_in_background(), name='embedding_cache_init'),
|
||||
asyncio.create_task(misc_scanner.initialize_in_background(), name='misc_cache_init'),
|
||||
asyncio.create_task(recipe_scanner.initialize_in_background(), name='recipe_cache_init')
|
||||
]
|
||||
|
||||
@@ -252,8 +254,9 @@ class LoraManager:
|
||||
# Collect all model roots
|
||||
all_roots = set()
|
||||
all_roots.update(config.loras_roots)
|
||||
all_roots.update(config.base_models_roots)
|
||||
all_roots.update(config.embeddings_roots)
|
||||
all_roots.update(config.misc_roots or [])
|
||||
|
||||
total_deleted = 0
|
||||
total_size_freed = 0
|
||||
|
||||
@@ -1,7 +1,4 @@
|
||||
import os
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Check if running in standalone mode
|
||||
standalone_mode = os.environ.get("LORA_MANAGER_STANDALONE", "0") == "1" or os.environ.get("HF_HUB_DISABLE_TELEMETRY", "0") == "0"
|
||||
@@ -17,7 +14,7 @@ if not standalone_mode:
|
||||
# Initialize registry
|
||||
registry = MetadataRegistry()
|
||||
|
||||
logger.info("ComfyUI Metadata Collector initialized")
|
||||
print("ComfyUI Metadata Collector initialized")
|
||||
|
||||
def get_metadata(prompt_id=None):
|
||||
"""Helper function to get metadata from the registry"""
|
||||
@@ -26,7 +23,7 @@ if not standalone_mode:
|
||||
else:
|
||||
# Standalone mode - provide dummy implementations
|
||||
def init():
|
||||
logger.info("ComfyUI Metadata Collector disabled in standalone mode")
|
||||
print("ComfyUI Metadata Collector disabled in standalone mode")
|
||||
|
||||
def get_metadata(prompt_id=None):
|
||||
"""Dummy implementation for standalone mode"""
|
||||
|
||||
@@ -1,10 +1,7 @@
|
||||
import sys
|
||||
import inspect
|
||||
import logging
|
||||
from .metadata_registry import MetadataRegistry
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class MetadataHook:
|
||||
"""Install hooks for metadata collection"""
|
||||
|
||||
@@ -26,7 +23,7 @@ class MetadataHook:
|
||||
|
||||
# If we can't find the execution module, we can't install hooks
|
||||
if execution is None:
|
||||
logger.warning("Could not locate ComfyUI execution module, metadata collection disabled")
|
||||
print("Could not locate ComfyUI execution module, metadata collection disabled")
|
||||
return
|
||||
|
||||
# Detect whether we're using the new async version of ComfyUI
|
||||
@@ -40,16 +37,16 @@ class MetadataHook:
|
||||
is_async = inspect.iscoroutinefunction(execution._map_node_over_list)
|
||||
|
||||
if is_async:
|
||||
logger.info("Detected async ComfyUI execution, installing async metadata hooks")
|
||||
print("Detected async ComfyUI execution, installing async metadata hooks")
|
||||
MetadataHook._install_async_hooks(execution, map_node_func_name)
|
||||
else:
|
||||
logger.info("Detected sync ComfyUI execution, installing sync metadata hooks")
|
||||
print("Detected sync ComfyUI execution, installing sync metadata hooks")
|
||||
MetadataHook._install_sync_hooks(execution)
|
||||
|
||||
logger.info("Metadata collection hooks installed for runtime values")
|
||||
print("Metadata collection hooks installed for runtime values")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error installing metadata hooks: {str(e)}")
|
||||
print(f"Error installing metadata hooks: {str(e)}")
|
||||
|
||||
@staticmethod
|
||||
def _install_sync_hooks(execution):
|
||||
@@ -85,7 +82,7 @@ class MetadataHook:
|
||||
if node_id is not None:
|
||||
registry.record_node_execution(node_id, class_type, input_data_all, None)
|
||||
except Exception as e:
|
||||
logger.error(f"Error collecting metadata (pre-execution): {str(e)}")
|
||||
print(f"Error collecting metadata (pre-execution): {str(e)}")
|
||||
|
||||
# Execute the original function
|
||||
results = original_map_node_over_list(obj, input_data_all, func, allow_interrupt, execution_block_cb, pre_execute_cb)
|
||||
@@ -116,7 +113,7 @@ class MetadataHook:
|
||||
if node_id is not None:
|
||||
registry.update_node_execution(node_id, class_type, results)
|
||||
except Exception as e:
|
||||
logger.error(f"Error collecting metadata (post-execution): {str(e)}")
|
||||
print(f"Error collecting metadata (post-execution): {str(e)}")
|
||||
|
||||
return results
|
||||
|
||||
@@ -162,7 +159,7 @@ class MetadataHook:
|
||||
if node_id is not None:
|
||||
registry.record_node_execution(node_id, class_type, input_data_all, None)
|
||||
except Exception as e:
|
||||
logger.error(f"Error collecting metadata (pre-execution): {str(e)}")
|
||||
print(f"Error collecting metadata (pre-execution): {str(e)}")
|
||||
|
||||
# Call original function with all args/kwargs
|
||||
results = await original_map_node_over_list(
|
||||
@@ -179,7 +176,7 @@ class MetadataHook:
|
||||
if node_id is not None:
|
||||
registry.update_node_execution(node_id, class_type, results)
|
||||
except Exception as e:
|
||||
logger.error(f"Error collecting metadata (post-execution): {str(e)}")
|
||||
print(f"Error collecting metadata (post-execution): {str(e)}")
|
||||
|
||||
return results
|
||||
|
||||
|
||||
@@ -126,7 +126,9 @@ class LoraCyclerLM:
|
||||
"current_index": [clamped_index],
|
||||
"next_index": [next_index],
|
||||
"total_count": [total_count],
|
||||
"current_lora_name": [current_lora["file_name"]],
|
||||
"current_lora_name": [
|
||||
current_lora.get("model_name", current_lora["file_name"])
|
||||
],
|
||||
"current_lora_filename": [current_lora["file_name"]],
|
||||
"next_lora_name": [next_display_name],
|
||||
"next_lora_filename": [next_lora["file_name"]],
|
||||
|
||||
@@ -1,16 +1,4 @@
|
||||
from typing import Any
|
||||
import inspect
|
||||
|
||||
|
||||
class _AllContainer:
|
||||
"""Container that accepts any key for dynamic input validation."""
|
||||
|
||||
def __contains__(self, item):
|
||||
return True
|
||||
|
||||
def __getitem__(self, key):
|
||||
return ("STRING", {"forceInput": True})
|
||||
|
||||
from typing import Any, Optional
|
||||
|
||||
class PromptLM:
|
||||
"""Encodes text (and optional trigger words) into CLIP conditioning."""
|
||||
@@ -19,27 +7,11 @@ class PromptLM:
|
||||
CATEGORY = "Lora Manager/conditioning"
|
||||
DESCRIPTION = (
|
||||
"Encodes a text prompt using a CLIP model into an embedding that can be used "
|
||||
"to guide the diffusion model towards generating specific images. "
|
||||
"Supports dynamic trigger words inputs."
|
||||
"to guide the diffusion model towards generating specific images."
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def INPUT_TYPES(cls):
|
||||
dyn_inputs = {
|
||||
"trigger_words1": (
|
||||
"STRING",
|
||||
{
|
||||
"forceInput": True,
|
||||
"tooltip": "Trigger words to prepend. Connect to add more inputs.",
|
||||
},
|
||||
),
|
||||
}
|
||||
|
||||
# Bypass validation for dynamic inputs during graph execution
|
||||
stack = inspect.stack()
|
||||
if len(stack) > 2 and stack[2].function == "get_input_info":
|
||||
dyn_inputs = _AllContainer()
|
||||
|
||||
return {
|
||||
"required": {
|
||||
"text": (
|
||||
@@ -51,34 +23,36 @@ class PromptLM:
|
||||
},
|
||||
),
|
||||
"clip": (
|
||||
"CLIP",
|
||||
'CLIP',
|
||||
{"tooltip": "The CLIP model used for encoding the text."},
|
||||
),
|
||||
},
|
||||
"optional": dyn_inputs,
|
||||
"optional": {
|
||||
"trigger_words": (
|
||||
'STRING',
|
||||
{
|
||||
"forceInput": True,
|
||||
"tooltip": (
|
||||
"Optional trigger words to prepend to the text before "
|
||||
"encoding."
|
||||
)
|
||||
},
|
||||
)
|
||||
},
|
||||
}
|
||||
|
||||
RETURN_TYPES = ("CONDITIONING", "STRING")
|
||||
RETURN_NAMES = ("CONDITIONING", "PROMPT")
|
||||
RETURN_TYPES = ('CONDITIONING', 'STRING',)
|
||||
RETURN_NAMES = ('CONDITIONING', 'PROMPT',)
|
||||
OUTPUT_TOOLTIPS = (
|
||||
"A conditioning containing the embedded text used to guide the diffusion model.",
|
||||
)
|
||||
FUNCTION = "encode"
|
||||
|
||||
def encode(self, text: str, clip: Any, **kwargs):
|
||||
# Collect all trigger words from dynamic inputs
|
||||
trigger_words = []
|
||||
for key, value in kwargs.items():
|
||||
if key.startswith("trigger_words") and value:
|
||||
trigger_words.append(value)
|
||||
|
||||
# Build final prompt
|
||||
def encode(self, text: str, clip: Any, trigger_words: Optional[str] = None):
|
||||
prompt = text
|
||||
if trigger_words:
|
||||
prompt = ", ".join(trigger_words + [text])
|
||||
else:
|
||||
prompt = text
|
||||
prompt = ", ".join([trigger_words, text])
|
||||
|
||||
from nodes import CLIPTextEncode # type: ignore
|
||||
|
||||
conditioning = CLIPTextEncode().encode(clip, prompt)[0]
|
||||
return (conditioning, prompt)
|
||||
return (conditioning, prompt,)
|
||||
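With the single optional `trigger_words` input shown above, the node simply prepends the connected string before encoding. A plain-string illustration of that join (no CLIP involved, values are hypothetical):

```python
from typing import Optional

def build_prompt(text: str, trigger_words: Optional[str] = None) -> str:
    if trigger_words:
        return ", ".join([trigger_words, text])
    return text

print(build_prompt("a castle at dusk", "style_trigger"))
# -> "style_trigger, a castle at dusk"
```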
@@ -8,9 +8,6 @@ from ..metadata_collector.metadata_processor import MetadataProcessor
|
||||
from ..metadata_collector import get_metadata
|
||||
from PIL import Image, PngImagePlugin
|
||||
import piexif
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class SaveImageLM:
|
||||
NAME = "Save Image (LoraManager)"
|
||||
@@ -388,7 +385,7 @@ class SaveImageLM:
|
||||
exif_bytes = piexif.dump(exif_dict)
|
||||
save_kwargs["exif"] = exif_bytes
|
||||
except Exception as e:
|
||||
logger.error(f"Error adding EXIF data: {e}")
|
||||
print(f"Error adding EXIF data: {e}")
|
||||
img.save(file_path, format="JPEG", **save_kwargs)
|
||||
elif file_format == "webp":
|
||||
try:
|
||||
@@ -406,7 +403,7 @@ class SaveImageLM:
|
||||
exif_bytes = piexif.dump(exif_dict)
|
||||
save_kwargs["exif"] = exif_bytes
|
||||
except Exception as e:
|
||||
logger.error(f"Error adding EXIF data: {e}")
|
||||
print(f"Error adding EXIF data: {e}")
|
||||
|
||||
img.save(file_path, format="WEBP", **save_kwargs)
|
||||
|
||||
@@ -417,7 +414,7 @@ class SaveImageLM:
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error saving image: {e}")
|
||||
print(f"Error saving image: {e}")
|
||||
|
||||
return results
|
||||
|
||||
|
||||
@@ -60,22 +60,6 @@ class TriggerWordToggleLM:
|
||||
else:
|
||||
return data
|
||||
|
||||
def _normalize_trigger_words(self, trigger_words):
|
||||
"""Normalize trigger words by splitting by both single and double commas, stripping whitespace, and filtering empty strings"""
|
||||
if not trigger_words or not isinstance(trigger_words, str):
|
||||
return set()
|
||||
|
||||
# Split by double commas first to preserve groups, then by single commas
|
||||
groups = re.split(r",{2,}", trigger_words)
|
||||
words = []
|
||||
for group in groups:
|
||||
# Split each group by single comma
|
||||
group_words = [word.strip() for word in group.split(",")]
|
||||
words.extend(group_words)
|
||||
|
||||
# Filter out empty strings and return as set
|
||||
return set(word for word in words if word)
|
||||
|
||||
def process_trigger_words(
|
||||
self,
|
||||
id,
|
||||
@@ -97,7 +81,7 @@ class TriggerWordToggleLM:
|
||||
if (
|
||||
trigger_words_override
|
||||
and isinstance(trigger_words_override, str)
|
||||
and self._normalize_trigger_words(trigger_words_override) != self._normalize_trigger_words(trigger_words)
|
||||
and trigger_words_override != trigger_words
|
||||
):
|
||||
filtered_triggers = trigger_words_override
|
||||
return (filtered_triggers,)
|
||||
|
||||
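The override comparison above hinges on the normalizer shown earlier in this hunk, which splits on runs of double commas before single commas and compares the results as sets. A worked example of why differently formatted strings end up equal:

```python
import re
from typing import Set

def normalize_trigger_words(trigger_words) -> Set[str]:
    if not trigger_words or not isinstance(trigger_words, str):
        return set()
    words = []
    for group in re.split(r",{2,}", trigger_words):  # split on double commas first
        words.extend(word.strip() for word in group.split(","))
    return {word for word in words if word}

print(normalize_trigger_words("a,, b, c"))  # {'a', 'b', 'c'}
print(normalize_trigger_words("b,c , a"))   # {'a', 'b', 'c'}
```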
@@ -204,7 +204,6 @@ class BaseModelRoutes(ABC):
|
||||
service=service,
|
||||
update_service=update_service,
|
||||
metadata_provider_selector=get_metadata_provider,
|
||||
settings_service=self._settings,
|
||||
logger=logger,
|
||||
)
|
||||
return ModelHandlerSet(
|
||||
|
||||
@@ -30,7 +30,6 @@ ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
|
||||
RouteDefinition("POST", "/api/lm/force-download-example-images", "force_download_example_images"),
|
||||
RouteDefinition("POST", "/api/lm/cleanup-example-image-folders", "cleanup_example_image_folders"),
|
||||
RouteDefinition("POST", "/api/lm/example-images/set-nsfw-level", "set_example_image_nsfw_level"),
|
||||
RouteDefinition("POST", "/api/lm/check-example-images-needed", "check_example_images_needed"),
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -1,14 +1,11 @@
|
||||
"""Handler set for example image routes."""
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from dataclasses import dataclass
|
||||
from typing import Callable, Mapping
|
||||
|
||||
from aiohttp import web
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
from ...services.use_cases.example_images import (
|
||||
DownloadExampleImagesConfigurationError,
|
||||
DownloadExampleImagesInProgressError,
|
||||
@@ -95,19 +92,6 @@ class ExampleImagesDownloadHandler:
|
||||
except ExampleImagesDownloadError as exc:
|
||||
return web.json_response({'success': False, 'error': str(exc)}, status=500)
|
||||
|
||||
async def check_example_images_needed(self, request: web.Request) -> web.StreamResponse:
|
||||
"""Lightweight check to see if any models need example images downloaded."""
|
||||
try:
|
||||
payload = await request.json()
|
||||
model_types = payload.get('model_types', ['lora', 'checkpoint', 'embedding'])
|
||||
result = await self._download_manager.check_pending_models(model_types)
|
||||
return web.json_response(result)
|
||||
except Exception as exc:
|
||||
return web.json_response(
|
||||
{'success': False, 'error': str(exc)},
|
||||
status=500
|
||||
)
|
||||
|
||||
|
||||
class ExampleImagesManagementHandler:
|
||||
"""HTTP adapters for import/delete endpoints."""
|
||||
@@ -125,9 +109,6 @@ class ExampleImagesManagementHandler:
|
||||
return web.json_response({'success': False, 'error': str(exc)}, status=400)
|
||||
except ExampleImagesImportError as exc:
|
||||
return web.json_response({'success': False, 'error': str(exc)}, status=500)
|
||||
except Exception as exc:
|
||||
logger.exception("Unexpected error importing example images")
|
||||
return web.json_response({'success': False, 'error': str(exc)}, status=500)
|
||||
|
||||
async def delete_example_image(self, request: web.Request) -> web.StreamResponse:
|
||||
return await self._processor.delete_custom_image(request)
|
||||
@@ -180,7 +161,6 @@ class ExampleImagesHandlerSet:
|
||||
"resume_example_images": self.download.resume_example_images,
|
||||
"stop_example_images": self.download.stop_example_images,
|
||||
"force_download_example_images": self.download.force_download_example_images,
|
||||
"check_example_images_needed": self.download.check_example_images_needed,
|
||||
"import_example_images": self.management.import_example_images,
|
||||
"delete_example_image": self.management.delete_example_image,
|
||||
"set_example_image_nsfw_level": self.management.set_example_image_nsfw_level,
|
||||
|
||||
@@ -220,17 +220,43 @@ class HealthCheckHandler:
|
||||
class SettingsHandler:
|
||||
"""Sync settings between backend and frontend."""
|
||||
|
||||
# Settings keys that should NOT be synced to frontend.
|
||||
# All other settings are synced by default.
|
||||
_NO_SYNC_KEYS = frozenset({
|
||||
# Internal/performance settings (not used by frontend)
|
||||
"hash_chunk_size_mb",
|
||||
"download_stall_timeout_seconds",
|
||||
# Complex internal structures retrieved via separate endpoints
|
||||
"folder_paths",
|
||||
"libraries",
|
||||
"active_library",
|
||||
})
|
||||
_SYNC_KEYS = (
|
||||
"civitai_api_key",
|
||||
"default_lora_root",
|
||||
"default_checkpoint_root",
|
||||
"default_unet_root",
|
||||
"default_embedding_root",
|
||||
"base_model_path_mappings",
|
||||
"download_path_templates",
|
||||
"enable_metadata_archive_db",
|
||||
"language",
|
||||
"use_portable_settings",
|
||||
"onboarding_completed",
|
||||
"dismissed_banners",
|
||||
"proxy_enabled",
|
||||
"proxy_type",
|
||||
"proxy_host",
|
||||
"proxy_port",
|
||||
"proxy_username",
|
||||
"proxy_password",
|
||||
"example_images_path",
|
||||
"optimize_example_images",
|
||||
"auto_download_example_images",
|
||||
"blur_mature_content",
|
||||
"autoplay_on_hover",
|
||||
"display_density",
|
||||
"card_info_display",
|
||||
"show_folder_sidebar",
|
||||
"include_trigger_words",
|
||||
"show_only_sfw",
|
||||
"compact_mode",
|
||||
"priority_tags",
|
||||
"model_card_footer_action",
|
||||
"model_name_display",
|
||||
"update_flag_strategy",
|
||||
"auto_organize_exclusions",
|
||||
"filter_presets",
|
||||
)
|
||||
|
||||
_PROXY_KEYS = {
|
||||
"proxy_enabled",
|
||||
@@ -277,12 +303,10 @@ class SettingsHandler:
|
||||
async def get_settings(self, request: web.Request) -> web.Response:
|
||||
try:
|
||||
response_data = {}
|
||||
# Sync all settings except those in _NO_SYNC_KEYS
|
||||
for key in self._settings.keys():
|
||||
if key not in self._NO_SYNC_KEYS:
|
||||
value = self._settings.get(key)
|
||||
if value is not None:
|
||||
response_data[key] = value
|
||||
for key in self._SYNC_KEYS:
|
||||
value = self._settings.get(key)
|
||||
if value is not None:
|
||||
response_data[key] = value
|
||||
settings_file = getattr(self._settings, "settings_file", None)
|
||||
if settings_file:
|
||||
response_data["settings_file"] = settings_file
|
||||
|
||||
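The two `get_settings` variants above differ only in direction: one forwards an explicit allow-list (`_SYNC_KEYS`), the other forwards everything except a small deny-list (`_NO_SYNC_KEYS`). A hedged sketch of the deny-list variant with example key names (not the full production lists):

```python
NO_SYNC_KEYS = frozenset({"hash_chunk_size_mb", "folder_paths", "libraries"})

def build_settings_response(settings: dict) -> dict:
    """Forward every stored setting the frontend may read, skipping the deny-list."""
    return {
        key: value
        for key, value in settings.items()
        if key not in NO_SYNC_KEYS and value is not None
    }

print(build_settings_response({"language": "en", "folder_paths": {}, "proxy_port": None}))
# -> {'language': 'en'}
```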
@@ -6,7 +6,6 @@ import asyncio
|
||||
import json
|
||||
import logging
|
||||
import os
|
||||
import re
|
||||
import time
|
||||
from dataclasses import dataclass
|
||||
from typing import Any, Awaitable, Callable, Dict, Iterable, List, Mapping, Optional
|
||||
@@ -270,11 +269,6 @@ class ModelListingHandler:
|
||||
request.query.get("update_available_only", "false").lower() == "true"
|
||||
)
|
||||
|
||||
# Tag logic: "any" (OR) or "all" (AND) for include tags
|
||||
tag_logic = request.query.get("tag_logic", "any").lower()
|
||||
if tag_logic not in ("any", "all"):
|
||||
tag_logic = "any"
|
||||
|
||||
# New license-based query filters
|
||||
credit_required = request.query.get("credit_required")
|
||||
if credit_required is not None:
|
||||
@@ -303,7 +297,6 @@ class ModelListingHandler:
|
||||
"fuzzy_search": fuzzy_search,
|
||||
"base_models": base_models,
|
||||
"tags": tag_filters,
|
||||
"tag_logic": tag_logic,
|
||||
"search_options": search_options,
|
||||
"hash_filters": hash_filters,
|
||||
"favorites_only": favorites_only,
|
||||
@@ -648,7 +641,7 @@ class ModelQueryHandler:
|
||||
async def get_top_tags(self, request: web.Request) -> web.Response:
|
||||
try:
|
||||
limit = int(request.query.get("limit", "20"))
|
||||
if limit < 0:
|
||||
if limit < 1 or limit > 100:
|
||||
limit = 20
|
||||
top_tags = await self._service.get_top_tags(limit)
|
||||
return web.json_response({"success": True, "tags": top_tags})
|
||||
@@ -762,22 +755,19 @@ class ModelQueryHandler:
|
||||
|
||||
async def find_duplicate_models(self, request: web.Request) -> web.Response:
|
||||
try:
|
||||
filters = self._parse_duplicate_filters(request)
|
||||
duplicates = self._service.find_duplicate_hashes()
|
||||
result = []
|
||||
cache = await self._service.scanner.get_cached_data()
|
||||
|
||||
for sha256, paths in duplicates.items():
|
||||
# Collect all models in this group
|
||||
all_models = []
|
||||
group = {"hash": sha256, "models": []}
|
||||
for path in paths:
|
||||
model = next(
|
||||
(m for m in cache.raw_data if m["file_path"] == path), None
|
||||
)
|
||||
if model:
|
||||
all_models.append(model)
|
||||
|
||||
# Include primary if not already in paths
|
||||
group["models"].append(
|
||||
await self._service.format_response(model)
|
||||
)
|
||||
primary_path = self._service.get_path_by_hash(sha256)
|
||||
if primary_path and primary_path not in paths:
|
||||
primary_model = next(
|
||||
@@ -785,25 +775,11 @@ class ModelQueryHandler:
|
||||
None,
|
||||
)
|
||||
if primary_model:
|
||||
all_models.insert(0, primary_model)
|
||||
|
||||
# Apply filters
|
||||
filtered = self._apply_duplicate_filters(all_models, filters)
|
||||
|
||||
# Sort: originals first, copies last
|
||||
sorted_models = self._sort_duplicate_group(filtered)
|
||||
|
||||
# Format response
|
||||
group = {"hash": sha256, "models": []}
|
||||
for model in sorted_models:
|
||||
group["models"].append(
|
||||
await self._service.format_response(model)
|
||||
)
|
||||
|
||||
# Only include groups with 2+ models after filtering
|
||||
group["models"].insert(
|
||||
0, await self._service.format_response(primary_model)
|
||||
)
|
||||
if len(group["models"]) > 1:
|
||||
result.append(group)
|
||||
|
||||
return web.json_response(
|
||||
{"success": True, "duplicates": result, "count": len(result)}
|
||||
)
|
||||
@@ -816,83 +792,6 @@ class ModelQueryHandler:
|
||||
)
|
||||
return web.json_response({"success": False, "error": str(exc)}, status=500)
|
||||
|
||||
def _parse_duplicate_filters(self, request: web.Request) -> Dict[str, Any]:
|
||||
"""Parse filter parameters from the request for duplicate finding."""
|
||||
return {
|
||||
"base_models": request.query.getall("base_model", []),
|
||||
"tag_include": request.query.getall("tag_include", []),
|
||||
"tag_exclude": request.query.getall("tag_exclude", []),
|
||||
"model_types": request.query.getall("model_type", []),
|
||||
"folder": request.query.get("folder"),
|
||||
"favorites_only": request.query.get("favorites_only", "").lower() == "true",
|
||||
}
|
||||
|
||||
def _apply_duplicate_filters(self, models: List[Dict[str, Any]], filters: Dict[str, Any]) -> List[Dict[str, Any]]:
|
||||
"""Apply filters to a list of models within a duplicate group."""
|
||||
result = models
|
||||
|
||||
# Apply base model filter
|
||||
if filters.get("base_models"):
|
||||
base_set = set(filters["base_models"])
|
||||
result = [m for m in result if m.get("base_model") in base_set]
|
||||
|
||||
# Apply tag filters (include)
|
||||
for tag in filters.get("tag_include", []):
|
||||
if tag == "__no_tags__":
|
||||
result = [m for m in result if not m.get("tags")]
|
||||
else:
|
||||
result = [m for m in result if tag in (m.get("tags") or [])]
|
||||
|
||||
# Apply tag filters (exclude)
|
||||
for tag in filters.get("tag_exclude", []):
|
||||
if tag == "__no_tags__":
|
||||
result = [m for m in result if m.get("tags")]
|
||||
else:
|
||||
result = [m for m in result if tag not in (m.get("tags") or [])]
|
||||
|
||||
# Apply model type filter
|
||||
if filters.get("model_types"):
|
||||
type_set = {t.lower() for t in filters["model_types"]}
|
||||
result = [
|
||||
m for m in result if (m.get("model_type") or "").lower() in type_set
|
||||
]
|
||||
|
||||
# Apply folder filter
|
||||
if filters.get("folder"):
|
||||
folder = filters["folder"]
|
||||
result = [m for m in result if m.get("folder", "").startswith(folder)]
|
||||
|
||||
# Apply favorites filter
|
||||
if filters.get("favorites_only"):
|
||||
result = [m for m in result if m.get("favorite", False)]
|
||||
|
||||
return result
|
||||
|
||||
def _sort_duplicate_group(self, models: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
|
||||
"""Sort models: originals first (left), copies (with -????. pattern) last (right)."""
|
||||
if len(models) <= 1:
|
||||
return models
|
||||
|
||||
min_len = min(len(m.get("file_name", "")) for m in models)
|
||||
|
||||
def copy_score(m):
|
||||
fn = m.get("file_name", "")
|
||||
score = 0
|
||||
# Match -0001.safetensors, -1234.safetensors etc.
|
||||
if re.search(r"-\d{4}\.", fn):
|
||||
score += 100
|
||||
# Match (1), (2) etc.
|
||||
if re.search(r"\(\d+\)", fn):
|
||||
score += 50
|
||||
# Match 'copy' in filename
|
||||
if "copy" in fn.lower():
|
||||
score += 50
|
||||
# Longer filenames are more likely copies
|
||||
score += len(fn) - min_len
|
||||
return (score, fn.lower())
|
||||
|
||||
return sorted(models, key=copy_score)
|
||||
|
||||
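A worked example of the copy-detection sort above: names matching `-NNNN.` or `(N)`, containing "copy", or simply longer than the shortest name in the group score higher and sink to the end, so the likely original stays first.

```python
import re

def copy_score(file_name: str, min_len: int):
    score = 0
    if re.search(r"-\d{4}\.", file_name):   # e.g. model-0001.safetensors
        score += 100
    if re.search(r"\(\d+\)", file_name):    # e.g. model (1).safetensors
        score += 50
    if "copy" in file_name.lower():
        score += 50
    score += len(file_name) - min_len       # longer names look more like copies
    return (score, file_name.lower())

names = ["model.safetensors", "model-0001.safetensors", "model copy.safetensors"]
shortest = min(len(n) for n in names)
print(sorted(names, key=lambda n: copy_score(n, shortest)))
# -> ['model.safetensors', 'model copy.safetensors', 'model-0001.safetensors']
```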
async def find_filename_conflicts(self, request: web.Request) -> web.Response:
|
||||
try:
|
||||
duplicates = self._service.find_duplicate_filenames()
|
||||
@@ -1142,7 +1041,6 @@ class ModelDownloadHandler:
|
||||
request.query.get("use_default_paths", "false").lower() == "true"
|
||||
)
|
||||
source = request.query.get("source")
|
||||
file_params_json = request.query.get("file_params")
|
||||
|
||||
data = {"model_id": model_id, "use_default_paths": use_default_paths}
|
||||
if model_version_id:
|
||||
@@ -1151,12 +1049,6 @@ class ModelDownloadHandler:
|
||||
data["download_id"] = download_id
|
||||
if source:
|
||||
data["source"] = source
|
||||
if file_params_json:
|
||||
import json
|
||||
try:
|
||||
data["file_params"] = json.loads(file_params_json)
|
||||
except json.JSONDecodeError:
|
||||
self._logger.warning("Invalid file_params JSON: %s", file_params_json)
|
||||
|
||||
loop = asyncio.get_event_loop()
|
||||
future = loop.create_future()
|
||||
@@ -1540,13 +1432,11 @@ class ModelUpdateHandler:
|
||||
service,
|
||||
update_service,
|
||||
metadata_provider_selector,
|
||||
settings_service,
|
||||
logger: logging.Logger,
|
||||
) -> None:
|
||||
self._service = service
|
||||
self._update_service = update_service
|
||||
self._metadata_provider_selector = metadata_provider_selector
|
||||
self._settings = settings_service
|
||||
self._logger = logger
|
||||
|
||||
async def fetch_missing_civitai_license_data(
|
||||
@@ -1783,9 +1673,6 @@ class ModelUpdateHandler:
|
||||
{"success": False, "error": "Model not tracked"}, status=404
|
||||
)
|
||||
|
||||
# Enrich EA versions with detailed info if needed
|
||||
record = await self._enrich_early_access_details(record)
|
||||
|
||||
overrides = await self._build_version_context(record)
|
||||
return web.json_response(
|
||||
{
|
||||
@@ -1824,78 +1711,6 @@ class ModelUpdateHandler:
|
||||
)
|
||||
return None
|
||||
|
||||
async def _enrich_early_access_details(self, record):
|
||||
"""Fetch detailed EA info for versions missing exact end time.
|
||||
|
||||
Identifies versions with is_early_access=True but no early_access_ends_at,
|
||||
then fetches detailed info from CivitAI to get the exact end time.
|
||||
"""
|
||||
if not record or not record.versions:
|
||||
return record
|
||||
|
||||
# Find versions that need enrichment
|
||||
versions_needing_update = []
|
||||
for version in record.versions:
|
||||
if version.is_early_access and not version.early_access_ends_at:
|
||||
versions_needing_update.append(version)
|
||||
|
||||
if not versions_needing_update:
|
||||
return record
|
||||
|
||||
provider = await self._get_civitai_provider()
|
||||
if not provider:
|
||||
return record
|
||||
|
||||
# Fetch detailed info for each version needing update
|
||||
updated_versions = []
|
||||
for version in versions_needing_update:
|
||||
try:
|
||||
version_info, error = await provider.get_model_version_info(
|
||||
str(version.version_id)
|
||||
)
|
||||
if version_info and not error:
|
||||
ea_ends_at = version_info.get("earlyAccessEndsAt")
|
||||
if ea_ends_at:
|
||||
# Create updated version with EA end time
|
||||
from dataclasses import replace
|
||||
|
||||
updated_version = replace(
|
||||
version, early_access_ends_at=ea_ends_at
|
||||
)
|
||||
updated_versions.append(updated_version)
|
||||
self._logger.debug(
|
||||
"Enriched EA info for version %s: %s",
|
||||
version.version_id,
|
||||
ea_ends_at,
|
||||
)
|
||||
except Exception as exc:
|
||||
self._logger.debug(
|
||||
"Failed to fetch EA details for version %s: %s",
|
||||
version.version_id,
|
||||
exc,
|
||||
)
|
||||
|
||||
if not updated_versions:
|
||||
return record
|
||||
|
||||
# Update record with enriched versions
|
||||
version_map = {v.version_id: v for v in record.versions}
|
||||
for updated in updated_versions:
|
||||
version_map[updated.version_id] = updated
|
||||
|
||||
# Create new record with updated versions
|
||||
from dataclasses import replace
|
||||
|
||||
new_record = replace(
|
||||
record, versions=list(version_map.values()),
|
||||
)
|
||||
|
||||
# Optionally persist to database for caching
|
||||
# Note: We don't persist here to avoid side effects; the data will be
|
||||
# refreshed on next bulk update if still needed
|
||||
|
||||
return new_record
|
||||
|
||||
async def _collect_models_missing_license(
|
||||
self,
|
||||
cache,
|
||||
@@ -2062,15 +1877,6 @@ class ModelUpdateHandler:
|
||||
version_context: Optional[Dict[int, Dict[str, Optional[str]]]] = None,
|
||||
) -> Dict:
|
||||
context = version_context or {}
|
||||
# Check user setting for hiding early access versions
|
||||
hide_early_access = False
|
||||
if self._settings is not None:
|
||||
try:
|
||||
hide_early_access = bool(
|
||||
self._settings.get("hide_early_access_updates", False)
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
return {
|
||||
"modelType": record.model_type,
|
||||
"modelId": record.model_id,
|
||||
@@ -2079,7 +1885,7 @@ class ModelUpdateHandler:
|
||||
"inLibraryVersionIds": record.in_library_version_ids,
|
||||
"lastCheckedAt": record.last_checked_at,
|
||||
"shouldIgnore": record.should_ignore_model,
|
||||
"hasUpdate": record.has_update(hide_early_access=hide_early_access),
|
||||
"hasUpdate": record.has_update(),
|
||||
"versions": [
|
||||
self._serialize_version(version, context.get(version.version_id))
|
||||
for version in record.versions
|
||||
@@ -2095,24 +1901,6 @@ class ModelUpdateHandler:
|
||||
preview_url = (
|
||||
preview_override if preview_override is not None else version.preview_url
|
||||
)
|
||||
|
||||
# Determine if version is currently in early access
|
||||
# Two-phase detection: use exact end time if available, otherwise fallback to basic flag
|
||||
is_early_access = False
|
||||
if version.early_access_ends_at:
|
||||
try:
|
||||
from datetime import datetime, timezone
|
||||
ea_date = datetime.fromisoformat(
|
||||
version.early_access_ends_at.replace("Z", "+00:00")
|
||||
)
|
||||
is_early_access = ea_date > datetime.now(timezone.utc)
|
||||
except (ValueError, AttributeError):
|
||||
# If date parsing fails, treat as active EA (conservative)
|
||||
is_early_access = True
|
||||
elif getattr(version, 'is_early_access', False):
|
||||
# Fallback to basic EA flag from bulk API
|
||||
is_early_access = True
|
||||
|
||||
return {
|
||||
"versionId": version.version_id,
|
||||
"name": version.name,
|
||||
@@ -2122,8 +1910,6 @@ class ModelUpdateHandler:
|
||||
"previewUrl": preview_url,
|
||||
"isInLibrary": version.is_in_library,
|
||||
"shouldIgnore": version.should_ignore,
|
||||
"earlyAccessEndsAt": version.early_access_ends_at,
|
||||
"isEarlyAccess": is_early_access,
|
||||
"filePath": context.get("file_path"),
|
||||
"fileName": context.get("file_name"),
|
||||
}
|
||||
|
||||
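The serializer above decides early-access status in two phases: parse the exact `earlyAccessEndsAt` timestamp when present, otherwise fall back to the coarse flag from the bulk API. A small illustration of that check:

```python
from datetime import datetime, timezone
from typing import Optional

def is_early_access(ends_at: Optional[str], basic_flag: bool) -> bool:
    if ends_at:
        try:
            ends = datetime.fromisoformat(ends_at.replace("Z", "+00:00"))
            return ends > datetime.now(timezone.utc)
        except (ValueError, AttributeError):
            return True  # unparseable date: conservatively treat as still in EA
    return basic_flag

print(is_early_access("2099-01-01T00:00:00Z", False))  # True: end time is in the future
print(is_early_access(None, True))                     # True: fall back to the flag
```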
@@ -33,10 +33,6 @@ class PreviewHandler:
|
||||
raise web.HTTPBadRequest(text="Invalid preview path encoding") from exc
|
||||
|
||||
normalized = decoded_path.replace("\\", "/")
|
||||
|
||||
if not self._config.is_preview_path_allowed(normalized):
|
||||
raise web.HTTPForbidden(text="Preview path is not within an allowed directory")
|
||||
|
||||
candidate = Path(normalized)
|
||||
try:
|
||||
resolved = candidate.expanduser().resolve(strict=False)
|
||||
@@ -44,8 +40,12 @@ class PreviewHandler:
|
||||
logger.debug("Failed to resolve preview path %s: %s", normalized, exc)
|
||||
raise web.HTTPBadRequest(text="Unable to resolve preview path") from exc
|
||||
|
||||
resolved_str = str(resolved)
|
||||
if not self._config.is_preview_path_allowed(resolved_str):
|
||||
raise web.HTTPForbidden(text="Preview path is not within an allowed directory")
|
||||
|
||||
if not resolved.is_file():
|
||||
logger.debug("Preview file not found at %s", str(resolved))
|
||||
logger.debug("Preview file not found at %s", resolved_str)
|
||||
raise web.HTTPNotFound(text="Preview file not found")
|
||||
|
||||
# aiohttp's FileResponse handles range requests and content headers for us.
|
||||
|
||||
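The preview handler now authorizes the resolved path rather than the raw query value. A hedged sketch of that order of operations (decode, normalize separators, resolve, then check):

```python
from pathlib import Path
from urllib.parse import unquote

def resolve_preview_path(raw: str) -> str:
    # The real handler wraps decoding/resolution in try/except and returns 400 on failure.
    decoded = unquote(raw)
    normalized = decoded.replace("\\", "/")
    resolved = Path(normalized).expanduser().resolve(strict=False)
    return str(resolved)

# config.is_preview_path_allowed(resolve_preview_path(raw)) would then decide
# between serving the file and returning 403.
print(resolve_preview_path("%2Fmodels%2Floras%2Fpreview.png"))
```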
@@ -412,11 +412,10 @@ class RecipeQueryHandler:
|
||||
if recipe_scanner is None:
|
||||
raise RuntimeError("Recipe scanner unavailable")
|
||||
|
||||
fingerprint_groups = await recipe_scanner.find_all_duplicate_recipes()
|
||||
url_groups = await recipe_scanner.find_duplicate_recipes_by_source()
|
||||
duplicate_groups = await recipe_scanner.find_all_duplicate_recipes()
|
||||
response_data = []
|
||||
|
||||
for fingerprint, recipe_ids in fingerprint_groups.items():
|
||||
for fingerprint, recipe_ids in duplicate_groups.items():
|
||||
if len(recipe_ids) <= 1:
|
||||
continue
|
||||
|
||||
@@ -440,44 +439,12 @@ class RecipeQueryHandler:
|
||||
recipes.sort(key=lambda entry: entry.get("modified", 0), reverse=True)
|
||||
response_data.append(
|
||||
{
|
||||
"type": "fingerprint",
|
||||
"fingerprint": fingerprint,
|
||||
"count": len(recipes),
|
||||
"recipes": recipes,
|
||||
}
|
||||
)
|
||||
|
||||
for url, recipe_ids in url_groups.items():
|
||||
if len(recipe_ids) <= 1:
|
||||
continue
|
||||
|
||||
recipes = []
|
||||
for recipe_id in recipe_ids:
|
||||
recipe = await recipe_scanner.get_recipe_by_id(recipe_id)
|
||||
if recipe:
|
||||
recipes.append(
|
||||
{
|
||||
"id": recipe.get("id"),
|
||||
"title": recipe.get("title"),
|
||||
"file_url": recipe.get("file_url")
|
||||
or self._format_recipe_file_url(recipe.get("file_path", "")),
|
||||
"modified": recipe.get("modified"),
|
||||
"created_date": recipe.get("created_date"),
|
||||
"lora_count": len(recipe.get("loras", [])),
|
||||
}
|
||||
)
|
||||
|
||||
if len(recipes) >= 2:
|
||||
recipes.sort(key=lambda entry: entry.get("modified", 0), reverse=True)
|
||||
response_data.append(
|
||||
{
|
||||
"type": "source_url",
|
||||
"fingerprint": url,
|
||||
"count": len(recipes),
|
||||
"recipes": recipes,
|
||||
}
|
||||
)
|
||||
|
||||
response_data.sort(key=lambda entry: entry["count"], reverse=True)
|
||||
return web.json_response({"success": True, "duplicate_groups": response_data})
|
||||
except Exception as exc:
|
||||
@@ -1054,7 +1021,7 @@ class RecipeManagementHandler:
|
||||
"exclude": False,
|
||||
}
|
||||
|
||||
async def _download_remote_media(self, image_url: str) -> tuple[bytes, str, Any]:
|
||||
async def _download_remote_media(self, image_url: str) -> tuple[bytes, str]:
|
||||
civitai_client = self._civitai_client_getter()
|
||||
downloader = await self._downloader_factory()
|
||||
temp_path = None
|
||||
@@ -1062,7 +1029,6 @@ class RecipeManagementHandler:
|
||||
with tempfile.NamedTemporaryFile(delete=False) as temp_file:
|
||||
temp_path = temp_file.name
|
||||
download_url = image_url
|
||||
image_info = None
|
||||
civitai_match = re.match(r"https://civitai\.com/images/(\d+)", image_url)
|
||||
if civitai_match:
|
||||
if civitai_client is None:
|
||||
|
||||
112
py/routes/misc_model_routes.py
Normal file
112
py/routes/misc_model_routes.py
Normal file
@@ -0,0 +1,112 @@
|
||||
import logging
|
||||
from typing import Dict
|
||||
from aiohttp import web
|
||||
|
||||
from .base_model_routes import BaseModelRoutes
|
||||
from .model_route_registrar import ModelRouteRegistrar
|
||||
from ..services.misc_service import MiscService
|
||||
from ..services.service_registry import ServiceRegistry
|
||||
from ..config import config
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class MiscModelRoutes(BaseModelRoutes):
|
||||
"""Misc-specific route controller (VAE, Upscaler)"""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize Misc routes with Misc service"""
|
||||
super().__init__()
|
||||
self.template_name = "misc.html"
|
||||
|
||||
async def initialize_services(self):
|
||||
"""Initialize services from ServiceRegistry"""
|
||||
misc_scanner = await ServiceRegistry.get_misc_scanner()
|
||||
update_service = await ServiceRegistry.get_model_update_service()
|
||||
self.service = MiscService(misc_scanner, update_service=update_service)
|
||||
self.set_model_update_service(update_service)
|
||||
|
||||
# Attach service dependencies
|
||||
self.attach_service(self.service)
|
||||
|
||||
def setup_routes(self, app: web.Application):
|
||||
"""Setup Misc routes"""
|
||||
# Schedule service initialization on app startup
|
||||
app.on_startup.append(lambda _: self.initialize_services())
|
||||
|
||||
# Setup common routes with 'misc' prefix (includes page route)
|
||||
super().setup_routes(app, 'misc')
|
||||
|
||||
def setup_specific_routes(self, registrar: ModelRouteRegistrar, prefix: str):
|
||||
"""Setup Misc-specific routes"""
|
||||
# Misc info by name
|
||||
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/info/{name}', prefix, self.get_misc_info)
|
||||
|
||||
# VAE roots and Upscaler roots
|
||||
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/vae_roots', prefix, self.get_vae_roots)
|
||||
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/upscaler_roots', prefix, self.get_upscaler_roots)
|
||||
|
||||
def _validate_civitai_model_type(self, model_type: str) -> bool:
|
||||
"""Validate CivitAI model type for Misc (VAE or Upscaler)"""
|
||||
return model_type.lower() in ['vae', 'upscaler']
|
||||
|
||||
def _get_expected_model_types(self) -> str:
|
||||
"""Get expected model types string for error messages"""
|
||||
return "VAE or Upscaler"
|
||||
|
||||
def _parse_specific_params(self, request: web.Request) -> Dict:
|
||||
"""Parse Misc-specific parameters"""
|
||||
params: Dict = {}
|
||||
|
||||
if 'misc_hash' in request.query:
|
||||
params['hash_filters'] = {'single_hash': request.query['misc_hash'].lower()}
|
||||
elif 'misc_hashes' in request.query:
|
||||
params['hash_filters'] = {
|
||||
'multiple_hashes': [h.lower() for h in request.query['misc_hashes'].split(',')]
|
||||
}
|
||||
|
||||
return params
|
||||
|
||||
async def get_misc_info(self, request: web.Request) -> web.Response:
|
||||
"""Get detailed information for a specific misc model by name"""
|
||||
try:
|
||||
name = request.match_info.get('name', '')
|
||||
misc_info = await self.service.get_model_info_by_name(name)
|
||||
|
||||
if misc_info:
|
||||
return web.json_response(misc_info)
|
||||
else:
|
||||
return web.json_response({"error": "Misc model not found"}, status=404)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error in get_misc_info: {e}", exc_info=True)
|
||||
return web.json_response({"error": str(e)}, status=500)
|
||||
|
||||
async def get_vae_roots(self, request: web.Request) -> web.Response:
|
||||
"""Return the list of VAE roots from config"""
|
||||
try:
|
||||
roots = config.vae_roots
|
||||
return web.json_response({
|
||||
"success": True,
|
||||
"roots": roots
|
||||
})
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting VAE roots: {e}", exc_info=True)
|
||||
return web.json_response({
|
||||
"success": False,
|
||||
"error": str(e)
|
||||
}, status=500)
|
||||
|
||||
async def get_upscaler_roots(self, request: web.Request) -> web.Response:
|
||||
"""Return the list of upscaler roots from config"""
|
||||
try:
|
||||
roots = config.upscaler_roots
|
||||
return web.json_response({
|
||||
"success": True,
|
||||
"roots": roots
|
||||
})
|
||||
except Exception as e:
|
||||
logger.error(f"Error getting upscaler roots: {e}", exc_info=True)
|
||||
return web.json_response({
|
||||
"success": False,
|
||||
"error": str(e)
|
||||
}, status=500)
|
||||
@@ -81,7 +81,6 @@ class BaseModelService(ABC):
|
||||
update_available_only: bool = False,
|
||||
credit_required: Optional[bool] = None,
|
||||
allow_selling_generated_content: Optional[bool] = None,
|
||||
tag_logic: str = "any",
|
||||
**kwargs,
|
||||
) -> Dict:
|
||||
"""Get paginated and filtered model data"""
|
||||
@@ -110,7 +109,6 @@ class BaseModelService(ABC):
|
||||
tags=tags,
|
||||
favorites_only=favorites_only,
|
||||
search_options=search_options,
|
||||
tag_logic=tag_logic,
|
||||
)
|
||||
|
||||
if search:
|
||||
@@ -243,7 +241,6 @@ class BaseModelService(ABC):
|
||||
tags: Optional[Dict[str, str]] = None,
|
||||
favorites_only: bool = False,
|
||||
search_options: dict = None,
|
||||
tag_logic: str = "any",
|
||||
) -> List[Dict]:
|
||||
"""Apply common filters that work across all model types"""
|
||||
normalized_options = self.search_strategy.normalize_options(search_options)
|
||||
@@ -256,7 +253,6 @@ class BaseModelService(ABC):
|
||||
tags=tags,
|
||||
favorites_only=favorites_only,
|
||||
search_options=normalized_options,
|
||||
tag_logic=tag_logic,
|
||||
)
|
||||
return self.filter_set.apply(data, criteria)
|
||||
|
||||
@@ -380,13 +376,6 @@ class BaseModelService(ABC):
|
||||
strategy = "same_base"
|
||||
same_base_mode = strategy == "same_base"
|
||||
|
||||
# Check user setting for hiding early access updates
|
||||
hide_early_access = False
|
||||
try:
|
||||
hide_early_access = bool(self.settings.get("hide_early_access_updates", False))
|
||||
except Exception:
|
||||
hide_early_access = False
|
||||
|
||||
records = None
|
||||
resolved: Optional[Dict[int, bool]] = None
|
||||
if same_base_mode:
|
||||
@@ -395,7 +384,7 @@ class BaseModelService(ABC):
|
||||
try:
|
||||
records = await record_method(self.model_type, ordered_ids)
|
||||
resolved = {
|
||||
model_id: record.has_update(hide_early_access=hide_early_access)
|
||||
model_id: record.has_update()
|
||||
for model_id, record in records.items()
|
||||
}
|
||||
except Exception as exc:
|
||||
@@ -413,7 +402,7 @@ class BaseModelService(ABC):
|
||||
bulk_method = getattr(self.update_service, "has_updates_bulk", None)
|
||||
if callable(bulk_method):
|
||||
try:
|
||||
resolved = await bulk_method(self.model_type, ordered_ids, hide_early_access=hide_early_access)
|
||||
resolved = await bulk_method(self.model_type, ordered_ids)
|
||||
except Exception as exc:
|
||||
logger.error(
|
||||
"Failed to resolve update status in bulk for %s models (%s): %s",
|
||||
@@ -426,7 +415,7 @@ class BaseModelService(ABC):
|
||||
|
||||
if resolved is None:
|
||||
tasks = [
|
||||
self.update_service.has_update(self.model_type, model_id, hide_early_access=hide_early_access)
|
||||
self.update_service.has_update(self.model_type, model_id)
|
||||
for model_id in ordered_ids
|
||||
]
|
||||
results = await asyncio.gather(*tasks, return_exceptions=True)
|
||||
@@ -464,7 +453,6 @@ class BaseModelService(ABC):
|
||||
flag = record.has_update_for_base(
|
||||
threshold_version,
|
||||
base_model,
|
||||
hide_early_access=hide_early_access,
|
||||
)
|
||||
else:
|
||||
flag = default_flag
|
||||
|
||||
@@ -1,259 +0,0 @@
|
||||
"""
|
||||
Cache Entry Validator
|
||||
|
||||
Validates and repairs cache entries to prevent runtime errors from
|
||||
missing or invalid critical fields.
|
||||
"""
|
||||
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Any, Dict, List, Optional, Tuple
|
||||
import logging
|
||||
import os
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class ValidationResult:
|
||||
"""Result of validating a single cache entry."""
|
||||
is_valid: bool
|
||||
repaired: bool
|
||||
errors: List[str] = field(default_factory=list)
|
||||
entry: Optional[Dict[str, Any]] = None
|
||||
|
||||
|
||||
class CacheEntryValidator:
|
||||
"""
|
||||
Validates and repairs cache entry core fields.
|
||||
|
||||
Critical fields that cause runtime errors when missing:
|
||||
- file_path: KeyError in multiple locations
|
||||
- sha256: KeyError/AttributeError in hash operations
|
||||
|
||||
Medium severity fields that may cause sorting/display issues:
|
||||
- size: KeyError during sorting
|
||||
- modified: KeyError during sorting
|
||||
- model_name: AttributeError on .lower() calls
|
||||
|
||||
Low severity fields:
|
||||
- tags: KeyError/TypeError in recipe operations
|
||||
"""
|
||||
|
||||
# Field definitions: (default_value, is_required)
|
||||
CORE_FIELDS: Dict[str, Tuple[Any, bool]] = {
|
||||
'file_path': ('', True),
|
||||
'sha256': ('', True),
|
||||
'file_name': ('', False),
|
||||
'model_name': ('', False),
|
||||
'folder': ('', False),
|
||||
'size': (0, False),
|
||||
'modified': (0.0, False),
|
||||
'tags': ([], False),
|
||||
'preview_url': ('', False),
|
||||
'base_model': ('', False),
|
||||
'from_civitai': (True, False),
|
||||
'favorite': (False, False),
|
||||
'exclude': (False, False),
|
||||
'db_checked': (False, False),
|
||||
'preview_nsfw_level': (0, False),
|
||||
'notes': ('', False),
|
||||
'usage_tips': ('', False),
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def validate(cls, entry: Dict[str, Any], *, auto_repair: bool = True) -> ValidationResult:
|
||||
"""
|
||||
Validate a single cache entry.
|
||||
|
||||
Args:
|
||||
entry: The cache entry dictionary to validate
|
||||
auto_repair: If True, attempt to repair missing/invalid fields
|
||||
|
||||
Returns:
|
||||
ValidationResult with validation status and optionally repaired entry
|
||||
"""
|
||||
if entry is None:
|
||||
return ValidationResult(
|
||||
is_valid=False,
|
||||
repaired=False,
|
||||
errors=['Entry is None'],
|
||||
entry=None
|
||||
)
|
||||
|
||||
if not isinstance(entry, dict):
|
||||
return ValidationResult(
|
||||
is_valid=False,
|
||||
repaired=False,
|
||||
errors=[f'Entry is not a dict: {type(entry).__name__}'],
|
||||
entry=None
|
||||
)
|
||||
|
||||
errors: List[str] = []
|
||||
repaired = False
|
||||
working_entry = dict(entry) if auto_repair else entry
|
||||
|
||||
for field_name, (default_value, is_required) in cls.CORE_FIELDS.items():
|
||||
value = working_entry.get(field_name)
|
||||
|
||||
# Check if field is missing or None
|
||||
if value is None:
|
||||
if is_required:
|
||||
errors.append(f"Required field '{field_name}' is missing or None")
|
||||
if auto_repair:
|
||||
working_entry[field_name] = cls._get_default_copy(default_value)
|
||||
repaired = True
|
||||
continue
|
||||
|
||||
# Validate field type and value
|
||||
field_error = cls._validate_field(field_name, value, default_value)
|
||||
if field_error:
|
||||
errors.append(field_error)
|
||||
if auto_repair:
|
||||
working_entry[field_name] = cls._get_default_copy(default_value)
|
||||
repaired = True
|
||||
|
||||
# Special validation: file_path must not be empty for required field
|
||||
file_path = working_entry.get('file_path', '')
|
||||
if not file_path or (isinstance(file_path, str) and not file_path.strip()):
|
||||
errors.append("Required field 'file_path' is empty")
|
||||
# Cannot repair empty file_path - entry is invalid
|
||||
return ValidationResult(
|
||||
is_valid=False,
|
||||
repaired=repaired,
|
||||
errors=errors,
|
||||
entry=working_entry if auto_repair else None
|
||||
)
|
||||
|
||||
# Special validation: sha256 must not be empty for required field
|
||||
sha256 = working_entry.get('sha256', '')
|
||||
if not sha256 or (isinstance(sha256, str) and not sha256.strip()):
|
||||
errors.append("Required field 'sha256' is empty")
|
||||
# Cannot repair empty sha256 - entry is invalid
|
||||
return ValidationResult(
|
||||
is_valid=False,
|
||||
repaired=repaired,
|
||||
errors=errors,
|
||||
entry=working_entry if auto_repair else None
|
||||
)
|
||||
|
||||
# Normalize sha256 to lowercase if needed
|
||||
if isinstance(sha256, str):
|
||||
normalized_sha = sha256.lower().strip()
|
||||
if normalized_sha != sha256:
|
||||
working_entry['sha256'] = normalized_sha
|
||||
repaired = True
|
||||
|
||||
# Determine if entry is valid
|
||||
# Entry is valid if no critical required field errors remain after repair
|
||||
# Critical fields are file_path and sha256
|
||||
CRITICAL_REQUIRED_FIELDS = {'file_path', 'sha256'}
|
||||
has_critical_errors = any(
|
||||
"Required field" in error and
|
||||
any(f"'{field}'" in error for field in CRITICAL_REQUIRED_FIELDS)
|
||||
for error in errors
|
||||
)
|
||||
|
||||
is_valid = not has_critical_errors
|
||||
|
||||
return ValidationResult(
|
||||
is_valid=is_valid,
|
||||
repaired=repaired,
|
||||
errors=errors,
|
||||
entry=working_entry if auto_repair else entry
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def validate_batch(
|
||||
cls,
|
||||
entries: List[Dict[str, Any]],
|
||||
*,
|
||||
auto_repair: bool = True
|
||||
) -> Tuple[List[Dict[str, Any]], List[Dict[str, Any]]]:
|
||||
"""
|
||||
Validate a batch of cache entries.
|
||||
|
||||
Args:
|
||||
entries: List of cache entry dictionaries to validate
|
||||
auto_repair: If True, attempt to repair missing/invalid fields
|
||||
|
||||
Returns:
|
||||
Tuple of (valid_entries, invalid_entries)
|
||||
"""
|
||||
if not entries:
|
||||
return [], []
|
||||
|
||||
valid_entries: List[Dict[str, Any]] = []
|
||||
invalid_entries: List[Dict[str, Any]] = []
|
||||
|
||||
for entry in entries:
|
||||
result = cls.validate(entry, auto_repair=auto_repair)
|
||||
|
||||
if result.is_valid:
|
||||
# Use repaired entry if available, otherwise original
|
||||
valid_entries.append(result.entry if result.entry else entry)
|
||||
else:
|
||||
invalid_entries.append(entry)
|
||||
# Log invalid entries for debugging
|
||||
file_path = entry.get('file_path', '<unknown>') if isinstance(entry, dict) else '<not a dict>'
|
||||
logger.warning(
|
||||
f"Invalid cache entry for '{file_path}': {', '.join(result.errors)}"
|
||||
)
|
||||
|
||||
return valid_entries, invalid_entries
|
||||
|
||||
@classmethod
|
||||
def _validate_field(cls, field_name: str, value: Any, default_value: Any) -> Optional[str]:
|
||||
"""
|
||||
Validate a specific field value.
|
||||
|
||||
Returns an error message if invalid, None if valid.
|
||||
"""
|
||||
expected_type = type(default_value)
|
||||
|
||||
# Special handling for numeric types
|
||||
if expected_type == int:
|
||||
if not isinstance(value, (int, float)):
|
||||
return f"Field '{field_name}' should be numeric, got {type(value).__name__}"
|
||||
elif expected_type == float:
|
||||
if not isinstance(value, (int, float)):
|
||||
return f"Field '{field_name}' should be numeric, got {type(value).__name__}"
|
||||
elif expected_type == bool:
|
||||
# Be lenient with boolean fields - accept truthy/falsy values
|
||||
pass
|
||||
elif expected_type == str:
|
||||
if not isinstance(value, str):
|
||||
return f"Field '{field_name}' should be string, got {type(value).__name__}"
|
||||
elif expected_type == list:
|
||||
if not isinstance(value, (list, tuple)):
|
||||
return f"Field '{field_name}' should be list, got {type(value).__name__}"
|
||||
|
||||
return None
|
||||
|
||||
@classmethod
|
||||
def _get_default_copy(cls, default_value: Any) -> Any:
|
||||
"""Get a copy of the default value to avoid shared mutable state."""
|
||||
if isinstance(default_value, list):
|
||||
return list(default_value)
|
||||
if isinstance(default_value, dict):
|
||||
return dict(default_value)
|
||||
return default_value
|
||||
|
||||
@classmethod
|
||||
def get_file_path_safe(cls, entry: Dict[str, Any], default: str = '') -> str:
|
||||
"""Safely get file_path from an entry."""
|
||||
if not isinstance(entry, dict):
|
||||
return default
|
||||
value = entry.get('file_path')
|
||||
if isinstance(value, str):
|
||||
return value
|
||||
return default
|
||||
|
||||
@classmethod
|
||||
def get_sha256_safe(cls, entry: Dict[str, Any], default: str = '') -> str:
|
||||
"""Safely get sha256 from an entry."""
|
||||
if not isinstance(entry, dict):
|
||||
return default
|
||||
value = entry.get('sha256')
|
||||
if isinstance(value, str):
|
||||
return value.lower()
|
||||
return default
|
||||
@@ -1,201 +0,0 @@
|
||||
"""
|
||||
Cache Health Monitor
|
||||
|
||||
Monitors cache health status and determines when user intervention is needed.
|
||||
"""
|
||||
|
||||
from dataclasses import dataclass, field
|
||||
from enum import Enum
|
||||
from typing import Any, Dict, List, Optional
|
||||
import logging
|
||||
|
||||
from .cache_entry_validator import CacheEntryValidator, ValidationResult
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class CacheHealthStatus(Enum):
|
||||
"""Health status of the cache."""
|
||||
HEALTHY = "healthy"
|
||||
DEGRADED = "degraded"
|
||||
CORRUPTED = "corrupted"
|
||||
|
||||
|
||||
@dataclass
|
||||
class HealthReport:
|
||||
"""Report of cache health check."""
|
||||
status: CacheHealthStatus
|
||||
total_entries: int
|
||||
valid_entries: int
|
||||
invalid_entries: int
|
||||
repaired_entries: int
|
||||
invalid_paths: List[str] = field(default_factory=list)
|
||||
message: str = ""
|
||||
|
||||
@property
|
||||
def corruption_rate(self) -> float:
|
||||
"""Calculate the percentage of invalid entries."""
|
||||
if self.total_entries <= 0:
|
||||
return 0.0
|
||||
return self.invalid_entries / self.total_entries
|
||||
|
||||
def to_dict(self) -> Dict[str, Any]:
|
||||
"""Convert to dictionary for JSON serialization."""
|
||||
return {
|
||||
'status': self.status.value,
|
||||
'total_entries': self.total_entries,
|
||||
'valid_entries': self.valid_entries,
|
||||
'invalid_entries': self.invalid_entries,
|
||||
'repaired_entries': self.repaired_entries,
|
||||
'corruption_rate': f"{self.corruption_rate:.1%}",
|
||||
'invalid_paths': self.invalid_paths[:10], # Limit to first 10
|
||||
'message': self.message,
|
||||
}
|
||||
|
||||
|
||||
class CacheHealthMonitor:
|
||||
"""
|
||||
Monitors cache health and determines appropriate status.
|
||||
|
||||
Thresholds:
|
||||
- HEALTHY: 0% invalid entries
|
||||
- DEGRADED: 0-5% invalid entries (auto-repaired, user should rebuild)
|
||||
- CORRUPTED: >5% invalid entries (significant data loss likely)
|
||||
"""
|
||||
|
||||
# Threshold percentages
|
||||
DEGRADED_THRESHOLD = 0.01 # 1% - show warning
|
||||
CORRUPTED_THRESHOLD = 0.05 # 5% - critical warning
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
*,
|
||||
degraded_threshold: float = DEGRADED_THRESHOLD,
|
||||
corrupted_threshold: float = CORRUPTED_THRESHOLD
|
||||
):
|
||||
"""
|
||||
Initialize the health monitor.
|
||||
|
||||
Args:
|
||||
degraded_threshold: Corruption rate threshold for DEGRADED status
|
||||
corrupted_threshold: Corruption rate threshold for CORRUPTED status
|
||||
"""
|
||||
self.degraded_threshold = degraded_threshold
|
||||
self.corrupted_threshold = corrupted_threshold
|
||||
|
||||
def check_health(
|
||||
self,
|
||||
entries: List[Dict[str, Any]],
|
||||
*,
|
||||
auto_repair: bool = True
|
||||
) -> HealthReport:
|
||||
"""
|
||||
Check the health of cache entries.
|
||||
|
||||
Args:
|
||||
entries: List of cache entry dictionaries to check
|
||||
auto_repair: If True, attempt to repair entries during validation
|
||||
|
||||
Returns:
|
||||
HealthReport with status and statistics
|
||||
"""
|
||||
if not entries:
|
||||
return HealthReport(
|
||||
status=CacheHealthStatus.HEALTHY,
|
||||
total_entries=0,
|
||||
valid_entries=0,
|
||||
invalid_entries=0,
|
||||
repaired_entries=0,
|
||||
message="Cache is empty"
|
||||
)
|
||||
|
||||
total_entries = len(entries)
|
||||
valid_entries: List[Dict[str, Any]] = []
|
||||
invalid_entries: List[Dict[str, Any]] = []
|
||||
repaired_count = 0
|
||||
invalid_paths: List[str] = []
|
||||
|
||||
for entry in entries:
|
||||
result = CacheEntryValidator.validate(entry, auto_repair=auto_repair)
|
||||
|
||||
if result.is_valid:
|
||||
valid_entries.append(result.entry if result.entry else entry)
|
||||
if result.repaired:
|
||||
repaired_count += 1
|
||||
else:
|
||||
invalid_entries.append(entry)
|
||||
# Extract file path for reporting
|
||||
file_path = CacheEntryValidator.get_file_path_safe(entry, '<unknown>')
|
||||
invalid_paths.append(file_path)
|
||||
|
||||
invalid_count = len(invalid_entries)
|
||||
valid_count = len(valid_entries)
|
||||
|
||||
# Determine status based on corruption rate
|
||||
corruption_rate = invalid_count / total_entries if total_entries > 0 else 0.0
|
||||
|
||||
if invalid_count == 0:
|
||||
status = CacheHealthStatus.HEALTHY
|
||||
message = "Cache is healthy"
|
||||
elif corruption_rate >= self.corrupted_threshold:
|
||||
status = CacheHealthStatus.CORRUPTED
|
||||
message = (
|
||||
f"Cache is corrupted: {invalid_count} invalid entries "
|
||||
f"({corruption_rate:.1%}). Rebuild recommended."
|
||||
)
|
||||
elif corruption_rate >= self.degraded_threshold or invalid_count > 0:
|
||||
status = CacheHealthStatus.DEGRADED
|
||||
message = (
|
||||
f"Cache has {invalid_count} invalid entries "
|
||||
f"({corruption_rate:.1%}). Consider rebuilding cache."
|
||||
)
|
||||
else:
|
||||
# This shouldn't happen, but handle gracefully
|
||||
status = CacheHealthStatus.HEALTHY
|
||||
message = "Cache is healthy"
|
||||
|
||||
# Log the health check result
|
||||
if status != CacheHealthStatus.HEALTHY:
|
||||
logger.warning(
|
||||
f"Cache health check: {status.value} - "
|
||||
f"{invalid_count}/{total_entries} invalid, "
|
||||
f"{repaired_count} repaired"
|
||||
)
|
||||
if invalid_paths:
|
||||
logger.debug(f"Invalid entry paths: {invalid_paths[:5]}")
|
||||
|
||||
return HealthReport(
|
||||
status=status,
|
||||
total_entries=total_entries,
|
||||
valid_entries=valid_count,
|
||||
invalid_entries=invalid_count,
|
||||
repaired_entries=repaired_count,
|
||||
invalid_paths=invalid_paths,
|
||||
message=message
|
||||
)
|
||||
|
||||
def should_notify_user(self, report: HealthReport) -> bool:
|
||||
"""
|
||||
Determine if the user should be notified about cache health.
|
||||
|
||||
Args:
|
||||
report: The health report to evaluate
|
||||
|
||||
Returns:
|
||||
True if user should be notified
|
||||
"""
|
||||
return report.status != CacheHealthStatus.HEALTHY
|
||||
|
||||
def get_notification_severity(self, report: HealthReport) -> str:
|
||||
"""
|
||||
Get the severity level for user notification.
|
||||
|
||||
Args:
|
||||
report: The health report to evaluate
|
||||
|
||||
Returns:
|
||||
Severity string: 'warning' or 'error'
|
||||
"""
|
||||
if report.status == CacheHealthStatus.CORRUPTED:
|
||||
return 'error'
|
||||
return 'warning'
|
||||
@@ -43,7 +43,6 @@ class CheckpointService(BaseModelService):
|
||||
"sub_type": sub_type,
|
||||
"favorite": checkpoint_data.get("favorite", False),
|
||||
"update_available": bool(checkpoint_data.get("update_available", False)),
|
||||
"skip_metadata_refresh": bool(checkpoint_data.get("skip_metadata_refresh", False)),
|
||||
"civitai": self.filter_civitai_data(checkpoint_data.get("civitai", {}), minimal=True)
|
||||
}
|
||||
|
||||
|
||||
@@ -86,7 +86,6 @@ class DownloadCoordinator:
|
||||
progress_callback=progress_callback,
|
||||
download_id=download_id,
|
||||
source=payload.get("source"),
|
||||
file_params=payload.get("file_params"),
|
||||
)
|
||||
|
||||
result["download_id"] = download_id
|
||||
|
||||
@@ -9,7 +9,7 @@ from collections import OrderedDict
|
||||
import uuid
|
||||
from typing import Dict, List, Optional, Set, Tuple
|
||||
from urllib.parse import urlparse
|
||||
from ..utils.models import LoraMetadata, CheckpointMetadata, EmbeddingMetadata
|
||||
from ..utils.models import LoraMetadata, CheckpointMetadata, EmbeddingMetadata, MiscMetadata
|
||||
from ..utils.constants import CARD_PREVIEW_WIDTH, DIFFUSION_MODEL_BASE_MODELS, VALID_LORA_TYPES
|
||||
from ..utils.civitai_utils import rewrite_preview_url
|
||||
from ..utils.preview_selection import select_preview_media
|
||||
@@ -60,6 +60,10 @@ class DownloadManager:
|
||||
"""Get the checkpoint scanner from registry"""
|
||||
return await ServiceRegistry.get_checkpoint_scanner()
|
||||
|
||||
async def _get_misc_scanner(self):
|
||||
"""Get the misc scanner from registry"""
|
||||
return await ServiceRegistry.get_misc_scanner()
|
||||
|
||||
async def download_from_civitai(
|
||||
self,
|
||||
model_id: int = None,
|
||||
@@ -70,7 +74,6 @@ class DownloadManager:
|
||||
use_default_paths: bool = False,
|
||||
download_id: str = None,
|
||||
source: str = None,
|
||||
file_params: Dict = None,
|
||||
) -> Dict:
|
||||
"""Download model from Civitai with task tracking and concurrency control
|
||||
|
||||
@@ -83,7 +86,6 @@ class DownloadManager:
|
||||
use_default_paths: Flag to use default paths
|
||||
download_id: Unique identifier for this download task
|
||||
source: Optional source parameter to specify metadata provider
|
||||
file_params: Optional dict with file selection params (type, format, size, fp, isPrimary)
|
||||
|
||||
Returns:
|
||||
Dict with download result
|
||||
@@ -124,7 +126,6 @@ class DownloadManager:
|
||||
progress_callback,
|
||||
use_default_paths,
|
||||
source,
|
||||
file_params,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -158,7 +159,6 @@ class DownloadManager:
|
||||
progress_callback=None,
|
||||
use_default_paths: bool = False,
|
||||
source: str = None,
|
||||
file_params: Dict = None,
|
||||
):
|
||||
"""Execute download with semaphore to limit concurrency"""
|
||||
# Update status to waiting
|
||||
@@ -219,7 +219,6 @@ class DownloadManager:
|
||||
use_default_paths,
|
||||
task_id,
|
||||
source,
|
||||
file_params,
|
||||
)
|
||||
|
||||
# Update status based on result
|
||||
@@ -271,7 +270,6 @@ class DownloadManager:
|
||||
use_default_paths,
|
||||
download_id=None,
|
||||
source=None,
|
||||
file_params=None,
|
||||
):
|
||||
"""Wrapper for original download_from_civitai implementation"""
|
||||
try:
|
||||
@@ -281,6 +279,7 @@ class DownloadManager:
|
||||
lora_scanner = await self._get_lora_scanner()
|
||||
checkpoint_scanner = await self._get_checkpoint_scanner()
|
||||
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
|
||||
misc_scanner = await self._get_misc_scanner()
|
||||
|
||||
# Check lora scanner first
|
||||
if await lora_scanner.check_model_version_exists(model_version_id):
|
||||
@@ -305,6 +304,13 @@ class DownloadManager:
|
||||
"error": "Model version already exists in embedding library",
|
||||
}
|
||||
|
||||
# Check misc scanner (VAE, Upscaler)
|
||||
if await misc_scanner.check_model_version_exists(model_version_id):
|
||||
return {
|
||||
"success": False,
|
||||
"error": "Model version already exists in misc library",
|
||||
}
|
||||
|
||||
# Use CivArchive provider directly when source is 'civarchive'
|
||||
# This prioritizes CivArchive metadata (with mirror availability info) over Civitai
|
||||
if source == "civarchive":
|
||||
@@ -343,6 +349,10 @@ class DownloadManager:
|
||||
model_type = "lora"
|
||||
elif model_type_from_info == "textualinversion":
|
||||
model_type = "embedding"
|
||||
elif model_type_from_info == "vae":
|
||||
model_type = "misc"
|
||||
elif model_type_from_info == "upscaler":
|
||||
model_type = "misc"
|
||||
else:
|
||||
return {
|
||||
"success": False,
|
||||
@@ -385,6 +395,14 @@ class DownloadManager:
|
||||
"success": False,
|
||||
"error": "Model version already exists in embedding library",
|
||||
}
|
||||
elif model_type == "misc":
|
||||
# Check misc scanner (VAE, Upscaler)
|
||||
misc_scanner = await self._get_misc_scanner()
|
||||
if await misc_scanner.check_model_version_exists(version_id):
|
||||
return {
|
||||
"success": False,
|
||||
"error": "Model version already exists in misc library",
|
||||
}
|
||||
|
||||
# Handle use_default_paths
|
||||
if use_default_paths:
|
||||
@@ -419,6 +437,26 @@ class DownloadManager:
|
||||
"error": "Default embedding root path not set in settings",
|
||||
}
|
||||
save_dir = default_path
|
||||
elif model_type == "misc":
|
||||
from ..config import config
|
||||
|
||||
civitai_type = version_info.get("model", {}).get("type", "").lower()
|
||||
if civitai_type == "vae":
|
||||
default_paths = config.vae_roots
|
||||
error_msg = "VAE root path not configured"
|
||||
elif civitai_type == "upscaler":
|
||||
default_paths = config.upscaler_roots
|
||||
error_msg = "Upscaler root path not configured"
|
||||
else:
|
||||
default_paths = config.misc_roots
|
||||
error_msg = "Misc root path not configured"
|
||||
|
||||
if not default_paths:
|
||||
return {
|
||||
"success": False,
|
||||
"error": error_msg,
|
||||
}
|
||||
save_dir = default_paths[0] if default_paths else ""
|
||||
|
||||
# Calculate relative path using template
|
||||
relative_path = self._calculate_relative_path(version_info, model_type)
|
||||
@@ -462,57 +500,16 @@ class DownloadManager:
|
||||
await progress_callback(0)
|
||||
|
||||
# 2. Get file information
|
||||
files = version_info.get("files", [])
|
||||
file_info = None
|
||||
|
||||
# If file_params is provided, try to find matching file
|
||||
if file_params and model_version_id:
|
||||
target_type = file_params.get("type", "Model")
|
||||
target_format = file_params.get("format", "SafeTensor")
|
||||
target_size = file_params.get("size", "full")
|
||||
target_fp = file_params.get("fp")
|
||||
is_primary = file_params.get("isPrimary", False)
|
||||
|
||||
if is_primary:
|
||||
# Find primary file
|
||||
file_info = next(
|
||||
(f for f in files if f.get("primary") and f.get("type") in ("Model", "Negative")),
|
||||
None
|
||||
)
|
||||
else:
|
||||
# Match by metadata
|
||||
for f in files:
|
||||
f_type = f.get("type", "")
|
||||
f_meta = f.get("metadata", {})
|
||||
|
||||
# Check type match
|
||||
if f_type != target_type:
|
||||
continue
|
||||
|
||||
# Check metadata match
|
||||
if f_meta.get("format") != target_format:
|
||||
continue
|
||||
if f_meta.get("size") != target_size:
|
||||
continue
|
||||
if target_fp and f_meta.get("fp") != target_fp:
|
||||
continue
|
||||
|
||||
file_info = f
|
||||
break
|
||||
|
||||
# Fallback to primary file if no match found
|
||||
file_info = next(
|
||||
(
|
||||
f
|
||||
for f in version_info.get("files", [])
|
||||
if f.get("primary") and f.get("type") in ("Model", "Negative")
|
||||
),
|
||||
None,
|
||||
)
|
||||
if not file_info:
|
||||
file_info = next(
|
||||
(
|
||||
f
|
||||
for f in files
|
||||
if f.get("primary") and f.get("type") in ("Model", "Negative")
|
||||
),
|
||||
None,
|
||||
)
|
||||
|
||||
if not file_info:
|
||||
return {"success": False, "error": "No suitable file found in metadata"}
|
||||
return {"success": False, "error": "No primary file found in metadata"}
|
||||
mirrors = file_info.get("mirrors") or []
|
||||
download_urls = []
|
||||
if mirrors:
|
||||
@@ -543,9 +540,7 @@ class DownloadManager:
|
||||
return {"success": False, "error": "No mirror URL found"}
|
||||
|
||||
# 3. Prepare download
|
||||
file_name = file_info.get("name", "")
|
||||
if not file_name:
|
||||
return {"success": False, "error": "No filename found in file info"}
|
||||
file_name = file_info["name"]
|
||||
save_path = os.path.join(save_dir, file_name)
|
||||
|
||||
# 5. Prepare metadata based on model type
|
||||
@@ -564,6 +559,11 @@ class DownloadManager:
|
||||
version_info, file_info, save_path
|
||||
)
|
||||
logger.info(f"Creating EmbeddingMetadata for {file_name}")
|
||||
elif model_type == "misc":
|
||||
metadata = MiscMetadata.from_civitai_info(
|
||||
version_info, file_info, save_path
|
||||
)
|
||||
logger.info(f"Creating MiscMetadata for {file_name}")
|
||||
|
||||
# 6. Start download process
|
||||
result = await self._execute_download(
|
||||
@@ -669,6 +669,8 @@ class DownloadManager:
|
||||
scanner = await self._get_checkpoint_scanner()
|
||||
elif model_type == "embedding":
|
||||
scanner = await ServiceRegistry.get_embedding_scanner()
|
||||
elif model_type == "misc":
|
||||
scanner = await self._get_misc_scanner()
|
||||
except Exception as exc:
|
||||
logger.debug("Failed to acquire scanner for %s models: %s", model_type, exc)
|
||||
|
||||
@@ -1065,6 +1067,9 @@ class DownloadManager:
|
||||
elif model_type == "embedding":
|
||||
scanner = await ServiceRegistry.get_embedding_scanner()
|
||||
logger.info(f"Updating embedding cache for {actual_file_paths[0]}")
|
||||
elif model_type == "misc":
|
||||
scanner = await self._get_misc_scanner()
|
||||
logger.info(f"Updating misc cache for {actual_file_paths[0]}")
|
||||
|
||||
adjust_cached_entry = (
|
||||
getattr(scanner, "adjust_cached_entry", None)
|
||||
@@ -1174,6 +1179,14 @@ class DownloadManager:
|
||||
".pkl",
|
||||
".sft",
|
||||
}
|
||||
if model_type == "misc":
|
||||
return {
|
||||
".ckpt",
|
||||
".pt",
|
||||
".bin",
|
||||
".pth",
|
||||
".safetensors",
|
||||
}
|
||||
return {".safetensors"}
|
||||
|
||||
async def _extract_model_files_from_archive(
|
||||
|
||||
@@ -43,7 +43,6 @@ class EmbeddingService(BaseModelService):
|
||||
"sub_type": sub_type,
|
||||
"favorite": embedding_data.get("favorite", False),
|
||||
"update_available": bool(embedding_data.get("update_available", False)),
|
||||
"skip_metadata_refresh": bool(embedding_data.get("skip_metadata_refresh", False)),
|
||||
"civitai": self.filter_civitai_data(embedding_data.get("civitai", {}), minimal=True)
|
||||
}
|
||||
|
||||
|
||||
@@ -30,36 +30,36 @@ class LoraScanner(ModelScanner):
|
||||
|
||||
async def diagnose_hash_index(self):
|
||||
"""Diagnostic method to verify hash index functionality"""
|
||||
logger.debug("\n\n*** DIAGNOSING LORA HASH INDEX ***\n\n")
|
||||
print("\n\n*** DIAGNOSING LORA HASH INDEX ***\n\n", file=sys.stderr)
|
||||
|
||||
# First check if the hash index has any entries
|
||||
if hasattr(self, '_hash_index'):
|
||||
index_entries = len(self._hash_index._hash_to_path)
|
||||
logger.debug(f"Hash index has {index_entries} entries")
|
||||
print(f"Hash index has {index_entries} entries", file=sys.stderr)
|
||||
|
||||
# Print a few example entries if available
|
||||
if index_entries > 0:
|
||||
logger.debug("\nSample hash index entries:")
|
||||
print("\nSample hash index entries:", file=sys.stderr)
|
||||
count = 0
|
||||
for hash_val, path in self._hash_index._hash_to_path.items():
|
||||
if count < 5: # Just show the first 5
|
||||
logger.debug(f"Hash: {hash_val[:8]}... -> Path: {path}")
|
||||
print(f"Hash: {hash_val[:8]}... -> Path: {path}", file=sys.stderr)
|
||||
count += 1
|
||||
else:
|
||||
break
|
||||
else:
|
||||
logger.debug("Hash index not initialized")
|
||||
print("Hash index not initialized", file=sys.stderr)
|
||||
|
||||
# Try looking up by a known hash for testing
|
||||
if not hasattr(self, '_hash_index') or not self._hash_index._hash_to_path:
|
||||
logger.debug("No hash entries to test lookup with")
|
||||
print("No hash entries to test lookup with", file=sys.stderr)
|
||||
return
|
||||
|
||||
test_hash = next(iter(self._hash_index._hash_to_path.keys()))
|
||||
test_path = self._hash_index.get_path(test_hash)
|
||||
logger.debug(f"\nTest lookup by hash: {test_hash[:8]}... -> {test_path}")
|
||||
print(f"\nTest lookup by hash: {test_hash[:8]}... -> {test_path}", file=sys.stderr)
|
||||
|
||||
# Also test reverse lookup
|
||||
test_hash_result = self._hash_index.get_hash(test_path)
|
||||
logger.debug(f"Test reverse lookup: {test_path} -> {test_hash_result[:8]}...\n\n")
|
||||
print(f"Test reverse lookup: {test_path} -> {test_hash_result[:8]}...\n\n", file=sys.stderr)
|
||||
|
||||
|
||||
@@ -48,7 +48,6 @@ class LoraService(BaseModelService):
|
||||
"notes": lora_data.get("notes", ""),
|
||||
"favorite": lora_data.get("favorite", False),
|
||||
"update_available": bool(lora_data.get("update_available", False)),
|
||||
"skip_metadata_refresh": bool(lora_data.get("skip_metadata_refresh", False)),
|
||||
"sub_type": sub_type,
|
||||
"civitai": self.filter_civitai_data(
|
||||
lora_data.get("civitai", {}), minimal=True
|
||||
|
||||
@@ -44,8 +44,6 @@ async def initialize_metadata_providers():
|
||||
logger.debug(f"SQLite metadata provider registered with database: {db_path}")
|
||||
else:
|
||||
logger.warning("Metadata archive database is enabled but database file not found")
|
||||
logger.info("Automatically disabling enable_metadata_archive_db setting")
|
||||
settings_manager.set('enable_metadata_archive_db', False)
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to initialize SQLite metadata provider: {e}")
|
||||
|
||||
|
||||
@@ -243,27 +243,17 @@ class MetadataSyncService:
|
||||
last_error = error or last_error
|
||||
|
||||
if civitai_metadata is None or metadata_provider is None:
|
||||
# Track if we need to save metadata
|
||||
needs_save = False
|
||||
|
||||
if sqlite_attempted:
|
||||
model_data["db_checked"] = True
|
||||
needs_save = True
|
||||
|
||||
if civitai_api_not_found:
|
||||
model_data["from_civitai"] = False
|
||||
model_data["civitai_deleted"] = True
|
||||
model_data["db_checked"] = sqlite_attempted or (enable_archive and model_data.get("db_checked", False))
|
||||
model_data["last_checked_at"] = datetime.now().timestamp()
|
||||
needs_save = True
|
||||
|
||||
# Save metadata if any state was updated
|
||||
if needs_save:
|
||||
data_to_save = model_data.copy()
|
||||
data_to_save.pop("folder", None)
|
||||
# Update last_checked_at for sqlite-only attempts if not already set
|
||||
if "last_checked_at" not in data_to_save:
|
||||
data_to_save["last_checked_at"] = datetime.now().timestamp()
|
||||
await self._metadata_manager.save_metadata(file_path, data_to_save)
|
||||
|
||||
default_error = (
|
||||
|
||||
55
py/services/misc_scanner.py
Normal file
55
py/services/misc_scanner.py
Normal file
@@ -0,0 +1,55 @@
|
||||
import logging
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from ..utils.models import MiscMetadata
|
||||
from ..config import config
|
||||
from .model_scanner import ModelScanner
|
||||
from .model_hash_index import ModelHashIndex
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class MiscScanner(ModelScanner):
|
||||
"""Service for scanning and managing misc files (VAE, Upscaler)"""
|
||||
|
||||
def __init__(self):
|
||||
# Define supported file extensions (combined from VAE and upscaler)
|
||||
file_extensions = {'.safetensors', '.pt', '.bin', '.ckpt', '.pth'}
|
||||
super().__init__(
|
||||
model_type="misc",
|
||||
model_class=MiscMetadata,
|
||||
file_extensions=file_extensions,
|
||||
hash_index=ModelHashIndex()
|
||||
)
|
||||
|
||||
def _resolve_sub_type(self, root_path: Optional[str]) -> Optional[str]:
|
||||
"""Resolve the sub-type based on the root path."""
|
||||
if not root_path:
|
||||
return None
|
||||
|
||||
if config.vae_roots and root_path in config.vae_roots:
|
||||
return "vae"
|
||||
|
||||
if config.upscaler_roots and root_path in config.upscaler_roots:
|
||||
return "upscaler"
|
||||
|
||||
return None
|
||||
|
||||
def adjust_metadata(self, metadata, file_path, root_path):
|
||||
"""Adjust metadata during scanning to set sub_type."""
|
||||
sub_type = self._resolve_sub_type(root_path)
|
||||
if sub_type:
|
||||
metadata.sub_type = sub_type
|
||||
return metadata
|
||||
|
||||
def adjust_cached_entry(self, entry: Dict[str, Any]) -> Dict[str, Any]:
|
||||
"""Adjust entries loaded from the persisted cache to ensure sub_type is set."""
|
||||
sub_type = self._resolve_sub_type(
|
||||
self._find_root_for_file(entry.get("file_path"))
|
||||
)
|
||||
if sub_type:
|
||||
entry["sub_type"] = sub_type
|
||||
return entry
|
||||
|
||||
def get_model_roots(self) -> List[str]:
|
||||
"""Get misc root directories (VAE and upscaler)"""
|
||||
return config.misc_roots
|
||||
55
py/services/misc_service.py
Normal file
55
py/services/misc_service.py
Normal file
@@ -0,0 +1,55 @@
|
||||
import os
|
||||
import logging
|
||||
from typing import Dict
|
||||
|
||||
from .base_model_service import BaseModelService
|
||||
from ..utils.models import MiscMetadata
|
||||
from ..config import config
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
class MiscService(BaseModelService):
|
||||
"""Misc-specific service implementation (VAE, Upscaler)"""
|
||||
|
||||
def __init__(self, scanner, update_service=None):
|
||||
"""Initialize Misc service
|
||||
|
||||
Args:
|
||||
scanner: Misc scanner instance
|
||||
update_service: Optional service for remote update tracking.
|
||||
"""
|
||||
super().__init__("misc", scanner, MiscMetadata, update_service=update_service)
|
||||
|
||||
async def format_response(self, misc_data: Dict) -> Dict:
|
||||
"""Format Misc data for API response"""
|
||||
# Get sub_type from cache entry (new canonical field)
|
||||
sub_type = misc_data.get("sub_type", "vae")
|
||||
|
||||
return {
|
||||
"model_name": misc_data["model_name"],
|
||||
"file_name": misc_data["file_name"],
|
||||
"preview_url": config.get_preview_static_url(misc_data.get("preview_url", "")),
|
||||
"preview_nsfw_level": misc_data.get("preview_nsfw_level", 0),
|
||||
"base_model": misc_data.get("base_model", ""),
|
||||
"folder": misc_data["folder"],
|
||||
"sha256": misc_data.get("sha256", ""),
|
||||
"file_path": misc_data["file_path"].replace(os.sep, "/"),
|
||||
"file_size": misc_data.get("size", 0),
|
||||
"modified": misc_data.get("modified", ""),
|
||||
"tags": misc_data.get("tags", []),
|
||||
"from_civitai": misc_data.get("from_civitai", True),
|
||||
"usage_count": misc_data.get("usage_count", 0),
|
||||
"notes": misc_data.get("notes", ""),
|
||||
"sub_type": sub_type,
|
||||
"favorite": misc_data.get("favorite", False),
|
||||
"update_available": bool(misc_data.get("update_available", False)),
|
||||
"civitai": self.filter_civitai_data(misc_data.get("civitai", {}), minimal=True)
|
||||
}
|
||||
|
||||
def find_duplicate_hashes(self) -> Dict:
|
||||
"""Find Misc models with duplicate SHA256 hashes"""
|
||||
return self.scanner._hash_index.get_duplicate_hashes()
|
||||
|
||||
def find_duplicate_filenames(self) -> Dict:
|
||||
"""Find Misc models with conflicting filenames"""
|
||||
return self.scanner._hash_index.get_duplicate_filenames()
|
||||
@@ -5,6 +5,7 @@ import logging
|
||||
logger = logging.getLogger(__name__)
|
||||
from typing import Any, Dict, List, Optional, Tuple
|
||||
from dataclasses import dataclass, field
|
||||
from operator import itemgetter
|
||||
from natsort import natsorted
|
||||
|
||||
# Supported sort modes: (sort_key, order)
|
||||
@@ -228,17 +229,17 @@ class ModelCache:
|
||||
reverse=reverse
|
||||
)
|
||||
elif sort_key == 'date':
|
||||
# Sort by modified timestamp (use .get() with default to handle missing fields)
|
||||
# Sort by modified timestamp
|
||||
result = sorted(
|
||||
data,
|
||||
key=lambda x: x.get('modified', 0.0),
|
||||
key=itemgetter('modified'),
|
||||
reverse=reverse
|
||||
)
|
||||
elif sort_key == 'size':
|
||||
# Sort by file size (use .get() with default to handle missing fields)
|
||||
# Sort by file size
|
||||
result = sorted(
|
||||
data,
|
||||
key=lambda x: x.get('size', 0),
|
||||
key=itemgetter('size'),
|
||||
reverse=reverse
|
||||
)
|
||||
elif sort_key == 'usage':
|
||||
|
||||
@@ -676,12 +676,10 @@ class ModelMetadataProviderManager:
|
||||
|
||||
def _get_provider(self, provider_name: str = None) -> ModelMetadataProvider:
|
||||
"""Get provider by name or default provider"""
|
||||
if provider_name:
|
||||
if provider_name not in self.providers:
|
||||
raise ValueError(f"Provider '{provider_name}' is not registered")
|
||||
if provider_name and provider_name in self.providers:
|
||||
return self.providers[provider_name]
|
||||
|
||||
|
||||
if self.default_provider is None:
|
||||
raise ValueError("No default provider set and no valid provider specified")
|
||||
|
||||
|
||||
return self.providers[self.default_provider]
|
||||
|
||||
@@ -99,7 +99,6 @@ class FilterCriteria:
|
||||
favorites_only: bool = False
|
||||
search_options: Optional[Dict[str, Any]] = None
|
||||
model_types: Optional[Sequence[str]] = None
|
||||
tag_logic: str = "any" # "any" (OR) or "all" (AND)
|
||||
|
||||
|
||||
class ModelCacheRepository:
|
||||
@@ -301,29 +300,11 @@ class ModelFilterSet:
|
||||
include_tags = {tag for tag in tag_filters if tag}
|
||||
|
||||
if include_tags:
|
||||
tag_logic = criteria.tag_logic.lower() if criteria.tag_logic else "any"
|
||||
|
||||
def matches_include(item_tags):
|
||||
if not item_tags and "__no_tags__" in include_tags:
|
||||
return True
|
||||
if tag_logic == "all":
|
||||
# AND logic: item must have ALL include tags
|
||||
# Special case: __no_tags__ is handled separately
|
||||
non_special_tags = include_tags - {"__no_tags__"}
|
||||
if "__no_tags__" in include_tags:
|
||||
# If __no_tags__ is selected along with other tags,
|
||||
# treat it as "no tags OR (all other tags)"
|
||||
if not item_tags:
|
||||
return True
|
||||
# Otherwise, check if all non-special tags match
|
||||
if non_special_tags:
|
||||
return all(tag in (item_tags or []) for tag in non_special_tags)
|
||||
return True
|
||||
# Normal case: all tags must match
|
||||
return all(tag in (item_tags or []) for tag in non_special_tags)
|
||||
else:
|
||||
# OR logic (default): item must have ANY include tag
|
||||
return any(tag in include_tags for tag in (item_tags or []))
|
||||
return any(tag in include_tags for tag in (item_tags or []))
|
||||
|
||||
items = [item for item in items if matches_include(item.get("tags"))]
|
||||
|
||||
|
||||
@@ -20,8 +20,6 @@ from .service_registry import ServiceRegistry
|
||||
from .websocket_manager import ws_manager
|
||||
from .persistent_model_cache import get_persistent_cache
|
||||
from .settings_manager import get_settings_manager
|
||||
from .cache_entry_validator import CacheEntryValidator
|
||||
from .cache_health_monitor import CacheHealthMonitor, CacheHealthStatus
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -248,7 +246,6 @@ class ModelScanner:
|
||||
'tags': tags_list,
|
||||
'civitai': civitai_slim,
|
||||
'civitai_deleted': bool(get_value('civitai_deleted', False)),
|
||||
'skip_metadata_refresh': bool(get_value('skip_metadata_refresh', False)),
|
||||
}
|
||||
|
||||
license_source: Dict[str, Any] = {}
|
||||
@@ -471,39 +468,6 @@ class ModelScanner:
|
||||
for tag in adjusted_item.get('tags') or []:
|
||||
tags_count[tag] = tags_count.get(tag, 0) + 1
|
||||
|
||||
# Validate cache entries and check health
|
||||
valid_entries, invalid_entries = CacheEntryValidator.validate_batch(
|
||||
adjusted_raw_data, auto_repair=True
|
||||
)
|
||||
|
||||
if invalid_entries:
|
||||
monitor = CacheHealthMonitor()
|
||||
report = monitor.check_health(adjusted_raw_data, auto_repair=True)
|
||||
|
||||
if report.status != CacheHealthStatus.HEALTHY:
|
||||
# Broadcast health warning to frontend
|
||||
await ws_manager.broadcast_cache_health_warning(report, page_type)
|
||||
logger.warning(
|
||||
f"{self.model_type.capitalize()} Scanner: Cache health issue detected - "
|
||||
f"{report.invalid_entries} invalid entries, {report.repaired_entries} repaired"
|
||||
)
|
||||
|
||||
# Use only valid entries
|
||||
adjusted_raw_data = valid_entries
|
||||
|
||||
# Rebuild tags count from valid entries only
|
||||
tags_count = {}
|
||||
for item in adjusted_raw_data:
|
||||
for tag in item.get('tags') or []:
|
||||
tags_count[tag] = tags_count.get(tag, 0) + 1
|
||||
|
||||
# Remove invalid entries from hash index
|
||||
for invalid_entry in invalid_entries:
|
||||
file_path = CacheEntryValidator.get_file_path_safe(invalid_entry)
|
||||
sha256 = CacheEntryValidator.get_sha256_safe(invalid_entry)
|
||||
if file_path:
|
||||
hash_index.remove_by_path(file_path, sha256)
|
||||
|
||||
scan_result = CacheBuildResult(
|
||||
raw_data=adjusted_raw_data,
|
||||
hash_index=hash_index,
|
||||
@@ -687,6 +651,7 @@ class ModelScanner:
|
||||
|
||||
async def _initialize_cache(self) -> None:
|
||||
"""Initialize or refresh the cache"""
|
||||
print("init start", flush=True)
|
||||
self._is_initializing = True # Set flag
|
||||
try:
|
||||
start_time = time.time()
|
||||
@@ -700,6 +665,7 @@ class ModelScanner:
|
||||
scan_result = await self._gather_model_data()
|
||||
await self._apply_scan_result(scan_result)
|
||||
await self._save_persistent_cache(scan_result)
|
||||
print("init end", flush=True)
|
||||
|
||||
logger.info(
|
||||
f"{self.model_type.capitalize()} Scanner: Cache initialization completed in {time.time() - start_time:.2f} seconds, "
|
||||
@@ -810,18 +776,6 @@ class ModelScanner:
|
||||
model_data = self.adjust_cached_entry(dict(model_data))
|
||||
if not model_data:
|
||||
continue
|
||||
|
||||
# Validate the new entry before adding
|
||||
validation_result = CacheEntryValidator.validate(
|
||||
model_data, auto_repair=True
|
||||
)
|
||||
if not validation_result.is_valid:
|
||||
logger.warning(
|
||||
f"Skipping invalid entry during reconcile: {path}"
|
||||
)
|
||||
continue
|
||||
model_data = validation_result.entry
|
||||
|
||||
self._ensure_license_flags(model_data)
|
||||
# Add to cache
|
||||
self._cache.raw_data.append(model_data)
|
||||
@@ -1136,17 +1090,6 @@ class ModelScanner:
|
||||
processed_files += 1
|
||||
|
||||
if result:
|
||||
# Validate the entry before adding
|
||||
validation_result = CacheEntryValidator.validate(
|
||||
result, auto_repair=True
|
||||
)
|
||||
if not validation_result.is_valid:
|
||||
logger.warning(
|
||||
f"Skipping invalid scan result: {file_path}"
|
||||
)
|
||||
continue
|
||||
result = validation_result.entry
|
||||
|
||||
self._ensure_license_flags(result)
|
||||
raw_data.append(result)
|
||||
|
||||
@@ -1448,7 +1391,7 @@ class ModelScanner:
|
||||
return None
|
||||
|
||||
async def get_top_tags(self, limit: int = 20) -> List[Dict[str, any]]:
|
||||
"""Get top tags sorted by count. If limit is 0, return all tags."""
|
||||
"""Get top tags sorted by count"""
|
||||
await self.get_cached_data()
|
||||
|
||||
sorted_tags = sorted(
|
||||
@@ -1457,8 +1400,6 @@ class ModelScanner:
|
||||
reverse=True
|
||||
)
|
||||
|
||||
if limit == 0:
|
||||
return sorted_tags
|
||||
return sorted_tags[:limit]
|
||||
|
||||
async def get_base_models(self, limit: int = 20) -> List[Dict[str, any]]:
|
||||
|
||||
@@ -118,19 +118,24 @@ class ModelServiceFactory:
|
||||
|
||||
|
||||
def register_default_model_types():
|
||||
"""Register the default model types (LoRA, Checkpoint, and Embedding)"""
|
||||
"""Register the default model types (LoRA, Checkpoint, Embedding, and Misc)"""
|
||||
from ..services.lora_service import LoraService
|
||||
from ..services.checkpoint_service import CheckpointService
|
||||
from ..services.embedding_service import EmbeddingService
|
||||
from ..services.misc_service import MiscService
|
||||
from ..routes.lora_routes import LoraRoutes
|
||||
from ..routes.checkpoint_routes import CheckpointRoutes
|
||||
from ..routes.embedding_routes import EmbeddingRoutes
|
||||
|
||||
from ..routes.misc_model_routes import MiscModelRoutes
|
||||
|
||||
# Register LoRA model type
|
||||
ModelServiceFactory.register_model_type('lora', LoraService, LoraRoutes)
|
||||
|
||||
|
||||
# Register Checkpoint model type
|
||||
ModelServiceFactory.register_model_type('checkpoint', CheckpointService, CheckpointRoutes)
|
||||
|
||||
|
||||
# Register Embedding model type
|
||||
ModelServiceFactory.register_model_type('embedding', EmbeddingService, EmbeddingRoutes)
|
||||
ModelServiceFactory.register_model_type('embedding', EmbeddingService, EmbeddingRoutes)
|
||||
|
||||
# Register Misc model type (VAE, Upscaler)
|
||||
ModelServiceFactory.register_model_type('misc', MiscService, MiscModelRoutes)
|
||||
@@ -7,8 +7,7 @@ import os
|
||||
import sqlite3
|
||||
import time
|
||||
from dataclasses import dataclass, replace
|
||||
from datetime import datetime, timezone
|
||||
from typing import Any, Dict, Iterable, List, Mapping, Optional, Sequence
|
||||
from typing import Dict, Iterable, List, Mapping, Optional, Sequence
|
||||
|
||||
from .errors import RateLimitError, ResourceNotFoundError
|
||||
from .settings_manager import get_settings_manager
|
||||
@@ -65,9 +64,7 @@ class ModelVersionRecord:
|
||||
preview_url: Optional[str]
|
||||
is_in_library: bool
|
||||
should_ignore: bool
|
||||
early_access_ends_at: Optional[str] = None
|
||||
sort_index: int = 0
|
||||
is_early_access: bool = False
|
||||
|
||||
|
||||
@dataclass
|
||||
@@ -100,12 +97,8 @@ class ModelUpdateRecord:
|
||||
|
||||
return [version.version_id for version in self.versions if version.is_in_library]
|
||||
|
||||
def has_update(self, hide_early_access: bool = False) -> bool:
|
||||
"""Return True when a non-ignored remote version newer than the newest local copy is available.
|
||||
|
||||
Args:
|
||||
hide_early_access: If True, exclude early access versions from update check.
|
||||
"""
|
||||
def has_update(self) -> bool:
|
||||
"""Return True when a non-ignored remote version newer than the newest local copy is available."""
|
||||
|
||||
if self.should_ignore_model:
|
||||
return False
|
||||
@@ -117,56 +110,22 @@ class ModelUpdateRecord:
|
||||
|
||||
if max_in_library is None:
|
||||
return any(
|
||||
not version.is_in_library
|
||||
and not version.should_ignore
|
||||
and not (hide_early_access and ModelUpdateRecord._is_early_access_active(version))
|
||||
for version in self.versions
|
||||
not version.is_in_library and not version.should_ignore for version in self.versions
|
||||
)
|
||||
|
||||
for version in self.versions:
|
||||
if version.is_in_library or version.should_ignore:
|
||||
continue
|
||||
if hide_early_access and ModelUpdateRecord._is_early_access_active(version):
|
||||
continue
|
||||
if version.version_id > max_in_library:
|
||||
return True
|
||||
return False
|
||||
|
||||
@staticmethod
|
||||
def _is_early_access_active(version: ModelVersionRecord) -> bool:
|
||||
"""Check if a version is currently in early access period.
|
||||
|
||||
Uses two-phase detection:
|
||||
1. If exact EA end time available (from single version API), use it for precise check
|
||||
2. Otherwise fallback to basic EA flag (from bulk API)
|
||||
"""
|
||||
# Phase 2: Precise check with exact end time
|
||||
if version.early_access_ends_at:
|
||||
try:
|
||||
ea_date = datetime.fromisoformat(
|
||||
version.early_access_ends_at.replace("Z", "+00:00")
|
||||
)
|
||||
return ea_date > datetime.now(timezone.utc)
|
||||
except (ValueError, AttributeError):
|
||||
# If date parsing fails, treat as active EA (conservative)
|
||||
return True
|
||||
|
||||
# Phase 1: Basic EA flag from bulk API
|
||||
return version.is_early_access
|
||||
|
||||
def has_update_for_base(
|
||||
self,
|
||||
local_version_id: Optional[int],
|
||||
local_base_model: Optional[str],
|
||||
hide_early_access: bool = False,
|
||||
) -> bool:
|
||||
"""Return True when a newer remote version with the same base model exists.
|
||||
|
||||
Args:
|
||||
local_version_id: The current local version id.
|
||||
local_base_model: The base model to filter by.
|
||||
hide_early_access: If True, exclude early access versions from update check.
|
||||
"""
|
||||
"""Return True when a newer remote version with the same base model exists."""
|
||||
|
||||
if self.should_ignore_model:
|
||||
return False
|
||||
@@ -194,8 +153,6 @@ class ModelUpdateRecord:
        for version in self.versions:
            if version.is_in_library or version.should_ignore:
                continue
            if hide_early_access and ModelUpdateRecord._is_early_access_active(version):
                continue
            version_base = _normalize_base_model(version.base_model)
            if version_base != normalized_base:
                continue
@@ -311,14 +268,6 @@ class ModelUpdateService:
                "ALTER TABLE model_update_versions "
                "ADD COLUMN should_ignore INTEGER NOT NULL DEFAULT 0"
            ),
            "early_access_ends_at": (
                "ALTER TABLE model_update_versions "
                "ADD COLUMN early_access_ends_at TEXT"
            ),
            "is_early_access": (
                "ALTER TABLE model_update_versions "
                "ADD COLUMN is_early_access INTEGER NOT NULL DEFAULT 0"
            ),
        }

        for column, statement in migrations.items():
@@ -418,8 +367,6 @@ class ModelUpdateService:
                preview_url TEXT,
                is_in_library INTEGER NOT NULL DEFAULT 0,
                should_ignore INTEGER NOT NULL DEFAULT 0,
                early_access_ends_at TEXT,
                is_early_access INTEGER NOT NULL DEFAULT 0,
                PRIMARY KEY (model_id, version_id),
                FOREIGN KEY(model_id) REFERENCES model_update_status(model_id) ON DELETE CASCADE
            )
@@ -437,8 +384,6 @@ class ModelUpdateService:
            "preview_url",
            "is_in_library",
            "should_ignore",
            "early_access_ends_at",
            "is_early_access",
        ]
        defaults = {
            "sort_index": "0",
@@ -449,8 +394,6 @@ class ModelUpdateService:
            "preview_url": "NULL",
            "is_in_library": "0",
            "should_ignore": "0",
            "early_access_ends_at": "NULL",
            "is_early_access": "0",
        }

        select_parts = []
@@ -724,8 +667,6 @@ class ModelUpdateService:
                    is_in_library=False,
                    should_ignore=should_ignore,
                    sort_index=len(versions),
                    early_access_ends_at=None,
                    is_early_access=False,
                )
            )
@@ -745,17 +686,16 @@ class ModelUpdateService:
        async with self._lock:
            return self._get_record(model_type, model_id)

    async def has_update(self, model_type: str, model_id: int, hide_early_access: bool = False) -> bool:
    async def has_update(self, model_type: str, model_id: int) -> bool:
        """Determine if a model has updates pending."""

        record = await self.get_record(model_type, model_id)
        return record.has_update(hide_early_access=hide_early_access) if record else False
        return record.has_update() if record else False

    async def has_updates_bulk(
        self,
        model_type: str,
        model_ids: Sequence[int],
        hide_early_access: bool = False,
    ) -> Dict[int, bool]:
        """Return update availability for each model id in a single database pass."""

@@ -767,7 +707,7 @@ class ModelUpdateService:
        records = self._get_records_bulk(model_type, normalized_ids)

        return {
            model_id: records.get(model_id).has_update(hide_early_access=hide_early_access) if records.get(model_id) else False
            model_id: records.get(model_id).has_update() if records.get(model_id) else False
            for model_id in normalized_ids
        }
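The bulk variant above exists so the UI can badge many cards with one database pass. Below is a minimal sketch of a hypothetical caller; only the `has_updates_bulk` signature comes from the hunk itself, while the service handle and the model ids are made up for illustration.

```python
import asyncio

async def collect_pending_updates(update_service, model_ids):
    # update_service is assumed to be an initialized ModelUpdateService instance
    flags = await update_service.has_updates_bulk(
        "lora", model_ids, hide_early_access=True  # filter out versions still in early access
    )
    # keep only the ids whose newest non-ignored remote version is newer than the local copy
    return [model_id for model_id, pending in flags.items() if pending]

# Hypothetical usage: asyncio.run(collect_pending_updates(service, [101, 102, 103]))
```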
@@ -1047,8 +987,6 @@ class ModelUpdateService:
|
||||
is_in_library=True,
|
||||
should_ignore=ignore_map.get(missing_id, False),
|
||||
sort_index=len(versions),
|
||||
early_access_ends_at=None,
|
||||
is_early_access=False,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -1091,8 +1029,6 @@ class ModelUpdateService:
|
||||
is_in_library=version_id in local_set,
|
||||
should_ignore=ignore_map.get(version_id, remote_version.should_ignore),
|
||||
sort_index=sort_map.get(version_id, index),
|
||||
early_access_ends_at=remote_version.early_access_ends_at,
|
||||
is_early_access=remote_version.is_early_access,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -1119,8 +1055,6 @@ class ModelUpdateService:
|
||||
is_in_library=True,
|
||||
should_ignore=ignore_map.get(version_id, False),
|
||||
sort_index=len(versions),
|
||||
early_access_ends_at=None,
|
||||
is_early_access=False,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -1186,11 +1120,6 @@ class ModelUpdateService:
|
||||
released_at = _normalize_string(entry.get("publishedAt") or entry.get("createdAt"))
|
||||
size_bytes = self._extract_size_bytes(entry.get("files"))
|
||||
preview_url = self._extract_preview_url(entry.get("images"))
|
||||
early_access_ends_at = _normalize_string(entry.get("earlyAccessEndsAt"))
|
||||
|
||||
# Check availability field from bulk API for basic EA detection
|
||||
availability = _normalize_string(entry.get("availability"))
|
||||
is_early_access = availability == "EarlyAccess"
|
||||
|
||||
return ModelVersionRecord(
|
||||
version_id=version_id,
|
||||
@@ -1201,9 +1130,7 @@ class ModelUpdateService:
|
||||
preview_url=preview_url,
|
||||
is_in_library=False,
|
||||
should_ignore=False,
|
||||
early_access_ends_at=early_access_ends_at,
|
||||
sort_index=index,
|
||||
is_early_access=is_early_access,
|
||||
)
|
||||
|
||||
def _extract_size_bytes(self, files) -> Optional[int]:
|
||||
@@ -1304,8 +1231,7 @@ class ModelUpdateService:
|
||||
version_rows = conn.execute(
|
||||
f"""
|
||||
SELECT model_id, version_id, sort_index, name, base_model, released_at,
|
||||
size_bytes, preview_url, is_in_library, should_ignore, early_access_ends_at,
|
||||
is_early_access
|
||||
size_bytes, preview_url, is_in_library, should_ignore
|
||||
FROM model_update_versions
|
||||
WHERE model_id IN ({placeholders})
|
||||
ORDER BY model_id ASC, sort_index ASC, version_id ASC
|
||||
@@ -1326,9 +1252,7 @@ class ModelUpdateService:
|
||||
preview_url=row["preview_url"],
|
||||
is_in_library=bool(row["is_in_library"]),
|
||||
should_ignore=bool(row["should_ignore"]),
|
||||
early_access_ends_at=row["early_access_ends_at"],
|
||||
sort_index=_normalize_int(row["sort_index"]) or 0,
|
||||
is_early_access=bool(row["is_early_access"]),
|
||||
)
|
||||
)
|
||||
|
||||
@@ -1384,9 +1308,8 @@ class ModelUpdateService:
|
||||
"""
|
||||
INSERT INTO model_update_versions (
|
||||
version_id, model_id, sort_index, name, base_model, released_at,
|
||||
size_bytes, preview_url, is_in_library, should_ignore, early_access_ends_at,
|
||||
is_early_access
|
||||
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
size_bytes, preview_url, is_in_library, should_ignore
|
||||
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
""",
|
||||
(
|
||||
version.version_id,
|
||||
@@ -1399,8 +1322,6 @@ class ModelUpdateService:
|
||||
version.preview_url,
|
||||
1 if version.is_in_library else 0,
|
||||
1 if version.should_ignore else 0,
|
||||
version.early_access_ends_at,
|
||||
1 if version.is_early_access else 0,
|
||||
),
|
||||
)
|
||||
conn.commit()
|
||||
|
||||
@@ -52,7 +52,6 @@ class PersistentModelCache:
|
||||
"trained_words",
|
||||
"license_flags",
|
||||
"civitai_deleted",
|
||||
"skip_metadata_refresh",
|
||||
"exclude",
|
||||
"db_checked",
|
||||
"last_checked_at",
|
||||
@@ -184,7 +183,6 @@ class PersistentModelCache:
|
||||
"tags": tags.get(file_path, []),
|
||||
"civitai": civitai,
|
||||
"civitai_deleted": bool(row["civitai_deleted"]),
|
||||
"skip_metadata_refresh": bool(row["skip_metadata_refresh"]),
|
||||
"license_flags": int(license_value),
|
||||
}
|
||||
raw_data.append(item)
|
||||
@@ -493,7 +491,6 @@ class PersistentModelCache:
|
||||
"civitai_creator_username": "TEXT",
|
||||
"civitai_model_type": "TEXT",
|
||||
"civitai_deleted": "INTEGER DEFAULT 0",
|
||||
"skip_metadata_refresh": "INTEGER DEFAULT 0",
|
||||
# Persisting without explicit flags should assume CivitAI's documented defaults (0b111001 == 57).
|
||||
"license_flags": f"INTEGER DEFAULT {DEFAULT_LICENSE_FLAGS}",
|
||||
}
|
||||
@@ -566,7 +563,6 @@ class PersistentModelCache:
|
||||
trained_words_json,
|
||||
int(license_flags),
|
||||
1 if item.get("civitai_deleted") else 0,
|
||||
1 if item.get("skip_metadata_refresh") else 0,
|
||||
1 if item.get("exclude") else 0,
|
||||
1 if item.get("db_checked") else 0,
|
||||
float(item.get("last_checked_at") or 0.0),
|
||||
|
||||
@@ -9,7 +9,7 @@ from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Tuple
|
||||
from ..config import config
|
||||
from .recipe_cache import RecipeCache
|
||||
from .recipe_fts_index import RecipeFTSIndex
|
||||
from .persistent_recipe_cache import PersistentRecipeCache, get_persistent_recipe_cache, PersistedRecipeData
|
||||
from .persistent_recipe_cache import PersistentRecipeCache, get_persistent_recipe_cache
|
||||
from .service_registry import ServiceRegistry
|
||||
from .lora_scanner import LoraScanner
|
||||
from .metadata_service import get_default_metadata_provider
|
||||
@@ -431,16 +431,6 @@ class RecipeScanner:
|
||||
4. Persist results for next startup
|
||||
"""
|
||||
try:
|
||||
# Ensure cache exists to avoid None reference errors
|
||||
if self._cache is None:
|
||||
self._cache = RecipeCache(
|
||||
raw_data=[],
|
||||
sorted_by_name=[],
|
||||
sorted_by_date=[],
|
||||
folders=[],
|
||||
folder_tree={},
|
||||
)
|
||||
|
||||
# Create a new event loop for this thread
|
||||
loop = asyncio.new_event_loop()
|
||||
asyncio.set_event_loop(loop)
|
||||
@@ -502,7 +492,7 @@ class RecipeScanner:
|
||||
|
||||
def _reconcile_recipe_cache(
|
||||
self,
|
||||
persisted: PersistedRecipeData,
|
||||
persisted: "PersistedRecipeData",
|
||||
recipes_dir: str,
|
||||
) -> Tuple[List[Dict], bool, Dict[str, str]]:
|
||||
"""Reconcile persisted cache with current filesystem state.
|
||||
@@ -514,6 +504,8 @@ class RecipeScanner:
|
||||
Returns:
|
||||
Tuple of (recipes list, changed flag, json_paths dict).
|
||||
"""
|
||||
from .persistent_recipe_cache import PersistedRecipeData
|
||||
|
||||
recipes: List[Dict] = []
|
||||
json_paths: Dict[str, str] = {}
|
||||
changed = False
|
||||
@@ -530,37 +522,32 @@ class RecipeScanner:
|
||||
except OSError:
|
||||
continue
|
||||
|
||||
# Build recipe_id -> recipe lookup (O(n) instead of O(n²))
|
||||
recipe_by_id: Dict[str, Dict] = {
|
||||
# Build lookup of persisted recipes by json_path
|
||||
persisted_by_path: Dict[str, Dict] = {}
|
||||
for recipe in persisted.raw_data:
|
||||
recipe_id = str(recipe.get('id', ''))
|
||||
if recipe_id:
|
||||
# Find the json_path from file_stats
|
||||
for json_path, (mtime, size) in persisted.file_stats.items():
|
||||
if os.path.basename(json_path).startswith(recipe_id):
|
||||
persisted_by_path[json_path] = recipe
|
||||
break
|
||||
|
||||
# Also index by recipe ID for faster lookups
|
||||
persisted_by_id: Dict[str, Dict] = {
|
||||
str(r.get('id', '')): r for r in persisted.raw_data if r.get('id')
|
||||
}
|
||||
|
||||
# Build json_path -> recipe lookup from file_stats (O(m))
|
||||
persisted_by_path: Dict[str, Dict] = {}
|
||||
for json_path in persisted.file_stats.keys():
|
||||
basename = os.path.basename(json_path)
|
||||
if basename.lower().endswith('.recipe.json'):
|
||||
recipe_id = basename[:-len('.recipe.json')]
|
||||
if recipe_id in recipe_by_id:
|
||||
persisted_by_path[json_path] = recipe_by_id[recipe_id]
|
||||
|
||||
# Process current files
|
||||
for file_path, (current_mtime, current_size) in current_files.items():
|
||||
cached_stats = persisted.file_stats.get(file_path)
|
||||
|
||||
# Extract recipe_id from current file for fallback lookup
|
||||
basename = os.path.basename(file_path)
|
||||
recipe_id_from_file = basename[:-len('.recipe.json')] if basename.lower().endswith('.recipe.json') else None
|
||||
|
||||
if cached_stats:
|
||||
cached_mtime, cached_size = cached_stats
|
||||
# Check if file is unchanged
|
||||
if abs(current_mtime - cached_mtime) < 1.0 and current_size == cached_size:
|
||||
# Try direct path lookup first
|
||||
# Use cached data
|
||||
cached_recipe = persisted_by_path.get(file_path)
|
||||
# Fallback to recipe_id lookup if path lookup fails
|
||||
if not cached_recipe and recipe_id_from_file:
|
||||
cached_recipe = recipe_by_id.get(recipe_id_from_file)
|
||||
if cached_recipe:
|
||||
recipe_id = str(cached_recipe.get('id', ''))
|
||||
# Track folder from file path
|
||||
@@ -1351,9 +1338,8 @@ class RecipeScanner:
|
||||
|
||||
# Get hash from the first file
|
||||
for file_info in version_info.get('files', []):
|
||||
sha256_hash = (file_info.get('hashes') or {}).get('SHA256')
|
||||
if sha256_hash:
|
||||
return sha256_hash, False # Return hash with False for isDeleted flag
|
||||
if file_info.get('hashes', {}).get('SHA256'):
|
||||
return file_info['hashes']['SHA256'], False # Return hash with False for isDeleted flag
|
||||
|
||||
logger.debug(f"No SHA256 hash found in version info for ID: {model_version_id}")
|
||||
return None, False
|
||||
@@ -2232,26 +2218,3 @@ class RecipeScanner:
|
||||
duplicate_groups = {k: v for k, v in fingerprint_groups.items() if len(v) > 1}
|
||||
|
||||
return duplicate_groups
|
||||
|
||||
async def find_duplicate_recipes_by_source(self) -> dict:
|
||||
"""Find all recipe duplicates based on source_path (Civitai image URLs)
|
||||
|
||||
Returns:
|
||||
Dictionary where keys are source URLs and values are lists of recipe IDs
|
||||
"""
|
||||
cache = await self.get_cached_data()
|
||||
|
||||
url_groups = {}
|
||||
for recipe in cache.raw_data:
|
||||
source_url = recipe.get('source_path', '').strip()
|
||||
if not source_url:
|
||||
continue
|
||||
|
||||
if source_url not in url_groups:
|
||||
url_groups[source_url] = []
|
||||
|
||||
url_groups[source_url].append(recipe.get('id'))
|
||||
|
||||
duplicate_groups = {k: v for k, v in url_groups.items() if len(v) > 1}
|
||||
|
||||
return duplicate_groups
|
||||
|
||||
@@ -233,23 +233,44 @@ class ServiceRegistry:
|
||||
async def get_embedding_scanner(cls):
|
||||
"""Get or create Embedding scanner instance"""
|
||||
service_name = "embedding_scanner"
|
||||
|
||||
|
||||
if service_name in cls._services:
|
||||
return cls._services[service_name]
|
||||
|
||||
|
||||
async with cls._get_lock(service_name):
|
||||
# Double-check after acquiring lock
|
||||
if service_name in cls._services:
|
||||
return cls._services[service_name]
|
||||
|
||||
|
||||
# Import here to avoid circular imports
|
||||
from .embedding_scanner import EmbeddingScanner
|
||||
|
||||
|
||||
scanner = await EmbeddingScanner.get_instance()
|
||||
cls._services[service_name] = scanner
|
||||
logger.debug(f"Created and registered {service_name}")
|
||||
return scanner
|
||||
|
||||
|
||||
@classmethod
|
||||
async def get_misc_scanner(cls):
|
||||
"""Get or create Misc scanner instance (VAE, Upscaler)"""
|
||||
service_name = "misc_scanner"
|
||||
|
||||
if service_name in cls._services:
|
||||
return cls._services[service_name]
|
||||
|
||||
async with cls._get_lock(service_name):
|
||||
# Double-check after acquiring lock
|
||||
if service_name in cls._services:
|
||||
return cls._services[service_name]
|
||||
|
||||
# Import here to avoid circular imports
|
||||
from .misc_scanner import MiscScanner
|
||||
|
||||
scanner = await MiscScanner.get_instance()
|
||||
cls._services[service_name] = scanner
|
||||
logger.debug(f"Created and registered {service_name}")
|
||||
return scanner
|
||||
|
||||
@classmethod
|
||||
def clear_services(cls):
|
||||
"""Clear all registered services - mainly for testing"""
|
||||
|
||||
@@ -28,9 +28,6 @@ CORE_USER_SETTING_KEYS: Tuple[str, ...] = (
|
||||
"folder_paths",
|
||||
)
|
||||
|
||||
# Threshold for aggressive cleanup: if file contains this many default keys, clean it up
|
||||
DEFAULT_KEYS_CLEANUP_THRESHOLD = 10
|
||||
|
||||
|
||||
DEFAULT_SETTINGS: Dict[str, Any] = {
|
||||
"civitai_api_key": "",
|
||||
@@ -66,10 +63,9 @@ DEFAULT_SETTINGS: Dict[str, Any] = {
|
||||
"compact_mode": False,
|
||||
"priority_tags": DEFAULT_PRIORITY_TAG_CONFIG.copy(),
|
||||
"model_name_display": "model_name",
|
||||
"model_card_footer_action": "replace_preview",
|
||||
"model_card_footer_action": "example_images",
|
||||
"update_flag_strategy": "same_base",
|
||||
"auto_organize_exclusions": [],
|
||||
"metadata_refresh_skip_paths": [],
|
||||
}
|
||||
|
||||
|
||||
@@ -99,9 +95,6 @@ class SettingsManager:
|
||||
if self._needs_initial_save:
|
||||
self._save_settings()
|
||||
self._needs_initial_save = False
|
||||
else:
|
||||
# Clean up existing settings file by removing default values
|
||||
self._cleanup_default_values_from_disk()
|
||||
|
||||
def _detect_standalone_mode(self) -> bool:
|
||||
"""Return ``True`` when running in standalone mode."""
|
||||
@@ -233,7 +226,7 @@ class SettingsManager:
|
||||
return merged
|
||||
|
||||
def _ensure_default_settings(self) -> None:
|
||||
"""Ensure all default settings keys exist in memory (but don't save defaults to disk)"""
|
||||
"""Ensure all default settings keys exist"""
|
||||
defaults = self._get_default_settings()
|
||||
updated_existing = False
|
||||
inserted_defaults = False
|
||||
@@ -262,17 +255,6 @@ class SettingsManager:
|
||||
self.settings["auto_organize_exclusions"] = []
|
||||
inserted_defaults = True
|
||||
|
||||
if "metadata_refresh_skip_paths" in self.settings:
|
||||
normalized_skip_paths = self.normalize_metadata_refresh_skip_paths(
|
||||
self.settings.get("metadata_refresh_skip_paths")
|
||||
)
|
||||
if normalized_skip_paths != self.settings.get("metadata_refresh_skip_paths"):
|
||||
self.settings["metadata_refresh_skip_paths"] = normalized_skip_paths
|
||||
updated_existing = True
|
||||
else:
|
||||
self.settings["metadata_refresh_skip_paths"] = []
|
||||
inserted_defaults = True
|
||||
|
||||
for key, value in defaults.items():
|
||||
if key == "priority_tags":
|
||||
continue
|
||||
@@ -283,10 +265,10 @@ class SettingsManager:
|
||||
self.settings[key] = value
|
||||
inserted_defaults = True
|
||||
|
||||
# Save only if existing values were normalized/updated
|
||||
if updated_existing:
|
||||
if updated_existing or (
|
||||
inserted_defaults and self._bootstrap_reason in {"invalid", "unreadable"}
|
||||
):
|
||||
self._save_settings()
|
||||
# Note: inserted_defaults no longer triggers save - defaults stay in memory only
|
||||
|
||||
def _migrate_to_library_registry(self) -> None:
|
||||
"""Ensure settings include the multi-library registry structure."""
|
||||
@@ -729,42 +711,6 @@ class SettingsManager:
|
||||
|
||||
self._startup_messages.append(payload)
|
||||
|
||||
def _cleanup_default_values_from_disk(self) -> None:
|
||||
"""Remove default values from existing settings.json to keep it clean.
|
||||
|
||||
Only performs cleanup if the file contains a significant number of default
|
||||
values (indicating it's "bloated"). Small files (like template-based configs)
|
||||
are preserved as-is to avoid unexpected changes.
|
||||
"""
|
||||
# Only cleanup existing files (not new ones)
|
||||
if self._bootstrap_reason == "missing" or self._original_disk_payload is None:
|
||||
return
|
||||
|
||||
defaults = self._get_default_settings()
|
||||
disk_keys = set(self._original_disk_payload.keys())
|
||||
|
||||
# Count how many keys on disk are set to their default values
|
||||
default_value_keys = set()
|
||||
for key in disk_keys:
|
||||
if key in CORE_USER_SETTING_KEYS:
|
||||
continue # Core keys don't count as "cleanup candidates"
|
||||
disk_value = self._original_disk_payload.get(key)
|
||||
default_value = defaults.get(key)
|
||||
# Compare using JSON serialization for complex objects
|
||||
if json.dumps(disk_value, sort_keys=True, default=str) == json.dumps(default_value, sort_keys=True, default=str):
|
||||
default_value_keys.add(key)
|
||||
|
||||
# Only cleanup if there are "many" default keys (indicating a bloated file)
|
||||
# This preserves small/template-based configs while cleaning up legacy bloated files
|
||||
if len(default_value_keys) >= DEFAULT_KEYS_CLEANUP_THRESHOLD:
|
||||
logger.info(
|
||||
"Cleaning up %d default value(s) from settings.json to keep it minimal",
|
||||
len(default_value_keys)
|
||||
)
|
||||
self._save_settings()
|
||||
# Update original payload to match what we just saved
|
||||
self._original_disk_payload = self._serialize_settings_for_disk()
|
||||
|
||||
def _collect_configuration_warnings(self) -> None:
|
||||
if not self._standalone_mode:
|
||||
return
|
||||
@@ -817,7 +763,6 @@ class SettingsManager:
|
||||
defaults['priority_tags'] = DEFAULT_PRIORITY_TAG_CONFIG.copy()
|
||||
defaults.setdefault('folder_paths', {})
|
||||
defaults['auto_organize_exclusions'] = []
|
||||
defaults['metadata_refresh_skip_paths'] = []
|
||||
|
||||
library_name = defaults.get("active_library") or "default"
|
||||
default_library = self._build_library_payload(
|
||||
@@ -889,44 +834,6 @@ class SettingsManager:
|
||||
self._save_settings()
|
||||
return exclusions
|
||||
|
||||
def normalize_metadata_refresh_skip_paths(self, value: Any) -> List[str]:
|
||||
if value is None:
|
||||
return []
|
||||
|
||||
if isinstance(value, str):
|
||||
candidates: Iterable[str] = (
|
||||
value.replace("\n", ",").replace(";", ",").split(",")
|
||||
)
|
||||
elif isinstance(value, Sequence) and not isinstance(value, (bytes, bytearray, str)):
|
||||
candidates = value
|
||||
else:
|
||||
return []
|
||||
|
||||
paths: List[str] = []
|
||||
for raw in candidates:
|
||||
if isinstance(raw, str):
|
||||
token = raw.replace("\\", "/").strip().strip("/")
|
||||
if token:
|
||||
paths.append(token)
|
||||
|
||||
unique_paths: List[str] = []
|
||||
seen = set()
|
||||
for path in paths:
|
||||
if path not in seen:
|
||||
seen.add(path)
|
||||
unique_paths.append(path)
|
||||
|
||||
return unique_paths
|
||||
|
||||
def get_metadata_refresh_skip_paths(self) -> List[str]:
|
||||
skip_paths = self.normalize_metadata_refresh_skip_paths(
|
||||
self.settings.get("metadata_refresh_skip_paths")
|
||||
)
|
||||
if skip_paths != self.settings.get("metadata_refresh_skip_paths"):
|
||||
self.settings["metadata_refresh_skip_paths"] = skip_paths
|
||||
self._save_settings()
|
||||
return skip_paths
|
||||
|
||||
def get_startup_messages(self) -> List[Dict[str, Any]]:
|
||||
return [message.copy() for message in self._startup_messages]
|
||||
|
||||
@@ -964,8 +871,6 @@ class SettingsManager:
|
||||
"""Set setting value and save"""
|
||||
if key == "auto_organize_exclusions":
|
||||
value = self.normalize_auto_organize_exclusions(value)
|
||||
elif key == "metadata_refresh_skip_paths":
|
||||
value = self.normalize_metadata_refresh_skip_paths(value)
|
||||
self.settings[key] = value
|
||||
portable_switch_pending = False
|
||||
if key == "use_portable_settings" and isinstance(value, bool):
|
||||
@@ -994,10 +899,6 @@ class SettingsManager:
|
||||
self._save_settings()
|
||||
logger.info(f"Deleted setting: {key}")
|
||||
|
||||
def keys(self) -> Iterable[str]:
|
||||
"""Return all setting keys."""
|
||||
return self.settings.keys()
|
||||
|
||||
def _prepare_portable_switch(self, use_portable: bool) -> None:
|
||||
"""Prepare switching the settings storage location."""
|
||||
|
||||
@@ -1200,12 +1101,7 @@ class SettingsManager:
|
||||
self._seed_template = None
|
||||
|
||||
def _serialize_settings_for_disk(self) -> Dict[str, Any]:
|
||||
"""Return the settings payload that should be persisted to disk.
|
||||
|
||||
Only saves settings that differ from defaults, keeping the config file
|
||||
clean and focused on user customizations. Default values are still
|
||||
available at runtime via _get_default_settings().
|
||||
"""
|
||||
"""Return the settings payload that should be persisted to disk."""
|
||||
|
||||
if self._bootstrap_reason == "missing":
|
||||
minimal: Dict[str, Any] = {}
|
||||
@@ -1219,25 +1115,7 @@ class SettingsManager:
|
||||
|
||||
return minimal
|
||||
|
||||
# Only save settings that differ from defaults
|
||||
defaults = self._get_default_settings()
|
||||
minimal = {}
|
||||
|
||||
for key, value in self.settings.items():
|
||||
default_value = defaults.get(key)
|
||||
|
||||
# Core settings are always saved (even if equal to default)
|
||||
if key in CORE_USER_SETTING_KEYS:
|
||||
minimal[key] = copy.deepcopy(value)
|
||||
# Complex objects need deep comparison
|
||||
elif isinstance(value, (dict, list)) and default_value is not None:
|
||||
if json.dumps(value, sort_keys=True, default=str) != json.dumps(default_value, sort_keys=True, default=str):
|
||||
minimal[key] = copy.deepcopy(value)
|
||||
# Simple values use direct comparison
|
||||
elif value != default_value:
|
||||
minimal[key] = copy.deepcopy(value)
|
||||
|
||||
return minimal
|
||||
return copy.deepcopy(self.settings)
|
||||
|
||||
def get_libraries(self) -> Dict[str, Dict[str, Any]]:
|
||||
"""Return a copy of the registered libraries."""
|
||||
|
||||
@@ -3,7 +3,7 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import logging
|
||||
from typing import Any, Dict, List, Optional, Protocol, Sequence
|
||||
from typing import Any, Dict, Optional, Protocol, Sequence
|
||||
|
||||
from ..metadata_sync_service import MetadataSyncService
|
||||
from ...utils.metadata_manager import MetadataManager
|
||||
@@ -43,22 +43,14 @@ class BulkMetadataRefreshUseCase:
|
||||
total_models = len(cache.raw_data)
|
||||
|
||||
enable_metadata_archive_db = self._settings.get("enable_metadata_archive_db", False)
|
||||
skip_paths = self._settings.get("metadata_refresh_skip_paths", [])
|
||||
to_process: Sequence[Dict[str, Any]] = [
|
||||
model
|
||||
for model in cache.raw_data
|
||||
if model.get("sha256")
|
||||
and not model.get("skip_metadata_refresh", False)
|
||||
and not self._is_in_skip_path(model.get("folder", ""), skip_paths)
|
||||
and (not model.get("civitai") or not model["civitai"].get("id"))
|
||||
and not (
|
||||
# Skip models confirmed not on CivitAI when no need to retry
|
||||
model.get("from_civitai") is False
|
||||
and model.get("civitai_deleted") is True
|
||||
and (
|
||||
not enable_metadata_archive_db
|
||||
or model.get("db_checked", False)
|
||||
)
|
||||
and (
|
||||
(enable_metadata_archive_db and not model.get("db_checked", False))
|
||||
or (not enable_metadata_archive_db and model.get("from_civitai") is True)
|
||||
)
|
||||
]
|
||||
|
||||
@@ -123,21 +115,6 @@ class BulkMetadataRefreshUseCase:
|
||||
|
||||
return {"success": True, "message": message, "processed": processed, "updated": success, "total": total_models}
|
||||
|
||||
    @staticmethod
    def _is_in_skip_path(folder: str, skip_paths: List[str]) -> bool:
        if not skip_paths or not folder:
            return False
        normalized = folder.replace("\\", "/").strip("/")
        if not normalized:
            return False
        for sp in skip_paths:
            nsp = sp.replace("\\", "/").strip("/")
            if not nsp:
                continue
            if normalized == nsp or normalized.startswith(nsp + "/"):
                return True
        return False

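The helper above decides which library folders are excluded from bulk metadata refresh by plain prefix matching on normalized, forward-slash paths. A standalone restatement, for illustration only, shows the boundary behaviour; the function name here is hypothetical and not part of the change.

```python
def is_in_skip_path(folder: str, skip_paths: list[str]) -> bool:
    """Illustrative restatement of BulkMetadataRefreshUseCase._is_in_skip_path."""
    normalized = folder.replace("\\", "/").strip("/")
    if not normalized:
        return False
    for sp in skip_paths:
        nsp = sp.replace("\\", "/").strip("/")
        if nsp and (normalized == nsp or normalized.startswith(nsp + "/")):
            return True
    return False

# A skip path matches the folder itself and anything nested under it, but not look-alike siblings:
assert is_in_skip_path("characters/anime/v2", ["characters/anime"]) is True
assert is_in_skip_path("characters/anime-extra", ["characters/anime"]) is False
```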
async def execute_with_error_handling(
|
||||
self,
|
||||
*,
|
||||
|
||||
@@ -255,42 +255,6 @@ class WebSocketManager:
|
||||
self._download_progress.pop(download_id, None)
|
||||
logger.debug(f"Cleaned up old download progress for {download_id}")
|
||||
|
||||
async def broadcast_cache_health_warning(self, report: 'HealthReport', page_type: str = None):
|
||||
"""
|
||||
Broadcast cache health warning to frontend.
|
||||
|
||||
Args:
|
||||
report: HealthReport instance from CacheHealthMonitor
|
||||
page_type: The page type (loras, checkpoints, embeddings)
|
||||
"""
|
||||
from .cache_health_monitor import CacheHealthStatus
|
||||
|
||||
# Only broadcast if there are issues
|
||||
if report.status == CacheHealthStatus.HEALTHY:
|
||||
return
|
||||
|
||||
payload = {
|
||||
'type': 'cache_health_warning',
|
||||
'status': report.status.value,
|
||||
'message': report.message,
|
||||
'pageType': page_type,
|
||||
'details': {
|
||||
'total': report.total_entries,
|
||||
'valid': report.valid_entries,
|
||||
'invalid': report.invalid_entries,
|
||||
'repaired': report.repaired_entries,
|
||||
'corruption_rate': f"{report.corruption_rate:.1%}",
|
||||
'invalid_paths': report.invalid_paths[:5], # Limit to first 5
|
||||
}
|
||||
}
|
||||
|
||||
logger.info(
|
||||
f"Broadcasting cache health warning: {report.status.value} "
|
||||
f"({report.invalid_entries} invalid entries)"
|
||||
)
|
||||
|
||||
await self.broadcast(payload)
|
||||
|
||||
def get_connected_clients_count(self) -> int:
|
||||
"""Get number of connected clients"""
|
||||
return len(self._websockets)
|
||||
|
||||
@@ -49,6 +49,7 @@ SUPPORTED_MEDIA_EXTENSIONS = {
VALID_LORA_SUB_TYPES = ["lora", "locon", "dora"]
VALID_CHECKPOINT_SUB_TYPES = ["checkpoint", "diffusion_model"]
VALID_EMBEDDING_SUB_TYPES = ["embedding"]
VALID_MISC_SUB_TYPES = ["vae", "upscaler"]

# Backward compatibility alias
VALID_LORA_TYPES = VALID_LORA_SUB_TYPES
@@ -94,6 +95,7 @@ DEFAULT_PRIORITY_TAG_CONFIG = {
    "lora": ", ".join(CIVITAI_MODEL_TAGS),
    "checkpoint": ", ".join(CIVITAI_MODEL_TAGS),
    "embedding": ", ".join(CIVITAI_MODEL_TAGS),
    "misc": ", ".join(CIVITAI_MODEL_TAGS),
}

# baseModel values from CivitAI that should be treated as diffusion models (unet)
@@ -121,71 +121,101 @@ class DownloadManager:
|
||||
async def start_download(self, options: dict):
|
||||
"""Start downloading example images for models."""
|
||||
|
||||
# Step 1: Parse options (fast, non-blocking)
|
||||
data = options or {}
|
||||
auto_mode = data.get("auto_mode", False)
|
||||
optimize = data.get("optimize", True)
|
||||
model_types = data.get("model_types", ["lora", "checkpoint"])
|
||||
delay = float(data.get("delay", 0.2))
|
||||
force = data.get("force", False)
|
||||
|
||||
# Step 2: Validate configuration (fast lookup)
|
||||
settings_manager = get_settings_manager()
|
||||
base_path = settings_manager.get("example_images_path")
|
||||
|
||||
if not base_path:
|
||||
error_msg = "Example images path not configured in settings"
|
||||
if auto_mode:
|
||||
logger.debug(error_msg)
|
||||
return {
|
||||
"success": True,
|
||||
"message": "Example images path not configured, skipping auto download",
|
||||
}
|
||||
raise DownloadConfigurationError(error_msg)
|
||||
|
||||
active_library = settings_manager.get_active_library_name()
|
||||
output_dir = self._resolve_output_dir(active_library)
|
||||
if not output_dir:
|
||||
raise DownloadConfigurationError(
|
||||
"Example images path not configured in settings"
|
||||
)
|
||||
|
||||
# Step 3: Load progress file (I/O operation, done outside lock)
|
||||
processed_models = set()
|
||||
failed_models = set()
|
||||
|
||||
try:
|
||||
progress_file, processed_models, failed_models = await self._load_progress_file(output_dir)
|
||||
logger.debug(
|
||||
"Loaded previous progress, %s models already processed, %s models marked as failed",
|
||||
len(processed_models),
|
||||
len(failed_models),
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to load progress file: {e}")
|
||||
# Continue with empty sets
|
||||
|
||||
# Step 4: Quick state check and update (minimal lock time)
|
||||
async with self._state_lock:
|
||||
if self._is_downloading:
|
||||
raise DownloadInProgressError(self._progress.snapshot())
|
||||
|
||||
try:
|
||||
# Reset progress with loaded data
|
||||
data = options or {}
|
||||
auto_mode = data.get("auto_mode", False)
|
||||
optimize = data.get("optimize", True)
|
||||
model_types = data.get("model_types", ["lora", "checkpoint"])
|
||||
delay = float(data.get("delay", 0.2))
|
||||
force = data.get("force", False)
|
||||
|
||||
settings_manager = get_settings_manager()
|
||||
base_path = settings_manager.get("example_images_path")
|
||||
|
||||
if not base_path:
|
||||
error_msg = "Example images path not configured in settings"
|
||||
if auto_mode:
|
||||
logger.debug(error_msg)
|
||||
return {
|
||||
"success": True,
|
||||
"message": "Example images path not configured, skipping auto download",
|
||||
}
|
||||
raise DownloadConfigurationError(error_msg)
|
||||
|
||||
active_library = get_settings_manager().get_active_library_name()
|
||||
output_dir = self._resolve_output_dir(active_library)
|
||||
if not output_dir:
|
||||
raise DownloadConfigurationError(
|
||||
"Example images path not configured in settings"
|
||||
)
|
||||
|
||||
self._progress.reset()
|
||||
self._progress["processed_models"] = processed_models
|
||||
self._progress["failed_models"] = failed_models
|
||||
self._stop_requested = False
|
||||
self._progress["status"] = "running"
|
||||
self._progress["start_time"] = time.time()
|
||||
self._progress["end_time"] = None
|
||||
|
||||
self._is_downloading = True
|
||||
snapshot = self._progress.snapshot()
|
||||
progress_file = os.path.join(output_dir, ".download_progress.json")
|
||||
progress_source = progress_file
|
||||
if uses_library_scoped_folders():
|
||||
legacy_root = (
|
||||
get_settings_manager().get("example_images_path") or ""
|
||||
)
|
||||
legacy_progress = (
|
||||
os.path.join(legacy_root, ".download_progress.json")
|
||||
if legacy_root
|
||||
else ""
|
||||
)
|
||||
if (
|
||||
legacy_progress
|
||||
and os.path.exists(legacy_progress)
|
||||
and not os.path.exists(progress_file)
|
||||
):
|
||||
try:
|
||||
os.makedirs(output_dir, exist_ok=True)
|
||||
shutil.move(legacy_progress, progress_file)
|
||||
logger.info(
|
||||
"Migrated legacy download progress file '%s' to '%s'",
|
||||
legacy_progress,
|
||||
progress_file,
|
||||
)
|
||||
except OSError as exc:
|
||||
logger.warning(
|
||||
"Failed to migrate download progress file from '%s' to '%s': %s",
|
||||
legacy_progress,
|
||||
progress_file,
|
||||
exc,
|
||||
)
|
||||
progress_source = legacy_progress
|
||||
|
||||
# Create the download task without awaiting it
|
||||
# This ensures the HTTP response is returned immediately
|
||||
# while the actual processing happens in the background
|
||||
if os.path.exists(progress_source):
|
||||
try:
|
||||
with open(progress_source, "r", encoding="utf-8") as f:
|
||||
saved_progress = json.load(f)
|
||||
self._progress["processed_models"] = set(
|
||||
saved_progress.get("processed_models", [])
|
||||
)
|
||||
self._progress["failed_models"] = set(
|
||||
saved_progress.get("failed_models", [])
|
||||
)
|
||||
logger.debug(
|
||||
"Loaded previous progress, %s models already processed, %s models marked as failed",
|
||||
len(self._progress["processed_models"]),
|
||||
len(self._progress["failed_models"]),
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to load progress file: {e}")
|
||||
self._progress["processed_models"] = set()
|
||||
self._progress["failed_models"] = set()
|
||||
else:
|
||||
self._progress["processed_models"] = set()
|
||||
self._progress["failed_models"] = set()
|
||||
|
||||
self._is_downloading = True
|
||||
self._download_task = asyncio.create_task(
|
||||
self._download_all_example_images(
|
||||
output_dir,
|
||||
@@ -197,10 +227,7 @@ class DownloadManager:
|
||||
)
|
||||
)
|
||||
|
||||
# Add a callback to handle task completion/errors
|
||||
self._download_task.add_done_callback(
|
||||
lambda t: self._handle_download_task_done(t, output_dir)
|
||||
)
|
||||
snapshot = self._progress.snapshot()
|
||||
except ExampleImagesDownloadError:
|
||||
# Re-raise our own exception types without wrapping
|
||||
self._is_downloading = False
|
||||
@@ -214,26 +241,11 @@ class DownloadManager:
|
||||
)
|
||||
raise ExampleImagesDownloadError(str(e)) from e
|
||||
|
||||
# Broadcast progress in the background without blocking the response
|
||||
# This ensures the HTTP response is returned immediately
|
||||
asyncio.create_task(self._broadcast_progress(status="running"))
|
||||
await self._broadcast_progress(status="running")
|
||||
|
||||
return {"success": True, "message": "Download started", "status": snapshot}
|
||||
|
||||
def _handle_download_task_done(self, task: asyncio.Task, output_dir: str) -> None:
|
||||
"""Handle download task completion, including saving progress on error."""
|
||||
try:
|
||||
# This will re-raise any exception from the task
|
||||
task.result()
|
||||
except Exception as e:
|
||||
logger.error(f"Download task failed with error: {e}", exc_info=True)
|
||||
# Ensure progress is saved even on failure
|
||||
try:
|
||||
self._save_progress(output_dir)
|
||||
except Exception as save_error:
|
||||
logger.error(f"Failed to save progress after task failure: {save_error}")
|
||||
|
||||
async def get_status(self, request) -> dict:
|
||||
async def get_status(self, request):
|
||||
"""Get the current status of example images download."""
|
||||
|
||||
return {
|
||||
@@ -242,198 +254,6 @@ class DownloadManager:
|
||||
"status": self._progress.snapshot(),
|
||||
}
|
||||
|
||||
async def _load_progress_file(self, output_dir: str) -> tuple[str, set, set]:
|
||||
"""Load progress file from disk. Returns (progress_file_path, processed_models, failed_models).
|
||||
|
||||
This is a separate async method to allow running in executor to avoid blocking event loop.
|
||||
"""
|
||||
loop = asyncio.get_event_loop()
|
||||
return await loop.run_in_executor(
|
||||
None, self._load_progress_file_sync, output_dir
|
||||
)
|
||||
|
||||
def _load_progress_file_sync(self, output_dir: str) -> tuple[str, set, set]:
|
||||
"""Synchronous implementation of progress file loading."""
|
||||
progress_file = os.path.join(output_dir, ".download_progress.json")
|
||||
progress_source = progress_file
|
||||
|
||||
# Handle legacy migration if needed
|
||||
if uses_library_scoped_folders():
|
||||
legacy_root = get_settings_manager().get("example_images_path") or ""
|
||||
legacy_progress = (
|
||||
os.path.join(legacy_root, ".download_progress.json")
|
||||
if legacy_root
|
||||
else ""
|
||||
)
|
||||
if (
|
||||
legacy_progress
|
||||
and os.path.exists(legacy_progress)
|
||||
and not os.path.exists(progress_file)
|
||||
):
|
||||
try:
|
||||
os.makedirs(output_dir, exist_ok=True)
|
||||
shutil.move(legacy_progress, progress_file)
|
||||
logger.info(
|
||||
"Migrated legacy download progress file '%s' to '%s'",
|
||||
legacy_progress,
|
||||
progress_file,
|
||||
)
|
||||
except OSError as exc:
|
||||
logger.warning(
|
||||
"Failed to migrate download progress file from '%s' to '%s': %s",
|
||||
legacy_progress,
|
||||
progress_file,
|
||||
exc,
|
||||
)
|
||||
progress_source = legacy_progress
|
||||
|
||||
processed_models = set()
|
||||
failed_models = set()
|
||||
|
||||
if os.path.exists(progress_source):
|
||||
try:
|
||||
with open(progress_source, "r", encoding="utf-8") as f:
|
||||
saved_progress = json.load(f)
|
||||
processed_models = set(saved_progress.get("processed_models", []))
|
||||
failed_models = set(saved_progress.get("failed_models", []))
|
||||
except Exception:
|
||||
# Return empty sets on error
|
||||
pass
|
||||
|
||||
return progress_file, processed_models, failed_models
|
||||
|
||||
def _load_progress_sets_sync(self, progress_file: str) -> tuple[set, set]:
|
||||
"""Load only the processed and failed model sets from progress file.
|
||||
|
||||
This is a lighter version for quick checks without legacy migration.
|
||||
Returns (processed_models, failed_models).
|
||||
"""
|
||||
processed_models = set()
|
||||
failed_models = set()
|
||||
|
||||
if os.path.exists(progress_file):
|
||||
try:
|
||||
with open(progress_file, "r", encoding="utf-8") as f:
|
||||
saved_progress = json.load(f)
|
||||
processed_models = set(saved_progress.get("processed_models", []))
|
||||
failed_models = set(saved_progress.get("failed_models", []))
|
||||
except Exception:
|
||||
# Return empty sets on error
|
||||
pass
|
||||
|
||||
return processed_models, failed_models
|
||||
|
||||
async def check_pending_models(self, model_types: list[str]) -> dict:
|
||||
"""Quickly check how many models need example images downloaded.
|
||||
|
||||
This is a lightweight check that avoids the overhead of starting
|
||||
a full download task when no work is needed.
|
||||
|
||||
Returns:
|
||||
dict with keys:
|
||||
- total_models: Total number of models across specified types
|
||||
- pending_count: Number of models needing example images
|
||||
- processed_count: Number of already processed models
|
||||
- failed_count: Number of models marked as failed
|
||||
- needs_download: True if there are pending models to process
|
||||
"""
|
||||
from ..services.service_registry import ServiceRegistry
|
||||
|
||||
if self._is_downloading:
|
||||
return {
|
||||
"success": True,
|
||||
"is_downloading": True,
|
||||
"total_models": 0,
|
||||
"pending_count": 0,
|
||||
"processed_count": 0,
|
||||
"failed_count": 0,
|
||||
"needs_download": False,
|
||||
"message": "Download already in progress",
|
||||
}
|
||||
|
||||
try:
|
||||
# Get scanners
|
||||
scanners = []
|
||||
if "lora" in model_types:
|
||||
lora_scanner = await ServiceRegistry.get_lora_scanner()
|
||||
scanners.append(("lora", lora_scanner))
|
||||
|
||||
if "checkpoint" in model_types:
|
||||
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
|
||||
scanners.append(("checkpoint", checkpoint_scanner))
|
||||
|
||||
if "embedding" in model_types:
|
||||
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
|
||||
scanners.append(("embedding", embedding_scanner))
|
||||
|
||||
# Load progress file to check processed models (async to avoid blocking)
|
||||
settings_manager = get_settings_manager()
|
||||
active_library = settings_manager.get_active_library_name()
|
||||
output_dir = self._resolve_output_dir(active_library)
|
||||
|
||||
processed_models: set[str] = set()
|
||||
failed_models: set[str] = set()
|
||||
|
||||
if output_dir:
|
||||
progress_file = os.path.join(output_dir, ".download_progress.json")
|
||||
loop = asyncio.get_event_loop()
|
||||
processed_models, failed_models = await loop.run_in_executor(
|
||||
None, self._load_progress_sets_sync, progress_file
|
||||
)
|
||||
|
||||
# Collect all models and count in a single pass per scanner
|
||||
total_models = 0
|
||||
all_models_with_hash: list[tuple[str, str]] = [] # (hash, name) pairs
|
||||
|
||||
for scanner_type, scanner in scanners:
|
||||
cache = await scanner.get_cached_data()
|
||||
if cache and cache.raw_data:
|
||||
for model in cache.raw_data:
|
||||
total_models += 1
|
||||
raw_hash = model.get("sha256")
|
||||
if raw_hash:
|
||||
model_hash = raw_hash.lower()
|
||||
all_models_with_hash.append((model_hash, model.get("model_name", "Unknown")))
|
||||
|
||||
models_with_hash = len(all_models_with_hash)
|
||||
|
||||
# Calculate pending count: check which models actually need processing
|
||||
# A model is pending if it has a hash, is not in processed_models,
|
||||
# and its folder doesn't exist or is empty
|
||||
pending_hashes = set()
|
||||
for model_hash, model_name in all_models_with_hash:
|
||||
if model_hash not in processed_models:
|
||||
# Check if model folder exists with files
|
||||
model_dir = ExampleImagePathResolver.get_model_folder(
|
||||
model_hash, active_library
|
||||
)
|
||||
if not _model_directory_has_files(model_dir):
|
||||
pending_hashes.add(model_hash)
|
||||
|
||||
pending_count = len(pending_hashes)
|
||||
|
||||
return {
|
||||
"success": True,
|
||||
"is_downloading": False,
|
||||
"total_models": total_models,
|
||||
"pending_count": pending_count,
|
||||
"processed_count": len(processed_models),
|
||||
"failed_count": len(failed_models),
|
||||
"needs_download": pending_count > 0,
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error checking pending models: {e}", exc_info=True)
|
||||
return {
|
||||
"success": False,
|
||||
"error": str(e),
|
||||
"total_models": 0,
|
||||
"pending_count": 0,
|
||||
"processed_count": 0,
|
||||
"failed_count": 0,
|
||||
"needs_download": False,
|
||||
}
|
||||
|
||||
async def pause_download(self, request):
|
||||
"""Pause the example images download."""
|
||||
|
||||
|
||||
@@ -43,15 +43,8 @@ class ExampleImagesProcessor:
|
||||
return media_url
|
||||
|
||||
@staticmethod
|
||||
def _get_file_extension_from_content_or_headers(content, headers, fallback_url=None, media_type_hint=None):
|
||||
"""Determine file extension from content magic bytes or headers
|
||||
|
||||
Args:
|
||||
content: File content bytes
|
||||
headers: HTTP response headers
|
||||
fallback_url: Original URL for extension extraction
|
||||
media_type_hint: Optional media type hint from metadata (e.g., "video" or "image")
|
||||
"""
|
||||
def _get_file_extension_from_content_or_headers(content, headers, fallback_url=None):
|
||||
"""Determine file extension from content magic bytes or headers"""
|
||||
# Check magic bytes for common formats
|
||||
if content:
|
||||
if content.startswith(b'\xFF\xD8\xFF'):
|
||||
@@ -89,10 +82,6 @@ class ExampleImagesProcessor:
|
||||
if ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or ext in SUPPORTED_MEDIA_EXTENSIONS['videos']:
|
||||
return ext
|
||||
|
||||
# Use media type hint from metadata if available
|
||||
if media_type_hint == "video":
|
||||
return '.mp4'
|
||||
|
||||
# Default fallback
|
||||
return '.jpg'
|
||||
|
||||
@@ -147,7 +136,7 @@ class ExampleImagesProcessor:
|
||||
if success:
|
||||
# Determine file extension from content or headers
|
||||
media_ext = ExampleImagesProcessor._get_file_extension_from_content_or_headers(
|
||||
content, headers, original_url, image.get("type")
|
||||
content, headers, original_url
|
||||
)
|
||||
|
||||
# Check if the detected file type is supported
|
||||
@@ -230,7 +219,7 @@ class ExampleImagesProcessor:
|
||||
if success:
|
||||
# Determine file extension from content or headers
|
||||
media_ext = ExampleImagesProcessor._get_file_extension_from_content_or_headers(
|
||||
content, headers, original_url, image.get("type")
|
||||
content, headers, original_url
|
||||
)
|
||||
|
||||
# Check if the detected file type is supported
|
||||
|
||||
@@ -17,7 +17,7 @@ async def extract_lora_metadata(file_path: str) -> Dict:
|
||||
base_model = determine_base_model(metadata.get("ss_base_model_version"))
|
||||
return {"base_model": base_model}
|
||||
except Exception as e:
|
||||
logger.error(f"Error reading metadata from {file_path}: {str(e)}")
|
||||
print(f"Error reading metadata from {file_path}: {str(e)}")
|
||||
return {"base_model": "Unknown"}
|
||||
|
||||
async def extract_checkpoint_metadata(file_path: str) -> dict:
|
||||
|
||||
@@ -223,7 +223,7 @@ class MetadataManager:
|
||||
preview_url=normalize_path(preview_url),
|
||||
tags=[],
|
||||
modelDescription="",
|
||||
sub_type="checkpoint",
|
||||
model_type="checkpoint",
|
||||
from_civitai=True
|
||||
)
|
||||
elif model_class.__name__ == "EmbeddingMetadata":
|
||||
@@ -238,7 +238,6 @@ class MetadataManager:
|
||||
preview_url=normalize_path(preview_url),
|
||||
tags=[],
|
||||
modelDescription="",
|
||||
sub_type="embedding",
|
||||
from_civitai=True
|
||||
)
|
||||
else: # Default to LoraMetadata
|
||||
|
||||
@@ -25,7 +25,6 @@ class BaseModelMetadata:
|
||||
favorite: bool = False # Whether the model is a favorite
|
||||
exclude: bool = False # Whether to exclude this model from the cache
|
||||
db_checked: bool = False # Whether checked in archive DB
|
||||
skip_metadata_refresh: bool = False # Whether to skip this model during bulk metadata refresh
|
||||
metadata_source: Optional[str] = None # Last provider that supplied metadata
|
||||
last_checked_at: float = 0 # Last checked timestamp
|
||||
_unknown_fields: Dict[str, Any] = field(default_factory=dict, repr=False, compare=False) # Store unknown fields
|
||||
@@ -143,27 +142,27 @@ class LoraMetadata(BaseModelMetadata):
|
||||
@classmethod
|
||||
def from_civitai_info(cls, version_info: Dict, file_info: Dict, save_path: str) -> 'LoraMetadata':
|
||||
"""Create LoraMetadata instance from Civitai version info"""
|
||||
file_name = file_info.get('name', '')
|
||||
file_name = file_info['name']
|
||||
base_model = determine_base_model(version_info.get('baseModel', ''))
|
||||
|
||||
|
||||
# Extract tags and description if available
|
||||
tags = []
|
||||
description = ""
|
||||
model_data = version_info.get('model') or {}
|
||||
if 'tags' in model_data:
|
||||
tags = model_data['tags']
|
||||
if 'description' in model_data:
|
||||
description = model_data['description']
|
||||
|
||||
if 'model' in version_info:
|
||||
if 'tags' in version_info['model']:
|
||||
tags = version_info['model']['tags']
|
||||
if 'description' in version_info['model']:
|
||||
description = version_info['model']['description']
|
||||
|
||||
return cls(
|
||||
file_name=os.path.splitext(file_name)[0],
|
||||
model_name=model_data.get('name', os.path.splitext(file_name)[0]),
|
||||
model_name=version_info.get('model').get('name', os.path.splitext(file_name)[0]),
|
||||
file_path=save_path.replace(os.sep, '/'),
|
||||
size=file_info.get('sizeKB', 0) * 1024,
|
||||
modified=datetime.now().timestamp(),
|
||||
sha256=(file_info.get('hashes') or {}).get('SHA256', '').lower(),
|
||||
sha256=file_info['hashes'].get('SHA256', '').lower(),
|
||||
base_model=base_model,
|
||||
preview_url='', # Will be updated after preview download
|
||||
preview_url=None, # Will be updated after preview download
|
||||
preview_nsfw_level=0, # Will be updated after preview download
|
||||
from_civitai=True,
|
||||
civitai=version_info,
|
||||
@@ -179,28 +178,28 @@ class CheckpointMetadata(BaseModelMetadata):
|
||||
@classmethod
|
||||
def from_civitai_info(cls, version_info: Dict, file_info: Dict, save_path: str) -> 'CheckpointMetadata':
|
||||
"""Create CheckpointMetadata instance from Civitai version info"""
|
||||
file_name = file_info.get('name', '')
|
||||
file_name = file_info['name']
|
||||
base_model = determine_base_model(version_info.get('baseModel', ''))
|
||||
sub_type = version_info.get('type', 'checkpoint')
|
||||
|
||||
|
||||
# Extract tags and description if available
|
||||
tags = []
|
||||
description = ""
|
||||
model_data = version_info.get('model') or {}
|
||||
if 'tags' in model_data:
|
||||
tags = model_data['tags']
|
||||
if 'description' in model_data:
|
||||
description = model_data['description']
|
||||
|
||||
if 'model' in version_info:
|
||||
if 'tags' in version_info['model']:
|
||||
tags = version_info['model']['tags']
|
||||
if 'description' in version_info['model']:
|
||||
description = version_info['model']['description']
|
||||
|
||||
return cls(
|
||||
file_name=os.path.splitext(file_name)[0],
|
||||
model_name=model_data.get('name', os.path.splitext(file_name)[0]),
|
||||
model_name=version_info.get('model').get('name', os.path.splitext(file_name)[0]),
|
||||
file_path=save_path.replace(os.sep, '/'),
|
||||
size=file_info.get('sizeKB', 0) * 1024,
|
||||
modified=datetime.now().timestamp(),
|
||||
sha256=(file_info.get('hashes') or {}).get('SHA256', '').lower(),
|
||||
sha256=file_info['hashes'].get('SHA256', '').lower(),
|
||||
base_model=base_model,
|
||||
preview_url='', # Will be updated after preview download
|
||||
preview_url=None, # Will be updated after preview download
|
||||
preview_nsfw_level=0,
|
||||
from_civitai=True,
|
||||
civitai=version_info,
|
||||
@@ -217,28 +216,74 @@ class EmbeddingMetadata(BaseModelMetadata):
|
||||
@classmethod
|
||||
def from_civitai_info(cls, version_info: Dict, file_info: Dict, save_path: str) -> 'EmbeddingMetadata':
|
||||
"""Create EmbeddingMetadata instance from Civitai version info"""
|
||||
file_name = file_info.get('name', '')
|
||||
file_name = file_info['name']
|
||||
base_model = determine_base_model(version_info.get('baseModel', ''))
|
||||
sub_type = version_info.get('type', 'embedding')
|
||||
|
||||
# Extract tags and description if available
|
||||
tags = []
|
||||
description = ""
|
||||
model_data = version_info.get('model') or {}
|
||||
if 'tags' in model_data:
|
||||
tags = model_data['tags']
|
||||
if 'description' in model_data:
|
||||
description = model_data['description']
|
||||
if 'model' in version_info:
|
||||
if 'tags' in version_info['model']:
|
||||
tags = version_info['model']['tags']
|
||||
if 'description' in version_info['model']:
|
||||
description = version_info['model']['description']
|
||||
|
||||
return cls(
|
||||
file_name=os.path.splitext(file_name)[0],
|
||||
model_name=model_data.get('name', os.path.splitext(file_name)[0]),
|
||||
model_name=version_info.get('model').get('name', os.path.splitext(file_name)[0]),
|
||||
file_path=save_path.replace(os.sep, '/'),
|
||||
size=file_info.get('sizeKB', 0) * 1024,
|
||||
modified=datetime.now().timestamp(),
|
||||
sha256=(file_info.get('hashes') or {}).get('SHA256', '').lower(),
|
||||
sha256=file_info['hashes'].get('SHA256', '').lower(),
|
||||
base_model=base_model,
|
||||
preview_url='', # Will be updated after preview download
|
||||
preview_url=None, # Will be updated after preview download
|
||||
preview_nsfw_level=0,
|
||||
from_civitai=True,
|
||||
civitai=version_info,
|
||||
sub_type=sub_type,
|
||||
tags=tags,
|
||||
modelDescription=description
|
||||
)
|
||||
|
||||
@dataclass
|
||||
class MiscMetadata(BaseModelMetadata):
|
||||
"""Represents the metadata structure for a Misc model (VAE, Upscaler)"""
|
||||
sub_type: str = "vae"
|
||||
|
||||
@classmethod
|
||||
def from_civitai_info(cls, version_info: Dict, file_info: Dict, save_path: str) -> 'MiscMetadata':
|
||||
"""Create MiscMetadata instance from Civitai version info"""
|
||||
file_name = file_info['name']
|
||||
base_model = determine_base_model(version_info.get('baseModel', ''))
|
||||
|
||||
# Determine sub_type from CivitAI model type
|
||||
civitai_type = version_info.get('model', {}).get('type', '').lower()
|
||||
if civitai_type == 'vae':
|
||||
sub_type = 'vae'
|
||||
elif civitai_type == 'upscaler':
|
||||
sub_type = 'upscaler'
|
||||
else:
|
||||
sub_type = 'vae' # Default to vae
|
||||
|
||||
# Extract tags and description if available
|
||||
tags = []
|
||||
description = ""
|
||||
if 'model' in version_info:
|
||||
if 'tags' in version_info['model']:
|
||||
tags = version_info['model']['tags']
|
||||
if 'description' in version_info['model']:
|
||||
description = version_info['model']['description']
|
||||
|
||||
return cls(
|
||||
file_name=os.path.splitext(file_name)[0],
|
||||
model_name=version_info.get('model').get('name', os.path.splitext(file_name)[0]),
|
||||
file_path=save_path.replace(os.sep, '/'),
|
||||
size=file_info.get('sizeKB', 0) * 1024,
|
||||
modified=datetime.now().timestamp(),
|
||||
sha256=file_info['hashes'].get('SHA256', '').lower(),
|
||||
base_model=base_model,
|
||||
preview_url=None, # Will be updated after preview download
|
||||
preview_nsfw_level=0,
|
||||
from_civitai=True,
|
||||
civitai=version_info,
|
||||
|
||||
@@ -138,15 +138,19 @@ def calculate_recipe_fingerprint(loras):
    if not loras:
        return ""

    # Filter valid entries and extract hash and strength
    valid_loras = []
    for lora in loras:
        # Skip excluded loras
        if lora.get("exclude", False):
            continue

        # Get the hash - use modelVersionId as fallback if hash is empty
        hash_value = lora.get("hash", "").lower()
        if not hash_value and lora.get("modelVersionId"):
        if not hash_value and lora.get("isDeleted", False) and lora.get("modelVersionId"):
            hash_value = str(lora.get("modelVersionId"))

        # Skip entries without a valid hash
        if not hash_value:
            continue

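The hunk above changes when a recipe entry may fall back to `modelVersionId` for its fingerprint: one variant falls back whenever the hash is empty, the other only when the LoRA is also flagged `isDeleted`. A toy sketch of just that selection step follows, with a hypothetical function name; the rest of the fingerprint assembly is outside this excerpt.

```python
def select_fingerprint_hash(lora: dict, require_deleted: bool) -> str | None:
    """Pick the value that would feed the recipe fingerprint for one LoRA entry."""
    if lora.get("exclude", False):
        return None  # excluded loras never contribute
    hash_value = lora.get("hash", "").lower()
    if not hash_value and lora.get("modelVersionId"):
        if not require_deleted or lora.get("isDeleted", False):
            hash_value = str(lora.get("modelVersionId"))
    return hash_value or None

print(select_fingerprint_hash({"hash": "", "modelVersionId": 42}, require_deleted=True))                     # None
print(select_fingerprint_hash({"hash": "", "modelVersionId": 42, "isDeleted": True}, require_deleted=True))  # 42
```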
@@ -1,7 +1,7 @@
[project]
name = "comfyui-lora-manager"
description = "Revolutionize your workflow with the ultimate LoRA companion for ComfyUI!"
version = "0.9.16"
version = "0.9.13"
license = {file = "LICENSE"}
dependencies = [
    "aiohttp",
@@ -4,13 +4,9 @@ testpaths = tests
python_files = test_*.py
python_classes = Test*
python_functions = test_*
# Asyncio configuration
asyncio_mode = auto
asyncio_default_fixture_loop_scope = function
# Register markers
# Register async marker for coroutine-style tests
markers =
    asyncio: execute test within asyncio event loop
    no_settings_dir_isolation: allow tests to use real settings paths
    integration: integration tests requiring external resources
# Skip problematic directories to avoid import conflicts
norecursedirs = .git .tox dist build *.egg __pycache__ py .hypothesis
norecursedirs = .git .tox dist build *.egg __pycache__ py
@@ -1,7 +1,3 @@
-r requirements.txt
pytest>=7.4
pytest-cov>=4.1
pytest-asyncio>=0.21.0
hypothesis>=6.0
syrupy>=5.0
pytest-benchmark>=5.0
scripts/sync_translation_keys.py: Executable file → Normal file (0 line changes)
@@ -154,7 +154,6 @@ class StandaloneServer:
        self.app = web.Application(
            logger=logger,
            middlewares=[cache_control],
            client_max_size=256 * 1024 * 1024,
            handler_args={
                "max_field_size": HEADER_SIZE_LIMIT,
                "max_line_size": HEADER_SIZE_LIMIT,
@@ -60,9 +60,6 @@ body {
  --badge-update-bg: oklch(72% 0.2 220);
  --badge-update-text: oklch(28% 0.03 220);
  --badge-update-glow: oklch(72% 0.2 220 / 0.28);
  --badge-skip-refresh-bg: oklch(82% 0.12 45);
  --badge-skip-refresh-text: oklch(35% 0.02 45);
  --badge-skip-refresh-glow: oklch(82% 0.12 45 / 0.15);

  /* Spacing Scale */
  --space-1: calc(8px * 1);
@@ -117,9 +114,6 @@ html[data-theme="light"] {
  --badge-update-bg: oklch(62% 0.18 220);
  --badge-update-text: oklch(98% 0.02 240);
  --badge-update-glow: oklch(62% 0.18 220 / 0.4);
  --badge-skip-refresh-bg: oklch(82% 0.12 45);
  --badge-skip-refresh-text: oklch(98% 0.02 45);
  --badge-skip-refresh-glow: oklch(82% 0.12 45 / 0.15);
}

body {
@@ -113,12 +113,6 @@
|
||||
max-width: 110px;
|
||||
}
|
||||
|
||||
/* Compact mode: hide sub-type to save space */
|
||||
.compact-density .model-sub-type,
|
||||
.compact-density .model-separator {
|
||||
display: none;
|
||||
}
|
||||
|
||||
.compact-density .card-actions i {
|
||||
font-size: 0.95em;
|
||||
padding: 3px;
|
||||
@@ -658,25 +652,3 @@
|
||||
margin-left: 1px;
|
||||
line-height: 1;
|
||||
}
|
||||
|
||||
.model-skip-refresh-badge {
|
||||
width: 16px;
|
||||
height: 16px;
|
||||
padding: 0;
|
||||
border-radius: 3px;
|
||||
background: var(--badge-skip-refresh-bg);
|
||||
color: var(--badge-skip-refresh-text);
|
||||
font-size: 0.65rem;
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
flex-shrink: 0;
|
||||
box-shadow: 0 1px 3px var(--badge-skip-refresh-glow);
|
||||
border: 1px solid color-mix(in oklab, var(--badge-skip-refresh-bg) 70%, transparent);
|
||||
opacity: 0.85;
|
||||
}
|
||||
|
||||
.model-skip-refresh-badge i {
|
||||
margin-left: 0;
|
||||
line-height: 1;
|
||||
}
|
||||
|
||||
@@ -387,51 +387,3 @@
|
||||
min-width: 0;
|
||||
}
|
||||
}
|
||||
|
||||
/* Early Access styles - Buzz theme color (#F59F00) */
|
||||
.version-badge-early-access {
|
||||
background: color-mix(in oklch, #F59F00 25%, transparent);
|
||||
color: #E67700;
|
||||
border-color: color-mix(in oklch, #F59F00 55%, transparent);
|
||||
}
|
||||
|
||||
[data-theme="dark"] .version-badge-early-access {
|
||||
background: color-mix(in oklch, #F59F00 20%, transparent);
|
||||
color: #F59F00;
|
||||
border-color: color-mix(in oklch, #F59F00 45%, transparent);
|
||||
}
|
||||
|
||||
.version-meta-ea {
|
||||
color: #E67700;
|
||||
font-weight: 600;
|
||||
}
|
||||
|
||||
[data-theme="dark"] .version-meta-ea {
|
||||
color: #F59F00;
|
||||
}
|
||||
|
||||
/* Early Access row - gray out effect */
|
||||
.model-version-row.is-early-access {
|
||||
opacity: 0.85;
|
||||
filter: grayscale(40%);
|
||||
transition: opacity 0.2s ease, filter 0.2s ease;
|
||||
}
|
||||
|
||||
.model-version-row.is-early-access:hover {
|
||||
opacity: 0.95;
|
||||
filter: grayscale(25%);
|
||||
}
|
||||
|
||||
/* Early Access download button - Buzz theme color (#F59F00) */
|
||||
.version-action-early-access {
|
||||
background: color-mix(in oklch, #F59F00 15%, transparent);
|
||||
color: #E67700;
|
||||
border-color: color-mix(in oklch, #F59F00 50%, transparent);
|
||||
cursor: not-allowed;
|
||||
}
|
||||
|
||||
[data-theme="dark"] .version-action-early-access {
|
||||
background: color-mix(in oklch, #F59F00 12%, transparent);
|
||||
color: #F59F00;
|
||||
border-color: color-mix(in oklch, #F59F00 40%, transparent);
|
||||
}
|
||||
|
||||
@@ -512,10 +512,6 @@
|
||||
|
||||
.filter-preset.active .preset-delete-btn {
|
||||
color: white;
|
||||
opacity: 0;
|
||||
}
|
||||
|
||||
.filter-preset:hover.active .preset-delete-btn {
|
||||
opacity: 0.8;
|
||||
}
|
||||
|
||||
@@ -533,16 +529,13 @@
|
||||
align-items: center;
|
||||
gap: 6px;
|
||||
white-space: nowrap;
|
||||
max-width: 120px; /* Prevent long names from breaking layout */
|
||||
overflow: hidden;
|
||||
text-overflow: ellipsis;
|
||||
}
|
||||
|
||||
.preset-delete-btn {
|
||||
background: none;
|
||||
border: none;
|
||||
color: var(--text-color);
|
||||
opacity: 0; /* Hidden by default */
|
||||
opacity: 0.5;
|
||||
cursor: pointer;
|
||||
padding: 4px;
|
||||
display: flex;
|
||||
@@ -553,10 +546,6 @@
|
||||
margin-left: auto;
|
||||
}
|
||||
|
||||
.filter-preset:hover .preset-delete-btn {
|
||||
opacity: 0.5; /* Show on hover */
|
||||
}
|
||||
|
||||
.preset-delete-btn:hover {
|
||||
opacity: 1;
|
||||
color: var(--lora-error, #e74c3c);
|
||||
@@ -673,57 +662,6 @@
|
||||
|
||||
|
||||
|
||||
/* Tag Logic Toggle Styles */
|
||||
.filter-section-header {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
margin-bottom: 8px;
|
||||
}
|
||||
|
||||
.filter-section-header h4 {
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.tag-logic-toggle {
|
||||
display: flex;
|
||||
background-color: var(--lora-surface);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: var(--border-radius-sm);
|
||||
overflow: hidden;
|
||||
}
|
||||
|
||||
.tag-logic-option {
|
||||
background: none;
|
||||
border: none;
|
||||
padding: 2px 8px;
|
||||
font-size: 11px;
|
||||
cursor: pointer;
|
||||
color: var(--text-color);
|
||||
opacity: 0.7;
|
||||
transition: all 0.2s ease;
|
||||
font-weight: 500;
|
||||
}
|
||||
|
||||
.tag-logic-option:hover {
|
||||
opacity: 1;
|
||||
background-color: var(--lora-surface-hover);
|
||||
}
|
||||
|
||||
.tag-logic-option.active {
|
||||
background-color: var(--lora-accent);
|
||||
color: white;
|
||||
opacity: 1;
|
||||
}
|
||||
|
||||
.tag-logic-option:first-child {
|
||||
border-right: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.tag-logic-option.active:first-child {
|
||||
border-right: 1px solid rgba(255, 255, 255, 0.3);
|
||||
}
|
||||
|
||||
/* Mobile adjustments */
|
||||
@media (max-width: 768px) {
|
||||
.search-options-panel,
|
||||
|
||||
@@ -9,7 +9,8 @@ import { state } from '../state/index.js';
export const MODEL_TYPES = {
LORA: 'loras',
CHECKPOINT: 'checkpoints',
EMBEDDING: 'embeddings' // Future model type
EMBEDDING: 'embeddings',
MISC: 'misc'
};

// Base API configuration for each model type
@@ -40,6 +41,15 @@ export const MODEL_CONFIG = {
supportsBulkOperations: true,
supportsMove: true,
templateName: 'embeddings.html'
},
[MODEL_TYPES.MISC]: {
displayName: 'Misc',
singularName: 'misc',
defaultPageSize: 100,
supportsLetterFilter: false,
supportsBulkOperations: true,
supportsMove: true,
templateName: 'misc.html'
}
};

@@ -133,6 +143,11 @@ export const MODEL_SPECIFIC_ENDPOINTS = {
},
[MODEL_TYPES.EMBEDDING]: {
metadata: `/api/lm/${MODEL_TYPES.EMBEDDING}/metadata`,
},
[MODEL_TYPES.MISC]: {
metadata: `/api/lm/${MODEL_TYPES.MISC}/metadata`,
vae_roots: `/api/lm/${MODEL_TYPES.MISC}/vae_roots`,
upscaler_roots: `/api/lm/${MODEL_TYPES.MISC}/upscaler_roots`,
}
};

@@ -924,11 +924,6 @@ export class BaseModelApiClient {
params.append('model_type', type);
});
}

// Add tag logic parameter (any = OR, all = AND)
if (pageState.filters.tagLogic) {
params.append('tag_logic', pageState.filters.tagLogic);
}
}

this._addModelSpecificParams(params, pageState);

static/js/api/miscApi.js (new file)
@@ -0,0 +1,62 @@
import { BaseModelApiClient } from './baseModelApi.js';
import { getSessionItem } from '../utils/storageHelpers.js';

export class MiscApiClient extends BaseModelApiClient {
_addModelSpecificParams(params, pageState) {
const filterMiscHash = getSessionItem('recipe_to_misc_filterHash');
const filterMiscHashes = getSessionItem('recipe_to_misc_filterHashes');

if (filterMiscHash) {
params.append('misc_hash', filterMiscHash);
} else if (filterMiscHashes) {
try {
if (Array.isArray(filterMiscHashes) && filterMiscHashes.length > 0) {
params.append('misc_hashes', filterMiscHashes.join(','));
}
} catch (error) {
console.error('Error parsing misc hashes from session storage:', error);
}
}

if (pageState.subType) {
params.append('sub_type', pageState.subType);
}
}

async getMiscInfo(filePath) {
try {
const response = await fetch(this.apiConfig.endpoints.specific.info, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ file_path: filePath })
});
if (!response.ok) throw new Error('Failed to fetch misc info');
return await response.json();
} catch (error) {
console.error('Error fetching misc info:', error);
throw error;
}
}

async getVaeRoots() {
try {
const response = await fetch(this.apiConfig.endpoints.specific.vae_roots, { method: 'GET' });
if (!response.ok) throw new Error('Failed to fetch VAE roots');
return await response.json();
} catch (error) {
console.error('Error fetching VAE roots:', error);
throw error;
}
}

async getUpscalerRoots() {
try {
const response = await fetch(this.apiConfig.endpoints.specific.upscaler_roots, { method: 'GET' });
if (!response.ok) throw new Error('Failed to fetch upscaler roots');
return await response.json();
} catch (error) {
console.error('Error fetching upscaler roots:', error);
throw error;
}
}
}
@@ -1,6 +1,7 @@
import { LoraApiClient } from './loraApi.js';
import { CheckpointApiClient } from './checkpointApi.js';
import { EmbeddingApiClient } from './embeddingApi.js';
import { MiscApiClient } from './miscApi.js';
import { MODEL_TYPES, isValidModelType } from './apiConfig.js';
import { state } from '../state/index.js';

@@ -12,6 +13,8 @@ export function createModelApiClient(modelType) {
return new CheckpointApiClient(MODEL_TYPES.CHECKPOINT);
case MODEL_TYPES.EMBEDDING:
return new EmbeddingApiClient(MODEL_TYPES.EMBEDDING);
case MODEL_TYPES.MISC:
return new MiscApiClient(MODEL_TYPES.MISC);
default:
throw new Error(`Unsupported model type: ${modelType}`);
}

@@ -1,7 +1,7 @@
import { BaseContextMenu } from './BaseContextMenu.js';
import { state } from '../../state/index.js';
import { bulkManager } from '../../managers/BulkManager.js';
import { updateElementText, translate } from '../../utils/i18nHelpers.js';
import { updateElementText } from '../../utils/i18nHelpers.js';

export class BulkContextMenu extends BaseContextMenu {
constructor() {
@@ -71,40 +71,6 @@ export class BulkContextMenu extends BaseContextMenu {
if (setContentRatingItem) {
setContentRatingItem.style.display = config.setContentRating ? 'flex' : 'none';
}

const skipMetadataRefreshItem = this.menu.querySelector('[data-action="skip-metadata-refresh"]');
const resumeMetadataRefreshItem = this.menu.querySelector('[data-action="resume-metadata-refresh"]');

if (skipMetadataRefreshItem && resumeMetadataRefreshItem) {
const skipCount = this.countSkipStatus(true);
const resumeCount = this.countSkipStatus(false);
const totalCount = skipCount + resumeCount;

if (skipCount === totalCount) {
skipMetadataRefreshItem.style.display = 'none';
resumeMetadataRefreshItem.style.display = 'flex';
resumeMetadataRefreshItem.querySelector('span').textContent = translate(
'loras.bulkOperations.resumeMetadataRefresh'
);
} else if (resumeCount === totalCount) {
skipMetadataRefreshItem.style.display = 'flex';
resumeMetadataRefreshItem.style.display = 'none';
skipMetadataRefreshItem.querySelector('span').textContent = translate(
'loras.bulkOperations.skipMetadataRefresh'
);
} else {
skipMetadataRefreshItem.style.display = 'flex';
resumeMetadataRefreshItem.style.display = 'flex';
skipMetadataRefreshItem.querySelector('span').textContent = translate(
'loras.bulkOperations.skipMetadataRefreshCount',
{ count: resumeCount }
);
resumeMetadataRefreshItem.querySelector('span').textContent = translate(
'loras.bulkOperations.resumeMetadataRefreshCount',
{ count: skipCount }
);
}
}
}

updateSelectedCountHeader() {
@@ -114,20 +80,6 @@ export class BulkContextMenu extends BaseContextMenu {
}
}

countSkipStatus(skipState) {
let count = 0;
for (const filePath of state.selectedModels) {
const card = document.querySelector(`.model-card[data-filepath="${filePath}"]`);
if (card) {
const isSkipped = card.dataset.skip_metadata_refresh === 'true';
if (isSkipped === skipState) {
count++;
}
}
}
return count;
}

showMenu(x, y, card) {
this.updateMenuItemsForModelType();
this.updateSelectedCountHeader();
@@ -166,12 +118,6 @@ export class BulkContextMenu extends BaseContextMenu {
case 'auto-organize':
bulkManager.autoOrganizeSelectedModels();
break;
case 'skip-metadata-refresh':
bulkManager.setSkipMetadataRefresh(true);
break;
case 'resume-metadata-refresh':
bulkManager.setSkipMetadataRefresh(false);
break;
case 'delete-all':
bulkManager.showBulkDeleteModal();
break;

static/js/components/ContextMenu/MiscContextMenu.js (new file)
@@ -0,0 +1,85 @@
import { BaseContextMenu } from './BaseContextMenu.js';
import { ModelContextMenuMixin } from './ModelContextMenuMixin.js';
import { getModelApiClient, resetAndReload } from '../../api/modelApiFactory.js';
import { showDeleteModal, showExcludeModal } from '../../utils/modalUtils.js';
import { moveManager } from '../../managers/MoveManager.js';
import { i18n } from '../../i18n/index.js';

export class MiscContextMenu extends BaseContextMenu {
constructor() {
super('miscContextMenu', '.model-card');
this.nsfwSelector = document.getElementById('nsfwLevelSelector');
this.modelType = 'misc';
this.resetAndReload = resetAndReload;

this.initNSFWSelector();
}

// Implementation needed by the mixin
async saveModelMetadata(filePath, data) {
return getModelApiClient().saveModelMetadata(filePath, data);
}

showMenu(x, y, card) {
super.showMenu(x, y, card);

// Update the "Move to other root" label based on current model type
const moveOtherItem = this.menu.querySelector('[data-action="move-other"]');
if (moveOtherItem) {
const currentType = card.dataset.sub_type || 'vae';
const otherType = currentType === 'vae' ? 'upscaler' : 'vae';
const typeLabel = i18n.t(`misc.modelTypes.${otherType}`);
moveOtherItem.innerHTML = `<i class="fas fa-exchange-alt"></i> ${i18n.t('misc.contextMenu.moveToOtherTypeFolder', { otherType: typeLabel })}`;
}
}

handleMenuAction(action) {
// First try to handle with common actions
if (ModelContextMenuMixin.handleCommonMenuActions.call(this, action)) {
return;
}

const apiClient = getModelApiClient();

// Otherwise handle misc-specific actions
switch (action) {
case 'details':
// Show misc details
this.currentCard.click();
break;
case 'replace-preview':
// Add new action for replacing preview images
apiClient.replaceModelPreview(this.currentCard.dataset.filepath);
break;
case 'delete':
showDeleteModal(this.currentCard.dataset.filepath);
break;
case 'copyname':
// Copy misc model name
if (this.currentCard.querySelector('.fa-copy')) {
this.currentCard.querySelector('.fa-copy').click();
}
break;
case 'refresh-metadata':
// Refresh metadata from CivitAI
apiClient.refreshSingleModelMetadata(this.currentCard.dataset.filepath);
break;
case 'move':
moveManager.showMoveModal(this.currentCard.dataset.filepath, this.currentCard.dataset.sub_type);
break;
case 'move-other':
{
const currentType = this.currentCard.dataset.sub_type || 'vae';
const otherType = currentType === 'vae' ? 'upscaler' : 'vae';
moveManager.showMoveModal(this.currentCard.dataset.filepath, otherType);
}
break;
case 'exclude':
showExcludeModal(this.currentCard.dataset.filepath);
break;
}
}
}

// Mix in shared methods
Object.assign(MiscContextMenu.prototype, ModelContextMenuMixin);
@@ -2,6 +2,7 @@ export { LoraContextMenu } from './LoraContextMenu.js';
export { RecipeContextMenu } from './RecipeContextMenu.js';
export { CheckpointContextMenu } from './CheckpointContextMenu.js';
export { EmbeddingContextMenu } from './EmbeddingContextMenu.js';
export { MiscContextMenu } from './MiscContextMenu.js';
export { GlobalContextMenu } from './GlobalContextMenu.js';
export { ModelContextMenuMixin } from './ModelContextMenuMixin.js';

@@ -9,6 +10,7 @@ import { LoraContextMenu } from './LoraContextMenu.js';
import { RecipeContextMenu } from './RecipeContextMenu.js';
import { CheckpointContextMenu } from './CheckpointContextMenu.js';
import { EmbeddingContextMenu } from './EmbeddingContextMenu.js';
import { MiscContextMenu } from './MiscContextMenu.js';
import { GlobalContextMenu } from './GlobalContextMenu.js';

// Factory method to create page-specific context menu instances
@@ -22,6 +24,8 @@ export function createPageContextMenu(pageType) {
return new CheckpointContextMenu();
case 'embeddings':
return new EmbeddingContextMenu();
case 'misc':
return new MiscContextMenu();
default:
return null;
}

@@ -32,6 +32,7 @@ export class HeaderManager {
if (path.includes('/checkpoints')) return 'checkpoints';
if (path.includes('/embeddings')) return 'embeddings';
if (path.includes('/statistics')) return 'statistics';
if (path.includes('/misc')) return 'misc';
if (path.includes('/loras')) return 'loras';
return 'unknown';
}

@@ -48,18 +48,15 @@ export class ModelDuplicatesManager {
// Method to check for duplicates count using existing endpoint
async checkDuplicatesCount() {
try {
const params = this._buildFilterQueryParams();
const endpoint = `/api/lm/${this.modelType}/find-duplicates`;
const url = params.toString() ? `${endpoint}?${params}` : endpoint;

const response = await fetch(url);

const response = await fetch(endpoint);

if (!response.ok) {
throw new Error(`Failed to get duplicates count: ${response.statusText}`);
}

const data = await response.json();

if (data.success) {
const duplicatesCount = (data.duplicates || []).length;
this.updateDuplicatesBadge(duplicatesCount);
@@ -106,34 +103,29 @@ export class ModelDuplicatesManager {

async findDuplicates() {
try {
const params = this._buildFilterQueryParams();
// Determine API endpoint based on model type
const endpoint = `/api/lm/${this.modelType}/find-duplicates`;
const url = params.toString() ? `${endpoint}?${params}` : endpoint;

const response = await fetch(url);

const response = await fetch(endpoint);
if (!response.ok) {
throw new Error(`Failed to find duplicates: ${response.statusText}`);
}

const data = await response.json();
if (!data.success) {
throw new Error(data.error || 'Unknown error finding duplicates');
}

this.duplicateGroups = data.duplicates || [];

// Update the badge with the current count
this.updateDuplicatesBadge(this.duplicateGroups.length);

if (this.duplicateGroups.length === 0) {
showToast('toast.duplicates.noDuplicatesFound', { type: this.modelType }, 'info');
// If already in duplicate mode, exit to clear the display
if (this.inDuplicateMode) {
this.exitDuplicateMode();
}
return false;
}

this.enterDuplicateMode();
return true;
} catch (error) {
@@ -142,51 +134,6 @@ export class ModelDuplicatesManager {
return false;
}
}

/**
* Build query parameters from current filter state for duplicate finding.
* @returns {URLSearchParams} The query parameters to append to the API endpoint
*/
_buildFilterQueryParams() {
const params = new URLSearchParams();
const pageState = getCurrentPageState();
const filters = pageState?.filters;

if (!filters) return params;

// Base model filters
if (filters.baseModel && Array.isArray(filters.baseModel)) {
filters.baseModel.forEach(m => params.append('base_model', m));
}

// Tag filters (tri-state: include/exclude)
if (filters.tags && typeof filters.tags === 'object') {
Object.entries(filters.tags).forEach(([tag, state]) => {
if (state === 'include') {
params.append('tag_include', tag);
} else if (state === 'exclude') {
params.append('tag_exclude', tag);
}
});
}

// Model type filters
if (filters.modelTypes && Array.isArray(filters.modelTypes)) {
filters.modelTypes.forEach(t => params.append('model_type', t));
}

// Folder filter (from active folder state)
if (pageState.activeFolder) {
params.append('folder', pageState.activeFolder);
}

// Favorites filter
if (pageState.showFavoritesOnly) {
params.append('favorites_only', 'true');
}

return params;
}

enterDuplicateMode() {
this.inDuplicateMode = true;

@@ -26,7 +26,6 @@ class RecipeCard {
card.dataset.nsfwLevel = this.recipe.preview_nsfw_level || 0;
card.dataset.created = this.recipe.created_date;
card.dataset.id = this.recipe.id || '';
card.dataset.folder = this.recipe.folder || '';

// Get base model with fallback
const baseModelLabel = (this.recipe.base_model || '').trim() || 'Unknown';

@@ -217,18 +217,8 @@ class RecipeModal {
}

// Set recipe image
const mediaContainer = document.getElementById('recipePreviewContainer');
if (mediaContainer) {
// Stop any playing video before replacing content
const existingVideo = mediaContainer.querySelector('video');
if (existingVideo) {
existingVideo.pause();
existingVideo.currentTime = 0;
}

// Clear the container
mediaContainer.innerHTML = '';

const modalImage = document.getElementById('recipeModalImage');
if (modalImage) {
// Ensure file_url exists, fallback to file_path if needed
const imageUrl = recipe.file_url ||
(recipe.file_path ? `/loras_static/root1/preview/${recipe.file_path.split('/').pop()}` :
@@ -237,6 +227,10 @@ class RecipeModal {
// Check if the file is a video (mp4)
const isVideo = imageUrl.toLowerCase().endsWith('.mp4');

// Replace the image element with appropriate media element
const mediaContainer = modalImage.parentElement;
mediaContainer.innerHTML = '';

if (isVideo) {
const videoElement = document.createElement('video');
videoElement.id = 'recipeModalVideo';

static/js/components/controls/MiscControls.js (new file)
@@ -0,0 +1,119 @@
// MiscControls.js - Specific implementation for the Misc (VAE/Upscaler) page
import { PageControls } from './PageControls.js';
import { getModelApiClient, resetAndReload } from '../../api/modelApiFactory.js';
import { getSessionItem, removeSessionItem } from '../../utils/storageHelpers.js';
import { downloadManager } from '../../managers/DownloadManager.js';

/**
* MiscControls class - Extends PageControls for Misc-specific functionality
*/
export class MiscControls extends PageControls {
constructor() {
// Initialize with 'misc' page type
super('misc');

// Register API methods specific to the Misc page
this.registerMiscAPI();

// Check for custom filters (e.g., from recipe navigation)
this.checkCustomFilters();
}

/**
* Register Misc-specific API methods
*/
registerMiscAPI() {
const miscAPI = {
// Core API functions
loadMoreModels: async (resetPage = false, updateFolders = false) => {
return await getModelApiClient().loadMoreWithVirtualScroll(resetPage, updateFolders);
},

resetAndReload: async (updateFolders = false) => {
return await resetAndReload(updateFolders);
},

refreshModels: async (fullRebuild = false) => {
return await getModelApiClient().refreshModels(fullRebuild);
},

// Add fetch from Civitai functionality for misc models
fetchFromCivitai: async () => {
return await getModelApiClient().fetchCivitaiMetadata();
},

// Add show download modal functionality
showDownloadModal: () => {
downloadManager.showDownloadModal();
},

toggleBulkMode: () => {
if (window.bulkManager) {
window.bulkManager.toggleBulkMode();
} else {
console.error('Bulk manager not available');
}
},

clearCustomFilter: async () => {
await this.clearCustomFilter();
}
};

// Register the API
this.registerAPI(miscAPI);
}

/**
* Check for custom filters sent from other pages (e.g., recipe modal)
*/
checkCustomFilters() {
const filterMiscHash = getSessionItem('recipe_to_misc_filterHash');
const filterRecipeName = getSessionItem('filterMiscRecipeName');

if (filterMiscHash && filterRecipeName) {
const indicator = document.getElementById('customFilterIndicator');
const filterText = indicator?.querySelector('.customFilterText');

if (indicator && filterText) {
indicator.classList.remove('hidden');

const displayText = `Viewing misc model from: ${filterRecipeName}`;
filterText.textContent = this._truncateText(displayText, 30);
filterText.setAttribute('title', displayText);

const filterElement = indicator.querySelector('.filter-active');
if (filterElement) {
filterElement.classList.add('animate');
setTimeout(() => filterElement.classList.remove('animate'), 600);
}
}
}
}

/**
* Clear misc custom filter and reload
*/
async clearCustomFilter() {
removeSessionItem('recipe_to_misc_filterHash');
removeSessionItem('recipe_to_misc_filterHashes');
removeSessionItem('filterMiscRecipeName');

const indicator = document.getElementById('customFilterIndicator');
if (indicator) {
indicator.classList.add('hidden');
}

await resetAndReload();
}

/**
* Helper to truncate text with ellipsis
* @param {string} text
* @param {number} maxLength
* @returns {string}
*/
_truncateText(text, maxLength) {
return text.length > maxLength ? `${text.substring(0, maxLength - 3)}...` : text;
}
}
@@ -3,13 +3,14 @@ import { PageControls } from './PageControls.js';
import { LorasControls } from './LorasControls.js';
import { CheckpointsControls } from './CheckpointsControls.js';
import { EmbeddingsControls } from './EmbeddingsControls.js';
import { MiscControls } from './MiscControls.js';

// Export the classes
export { PageControls, LorasControls, CheckpointsControls, EmbeddingsControls };
export { PageControls, LorasControls, CheckpointsControls, EmbeddingsControls, MiscControls };

/**
* Factory function to create the appropriate controls based on page type
* @param {string} pageType - The type of page ('loras', 'checkpoints', or 'embeddings')
* @param {string} pageType - The type of page ('loras', 'checkpoints', 'embeddings', or 'misc')
* @returns {PageControls} - The appropriate controls instance
*/
export function createPageControls(pageType) {
@@ -19,6 +20,8 @@ export function createPageControls(pageType) {
return new CheckpointsControls();
} else if (pageType === 'embeddings') {
return new EmbeddingsControls();
} else if (pageType === 'misc') {
return new MiscControls();
} else {
console.error(`Unknown page type: ${pageType}`);
return null;

@@ -198,12 +198,6 @@ class InitializationManager {
handleProgressUpdate(data) {
if (!data) return;
console.log('Received progress update:', data);

// Handle cache health warning messages
if (data.type === 'cache_health_warning') {
this.handleCacheHealthWarning(data);
return;
}

// Check if this update is for our page type
if (data.pageType && data.pageType !== this.pageType) {
@@ -472,29 +466,6 @@ class InitializationManager {
}
}

/**
* Handle cache health warning messages from WebSocket
*/
handleCacheHealthWarning(data) {
console.log('Cache health warning received:', data);

// Import bannerService dynamically to avoid circular dependencies
import('../managers/BannerService.js').then(({ bannerService }) => {
// Initialize banner service if not already done
if (!bannerService.initialized) {
bannerService.initialize().then(() => {
bannerService.registerCacheHealthBanner(data);
}).catch(err => {
console.error('Failed to initialize banner service:', err);
});
} else {
bannerService.registerCacheHealthBanner(data);
}
}).catch(err => {
console.error('Failed to load banner service:', err);
});
}

/**
* Clean up resources when the component is destroyed
*/

@@ -214,6 +214,52 @@ function handleSendToWorkflow(card, replaceMode, modelType) {
missingNodesMessage,
missingTargetMessage,
});
} else if (modelType === MODEL_TYPES.MISC) {
const modelPath = card.dataset.filepath;
if (!modelPath) {
const message = translate('modelCard.sendToWorkflow.missingPath', {}, 'Unable to determine model path for this card');
showToast(message, {}, 'error');
return;
}

const subtype = (card.dataset.sub_type || 'vae').toLowerCase();
const isVae = subtype === 'vae';
const widgetName = isVae ? 'vae_name' : 'model_name';
const actionTypeText = translate(
isVae ? 'uiHelpers.nodeSelector.vae' : 'uiHelpers.nodeSelector.upscaler',
{},
isVae ? 'VAE' : 'Upscaler'
);
const successMessage = translate(
isVae ? 'uiHelpers.workflow.vaeUpdated' : 'uiHelpers.workflow.upscalerUpdated',
{},
isVae ? 'VAE updated in workflow' : 'Upscaler updated in workflow'
);
const failureMessage = translate(
isVae ? 'uiHelpers.workflow.vaeFailed' : 'uiHelpers.workflow.upscalerFailed',
{},
isVae ? 'Failed to update VAE node' : 'Failed to update upscaler node'
);
const missingNodesMessage = translate(
'uiHelpers.workflow.noMatchingNodes',
{},
'No compatible nodes available in the current workflow'
);
const missingTargetMessage = translate(
'uiHelpers.workflow.noTargetNodeSelected',
{},
'No target node selected'
);

sendModelPathToWorkflow(modelPath, {
widgetName,
collectionType: MODEL_TYPES.MISC,
actionTypeText,
successMessage,
failureMessage,
missingNodesMessage,
missingTargetMessage,
});
} else {
showToast('modelCard.sendToWorkflow.checkpointNotImplemented', {}, 'info');
}
@@ -230,6 +276,10 @@ function handleCopyAction(card, modelType) {
} else if (modelType === MODEL_TYPES.EMBEDDING) {
const embeddingName = card.dataset.file_name;
copyToClipboard(embeddingName, 'Embedding name copied');
} else if (modelType === MODEL_TYPES.MISC) {
const miscName = card.dataset.file_name;
const message = translate('modelCard.actions.miscNameCopied', {}, 'Model name copied');
copyToClipboard(miscName, message);
}
}

@@ -433,10 +483,9 @@ export function createModelCard(model, modelType) {
card.dataset.usage_count = String(model.usage_count);
card.dataset.notes = model.notes || '';
card.dataset.base_model = model.base_model || 'Unknown';
card.dataset.favorite = model.favorite ? 'true' : 'false';
const hasUpdateAvailable = Boolean(model.update_available);
card.dataset.update_available = hasUpdateAvailable ? 'true' : 'false';
card.dataset.skip_metadata_refresh = model.skip_metadata_refresh ? 'true' : 'false';
card.dataset.favorite = model.favorite ? 'true' : 'false';
const hasUpdateAvailable = Boolean(model.update_available);
card.dataset.update_available = hasUpdateAvailable ? 'true' : 'false';

// To only show usage_count when sorting by usage.
const pageState = getCurrentPageState();
@@ -483,10 +532,6 @@ export function createModelCard(model, modelType) {
card.classList.add('nsfw-content');
}

if (model.skip_metadata_refresh) {
card.classList.add('skip-refresh');
}

// Apply selection state if in bulk mode and this card is in the selected set (LoRA only)
if (modelType === MODEL_TYPES.LORA && state.bulkMode && state.selectedLoras.has(model.file_path)) {
card.classList.add('selected');
@@ -613,11 +658,6 @@ export function createModelCard(model, modelType) {
<i class="fas fa-arrow-up"></i>
</span>
` : ''}
${model.skip_metadata_refresh ? `
<span class="model-skip-refresh-badge" title="${translate('modelCard.badges.skipRefresh', {}, 'Metadata refresh skipped')}">
<i class="fas fa-ban"></i>
</span>
` : ''}
</div>
<div class="card-actions">
${actionIcons}

@@ -123,70 +123,7 @@ function formatDateLabel(value) {
});
}

/**
* Format EA end time as smart relative time
* - < 1 day: "in Xh" (hours)
* - 1-7 days: "in Xd" (days)
* - > 7 days: "Jan 15" (short date)
*/
function formatEarlyAccessTime(endsAt) {
if (!endsAt) {
return null;
}
const endDate = new Date(endsAt);
if (Number.isNaN(endDate.getTime())) {
return null;
}

const now = new Date();
const diffMs = endDate.getTime() - now.getTime();
const diffHours = diffMs / (1000 * 60 * 60);
const diffDays = diffHours / 24;

if (diffHours < 1) {
return translate('modals.model.versions.eaTime.endingSoon', {}, 'ending soon');
}
if (diffHours < 24) {
const hours = Math.ceil(diffHours);
return translate(
'modals.model.versions.eaTime.hours',
{ count: hours },
`in ${hours}h`
);
}
if (diffDays <= 7) {
const days = Math.ceil(diffDays);
return translate(
'modals.model.versions.eaTime.days',
{ count: days },
`in ${days}d`
);
}
// More than 7 days: show short date
return endDate.toLocaleDateString(undefined, {
month: 'short',
day: 'numeric',
});
}

function isEarlyAccessActive(version) {
// Two-phase detection:
// 1. Use pre-computed isEarlyAccess flag if available (from backend)
// 2. Otherwise check exact end time if available
if (typeof version.isEarlyAccess === 'boolean') {
return version.isEarlyAccess;
}
if (!version.earlyAccessEndsAt) {
return false;
}
try {
return new Date(version.earlyAccessEndsAt) > new Date();
} catch {
return false;
}
}

function buildMetaMarkup(version, options = {}) {
function buildMetaMarkup(version) {
const segments = [];
if (version.baseModel) {
segments.push(
@@ -201,14 +138,6 @@ function buildMetaMarkup(version, options = {}) {
segments.push(escapeHtml(formatFileSize(version.sizeBytes)));
}

// Add early access info if applicable
if (options.showEarlyAccess && isEarlyAccessActive(version)) {
const eaTime = formatEarlyAccessTime(version.earlyAccessEndsAt);
if (eaTime) {
segments.push(`<span class="version-meta-ea"><i class="fas fa-clock"></i> ${escapeHtml(eaTime)}</span>`);
}
}

if (!segments.length) {
return escapeHtml(
translate('modals.model.versions.labels.noDetails', {}, 'No additional details')
@@ -306,7 +235,6 @@ function resolveUpdateAvailability(record, baseModel, currentVersionId) {

const strategy = state?.global?.settings?.update_flag_strategy;
const sameBaseMode = strategy === DISPLAY_FILTER_MODES.SAME_BASE;
const hideEarlyAccess = state?.global?.settings?.hide_early_access_updates;

if (!sameBaseMode) {
return Boolean(record?.hasUpdate);
@@ -350,9 +278,6 @@ function resolveUpdateAvailability(record, baseModel, currentVersionId) {
if (version.isInLibrary || version.shouldIgnore) {
return false;
}
if (hideEarlyAccess && isEarlyAccessActive(version)) {
return false;
}
const versionBase = normalizeBaseModelName(version.baseModel);
if (versionBase !== normalizedBase) {
return false;
@@ -424,7 +349,6 @@ function renderRow(version, options) {
const isNewer =
typeof latestLibraryVersionId === 'number' &&
version.versionId > latestLibraryVersionId;
const isEarlyAccess = isEarlyAccessActive(version);
const badges = [];

if (isCurrent) {
@@ -437,10 +361,6 @@ function renderRow(version, options) {
badges.push(buildBadge(translate('modals.model.versions.badges.newer', {}, 'Newer Version'), 'info'));
}

if (isEarlyAccess) {
badges.push(buildBadge(translate('modals.model.versions.badges.earlyAccess', {}, 'Early Access'), 'early-access'));
}

if (version.shouldIgnore) {
badges.push(buildBadge(translate('modals.model.versions.badges.ignored', {}, 'Ignored'), 'muted'));
}
@@ -457,10 +377,8 @@ function renderRow(version, options) {

const actions = [];
if (!version.isInLibrary) {
// Download button with optional EA bolt icon
const downloadIcon = isEarlyAccess ? '<i class="fas fa-bolt"></i> ' : '';
actions.push(
`<button class="version-action version-action-primary" data-version-action="download">${downloadIcon}${escapeHtml(downloadLabel)}</button>`
`<button class="version-action version-action-primary" data-version-action="download">${escapeHtml(downloadLabel)}</button>`
);
} else if (version.filePath) {
actions.push(
@@ -484,7 +402,7 @@ function renderRow(version, options) {
);

const rowAttributes = [
`class="model-version-row${isCurrent ? ' is-current' : ''}${linkTarget ? ' is-clickable' : ''}${isEarlyAccess ? ' is-early-access' : ''}"`,
`class="model-version-row${isCurrent ? ' is-current' : ''}${linkTarget ? ' is-clickable' : ''}"`,
`data-version-id="${escapeHtml(version.versionId)}"`,
];
if (linkTarget) {
@@ -501,7 +419,7 @@ function renderRow(version, options) {
</div>
<div class="version-badges">${badges.join('')}</div>
<div class="version-meta">
${buildMetaMarkup(version, { showEarlyAccess: true })}
${buildMetaMarkup(version)}
</div>
</div>
<div class="version-actions">
@@ -1091,56 +1009,6 @@ export function initVersionsTab({
});
}

async function resolveDownloadPathFromCurrentVersion() {
if (!normalizedCurrentVersionId || !controller.record?.versions) {
return null;
}

const currentVersion = controller.record.versions.find(
v => v.versionId === normalizedCurrentVersionId && v.isInLibrary && v.filePath
);
if (!currentVersion?.filePath) {
return null;
}

try {
const client = ensureClient();
const rootsData = await client.fetchModelRoots();
const roots = rootsData?.roots;
if (!Array.isArray(roots) || roots.length === 0) {
return null;
}

const normalizedFilePath = currentVersion.filePath.replace(/\\/g, '/');
let matchedRoot = null;
let relativePath = null;

for (const root of roots) {
const normalizedRoot = root.replace(/\\/g, '/');
if (normalizedFilePath.startsWith(normalizedRoot)) {
matchedRoot = root;
relativePath = normalizedFilePath.slice(normalizedRoot.length);
if (relativePath.startsWith('/')) {
relativePath = relativePath.slice(1);
}
break;
}
}

if (!matchedRoot || !relativePath) {
return null;
}

const lastSlash = relativePath.lastIndexOf('/');
const targetFolder = lastSlash > 0 ? relativePath.slice(0, lastSlash) : '';

return { modelRoot: matchedRoot, targetFolder };
} catch (error) {
console.debug('Failed to resolve download path from current version:', error);
return null;
}
}

async function handleDownloadVersion(button, versionId) {
if (!controller.record) {
return;
@@ -1155,11 +1023,8 @@ export function initVersionsTab({
button.disabled = true;

try {
const pathInfo = await resolveDownloadPathFromCurrentVersion();
const success = await downloadManager.downloadVersionWithDefaults(modelType, modelId, versionId, {
versionName: version.name || `#${version.versionId}`,
modelRoot: pathInfo?.modelRoot || '',
targetFolder: pathInfo?.targetFolder || '',
});

if (success) {
@@ -1195,11 +1060,6 @@ export function initVersionsTab({

const actionButton = event.target.closest('[data-version-action]');
if (actionButton) {
// Check if browser extension has already handled this action
if (actionButton.dataset.lmExtensionHandled === 'true') {
return;
}

const row = actionButton.closest('.model-version-row');
if (!row) {
return;
@@ -1248,11 +1108,6 @@ export function initVersionsTab({
window.open(targetUrl, '_blank', 'noopener,noreferrer');
});

// Listen for extension-triggered refresh requests
container.addEventListener('lm:refreshVersions', async () => {
await refresh();
});

return {
load: options => loadVersions(options),
refresh,

@@ -455,49 +455,34 @@ async function handleImportFiles(files, modelHash, importContainer) {
}

try {
// Upload files one at a time to avoid exceeding server size limits
let lastSuccessResult = null;
let successCount = 0;
const errors = [];

for (const file of validFiles) {
try {
const formData = new FormData();
formData.append('model_hash', modelHash);
formData.append('files', file);

const response = await fetch('/api/lm/import-example-images', {
method: 'POST',
body: formData
});

const result = await response.json();

if (!result.success) {
errors.push(`${file.name}: ${result.error || 'Unknown error'}`);
} else {
lastSuccessResult = result;
successCount++;
}
} catch (err) {
errors.push(`${file.name}: ${err.message}`);
}
// Use FormData to upload files
const formData = new FormData();
formData.append('model_hash', modelHash);

validFiles.forEach(file => {
formData.append('files', file);
});

// Call API to import files
const response = await fetch('/api/lm/import-example-images', {
method: 'POST',
body: formData
});

const result = await response.json();

if (!result.success) {
throw new Error(result.error || 'Failed to import example files');
}

if (successCount === 0) {
throw new Error(errors.join('; '));
}

const result = lastSuccessResult;

// Get updated local files
const updatedFilesResponse = await fetch(`/api/lm/example-image-files?model_hash=${modelHash}`);
const updatedFilesResult = await updatedFilesResponse.json();

if (!updatedFilesResult.success) {
throw new Error(updatedFilesResult.error || 'Failed to get updated file list');
}

// Re-render the showcase content
const showcaseTab = document.getElementById('showcase-tab');
if (showcaseTab) {
@@ -507,22 +492,18 @@ async function handleImportFiles(files, modelHash, importContainer) {
// Combine both arrays for rendering
const allImages = [...regularImages, ...customImages];
showcaseTab.innerHTML = renderShowcaseContent(allImages, updatedFilesResult.files, true);

// Re-initialize showcase functionality
const carousel = showcaseTab.querySelector('.carousel');
if (carousel && !carousel.classList.contains('collapsed')) {
initShowcaseContent(carousel);
}

// Initialize the import UI for the new content
initExampleImport(modelHash, showcaseTab);

if (errors.length > 0) {
showToast('toast.import.imagesPartial', { success: successCount, failed: errors.length }, 'warning');
} else {
showToast('toast.import.imagesImported', {}, 'success');
}

showToast('toast.import.imagesImported', {}, 'success');

// Update VirtualScroller if available
if (state.virtualScroller && result.model_file_path) {
// Create an update object with only the necessary properties
@@ -532,7 +513,7 @@ async function handleImportFiles(files, modelHash, importContainer) {
customImages: customImages
}
};

// Update the item in the virtual scroller
state.virtualScroller.updateSingleItem(result.model_file_path, updateData);
}

@@ -99,7 +99,7 @@ export class AppCore {
initializePageFeatures() {
const pageType = this.getPageType();

if (['loras', 'recipes', 'checkpoints', 'embeddings'].includes(pageType)) {
if (['loras', 'recipes', 'checkpoints', 'embeddings', 'misc'].includes(pageType)) {
this.initializeContextMenus(pageType);
initializeInfiniteScroll(pageType);
}

@@ -4,11 +4,9 @@ import {
removeStorageItem
} from '../utils/storageHelpers.js';
import { translate } from '../utils/i18nHelpers.js';
import { state } from '../state/index.js';
import { getModelApiClient } from '../api/modelApiFactory.js';
import { state } from '../state/index.js'

const COMMUNITY_SUPPORT_BANNER_ID = 'community-support';
const CACHE_HEALTH_BANNER_ID = 'cache-health-warning';
const COMMUNITY_SUPPORT_BANNER_DELAY_MS = 5 * 24 * 60 * 60 * 1000; // 5 days
const COMMUNITY_SUPPORT_FIRST_SEEN_AT_KEY = 'community_support_banner_first_seen_at';
const COMMUNITY_SUPPORT_VERSION_KEY = 'community_support_banner_state_version';
@@ -295,177 +293,6 @@ class BannerService {
location.reload();
}

/**
* Register a cache health warning banner
* @param {Object} healthData - Health data from WebSocket
*/
registerCacheHealthBanner(healthData) {
if (!healthData || healthData.status === 'healthy') {
return;
}

// Remove existing cache health banner if any
this.removeBannerElement(CACHE_HEALTH_BANNER_ID);

const isCorrupted = healthData.status === 'corrupted';
const titleKey = isCorrupted
? 'banners.cacheHealth.corrupted.title'
: 'banners.cacheHealth.degraded.title';
const defaultTitle = isCorrupted
? 'Cache Corruption Detected'
: 'Cache Issues Detected';

const title = translate(titleKey, {}, defaultTitle);

const contentKey = 'banners.cacheHealth.content';
const defaultContent = 'Found {invalid} of {total} cache entries are invalid ({rate}). This may cause missing models or errors. Rebuilding the cache is recommended.';
const content = translate(contentKey, {
invalid: healthData.details?.invalid || 0,
total: healthData.details?.total || 0,
rate: healthData.details?.corruption_rate || '0%'
}, defaultContent);

this.registerBanner(CACHE_HEALTH_BANNER_ID, {
id: CACHE_HEALTH_BANNER_ID,
title: title,
content: content,
pageType: healthData.pageType,
actions: [
{
text: translate('banners.cacheHealth.rebuildCache', {}, 'Rebuild Cache'),
icon: 'fas fa-sync-alt',
action: 'rebuild-cache',
type: 'primary'
},
{
text: translate('banners.cacheHealth.dismiss', {}, 'Dismiss'),
icon: 'fas fa-times',
action: 'dismiss',
type: 'secondary'
}
],
dismissible: true,
priority: 10, // High priority
onRegister: (bannerElement) => {
// Attach click handlers for actions
const rebuildBtn = bannerElement.querySelector('[data-action="rebuild-cache"]');
const dismissBtn = bannerElement.querySelector('[data-action="dismiss"]');

if (rebuildBtn) {
rebuildBtn.addEventListener('click', (e) => {
e.preventDefault();
this.handleRebuildCache(bannerElement, healthData.pageType);
});
}

if (dismissBtn) {
dismissBtn.addEventListener('click', (e) => {
e.preventDefault();
this.dismissBanner(CACHE_HEALTH_BANNER_ID);
});
}
}
});
}

/**
* Handle rebuild cache action from banner
* @param {HTMLElement} bannerElement - The banner element
* @param {string} pageType - The page type (loras, checkpoints, embeddings)
*/
async handleRebuildCache(bannerElement, pageType) {
const currentPageType = pageType || this.getCurrentPageType();

try {
const apiClient = getModelApiClient(currentPageType);

// Update banner to show rebuilding status
const actionsContainer = bannerElement.querySelector('.banner-actions');
if (actionsContainer) {
actionsContainer.innerHTML = `
<span class="banner-loading">
<i class="fas fa-spinner fa-spin"></i>
<span>${translate('banners.cacheHealth.rebuilding', {}, 'Rebuilding cache...')}</span>
</span>
`;
}

await apiClient.refreshModels(true);

// Remove banner on success without marking as dismissed
this.removeBannerElement(CACHE_HEALTH_BANNER_ID);
} catch (error) {
console.error('Cache rebuild failed:', error);

const actionsContainer = bannerElement.querySelector('.banner-actions');
if (actionsContainer) {
actionsContainer.innerHTML = `
<span class="banner-error">
<i class="fas fa-exclamation-triangle"></i>
<span>${translate('banners.cacheHealth.rebuildFailed', {}, 'Rebuild failed. Please try again.')}</span>
</span>
<a href="#" class="banner-action banner-action-primary" data-action="rebuild-cache">
<i class="fas fa-sync-alt"></i>
<span>${translate('banners.cacheHealth.retry', {}, 'Retry')}</span>
</a>
`;

// Re-attach click handler
const retryBtn = actionsContainer.querySelector('[data-action="rebuild-cache"]');
if (retryBtn) {
retryBtn.addEventListener('click', (e) => {
e.preventDefault();
this.handleRebuildCache(bannerElement, pageType);
});
}
}
}
}

/**
* Get the current page type from the URL
* @returns {string} Page type (loras, checkpoints, embeddings, recipes)
*/
getCurrentPageType() {
const path = window.location.pathname;
if (path.includes('/checkpoints')) return 'checkpoints';
if (path.includes('/embeddings')) return 'embeddings';
if (path.includes('/recipes')) return 'recipes';
return 'loras';
}

/**
* Get the rebuild cache endpoint for the given page type
* @param {string} pageType - The page type
* @returns {string} The API endpoint URL
*/
getRebuildEndpoint(pageType) {
const endpoints = {
'loras': '/api/lm/loras/reload?rebuild=true',
'checkpoints': '/api/lm/checkpoints/reload?rebuild=true',
'embeddings': '/api/lm/embeddings/reload?rebuild=true'
};
return endpoints[pageType] || endpoints['loras'];
}

/**
* Remove a banner element from DOM without marking as dismissed
* @param {string} bannerId - Banner ID to remove
*/
removeBannerElement(bannerId) {
const bannerElement = document.querySelector(`[data-banner-id="${bannerId}"]`);
if (bannerElement) {
bannerElement.style.animation = 'banner-slide-up 0.3s ease-in-out forwards';
setTimeout(() => {
bannerElement.remove();
this.updateContainerVisibility();
}, 300);
}

// Also remove from banners map
this.banners.delete(bannerId);
}

prepareCommunitySupportBanner() {
if (this.isBannerDismissed(COMMUNITY_SUPPORT_BANNER_ID)) {
return;