Compare commits


18 Commits

Author SHA1 Message Date
Will Miao
9e1a2e3bb7 chore(pyproject): bump version to 0.9.4 2025-09-20 22:03:29 +08:00
Will Miao
40cbb2155c refactor(baseModelApi): comment out failure message handling in bulk metadata refresh 2025-09-20 21:43:00 +08:00
Will Miao
a8d7070832 feat(civitai): enhance metadata fetching with error handling and cache validation 2025-09-20 21:35:34 +08:00
Will Miao
ab7266f3a4 fix(download_manager): streamline output directory retrieval by using settings directly, fixes #443 2025-09-20 08:12:14 +08:00
Will Miao
3053b13fcb feat(metadata): enhance model processing with CivitAI metadata checks and new fields for archive DB status 2025-09-19 23:22:47 +08:00
Will Miao
f3544b3471 refactor(settings): replace getStorageItem with state.global.settings for default root retrieval 2025-09-19 22:57:05 +08:00
Will Miao
1610048974 refactor(metadata): update model fetching methods to return error messages alongside results 2025-09-19 16:36:34 +08:00
Will Miao
fc6f1bf95b fix(lora_loader): remove unnecessary string stripping from lora names in loaders, fixes #441 2025-09-19 11:17:19 +08:00
Will Miao
67b274c1b2 feat(settings): add 'show_only_sfw' setting to manage content visibility 2025-09-18 21:55:21 +08:00
Will Miao
fb0d6b5641 feat(docs): add comprehensive documentation for LoRA Manager Civitai Extension, including features, installation, privacy, and usage guidelines 2025-09-18 19:33:47 +08:00
Will Miao
d30fbeb286 feat(example_images): add dedicated folder check and update settings handling for example images path, see #431 2025-09-18 19:22:29 +08:00
Will Miao
46e430ebbb fix(utils): update API endpoint for fetching connected trigger words 2025-09-18 15:45:57 +08:00
Will Miao
bc4cd45fcb fix(lora_manager): rename invalid hash folder removal to orphaned folders and update logging 2025-09-18 15:09:32 +08:00
Will Miao
bdc86ddf15 Refactor API endpoints to use '/api/lm/' prefix
- Updated all relevant routes in `stats_routes.py` and `update_routes.py` to include the new '/api/lm/' prefix for consistency.
- Modified API endpoint configurations in `apiConfig.js` to reflect the new structure, ensuring all CRUD and bulk operations are correctly routed.
- Adjusted fetch calls in various components and managers to utilize the updated API paths, including recipe, model, and example image operations.
- Ensured all instances of the old API paths were replaced with the new '/api/lm/' prefix across the codebase for uniformity and to prevent broken links.
2025-09-18 14:50:40 +08:00
Will Miao
ded17c1479 feat(routes): add model versions status endpoint and enhance metadata retrieval 2025-09-17 22:06:59 +08:00
Will Miao
933e2fc01d feat(routes): integrate CivitAI model version retrieval for various model types 2025-09-17 15:47:30 +08:00
Will Miao
1cddeee264 style(autocomplete): remove font styles from dropdown for consistency 2025-09-17 11:04:51 +08:00
Will Miao
183c000080 Refactor ComfyUI: Remove legacy tags widget and related dynamic imports
- Deleted the legacy tags widget implementation from legacy_tags_widget.js.
- Updated trigger_word_toggle.js to directly import the new tags widget.
- Removed unused dynamic import functions and version checks from utils.js.
- Cleaned up lora_loader.js and lora_stacker.js by removing redundant node registration code.
2025-09-16 21:48:20 +08:00
65 changed files with 909 additions and 2681 deletions

View File

@@ -1,182 +0,0 @@
# Event Management Implementation Summary
## What Has Been Implemented
### 1. Enhanced EventManager Class
- **Location**: `static/js/utils/EventManager.js`
- **Features**:
- Priority-based event handling
- Conditional execution based on application state
- Element filtering (target/exclude selectors)
- Mouse button filtering
- Automatic cleanup with cleanup functions
- State tracking for app modes
- Error handling for event handlers
### 2. BulkManager Integration
- **Location**: `static/js/managers/BulkManager.js`
- **Migrated Events**:
- Global keyboard shortcuts (Ctrl+A, Escape, B key)
- Marquee selection events (mousedown, mousemove, mouseup, contextmenu)
- State synchronization with EventManager
- **Benefits**:
- Centralized priority handling
- Conditional execution based on modal state
- Better coordination with other components
### 3. UIHelpers Integration
- **Location**: `static/js/utils/uiHelpers.js`
- **Migrated Events**:
- Mouse position tracking for node selector positioning
- Node selector click events (outside clicks and selection)
- State management for node selector
- **Benefits**:
- Reduced direct DOM listeners
- Coordinated state tracking
- Better cleanup
### 4. ModelCard Integration
- **Location**: `static/js/components/shared/ModelCard.js`
- **Migrated Events**:
- Model card click delegation
- Action button handling (star, globe, copy, etc.)
- Better return value handling for event propagation
- **Benefits**:
- Single event listener for all model cards
- Priority-based execution
- Better event flow control
### 5. Documentation and Initialization
- **EventManagerDocs.md**: Comprehensive documentation
- **eventManagementInit.js**: Initialization and global handlers
- **Features**:
- Global escape key handling
- Modal state synchronization
- Error handling
- Analytics integration points
- Cleanup on page unload
## Application States Tracked
1. **bulkMode**: When bulk selection mode is active
2. **marqueeActive**: When marquee selection is in progress
3. **modalOpen**: When any modal dialog is open
4. **nodeSelectorActive**: When node selector popup is visible
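A minimal sketch of this kind of state tracking, assuming a simple Map-backed store (illustrative only, not the actual EventManager internals):

```javascript
// Hypothetical Map-backed state store; only the state names mirror the real system.
const appState = new Map([
  ['bulkMode', false],
  ['marqueeActive', false],
  ['modalOpen', false],
  ['nodeSelectorActive', false],
]);

function setState(key, value) {
  if (!appState.has(key)) throw new Error(`Unknown state: ${key}`);
  appState.set(key, Boolean(value));
}

function getState(key) {
  return appState.get(key) === true;
}
```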
## Priority Levels Used
- **250+**: Critical system events (escape keys)
- **200+**: High priority system events (modal close)
- **100-199**: Application-level shortcuts (bulk operations)
- **80-99**: UI interactions (marquee selection)
- **60-79**: Component interactions (model cards)
- **10-49**: Tracking and monitoring
- **1-9**: Analytics and low-priority tasks
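The priority ordering above can be illustrated with a small, self-contained dispatch sketch (hypothetical code, not the actual EventManager implementation; the handler names are made up):

```javascript
// Minimal priority-ordered dispatch: higher priority runs first,
// and a handler returning true stops propagation to the rest.
const handlers = [];

function addHandler(source, fn, { priority = 0 } = {}) {
  handlers.push({ source, fn, priority });
  handlers.sort((a, b) => b.priority - a.priority); // descending priority
}

// Returns the source that consumed the event, or null if none did.
function dispatch(event) {
  for (const h of handlers) {
    if (h.fn(event) === true) return h.source;
  }
  return null;
}

addHandler('escape-handling', (e) => e.key === 'Escape', { priority: 250 });
addHandler('bulk-shortcuts', (e) => e.key === 'b', { priority: 100 });
addHandler('analytics', () => false, { priority: 1 });
```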
## Event Flow Examples
### Bulk Mode Toggle (B key)
1. **Priority 100**: BulkManager keyboard handler catches 'b' key
2. Toggles bulk mode state
3. Updates EventManager state
4. Updates UI accordingly
5. Stops propagation (returns true)
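The steps above can be sketched as follows (a simplified, hypothetical handler; the real logic lives in BulkManager and also updates EventManager state and the UI):

```javascript
// Simplified 'b' key toggle. State sync and UI updates are elided.
const state = { bulkMode: false };

function handleBulkShortcut(e) {
  if (e.key !== 'b') return false;  // let other handlers run
  state.bulkMode = !state.bulkMode; // step 2: toggle bulk mode
  // steps 3-4 in the real code: eventManager.setState('bulkMode', ...) plus a UI refresh
  return true;                      // step 5: stop propagation
}
```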
### Marquee Selection
1. **Priority 80**: BulkManager mousedown handler (only in .models-container, excluding cards/buttons)
2. Starts marquee selection
3. **Priority 90**: BulkManager mousemove handler (only when marquee active)
4. Updates selection rectangle
5. **Priority 90**: BulkManager mouseup handler ends selection
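A simplified state machine for this flow might look like the following (illustrative only; the real handlers also honor priorities and element filters):

```javascript
// Marquee lifecycle: mousedown starts, mousemove updates only while active,
// mouseup finalizes and returns the selection rectangle.
const marquee = { active: false, startX: 0, startY: 0, endX: 0, endY: 0 };

function marqueeStart(x, y) {
  Object.assign(marquee, { active: true, startX: x, startY: y, endX: x, endY: y });
}

function marqueeMove(x, y) {
  if (!marquee.active) return; // ignored unless marquee is active
  marquee.endX = x;
  marquee.endY = y;
}

function marqueeEnd() {
  marquee.active = false;
  return {
    left: Math.min(marquee.startX, marquee.endX),
    top: Math.min(marquee.startY, marquee.endY),
    width: Math.abs(marquee.endX - marquee.startX),
    height: Math.abs(marquee.endY - marquee.startY),
  };
}
```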
### Model Card Click
1. **Priority 60**: ModelCard delegation handler checks for specific elements
2. If action button: handles action and stops propagation
3. If general card click: continues to other handlers
4. Bulk selection may also handle the event if in bulk mode
## Remaining Event Listeners (Not Yet Migrated)
### High Priority for Migration
1. **SearchManager keyboard events** - Global search shortcuts
2. **ModalManager escape handling** - Already integrated with initialization
3. **Scroll-based events** - Back to top, virtual scrolling
4. **Resize events** - Panel positioning, responsive layouts
### Medium Priority
1. **Form input events** - Tag inputs, settings forms
2. **Component-specific events** - Recipe modal, showcase view
3. **Sidebar events** - Resize handling, toggle events
### Low Priority (Can Remain As-Is)
1. **VirtualScroller events** - Performance-critical, specialized
2. **Component lifecycle events** - Modal open/close callbacks
3. **One-time setup events** - Theme initialization, etc.
## Benefits Achieved
### Performance Improvements
- **Reduced DOM listeners**: From ~15+ individual listeners to ~5 coordinated handlers
- **Conditional execution**: Handlers only run when conditions are met
- **Priority ordering**: Important events handled first
- **Better memory management**: Automatic cleanup prevents leaks
### Coordination Improvements
- **State synchronization**: All components aware of app state
- **Event flow control**: Proper propagation stopping
- **Conflict resolution**: Priority system prevents conflicts
- **Debugging**: Centralized event handling for easier debugging
### Code Quality Improvements
- **Consistent patterns**: All event handling follows same patterns
- **Better separation of concerns**: Event logic separated from business logic
- **Error handling**: Centralized error catching and reporting
- **Documentation**: Clear patterns for future development
## Next Steps (Recommendations)
### 1. Migrate Search Events
```javascript
// In SearchManager.js
eventManager.addHandler('keydown', 'search-shortcuts', (e) => {
  if ((e.ctrlKey || e.metaKey) && e.key === 'f') {
    this.focusSearchInput();
    return true;
  }
}, { priority: 120 });
```
### 2. Integrate Resize Events
```javascript
// Create ResizeManager
eventManager.addHandler('resize', 'layout-resize', debounce((e) => {
  this.updateLayoutDimensions();
}, 250), { priority: 50 });
```
### 3. Add Debug Mode
```javascript
// In EventManager.js
if (window.DEBUG_EVENTS) {
  console.log(`Event ${eventType} handled by ${source} (priority: ${priority})`);
}
```
### 4. Create Event Analytics
```javascript
// Track event patterns for optimization
eventManager.addHandler('*', 'analytics', (e) => {
  this.trackEventUsage(e.type, performance.now());
}, { priority: 1 });
```
## Testing Recommendations
1. **Verify bulk mode interactions** work correctly
2. **Test marquee selection** in various scenarios
3. **Check modal state synchronization**
4. **Verify node selector** positioning and cleanup
5. **Test keyboard shortcuts** don't conflict
6. **Verify proper cleanup** when components are destroyed
The centralized event management system provides a solid foundation for coordinated, efficient event handling across the application while maintaining good performance and code organization.

View File

@@ -1,301 +0,0 @@
# Centralized Event Management System
This document describes the centralized event management system that coordinates event handling across the ComfyUI LoRA Manager application.
## Overview
The `EventManager` class provides a centralized way to handle DOM events with priority-based execution, conditional execution based on application state, and proper cleanup mechanisms.
## Features
- **Priority-based execution**: Handlers with higher priority run first
- **Conditional execution**: Handlers can be executed based on application state
- **Element filtering**: Handlers can target specific elements or exclude others
- **Automatic cleanup**: Cleanup functions are called when handlers are removed
- **State tracking**: Tracks application states like bulk mode, modal open, etc.
## Basic Usage
### Importing
```javascript
import { eventManager } from './EventManager.js';
```
### Adding Event Handlers
```javascript
eventManager.addHandler('click', 'myComponent', (event) => {
  console.log('Button clicked!');
  return true; // Stop propagation to other handlers
}, {
  priority: 100,
  targetSelector: '.my-button',
  skipWhenModalOpen: true
});
```
### Removing Event Handlers
```javascript
// Remove specific handler
eventManager.removeHandler('click', 'myComponent');
// Remove all handlers for a component
eventManager.removeAllHandlersForSource('myComponent');
```
### Updating Application State
```javascript
// Set state
eventManager.setState('bulkMode', true);
eventManager.setState('modalOpen', true);
// Get state
const isBulkMode = eventManager.getState('bulkMode');
```
## Available States
- `bulkMode`: Whether bulk selection mode is active
- `marqueeActive`: Whether marquee selection is in progress
- `modalOpen`: Whether any modal is currently open
- `nodeSelectorActive`: Whether the node selector popup is active
## Handler Options
### Priority
Higher numbers = higher priority. Handlers run in descending priority order.
```javascript
{
  priority: 100 // High priority
}
```
### Conditional Execution
```javascript
{
  onlyInBulkMode: true, // Only run when bulk mode is active
  onlyWhenMarqueeActive: true, // Only run when marquee selection is active
  skipWhenModalOpen: true, // Skip when any modal is open
  skipWhenNodeSelectorActive: true, // Skip when node selector is active
  onlyWhenNodeSelectorActive: true // Only run when node selector is active
}
```
### Element Filtering
```javascript
{
  targetSelector: '.model-card', // Only handle events on matching elements
  excludeSelector: 'button, input', // Exclude events from these elements
  button: 0 // Only handle specific mouse button (0=left, 1=middle, 2=right)
}
```
### Cleanup Functions
```javascript
{
  cleanup: () => {
    // Custom cleanup logic
    console.log('Handler cleaned up');
  }
}
```
## Integration Examples
### BulkManager Integration
```javascript
class BulkManager {
  registerEventHandlers() {
    // High priority keyboard shortcuts
    eventManager.addHandler('keydown', 'bulkManager-keyboard', (e) => {
      return this.handleGlobalKeyboard(e);
    }, {
      priority: 100,
      skipWhenModalOpen: true
    });

    // Marquee selection
    eventManager.addHandler('mousedown', 'bulkManager-marquee-start', (e) => {
      return this.handleMarqueeStart(e);
    }, {
      priority: 80,
      skipWhenModalOpen: true,
      targetSelector: '.models-container',
      excludeSelector: '.model-card, button, input',
      button: 0
    });
  }

  cleanup() {
    eventManager.removeAllHandlersForSource('bulkManager-keyboard');
    eventManager.removeAllHandlersForSource('bulkManager-marquee-start');
  }
}
```
### Modal Integration
```javascript
class ModalManager {
  showModal(modalId) {
    // Update state when modal opens
    eventManager.setState('modalOpen', true);
    this.displayModal(modalId);
  }

  closeModal(modalId) {
    // Update state when modal closes
    eventManager.setState('modalOpen', false);
    this.hideModal(modalId);
  }
}
```
### Component Event Delegation
```javascript
export function setupComponentEvents(component) {
  eventManager.addHandler('click', 'myComponent-actions', (event) => {
    const button = event.target.closest('.action-button');
    if (!button) return false;
    // `this` is not bound in a standalone function, so the owning component is passed in
    component.handleAction(button.dataset.action);
    return true; // Stop propagation
  }, {
    priority: 60,
    targetSelector: '.component-container'
  });
}
```
## Best Practices
### 1. Use Descriptive Source Names
Use the format `componentName-purposeDescription`:
```javascript
// Good
'bulkManager-marqueeSelection'
'nodeSelector-clickOutside'
'modelCard-delegation'
// Avoid
'bulk'
'click'
'handler1'
```
### 2. Set Appropriate Priorities
- 200+: Critical system events (escape keys, critical modals)
- 100-199: High priority application events (keyboard shortcuts)
- 50-99: Normal UI interactions (buttons, cards)
- 1-49: Low priority events (tracking, analytics)
### 3. Use Conditional Execution
Instead of checking state inside handlers, use options:
```javascript
// Good
eventManager.addHandler('click', 'bulk-action', handler, {
onlyInBulkMode: true
});
// Avoid
eventManager.addHandler('click', 'bulk-action', (e) => {
if (!state.bulkMode) return;
// handler logic
});
```
### 4. Clean Up Properly
Always clean up handlers when components are destroyed:
```javascript
class MyComponent {
  constructor() {
    this.registerEvents();
  }

  destroy() {
    eventManager.removeAllHandlersForSource('myComponent');
  }
}
```
### 5. Return Values Matter
- Return `true` to stop event propagation to other handlers
- Return `false` or `undefined` to continue with other handlers
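A tiny, hypothetical dispatch loop makes this contract concrete:

```javascript
// Runs handlers in order; a handler returning true stops the chain.
// Returns how many handlers actually ran.
function runHandlers(handlers, event) {
  let ran = 0;
  for (const fn of handlers) {
    ran += 1;
    if (fn(event) === true) break; // true stops propagation
  }
  return ran;
}

const chain = [
  () => false,     // continues
  () => undefined, // continues
  () => true,      // stops here
  () => false,     // never runs
];
```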
## Migration Guide
### From Direct Event Listeners
**Before:**
```javascript
document.addEventListener('click', (e) => {
  if (e.target.closest('.my-button')) {
    this.handleClick(e);
  }
});
```
**After:**
```javascript
eventManager.addHandler('click', 'myComponent-button', (e) => {
  this.handleClick(e);
}, {
  targetSelector: '.my-button'
});
```
### From Event Delegation
**Before:**
```javascript
container.addEventListener('click', (e) => {
  const card = e.target.closest('.model-card');
  if (!card) return;
  if (e.target.closest('.action-btn')) {
    this.handleAction(e);
  }
});
```
**After:**
```javascript
eventManager.addHandler('click', 'container-actions', (e) => {
  const card = e.target.closest('.model-card');
  if (!card) return false;
  if (e.target.closest('.action-btn')) {
    this.handleAction(e);
    return true;
  }
}, {
  targetSelector: '.container'
});
```
## Performance Benefits
1. **Reduced DOM listeners**: Single listener per event type instead of multiple
2. **Conditional execution**: Handlers only run when conditions are met
3. **Priority ordering**: Important handlers run first, avoiding unnecessary work
4. **Automatic cleanup**: Prevents memory leaks from orphaned listeners
5. **Centralized debugging**: All event handling flows through one system
## Debugging
Enable debug logging to trace event handling:
```javascript
// Add to EventManager.js for debugging
console.log(`Handling ${eventType} event with ${handlers.length} handlers`);
```
The event manager provides a foundation for coordinated, efficient event handling across the entire application.

docs/LM-Extension-Wiki.md Normal file
View File

@@ -0,0 +1,176 @@
## Overview
The **LoRA Manager Civitai Extension** is a browser extension designed to work seamlessly with [LoRA Manager](https://github.com/willmiao/ComfyUI-Lora-Manager) to significantly enhance your browsing experience on [Civitai](https://civitai.com). With this extension, you can:
✅ Instantly see which models are already present in your local library
✅ Download new models with a single click
✅ Manage downloads efficiently with queue and parallel download support
✅ Keep your downloaded models automatically organized according to your custom settings
![Civitai Models page](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/civitai-models-page.png)
---
## Why Are All Features for Supporters Only?
I love building tools for the Stable Diffusion and ComfyUI communities, and LoRA Manager is a passion project that I've poured countless hours into. When I created this companion extension, my hope was to offer its core features for free, as a thank-you to all of you.
Unfortunately, I've reached a point where I need to be realistic. The level of support from the free model has been far lower than what's needed to justify the continuous development and maintenance for both projects. It was a difficult decision, but I've chosen to make the extension's features exclusive to supporters.
This change is crucial for me to be able to continue dedicating my time to improving the free and open-source LoRA Manager, which I'm committed to keeping available for everyone.
Your support does more than just unlock a few features—it allows me to keep innovating and ensures the core LoRA Manager project thrives. I'm incredibly grateful for your understanding and any support you can offer. ❤️
(_For those who previously supported me on Ko-fi with a one-time donation, I'll be sending out license keys individually as a thank-you._)
---
## Installation
### Supported Browsers & Installation Methods
| Browser | Installation Method |
|--------------------|-------------------------------------------------------------------------------------|
| **Google Chrome** | [Chrome Web Store link](https://chromewebstore.google.com/detail/capigligggeijgmocnaflanlbghnamgm?utm_source=item-share-cb) |
| **Microsoft Edge** | Install via Chrome Web Store (compatible) |
| **Brave Browser** | Install via Chrome Web Store (compatible) |
| **Opera** | Install via Chrome Web Store (compatible) |
| **Firefox** | <div id="firefox-install" class="install-ok"><a href="https://github.com/willmiao/lm-civitai-extension-firefox/releases/latest/download/extension.xpi">📦 Install Firefox Extension (reviewed and verified by Mozilla)</a></div> |
For Chromium-based browsers other than Chrome (e.g., Microsoft Edge), you can typically install the extension from the Chrome Web Store as follows: open the extension's Chrome Web Store page, click 'Get extension', click 'Allow' when prompted to enable installations from other stores, and finally click 'Add extension' to complete the installation.
---
## Privacy & Security
I understand concerns around browser extensions and privacy, and I want to be fully transparent about how the **LM Civitai Extension** works:
- **Reviewed and Verified**
This extension has been **manually reviewed and approved by the Chrome Web Store**. The Firefox version uses the **exact same code** (only the packaging format differs) and has passed **Mozilla's Add-on review**.
- **Minimal Network Access**
The only external server this extension connects to is:
**`https://willmiao.shop`** — used solely for **license validation**.
It does **not collect, transmit, or store any personal or usage data**.
No browsing history, no user IDs, no analytics, no hidden trackers.
- **Local-Only Model Detection**
Model detection and LoRA Manager communication all happen **locally** within your browser, directly interacting with your local LoRA Manager backend.
I value your trust and am committed to keeping your local setup private and secure. If you have any questions, feel free to reach out!
---
## How to Use
After installing the extension, you'll automatically receive a **7-day trial** to explore all features.
When the extension is correctly installed and your license is valid:
- Open **Civitai**, and you'll see visual indicators added by the extension on model cards, showing:
- ✅ Models already present in your local library
- ⬇️ A download button for models not in your library
Clicking the download button adds the corresponding model version to the download queue. Up to **5 models can be downloaded simultaneously**.
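Conceptually, the download queue behaves like a bounded worker pool; the sketch below is only an illustration of the "up to 5 parallel" behavior, not the extension's actual implementation:

```javascript
// Hypothetical bounded download queue: at most maxParallel downloads run at once,
// and finishing one pulls the next pending item in.
class DownloadQueue {
  constructor(maxParallel = 5) {
    this.maxParallel = maxParallel;
    this.pending = [];       // model versions waiting to start
    this.active = new Set(); // model versions currently downloading
  }

  enqueue(modelVersionId) {
    this.pending.push(modelVersionId);
    this.#fill();
  }

  complete(modelVersionId) {
    this.active.delete(modelVersionId);
    this.#fill();
  }

  #fill() {
    while (this.active.size < this.maxParallel && this.pending.length > 0) {
      this.active.add(this.pending.shift());
    }
  }
}
```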
### Visual Indicators Appear On:
- **Home Page** — Featured models
- **Models Page**
- **Creator Profiles** — If the creator has set their models to be visible
- **Recommended Resources** — On individual model pages
### Version Buttons on Model Pages
On a specific model page, visual indicators also appear on version buttons, showing which versions are already in your local library.
After switching to a specific version by clicking a version button, clicking the download button opens a dropdown with two options:
- Download via **LoRA Manager**
- Download via **Original Download** (browser download)
You can check **Remember my choice** to set your preferred default, and change it anytime in the extension's settings.
![Civitai Model Page](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/civitai-model-page.png)
### Resources on Image Pages (Update 2025-08-05)
The extension now shows in-library indicators for the resources listed on image pages. Import image as recipe is coming soon!
![Civitai Image Page](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/civitai-image-page.jpg)
---
## Model Download Location & LoRA Manager Settings
To use the **one-click download function**, you must first set:
- Your **Default LoRAs Root**
- Your **Default Checkpoints Root**
These are set within LoRA Manager's settings.
When everything is configured, downloaded model files will be placed in:
`<Default_Models_Root>/<Base_Model_of_the_Model>/<First_Tag_of_the_Model>`
### Update: Default Path Customization (2025-07-21)
A new setting to customize the default download path has been added in the nightly version. You can now personalize where models are saved when downloading via the LM Civitai Extension.
![Default Path Customization](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/wiki-images/default-path-customization.png)
The previous YAML path-mapping file will be deprecated; settings will now be unified in `settings.json` to simplify configuration.
---
## Backend Port Configuration
If your **ComfyUI** or **LoRA Manager** backend is running on a port **other than the default 8188**, you must configure the backend port in the extension's settings.
After correctly setting and saving the port, you'll see in the extension's header area:
- A **Healthy** status with the tooltip: `Connected to LoRA Manager on port xxxx`
---
## Advanced Usage
### Connecting to a Remote LoRA Manager
If your LoRA Manager is running on another computer, you can still connect from your browser using port forwarding.
> **Why can't you set a remote IP directly?**
>
> For privacy and security, the extension only requests access to `http://127.0.0.1/*`. Supporting remote IPs would require much broader permissions, which may be rejected by browser stores and could raise user concerns.
**Solution: Port Forwarding with `socat`**
On your browser computer, run:
`socat TCP-LISTEN:8188,bind=127.0.0.1,fork TCP:REMOTE.IP.ADDRESS.HERE:8188`
- Replace `REMOTE.IP.ADDRESS.HERE` with the IP of the machine running LoRA Manager.
- Adjust the port if needed.
This lets the extension connect to `127.0.0.1:8188` as usual, with traffic forwarded to your remote server.
_Thanks to user **Temikus** for sharing this solution!_
---
## Roadmap
The extension will evolve alongside **LoRA Manager** improvements. Planned features include:
- [x] Support for **additional model types** (e.g., embeddings)
- [ ] One-click **Recipe Import**
- [x] Display of in-library status for all resources in the **Resources Used** section of the image page
- [x] One-click **Auto-organize Models**
**Stay tuned — and thank you for your support!**
---

View File

@@ -368,7 +368,7 @@ class LoraManager:
total_folders_checked = 0
empty_folders_removed = 0
-invalid_hash_folders_removed = 0
+orphaned_folders_removed = 0
# Scan the example images directory
try:
@@ -392,9 +392,8 @@ class LoraManager:
# Check if folder name is a valid SHA256 hash (64 hex characters)
if len(folder_name) != 64 or not all(c in '0123456789abcdefABCDEF' for c in folder_name):
-logger.debug(f"Removing invalid hash folder: {folder_name}")
-await cls._remove_folder_safely(folder_path)
-invalid_hash_folders_removed += 1
+# Skip non-hash folders to avoid deleting other content
+logger.debug(f"Skipping non-hash folder: {folder_name}")
continue
# Check if hash exists in any of the scanners
@@ -407,7 +406,7 @@ class LoraManager:
if not hash_exists:
logger.debug(f"Removing example images folder for deleted model: {folder_name}")
await cls._remove_folder_safely(folder_path)
-invalid_hash_folders_removed += 1
+orphaned_folders_removed += 1
continue
except Exception as e:
@@ -421,11 +420,11 @@ class LoraManager:
return
# Log final cleanup report
-total_removed = empty_folders_removed + invalid_hash_folders_removed
+total_removed = empty_folders_removed + orphaned_folders_removed
if total_removed > 0:
logger.info(f"Example images cleanup completed: checked {total_folders_checked} folders, "
-f"removed {empty_folders_removed} empty folders and {invalid_hash_folders_removed} "
-f"folders for deleted/invalid models (total: {total_removed} removed)")
+f"removed {empty_folders_removed} empty folders and {orphaned_folders_removed} "
+f"folders for deleted models (total: {total_removed} removed)")
else:
logger.debug(f"Example images cleanup completed: checked {total_folders_checked} folders, "
f"no cleanup needed")
@@ -470,11 +469,5 @@ class LoraManager:
try:
logger.info("LoRA Manager: Cleaning up services")
-# Close CivitaiClient gracefully
-civitai_client = await ServiceRegistry.get_service("civitai_client")
-if civitai_client:
-await civitai_client.close()
-logger.info("Closed CivitaiClient connection")
except Exception as e:
logger.error(f"Error during cleanup: {e}", exc_info=True)

View File

@@ -115,7 +115,7 @@ class LoraManagerLoader:
formatted_loras = []
for item in loaded_loras:
parts = item.split(":")
-lora_name = parts[0].strip()
+lora_name = parts[0]
strength_parts = parts[1].strip().split(",")
if len(strength_parts) > 1:
@@ -165,7 +165,7 @@ class LoraManagerTextLoader:
loras = []
for match in matches:
-lora_name = match[0].strip()
+lora_name = match[0]
model_strength = float(match[1])
clip_strength = float(match[2]) if match[2] else model_strength

View File

@@ -55,7 +55,7 @@ class RecipeMetadataParser(ABC):
# Unpack the tuple to get the actual data
civitai_info, error_msg = civitai_info_tuple if isinstance(civitai_info_tuple, tuple) else (civitai_info_tuple, None)
-if not civitai_info or civitai_info.get("error") == "Model not found":
+if not civitai_info or error_msg == "Model not found":
# Model not found or deleted
lora_entry['isDeleted'] = True
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'

View File

@@ -91,7 +91,7 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
result["base_model"] = metadata["baseModel"]
elif "Model hash" in metadata and metadata_provider:
model_hash = metadata["Model hash"]
-model_info = await metadata_provider.get_model_by_hash(model_hash)
+model_info, error = await metadata_provider.get_model_by_hash(model_hash)
if model_info:
result["base_model"] = model_info.get("baseModel", "")
elif "Model" in metadata and isinstance(metadata.get("resources"), list):
@@ -100,7 +100,7 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
if resource.get("type") == "model" and resource.get("name") == metadata.get("Model"):
# This is likely the checkpoint model
if metadata_provider and resource.get("hash"):
-model_info = await metadata_provider.get_model_by_hash(resource.get("hash"))
+model_info, error = await metadata_provider.get_model_by_hash(resource.get("hash"))
if model_info:
result["base_model"] = model_info.get("baseModel", "")
@@ -201,11 +201,7 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
if version_id and metadata_provider:
try:
# Use get_model_version_info instead of get_model_version
-civitai_info, error = await metadata_provider.get_model_version_info(version_id)
-if error:
-logger.warning(f"Error getting model version info: {error}")
-continue
+civitai_info = await metadata_provider.get_model_version_info(version_id)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
@@ -267,26 +263,23 @@ class CivitaiApiMetadataParser(RecipeMetadataParser):
if version_id and metadata_provider:
try:
# Use get_model_version_info with the version ID
-civitai_info, error = await metadata_provider.get_model_version_info(version_id)
-if error:
-    logger.warning(f"Error getting model version info: {error}")
-else:
-    populated_entry = await self.populate_lora_from_civitai(
-        lora_entry,
-        civitai_info,
-        recipe_scanner,
-        base_model_counts
-    )
-    if populated_entry is None:
-        continue  # Skip invalid LoRA types
-    lora_entry = populated_entry
-    # Track this LoRA for deduplication
-    if version_id:
-        added_loras[version_id] = len(result["loras"])
+civitai_info = await metadata_provider.get_model_version_info(version_id)
+populated_entry = await self.populate_lora_from_civitai(
+    lora_entry,
+    civitai_info,
+    recipe_scanner,
+    base_model_counts
+)
+if populated_entry is None:
+    continue  # Skip invalid LoRA types
+lora_entry = populated_entry
+# Track this LoRA for deduplication
+if version_id:
+    added_loras[version_id] = len(result["loras"])
except Exception as e:
logger.error(f"Error fetching Civitai info for model ID {version_id}: {e}")

View File

@@ -14,6 +14,7 @@ from ..services.settings_manager import settings
from ..services.server_i18n import server_i18n
from ..services.model_file_service import ModelFileService, ModelMoveService
from ..services.websocket_progress_callback import WebSocketProgressCallback
+from ..services.metadata_service import get_default_metadata_provider
from ..config import config
logger = logging.getLogger(__name__)
@@ -47,50 +48,53 @@ class BaseModelRoutes(ABC):
prefix: URL prefix (e.g., 'loras', 'checkpoints')
"""
# Common model management routes
-app.router.add_get(f'/api/{prefix}/list', self.get_models)
-app.router.add_post(f'/api/{prefix}/delete', self.delete_model)
-app.router.add_post(f'/api/{prefix}/exclude', self.exclude_model)
-app.router.add_post(f'/api/{prefix}/fetch-civitai', self.fetch_civitai)
-app.router.add_post(f'/api/{prefix}/fetch-all-civitai', self.fetch_all_civitai)
-app.router.add_post(f'/api/{prefix}/relink-civitai', self.relink_civitai)
-app.router.add_post(f'/api/{prefix}/replace-preview', self.replace_preview)
-app.router.add_post(f'/api/{prefix}/save-metadata', self.save_metadata)
-app.router.add_post(f'/api/{prefix}/add-tags', self.add_tags)
-app.router.add_post(f'/api/{prefix}/rename', self.rename_model)
-app.router.add_post(f'/api/{prefix}/bulk-delete', self.bulk_delete_models)
-app.router.add_post(f'/api/{prefix}/verify-duplicates', self.verify_duplicates)
-app.router.add_post(f'/api/{prefix}/move_model', self.move_model)
-app.router.add_post(f'/api/{prefix}/move_models_bulk', self.move_models_bulk)
-app.router.add_get(f'/api/{prefix}/auto-organize', self.auto_organize_models)
-app.router.add_post(f'/api/{prefix}/auto-organize', self.auto_organize_models)
-app.router.add_get(f'/api/{prefix}/auto-organize-progress', self.get_auto_organize_progress)
app.router.add_get(f'/api/lm/{prefix}/list', self.get_models)
app.router.add_post(f'/api/lm/{prefix}/delete', self.delete_model)
app.router.add_post(f'/api/lm/{prefix}/exclude', self.exclude_model)
app.router.add_post(f'/api/lm/{prefix}/fetch-civitai', self.fetch_civitai)
app.router.add_post(f'/api/lm/{prefix}/fetch-all-civitai', self.fetch_all_civitai)
app.router.add_post(f'/api/lm/{prefix}/relink-civitai', self.relink_civitai)
app.router.add_post(f'/api/lm/{prefix}/replace-preview', self.replace_preview)
app.router.add_post(f'/api/lm/{prefix}/save-metadata', self.save_metadata)
app.router.add_post(f'/api/lm/{prefix}/add-tags', self.add_tags)
app.router.add_post(f'/api/lm/{prefix}/rename', self.rename_model)
app.router.add_post(f'/api/lm/{prefix}/bulk-delete', self.bulk_delete_models)
app.router.add_post(f'/api/lm/{prefix}/verify-duplicates', self.verify_duplicates)
app.router.add_post(f'/api/lm/{prefix}/move_model', self.move_model)
app.router.add_post(f'/api/lm/{prefix}/move_models_bulk', self.move_models_bulk)
app.router.add_get(f'/api/lm/{prefix}/auto-organize', self.auto_organize_models)
app.router.add_post(f'/api/lm/{prefix}/auto-organize', self.auto_organize_models)
app.router.add_get(f'/api/lm/{prefix}/auto-organize-progress', self.get_auto_organize_progress)
# Common query routes
app.router.add_get(f'/api/{prefix}/top-tags', self.get_top_tags)
app.router.add_get(f'/api/{prefix}/base-models', self.get_base_models)
app.router.add_get(f'/api/{prefix}/scan', self.scan_models)
app.router.add_get(f'/api/{prefix}/roots', self.get_model_roots)
app.router.add_get(f'/api/{prefix}/folders', self.get_folders)
app.router.add_get(f'/api/{prefix}/folder-tree', self.get_folder_tree)
app.router.add_get(f'/api/{prefix}/unified-folder-tree', self.get_unified_folder_tree)
app.router.add_get(f'/api/{prefix}/find-duplicates', self.find_duplicate_models)
app.router.add_get(f'/api/{prefix}/find-filename-conflicts', self.find_filename_conflicts)
app.router.add_get(f'/api/{prefix}/get-notes', self.get_model_notes)
app.router.add_get(f'/api/{prefix}/preview-url', self.get_model_preview_url)
app.router.add_get(f'/api/{prefix}/civitai-url', self.get_model_civitai_url)
app.router.add_get(f'/api/{prefix}/metadata', self.get_model_metadata)
app.router.add_get(f'/api/{prefix}/model-description', self.get_model_description)
app.router.add_get(f'/api/lm/{prefix}/top-tags', self.get_top_tags)
app.router.add_get(f'/api/lm/{prefix}/base-models', self.get_base_models)
app.router.add_get(f'/api/lm/{prefix}/scan', self.scan_models)
app.router.add_get(f'/api/lm/{prefix}/roots', self.get_model_roots)
app.router.add_get(f'/api/lm/{prefix}/folders', self.get_folders)
app.router.add_get(f'/api/lm/{prefix}/folder-tree', self.get_folder_tree)
app.router.add_get(f'/api/lm/{prefix}/unified-folder-tree', self.get_unified_folder_tree)
app.router.add_get(f'/api/lm/{prefix}/find-duplicates', self.find_duplicate_models)
app.router.add_get(f'/api/lm/{prefix}/find-filename-conflicts', self.find_filename_conflicts)
app.router.add_get(f'/api/lm/{prefix}/get-notes', self.get_model_notes)
app.router.add_get(f'/api/lm/{prefix}/preview-url', self.get_model_preview_url)
app.router.add_get(f'/api/lm/{prefix}/civitai-url', self.get_model_civitai_url)
app.router.add_get(f'/api/lm/{prefix}/metadata', self.get_model_metadata)
app.router.add_get(f'/api/lm/{prefix}/model-description', self.get_model_description)
# Autocomplete route
app.router.add_get(f'/api/{prefix}/relative-paths', self.get_relative_paths)
app.router.add_get(f'/api/lm/{prefix}/relative-paths', self.get_relative_paths)
# Common CivitAI integration
app.router.add_get(f'/api/lm/{prefix}/civitai/versions/{{model_id}}', self.get_civitai_versions)
app.router.add_get(f'/api/lm/{prefix}/civitai/model/version/{{modelVersionId}}', self.get_civitai_model_by_version)
app.router.add_get(f'/api/lm/{prefix}/civitai/model/hash/{{hash}}', self.get_civitai_model_by_hash)
# Common Download management
app.router.add_post(f'/api/download-model', self.download_model)
app.router.add_get(f'/api/download-model-get', self.download_model_get)
app.router.add_get(f'/api/cancel-download-get', self.cancel_download_get)
app.router.add_get(f'/api/download-progress/{{download_id}}', self.get_download_progress)
# app.router.add_get(f'/api/civitai/versions/{{model_id}}', self.get_civitai_versions)
app.router.add_post(f'/api/lm/download-model', self.download_model)
app.router.add_get(f'/api/lm/download-model-get', self.download_model_get)
app.router.add_get(f'/api/lm/cancel-download-get', self.cancel_download_get)
app.router.add_get(f'/api/lm/download-progress/{{download_id}}', self.get_download_progress)
# Add generic page route
app.router.add_get(f'/{prefix}', self.handle_models_page)
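All of the routes above moved under the `/api/lm/` prefix in this change. Clients pinned to the legacy paths can be migrated mechanically; a minimal sketch, assuming only that legacy endpoints start with `/api/` (the helper name is ours, not part of the codebase):

```python
def to_lm_path(path: str) -> str:
    """Rewrite a legacy '/api/...' endpoint to the new '/api/lm/...' prefix.

    Paths already under '/api/lm/' (and non-API paths) pass through unchanged.
    """
    if path.startswith("/api/lm/") or not path.startswith("/api/"):
        return path
    return "/api/lm/" + path[len("/api/"):]
```

For example, `to_lm_path("/api/loras/list")` yields `"/api/lm/loras/list"`, matching the route registrations above.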
@@ -251,20 +255,45 @@ class BaseModelRoutes(ABC):
return await ModelRouteUtils.handle_exclude_model(request, self.service.scanner)
async def fetch_civitai(self, request: web.Request) -> web.Response:
"""Handle CivitAI metadata fetch request"""
response = await ModelRouteUtils.handle_fetch_civitai(request, self.service.scanner)
# If successful, format the metadata before returning
if response.status == 200:
data = json.loads(response.body.decode('utf-8'))
if data.get("success") and data.get("metadata"):
formatted_metadata = await self.service.format_response(data["metadata"])
return web.json_response({
"success": True,
"metadata": formatted_metadata
})
return response
"""Handle CivitAI metadata fetch request - force refresh model metadata"""
try:
data = await request.json()
file_path = data.get('file_path')
if not file_path:
return web.json_response({"success": False, "error": "File path is required"}, status=400)
# Get model data from cache
cache = await self.service.scanner.get_cached_data()
model_data = next((item for item in cache.raw_data if item['file_path'] == file_path), None)
if not model_data:
return web.json_response({"success": False, "error": "Model not found in cache"}, status=404)
# Check if model has SHA256 hash
if not model_data.get('sha256'):
return web.json_response({"success": False, "error": "No SHA256 hash found"}, status=400)
# Use fetch_and_update_model to get and update metadata
success, error = await ModelRouteUtils.fetch_and_update_model(
sha256=model_data['sha256'],
file_path=file_path,
model_data=model_data,
update_cache_func=self.service.scanner.update_single_model_cache
)
if not success:
return web.json_response({"success": False, "error": error})
# Format the updated metadata for response
formatted_metadata = await self.service.format_response(model_data)
return web.json_response({
"success": True,
"metadata": formatted_metadata
})
except Exception as e:
logger.error(f"Error fetching from CivitAI: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
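The rewritten handler resolves the model with a linear `next()` scan over the cached entries. That lookup in isolation, as a sketch (the function name is illustrative, not from the repo):

```python
from typing import Optional

def find_model_by_path(raw_data: list, file_path: str) -> Optional[dict]:
    """Return the first cached entry whose 'file_path' matches, else None.

    Mirrors the next(...) scan used by fetch_civitai above; O(n) per lookup.
    """
    return next(
        (item for item in raw_data if item.get("file_path") == file_path),
        None,
    )
```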
async def relink_civitai(self, request: web.Request) -> web.Response:
"""Handle CivitAI metadata re-linking request"""
@@ -620,23 +649,18 @@ class BaseModelRoutes(ABC):
success = 0
needs_resort = False
# Prepare models to process, only those without CivitAI data or missing tags, description, or creator
# Prepare models to process, only those without CivitAI data
enable_metadata_archive_db = settings.get('enable_metadata_archive_db', False)
# Filter models that need CivitAI metadata update
to_process = [
model for model in cache.raw_data
if (
model.get('sha256')
and (
not model.get('civitai')
or not model['civitai'].get('id')
# or not model.get('tags') # Skipping tag cause it could be empty legitimately
# or not model.get('modelDescription')
# or not (model.get('civitai') and model['civitai'].get('creator'))
)
and (
(enable_metadata_archive_db)
or (not enable_metadata_archive_db and model.get('from_civitai') is True)
)
if model.get('sha256')
and (
not model.get('civitai') or not model['civitai'].get('id')
)
and (
(enable_metadata_archive_db and not model.get('db_checked', False))
or (not enable_metadata_archive_db and model.get('from_civitai') is True)
)
]
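The filter above can be read as a predicate: a model qualifies for a bulk refresh when it has a hash, lacks a CivitAI id, and passes the archive-DB gate. A hedged restatement of that logic (the function name is ours):

```python
def needs_civitai_refresh(model: dict, enable_metadata_archive_db: bool) -> bool:
    """Mirror of the bulk-refresh filter: hash present, CivitAI id missing,
    and either the archive DB is enabled and the model not yet checked,
    or the model is known to originate from CivitAI."""
    if not model.get("sha256"):
        return False
    civitai = model.get("civitai")
    if civitai and civitai.get("id"):
        return False
    if enable_metadata_archive_db:
        return not model.get("db_checked", False)
    return model.get("from_civitai") is True
```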
total_to_process = len(to_process)
@@ -653,12 +677,13 @@ class BaseModelRoutes(ABC):
for model in to_process:
try:
original_name = model.get('model_name')
if await ModelRouteUtils.fetch_and_update_model(
result, error = await ModelRouteUtils.fetch_and_update_model(
sha256=model['sha256'],
file_path=model['file_path'],
model_data=model,
update_cache_func=self.service.scanner.update_single_model_cache
):
)
if result:
success += 1
if original_name != model.get('model_name'):
needs_resort = True
@@ -704,10 +729,107 @@ class BaseModelRoutes(ABC):
async def get_civitai_versions(self, request: web.Request) -> web.Response:
"""Get available versions for a Civitai model with local availability info"""
# This will be implemented by subclasses as they need CivitAI client access
return web.json_response({
"error": "Not implemented in base class"
}, status=501)
try:
model_id = request.match_info['model_id']
metadata_provider = await get_default_metadata_provider()
response = await metadata_provider.get_model_versions(model_id)
if not response or not response.get('modelVersions'):
return web.Response(status=404, text="Model not found")
versions = response.get('modelVersions', [])
model_type = response.get('type', '')
# Check model type - allow subclasses to override validation
if not self._validate_civitai_model_type(model_type):
return web.json_response({
'error': f"Model type mismatch. Expected {self._get_expected_model_types()}, got {model_type}"
}, status=400)
# Check local availability for each version
for version in versions:
# Find the model file (type="Model" and primary=true) in the files list
model_file = self._find_model_file(version.get('files', []))
if model_file:
sha256 = model_file.get('hashes', {}).get('SHA256')
if sha256:
# Set existsLocally and localPath at the version level
version['existsLocally'] = self.service.has_hash(sha256)
if version['existsLocally']:
version['localPath'] = self.service.get_path_by_hash(sha256)
# Also set the model file size at the version level for easier access
version['modelSizeKB'] = model_file.get('sizeKB')
else:
# No model file found in this version
version['existsLocally'] = False
return web.json_response(versions)
except Exception as e:
logger.error(f"Error fetching {self.model_type} model versions: {e}")
return web.Response(status=500, text=str(e))
async def get_civitai_model_by_version(self, request: web.Request) -> web.Response:
"""Get CivitAI model details by model version ID"""
try:
model_version_id = request.match_info.get('modelVersionId')
# Get model details from metadata provider
metadata_provider = await get_default_metadata_provider()
model, error_msg = await metadata_provider.get_model_version_info(model_version_id)
if not model:
# Log warning for failed model retrieval
logger.warning(f"Failed to fetch model version {model_version_id}: {error_msg}")
# Determine status code based on error message
status_code = 404 if error_msg and "not found" in error_msg.lower() else 500
return web.json_response({
"success": False,
"error": error_msg or "Failed to fetch model information"
}, status=status_code)
return web.json_response(model)
except Exception as e:
logger.error(f"Error fetching model details: {e}")
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def get_civitai_model_by_hash(self, request: web.Request) -> web.Response:
"""Get CivitAI model details by hash"""
try:
hash = request.match_info.get('hash')
metadata_provider = await get_default_metadata_provider()
model, error = await metadata_provider.get_model_by_hash(hash)
if error:
logger.warning(f"Error getting model by hash: {error}")
return web.json_response({
"success": False,
"error": error
}, status=404)
return web.json_response(model)
except Exception as e:
logger.error(f"Error fetching model details by hash: {e}")
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type - to be overridden by subclasses"""
return True # Default: accept all types
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages - to be overridden by subclasses"""
return "any model type"
def _find_model_file(self, files: list) -> dict:
"""Find the appropriate model file from the files list - can be overridden by subclasses"""
# Find the primary model file (type="Model" and primary=true) in the files list
return next((file for file in files if file.get('type') == 'Model' and file.get('primary') == True), None)
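Note that this base `_find_model_file` matches primary files only, while the removed checkpoint and embedding handlers below fell back to any `type == 'Model'` file when no primary existed. A subclass wanting the old behavior could override with something like this sketch (not code from the repo):

```python
from typing import Optional

def find_model_file_with_fallback(files: list) -> Optional[dict]:
    """Prefer the primary 'Model' file; otherwise take any 'Model' file.

    Reproduces the fallback the removed per-type handlers had, which the
    consolidated base _find_model_file no longer applies.
    """
    primary = next(
        (f for f in files if f.get("type") == "Model" and f.get("primary")),
        None,
    )
    return primary or next((f for f in files if f.get("type") == "Model"), None)
```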
# Common model move handlers
async def move_model(self, request: web.Request) -> web.Response:


@@ -36,15 +36,20 @@ class CheckpointRoutes(BaseModelRoutes):
def setup_specific_routes(self, app: web.Application, prefix: str):
"""Setup Checkpoint-specific routes"""
# Checkpoint-specific CivitAI integration
app.router.add_get(f'/api/{prefix}/civitai/versions/{{model_id}}', self.get_civitai_versions_checkpoint)
# Checkpoint info by name
app.router.add_get(f'/api/{prefix}/info/{{name}}', self.get_checkpoint_info)
app.router.add_get(f'/api/lm/{prefix}/info/{{name}}', self.get_checkpoint_info)
# Checkpoint roots and Unet roots
app.router.add_get(f'/api/{prefix}/checkpoints_roots', self.get_checkpoints_roots)
app.router.add_get(f'/api/{prefix}/unet_roots', self.get_unet_roots)
app.router.add_get(f'/api/lm/{prefix}/checkpoints_roots', self.get_checkpoints_roots)
app.router.add_get(f'/api/lm/{prefix}/unet_roots', self.get_unet_roots)
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type for Checkpoint"""
return model_type.lower() == 'checkpoint'
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages"""
return "Checkpoint"
async def get_checkpoint_info(self, request: web.Request) -> web.Response:
"""Get detailed information for a specific checkpoint by name"""
@@ -61,54 +66,6 @@ class CheckpointRoutes(BaseModelRoutes):
logger.error(f"Error in get_checkpoint_info: {e}", exc_info=True)
return web.json_response({"error": str(e)}, status=500)
async def get_civitai_versions_checkpoint(self, request: web.Request) -> web.Response:
"""Get available versions for a Civitai checkpoint model with local availability info"""
try:
model_id = request.match_info['model_id']
metadata_provider = await get_default_metadata_provider()
response = await metadata_provider.get_model_versions(model_id)
if not response or not response.get('modelVersions'):
return web.Response(status=404, text="Model not found")
versions = response.get('modelVersions', [])
model_type = response.get('type', '')
# Check model type - should be Checkpoint
if model_type.lower() != 'checkpoint':
return web.json_response({
'error': f"Model type mismatch. Expected Checkpoint, got {model_type}"
}, status=400)
# Check local availability for each version
for version in versions:
# Find the primary model file (type="Model" and primary=true) in the files list
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model' and file.get('primary') == True), None)
# If no primary file found, try to find any model file
if not model_file:
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model'), None)
if model_file:
sha256 = model_file.get('hashes', {}).get('SHA256')
if sha256:
# Set existsLocally and localPath at the version level
version['existsLocally'] = self.service.has_hash(sha256)
if version['existsLocally']:
version['localPath'] = self.service.get_path_by_hash(sha256)
# Also set the model file size at the version level for easier access
version['modelSizeKB'] = model_file.get('sizeKB')
else:
# No model file found in this version
version['existsLocally'] = False
return web.json_response(versions)
except Exception as e:
logger.error(f"Error fetching checkpoint model versions: {e}")
return web.Response(status=500, text=str(e))
async def get_checkpoints_roots(self, request: web.Request) -> web.Response:
"""Return the list of checkpoint roots from config"""
try:


@@ -35,11 +35,16 @@ class EmbeddingRoutes(BaseModelRoutes):
def setup_specific_routes(self, app: web.Application, prefix: str):
"""Setup Embedding-specific routes"""
# Embedding-specific CivitAI integration
app.router.add_get(f'/api/{prefix}/civitai/versions/{{model_id}}', self.get_civitai_versions_embedding)
# Embedding info by name
app.router.add_get(f'/api/{prefix}/info/{{name}}', self.get_embedding_info)
app.router.add_get(f'/api/lm/{prefix}/info/{{name}}', self.get_embedding_info)
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type for Embedding"""
return model_type.lower() == 'textualinversion'
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages"""
return "TextualInversion"
async def get_embedding_info(self, request: web.Request) -> web.Response:
"""Get detailed information for a specific embedding by name"""
@@ -55,51 +60,3 @@ class EmbeddingRoutes(BaseModelRoutes):
except Exception as e:
logger.error(f"Error in get_embedding_info: {e}", exc_info=True)
return web.json_response({"error": str(e)}, status=500)
async def get_civitai_versions_embedding(self, request: web.Request) -> web.Response:
"""Get available versions for a Civitai embedding model with local availability info"""
try:
model_id = request.match_info['model_id']
metadata_provider = await get_default_metadata_provider()
response = await metadata_provider.get_model_versions(model_id)
if not response or not response.get('modelVersions'):
return web.Response(status=404, text="Model not found")
versions = response.get('modelVersions', [])
model_type = response.get('type', '')
# Check model type - should be TextualInversion (Embedding)
if model_type.lower() not in ['textualinversion', 'embedding']:
return web.json_response({
'error': f"Model type mismatch. Expected TextualInversion/Embedding, got {model_type}"
}, status=400)
# Check local availability for each version
for version in versions:
# Find the primary model file (type="Model" and primary=true) in the files list
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model' and file.get('primary') == True), None)
# If no primary file found, try to find any model file
if not model_file:
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model'), None)
if model_file:
sha256 = model_file.get('hashes', {}).get('SHA256')
if sha256:
# Set existsLocally and localPath at the version level
version['existsLocally'] = self.service.has_hash(sha256)
if version['existsLocally']:
version['localPath'] = self.service.get_path_by_hash(sha256)
# Also set the model file size at the version level for easier access
version['modelSizeKB'] = model_file.get('sizeKB')
else:
# No model file found in this version
version['existsLocally'] = False
return web.json_response(versions)
except Exception as e:
logger.error(f"Error fetching embedding model versions: {e}")
return web.Response(status=500, text=str(e))


@@ -12,16 +12,16 @@ class ExampleImagesRoutes:
@staticmethod
def setup_routes(app):
"""Register example images routes"""
app.router.add_post('/api/download-example-images', ExampleImagesRoutes.download_example_images)
app.router.add_post('/api/import-example-images', ExampleImagesRoutes.import_example_images)
app.router.add_get('/api/example-images-status', ExampleImagesRoutes.get_example_images_status)
app.router.add_post('/api/pause-example-images', ExampleImagesRoutes.pause_example_images)
app.router.add_post('/api/resume-example-images', ExampleImagesRoutes.resume_example_images)
app.router.add_post('/api/open-example-images-folder', ExampleImagesRoutes.open_example_images_folder)
app.router.add_get('/api/example-image-files', ExampleImagesRoutes.get_example_image_files)
app.router.add_get('/api/has-example-images', ExampleImagesRoutes.has_example_images)
app.router.add_post('/api/delete-example-image', ExampleImagesRoutes.delete_example_image)
app.router.add_post('/api/force-download-example-images', ExampleImagesRoutes.force_download_example_images)
app.router.add_post('/api/lm/download-example-images', ExampleImagesRoutes.download_example_images)
app.router.add_post('/api/lm/import-example-images', ExampleImagesRoutes.import_example_images)
app.router.add_get('/api/lm/example-images-status', ExampleImagesRoutes.get_example_images_status)
app.router.add_post('/api/lm/pause-example-images', ExampleImagesRoutes.pause_example_images)
app.router.add_post('/api/lm/resume-example-images', ExampleImagesRoutes.resume_example_images)
app.router.add_post('/api/lm/open-example-images-folder', ExampleImagesRoutes.open_example_images_folder)
app.router.add_get('/api/lm/example-image-files', ExampleImagesRoutes.get_example_image_files)
app.router.add_get('/api/lm/has-example-images', ExampleImagesRoutes.has_example_images)
app.router.add_post('/api/lm/delete-example-image', ExampleImagesRoutes.delete_example_image)
app.router.add_post('/api/lm/force-download-example-images', ExampleImagesRoutes.force_download_example_images)
@staticmethod
async def download_example_images(request):


@@ -40,17 +40,12 @@ class LoraRoutes(BaseModelRoutes):
def setup_specific_routes(self, app: web.Application, prefix: str):
"""Setup LoRA-specific routes"""
# LoRA-specific query routes
app.router.add_get(f'/api/{prefix}/letter-counts', self.get_letter_counts)
app.router.add_get(f'/api/{prefix}/get-trigger-words', self.get_lora_trigger_words)
app.router.add_get(f'/api/{prefix}/usage-tips-by-path', self.get_lora_usage_tips_by_path)
# CivitAI integration with LoRA-specific validation
app.router.add_get(f'/api/{prefix}/civitai/versions/{{model_id}}', self.get_civitai_versions_lora)
app.router.add_get(f'/api/{prefix}/civitai/model/version/{{modelVersionId}}', self.get_civitai_model_by_version)
app.router.add_get(f'/api/{prefix}/civitai/model/hash/{{hash}}', self.get_civitai_model_by_hash)
app.router.add_get(f'/api/lm/{prefix}/letter-counts', self.get_letter_counts)
app.router.add_get(f'/api/lm/{prefix}/get-trigger-words', self.get_lora_trigger_words)
app.router.add_get(f'/api/lm/{prefix}/usage-tips-by-path', self.get_lora_usage_tips_by_path)
# ComfyUI integration
app.router.add_post(f'/api/{prefix}/get_trigger_words', self.get_trigger_words)
app.router.add_post(f'/api/lm/{prefix}/get_trigger_words', self.get_trigger_words)
def _parse_specific_params(self, request: web.Request) -> Dict:
"""Parse LoRA-specific parameters"""
@@ -76,6 +71,15 @@ class LoraRoutes(BaseModelRoutes):
return params
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type for LoRA"""
from ..utils.constants import VALID_LORA_TYPES
return model_type.lower() in VALID_LORA_TYPES
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages"""
return "LORA, LoCon, or DORA"
# LoRA-specific route handlers
async def get_letter_counts(self, request: web.Request) -> web.Response:
"""Get count of LoRAs for each letter of the alphabet"""
@@ -210,94 +214,6 @@ class LoraRoutes(BaseModelRoutes):
'error': str(e)
}, status=500)
# CivitAI integration methods
async def get_civitai_versions_lora(self, request: web.Request) -> web.Response:
"""Get available versions for a Civitai LoRA model with local availability info"""
try:
model_id = request.match_info['model_id']
metadata_provider = await get_default_metadata_provider()
response = await metadata_provider.get_model_versions(model_id)
if not response or not response.get('modelVersions'):
return web.Response(status=404, text="Model not found")
versions = response.get('modelVersions', [])
model_type = response.get('type', '')
# Check model type - should be LORA, LoCon, or DORA
from ..utils.constants import VALID_LORA_TYPES
if model_type.lower() not in VALID_LORA_TYPES:
return web.json_response({
'error': f"Model type mismatch. Expected LORA or LoCon, got {model_type}"
}, status=400)
# Check local availability for each version
for version in versions:
# Find the model file (type="Model") in the files list
model_file = next((file for file in version.get('files', [])
if file.get('type') == 'Model'), None)
if model_file:
sha256 = model_file.get('hashes', {}).get('SHA256')
if sha256:
# Set existsLocally and localPath at the version level
version['existsLocally'] = self.service.has_hash(sha256)
if version['existsLocally']:
version['localPath'] = self.service.get_path_by_hash(sha256)
# Also set the model file size at the version level for easier access
version['modelSizeKB'] = model_file.get('sizeKB')
else:
# No model file found in this version
version['existsLocally'] = False
return web.json_response(versions)
except Exception as e:
logger.error(f"Error fetching LoRA model versions: {e}")
return web.Response(status=500, text=str(e))
async def get_civitai_model_by_version(self, request: web.Request) -> web.Response:
"""Get CivitAI model details by model version ID"""
try:
model_version_id = request.match_info.get('modelVersionId')
# Get model details from metadata provider
metadata_provider = await get_default_metadata_provider()
model, error_msg = await metadata_provider.get_model_version_info(model_version_id)
if not model:
# Log warning for failed model retrieval
logger.warning(f"Failed to fetch model version {model_version_id}: {error_msg}")
# Determine status code based on error message
status_code = 404 if error_msg and "not found" in error_msg.lower() else 500
return web.json_response({
"success": False,
"error": error_msg or "Failed to fetch model information"
}, status=status_code)
return web.json_response(model)
except Exception as e:
logger.error(f"Error fetching model details: {e}")
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def get_civitai_model_by_hash(self, request: web.Request) -> web.Response:
"""Get CivitAI model details by hash"""
try:
hash = request.match_info.get('hash')
metadata_provider = await get_default_metadata_provider()
model = await metadata_provider.get_model_by_hash(hash)
return web.json_response(model)
except Exception as e:
logger.error(f"Error fetching model details by hash: {e}")
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def get_trigger_words(self, request: web.Request) -> web.Response:
"""Get trigger words for specified LoRA models"""
try:


@@ -4,6 +4,7 @@ import sys
import threading
import asyncio
import subprocess
import re
from server import PromptServer # type: ignore
from aiohttp import web
from ..services.settings_manager import settings
@@ -12,7 +13,7 @@ from ..utils.lora_metadata import extract_trained_words
from ..config import config
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS, NODE_TYPES, DEFAULT_NODE_COLOR
from ..services.service_registry import ServiceRegistry
from ..services.metadata_service import get_metadata_archive_manager, update_metadata_providers
from ..services.metadata_service import get_metadata_archive_manager, update_metadata_providers, get_metadata_provider
from ..services.websocket_manager import ws_manager
from ..services.downloader import get_downloader
logger = logging.getLogger(__name__)
@@ -85,40 +86,91 @@ node_registry = NodeRegistry()
class MiscRoutes:
"""Miscellaneous routes for various utility functions"""
@staticmethod
def is_dedicated_example_images_folder(folder_path):
"""
Check if a folder is a dedicated example images folder.
A dedicated folder should either be:
1. Empty
2. Only contain .download_progress.json file and/or folders with valid SHA256 hash names (64 hex characters)
Args:
folder_path (str): Path to the folder to check
Returns:
bool: True if the folder is dedicated, False otherwise
"""
try:
if not os.path.exists(folder_path) or not os.path.isdir(folder_path):
return False
items = os.listdir(folder_path)
# Empty folder is considered dedicated
if not items:
return True
# Check each item in the folder
for item in items:
item_path = os.path.join(folder_path, item)
# Allow .download_progress.json file
if item == '.download_progress.json' and os.path.isfile(item_path):
continue
# Allow folders with valid SHA256 hash names (64 hex characters)
if os.path.isdir(item_path):
# Check if the folder name is a valid SHA256 hash
if re.match(r'^[a-fA-F0-9]{64}$', item):
continue
# If we encounter anything else, it's not a dedicated folder
return False
return True
except Exception as e:
logger.error(f"Error checking if folder is dedicated: {e}")
return False
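The directory check above boils down to a per-entry rule: a file is allowed only if it is `.download_progress.json`, and a directory only if its name is a 64-character hex SHA256. That rule in isolation, as a sketch (the helper name is ours):

```python
import re

_SHA256_RE = re.compile(r"^[a-fA-F0-9]{64}$")

def is_allowed_entry(name: str, is_dir: bool) -> bool:
    """Entries permitted inside a dedicated example-images folder."""
    if not is_dir:
        return name == ".download_progress.json"
    return bool(_SHA256_RE.match(name))
```

A folder is then "dedicated" when it is empty or every entry satisfies this predicate.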
@staticmethod
def setup_routes(app):
"""Register miscellaneous routes"""
app.router.add_get('/api/lm/settings', MiscRoutes.get_settings)
app.router.add_post('/api/lm/settings', MiscRoutes.update_settings)
app.router.add_get('/api/health-check', lambda request: web.json_response({'status': 'ok'}))
app.router.add_get('/api/lm/health-check', lambda request: web.json_response({'status': 'ok'}))
app.router.add_post('/api/open-file-location', MiscRoutes.open_file_location)
app.router.add_post('/api/lm/open-file-location', MiscRoutes.open_file_location)
# Usage stats routes
app.router.add_post('/api/update-usage-stats', MiscRoutes.update_usage_stats)
app.router.add_get('/api/get-usage-stats', MiscRoutes.get_usage_stats)
app.router.add_post('/api/lm/update-usage-stats', MiscRoutes.update_usage_stats)
app.router.add_get('/api/lm/get-usage-stats', MiscRoutes.get_usage_stats)
# Lora code update endpoint
app.router.add_post('/api/update-lora-code', MiscRoutes.update_lora_code)
app.router.add_post('/api/lm/update-lora-code', MiscRoutes.update_lora_code)
# Add new route for getting trained words
app.router.add_get('/api/trained-words', MiscRoutes.get_trained_words)
app.router.add_get('/api/lm/trained-words', MiscRoutes.get_trained_words)
# Add new route for getting model example files
app.router.add_get('/api/model-example-files', MiscRoutes.get_model_example_files)
app.router.add_get('/api/lm/model-example-files', MiscRoutes.get_model_example_files)
# Node registry endpoints
app.router.add_post('/api/register-nodes', MiscRoutes.register_nodes)
app.router.add_get('/api/get-registry', MiscRoutes.get_registry)
app.router.add_post('/api/lm/register-nodes', MiscRoutes.register_nodes)
app.router.add_get('/api/lm/get-registry', MiscRoutes.get_registry)
# Add new route for checking if a model exists in the library
app.router.add_get('/api/check-model-exists', MiscRoutes.check_model_exists)
app.router.add_get('/api/lm/check-model-exists', MiscRoutes.check_model_exists)
# Add routes for metadata archive database management
app.router.add_post('/api/download-metadata-archive', MiscRoutes.download_metadata_archive)
app.router.add_post('/api/remove-metadata-archive', MiscRoutes.remove_metadata_archive)
app.router.add_get('/api/metadata-archive-status', MiscRoutes.get_metadata_archive_status)
app.router.add_post('/api/lm/download-metadata-archive', MiscRoutes.download_metadata_archive)
app.router.add_post('/api/lm/remove-metadata-archive', MiscRoutes.remove_metadata_archive)
app.router.add_get('/api/lm/metadata-archive-status', MiscRoutes.get_metadata_archive_status)
# Add route for checking model versions in library
app.router.add_get('/api/lm/model-versions-status', MiscRoutes.get_model_versions_status)
@staticmethod
async def get_settings(request):
@@ -177,7 +229,7 @@ class MiscRoutes:
if value == settings.get(key):
# No change, skip
continue
# Special handling for example_images_path - verify path exists
# Special handling for example_images_path - verify path exists and is dedicated
if key == 'example_images_path' and value:
if not os.path.exists(value):
return web.json_response({
@@ -185,6 +237,13 @@ class MiscRoutes:
'error': f"Path does not exist: {value}"
})
# Check if folder is dedicated for example images
if not MiscRoutes.is_dedicated_example_images_folder(value):
return web.json_response({
'success': False,
'error': "Please set a dedicated folder for example images."
})
# Path changed - server restart required for new path to take effect
old_path = settings.get('example_images_path')
if old_path != value:
@@ -832,6 +891,113 @@ class MiscRoutes:
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def get_model_versions_status(request):
"""
Get all versions of a model from metadata provider and check their library status
Expects query parameters:
- modelId: int - Civitai model ID (required)
Returns:
- JSON with model type and versions list, each version includes 'inLibrary' flag
"""
try:
# Get the modelId from query parameters
model_id_str = request.query.get('modelId')
# Validate modelId parameter (required)
if not model_id_str:
return web.json_response({
'success': False,
'error': 'Missing required parameter: modelId'
}, status=400)
try:
# Convert modelId to integer
model_id = int(model_id_str)
except ValueError:
return web.json_response({
'success': False,
'error': 'Parameter modelId must be an integer'
}, status=400)
# Get metadata provider
metadata_provider = await get_metadata_provider()
if not metadata_provider:
return web.json_response({
'success': False,
'error': 'Metadata provider not available'
}, status=503)
# Get model versions from metadata provider
response = await metadata_provider.get_model_versions(model_id)
if not response or not response.get('modelVersions'):
return web.json_response({
'success': False,
'error': 'Model not found'
}, status=404)
versions = response.get('modelVersions', [])
model_name = response.get('name', '')
model_type = response.get('type', '').lower()
# Determine scanner based on model type
scanner = None
normalized_type = None
if model_type in ['lora', 'locon', 'dora']:
scanner = await ServiceRegistry.get_lora_scanner()
normalized_type = 'lora'
elif model_type == 'checkpoint':
scanner = await ServiceRegistry.get_checkpoint_scanner()
normalized_type = 'checkpoint'
elif model_type == 'textualinversion':
scanner = await ServiceRegistry.get_embedding_scanner()
normalized_type = 'embedding'
else:
return web.json_response({
'success': False,
'error': f'Model type "{model_type}" is not supported'
}, status=400)
if not scanner:
return web.json_response({
'success': False,
'error': f'Scanner for type "{normalized_type}" is not available'
}, status=503)
# Get local versions from scanner
local_versions = await scanner.get_model_versions_by_id(model_id)
local_version_ids = set(version['versionId'] for version in local_versions)
# Add inLibrary flag to each version
enriched_versions = []
for version in versions:
version_id = version.get('id')
enriched_version = {
'id': version_id,
'name': version.get('name', ''),
'thumbnailUrl': version.get('images')[0]['url'] if version.get('images') else None,
'inLibrary': version_id in local_version_ids
}
enriched_versions.append(enriched_version)
return web.json_response({
'success': True,
'modelId': model_id,
'modelName': model_name,
'modelType': model_type,
'versions': enriched_versions
})
except Exception as e:
logger.error(f"Failed to get model versions status: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
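The enrichment loop in `get_model_versions_status` reduces to a small pure function. A standalone sketch of that step (same field names as the handler, extracted here for illustration):

```python
def enrich_versions(versions, local_version_ids):
    """Mirror of the endpoint's enrichment loop: flag each remote version
    with whether a matching file already exists in the local library."""
    enriched = []
    for v in versions:
        images = v.get('images') or []
        enriched.append({
            'id': v.get('id'),
            'name': v.get('name', ''),
            'thumbnailUrl': images[0]['url'] if images else None,
            'inLibrary': v.get('id') in local_version_ids,
        })
    return enriched
```

Fetching `images` once and defaulting to an empty list also guards against an explicit `"images": null` in the provider response, which the inline `version.get('images')[0]` form would trip over.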
@staticmethod
async def open_file_location(request):

View File

@@ -61,46 +61,46 @@ class RecipeRoutes:
routes = cls()
app.router.add_get('/loras/recipes', routes.handle_recipes_page)
app.router.add_get('/api/recipes', routes.get_recipes)
app.router.add_get('/api/recipe/{recipe_id}', routes.get_recipe_detail)
app.router.add_post('/api/recipes/analyze-image', routes.analyze_recipe_image)
app.router.add_post('/api/recipes/analyze-local-image', routes.analyze_local_image)
app.router.add_post('/api/recipes/save', routes.save_recipe)
app.router.add_delete('/api/recipe/{recipe_id}', routes.delete_recipe)
app.router.add_get('/api/lm/recipes', routes.get_recipes)
app.router.add_get('/api/lm/recipe/{recipe_id}', routes.get_recipe_detail)
app.router.add_post('/api/lm/recipes/analyze-image', routes.analyze_recipe_image)
app.router.add_post('/api/lm/recipes/analyze-local-image', routes.analyze_local_image)
app.router.add_post('/api/lm/recipes/save', routes.save_recipe)
app.router.add_delete('/api/lm/recipe/{recipe_id}', routes.delete_recipe)
# Add new filter-related endpoints
app.router.add_get('/api/recipes/top-tags', routes.get_top_tags)
app.router.add_get('/api/recipes/base-models', routes.get_base_models)
app.router.add_get('/api/lm/recipes/top-tags', routes.get_top_tags)
app.router.add_get('/api/lm/recipes/base-models', routes.get_base_models)
# Add new sharing endpoints
app.router.add_get('/api/recipe/{recipe_id}/share', routes.share_recipe)
app.router.add_get('/api/recipe/{recipe_id}/share/download', routes.download_shared_recipe)
app.router.add_get('/api/lm/recipe/{recipe_id}/share', routes.share_recipe)
app.router.add_get('/api/lm/recipe/{recipe_id}/share/download', routes.download_shared_recipe)
# Add new endpoint for getting recipe syntax
app.router.add_get('/api/recipe/{recipe_id}/syntax', routes.get_recipe_syntax)
app.router.add_get('/api/lm/recipe/{recipe_id}/syntax', routes.get_recipe_syntax)
# Add new endpoint for updating recipe metadata (name, tags and source_path)
app.router.add_put('/api/recipe/{recipe_id}/update', routes.update_recipe)
app.router.add_put('/api/lm/recipe/{recipe_id}/update', routes.update_recipe)
# Add new endpoint for reconnecting deleted LoRAs
app.router.add_post('/api/recipe/lora/reconnect', routes.reconnect_lora)
app.router.add_post('/api/lm/recipe/lora/reconnect', routes.reconnect_lora)
# Add new endpoint for finding duplicate recipes
app.router.add_get('/api/recipes/find-duplicates', routes.find_duplicates)
app.router.add_get('/api/lm/recipes/find-duplicates', routes.find_duplicates)
# Add new endpoint for bulk deletion of recipes
app.router.add_post('/api/recipes/bulk-delete', routes.bulk_delete)
app.router.add_post('/api/lm/recipes/bulk-delete', routes.bulk_delete)
# Start cache initialization
app.on_startup.append(routes._init_cache)
app.router.add_post('/api/recipes/save-from-widget', routes.save_recipe_from_widget)
app.router.add_post('/api/lm/recipes/save-from-widget', routes.save_recipe_from_widget)
# Add route to get recipes for a specific Lora
app.router.add_get('/api/recipes/for-lora', routes.get_recipes_for_lora)
app.router.add_get('/api/lm/recipes/for-lora', routes.get_recipes_for_lora)
# Add new endpoint for scanning and rebuilding the recipe cache
app.router.add_get('/api/recipes/scan', routes.scan_recipes)
app.router.add_get('/api/lm/recipes/scan', routes.scan_recipes)
async def _init_cache(self, app):
"""Initialize cache on startup"""

View File

@@ -507,12 +507,12 @@ class StatsRoutes:
app.router.add_get('/statistics', self.handle_stats_page)
# Register API routes
app.router.add_get('/api/stats/collection-overview', self.get_collection_overview)
app.router.add_get('/api/stats/usage-analytics', self.get_usage_analytics)
app.router.add_get('/api/stats/base-model-distribution', self.get_base_model_distribution)
app.router.add_get('/api/stats/tag-analytics', self.get_tag_analytics)
app.router.add_get('/api/stats/storage-analytics', self.get_storage_analytics)
app.router.add_get('/api/stats/insights', self.get_insights)
app.router.add_get('/api/lm/stats/collection-overview', self.get_collection_overview)
app.router.add_get('/api/lm/stats/usage-analytics', self.get_usage_analytics)
app.router.add_get('/api/lm/stats/base-model-distribution', self.get_base_model_distribution)
app.router.add_get('/api/lm/stats/tag-analytics', self.get_tag_analytics)
app.router.add_get('/api/lm/stats/storage-analytics', self.get_storage_analytics)
app.router.add_get('/api/lm/stats/insights', self.get_insights)
async def _on_startup(self, app):
"""Initialize services when the app starts"""

View File

@@ -7,7 +7,7 @@ import shutil
import tempfile
from aiohttp import web
from typing import Dict, List
from ..services.downloader import get_downloader, Downloader
from ..services.downloader import get_downloader
logger = logging.getLogger(__name__)
@@ -17,9 +17,9 @@ class UpdateRoutes:
@staticmethod
def setup_routes(app):
"""Register update check routes"""
app.router.add_get('/api/check-updates', UpdateRoutes.check_updates)
app.router.add_get('/api/version-info', UpdateRoutes.get_version_info)
app.router.add_post('/api/perform-update', UpdateRoutes.perform_update)
app.router.add_get('/api/lm/check-updates', UpdateRoutes.check_updates)
app.router.add_get('/api/lm/version-info', UpdateRoutes.get_version_info)
app.router.add_post('/api/lm/perform-update', UpdateRoutes.perform_update)
@staticmethod
async def check_updates(request):
@@ -265,7 +265,7 @@ class UpdateRoutes:
github_url = f"https://api.github.com/repos/{repo_owner}/{repo_name}/commits/main"
try:
downloader = await Downloader.get_instance()
downloader = await get_downloader()
success, data = await downloader.make_request('GET', github_url, custom_headers={'Accept': 'application/vnd.github+json'})
if not success:
@@ -431,7 +431,7 @@ class UpdateRoutes:
github_url = f"https://api.github.com/repos/{repo_owner}/{repo_name}/releases/latest"
try:
downloader = await Downloader.get_instance()
downloader = await get_downloader()
success, data = await downloader.make_request('GET', github_url, custom_headers={'Accept': 'application/vnd.github+json'})
if not success:

View File

@@ -1,4 +1,3 @@
from datetime import datetime
import os
import logging
import asyncio
@@ -59,17 +58,17 @@ class CivitaiClient:
return success, result
async def get_model_by_hash(self, model_hash: str) -> Optional[Dict]:
async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
try:
downloader = await get_downloader()
success, version = await downloader.make_request(
success, result = await downloader.make_request(
'GET',
f"{self.base_url}/model-versions/by-hash/{model_hash}",
use_auth=True
)
if success:
# Get model ID from version data
model_id = version.get('modelId')
model_id = result.get('modelId')
if model_id:
# Fetch additional model metadata
success_model, data = await downloader.make_request(
@@ -79,17 +78,24 @@ class CivitaiClient:
)
if success_model:
# Enrich version_info with model data
version['model']['description'] = data.get("description")
version['model']['tags'] = data.get("tags", [])
result['model']['description'] = data.get("description")
result['model']['tags'] = data.get("tags", [])
# Add creator from model data
version['creator'] = data.get("creator")
result['creator'] = data.get("creator")
return version
return None
return result, None
# Handle specific error cases
if "not found" in str(result):
return None, "Model not found"
# Other error cases
logger.error(f"Failed to fetch model info for {model_hash[:10]}: {result}")
return None, str(result)
except Exception as e:
logger.error(f"API Error: {str(e)}")
return None
return None, str(e)
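The change above moves `get_model_by_hash` from returning a bare `Optional[Dict]` to the `(result, error)` convention used elsewhere in the client: exactly one side is set. A toy stand-in (assumed names, not the real client) showing how callers unpack it:

```python
from typing import Dict, Optional, Tuple

def get_model_by_hash_stub(model_hash: str,
                           index: Dict[str, Dict]) -> Tuple[Optional[Dict], Optional[str]]:
    """Toy stand-in for the new signature: on success the error slot is
    None; on failure the result slot is None and the error explains why."""
    if model_hash in index:
        return index[model_hash], None
    return None, "Model not found"
```

Callers can then branch on the error string (as `fetch_and_update_model` does with `"Model not found"`) instead of guessing why a `None` came back.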
async def download_preview_image(self, image_url: str, save_path: str):
try:
@@ -122,7 +128,8 @@ class CivitaiClient:
# Also return model type along with versions
return {
'modelVersions': result.get('modelVersions', []),
'type': result.get('type', '')
'type': result.get('type', ''),
'name': result.get('name', '')
}
return None
except Exception as e:
@@ -245,8 +252,8 @@ class CivitaiClient:
return result, None
# Handle specific error cases
if "404" in str(result):
error_msg = f"Model not found (status 404)"
if "not found" in str(result):
error_msg = "Model not found"
logger.warning(f"Model version not found: {version_id} - {error_msg}")
return None, error_msg
@@ -258,59 +265,6 @@ class CivitaiClient:
logger.error(error_msg)
return None, error_msg
async def get_model_metadata(self, model_id: str) -> Tuple[Optional[Dict], int]:
"""Fetch model metadata (description, tags, and creator info) from Civitai API
Args:
model_id: The Civitai model ID
Returns:
Tuple[Optional[Dict], int]: A tuple containing:
- A dictionary with model metadata or None if not found
- The HTTP status code from the request (0 for exceptions)
"""
try:
downloader = await get_downloader()
url = f"{self.base_url}/models/{model_id}"
success, result = await downloader.make_request(
'GET',
url,
use_auth=True
)
if not success:
# Try to extract status code from error message
status_code = 0
if "404" in str(result):
status_code = 404
elif "401" in str(result):
status_code = 401
elif "403" in str(result):
status_code = 403
logger.warning(f"Failed to fetch model metadata: {result}")
return None, status_code
# Extract relevant metadata
metadata = {
"description": result.get("description") or "No model description available",
"tags": result.get("tags", []),
"creator": {
"username": result.get("creator", {}).get("username"),
"image": result.get("creator", {}).get("image")
}
}
if metadata["description"] or metadata["tags"] or metadata["creator"]["username"]:
return metadata, 200
else:
logger.warning(f"No metadata found for model {model_id}")
return None, 200
except Exception as e:
logger.error(f"Error fetching model metadata: {e}", exc_info=True)
return None, 0
async def get_image_info(self, image_id: str) -> Optional[Dict]:
"""Fetch image information from Civitai API

View File

@@ -2,7 +2,6 @@ from abc import ABC, abstractmethod
import json
import aiosqlite
import logging
import aiohttp
from bs4 import BeautifulSoup
from typing import Optional, Dict, Tuple
from .downloader import get_downloader
@@ -13,7 +12,7 @@ class ModelMetadataProvider(ABC):
"""Base abstract class for all model metadata providers"""
@abstractmethod
async def get_model_by_hash(self, model_hash: str) -> Optional[Dict]:
async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
"""Find model by hash value"""
pass
@@ -31,11 +30,6 @@ class ModelMetadataProvider(ABC):
async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
"""Fetch model version metadata"""
pass
@abstractmethod
async def get_model_metadata(self, model_id: str) -> Tuple[Optional[Dict], int]:
"""Fetch model metadata (description, tags, and creator info)"""
pass
class CivitaiModelMetadataProvider(ModelMetadataProvider):
"""Provider that uses Civitai API for metadata"""
@@ -43,7 +37,7 @@ class CivitaiModelMetadataProvider(ModelMetadataProvider):
def __init__(self, civitai_client):
self.client = civitai_client
async def get_model_by_hash(self, model_hash: str) -> Optional[Dict]:
async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
return await self.client.get_model_by_hash(model_hash)
async def get_model_versions(self, model_id: str) -> Optional[Dict]:
@@ -54,16 +48,13 @@ class CivitaiModelMetadataProvider(ModelMetadataProvider):
async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
return await self.client.get_model_version_info(version_id)
async def get_model_metadata(self, model_id: str) -> Tuple[Optional[Dict], int]:
return await self.client.get_model_metadata(model_id)
class CivArchiveModelMetadataProvider(ModelMetadataProvider):
"""Provider that uses CivArchive HTML page parsing for metadata"""
async def get_model_by_hash(self, model_hash: str) -> Optional[Dict]:
async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
"""Not supported by CivArchive provider"""
return None
return None, "CivArchive provider does not support hash lookup"
async def get_model_versions(self, model_id: str) -> Optional[Dict]:
"""Not supported by CivArchive provider"""
@@ -174,10 +165,6 @@ class CivArchiveModelMetadataProvider(ModelMetadataProvider):
async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
"""Not supported by CivArchive provider - requires both model_id and version_id"""
return None, "CivArchive provider requires both model_id and version_id"
async def get_model_metadata(self, model_id: str) -> Tuple[Optional[Dict], int]:
"""Not supported by CivArchive provider"""
return None, 404
class SQLiteModelMetadataProvider(ModelMetadataProvider):
"""Provider that uses SQLite database for metadata"""
@@ -185,7 +172,7 @@ class SQLiteModelMetadataProvider(ModelMetadataProvider):
def __init__(self, db_path: str):
self.db_path = db_path
async def get_model_by_hash(self, model_hash: str) -> Optional[Dict]:
async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
"""Find model by hash value from SQLite database"""
async with aiosqlite.connect(self.db_path) as db:
# Look up in model_files table to get model_id and version_id
@@ -200,14 +187,15 @@ class SQLiteModelMetadataProvider(ModelMetadataProvider):
file_row = await cursor.fetchone()
if not file_row:
return None
return None, "Model not found"
# Get version details
model_id = file_row['model_id']
version_id = file_row['version_id']
# Build response in the same format as Civitai API
return await self._get_version_with_model_data(db, model_id, version_id)
result = await self._get_version_with_model_data(db, model_id, version_id)
return result, (None if result else "Error retrieving model data")
async def get_model_versions(self, model_id: str) -> Optional[Dict]:
"""Get all versions of a model from SQLite database"""
@@ -224,6 +212,7 @@ class SQLiteModelMetadataProvider(ModelMetadataProvider):
model_data = json.loads(model_row['data'])
model_type = model_row['type']
model_name = model_row['name']
# Get all versions for this model
versions_query = """
@@ -260,7 +249,8 @@ class SQLiteModelMetadataProvider(ModelMetadataProvider):
return {
'modelVersions': model_versions,
'type': model_type
'type': model_type,
'name': model_name
}
async def get_model_version(self, model_id: int = None, version_id: int = None) -> Optional[Dict]:
@@ -322,37 +312,6 @@ class SQLiteModelMetadataProvider(ModelMetadataProvider):
version_data = await self._get_version_with_model_data(db, model_id, version_id)
return version_data, None
async def get_model_metadata(self, model_id: str) -> Tuple[Optional[Dict], int]:
"""Fetch model metadata from SQLite database"""
async with aiosqlite.connect(self.db_path) as db:
db.row_factory = aiosqlite.Row
# Get model details
model_query = "SELECT name, type, data, username FROM models WHERE id = ?"
cursor = await db.execute(model_query, (model_id,))
model_row = await cursor.fetchone()
if not model_row:
return None, 404
# Parse data JSON
try:
model_data = json.loads(model_row['data'])
# Extract relevant metadata
metadata = {
"description": model_data.get("description", "No model description available"),
"tags": model_data.get("tags", []),
"creator": {
"username": model_row['username'] or model_data.get("creator", {}).get("username"),
"image": model_data.get("creator", {}).get("image")
}
}
return metadata, 200
except json.JSONDecodeError:
return None, 500
async def _get_version_with_model_data(self, db, model_id, version_id) -> Optional[Dict]:
"""Helper to build version data with model information"""
# Get version details
@@ -407,15 +366,16 @@ class FallbackMetadataProvider(ModelMetadataProvider):
def __init__(self, providers: list):
self.providers = providers
async def get_model_by_hash(self, model_hash: str) -> Optional[Dict]:
async def get_model_by_hash(self, model_hash: str) -> Tuple[Optional[Dict], Optional[str]]:
for provider in self.providers:
try:
result = await provider.get_model_by_hash(model_hash)
result, error = await provider.get_model_by_hash(model_hash)
if result:
return result
except Exception:
return result, error
except Exception as e:
logger.debug(f"Provider failed for get_model_by_hash: {e}")
continue
return None
return None, "Model not found"
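The fallback provider's loop is easiest to see stripped of async machinery. A synchronous sketch of the same control flow, with providers modeled as plain callables:

```python
def fallback_by_hash(providers, model_hash):
    """Synchronous sketch of FallbackMetadataProvider.get_model_by_hash:
    try providers in order, return the first hit together with its error
    slot, swallow per-provider exceptions, and fall through to a generic
    not-found error when every provider misses or fails."""
    for provider in providers:
        try:
            result, error = provider(model_hash)
            if result:
                return result, error
        except Exception:
            continue
    return None, "Model not found"
```

Note that a provider returning `(None, error)` does not short-circuit the chain; only a non-None result does, so a CivArchive "not supported" error never masks a later SQLite hit.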
async def get_model_versions(self, model_id: str) -> Optional[Dict]:
for provider in self.providers:
@@ -450,17 +410,6 @@ class FallbackMetadataProvider(ModelMetadataProvider):
continue
return None, "No provider could retrieve the data"
async def get_model_metadata(self, model_id: str) -> Tuple[Optional[Dict], int]:
for provider in self.providers:
try:
result, status = await provider.get_model_metadata(model_id)
if result:
return result, status
except Exception as e:
logger.debug(f"Provider failed for get_model_metadata: {e}")
continue
return None, 404
class ModelMetadataProviderManager:
"""Manager for selecting and using model metadata providers"""
@@ -483,7 +432,7 @@ class ModelMetadataProviderManager:
if is_default or self.default_provider is None:
self.default_provider = name
async def get_model_by_hash(self, model_hash: str, provider_name: str = None) -> Optional[Dict]:
async def get_model_by_hash(self, model_hash: str, provider_name: str = None) -> Tuple[Optional[Dict], Optional[str]]:
"""Find model by hash using specified or default provider"""
provider = self._get_provider(provider_name)
return await provider.get_model_by_hash(model_hash)
@@ -503,11 +452,6 @@ class ModelMetadataProviderManager:
provider = self._get_provider(provider_name)
return await provider.get_model_version_info(version_id)
async def get_model_metadata(self, model_id: str, provider_name: str = None) -> Tuple[Optional[Dict], int]:
"""Fetch model metadata using specified or default provider"""
provider = self._get_provider(provider_name)
return await provider.get_model_metadata(model_id)
def _get_provider(self, provider_name: str = None) -> ModelMetadataProvider:
"""Get provider by name or default provider"""
if provider_name and provider_name in self.providers:

View File

@@ -81,6 +81,7 @@ class SettingsManager:
return {
"civitai_api_key": "",
"language": "en",
"show_only_sfw": False, # Show only SFW content
"enable_metadata_archive_db": False, # Enable metadata archive database
"proxy_enabled": False, # Enable app-level proxy
"proxy_host": "", # Proxy host

View File

@@ -10,6 +10,7 @@ from .example_images_processor import ExampleImagesProcessor
from .example_images_metadata import MetadataUpdater
from ..services.websocket_manager import ws_manager
from ..services.downloader import get_downloader
from ..services.settings_manager import settings
logger = logging.getLogger(__name__)
@@ -40,10 +41,10 @@ class DownloadManager:
Expects a JSON body with:
{
"output_dir": "path/to/output", # Base directory to save example images
"optimize": true, # Whether to optimize images (default: true)
"model_types": ["lora", "checkpoint"], # Model types to process (default: both)
"delay": 1.0 # Delay between downloads to avoid rate limiting (default: 1.0)
"delay": 1.0, # Delay between downloads to avoid rate limiting (default: 1.0)
"auto_mode": false # Flag to indicate automatic download (default: false)
}
"""
global download_task, is_downloading, download_progress
@@ -64,16 +65,28 @@ class DownloadManager:
try:
# Parse the request body
data = await request.json()
output_dir = data.get('output_dir')
auto_mode = data.get('auto_mode', False)
optimize = data.get('optimize', True)
model_types = data.get('model_types', ['lora', 'checkpoint'])
delay = float(data.get('delay', 0.2)) # Default to 0.2 seconds
# Get output directory from settings
output_dir = settings.get('example_images_path')
if not output_dir:
return web.json_response({
'success': False,
'error': 'Missing output_dir parameter'
}, status=400)
error_msg = 'Example images path not configured in settings'
if auto_mode:
# For auto mode, just log and return success to avoid showing error toasts
logger.debug(error_msg)
return web.json_response({
'success': True,
'message': 'Example images path not configured, skipping auto download'
})
else:
return web.json_response({
'success': False,
'error': error_msg
}, status=400)
# Create the output directory
os.makedirs(output_dir, exist_ok=True)
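The hunk above replaces the request-supplied `output_dir` with the `example_images_path` setting and gives auto mode a quiet skip. The branching distills to a small decision function (a sketch with assumed return shapes, mirroring the handler's JSON responses):

```python
def resolve_output_dir(example_images_path, auto_mode):
    """Sketch of the new behavior: the directory comes from settings, not
    the request body. In auto mode a missing setting is a silent success
    (no error toast); otherwise it is a client error (HTTP 400)."""
    if example_images_path:
        return 200, {'success': True, 'output_dir': example_images_path}
    if auto_mode:
        return 200, {'success': True,
                     'message': 'Example images path not configured, skipping auto download'}
    return 400, {'success': False,
                 'error': 'Example images path not configured in settings'}
```

This is the fix for the issue referenced in the commit message: a scheduled auto download no longer surfaces a configuration error to users who simply have not set the path.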
@@ -426,7 +439,6 @@ class DownloadManager:
Expects a JSON body with:
{
"model_hashes": ["hash1", "hash2", ...], # List of model hashes to download
"output_dir": "path/to/output", # Base directory to save example images
"optimize": true, # Whether to optimize images (default: true)
"model_types": ["lora", "checkpoint"], # Model types to process (default: both)
"delay": 1.0 # Delay between downloads (default: 1.0)
@@ -444,7 +456,6 @@ class DownloadManager:
# Parse the request body
data = await request.json()
model_hashes = data.get('model_hashes', [])
output_dir = data.get('output_dir')
optimize = data.get('optimize', True)
model_types = data.get('model_types', ['lora', 'checkpoint'])
delay = float(data.get('delay', 0.2)) # Default to 0.2 seconds
@@ -454,11 +465,14 @@ class DownloadManager:
'success': False,
'error': 'Missing model_hashes parameter'
}, status=400)
# Get output directory from settings
output_dir = settings.get('example_images_path')
if not output_dir:
return web.json_response({
'success': False,
'error': 'Missing output_dir parameter'
'error': 'Example images path not configured in settings'
}, status=400)
# Create the output directory

View File

@@ -53,7 +53,7 @@ class MetadataUpdater:
async def update_cache_func(old_path, new_path, metadata):
return await scanner.update_single_model_cache(old_path, new_path, metadata)
success = await ModelRouteUtils.fetch_and_update_model(
success, error = await ModelRouteUtils.fetch_and_update_model(
model_hash,
file_path,
model_data,
@@ -64,7 +64,7 @@ class MetadataUpdater:
logger.info(f"Successfully refreshed metadata for {model_name}")
return True
else:
logger.warning(f"Failed to refresh metadata for {model_name}")
logger.warning(f"Failed to refresh metadata for {model_name}: {error}")
return False
except Exception as e:

View File

@@ -24,6 +24,8 @@ class BaseModelMetadata:
civitai_deleted: bool = False # Whether deleted from Civitai
favorite: bool = False # Whether the model is a favorite
exclude: bool = False # Whether to exclude this model from the cache
db_checked: bool = False # Whether checked in archive DB
last_checked_at: float = 0 # Last checked timestamp
_unknown_fields: Dict[str, Any] = field(default_factory=dict, repr=False, compare=False) # Store unknown fields
def __post_init__(self):

View File

@@ -3,6 +3,7 @@ import json
import logging
from typing import Dict, List, Callable, Awaitable
from aiohttp import web
from datetime import datetime
from .model_utils import determine_base_model
from .constants import PREVIEW_EXTENSIONS, CARD_PREVIEW_WIDTH
@@ -82,7 +83,6 @@ class ModelRouteUtils:
# Update local metadata with merged civitai data
local_metadata['civitai'] = merged_civitai
local_metadata['from_civitai'] = True
# Update model-related metadata from civitai_metadata.model
if 'model' in civitai_metadata and civitai_metadata['model']:
@@ -184,7 +184,7 @@ class ModelRouteUtils:
file_path: str,
model_data: dict,
update_cache_func: Callable[[str, str, Dict], Awaitable[bool]]
) -> bool:
) -> tuple[bool, str]:
"""Fetch and update metadata for a single model
Args:
@@ -194,34 +194,52 @@ class ModelRouteUtils:
update_cache_func: Function to update the cache with new metadata
Returns:
bool: True if successful, False otherwise
tuple[bool, str]: (success, error_message). When success is True, error_message is None.
"""
try:
# Validate input parameters
if not isinstance(model_data, dict):
logger.error(f"Invalid model_data type: {type(model_data)}")
return False
error_msg = f"Invalid model_data type: {type(model_data)}"
logger.error(error_msg)
return False, error_msg
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
# Check if model metadata exists
local_metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
enable_metadata_archive_db = settings.get('enable_metadata_archive_db', False)
if model_data.get('from_civitai') is False:
if not settings.get('enable_metadata_archive_db', False):
return False
if model_data.get('civitai_deleted') is True:
# If CivitAI deleted flag is set, skip CivitAI API provider
if not enable_metadata_archive_db or model_data.get('db_checked') is True:
return False, "CivitAI model is deleted and the metadata archive DB is disabled or has already been checked"
# Likely deleted from CivitAI, use archive_db if available
metadata_provider = await get_metadata_provider('sqlite')
else:
metadata_provider = await get_default_metadata_provider()
civitai_metadata = await metadata_provider.get_model_by_hash(sha256)
civitai_metadata, error = await metadata_provider.get_model_by_hash(sha256)
if not civitai_metadata:
# Mark as not from CivitAI if not found
local_metadata['from_civitai'] = False
model_data['from_civitai'] = False
await MetadataManager.save_metadata(file_path, local_metadata)
return False
if error == "Model not found":
model_data['from_civitai'] = False
model_data['civitai_deleted'] = True
model_data['db_checked'] = enable_metadata_archive_db
model_data['last_checked_at'] = datetime.now().timestamp()
# Remove 'folder' key from model_data if present before saving
data_to_save = model_data.copy()
data_to_save.pop('folder', None)
await MetadataManager.save_metadata(file_path, data_to_save)
# Log the failure (including the not-found case) and return it to the caller
error_msg = f"Error fetching metadata: {error} (model_name={model_data.get('model_name', '')})"
logger.error(error_msg)
return False, error_msg
model_data['from_civitai'] = True
model_data['civitai_deleted'] = civitai_metadata.get('source') == 'archive_db'
model_data['db_checked'] = enable_metadata_archive_db
model_data['last_checked_at'] = datetime.now().timestamp()
local_metadata = model_data.copy()
local_metadata.pop('folder', None) # Remove 'folder' key if present
# Update metadata
await ModelRouteUtils.update_model_metadata(
@@ -235,22 +253,23 @@ class ModelRouteUtils:
update_dict = {
'model_name': local_metadata.get('model_name'),
'preview_url': local_metadata.get('preview_url'),
'from_civitai': True,
'civitai': civitai_metadata
'civitai': local_metadata.get('civitai'),
}
model_data.update(update_dict)
# Update cache using the provided function
await update_cache_func(file_path, file_path, local_metadata)
return True
return True, None
except KeyError as e:
logger.error(f"Error fetching CivitAI data - Missing key: {e} in model_data={model_data}")
return False
error_msg = f"Error fetching metadata - Missing key: {e} in model_data={model_data}"
logger.error(error_msg)
return False, error_msg
except Exception as e:
logger.error(f"Error fetching CivitAI data: {str(e)}", exc_info=True) # Include stack trace
return False
error_msg = f"Error fetching metadata: {str(e)}"
logger.error(error_msg, exc_info=True) # Include stack trace
return False, error_msg
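The provider-selection rule introduced above is compact but easy to misread in diff form. A distilled decision table (hypothetical helper; the handler inlines this logic):

```python
def choose_provider(model_data, enable_archive_db):
    """Decision rule from fetch_and_update_model: a model flagged as
    deleted on CivitAI is retried against the SQLite archive DB, unless
    that DB is disabled or the model was already checked there, in which
    case there is nothing left to try."""
    if model_data.get('civitai_deleted') is True:
        if not enable_archive_db or model_data.get('db_checked') is True:
            return None  # skip: no provider can help
        return 'sqlite'
    return 'default'
```

Together with the new `db_checked` and `last_checked_at` fields, this keeps bulk refreshes from hammering the CivitAI API (or the archive DB) for models already known to be gone.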
@staticmethod
def filter_civitai_data(data: Dict, minimal: bool = False) -> Dict:
@@ -387,10 +406,10 @@ class ModelRouteUtils:
metadata_provider = await get_default_metadata_provider()
# Fetch and update metadata
civitai_metadata = await metadata_provider.get_model_by_hash(local_metadata["sha256"])
civitai_metadata, error = await metadata_provider.get_model_by_hash(local_metadata["sha256"])
if not civitai_metadata:
await ModelRouteUtils.handle_not_found_on_civitai(metadata_path, local_metadata)
return web.json_response({"success": False, "error": "Not found on CivitAI"}, status=404)
return web.json_response({"success": False, "error": error}, status=404)
await ModelRouteUtils.update_model_metadata(metadata_path, local_metadata, civitai_metadata, metadata_provider)

View File

@@ -1,7 +1,7 @@
[project]
name = "comfyui-lora-manager"
description = "Revolutionize your workflow with the ultimate LoRA companion for ComfyUI!"
version = "0.9.3"
version = "0.9.4"
license = {file = "LICENSE"}
dependencies = [
"aiohttp",

View File

@@ -55,48 +55,48 @@ export function getApiEndpoints(modelType) {
return {
// Base CRUD operations
list: `/api/${modelType}/list`,
delete: `/api/${modelType}/delete`,
exclude: `/api/${modelType}/exclude`,
rename: `/api/${modelType}/rename`,
save: `/api/${modelType}/save-metadata`,
list: `/api/lm/${modelType}/list`,
delete: `/api/lm/${modelType}/delete`,
exclude: `/api/lm/${modelType}/exclude`,
rename: `/api/lm/${modelType}/rename`,
save: `/api/lm/${modelType}/save-metadata`,
// Bulk operations
bulkDelete: `/api/${modelType}/bulk-delete`,
bulkDelete: `/api/lm/${modelType}/bulk-delete`,
// Tag operations
addTags: `/api/${modelType}/add-tags`,
addTags: `/api/lm/${modelType}/add-tags`,
// Move operations (now common for all model types that support move)
moveModel: `/api/${modelType}/move_model`,
moveBulk: `/api/${modelType}/move_models_bulk`,
moveModel: `/api/lm/${modelType}/move_model`,
moveBulk: `/api/lm/${modelType}/move_models_bulk`,
// CivitAI integration
fetchCivitai: `/api/${modelType}/fetch-civitai`,
fetchAllCivitai: `/api/${modelType}/fetch-all-civitai`,
relinkCivitai: `/api/${modelType}/relink-civitai`,
civitaiVersions: `/api/${modelType}/civitai/versions`,
fetchCivitai: `/api/lm/${modelType}/fetch-civitai`,
fetchAllCivitai: `/api/lm/${modelType}/fetch-all-civitai`,
relinkCivitai: `/api/lm/${modelType}/relink-civitai`,
civitaiVersions: `/api/lm/${modelType}/civitai/versions`,
// Preview management
replacePreview: `/api/${modelType}/replace-preview`,
replacePreview: `/api/lm/${modelType}/replace-preview`,
// Query operations
scan: `/api/${modelType}/scan`,
topTags: `/api/${modelType}/top-tags`,
baseModels: `/api/${modelType}/base-models`,
roots: `/api/${modelType}/roots`,
folders: `/api/${modelType}/folders`,
folderTree: `/api/${modelType}/folder-tree`,
unifiedFolderTree: `/api/${modelType}/unified-folder-tree`,
duplicates: `/api/${modelType}/find-duplicates`,
conflicts: `/api/${modelType}/find-filename-conflicts`,
verify: `/api/${modelType}/verify-duplicates`,
metadata: `/api/${modelType}/metadata`,
modelDescription: `/api/${modelType}/model-description`,
scan: `/api/lm/${modelType}/scan`,
topTags: `/api/lm/${modelType}/top-tags`,
baseModels: `/api/lm/${modelType}/base-models`,
roots: `/api/lm/${modelType}/roots`,
folders: `/api/lm/${modelType}/folders`,
folderTree: `/api/lm/${modelType}/folder-tree`,
unifiedFolderTree: `/api/lm/${modelType}/unified-folder-tree`,
duplicates: `/api/lm/${modelType}/find-duplicates`,
conflicts: `/api/lm/${modelType}/find-filename-conflicts`,
verify: `/api/lm/${modelType}/verify-duplicates`,
metadata: `/api/lm/${modelType}/metadata`,
modelDescription: `/api/lm/${modelType}/model-description`,
// Auto-organize operations
autoOrganize: `/api/${modelType}/auto-organize`,
autoOrganizeProgress: `/api/${modelType}/auto-organize-progress`,
autoOrganize: `/api/lm/${modelType}/auto-organize`,
autoOrganizeProgress: `/api/lm/${modelType}/auto-organize-progress`,
// Model-specific endpoints (will be merged with specific configs)
specific: {}
@@ -108,24 +108,24 @@ export function getApiEndpoints(modelType) {
*/
export const MODEL_SPECIFIC_ENDPOINTS = {
[MODEL_TYPES.LORA]: {
letterCounts: `/api/${MODEL_TYPES.LORA}/letter-counts`,
notes: `/api/${MODEL_TYPES.LORA}/get-notes`,
triggerWords: `/api/${MODEL_TYPES.LORA}/get-trigger-words`,
previewUrl: `/api/${MODEL_TYPES.LORA}/preview-url`,
civitaiUrl: `/api/${MODEL_TYPES.LORA}/civitai-url`,
metadata: `/api/${MODEL_TYPES.LORA}/metadata`,
getTriggerWordsPost: `/api/${MODEL_TYPES.LORA}/get_trigger_words`,
civitaiModelByVersion: `/api/${MODEL_TYPES.LORA}/civitai/model/version`,
civitaiModelByHash: `/api/${MODEL_TYPES.LORA}/civitai/model/hash`,
letterCounts: `/api/lm/${MODEL_TYPES.LORA}/letter-counts`,
notes: `/api/lm/${MODEL_TYPES.LORA}/get-notes`,
triggerWords: `/api/lm/${MODEL_TYPES.LORA}/get-trigger-words`,
previewUrl: `/api/lm/${MODEL_TYPES.LORA}/preview-url`,
civitaiUrl: `/api/lm/${MODEL_TYPES.LORA}/civitai-url`,
metadata: `/api/lm/${MODEL_TYPES.LORA}/metadata`,
getTriggerWordsPost: `/api/lm/${MODEL_TYPES.LORA}/get_trigger_words`,
civitaiModelByVersion: `/api/lm/${MODEL_TYPES.LORA}/civitai/model/version`,
civitaiModelByHash: `/api/lm/${MODEL_TYPES.LORA}/civitai/model/hash`,
},
[MODEL_TYPES.CHECKPOINT]: {
info: `/api/${MODEL_TYPES.CHECKPOINT}/info`,
checkpoints_roots: `/api/${MODEL_TYPES.CHECKPOINT}/checkpoints_roots`,
unet_roots: `/api/${MODEL_TYPES.CHECKPOINT}/unet_roots`,
metadata: `/api/${MODEL_TYPES.CHECKPOINT}/metadata`,
info: `/api/lm/${MODEL_TYPES.CHECKPOINT}/info`,
checkpoints_roots: `/api/lm/${MODEL_TYPES.CHECKPOINT}/checkpoints_roots`,
unet_roots: `/api/lm/${MODEL_TYPES.CHECKPOINT}/unet_roots`,
metadata: `/api/lm/${MODEL_TYPES.CHECKPOINT}/metadata`,
},
[MODEL_TYPES.EMBEDDING]: {
metadata: `/api/${MODEL_TYPES.EMBEDDING}/metadata`,
metadata: `/api/lm/${MODEL_TYPES.EMBEDDING}/metadata`,
}
};
@@ -173,11 +173,11 @@ export function getCurrentModelType(explicitType = null) {
// Download API endpoints (shared across all model types)
export const DOWNLOAD_ENDPOINTS = {
download: '/api/download-model',
downloadGet: '/api/download-model-get',
cancelGet: '/api/cancel-download-get',
progress: '/api/download-progress',
exampleImages: '/api/force-download-example-images' // New endpoint for downloading example images
download: '/api/lm/download-model',
downloadGet: '/api/lm/download-model-get',
cancelGet: '/api/lm/cancel-download-get',
progress: '/api/lm/download-progress',
exampleImages: '/api/lm/force-download-example-images' // New endpoint for downloading example images
};
// WebSocket endpoints

View File
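Every route in the endpoint tables above gains the same `/api/lm/` prefix, one literal at a time. Centralizing the prefix in a single constant would make a future rename a one-line change; a hedged sketch of that idea (helper names are assumptions, shown in Python for brevity rather than the project's JavaScript):

```python
API_PREFIX = "/api/lm"

def api_endpoint(*parts: str) -> str:
    """Join path segments under the shared /api/lm prefix."""
    return "/".join([API_PREFIX, *parts])

def get_api_endpoints(model_type: str) -> dict:
    """Abbreviated endpoint table in the spirit of getApiEndpoints()."""
    return {
        "list": api_endpoint(model_type, "list"),
        "delete": api_endpoint(model_type, "delete"),
        "fetchCivitai": api_endpoint(model_type, "fetch-civitai"),
    }
```

With this shape, `api_endpoint("loras", "list")` yields `/api/lm/loras/list`, and every table entry stays in sync with the prefix automatically.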

@@ -538,13 +538,13 @@ export class BaseModelApiClient {
completionMessage = translate('toast.api.bulkMetadataCompletePartial', { success: successCount, total: totalItems, type: this.apiConfig.config.displayName }, `Refreshed ${successCount} of ${totalItems} ${this.apiConfig.config.displayName}s`);
showToast('toast.api.bulkMetadataCompletePartial', { success: successCount, total: totalItems, type: this.apiConfig.config.displayName }, 'warning');
if (failedItems.length > 0) {
const failureMessage = failedItems.length <= 3
? failedItems.map(item => `${item.fileName}: ${item.error}`).join('\n')
: failedItems.slice(0, 3).map(item => `${item.fileName}: ${item.error}`).join('\n') +
`\n(and ${failedItems.length - 3} more)`;
showToast('toast.api.bulkMetadataFailureDetails', { failures: failureMessage }, 'warning', 6000);
}
// if (failedItems.length > 0) {
// const failureMessage = failedItems.length <= 3
// ? failedItems.map(item => `${item.fileName}: ${item.error}`).join('\n')
// : failedItems.slice(0, 3).map(item => `${item.fileName}: ${item.error}`).join('\n') +
// `\n(and ${failedItems.length - 3} more)`;
// showToast('toast.api.bulkMetadataFailureDetails', { failures: failureMessage }, 'warning', 6000);
// }
} else {
completionMessage = translate('toast.api.bulkMetadataCompleteNone', { type: this.apiConfig.config.displayName }, `Failed to refresh metadata for any ${this.apiConfig.config.displayName}s`);
showToast('toast.api.bulkMetadataCompleteNone', { type: this.apiConfig.config.displayName }, 'error');

View File

@@ -21,7 +21,7 @@ export async function fetchRecipesPage(page = 1, pageSize = 100) {
// If we have a specific recipe ID to load
if (pageState.customFilter?.active && pageState.customFilter?.recipeId) {
// Special case: load specific recipe
const response = await fetch(`/api/recipe/${pageState.customFilter.recipeId}`);
const response = await fetch(`/api/lm/recipe/${pageState.customFilter.recipeId}`);
if (!response.ok) {
throw new Error(`Failed to load recipe: ${response.statusText}`);
@@ -72,7 +72,7 @@ export async function fetchRecipesPage(page = 1, pageSize = 100) {
}
// Fetch recipes
const response = await fetch(`/api/recipes?${params.toString()}`);
const response = await fetch(`/api/lm/recipes?${params.toString()}`);
if (!response.ok) {
throw new Error(`Failed to load recipes: ${response.statusText}`);
@@ -207,7 +207,7 @@ export async function refreshRecipes() {
state.loadingManager.showSimpleLoading('Refreshing recipes...');
// Call the API endpoint to rebuild the recipe cache
const response = await fetch('/api/recipes/scan');
const response = await fetch('/api/lm/recipes/scan');
if (!response.ok) {
const data = await response.json();
@@ -274,7 +274,7 @@ export async function updateRecipeMetadata(filePath, updates) {
const basename = filePath.split('/').pop().split('\\').pop();
const recipeId = basename.substring(0, basename.lastIndexOf('.'));
const response = await fetch(`/api/recipe/${recipeId}/update`, {
const response = await fetch(`/api/lm/recipe/${recipeId}/update`, {
method: 'PUT',
headers: {
'Content-Type': 'application/json',

View File

@@ -125,8 +125,8 @@ export const ModelContextMenuMixin = {
state.loadingManager.showSimpleLoading('Re-linking to Civitai...');
const endpoint = this.modelType === 'checkpoint' ?
'/api/checkpoints/relink-civitai' :
'/api/loras/relink-civitai';
'/api/lm/checkpoints/relink-civitai' :
'/api/lm/loras/relink-civitai';
const response = await fetch(endpoint, {
method: 'POST',

View File

@@ -103,7 +103,7 @@ export class RecipeContextMenu extends BaseContextMenu {
return;
}
fetch(`/api/recipe/${recipeId}/syntax`)
fetch(`/api/lm/recipe/${recipeId}/syntax`)
.then(response => response.json())
.then(data => {
if (data.success && data.syntax) {
@@ -126,7 +126,7 @@ export class RecipeContextMenu extends BaseContextMenu {
return;
}
fetch(`/api/recipe/${recipeId}/syntax`)
fetch(`/api/lm/recipe/${recipeId}/syntax`)
.then(response => response.json())
.then(data => {
if (data.success && data.syntax) {
@@ -149,7 +149,7 @@ export class RecipeContextMenu extends BaseContextMenu {
}
// First get the recipe details to access its LoRAs
fetch(`/api/recipe/${recipeId}`)
fetch(`/api/lm/recipe/${recipeId}`)
.then(response => response.json())
.then(recipe => {
// Clear any previous filters first
@@ -189,7 +189,7 @@ export class RecipeContextMenu extends BaseContextMenu {
try {
// First get the recipe details
const response = await fetch(`/api/recipe/${recipeId}`);
const response = await fetch(`/api/lm/recipe/${recipeId}`);
const recipe = await response.json();
// Get missing LoRAs
@@ -209,9 +209,9 @@ export class RecipeContextMenu extends BaseContextMenu {
// Determine which endpoint to use based on available data
if (lora.modelVersionId) {
endpoint = `/api/loras/civitai/model/version/${lora.modelVersionId}`;
endpoint = `/api/lm/loras/civitai/model/version/${lora.modelVersionId}`;
} else if (lora.hash) {
endpoint = `/api/loras/civitai/model/hash/${lora.hash}`;
endpoint = `/api/lm/loras/civitai/model/hash/${lora.hash}`;
} else {
console.error("Missing both hash and modelVersionId for lora:", lora);
return null;

View File
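The RecipeContextMenu hunk above picks a CivitAI lookup endpoint per missing LoRA: the model-version id when present, otherwise the file hash, otherwise it logs and skips. A small sketch of that fallback (function name assumed; Python rather than the file's JavaScript):

```python
from typing import Optional

def civitai_endpoint(lora: dict) -> Optional[str]:
    """Prefer the model-version id; fall back to the file hash."""
    if lora.get("modelVersionId"):
        return f"/api/lm/loras/civitai/model/version/{lora['modelVersionId']}"
    if lora.get("hash"):
        return f"/api/lm/loras/civitai/model/hash/{lora['hash']}"
    return None  # neither identifier available; caller logs and skips
```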

@@ -13,7 +13,7 @@ export class DuplicatesManager {
async findDuplicates() {
try {
const response = await fetch('/api/recipes/find-duplicates');
const response = await fetch('/api/lm/recipes/find-duplicates');
if (!response.ok) {
throw new Error('Failed to find duplicates');
}
@@ -354,7 +354,7 @@ export class DuplicatesManager {
const recipeIds = Array.from(this.selectedForDeletion);
// Call API to bulk delete
const response = await fetch('/api/recipes/bulk-delete', {
const response = await fetch('/api/lm/recipes/bulk-delete', {
method: 'POST',
headers: {
'Content-Type': 'application/json'

View File

@@ -48,7 +48,7 @@ export class ModelDuplicatesManager {
// Method to check for duplicates count using existing endpoint
async checkDuplicatesCount() {
try {
const endpoint = `/api/${this.modelType}/find-duplicates`;
const endpoint = `/api/lm/${this.modelType}/find-duplicates`;
const response = await fetch(endpoint);
if (!response.ok) {
@@ -104,7 +104,7 @@ export class ModelDuplicatesManager {
async findDuplicates() {
try {
// Determine API endpoint based on model type
const endpoint = `/api/${this.modelType}/find-duplicates`;
const endpoint = `/api/lm/${this.modelType}/find-duplicates`;
const response = await fetch(endpoint);
if (!response.ok) {
@@ -623,7 +623,7 @@ export class ModelDuplicatesManager {
const filePaths = Array.from(this.selectedForDeletion);
// Call API to bulk delete
const response = await fetch(`/api/${this.modelType}/bulk-delete`, {
const response = await fetch(`/api/lm/${this.modelType}/bulk-delete`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
@@ -648,7 +648,7 @@ export class ModelDuplicatesManager {
// Check if there are still duplicates
try {
const endpoint = `/api/${this.modelType}/find-duplicates`;
const endpoint = `/api/lm/${this.modelType}/find-duplicates`;
const dupResponse = await fetch(endpoint);
if (!dupResponse.ok) {
@@ -756,7 +756,7 @@ export class ModelDuplicatesManager {
const filePaths = group.models.map(model => model.file_path);
// Make API request to verify hashes
const response = await fetch(`/api/${this.modelType}/verify-duplicates`, {
const response = await fetch(`/api/lm/${this.modelType}/verify-duplicates`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'

View File

@@ -203,7 +203,7 @@ class RecipeCard {
return;
}
fetch(`/api/recipe/${recipeId}/syntax`)
fetch(`/api/lm/recipe/${recipeId}/syntax`)
.then(response => response.json())
.then(data => {
if (data.success && data.syntax) {
@@ -299,7 +299,7 @@ class RecipeCard {
deleteBtn.disabled = true;
// Call API to delete the recipe
fetch(`/api/recipe/${recipeId}`, {
fetch(`/api/lm/recipe/${recipeId}`, {
method: 'DELETE',
headers: {
'Content-Type': 'application/json'
@@ -341,7 +341,7 @@ class RecipeCard {
showToast('toast.recipes.preparingForSharing', {}, 'info');
// Call the API to process the image with metadata
fetch(`/api/recipe/${recipeId}/share`)
fetch(`/api/lm/recipe/${recipeId}/share`)
.then(response => {
if (!response.ok) {
throw new Error('Failed to prepare recipe for sharing');

View File

@@ -784,7 +784,7 @@ class RecipeModal {
try {
// Fetch recipe syntax from backend
const response = await fetch(`/api/recipe/${this.recipeId}/syntax`);
const response = await fetch(`/api/lm/recipe/${this.recipeId}/syntax`);
if (!response.ok) {
throw new Error(`Failed to get recipe syntax: ${response.statusText}`);
@@ -830,9 +830,9 @@ class RecipeModal {
// Determine which endpoint to use based on available data
if (lora.modelVersionId) {
endpoint = `/api/loras/civitai/model/version/${lora.modelVersionId}`;
endpoint = `/api/lm/loras/civitai/model/version/${lora.modelVersionId}`;
} else if (lora.hash) {
endpoint = `/api/loras/civitai/model/hash/${lora.hash}`;
endpoint = `/api/lm/loras/civitai/model/hash/${lora.hash}`;
} else {
console.error("Missing both hash and modelVersionId for lora:", lora);
return null;
@@ -1003,7 +1003,7 @@ class RecipeModal {
state.loadingManager.showSimpleLoading('Reconnecting LoRA...');
// Call API to reconnect the LoRA
const response = await fetch('/api/recipe/lora/reconnect', {
const response = await fetch('/api/lm/recipe/lora/reconnect', {
method: 'POST',
headers: {
'Content-Type': 'application/json',

View File

@@ -46,7 +46,7 @@ export class AlphabetBar {
*/
async fetchLetterCounts() {
try {
const response = await fetch('/api/loras/letter-counts');
const response = await fetch('/api/lm/loras/letter-counts');
if (!response.ok) {
throw new Error(`Failed to fetch letter counts: ${response.statusText}`);

View File

@@ -169,7 +169,7 @@ class InitializationManager {
*/
pollProgress() {
const checkProgress = () => {
fetch('/api/init-status')
fetch('/api/lm/init-status')
.then(response => response.json())
.then(data => {
this.handleProgressUpdate(data);

View File

@@ -186,7 +186,7 @@ async function handleExampleImagesAccess(card, modelType) {
const modelHash = card.dataset.sha256;
try {
const response = await fetch(`/api/has-example-images?model_hash=${modelHash}`);
const response = await fetch(`/api/lm/has-example-images?model_hash=${modelHash}`);
const data = await response.json();
if (data.has_images) {

View File

@@ -460,7 +460,7 @@ async function saveNotes(filePath) {
*/
async function openFileLocation(filePath) {
try {
const resp = await fetch('/api/open-file-location', {
const resp = await fetch('/api/lm/open-file-location', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ 'file_path': filePath })

View File

@@ -22,7 +22,7 @@ export function loadRecipesForLora(loraName, sha256) {
`;
// Fetch recipes that use this Lora by hash
fetch(`/api/recipes/for-lora?hash=${encodeURIComponent(sha256.toLowerCase())}`)
fetch(`/api/lm/recipes/for-lora?hash=${encodeURIComponent(sha256.toLowerCase())}`)
.then(response => response.json())
.then(data => {
if (!data.success) {
@@ -166,7 +166,7 @@ function copyRecipeSyntax(recipeId) {
return;
}
fetch(`/api/recipe/${recipeId}/syntax`)
fetch(`/api/lm/recipe/${recipeId}/syntax`)
.then(response => response.json())
.then(data => {
if (data.success && data.syntax) {

View File

@@ -14,7 +14,7 @@ import { getModelApiClient } from '../../api/modelApiFactory.js';
*/
async function fetchTrainedWords(filePath) {
try {
const response = await fetch(`/api/trained-words?file_path=${encodeURIComponent(filePath)}`);
const response = await fetch(`/api/lm/trained-words?file_path=${encodeURIComponent(filePath)}`);
const data = await response.json();
if (data.success) {

View File

@@ -408,7 +408,7 @@ export function initMediaControlHandlers(container) {
try {
// Call the API to delete the custom example
const response = await fetch('/api/delete-example-image', {
const response = await fetch('/api/lm/delete-example-image', {
method: 'POST',
headers: {
'Content-Type': 'application/json'

View File

@@ -29,7 +29,7 @@ export async function loadExampleImages(images, modelHash) {
let localFiles = [];
try {
const endpoint = '/api/example-image-files';
const endpoint = '/api/lm/example-image-files';
const params = `model_hash=${modelHash}`;
const response = await fetch(`${endpoint}?${params}`);
@@ -374,7 +374,7 @@ async function handleImportFiles(files, modelHash, importContainer) {
});
// Call API to import files
const response = await fetch('/api/import-example-images', {
const response = await fetch('/api/lm/import-example-images', {
method: 'POST',
body: formData
});
@@ -386,7 +386,7 @@ async function handleImportFiles(files, modelHash, importContainer) {
}
// Get updated local files
const updatedFilesResponse = await fetch(`/api/example-image-files?model_hash=${modelHash}`);
const updatedFilesResponse = await fetch(`/api/lm/example-image-files?model_hash=${modelHash}`);
const updatedFilesResult = await updatedFilesResponse.json();
if (!updatedFilesResult.success) {

View File

@@ -308,7 +308,7 @@ export class DownloadManager {
// Set default root if available
const singularType = this.apiClient.modelType.replace(/s$/, '');
const defaultRootKey = `default_${singularType}_root`;
const defaultRoot = getStorageItem('settings', {})[defaultRootKey];
const defaultRoot = state.global.settings[defaultRootKey];
console.log(`Default root for ${this.apiClient.modelType}:`, defaultRoot);
console.log('Available roots:', rootsData.roots);
if (defaultRoot && rootsData.roots.includes(defaultRoot)) {

View File

@@ -100,9 +100,7 @@ export class ExampleImagesManager {
this.updateDownloadButtonState(hasPath);
try {
await settingsManager.saveSetting('example_images_path', pathInput.value);
if (hasPath) {
showToast('toast.exampleImages.pathUpdated', {}, 'success');
}
} catch (error) {
console.error('Failed to update example images path:', error);
showToast('toast.exampleImages.pathUpdateFailed', { message: error.message }, 'error');
@@ -172,7 +170,7 @@ export class ExampleImagesManager {
async checkDownloadStatus() {
try {
const response = await fetch('/api/example-images-status');
const response = await fetch('/api/lm/example-images-status');
const data = await response.json();
if (data.success) {
@@ -227,22 +225,14 @@ export class ExampleImagesManager {
}
try {
const outputDir = document.getElementById('exampleImagesPath').value || '';
if (!outputDir) {
showToast('toast.exampleImages.enterLocationFirst', {}, 'warning');
return;
}
const optimize = state.global.settings.optimizeExampleImages;
const response = await fetch('/api/download-example-images', {
const response = await fetch('/api/lm/download-example-images', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
output_dir: outputDir,
optimize: optimize,
model_types: ['lora', 'checkpoint', 'embedding'] // Example types, adjust as needed
})
@@ -278,7 +268,7 @@ export class ExampleImagesManager {
}
try {
const response = await fetch('/api/pause-example-images', {
const response = await fetch('/api/lm/pause-example-images', {
method: 'POST'
});
@@ -314,7 +304,7 @@ export class ExampleImagesManager {
}
try {
const response = await fetch('/api/resume-example-images', {
const response = await fetch('/api/lm/resume-example-images', {
method: 'POST'
});
@@ -358,7 +348,7 @@ export class ExampleImagesManager {
async updateProgress() {
try {
const response = await fetch('/api/example-images-status');
const response = await fetch('/api/lm/example-images-status');
const data = await response.json();
if (data.success) {
@@ -691,9 +681,8 @@ export class ExampleImagesManager {
return false;
}
// Check if download path is set
const pathInput = document.getElementById('exampleImagesPath');
if (!pathInput || !pathInput.value.trim()) {
// Check if download path is set in settings
if (!state.global.settings.example_images_path) {
return false;
}
@@ -724,16 +713,14 @@ export class ExampleImagesManager {
try {
console.log('Performing auto download check...');
const outputDir = document.getElementById('exampleImagesPath').value;
const optimize = state.global.settings.optimizeExampleImages;
const response = await fetch('/api/download-example-images', {
const response = await fetch('/api/lm/download-example-images', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({
output_dir: outputDir,
optimize: optimize,
model_types: ['lora', 'checkpoint', 'embedding'],
auto_mode: true // Flag to indicate this is an automatic download

View File

@@ -66,7 +66,7 @@ export class FilterManager {
tagsContainer.innerHTML = '<div class="tags-loading">Loading tags...</div>';
// Determine the API endpoint based on the page type
const tagsEndpoint = `/api/${this.currentPage}/top-tags?limit=20`;
const tagsEndpoint = `/api/lm/${this.currentPage}/top-tags?limit=20`;
const response = await fetch(tagsEndpoint);
if (!response.ok) throw new Error('Failed to fetch tags');
@@ -134,7 +134,7 @@ export class FilterManager {
if (!baseModelTagsContainer) return;
// Set the API endpoint based on current page
const apiEndpoint = `/api/${this.currentPage}/base-models`;
const apiEndpoint = `/api/lm/${this.currentPage}/base-models`;
// Fetch base models
fetch(apiEndpoint)

View File

@@ -228,7 +228,7 @@ export class ImportManager {
// Set default root if available
const defaultRootKey = 'default_lora_root';
const defaultRoot = getStorageItem('settings', {})[defaultRootKey];
const defaultRoot = state.global.settings[defaultRootKey];
if (defaultRoot && rootsData.roots.includes(defaultRoot)) {
loraRoot.value = defaultRoot;
}

View File

@@ -2,7 +2,6 @@ import { showToast } from '../utils/uiHelpers.js';
import { state, getCurrentPageState } from '../state/index.js';
import { modalManager } from './ModalManager.js';
import { bulkManager } from './BulkManager.js';
import { getStorageItem } from '../utils/storageHelpers.js';
import { getModelApiClient } from '../api/modelApiFactory.js';
import { FolderTreeManager } from '../components/FolderTreeManager.js';
import { sidebarManager } from '../components/SidebarManager.js';
@@ -87,7 +86,7 @@ class MoveManager {
// Set default root if available
const settingsKey = `default_${currentPageType.slice(0, -1)}_root`;
const defaultRoot = getStorageItem('settings', {})[settingsKey];
const defaultRoot = state.global.settings[settingsKey];
if (defaultRoot && rootsData.roots.includes(defaultRoot)) {
modelRootSelect.value = defaultRoot;
}

View File

@@ -43,7 +43,6 @@ export class SettingsManager {
// Frontend-only settings that should be stored in localStorage
const frontendOnlyKeys = [
'blurMatureContent',
'show_only_sfw',
'autoplayOnHover',
'displayDensity',
'cardInfoDisplay',
@@ -132,6 +131,7 @@ export class SettingsManager {
download_path_templates: { ...DEFAULT_PATH_TEMPLATES },
enable_metadata_archive_db: false,
language: 'en',
show_only_sfw: false,
proxy_enabled: false,
proxy_type: 'http',
proxy_host: '',
@@ -140,7 +140,7 @@ export class SettingsManager {
proxy_password: '',
example_images_path: '',
optimizeExampleImages: true,
autoDownloadExampleImages: true
autoDownloadExampleImages: false
};
Object.keys(backendDefaults).forEach(key => {
@@ -161,7 +161,6 @@ export class SettingsManager {
// Save only frontend-specific settings to localStorage
const frontendOnlyKeys = [
'blurMatureContent',
'show_only_sfw',
'autoplayOnHover',
'displayDensity',
'cardInfoDisplay',
@@ -189,6 +188,7 @@ export class SettingsManager {
'download_path_templates',
'enable_metadata_archive_db',
'language',
'show_only_sfw',
'proxy_enabled',
'proxy_type',
'proxy_host',
@@ -429,7 +429,7 @@ export class SettingsManager {
if (!defaultLoraRootSelect) return;
// Fetch lora roots
const response = await fetch('/api/loras/roots');
const response = await fetch('/api/lm/loras/roots');
if (!response.ok) {
throw new Error('Failed to fetch LoRA roots');
}
@@ -468,7 +468,7 @@ export class SettingsManager {
if (!defaultCheckpointRootSelect) return;
// Fetch checkpoint roots
const response = await fetch('/api/checkpoints/roots');
const response = await fetch('/api/lm/checkpoints/roots');
if (!response.ok) {
throw new Error('Failed to fetch checkpoint roots');
}
@@ -507,7 +507,7 @@ export class SettingsManager {
if (!defaultEmbeddingRootSelect) return;
// Fetch embedding roots
const response = await fetch('/api/embeddings/roots');
const response = await fetch('/api/lm/embeddings/roots');
if (!response.ok) {
throw new Error('Failed to fetch embedding roots');
}
@@ -972,10 +972,6 @@ export class SettingsManager {
await this.saveSetting('default_embedding_root', value);
} else if (settingKey === 'display_density') {
await this.saveSetting('displayDensity', value);
// Also update compactMode for backwards compatibility
state.global.settings.compactMode = (value !== 'default');
this.saveFrontendSettingsToStorage();
} else if (settingKey === 'card_info_display') {
await this.saveSetting('cardInfoDisplay', value);
} else if (settingKey === 'proxy_type') {
@@ -1023,7 +1019,7 @@ export class SettingsManager {
async updateMetadataArchiveStatus() {
try {
const response = await fetch('/api/metadata-archive-status');
const response = await fetch('/api/lm/metadata-archive-status');
const data = await response.json();
const statusContainer = document.getElementById('metadataArchiveStatus');
@@ -1152,7 +1148,7 @@ export class SettingsManager {
// Wait for WebSocket to be ready
await wsReady;
const response = await fetch(`/api/download-metadata-archive?download_id=${encodeURIComponent(actualDownloadId)}`, {
const response = await fetch(`/api/lm/download-metadata-archive?download_id=${encodeURIComponent(actualDownloadId)}`, {
method: 'POST',
headers: {
'Content-Type': 'application/json'
@@ -1215,7 +1211,7 @@ export class SettingsManager {
removeBtn.textContent = translate('settings.metadataArchive.removingButton');
}
const response = await fetch('/api/remove-metadata-archive', {
const response = await fetch('/api/lm/remove-metadata-archive', {
method: 'POST',
headers: {
'Content-Type': 'application/json'

View File
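The SettingsManager hunks above move `show_only_sfw` out of the frontend-only (localStorage) key list and into the backend defaults and save list, so it now persists server-side. The split amounts to routing each key to one of two stores; a minimal sketch (key lists abbreviated and assumed, not the manager's full lists):

```python
# Hypothetical abbreviated key lists; the real SettingsManager keeps longer ones
FRONTEND_ONLY_KEYS = {"blurMatureContent", "autoplayOnHover", "displayDensity"}
BACKEND_KEYS = {"show_only_sfw", "language", "example_images_path"}

def storage_for(key: str) -> str:
    """Route a setting to localStorage or the backend settings store."""
    if key in FRONTEND_ONLY_KEYS:
        return "localStorage"
    if key in BACKEND_KEYS:
        return "backend"
    raise KeyError(f"unknown setting: {key}")
```

Under this split, moving a key between stores is a one-line change to the lists, which is exactly what the diff does for `show_only_sfw`.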

@@ -97,7 +97,7 @@ export class UpdateService {
try {
// Call backend API to check for updates with nightly flag
const response = await fetch(`/api/check-updates?nightly=${this.nightlyMode}`);
const response = await fetch(`/api/lm/check-updates?nightly=${this.nightlyMode}`);
const data = await response.json();
if (data.success) {
@@ -280,7 +280,7 @@ export class UpdateService {
// Update progress
this.updateProgress(10, translate('update.updateProgress.preparing'));
const response = await fetch('/api/perform-update', {
const response = await fetch('/api/lm/perform-update', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
@@ -444,7 +444,7 @@ export class UpdateService {
async checkVersionInfo() {
try {
// Call API to get current version info
const response = await fetch('/api/version-info');
const response = await fetch('/api/lm/version-info');
const data = await response.json();
if (data.success) {

View File

@@ -68,7 +68,7 @@ export class DownloadManager {
formData.append('metadata', JSON.stringify(completeMetadata));
// Send save request
const response = await fetch('/api/recipes/save', {
const response = await fetch('/api/lm/recipes/save', {
method: 'POST',
body: formData
});

View File

@@ -1,221 +0,0 @@
import { showToast } from '../../utils/uiHelpers.js';
import { translate } from '../../utils/i18nHelpers.js';
import { getStorageItem } from '../../utils/storageHelpers.js';
export class FolderBrowser {
constructor(importManager) {
this.importManager = importManager;
this.folderClickHandler = null;
this.updateTargetPath = this.updateTargetPath.bind(this);
}
async proceedToLocation() {
// Show the location step with special handling
this.importManager.stepManager.showStep('locationStep');
// Double-check after a short delay to ensure the step is visible
setTimeout(() => {
const locationStep = document.getElementById('locationStep');
if (locationStep.style.display !== 'block' ||
window.getComputedStyle(locationStep).display !== 'block') {
// Force display again
locationStep.style.display = 'block';
// If still not visible, try with injected style
if (window.getComputedStyle(locationStep).display !== 'block') {
this.importManager.stepManager.injectedStyles = document.createElement('style');
this.importManager.stepManager.injectedStyles.innerHTML = `
#locationStep {
display: block !important;
opacity: 1 !important;
visibility: visible !important;
}
`;
document.head.appendChild(this.importManager.stepManager.injectedStyles);
}
}
}, 100);
try {
// Display missing LoRAs that will be downloaded
const missingLorasList = document.getElementById('missingLorasList');
if (missingLorasList && this.importManager.downloadableLoRAs.length > 0) {
// Calculate total size
const totalSize = this.importManager.downloadableLoRAs.reduce((sum, lora) => {
return sum + (lora.size ? parseInt(lora.size) : 0);
}, 0);
// Update total size display
const totalSizeDisplay = document.getElementById('totalDownloadSize');
if (totalSizeDisplay) {
totalSizeDisplay.textContent = this.importManager.formatFileSize(totalSize);
}
// Update header to include count of missing LoRAs
const missingLorasHeader = document.querySelector('.summary-header h3');
if (missingLorasHeader) {
missingLorasHeader.innerHTML = `Missing LoRAs <span class="lora-count-badge">(${this.importManager.downloadableLoRAs.length})</span> <span id="totalDownloadSize" class="total-size-badge">${this.importManager.formatFileSize(totalSize)}</span>`;
}
// Generate missing LoRAs list
missingLorasList.innerHTML = this.importManager.downloadableLoRAs.map(lora => {
const sizeDisplay = lora.size ?
this.importManager.formatFileSize(lora.size) : 'Unknown size';
const baseModel = lora.baseModel ?
`<span class="lora-base-model">${lora.baseModel}</span>` : '';
const isEarlyAccess = lora.isEarlyAccess;
// Early access badge
let earlyAccessBadge = '';
if (isEarlyAccess) {
earlyAccessBadge = `<span class="early-access-badge">
<i class="fas fa-clock"></i> Early Access
</span>`;
}
return `
<div class="missing-lora-item ${isEarlyAccess ? 'is-early-access' : ''}">
<div class="missing-lora-info">
<div class="missing-lora-name">${lora.name}</div>
${baseModel}
${earlyAccessBadge}
</div>
<div class="missing-lora-size">${sizeDisplay}</div>
</div>
`;
}).join('');
// Set up toggle for missing LoRAs list
const toggleBtn = document.getElementById('toggleMissingLorasList');
if (toggleBtn) {
toggleBtn.addEventListener('click', () => {
missingLorasList.classList.toggle('collapsed');
const icon = toggleBtn.querySelector('i');
if (icon) {
icon.classList.toggle('fa-chevron-down');
icon.classList.toggle('fa-chevron-up');
}
});
}
}
// Fetch LoRA roots
const rootsResponse = await fetch('/api/loras/roots');
if (!rootsResponse.ok) {
throw new Error(`Failed to fetch LoRA roots: ${rootsResponse.status}`);
}
const rootsData = await rootsResponse.json();
const loraRoot = document.getElementById('importLoraRoot');
if (loraRoot) {
loraRoot.innerHTML = rootsData.roots.map(root =>
`<option value="${root}">${root}</option>`
).join('');
// Set default lora root if available
const defaultRoot = getStorageItem('settings', {}).default_lora_root;
if (defaultRoot && rootsData.roots.includes(defaultRoot)) {
loraRoot.value = defaultRoot;
}
}
// Fetch folders
const foldersResponse = await fetch('/api/loras/folders');
if (!foldersResponse.ok) {
throw new Error(`Failed to fetch folders: ${foldersResponse.status}`);
}
const foldersData = await foldersResponse.json();
const folderBrowser = document.getElementById('importFolderBrowser');
if (folderBrowser) {
folderBrowser.innerHTML = foldersData.folders.map(folder =>
folder ? `<div class="folder-item" data-folder="${folder}">${folder}</div>` : ''
).join('');
}
// Initialize folder browser after loading data
this.initializeFolderBrowser();
} catch (error) {
console.error('Error in API calls:', error);
showToast('toast.recipes.folderBrowserError', { message: error.message }, 'error');
}
}
initializeFolderBrowser() {
const folderBrowser = document.getElementById('importFolderBrowser');
if (!folderBrowser) return;
// Cleanup existing handler if any
this.cleanup();
// Create new handler
this.folderClickHandler = (event) => {
const folderItem = event.target.closest('.folder-item');
if (!folderItem) return;
if (folderItem.classList.contains('selected')) {
folderItem.classList.remove('selected');
this.importManager.selectedFolder = '';
} else {
folderBrowser.querySelectorAll('.folder-item').forEach(f =>
f.classList.remove('selected'));
folderItem.classList.add('selected');
this.importManager.selectedFolder = folderItem.dataset.folder;
}
// Update path display after folder selection
this.updateTargetPath();
};
// Add the new handler
folderBrowser.addEventListener('click', this.folderClickHandler);
// Add event listeners for path updates
const loraRoot = document.getElementById('importLoraRoot');
const newFolder = document.getElementById('importNewFolder');
if (loraRoot) loraRoot.addEventListener('change', this.updateTargetPath);
if (newFolder) newFolder.addEventListener('input', this.updateTargetPath);
// Update initial path
this.updateTargetPath();
}
cleanup() {
if (this.folderClickHandler) {
const folderBrowser = document.getElementById('importFolderBrowser');
if (folderBrowser) {
folderBrowser.removeEventListener('click', this.folderClickHandler);
this.folderClickHandler = null;
}
}
// Remove path update listeners
const loraRoot = document.getElementById('importLoraRoot');
const newFolder = document.getElementById('importNewFolder');
if (loraRoot) loraRoot.removeEventListener('change', this.updateTargetPath);
if (newFolder) newFolder.removeEventListener('input', this.updateTargetPath);
}
updateTargetPath() {
const pathDisplay = document.getElementById('importTargetPathDisplay');
if (!pathDisplay) return;
const loraRoot = document.getElementById('importLoraRoot')?.value || '';
const newFolder = document.getElementById('importNewFolder')?.value?.trim() || '';
let fullPath = loraRoot || translate('recipes.controls.import.selectLoraRoot', {}, 'Select a LoRA root directory');
if (loraRoot) {
if (this.importManager.selectedFolder) {
fullPath += '/' + this.importManager.selectedFolder;
}
if (newFolder) {
fullPath += '/' + newFolder;
}
}
pathDisplay.innerHTML = `<span class="path-text">${fullPath}</span>`;
}
}
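`updateTargetPath` above builds the display path by appending the selected folder and any trimmed new-folder input to the chosen root. The same concatenation, isolated as a pure function (an illustrative sketch; the function name and placeholder default are mine, not part of this diff):

```javascript
// Hypothetical pure helper mirroring updateTargetPath's concatenation logic.
function buildTargetPath(loraRoot, selectedFolder, newFolder, placeholder = 'Select a LoRA root directory') {
  // No root selected yet: show the placeholder prompt instead of a path
  if (!loraRoot) return placeholder;
  let fullPath = loraRoot;
  if (selectedFolder) fullPath += '/' + selectedFolder;
  if (newFolder && newFolder.trim()) fullPath += '/' + newFolder.trim();
  return fullPath;
}
```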

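The download-size summary earlier in this file formats byte counts via `this.importManager.formatFileSize`, which is defined elsewhere in the import manager. A minimal sketch of such a helper (hypothetical, assuming 1024-based units; the real implementation may differ):

```javascript
// Hypothetical sketch of a formatFileSize helper (binary units, one decimal place).
function formatFileSize(bytes) {
  if (!bytes || bytes <= 0) return '0 B';
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  const i = Math.min(Math.floor(Math.log(bytes) / Math.log(1024)), units.length - 1);
  // Whole bytes need no decimal; larger units get one decimal place
  return i === 0 ? `${bytes} B` : `${(bytes / Math.pow(1024, i)).toFixed(1)} ${units[i]}`;
}
```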
View File

@@ -62,7 +62,7 @@ export class ImageProcessor {
async analyzeImageFromUrl(url) {
try {
// Call the API with URL data
-const response = await fetch('/api/recipes/analyze-image', {
+const response = await fetch('/api/lm/recipes/analyze-image', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
@@ -110,7 +110,7 @@ export class ImageProcessor {
async analyzeImageFromLocalPath(path) {
try {
// Call the API with local path data
-const response = await fetch('/api/recipes/analyze-local-image', {
+const response = await fetch('/api/lm/recipes/analyze-local-image', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
@@ -169,7 +169,7 @@ export class ImageProcessor {
formData.append('image', this.importManager.recipeImage);
// Upload image for analysis
-const response = await fetch('/api/recipes/analyze-image', {
+const response = await fetch('/api/lm/recipes/analyze-image', {
method: 'POST',
body: formData
});

View File

@@ -65,12 +65,12 @@ class StatisticsManager {
storageAnalytics,
insights
] = await Promise.all([
-this.fetchData('/api/stats/collection-overview'),
-this.fetchData('/api/stats/usage-analytics'),
-this.fetchData('/api/stats/base-model-distribution'),
-this.fetchData('/api/stats/tag-analytics'),
-this.fetchData('/api/stats/storage-analytics'),
-this.fetchData('/api/stats/insights')
+this.fetchData('/api/lm/stats/collection-overview'),
+this.fetchData('/api/lm/stats/usage-analytics'),
+this.fetchData('/api/lm/stats/base-model-distribution'),
+this.fetchData('/api/lm/stats/tag-analytics'),
+this.fetchData('/api/lm/stats/storage-analytics'),
+this.fetchData('/api/lm/stats/insights')
]);
this.data = {

View File

@@ -370,7 +370,7 @@ export function copyLoraSyntax(card) {
export async function sendLoraToWorkflow(loraSyntax, replaceMode = false, syntaxType = 'lora') {
try {
// Get registry information from the new endpoint
-const registryResponse = await fetch('/api/get-registry');
+const registryResponse = await fetch('/api/lm/get-registry');
const registryData = await registryResponse.json();
if (!registryData.success) {
@@ -417,7 +417,7 @@ export async function sendLoraToWorkflow(loraSyntax, replaceMode = false, syntax
async function sendToSpecificNode(nodeIds, loraSyntax, replaceMode, syntaxType) {
try {
// Call the backend API to update the lora code
-const response = await fetch('/api/update-lora-code', {
+const response = await fetch('/api/lm/update-lora-code', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
@@ -676,7 +676,7 @@ initializeMouseTracking();
*/
export async function openExampleImagesFolder(modelHash) {
try {
-const response = await fetch('/api/open-example-images-folder', {
+const response = await fetch('/api/lm/open-example-images-folder', {
method: 'POST',
headers: {
'Content-Type': 'application/json'

View File

@@ -47,8 +47,6 @@ class AutoComplete {
border-radius: 8px;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);
display: none;
-font-family: Arial, sans-serif;
-font-size: 14px;
min-width: 200px;
width: auto;
backdrop-filter: blur(8px);
@@ -158,7 +156,7 @@ class AutoComplete {
async search(term = '') {
try {
this.currentSearchTerm = term;
-const response = await api.fetchApi(`/${this.modelType}/relative-paths?search=${encodeURIComponent(term)}&limit=${this.options.maxItems}`);
+const response = await api.fetchApi(`/lm/${this.modelType}/relative-paths?search=${encodeURIComponent(term)}&limit=${this.options.maxItems}`);
const data = await response.json();
if (data.success && data.relative_paths && data.relative_paths.length > 0) {
@@ -385,7 +383,7 @@ class AutoComplete {
// Get usage tips and extract strength
let strength = 1.0; // Default strength
try {
-const response = await api.fetchApi(`/loras/usage-tips-by-path?relative_path=${encodeURIComponent(relativePath)}`);
+const response = await api.fetchApi(`/lm/loras/usage-tips-by-path?relative_path=${encodeURIComponent(relativePath)}`);
if (response.ok) {
const data = await response.json();
if (data.success && data.usage_tips) {

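The hunks in this range all apply the same mechanical change: moving endpoints under the `/api/lm/` prefix. A small helper could centralize that mapping so a future prefix rename touches one place (a hypothetical sketch, not code from this diff):

```javascript
// Hypothetical helper centralizing the '/api/lm/' prefix applied across these hunks.
const API_PREFIX = '/api/lm';

function lmEndpoint(path) {
  // Normalize so callers can pass 'stats/insights' or '/stats/insights'
  return `${API_PREFIX}/${path.replace(/^\/+/, '')}`;
}
```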
View File

@@ -1,978 +0,0 @@
import { api } from "../../scripts/api.js";
import { app } from "../../scripts/app.js";
export function addLorasWidget(node, name, opts, callback) {
// Create container for loras
const container = document.createElement("div");
container.className = "comfy-loras-container";
Object.assign(container.style, {
display: "flex",
flexDirection: "column",
gap: "8px",
padding: "6px",
backgroundColor: "rgba(40, 44, 52, 0.6)",
borderRadius: "6px",
width: "100%",
});
// Initialize default value
const defaultValue = opts?.defaultVal || [];
// Parse LoRA entries from value
const parseLoraValue = (value) => {
if (!value) return [];
return Array.isArray(value) ? value : [];
};
// Format LoRA data
const formatLoraValue = (loras) => {
return loras;
};
// Function to create toggle element
const createToggle = (active, onChange) => {
const toggle = document.createElement("div");
toggle.className = "comfy-lora-toggle";
updateToggleStyle(toggle, active);
toggle.addEventListener("click", (e) => {
e.stopPropagation();
onChange(!active);
});
return toggle;
};
// Helper function to update toggle style
function updateToggleStyle(toggleEl, active) {
Object.assign(toggleEl.style, {
width: "18px",
height: "18px",
borderRadius: "4px",
cursor: "pointer",
transition: "all 0.2s ease",
backgroundColor: active ? "rgba(66, 153, 225, 0.9)" : "rgba(45, 55, 72, 0.7)",
border: `1px solid ${active ? "rgba(66, 153, 225, 0.9)" : "rgba(226, 232, 240, 0.2)"}`,
});
// Add hover effect
toggleEl.onmouseenter = () => {
toggleEl.style.transform = "scale(1.05)";
toggleEl.style.boxShadow = "0 2px 4px rgba(0,0,0,0.15)";
};
toggleEl.onmouseleave = () => {
toggleEl.style.transform = "scale(1)";
toggleEl.style.boxShadow = "none";
};
}
// Create arrow button for strength adjustment
const createArrowButton = (direction, onClick) => {
const button = document.createElement("div");
button.className = `comfy-lora-arrow comfy-lora-arrow-${direction}`;
Object.assign(button.style, {
width: "16px",
height: "16px",
display: "flex",
alignItems: "center",
justifyContent: "center",
cursor: "pointer",
userSelect: "none",
fontSize: "12px",
color: "rgba(226, 232, 240, 0.8)",
transition: "all 0.2s ease",
});
button.textContent = direction === "left" ? "◀" : "▶";
button.addEventListener("click", (e) => {
e.stopPropagation();
onClick();
});
// Add hover effect
button.onmouseenter = () => {
button.style.color = "white";
button.style.transform = "scale(1.2)";
};
button.onmouseleave = () => {
button.style.color = "rgba(226, 232, 240, 0.8)";
button.style.transform = "scale(1)";
};
return button;
};
// Preview tooltip component
class PreviewTooltip {
constructor() {
this.element = document.createElement('div');
Object.assign(this.element.style, {
position: 'fixed',
zIndex: 9999,
background: 'rgba(0, 0, 0, 0.85)',
borderRadius: '6px',
boxShadow: '0 4px 12px rgba(0, 0, 0, 0.3)',
display: 'none',
overflow: 'hidden',
maxWidth: '300px',
});
document.body.appendChild(this.element);
this.hideTimeout = null; // Timer handle used to delay hiding
// Hide the tooltip on any click anywhere in the document
document.addEventListener('click', () => this.hide());
// Also hide on scroll (capture phase catches nested scroll containers)
document.addEventListener('scroll', () => this.hide(), true);
}
async show(loraName, x, y) {
try {
// Clear any pending hide timer
if (this.hideTimeout) {
clearTimeout(this.hideTimeout);
this.hideTimeout = null;
}
// Skip if a preview for the same lora is already visible
if (this.element.style.display === 'block' && this.currentLora === loraName) {
return;
}
this.currentLora = loraName;
// Fetch the preview URL
const response = await api.fetchApi(`/loras/preview-url?name=${encodeURIComponent(loraName)}`, {
method: 'GET'
});
if (!response.ok) {
throw new Error('Failed to fetch preview URL');
}
const data = await response.json();
if (!data.success || !data.preview_url) {
throw new Error('No preview available');
}
// Clear existing content
while (this.element.firstChild) {
this.element.removeChild(this.element.firstChild);
}
// Create media container with relative positioning
const mediaContainer = document.createElement('div');
Object.assign(mediaContainer.style, {
position: 'relative',
maxWidth: '300px',
maxHeight: '300px',
});
const isVideo = data.preview_url.endsWith('.mp4');
const mediaElement = isVideo ? document.createElement('video') : document.createElement('img');
Object.assign(mediaElement.style, {
maxWidth: '300px',
maxHeight: '300px',
objectFit: 'contain',
display: 'block',
});
if (isVideo) {
mediaElement.autoplay = true;
mediaElement.loop = true;
mediaElement.muted = true;
mediaElement.controls = false;
}
mediaElement.src = data.preview_url;
// Create name label with absolute positioning
const nameLabel = document.createElement('div');
nameLabel.textContent = loraName;
Object.assign(nameLabel.style, {
position: 'absolute',
bottom: '0',
left: '0',
right: '0',
padding: '8px',
color: 'rgba(255, 255, 255, 0.95)',
fontSize: '13px',
fontFamily: "'Inter', 'Segoe UI', system-ui, -apple-system, sans-serif",
background: 'linear-gradient(transparent, rgba(0, 0, 0, 0.8))',
whiteSpace: 'nowrap',
overflow: 'hidden',
textOverflow: 'ellipsis',
textAlign: 'center',
backdropFilter: 'blur(4px)',
WebkitBackdropFilter: 'blur(4px)',
});
mediaContainer.appendChild(mediaElement);
mediaContainer.appendChild(nameLabel);
this.element.appendChild(mediaContainer);
// Fade-in effect
this.element.style.opacity = '0';
this.element.style.display = 'block';
this.position(x, y);
requestAnimationFrame(() => {
this.element.style.transition = 'opacity 0.15s ease';
this.element.style.opacity = '1';
});
} catch (error) {
console.warn('Failed to load preview:', error);
}
}
position(x, y) {
// Keep the preview box inside the viewport
const rect = this.element.getBoundingClientRect();
const viewportWidth = window.innerWidth;
const viewportHeight = window.innerHeight;
let left = x + 10; // Default: 10px to the right of the cursor
let top = y + 10; // Default: 10px below the cursor
// Check the right edge
if (left + rect.width > viewportWidth) {
left = x - rect.width - 10;
}
// Check the bottom edge
if (top + rect.height > viewportHeight) {
top = y - rect.height - 10;
}
Object.assign(this.element.style, {
left: `${left}px`,
top: `${top}px`
});
}
hide() {
// Fade out before hiding
if (this.element.style.display === 'block') {
this.element.style.opacity = '0';
this.hideTimeout = setTimeout(() => {
this.element.style.display = 'none';
this.currentLora = null;
// Pause any playing video
const video = this.element.querySelector('video');
if (video) {
video.pause();
}
this.hideTimeout = null;
}, 150);
}
}
cleanup() {
if (this.hideTimeout) {
clearTimeout(this.hideTimeout);
}
// Remove event listeners
document.removeEventListener('click', () => this.hide());
document.removeEventListener('scroll', () => this.hide(), true);
this.element.remove();
}
}
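The `position()` method above offsets the tooltip 10px from the cursor and flips it to the opposite side when it would overflow the viewport. That placement logic, isolated as a pure function for clarity (a sketch; the parameter names are mine):

```javascript
// Pure version of the tooltip placement: offset 10px from the cursor,
// flipping to the opposite side when the box would overflow the viewport.
function placeTooltip(x, y, boxW, boxH, viewW, viewH) {
  let left = x + 10;
  let top = y + 10;
  if (left + boxW > viewW) left = x - boxW - 10; // flip left of cursor
  if (top + boxH > viewH) top = y - boxH - 10;   // flip above cursor
  return { left, top };
}
```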
// Create the shared preview tooltip instance
const previewTooltip = new PreviewTooltip();
// Function to handle strength adjustment via dragging
const handleStrengthDrag = (name, initialStrength, initialX, event, widget) => {
// Calculate drag sensitivity (how much the strength changes per pixel)
// Using 0.01 per 10 pixels of movement
const sensitivity = 0.001;
// Get the current mouse position
const currentX = event.clientX;
// Calculate the distance moved
const deltaX = currentX - initialX;
// Calculate the new strength value based on movement
// Moving right increases, moving left decreases
let newStrength = Number(initialStrength) + (deltaX * sensitivity);
// Limit the strength to reasonable bounds (now between -10 and 10)
newStrength = Math.max(-10, Math.min(10, newStrength));
newStrength = Number(newStrength.toFixed(2));
// Update the lora data
const lorasData = parseLoraValue(widget.value);
const loraIndex = lorasData.findIndex(l => l.name === name);
if (loraIndex >= 0) {
lorasData[loraIndex].strength = newStrength;
// Update the widget value
widget.value = formatLoraValue(lorasData);
// Force re-render to show updated strength value
renderLoras(widget.value, widget);
}
};
// Function to initialize drag operation
const initDrag = (loraEl, nameEl, name, widget) => {
let isDragging = false;
let initialX = 0;
let initialStrength = 0;
// Create a style element for drag cursor override if it doesn't exist
if (!document.getElementById('comfy-lora-drag-style')) {
const styleEl = document.createElement('style');
styleEl.id = 'comfy-lora-drag-style';
styleEl.textContent = `
body.comfy-lora-dragging,
body.comfy-lora-dragging * {
cursor: ew-resize !important;
}
`;
document.head.appendChild(styleEl);
}
// Create a drag handler that's applied to the entire lora entry
// except toggle and strength controls
loraEl.addEventListener('mousedown', (e) => {
// Skip if clicking on toggle or strength control areas
if (e.target.closest('.comfy-lora-toggle') ||
e.target.closest('input') ||
e.target.closest('.comfy-lora-arrow')) {
return;
}
// Store initial values
const lorasData = parseLoraValue(widget.value);
const loraData = lorasData.find(l => l.name === name);
if (!loraData) return;
initialX = e.clientX;
initialStrength = loraData.strength;
isDragging = true;
// Add class to body to enforce cursor style globally
document.body.classList.add('comfy-lora-dragging');
// Prevent text selection during drag
e.preventDefault();
});
// Use the document for move and up events to ensure drag continues
// even if mouse leaves the element
document.addEventListener('mousemove', (e) => {
if (!isDragging) return;
// Call the strength adjustment function
handleStrengthDrag(name, initialStrength, initialX, e, widget);
// Prevent showing the preview tooltip during drag
previewTooltip.hide();
});
document.addEventListener('mouseup', () => {
if (isDragging) {
isDragging = false;
// Remove the class to restore normal cursor behavior
document.body.classList.remove('comfy-lora-dragging');
}
});
};
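`handleStrengthDrag` above maps horizontal mouse movement to strength at 0.001 units per pixel, clamps the result to [-10, 10], and rounds to two decimals. The same arithmetic as a standalone function (illustrative sketch):

```javascript
// Pure version of the drag-to-strength mapping used above:
// 0.001 strength units per pixel of horizontal movement, clamped to [-10, 10].
function dragStrength(initialStrength, initialX, currentX, sensitivity = 0.001) {
  let next = Number(initialStrength) + (currentX - initialX) * sensitivity;
  next = Math.max(-10, Math.min(10, next)); // clamp to reasonable bounds
  return Number(next.toFixed(2));           // round to two decimals
}
```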
// Function to create menu item
const createMenuItem = (text, icon, onClick) => {
const menuItem = document.createElement('div');
Object.assign(menuItem.style, {
padding: '6px 20px',
cursor: 'pointer',
color: 'rgba(226, 232, 240, 0.9)',
fontSize: '13px',
userSelect: 'none',
display: 'flex',
alignItems: 'center',
gap: '8px',
});
// Create icon element
const iconEl = document.createElement('div');
iconEl.innerHTML = icon;
Object.assign(iconEl.style, {
width: '14px',
height: '14px',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
});
// Create text element
const textEl = document.createElement('span');
textEl.textContent = text;
menuItem.appendChild(iconEl);
menuItem.appendChild(textEl);
menuItem.addEventListener('mouseenter', () => {
menuItem.style.backgroundColor = 'rgba(66, 153, 225, 0.2)';
});
menuItem.addEventListener('mouseleave', () => {
menuItem.style.backgroundColor = 'transparent';
});
if (onClick) {
menuItem.addEventListener('click', onClick);
}
return menuItem;
};
// Function to create context menu
const createContextMenu = (x, y, loraName, widget) => {
// Hide preview tooltip first
previewTooltip.hide();
// Remove existing context menu if any
const existingMenu = document.querySelector('.comfy-lora-context-menu');
if (existingMenu) {
existingMenu.remove();
}
const menu = document.createElement('div');
menu.className = 'comfy-lora-context-menu';
Object.assign(menu.style, {
position: 'fixed',
left: `${x}px`,
top: `${y}px`,
backgroundColor: 'rgba(30, 30, 30, 0.95)',
border: '1px solid rgba(255, 255, 255, 0.1)',
borderRadius: '4px',
padding: '4px 0',
zIndex: 1000,
boxShadow: '0 2px 10px rgba(0,0,0,0.2)',
minWidth: '180px',
});
// View on Civitai option with globe icon
const viewOnCivitaiOption = createMenuItem(
'View on Civitai',
'<svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"><circle cx="12" cy="12" r="10"></circle><line x1="2" y1="12" x2="22" y2="12"></line><path d="M12 2a15.3 15.3 0 0 1 4 10 15.3 15.3 0 0 1-4 10 15.3 15.3 0 0 1-4-10 15.3 15.3 0 0 1 4-10z"></path></svg>',
async () => {
menu.remove();
document.removeEventListener('click', closeMenu);
try {
// Get Civitai URL from API
const response = await api.fetchApi(`/loras/civitai-url?name=${encodeURIComponent(loraName)}`, {
method: 'GET'
});
if (!response.ok) {
const errorText = await response.text();
throw new Error(errorText || 'Failed to get Civitai URL');
}
const data = await response.json();
if (data.success && data.civitai_url) {
// Open the URL in a new tab
window.open(data.civitai_url, '_blank');
} else {
// Show error message if no Civitai URL
if (app && app.extensionManager && app.extensionManager.toast) {
app.extensionManager.toast.add({
severity: 'warning',
summary: 'Not Found',
detail: 'This LoRA has no associated Civitai URL',
life: 3000
});
} else {
alert('This LoRA has no associated Civitai URL');
}
}
} catch (error) {
console.error('Error getting Civitai URL:', error);
if (app && app.extensionManager && app.extensionManager.toast) {
app.extensionManager.toast.add({
severity: 'error',
summary: 'Error',
detail: error.message || 'Failed to get Civitai URL',
life: 5000
});
} else {
alert('Error: ' + (error.message || 'Failed to get Civitai URL'));
}
}
}
);
// Delete option with trash icon
const deleteOption = createMenuItem(
'Delete',
'<svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"><path d="M3 6h18m-2 0v14c0 1-1 2-2 2H7c-1 0-2-1-2-2V6m3 0V4c0-1 1-2 2-2h4c1 0 2 1 2 2v2"></path></svg>',
() => {
menu.remove();
document.removeEventListener('click', closeMenu);
const lorasData = parseLoraValue(widget.value).filter(l => l.name !== loraName);
widget.value = formatLoraValue(lorasData);
if (widget.callback) {
widget.callback(widget.value);
}
}
);
// Save recipe option with bookmark icon
const saveOption = createMenuItem(
'Save Recipe',
'<svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"><path d="M19 21l-7-5-7 5V5a2 2 0 0 1 2-2h10a2 2 0 0 1 2 2z"></path></svg>',
() => {
menu.remove();
document.removeEventListener('click', closeMenu);
saveRecipeDirectly(widget);
}
);
// Add separator
const separator = document.createElement('div');
Object.assign(separator.style, {
margin: '4px 0',
borderTop: '1px solid rgba(255, 255, 255, 0.1)',
});
menu.appendChild(viewOnCivitaiOption); // Add the new menu option
menu.appendChild(deleteOption);
menu.appendChild(separator);
menu.appendChild(saveOption);
document.body.appendChild(menu);
// Close menu when clicking outside
const closeMenu = (e) => {
if (!menu.contains(e.target)) {
menu.remove();
document.removeEventListener('click', closeMenu);
}
};
setTimeout(() => document.addEventListener('click', closeMenu), 0);
};
// Function to render loras from data
const renderLoras = (value, widget) => {
// Clear existing content
while (container.firstChild) {
container.removeChild(container.firstChild);
}
// Parse the loras data
const lorasData = parseLoraValue(value);
if (lorasData.length === 0) {
// Show message when no loras are added
const emptyMessage = document.createElement("div");
emptyMessage.textContent = "No LoRAs added";
Object.assign(emptyMessage.style, {
textAlign: "center",
padding: "20px 0",
color: "rgba(226, 232, 240, 0.8)",
fontStyle: "italic",
userSelect: "none", // Prevent text selection
WebkitUserSelect: "none", // For Safari support
MozUserSelect: "none", // For Firefox support
msUserSelect: "none", // For IE/Edge support
});
container.appendChild(emptyMessage);
return;
}
// Create header
const header = document.createElement("div");
header.className = "comfy-loras-header";
Object.assign(header.style, {
display: "flex",
justifyContent: "space-between",
alignItems: "center",
padding: "4px 8px",
borderBottom: "1px solid rgba(226, 232, 240, 0.2)",
marginBottom: "8px"
});
// Add toggle all control
const allActive = lorasData.every(lora => lora.active);
const toggleAll = createToggle(allActive, (active) => {
// Update all loras active state
const lorasData = parseLoraValue(widget.value);
lorasData.forEach(lora => lora.active = active);
const newValue = formatLoraValue(lorasData);
widget.value = newValue;
});
// Add label to toggle all
const toggleLabel = document.createElement("div");
toggleLabel.textContent = "Toggle All";
Object.assign(toggleLabel.style, {
color: "rgba(226, 232, 240, 0.8)",
fontSize: "13px",
marginLeft: "8px",
userSelect: "none", // Prevent text selection
WebkitUserSelect: "none", // For Safari support
MozUserSelect: "none", // For Firefox support
msUserSelect: "none", // For IE/Edge support
});
const toggleContainer = document.createElement("div");
Object.assign(toggleContainer.style, {
display: "flex",
alignItems: "center",
});
toggleContainer.appendChild(toggleAll);
toggleContainer.appendChild(toggleLabel);
// Strength label
const strengthLabel = document.createElement("div");
strengthLabel.textContent = "Strength";
Object.assign(strengthLabel.style, {
color: "rgba(226, 232, 240, 0.8)",
fontSize: "13px",
marginRight: "8px",
userSelect: "none", // Prevent text selection
WebkitUserSelect: "none", // For Safari support
MozUserSelect: "none", // For Firefox support
msUserSelect: "none", // For IE/Edge support
});
header.appendChild(toggleContainer);
header.appendChild(strengthLabel);
container.appendChild(header);
// Render each lora entry
lorasData.forEach((loraData) => {
const { name, strength, active } = loraData;
const loraEl = document.createElement("div");
loraEl.className = "comfy-lora-entry";
Object.assign(loraEl.style, {
display: "flex",
justifyContent: "space-between",
alignItems: "center",
padding: "8px",
borderRadius: "6px",
backgroundColor: active ? "rgba(45, 55, 72, 0.7)" : "rgba(35, 40, 50, 0.5)",
transition: "all 0.2s ease",
marginBottom: "6px",
});
// Create toggle for this lora
const toggle = createToggle(active, (newActive) => {
// Update this lora's active state
const lorasData = parseLoraValue(widget.value);
const loraIndex = lorasData.findIndex(l => l.name === name);
if (loraIndex >= 0) {
lorasData[loraIndex].active = newActive;
const newValue = formatLoraValue(lorasData);
widget.value = newValue;
}
});
// Create name display
const nameEl = document.createElement("div");
nameEl.textContent = name;
Object.assign(nameEl.style, {
marginLeft: "10px",
flex: "1",
overflow: "hidden",
textOverflow: "ellipsis",
whiteSpace: "nowrap",
color: active ? "rgba(226, 232, 240, 0.9)" : "rgba(226, 232, 240, 0.6)",
fontSize: "13px",
cursor: "pointer", // Add pointer cursor to indicate hoverable area
userSelect: "none", // Prevent text selection
WebkitUserSelect: "none", // For Safari support
MozUserSelect: "none", // For Firefox support
msUserSelect: "none", // For IE/Edge support
});
// Move preview tooltip events to nameEl instead of loraEl
nameEl.addEventListener('mouseenter', async (e) => {
e.stopPropagation();
const rect = nameEl.getBoundingClientRect();
await previewTooltip.show(name, rect.right, rect.top);
});
nameEl.addEventListener('mouseleave', (e) => {
e.stopPropagation();
previewTooltip.hide();
});
// Remove the preview tooltip events from loraEl
loraEl.onmouseenter = () => {
loraEl.style.backgroundColor = active ? "rgba(50, 60, 80, 0.8)" : "rgba(40, 45, 55, 0.6)";
};
loraEl.onmouseleave = () => {
loraEl.style.backgroundColor = active ? "rgba(45, 55, 72, 0.7)" : "rgba(35, 40, 50, 0.5)";
};
// Add context menu event
loraEl.addEventListener('contextmenu', (e) => {
e.preventDefault();
e.stopPropagation();
createContextMenu(e.clientX, e.clientY, name, widget);
});
// Create strength control
const strengthControl = document.createElement("div");
Object.assign(strengthControl.style, {
display: "flex",
alignItems: "center",
gap: "8px",
});
// Left arrow
const leftArrow = createArrowButton("left", () => {
// Decrease strength
const lorasData = parseLoraValue(widget.value);
const loraIndex = lorasData.findIndex(l => l.name === name);
if (loraIndex >= 0) {
lorasData[loraIndex].strength = (lorasData[loraIndex].strength - 0.05).toFixed(2);
const newValue = formatLoraValue(lorasData);
widget.value = newValue;
}
});
// Strength display
const strengthEl = document.createElement("input");
strengthEl.type = "text";
strengthEl.value = typeof strength === 'number' ? strength.toFixed(2) : Number(strength).toFixed(2);
Object.assign(strengthEl.style, {
minWidth: "50px",
width: "50px",
textAlign: "center",
color: active ? "rgba(226, 232, 240, 0.9)" : "rgba(226, 232, 240, 0.6)",
fontSize: "13px",
background: "none",
border: "1px solid transparent",
padding: "2px 4px",
borderRadius: "3px",
outline: "none",
});
// Hover effect
strengthEl.addEventListener('mouseenter', () => {
strengthEl.style.border = "1px solid rgba(226, 232, 240, 0.2)";
});
strengthEl.addEventListener('mouseleave', () => {
if (document.activeElement !== strengthEl) {
strengthEl.style.border = "1px solid transparent";
}
});
// Focus handling
strengthEl.addEventListener('focus', () => {
strengthEl.style.border = "1px solid rgba(66, 153, 225, 0.6)";
strengthEl.style.background = "rgba(0, 0, 0, 0.2)";
// Select the whole value for quick editing
strengthEl.select();
});
strengthEl.addEventListener('blur', () => {
strengthEl.style.border = "1px solid transparent";
strengthEl.style.background = "none";
});
// Handle manual edits to the strength value
strengthEl.addEventListener('change', () => {
let newValue = parseFloat(strengthEl.value);
// Validate the input
if (isNaN(newValue)) {
newValue = 1.0;
}
// Update the stored value
const lorasData = parseLoraValue(widget.value);
const loraIndex = lorasData.findIndex(l => l.name === name);
if (loraIndex >= 0) {
lorasData[loraIndex].strength = newValue.toFixed(2);
// Update the value and trigger the callback
const newLorasValue = formatLoraValue(lorasData);
widget.value = newLorasValue;
}
});
// Commit the value on Enter
strengthEl.addEventListener('keydown', (e) => {
if (e.key === 'Enter') {
strengthEl.blur();
}
});
// Right arrow
const rightArrow = createArrowButton("right", () => {
// Increase strength
const lorasData = parseLoraValue(widget.value);
const loraIndex = lorasData.findIndex(l => l.name === name);
if (loraIndex >= 0) {
lorasData[loraIndex].strength = (parseFloat(lorasData[loraIndex].strength) + 0.05).toFixed(2);
const newValue = formatLoraValue(lorasData);
widget.value = newValue;
}
});
strengthControl.appendChild(leftArrow);
strengthControl.appendChild(strengthEl);
strengthControl.appendChild(rightArrow);
// Assemble entry
const leftSection = document.createElement("div");
Object.assign(leftSection.style, {
display: "flex",
alignItems: "center",
flex: "1",
minWidth: "0", // Allow shrinking
});
leftSection.appendChild(toggle);
leftSection.appendChild(nameEl);
loraEl.appendChild(leftSection);
loraEl.appendChild(strengthControl);
container.appendChild(loraEl);
// Initialize drag functionality
initDrag(loraEl, nameEl, name, widget);
});
};
// Store the value in a variable to avoid recursion
let widgetValue = defaultValue;
// Create widget with initial properties
const widget = node.addDOMWidget(name, "loras", container, {
getValue: function() {
return widgetValue;
},
setValue: function(v) {
// Remove duplicates by keeping the last occurrence of each lora name
const uniqueValue = (v || []).reduce((acc, lora) => {
// Remove any existing lora with the same name
const filtered = acc.filter(l => l.name !== lora.name);
// Add the current lora
return [...filtered, lora];
}, []);
widgetValue = uniqueValue;
renderLoras(widgetValue, widget);
// Update container height after rendering
requestAnimationFrame(() => {
const minHeight = this.getMinHeight();
container.style.height = `${minHeight}px`;
// Force node to update size
node.setSize([node.size[0], node.computeSize()[1]]);
node.setDirtyCanvas(true, true);
});
},
getMinHeight: function() {
// Calculate height based on content
const lorasCount = parseLoraValue(widgetValue).length;
return Math.max(
100,
lorasCount > 0 ? 60 + lorasCount * 44 : 60
);
},
});
widget.value = defaultValue;
widget.callback = callback;
widget.serializeValue = () => {
// Add dummy items to avoid the 2-element serialization issue, a known ComfyUI bug
return [...widgetValue,
{ name: "__dummy_item1__", strength: 0, active: false, _isDummy: true },
{ name: "__dummy_item2__", strength: 0, active: false, _isDummy: true }
];
}
widget.onRemove = () => {
container.remove();
previewTooltip.cleanup();
};
return { minWidth: 400, minHeight: 200, widget };
}
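The widget's `setValue` above deduplicates entries by keeping the last occurrence of each LoRA name. The same reduce, isolated so the last-wins behavior is easy to see (a sketch mirroring the logic above):

```javascript
// Last-occurrence-wins deduplication, as used by the widget's setValue.
function dedupeByName(loras) {
  return (loras || []).reduce((acc, lora) => {
    // Drop any earlier entry with the same name, then append the current one
    const filtered = acc.filter(l => l.name !== lora.name);
    return [...filtered, lora];
  }, []);
}
```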
// Function to directly save the recipe without dialog
async function saveRecipeDirectly(widget) {
try {
// Show loading toast
if (app && app.extensionManager && app.extensionManager.toast) {
app.extensionManager.toast.add({
severity: 'info',
summary: 'Saving Recipe',
detail: 'Please wait...',
life: 2000
});
}
// Send the request
const response = await fetch('/api/recipes/save-from-widget', {
method: 'POST'
});
const result = await response.json();
// Show result toast
if (app && app.extensionManager && app.extensionManager.toast) {
if (result.success) {
app.extensionManager.toast.add({
severity: 'success',
summary: 'Recipe Saved',
detail: 'Recipe has been saved successfully',
life: 3000
});
} else {
app.extensionManager.toast.add({
severity: 'error',
summary: 'Error',
detail: result.error || 'Failed to save recipe',
life: 5000
});
}
}
} catch (error) {
console.error('Error saving recipe:', error);
// Show error toast
if (app && app.extensionManager && app.extensionManager.toast) {
app.extensionManager.toast.add({
severity: 'error',
summary: 'Error',
detail: 'Failed to save recipe: ' + (error.message || 'Unknown error'),
life: 5000
});
}
}
}

@@ -1,193 +0,0 @@
export function addTagsWidget(node, name, opts, callback) {
// Create container for tags
const container = document.createElement("div");
container.className = "comfy-tags-container";
Object.assign(container.style, {
display: "flex",
flexWrap: "wrap",
gap: "4px", // reduced from 8px to 4px
padding: "6px",
minHeight: "30px",
backgroundColor: "rgba(40, 44, 52, 0.6)", // Darker, more modern background
borderRadius: "6px", // Slightly larger radius
width: "100%",
});
// Initialize default value as array
const initialTagsData = opts?.defaultVal || [];
// Function to render tags from array data
const renderTags = (tagsData, widget) => {
// Clear existing tags
while (container.firstChild) {
container.removeChild(container.firstChild);
}
const normalizedTags = tagsData;
if (normalizedTags.length === 0) {
// Show message when no tags are present
const emptyMessage = document.createElement("div");
emptyMessage.textContent = "No trigger words detected";
Object.assign(emptyMessage.style, {
textAlign: "center",
padding: "20px 0",
color: "rgba(226, 232, 240, 0.8)",
fontStyle: "italic",
userSelect: "none",
WebkitUserSelect: "none",
MozUserSelect: "none",
msUserSelect: "none",
});
container.appendChild(emptyMessage);
return;
}
normalizedTags.forEach((tagData, index) => {
const { text, active } = tagData;
const tagEl = document.createElement("div");
tagEl.className = "comfy-tag";
updateTagStyle(tagEl, active);
tagEl.textContent = text;
tagEl.title = text; // Set tooltip for full content
// Add click handler to toggle state
tagEl.addEventListener("click", (e) => {
e.stopPropagation();
// Toggle active state for this specific tag using its index
const updatedTags = [...widget.value];
updatedTags[index].active = !updatedTags[index].active;
updateTagStyle(tagEl, updatedTags[index].active);
widget.value = updatedTags;
});
container.appendChild(tagEl);
});
};
// Helper function to update tag style based on active state
function updateTagStyle(tagEl, active) {
const baseStyles = {
padding: "4px 12px", // vertical padding reduced from 6px to 4px
borderRadius: "6px", // Matching container radius
maxWidth: "200px", // Increased max width
overflow: "hidden",
textOverflow: "ellipsis",
whiteSpace: "nowrap",
fontSize: "13px", // Slightly larger font
cursor: "pointer",
transition: "all 0.2s ease", // Smoother transition
border: "1px solid transparent",
display: "inline-block",
boxShadow: "0 1px 2px rgba(0,0,0,0.1)",
margin: "2px", // reduced from 4px to 2px
userSelect: "none", // Add this line to prevent text selection
WebkitUserSelect: "none", // For Safari support
MozUserSelect: "none", // For Firefox support
msUserSelect: "none", // For IE/Edge support
};
if (active) {
Object.assign(tagEl.style, {
...baseStyles,
backgroundColor: "rgba(66, 153, 225, 0.9)", // Modern blue
color: "white",
borderColor: "rgba(66, 153, 225, 0.9)",
});
} else {
Object.assign(tagEl.style, {
...baseStyles,
backgroundColor: "rgba(45, 55, 72, 0.7)", // Darker inactive state
color: "rgba(226, 232, 240, 0.8)", // Lighter text for contrast
borderColor: "rgba(226, 232, 240, 0.2)",
});
}
// Add hover effect
tagEl.onmouseenter = () => {
tagEl.style.transform = "translateY(-1px)";
tagEl.style.boxShadow = "0 2px 4px rgba(0,0,0,0.15)";
};
tagEl.onmouseleave = () => {
tagEl.style.transform = "translateY(0)";
tagEl.style.boxShadow = "0 1px 2px rgba(0,0,0,0.1)";
};
}
// Store the value as array
let widgetValue = initialTagsData;
// Create widget with initial properties
const widget = node.addDOMWidget(name, "tags", container, {
getValue: function() {
return widgetValue;
},
setValue: function(v) {
widgetValue = v;
renderTags(widgetValue, widget);
// Update container height after rendering
requestAnimationFrame(() => {
const minHeight = this.getMinHeight();
container.style.height = `${minHeight}px`;
// Force node to update size
node.setSize([node.size[0], node.computeSize()[1]]);
node.setDirtyCanvas(true, true);
});
},
getMinHeight: function() {
const minHeight = 150;
// If no tags or only showing the empty message, return a minimum height
if (widgetValue.length === 0) {
return minHeight; // Height for empty state with message
}
// Get all tag elements
const tagElements = container.querySelectorAll('.comfy-tag');
if (tagElements.length === 0) {
return minHeight; // Fallback if elements aren't rendered yet
}
// Calculate the actual height based on tag positions
let maxBottom = 0;
tagElements.forEach(tag => {
const rect = tag.getBoundingClientRect();
const tagBottom = rect.bottom - container.getBoundingClientRect().top;
maxBottom = Math.max(maxBottom, tagBottom);
});
// Add padding (top and bottom padding of container)
const computedStyle = window.getComputedStyle(container);
const paddingTop = parseInt(computedStyle.paddingTop, 10) || 0;
const paddingBottom = parseInt(computedStyle.paddingBottom, 10) || 0;
// Add extra buffer for potential wrapping issues and to ensure no clipping
const extraBuffer = 20;
// Round up to nearest 5px for clean sizing and ensure minimum height
return Math.max(minHeight, Math.ceil((maxBottom + paddingBottom + extraBuffer) / 5) * 5);
},
});
widget.value = initialTagsData;
widget.callback = callback;
widget.serializeValue = () => {
// Pad with dummy items to work around ComfyUI's 2-element serialization bug
return [...widgetValue,
{ text: "__dummy_item__", active: false, _isDummy: true },
{ text: "__dummy_item__", active: false, _isDummy: true }
];
};
return { minWidth: 300, minHeight: 150, widget };
}
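Both widgets pad their serialized value with inert dummy entries to sidestep the 2-element serialization issue noted in the comments. A hedged sketch of the round trip; the `_isDummy` filter on the consuming side is an assumption, not shown in this diff:

```javascript
// Pad to avoid ComfyUI's 2-element serialization issue, then strip on read.
function serializeTags(tags) {
  return [
    ...tags,
    { text: "__dummy_item__", active: false, _isDummy: true },
    { text: "__dummy_item__", active: false, _isDummy: true },
  ];
}

// Assumed consumer side: drop anything flagged as a dummy entry.
function deserializeTags(serialized) {
  return serialized.filter(t => !t._isDummy);
}

const roundTripped = deserializeTags(serializeTags([{ text: "smile", active: true }]));
console.log(roundTripped.length); // 1
```

The padding guarantees the serialized array always has more than two elements, so the problematic 2-element case can never be hit.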

@@ -176,32 +176,6 @@ app.registerExtension({
inputWidget,
originalCallback
);
-// Register this node with the backend
-this.registerNode = async () => {
-try {
-await fetch("/api/register-node", {
-method: "POST",
-headers: {
-"Content-Type": "application/json",
-},
-body: JSON.stringify({
-node_id: this.id,
-bgcolor: this.bgcolor,
-title: this.title,
-graph_id: this.graph.id,
-}),
-});
-} catch (error) {
-console.warn("Failed to register node:", error);
-}
-};
-// Ensure the node is registered after creation
-// Call registration
-// setTimeout(() => {
-// this.registerNode();
-// }, 0);
});
}
},

@@ -101,31 +101,6 @@ app.registerExtension({
inputWidget,
originalCallback
);
-// Register this node with the backend
-this.registerNode = async () => {
-try {
-await fetch("/api/register-node", {
-method: "POST",
-headers: {
-"Content-Type": "application/json",
-},
-body: JSON.stringify({
-node_id: this.id,
-bgcolor: this.bgcolor,
-title: this.title,
-graph_id: this.graph.id,
-}),
-});
-} catch (error) {
-console.warn("Failed to register node:", error);
-}
-};
-// Call registration
-// setTimeout(() => {
-// this.registerNode();
-// }, 0);
});
}
},

@@ -269,7 +269,7 @@ export class PreviewTooltip {
this.currentLora = loraName;
// Get preview URL
-const response = await api.fetchApi(`/loras/preview-url?name=${encodeURIComponent(loraName)}`, {
+const response = await api.fetchApi(`/lm/loras/preview-url?name=${encodeURIComponent(loraName)}`, {
method: 'GET'
});

@@ -491,7 +491,7 @@ export function createContextMenu(x, y, loraName, widget, previewTooltip, render
try {
// Get Civitai URL from API
-const response = await api.fetchApi(`/loras/civitai-url?name=${encodeURIComponent(loraName)}`, {
+const response = await api.fetchApi(`/lm/loras/civitai-url?name=${encodeURIComponent(loraName)}`, {
method: 'GET'
});
@@ -547,7 +547,7 @@ export function createContextMenu(x, y, loraName, widget, previewTooltip, render
try {
// Get notes from API
-const response = await api.fetchApi(`/loras/get-notes?name=${encodeURIComponent(loraName)}`, {
+const response = await api.fetchApi(`/lm/loras/get-notes?name=${encodeURIComponent(loraName)}`, {
method: 'GET'
});
@@ -584,7 +584,7 @@ export function createContextMenu(x, y, loraName, widget, previewTooltip, render
try {
// Get trigger words from API
-const response = await api.fetchApi(`/loras/get-trigger-words?name=${encodeURIComponent(loraName)}`, {
+const response = await api.fetchApi(`/lm/loras/get-trigger-words?name=${encodeURIComponent(loraName)}`, {
method: 'GET'
});

@@ -70,7 +70,7 @@ export async function saveRecipeDirectly() {
}
// Send the request to the backend API
-const response = await fetch('/api/recipes/save-from-widget', {
+const response = await fetch('/api/lm/recipes/save-from-widget', {
method: 'POST'
});

@@ -1,11 +1,7 @@
import { app } from "../../scripts/app.js";
import { api } from "../../scripts/api.js";
-import { CONVERTED_TYPE, dynamicImportByVersion } from "./utils.js";
-// Function to get the appropriate tags widget based on ComfyUI version
-async function getTagsWidgetModule() {
-return await dynamicImportByVersion("./tags_widget.js", "./legacy_tags_widget.js");
-}
+import { CONVERTED_TYPE } from "./utils.js";
+import { addTagsWidget } from "./tags_widget.js";
// TriggerWordToggle extension for ComfyUI
app.registerExtension({
@@ -30,10 +26,6 @@ app.registerExtension({
// Wait for node to be properly initialized
requestAnimationFrame(async () => {
-// Dynamically import the appropriate tags widget module
-const tagsModule = await getTagsWidgetModule();
-const { addTagsWidget } = tagsModule;
// Get the widget object directly from the returned object
const result = addTagsWidget(node, "toggle_trigger_words", {
defaultVal: []

@@ -107,7 +107,7 @@ const initializeWidgets = () => {
// Fetch version info from the API
const fetchVersionInfo = async () => {
try {
-const response = await fetch('/api/version-info');
+const response = await fetch('/api/lm/version-info');
const data = await response.json();
if (data.success) {

@@ -38,7 +38,7 @@ app.registerExtension({
async updateUsageStats(promptId) {
try {
// Call backend endpoint with the prompt_id
-const response = await fetch(`/api/update-usage-stats`, {
+const response = await fetch(`/api/lm/update-usage-stats`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
@@ -79,7 +79,7 @@ app.registerExtension({
}
}
-const response = await fetch('/api/register-nodes', {
+const response = await fetch('/api/lm/register-nodes', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
@@ -158,7 +158,7 @@ app.registerExtension({
try {
// Search for current relative path
-const response = await api.fetchApi(`/${modelType}/relative-paths?search=${encodeURIComponent(fileName)}&limit=2`);
+const response = await api.fetchApi(`/lm/${modelType}/relative-paths?search=${encodeURIComponent(fileName)}&limit=2`);
const data = await response.json();
if (!data.success || !data.relative_paths || data.relative_paths.length === 0) {

@@ -20,10 +20,6 @@ export function chainCallback(object, property, callback) {
}
}
-export function getComfyUIFrontendVersion() {
-return window['__COMFYUI_FRONTEND_VERSION__'] || "0.0.0";
-}
/**
* Show a toast notification
* @param {Object|string} options - Toast options object or message string for backward compatibility
@@ -78,29 +74,6 @@ export function showToast(options, type = 'info') {
}
}
-// Dynamically import the appropriate widget based on app version
-export async function dynamicImportByVersion(latestModulePath, legacyModulePath) {
-// Parse app version and compare with 1.12.6 (version when tags widget API changed)
-const currentVersion = getComfyUIFrontendVersion();
-const versionParts = currentVersion.split('.').map(part => parseInt(part, 10));
-const requiredVersion = [1, 12, 6];
-// Compare version numbers
-for (let i = 0; i < 3; i++) {
-if (versionParts[i] > requiredVersion[i]) {
-console.log(`Using latest widget: ${latestModulePath}`);
-return import(latestModulePath);
-} else if (versionParts[i] < requiredVersion[i]) {
-console.log(`Using legacy widget: ${legacyModulePath}`);
-return import(legacyModulePath);
-}
-}
-// If we get here, versions are equal, use the latest module
-console.log(`Using latest widget: ${latestModulePath}`);
-return import(latestModulePath);
-}
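The removed `dynamicImportByVersion` helper chose between module paths by comparing the dotted frontend version element-wise against 1.12.6. A standalone sketch of just that comparison; the function name here is illustrative:

```javascript
// True when `current` >= `required`, comparing dotted version strings
// element-wise (same logic as the removed helper).
function versionAtLeast(current, required) {
  const a = current.split(".").map(n => parseInt(n, 10));
  const b = required.split(".").map(n => parseInt(n, 10));
  for (let i = 0; i < 3; i++) {
    if (a[i] > b[i]) return true;
    if (a[i] < b[i]) return false;
  }
  return true; // all three components equal
}

console.log(versionAtLeast("1.12.6", "1.12.6")); // true
console.log(versionAtLeast("1.9.0", "1.12.6"));  // false
```

Element-wise numeric comparison is needed because a plain string compare would rank "1.9.0" above "1.12.6".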
export function hideWidgetForGood(node, widget, suffix = "") {
widget.origType = widget.type;
widget.origComputeSize = widget.computeSize;
@@ -124,11 +97,6 @@ export function hideWidgetForGood(node, widget, suffix = "") {
}
}
-// Function to get the appropriate loras widget based on ComfyUI version
-export async function getLorasWidgetModule() {
-return await dynamicImportByVersion("./loras_widget.js", "./legacy_loras_widget.js");
-}
// Update pattern to match both formats: <lora:name:model_strength> or <lora:name:model_strength:clip_strength>
export const LORA_PATTERN = /<lora:([^:]+):([-\d\.]+)(?::([-\d\.]+))?>/g;
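For illustration, here is how `LORA_PATTERN` extracts both the two- and three-argument forms; the sample prompt and result field names are assumptions:

```javascript
// Same pattern as in utils.js: matches <lora:name:model_strength> and
// <lora:name:model_strength:clip_strength>; clip strength is optional.
const LORA_PATTERN = /<lora:([^:]+):([-\d\.]+)(?::([-\d\.]+))?>/g;

const prompt = "portrait <lora:styleA:0.8> detailed <lora:styleB:1.0:0.5>";
const loras = [...prompt.matchAll(LORA_PATTERN)].map(([, name, model, clip]) => ({
  name,
  model: parseFloat(model),
  // Third capture group is undefined when only the 2-argument form is used.
  clip: clip !== undefined ? parseFloat(clip) : null,
}));
console.log(loras);
```

Using `matchAll` with the `g` flag yields every occurrence, so one prompt can carry any number of lora tags.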
@@ -215,7 +183,7 @@ export function collectActiveLorasFromChain(node, visited = new Set()) {
export function updateConnectedTriggerWords(node, loraNames) {
const connectedNodeIds = getConnectedTriggerToggleNodes(node);
if (connectedNodeIds.length > 0) {
-fetch("/api/loras/get_trigger_words", {
+fetch("/api/lm/loras/get_trigger_words", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({