Compare commits

...

60 Commits

Author SHA1 Message Date
Will Miao
0a340d397c feat(misc): add VAE and Upscaler model management page 2026-01-31 07:28:10 +08:00
Will Miao
b86bd44c65 feat(filter): enable model types filter for checkpoints page 2026-01-30 22:32:50 +08:00
Will Miao
77bfbe1bc9 feat(header): remove no-presets placeholder from filter presets section
The no-presets placeholder element has been removed from the filter presets section in the header component. This change indicates that the application now handles empty preset states differently, likely through dynamic content rendering or alternative UI patterns.
2026-01-30 11:03:23 +08:00
Will Miao
666db4cdd0 refactor(ui): simplify filter preset empty state
- Remove default presets and restore defaults functionality
- Unify preset UI: always show '+ Add' button regardless of preset count
- Remove empty state message and restore button to reduce visual clutter
- Clean up unused translation keys (restoreDefaults, noPresets)
- Fix spacing issues in filter panel
2026-01-30 10:25:22 +08:00
Will Miao
233427600a feat(ui): enhance model card header with sub-type display and gradient overlay
- Add gradient overlay to card header for better icon readability
- Update base model label to display sub-type abbreviation alongside base model
- Add separator between sub-type and base model for visual clarity
- Improve label styling with flex layout, adjusted padding, and enhanced backdrop filter
- Add helper functions for sub-type abbreviation retrieval and display names
2026-01-30 09:46:31 +08:00
Will Miao
84c62f2954 refactor(model-type): complete phase 5 cleanup by removing deprecated model_type field
- Remove backward compatibility code for `model_type` in `ModelScanner._build_cache_entry()`
- Update `CheckpointScanner` to only handle `sub_type` in `adjust_metadata()` and `adjust_cached_entry()`
- Delete deprecated aliases `resolve_civitai_model_type` and `normalize_civitai_model_type` from `model_query.py`
- Update frontend components (`RecipeModal.js`, `ModelCard.js`, etc.) to use `sub_type` instead of `model_type`
- Update API response format to return only `sub_type`, removing `model_type` from service responses
- Revise technical documentation to mark Phase 5 as completed and remove outdated TODO items

All cleanup tasks for the model type refactoring are now complete, ensuring consistent use of `sub_type` across the codebase.
2026-01-30 07:48:31 +08:00
Will Miao
5e91073476 refactor: unify model_type semantics by introducing sub_type field
This commit resolves the semantic confusion around the model_type field by
clearly distinguishing between:
- scanner_type: architecture-level (lora/checkpoint/embedding)
- sub_type: business-level subtype (lora/locon/dora/checkpoint/diffusion_model/embedding)

Backend Changes:
- Rename model_type to sub_type in CheckpointMetadata and EmbeddingMetadata
- Add resolve_sub_type() and normalize_sub_type() in model_query.py
- Update checkpoint_scanner to use _resolve_sub_type()
- Update service format_response to include both sub_type and model_type
- Add VALID_*_SUB_TYPES constants with backward compatible aliases

Frontend Changes:
- Add MODEL_SUBTYPE_DISPLAY_NAMES constants
- Keep MODEL_TYPE_DISPLAY_NAMES as backward compatible alias

Testing:
- Add 43 new tests covering sub_type resolution and API response

Documentation:
- Add refactoring todo document to docs/technical/

BREAKING CHANGE: None - full backward compatibility maintained
2026-01-30 06:56:10 +08:00
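
The scanner_type/sub_type split described in this commit can be sketched in Python. This is a hedged illustration only: the alias table and validation sets below are assumptions, not the actual contents of `model_query.py`.

```python
# Illustrative sketch of sub_type resolution; the real VALID_*_SUB_TYPES
# constants and alias list in model_query.py may differ.
VALID_LORA_SUB_TYPES = {"lora", "locon", "dora"}
VALID_CHECKPOINT_SUB_TYPES = {"checkpoint", "diffusion_model"}

# Hypothetical backward-compatible aliases
_SUB_TYPE_ALIASES = {
    "lycoris": "locon",
    "diffusers": "diffusion_model",
}

def normalize_sub_type(value):
    """Map a raw sub_type string to its canonical form."""
    key = (value or "").strip().lower()
    return _SUB_TYPE_ALIASES.get(key, key)

def resolve_sub_type(scanner_type, raw_value=None):
    """Resolve the business-level sub_type for a model.

    Falls back to the architecture-level scanner_type when the raw
    value is missing or not valid for that scanner.
    """
    valid = {
        "lora": VALID_LORA_SUB_TYPES,
        "checkpoint": VALID_CHECKPOINT_SUB_TYPES,
        "embedding": {"embedding"},
    }.get(scanner_type, set())
    candidate = normalize_sub_type(raw_value) if raw_value else ""
    return candidate if candidate in valid else scanner_type
```

The key point is the fallback: an unrecognized or missing sub_type degrades gracefully to the scanner-level type instead of erroring.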
Will Miao
08267cdb48 refactor(filter): extract preset management logic into FilterPresetManager
Move filter preset creation, deletion, application, and storage logic
from FilterManager into a dedicated FilterPresetManager class to
improve separation of concerns and maintainability.

- Add FilterPresetManager with preset CRUD operations
- Update FilterManager to use preset manager via composition
- Handle EMPTY_WILDCARD_MARKER for wildcard base model filters
- Add preset-related translations to all locale files
- Update filter preset UI styling and interactions
2026-01-29 16:32:41 +08:00
pixelpaws
e50b2c802e Merge pull request #787 from diodiogod/feat/filter-presets
feat: add filter preset system
2026-01-29 09:36:44 +08:00
Will Miao
2eea92abdf fix: allow STRING input connections for AUTOCOMPLETE_TEXT_PROMPT widgets
Use union type "AUTOCOMPLETE_TEXT_PROMPT,STRING" to enable input mode
compatibility with STRING outputs while preserving autocomplete widget
functionality via widgetType option.

Fixes issue where text inputs could not receive connections from
STRING-type outputs after changing from built-in STRING to custom
AUTOCOMPLETE_TEXT_PROMPT type.

Affected nodes:
- Prompt (LoraManager)
- Text (LoraManager)
2026-01-29 09:07:22 +08:00
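
The union-type approach can be sketched as a minimal node definition. The `"AUTOCOMPLETE_TEXT_PROMPT,STRING"` string and `widgetType` option come from the commit message; the `multiline` option and the exact shape of the real node are assumptions.

```python
# Minimal sketch of the union-type input; the real Prompt (LoraManager)
# node defines more inputs and options than shown here.
class PromptLM:
    NAME = "Prompt (LoraManager)"

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                # "AUTOCOMPLETE_TEXT_PROMPT,STRING" acts as a union type:
                # STRING outputs can connect in input mode, while the
                # widgetType option keeps the custom autocomplete widget.
                "text": (
                    "AUTOCOMPLETE_TEXT_PROMPT,STRING",
                    {"multiline": True, "widgetType": "AUTOCOMPLETE_TEXT_PROMPT"},
                ),
            }
        }
```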
Will Miao
58ae6b9de6 fix: persist onboarding and banner dismiss state to backend
Moves onboarding_completed and dismissed_banners from localStorage
to backend settings (settings.json) to survive incognito/private
browser modes.

Fixes #786
2026-01-29 08:48:04 +08:00
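
Moving these flags from localStorage into `settings.json` might look roughly like the sketch below. The `dismissed_banners` key is from the commit message; the helper names are hypothetical.

```python
import json
from pathlib import Path

# Hedged sketch of backend-persisted dismiss state; helper names are
# illustrative, only the settings.json keys come from the commit.
def load_settings(path):
    p = Path(path)
    if p.exists():
        return json.loads(p.read_text(encoding="utf-8"))
    return {}

def dismiss_banner(path, banner_id):
    """Record a dismissed banner so it survives incognito sessions."""
    settings = load_settings(path)
    dismissed = set(settings.get("dismissed_banners", []))
    dismissed.add(banner_id)
    settings["dismissed_banners"] = sorted(dismissed)
    Path(path).write_text(json.dumps(settings, indent=2), encoding="utf-8")
    return settings
```

Because the state lives on disk next to the other backend settings, clearing browser storage no longer resets onboarding or banner state.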
diodiogod
b775333d32 fix: include all WAN Video model variants in default preset
Add missing WAN Video base models to default preset:
- Wan Video (base)
- Wan Video 2.2 TI2V-5B
- Wan Video 2.2 T2V-A14B
- Wan Video 2.2 I2V-A14B
2026-01-28 17:44:01 -03:00
diodiogod
bad0a8c5df feat: add filter preset system
Add ability to save and manage filter presets for quick access to commonly used filter combinations.

Features:
- Save current active filters as named presets
- Apply presets with one click (shows active state with checkmark)
- Toggle presets on/off like regular filters
- Delete presets
- Presets stored in browser localStorage per page
- Default "WAN Models" preset for LoRA page
- Visual feedback: active preset highlighted, filter tags show blue outlines
- Inline "+ Add" button flows with preset tags

UI/UX improvements:
- Preset tags use same compact style as filter tags
- Active preset deactivates when filters manually changed
- Missing tags from presets automatically added to tag list
- Clear filters properly resets preset state
2026-01-28 17:37:47 -03:00
Will Miao
ee25643f68 feat(ui): update model update badge to icon-only design
- Change badge from text label to icon-only for cleaner UI
- Adjust CSS for smaller circular badge with centered icon
- Maintain tooltip functionality for accessibility
- Update badge styling to be more compact and visually consistent
2026-01-28 20:42:32 +08:00
Will Miao
a78868adce feat(ui): add setup guidance when example images path is not configured
When users try to import custom example images without configuring the
download location, show a helpful guidance interface instead of failing
silently or showing an error after the fact.

Changes:
- ShowcaseView.js: Check if example_images_path is configured before
  showing import interface; display setup guidance with open settings button
- showcase.css: Add styles for the setup guidance state
- locales: Add translation keys for all 10 supported languages

Clicking 'Open Settings' will:
1. Open the settings modal
2. Scroll to the Example Images section
3. Highlight the section with a brief animation
4. Focus the input field

Fixes #785
2026-01-28 15:53:58 +08:00
Will Miao
2ccfbaf073 fix(trigger-words): auto-commit pending input on save/blur to prevent data loss, see #785
- Auto-commit input value when clicking save button
- Auto-commit on blur to handle users clicking outside input
- Fixes issue where users would type a trigger word and click save,
  but the word wasn't added because they didn't press Enter first
- Maintains backward compatibility with existing comma-based workflows
2026-01-28 14:34:34 +08:00
Will Miao
565b61d1c2 feat: add Text node with autocomplete support
Introduce a new TextLM node to the Lora Manager extension, providing a simple text input with autocomplete functionality for tags and styles. The node is integrated into the module's import system and node class mappings, enabling users to utilize autocomplete features for efficient prompt creation.
2026-01-28 11:39:05 +08:00
Will Miao
18d3ecb4da refactor(vue-widgets): adopt DOM widget value persistence best practices for randomizer and cycler
- Replace custom onSetValue with ComfyUI's built-in widget.callback
- Remove widget.updateConfig, set widget.value directly
- Add isRestoring flag to break callback → watch → widget.value loop
- Update ComponentWidget types with generic parameter for type-safe callbacks

Refs: docs/dom-widgets/value-persistence-best-practices.md
2026-01-28 00:21:30 +08:00
Will Miao
a02462fff4 refactor(lora-pool-widget): make ComponentWidget generic and remove legacy config
- Add generic type parameter to ComponentWidget<T> for type-safe callbacks
- Remove LegacyLoraPoolConfig interface and migrateConfig function
- Update LoraPoolWidget to use ComponentWidget<LoraPoolConfig>
- Clean up type imports across widget files
2026-01-28 00:04:45 +08:00
Will Miao
ad4574e02f refactor(lora-pool-widget): adopt DOM widget value persistence best practices
- Replace custom onSetValue with ComfyUI's built-in widget.callback
- Remove widget.updateConfig, set widget.value directly
- Add isRestoring flag to break callback → watch → refreshPreview loop
- Update ComponentWidget types with callback and deprecate old methods

Refs: docs/dom-widgets/value-persistence-best-practices.md
2026-01-27 23:49:44 +08:00
Will Miao
822ac046e0 docs: update DOM widget value persistence best practices guide
- Restructure document to clearly separate simple vs complex widget patterns
- Add detailed explanation of ComfyUI's built-in callback mechanism
- Provide complete implementation examples for both patterns
- Remove outdated sync chain diagrams and replace with practical guidance
- Emphasize using DOM element as source of truth for simple widgets
- Document proper use of internal state with widget.callback for complex widgets
2026-01-27 22:51:09 +08:00
Will Miao
55fa31b144 fix(autocomplete): preserve space after comma when inserting / commands 2026-01-27 14:29:53 +08:00
Will Miao
d17808d9e5 feat(autocomplete): add setting to replace underscores with spaces in tag names
fixes #784
2026-01-27 13:01:03 +08:00
Will Miao
5d9f64e43b feat(autocomplete): make /commands work even when tag autocomplete is disabled 2026-01-27 01:05:57 +08:00
Will Miao
5dc5fd5971 feat(tag-search): add alias support to FTS index
- Add aliases column to tags table to store comma-separated alias lists
- Update FTS schema to version 2 with searchable_text field containing tag names and aliases
- Implement schema migration to rebuild index when upgrading from old schema
- Modify search logic to match aliases and return canonical tag with matched alias info
- Update index building to include aliases in searchable text for FTS matching

This enables users to search for tag aliases (e.g., "miku") and get results for the canonical tag (e.g., "hatsune_miku") with indication of which alias was matched.
2026-01-27 00:36:06 +08:00
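
The alias-aware index can be sketched with SQLite FTS5 from the standard library. This is a simplified illustration: the real TagFTSIndex schema, versioning, and ranking are more involved, and the helper names here are assumptions.

```python
import sqlite3

# Hedged sketch of the searchable_text approach: canonical name plus
# aliases in one indexed field, so alias queries hit the canonical tag.
def build_index(rows):
    """rows: iterable of (tag_name, aliases) with comma-separated aliases."""
    con = sqlite3.connect(":memory:")
    con.execute(
        "CREATE VIRTUAL TABLE tags_fts USING fts5(tag_name, searchable_text)"
    )
    for name, aliases in rows:
        searchable = " ".join(
            [name] + [a.strip() for a in aliases.split(",") if a.strip()]
        )
        con.execute("INSERT INTO tags_fts VALUES (?, ?)", (name, searchable))
    return con

def search(con, query):
    """Prefix search over names and aliases, returning canonical names."""
    cur = con.execute(
        "SELECT tag_name FROM tags_fts WHERE tags_fts MATCH ?",
        ("searchable_text:" + query + "*",),
    )
    return [row[0] for row in cur.fetchall()]
```

A query for "nekomimi" would return the canonical tag even though the user never typed it, matching the "miku" → "hatsune_miku" behavior described above.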
Will Miao
0ff551551e fix: enable middle mouse pan in autocomplete text widget
Remove pointer event .stop modifiers from textarea to allow events
to propagate to container where forwardMiddleMouseToCanvas forwards them
to ComfyUI canvas for pan functionality
2026-01-26 23:32:33 +08:00
Will Miao
9032226724 fix(autocomplete): fix value persistence in DOM text widgets
Remove multiple sources of truth and async sync chains that caused
values to be lost during load/switch workflow or reload page.

Changes:
- Remove internalValue state variable from main.ts
- Update getValue/setValue to read/write DOM directly via widget.inputEl
- Remove textValue reactive ref and v-model from Vue component
- Remove serializeValue, onSetValue, and watch callbacks
- Register textarea reference on mount, clean up on unmount
- Simplify AutocompleteTextWidgetInterface

Follows ComfyUI built-in addMultilineWidget pattern:
- Single source of truth (DOM element value only)
- Direct sync (no intermediate variables or async chains)

Also adds documentation:
- docs/dom-widgets/value-persistence-best-practices.md
- docs/dom-widgets/README.md
- Update docs/dom_widget_dev_guide.md with reference
2026-01-26 23:24:16 +08:00
Will Miao
7249c9fd4b refactor(autocomplete): remove old CSV fallback, use TagFTSIndex exclusively
Remove all autocomplete.txt parsing logic and fallback code, simplifying
the service to use only TagFTSIndex for Danbooru/e621 tag search
with category filtering.

- Remove WordEntry dataclass and _words_cache, _file_path attributes
- Remove _determine_file_path(), get_file_path(), load_words(), save_words(),
  get_content(), _parse_csv_content() methods
- Simplify search_words() to only use TagFTSIndex, always returning
  enriched results with {tag_name, category, post_count}
- Remove GET/POST /api/lm/custom-words endpoints (unused)
- Keep GET /api/lm/custom-words/search for frontend autocomplete
- Rewrite tests to focus on TagFTSIndex integration

This reduces code by 446 lines and removes untested pysssss plugin
integration. Feature is unreleased so no backward compatibility needed.
2026-01-26 20:36:00 +08:00
Will Miao
31d94d7ea2 fix(test): fix npm test 2026-01-26 17:35:20 +08:00
pixelpaws
b28f148ce8 Merge pull request #780 from willmiao/fix-symlink
Fix symlink
2026-01-26 17:33:47 +08:00
pixelpaws
93cd0b54dc Merge branch 'main' into fix-symlink 2026-01-26 17:29:31 +08:00
Will Miao
7b0c6c8bab refactor(cache): reorganize cache directory structure with automatic legacy cleanup
- Centralize cache path resolution in new py/utils/cache_paths.py module
- Migrate legacy cache files to organized structure: {settings_dir}/cache/{model|recipe|fts|symlink}/
- Automatically clean up legacy files after successful migration with integrity verification
- Update Config symlink cache to use new path and migrate from old location
- Simplify service classes (PersistentModelCache, PersistentRecipeCache, RecipeFTSIndex, TagFTSIndex) to use centralized migration logic
- Add comprehensive test coverage for cache paths and automatic cleanup
2026-01-26 16:12:08 +08:00
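
The centralized layout `{settings_dir}/cache/{model|recipe|fts|symlink}/` can be sketched as a single resolver. The function signature is an assumption; only the directory structure comes from the commit.

```python
from pathlib import Path

# Hedged sketch of centralized cache path resolution as described in
# py/utils/cache_paths.py; the real module also handles migration.
CACHE_KINDS = ("model", "recipe", "fts", "symlink")

def get_cache_dir(settings_dir, kind):
    """Return {settings_dir}/cache/{kind}/, creating it on demand."""
    if kind not in CACHE_KINDS:
        raise ValueError(f"unknown cache kind: {kind}")
    path = Path(settings_dir) / "cache" / kind
    path.mkdir(parents=True, exist_ok=True)
    return path
```

Centralizing this in one module lets every service (model cache, recipe cache, FTS indexes, symlink cache) agree on locations without duplicating path logic.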
Will Miao
e14afde4b3 feat(autocomplete): standardize path separators and expand embedding detection
- Change path separators from backslashes to forward slashes in embedding autocomplete
- Extend embedding detection to also trigger when searchType is 'embeddings'
- Improves cross-platform compatibility and makes embedding autocomplete more reliable
2026-01-26 16:03:00 +08:00
Will Miao
4b36d60e46 feat(prompt): enhance placeholder with quick tag search instructions
Update the placeholder text in the PromptLM class to include guidance for quick tag search. The placeholder now reads "Enter prompt... /char, /artist for quick tag search", giving users an immediate cue on how to use tag search directly within the input field and making the advanced functionality more discoverable.
2026-01-26 14:42:47 +08:00
Will Miao
6ef6c116e4 fix(autocomplete): hide embedding preview tooltip after selection
Remove searchType check from prompt behavior's hidePreview method.
When an embedding was selected, the input event dispatched by
insertSelection caused searchType to change before hide() was called,
preventing the preview tooltip from being hidden.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 14:13:16 +08:00
Will Miao
42f35be9d3 feat(autocomplete): add Danbooru/e621 tag search with category filtering
- Add TagFTSIndex service for fast SQLite FTS5-based tag search (221k+ tags)
- Implement command-mode autocomplete: /char, /artist, /general, /meta, etc.
- Support category filtering via category IDs or names
- Return enriched results with post counts and category badges
- Add UI styling for category badges and command list dropdown
2026-01-26 13:51:45 +08:00
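
Command-mode parsing can be sketched as a small lookup from `/command` to a category ID. The category IDs follow Danbooru's public scheme (0 general, 1 artist, 3 copyright, 4 character, 5 meta); the parser itself is a hypothetical simplification.

```python
# Hedged sketch of /command parsing for category-filtered tag search.
COMMANDS = {
    "/general": 0,
    "/artist": 1,
    "/copyright": 3,
    "/char": 4,
    "/meta": 5,
}

def parse_command(text):
    """Split '/char miku' into (category_id, 'miku'); (None, text) otherwise."""
    if not text.startswith("/"):
        return None, text
    cmd, _, rest = text.partition(" ")
    if cmd in COMMANDS:
        return COMMANDS[cmd], rest.strip()
    return None, text
```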
Will Miao
d063d48417 feat(symlink): add deep validation for symlink cache invalidation
Detects symlink changes at any depth, not just at root level. Uses two-tier validation:
- Fingerprint check for new symlinks
- Deep mapping validation for removed/retargeted symlinks
2026-01-26 09:30:10 +08:00
Will Miao
c9e305397c feat: enhance symlink detection and cache invalidation
- Add `_entry_is_symlink` method to detect symlinks and Windows junctions
- Include first-level symlinks in fingerprint for better cache invalidation
- Re-enable preview path validation for security
- Update tests to verify retargeted symlinks trigger rescan
2026-01-25 19:14:16 +08:00
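
Detecting both POSIX symlinks and Windows junctions can be sketched as below. This is an illustration of the `_entry_is_symlink` idea, not the actual implementation: junctions are reparse points that `is_symlink()` does not report, so a Windows-specific attribute check is added.

```python
import os
import stat

# Hedged sketch: symlink plus Windows-junction detection for a
# directory entry from os.scandir().
def entry_is_symlink(entry):
    """Return True for symlinks and, on Windows, junction points."""
    if entry.is_symlink():
        return True
    if os.name == "nt":
        try:
            st = entry.stat(follow_symlinks=False)
        except OSError:
            return False
        # Junctions carry FILE_ATTRIBUTE_REPARSE_POINT but are not symlinks.
        reparse = getattr(stat, "FILE_ATTRIBUTE_REPARSE_POINT", 0x400)
        return bool(getattr(st, "st_file_attributes", 0) & reparse)
    return False
```

Including such entries in the directory fingerprint is what lets a retargeted link trigger a rescan instead of serving stale cache data.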
Will Miao
6142b3dc0c feat: consolidate ComfyUI settings and add custom words autocomplete toggle
Create unified settings.js extension to centralize all Lora Manager ComfyUI
settings registration, eliminating code duplication across multiple files.

Add new setting "Enable Custom Words Autocomplete in Prompt Nodes" (enabled
by default) to control custom words autocomplete in prompt node text widgets.
When disabled, only 'emb:' prefix triggers embeddings autocomplete.

Changes:
- Create web/comfyui/settings.js with all three settings:
  * Trigger Word Wheel Sensitivity (existing)
  * Auto path correction (existing)
  * Enable Custom Words Autocomplete in Prompt Nodes (new)
- Refactor autocomplete.js to respect the new setting
- Update trigger_word_toggle.js to import from settings.js
- Update usage_stats.js to import from settings.js
2026-01-25 12:53:41 +08:00
Will Miao
d5a2bd1e24 feat: add custom words autocomplete support for Prompt node
Adds custom words autocomplete functionality similar to comfyui-custom-scripts,
with the following features:

Backend (Python):
- Create CustomWordsService for CSV parsing and priority-based search
- Add API endpoints: GET/POST /api/lm/custom-words and
  GET /api/lm/custom-words/search
- Share storage with pysssss plugin (checks for their user/autocomplete.txt first)
- Fallback to Lora Manager's user directory for storage

Frontend (JavaScript/Vue):
- Add 'custom_words' and 'prompt' model types to autocomplete system
- Prompt node now supports dual-mode autocomplete:
  * Type 'emb:' prefix → search embeddings
  * Type normally → search custom words (no prefix required)
- Add AUTOCOMPLETE_TEXT_PROMPT widget type
- Update Vue component and composable types

Key Features:
- CSV format: word[,priority] compatible with danbooru-tags.txt
- Priority-based sorting: 20% top priority + prefix + include matches
- Preview tooltip for embeddings (not for custom words)
- Dynamic endpoint switching based on prefix detection

Breaking Changes:
- Prompt (LoraManager) node widget type changed from
  AUTOCOMPLETE_TEXT_EMBEDDINGS to AUTOCOMPLETE_TEXT_PROMPT
- Removed standalone web/comfyui/prompt.js (integrated into main widgets)

Fixes comfy_dir path calculation by prioritizing folder_paths.base_path
from ComfyUI when available, with fallback to computed path.
2026-01-25 12:24:32 +08:00
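
The `word[,priority]` CSV format and priority-aware search can be sketched as follows. The exact ranking blend (the "20% top priority + prefix + include" rule) is simplified here to prefix-before-substring with priority tiebreaks, so treat this as an approximation rather than the shipped algorithm.

```python
# Hedged sketch of CustomWordsService parsing and search; simplified
# ranking compared to the real priority blend.
def parse_custom_words(text):
    """Parse 'word[,priority]' lines into (word, priority) pairs."""
    words = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        word, _, prio = line.partition(",")
        try:
            priority = int(prio) if prio else 0
        except ValueError:
            priority = 0
        words.append((word.strip(), priority))
    return words

def search_words(words, query, limit=10):
    """Prefix matches first, then substring matches; ties by priority."""
    q = query.lower()
    prefix = [(w, p) for w, p in words if w.lower().startswith(q)]
    include = [
        (w, p) for w, p in words
        if q in w.lower() and not w.lower().startswith(q)
    ]
    ranked = (
        sorted(prefix, key=lambda wp: -wp[1])
        + sorted(include, key=lambda wp: -wp[1])
    )
    return [w for w, _ in ranked[:limit]]
```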
Will Miao
1f6fc59aa2 feat(autocomplete-text-widget): adjust padding for DOM mode text input
Removed excessive top padding in DOM mode to improve visual alignment and consistency with other form elements. The change reduces the top padding from 24px to 8px, eliminating unnecessary vertical space while maintaining the same bottom padding and overall styling.
2026-01-25 10:47:15 +08:00
Will Miao
41101ad5c6 refactor(nodes): standardize node class names with LM suffix
Rename all node classes to use consistent 'LM' suffix pattern:
- LoraCyclerNode → LoraCyclerLM
- LoraManagerLoader → LoraLoaderLM
- LoraManagerTextLoader → LoraTextLoaderLM
- LoraStacker → LoraStackerLM
- LoraRandomizerNode → LoraRandomizerLM
- LoraPoolNode → LoraPoolLM
- WanVideoLoraSelectFromText → WanVideoLoraTextSelectLM
- DebugMetadata → DebugMetadataLM
- TriggerWordToggle → TriggerWordToggleLM
- PromptLoraManager → PromptLM

Updated:
- Core node class definitions (9 files)
- NODE_CLASS_MAPPINGS in __init__.py
- Node type mappings in node_extractors.py
- All related test imports and references
- Logger prefixes for consistency

Frontend extension names remain unchanged (LoraManager.LoraStacker, etc.)
2026-01-25 10:38:10 +08:00
Will Miao
b71b3f99dc feat(vue-widgets): add max height constraint for LoRA autocomplete widgets
Introduce AUTOCOMPLETE_TEXT_WIDGET_MAX_HEIGHT constant and apply it to autocomplete text widgets when modelType is 'loras'. This ensures LoRA-specific widgets have a consistent maximum height of 100px, improving UI consistency and preventing excessive widget expansion.
2026-01-25 09:59:04 +08:00
Will Miao
d655fb8008 feat(nodes): improve placeholder text for LoRA autocomplete input 2026-01-25 09:10:16 +08:00
Will Miao
194f2f702c refactor: replace comfy built-in text widget with custom autocomplete text widget for better event handler binding
- Change `STRING` input type to `AUTOCOMPLETE_TEXT_LORAS` in LoraManagerLoader, LoraStacker, and WanVideoLoraSelectLM nodes for LoRA syntax input
- Change `STRING` input type to `AUTOCOMPLETE_TEXT_EMBEDDINGS` in PromptLoraManager node for prompt input
- Remove manual multiline, autocomplete, and dynamicPrompts configurations in favor of built-in autocomplete types
- Update placeholder text for consistency across nodes
- Remove unused `setupInputWidgetWithAutocomplete` mock from frontend tests
- Add Vue app cleanup logic to prevent memory leaks in widget management
2026-01-25 08:30:06 +08:00
Will Miao
fad43ad003 feat(ui): restrict drag events to left mouse button only, fixes #777
Add button condition checks in initDrag and initHeaderDrag functions to ensure only left mouse button (button 0) triggers drag interactions. This prevents conflicts with middle button canvas dragging and right button context menu actions, improving user experience and interaction clarity.
2026-01-24 22:26:17 +08:00
Will Miao
b05762b066 fix(cycler): prevent node drag when interacting with index input in Vue DOM mode
Add @pointerdown.stop, @pointermove.stop, @pointerup.stop modifiers to the
index input element to stop pointer event propagation to parent node.
This prevents unintended node dragging when user clicks/drags on the index
input for value adjustment or text selection.

Follows the pattern used by ComfyUI built-in widgets like
WidgetLayoutField and WidgetTextarea.
2026-01-24 12:16:29 +08:00
Will Miao
13b18ac85f refactor(update-modal): consolidate duplicate CSS files and fix changelog alignment
- Merged static/css/components/update-modal.css into static/css/components/modal/update-modal.css
- Fixed changelog item text alignment: added padding-left to .changelog-content and adjusted .changelog-item.latest padding
- Removed duplicate #updateBtn state definitions
- Deleted obsolete static/css/components/update-modal.css file
- Removed duplicate CSS import from style.css
2026-01-23 23:38:31 +08:00
Will Miao
eb2af454cc feat: add SQLite-based persistent recipe cache for faster startup
Introduce a new PersistentRecipeCache service that stores recipe metadata in an SQLite database to significantly reduce application startup time. The cache eliminates the need to walk directories and parse JSON files on each launch by persisting recipe data between sessions.

Key features:
- Thread-safe singleton implementation with library-specific instances
- Automatic schema initialization and migration support
- JSON serialization for complex recipe fields (LoRAs, checkpoints, generation parameters, tags)
- File system monitoring with mtime/size validation for cache invalidation
- Environment variable toggle (LORA_MANAGER_DISABLE_PERSISTENT_CACHE) for debugging
- Comprehensive test suite covering save/load cycles, cache invalidation, and edge cases

The cache improves user experience by enabling near-instantaneous recipe loading after the initial cache population, while maintaining data consistency through file change detection.
2026-01-23 22:56:38 +08:00
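
The mtime/size validation that keeps the cache consistent can be sketched with a small SQLite-backed class. This is an illustrative reduction: the real PersistentRecipeCache stores full recipe metadata, handles schema migration, and is a thread-safe singleton.

```python
import os
import sqlite3

# Hedged sketch of file-change-validated caching: serve cached data
# only while the source file's mtime and size are unchanged.
class FileValidatedCache:
    def __init__(self, db_path=":memory:"):
        self.con = sqlite3.connect(db_path)
        self.con.execute(
            "CREATE TABLE IF NOT EXISTS entries "
            "(path TEXT PRIMARY KEY, mtime REAL, size INTEGER, data TEXT)"
        )

    def put(self, path, data):
        st = os.stat(path)
        self.con.execute(
            "REPLACE INTO entries VALUES (?, ?, ?, ?)",
            (path, st.st_mtime, st.st_size, data),
        )

    def get(self, path):
        """Return cached data, or None if the file changed on disk."""
        row = self.con.execute(
            "SELECT mtime, size, data FROM entries WHERE path = ?", (path,)
        ).fetchone()
        if row is None:
            return None
        st = os.stat(path)
        if (st.st_mtime, st.st_size) != (row[0], row[1]):
            return None  # stale: file was modified, force a re-parse
        return row[2]
```

On startup, a hit means the JSON file does not need to be re-parsed; a miss (new or modified file) falls back to the normal scan path.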
Will Miao
7bba24c19f feat(update-modal): display last 5 release notes instead of single
- Modified backend to fetch last 5 releases from GitHub API
- Updated frontend to iterate through and display multiple releases
- Added latest badge and publish date styling
- Added update.latestBadge translation key to all locales
- Maintains backward compatibility for single changelog display
2026-01-23 22:22:48 +08:00
Will Miao
0bb75fdf77 feat(trigger-word-toggle): use trigger_words directly when it differs from original message 2026-01-23 09:50:53 +08:00
Will Miao
7c7d2e12b5 feat: add Lora Cycler example workflow with JSON and preview image
Add a new example workflow for Lora Cycler, including a JSON configuration file and a preview image. The workflow demonstrates the use of LoraManager nodes for positive and negative prompts, along with VAEDecode, KSampler, and PreviewImage nodes. This provides a ready-to-use template for generating images with multiple LoRA models and conditioning adjustments.
2026-01-22 21:23:14 +08:00
Will Miao
2121054cb9 feat(lora-cycler): implement batch queue synchronization with dual-index mechanism
- Add execution_index and next_index fields to CyclerConfig interface
- Introduce beforeQueued hook in widget to handle index shifting for batch executions
- Use execution_index when provided, fall back to current_index for single executions
- Track execution state with Symbol to differentiate first vs subsequent executions
- Update state management to handle dual-index logic for proper LoRA cycling in batch queues
2026-01-22 21:22:52 +08:00
Will Miao
bf0291ec0e test(nodeModeChange): fix tests after mode change refactoring
After refactoring mode change logic from lora_stacker.js to main.ts
(compiled to lora-manager-widgets.js), updateConnectedTriggerWords became
a bundled inline function, making the mock from utils.js ineffective.

Changes:
- Import Vue widgets module in test to register mode change handlers
- Call both extensions' beforeRegisterNodeDef when setting up nodes
- Fix test node structure with proper widget setup (input widget with
  options property and loras widget with test data)
- Update test assertions to verify mode setter configuration via property
  descriptor check instead of mocking bundled functions

Also fix Lora Cycler widget min height from 316 to 314 pixels.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-22 20:56:41 +08:00
Will Miao
932d85617c refactor(lora-provider): extract mode change logic to shared TypeScript module
- Extract common mode change logic from lora_randomizer.js and lora_stacker.js
  into new mode-change-handler.ts TypeScript module
- Add LORA_PROVIDER_NODE_TYPES constant to centralize LoRA provider node types
- Update getActiveLorasFromNode in utils.js to support Lora Cycler's
  cycler_config widget (single current_lora_filename)
- Update getConnectedInputStackers and updateDownstreamLoaders to use
  isLoraProviderNode helper instead of hardcoded class checks
- Register mode change handlers in main.ts for all LoRA provider nodes
  (Lora Stacker, Lora Randomizer, Lora Cycler)
- Add value change callback to Lora Cycler widget to trigger
  updateDownstreamLoaders when current_lora_filename changes
- Remove duplicate mode change logic from lora_stacker.js
- Delete lora_randomizer.js (logic now centralized)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-22 20:46:09 +08:00
Will Miao
6832469889 test: temporarily disable symlink security test due to bug
Disable the test `test_preview_handler_forbids_paths_outside_active_library` by commenting it out. This test is being temporarily disabled because of a symlink scan bug that needs to be fixed before the test can be safely re-enabled.
2026-01-22 20:28:57 +08:00
Will Miao
b0f852cc6c refactor(lora-cycler): remove sort by control, always use filename
Removed the sort by selection UI from the Lora Cycler widget and
hardcoded the sorting to always use filename. This simplifies the
interface while maintaining all sorting functionality.

Changes:
- Removed sort_by prop/emit from LoraCyclerSettingsView
- Removed sort tabs UI and associated styles
- Hardcoded sort_by = "filename" in backend node
- Removed sort by handling logic from LoraCyclerWidget
- Updated widget height to accommodate removal
2026-01-22 19:58:51 +08:00
Will Miao
d1c65a6186 fix(dual-range-slider): allow equal min/max values in Lora Randomizer (#775)
Add allowEqualValues prop to DualRangeSlider component (default: false for backward compatibility).
When enabled, removes the step offset constraint that prevented min and max handles from being set to the same value.

Applied to all range sliders in LoraRandomizerSettingsView:
- LoRA Count range slider
- Model Strength Range slider
- Recommended Strength Scale slider
- Clip Strength Range slider

Backend already handles equal values correctly via rng.uniform().
2026-01-22 16:47:39 +08:00
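
The backend note holds because `uniform(a, b)` with `a == b` degenerates to the constant `a`, so no special casing is needed when both slider handles coincide:

```python
import random

# With equal bounds, uniform(a, b) = a + (b - a) * random() = a exactly.
rng = random.Random(42)
strength = rng.uniform(0.8, 0.8)
```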
Will Miao
6fbea77137 feat(lora-cycler): add sequential LoRA cycling through filtered pool
Add Lora Cycler node that cycles through LoRAs sequentially from a filtered pool. Supports configurable sort order, strength settings, and persists cycle progress across workflow save/load.

Backend:
- New LoraCyclerNode with cycle() method
- New /api/lm/loras/cycler-list endpoint
- LoraService.get_cycler_list() for filtered/sorted list

Frontend:
- LoraCyclerWidget with Vue.js component
- useLoraCyclerState composable
- LoraCyclerSettingsView for UI display
2026-01-22 15:36:32 +08:00
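
The core cycling behavior can be sketched as a pure function: pick the LoRA at the persisted index, then advance with wraparound. This is a hypothetical reduction of the node's `cycle()` method; the real node resolves its pool through the `/api/lm/loras/cycler-list` endpoint and carries strength settings.

```python
# Hedged sketch of sequential cycling with persisted progress.
def cycle(pool, current_index):
    """Return (selected_lora, next_index), wrapping around the pool."""
    if not pool:
        return None, 0
    ordered = sorted(pool)  # sorted by filename
    index = current_index % len(ordered)
    return ordered[index], (index + 1) % len(ordered)
```

Persisting `next_index` in the workflow is what lets the cycle resume where it left off after save/load.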
Will Miao
17c5583297 fix(fts): fix multi-word field-restricted search query building
Fixes a critical bug in FTS query building where multi-word searches
with field restrictions incorrectly used OR between all word+field
combinations instead of requiring ALL words to match within at least
one field.

Example: searching "cute cat" in {title, tags} previously produced:
  title:cute* OR title:cat* OR tags:cute* OR tags:cat*
Which matched recipes with ANY word in ANY field.

Now produces:
  (title:cute* title:cat*) OR (tags:cute* tags:cat*)
Which requires ALL words to match within at least one field.

Also adds fallback to fuzzy search when FTS returns empty results,
improving search reliability.

Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-22 10:25:03 +08:00
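
The corrected grouping can be sketched as a small query builder: AND the words within each field, then OR across fields, so all words must match within at least one field. The function name is hypothetical; the output format mirrors the examples in the commit message.

```python
# Hedged sketch of the fixed FTS query building described above.
def build_fts_query(words, fields):
    """Build an FTS5 match expression from search words and fields."""
    groups = []
    for field in fields:
        # AND within a field: every word must appear in this field.
        terms = " ".join(f"{field}:{w}*" for w in words)
        groups.append(f"({terms})")
    # OR across fields: one field containing all the words suffices.
    return " OR ".join(groups)
```

The buggy version ORed every word+field pair, so any single word in any field produced a hit; the grouped form enforces the all-words constraint per field.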
149 changed files with 235492 additions and 2019 deletions

@@ -1,15 +1,17 @@
 try: # pragma: no cover - import fallback for pytest collection
     from .py.lora_manager import LoraManager
-    from .py.nodes.lora_loader import LoraManagerLoader, LoraManagerTextLoader
-    from .py.nodes.trigger_word_toggle import TriggerWordToggle
-    from .py.nodes.prompt import PromptLoraManager
-    from .py.nodes.lora_stacker import LoraStacker
+    from .py.nodes.lora_loader import LoraLoaderLM, LoraTextLoaderLM
+    from .py.nodes.trigger_word_toggle import TriggerWordToggleLM
+    from .py.nodes.prompt import PromptLM
+    from .py.nodes.text import TextLM
+    from .py.nodes.lora_stacker import LoraStackerLM
     from .py.nodes.save_image import SaveImageLM
-    from .py.nodes.debug_metadata import DebugMetadata
+    from .py.nodes.debug_metadata import DebugMetadataLM
     from .py.nodes.wanvideo_lora_select import WanVideoLoraSelectLM
-    from .py.nodes.wanvideo_lora_select_from_text import WanVideoLoraSelectFromText
-    from .py.nodes.lora_pool import LoraPoolNode
-    from .py.nodes.lora_randomizer import LoraRandomizerNode
+    from .py.nodes.wanvideo_lora_select_from_text import WanVideoLoraTextSelectLM
+    from .py.nodes.lora_pool import LoraPoolLM
+    from .py.nodes.lora_randomizer import LoraRandomizerLM
+    from .py.nodes.lora_cycler import LoraCyclerLM
     from .py.metadata_collector import init as init_metadata_collector
 except (
     ImportError
@@ -22,44 +24,50 @@ except (
     if str(package_root) not in sys.path:
         sys.path.append(str(package_root))
-    PromptLoraManager = importlib.import_module("py.nodes.prompt").PromptLoraManager
+    PromptLM = importlib.import_module("py.nodes.prompt").PromptLM
+    TextLM = importlib.import_module("py.nodes.text").TextLM
     LoraManager = importlib.import_module("py.lora_manager").LoraManager
-    LoraManagerLoader = importlib.import_module(
+    LoraLoaderLM = importlib.import_module(
         "py.nodes.lora_loader"
-    ).LoraManagerLoader
-    LoraManagerTextLoader = importlib.import_module(
+    ).LoraLoaderLM
+    LoraTextLoaderLM = importlib.import_module(
         "py.nodes.lora_loader"
-    ).LoraManagerTextLoader
-    TriggerWordToggle = importlib.import_module(
+    ).LoraTextLoaderLM
+    TriggerWordToggleLM = importlib.import_module(
         "py.nodes.trigger_word_toggle"
-    ).TriggerWordToggle
-    LoraStacker = importlib.import_module("py.nodes.lora_stacker").LoraStacker
+    ).TriggerWordToggleLM
+    LoraStackerLM = importlib.import_module("py.nodes.lora_stacker").LoraStackerLM
     SaveImageLM = importlib.import_module("py.nodes.save_image").SaveImageLM
-    DebugMetadata = importlib.import_module("py.nodes.debug_metadata").DebugMetadata
+    DebugMetadataLM = importlib.import_module("py.nodes.debug_metadata").DebugMetadataLM
     WanVideoLoraSelectLM = importlib.import_module(
         "py.nodes.wanvideo_lora_select"
     ).WanVideoLoraSelectLM
-    WanVideoLoraSelectFromText = importlib.import_module(
+    WanVideoLoraTextSelectLM = importlib.import_module(
         "py.nodes.wanvideo_lora_select_from_text"
-    ).WanVideoLoraSelectFromText
-    LoraPoolNode = importlib.import_module("py.nodes.lora_pool").LoraPoolNode
-    LoraRandomizerNode = importlib.import_module(
+    ).WanVideoLoraTextSelectLM
+    LoraPoolLM = importlib.import_module("py.nodes.lora_pool").LoraPoolLM
+    LoraRandomizerLM = importlib.import_module(
         "py.nodes.lora_randomizer"
-    ).LoraRandomizerNode
+    ).LoraRandomizerLM
+    LoraCyclerLM = importlib.import_module(
+        "py.nodes.lora_cycler"
+    ).LoraCyclerLM
     init_metadata_collector = importlib.import_module("py.metadata_collector").init
 NODE_CLASS_MAPPINGS = {
-    PromptLoraManager.NAME: PromptLoraManager,
-    LoraManagerLoader.NAME: LoraManagerLoader,
-    LoraManagerTextLoader.NAME: LoraManagerTextLoader,
-    TriggerWordToggle.NAME: TriggerWordToggle,
-    LoraStacker.NAME: LoraStacker,
+    PromptLM.NAME: PromptLM,
+    TextLM.NAME: TextLM,
+    LoraLoaderLM.NAME: LoraLoaderLM,
+    LoraTextLoaderLM.NAME: LoraTextLoaderLM,
+    TriggerWordToggleLM.NAME: TriggerWordToggleLM,
+    LoraStackerLM.NAME: LoraStackerLM,
     SaveImageLM.NAME: SaveImageLM,
-    DebugMetadata.NAME: DebugMetadata,
+    DebugMetadataLM.NAME: DebugMetadataLM,
     WanVideoLoraSelectLM.NAME: WanVideoLoraSelectLM,
-    WanVideoLoraSelectFromText.NAME: WanVideoLoraSelectFromText,
-    LoraPoolNode.NAME: LoraPoolNode,
LoraRandomizerNode.NAME: LoraRandomizerNode,
WanVideoLoraTextSelectLM.NAME: WanVideoLoraTextSelectLM,
LoraPoolLM.NAME: LoraPoolLM,
LoraRandomizerLM.NAME: LoraRandomizerLM,
LoraCyclerLM.NAME: LoraCyclerLM,
}
WEB_DIRECTORY = "./web/comfyui"

View File

@@ -0,0 +1,28 @@
# DOM Widgets Documentation
Documentation for custom DOM widget development in ComfyUI LoRA Manager.
## Files
- **[Value Persistence Best Practices](value-persistence-best-practices.md)** - Essential guide for implementing text input DOM widgets that persist values correctly
## Key Lessons
### Common Anti-Patterns
**Don't**: Create internal state variables
**Don't**: Use v-model for text inputs
**Don't**: Add serializeValue, onSetValue callbacks
**Don't**: Watch props.widget.value
### Best Practices
**Do**: Use DOM element as single source of truth
**Do**: Store DOM reference on widget.inputEl
**Do**: Direct getValue/setValue to DOM
**Do**: Clean up reference on unmount
## Related Documentation
- [DOM Widget Development Guide](../dom_widget_dev_guide.md) - Comprehensive guide for building DOM widgets
- [ComfyUI Built-in Example](../../../../code/ComfyUI_frontend/src/renderer/extensions/vueNodes/widgets/composables/useStringWidget.ts) - Reference implementation

View File

@@ -0,0 +1,225 @@
# DOM Widget Value Persistence - Best Practices
## Overview
DOM widgets require different persistence patterns depending on their complexity. This document covers two patterns:
1. **Simple Text Widgets**: DOM element as source of truth (e.g., textarea, input)
2. **Complex Widgets**: Internal value with `widget.callback` (e.g., LoraPoolWidget, RandomizerWidget)
## Understanding ComfyUI's Built-in Callback Mechanism
When `widget.value` is set (e.g., during workflow load), ComfyUI's `domWidget.ts` triggers this flow:
```typescript
// From ComfyUI_frontend/src/scripts/domWidget.ts:146-149
set value(v: V) {
this.options.setValue?.(v) // 1. Update internal state
this.callback?.(this.value) // 2. Notify listeners for UI updates
}
```
This means:
- `setValue()` handles storing the value
- `widget.callback()` is automatically called to notify the UI
- You don't need custom callback mechanisms like `onSetValue`
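The same flow, stripped down to a runnable sketch (the `SketchWidget` class and its `Options` type are illustrative stand-ins for this example, not the real ComfyUI classes):

```typescript
// Minimal model of the domWidget value setter: setValue stores,
// then the built-in callback fires with the stored value.
type Options<V> = {
  getValue?: () => V
  setValue?: (v: V) => void
}

class SketchWidget<V> {
  callback?: (v: V) => void
  constructor(private options: Options<V>) {}

  get value(): V {
    return this.options.getValue?.() as V
  }

  set value(v: V) {
    this.options.setValue?.(v)   // 1. update internal state
    this.callback?.(this.value)  // 2. notify listeners for UI updates
  }
}

// Demo: one assignment drives both the store and the notification.
let stored = ''
const seen: string[] = []
const w = new SketchWidget<string>({
  getValue: () => stored,
  setValue: (v) => { stored = v },
})
w.callback = (v) => seen.push(v)
w.value = 'hello'
```

Note that the callback receives `this.value`, i.e. whatever `getValue()` returns after the store, which is why `setValue` must have completed before the UI is notified.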
---
## Pattern 1: Simple Text Input Widgets
For widgets where the value IS the DOM element's text content (textarea, input fields).
### When to Use
- Single text input/textarea widgets
- Value is a simple string
- No complex state management needed
### Implementation
**main.ts:**
```typescript
const widget = node.addDOMWidget(name, type, container, {
getValue() {
return widget.inputEl?.value ?? ''
},
setValue(v: string) {
if (widget.inputEl) {
widget.inputEl.value = v ?? ''
}
}
})
```
**Vue Component:**
```typescript
onMounted(() => {
if (textareaRef.value) {
props.widget.inputEl = textareaRef.value
}
})
onUnmounted(() => {
if (props.widget.inputEl === textareaRef.value) {
props.widget.inputEl = undefined
}
})
```
### Why This Works
- Single source of truth: the DOM element
- `getValue()` reads directly from DOM
- `setValue()` writes directly to DOM
- No sync issues between multiple state variables
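Pattern 1's round trip can be exercised without a browser by letting a plain object stand in for the textarea (`MockInput` and `makeTextWidget` are hypothetical names for this illustration):

```typescript
// Sketch of Pattern 1: the "DOM element" is the only state.
type MockInput = { value: string }

type TextWidget = {
  inputEl?: MockInput
  getValue: () => string
  setValue: (v: string) => void
}

function makeTextWidget(): TextWidget {
  const widget: TextWidget = {
    inputEl: undefined,
    // Reads go straight to the element; there is no shadow state.
    getValue: () => widget.inputEl?.value ?? '',
    setValue: (v: string) => {
      if (widget.inputEl) widget.inputEl.value = v ?? ''
    },
  }
  return widget
}

const widget = makeTextWidget()
widget.inputEl = { value: '' } // what onMounted() would assign

widget.setValue('a cat, highres')
// getValue just reflects whatever the element currently holds.
const roundTrip = widget.getValue()
```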
---
## Pattern 2: Complex Widgets
For widgets with structured data (JSON configs, arrays, objects) where the value cannot be stored in a DOM element.
### When to Use
- Value is a complex object/array (e.g., `{ loras: [...], settings: {...} }`)
- Multiple UI elements contribute to the value
- Vue reactive state manages the UI
### Implementation
**main.ts:**
```typescript
let internalValue: MyConfig | undefined
const widget = node.addDOMWidget(name, type, container, {
getValue() {
return internalValue
},
setValue(v: MyConfig) {
internalValue = v
// NO custom onSetValue needed - widget.callback is called automatically
},
serialize: true // Ensure value is saved with workflow
})
```
**Vue Component:**
```typescript
const config = ref<MyConfig>(getDefaultConfig())
onMounted(() => {
// Set up callback for UI updates when widget.value changes externally
// (e.g., workflow load, undo/redo)
props.widget.callback = (newValue: MyConfig) => {
if (newValue) {
config.value = newValue
}
}
// Restore initial value if workflow was already loaded
if (props.widget.value) {
config.value = props.widget.value
}
})
// When UI changes, update widget value
function onConfigChange(newConfig: MyConfig) {
config.value = newConfig
props.widget.value = newConfig // This also triggers callback
}
```
### Why This Works
1. **Clear separation**: `internalValue` stores the data, Vue ref manages the UI
2. **Built-in callback**: ComfyUI calls `widget.callback()` automatically after `setValue()`
3. **Bidirectional sync**:
- External → UI: `setValue()` updates `internalValue`, `callback()` updates Vue ref
- UI → External: User interaction updates Vue ref, which updates `widget.value`
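A self-contained sketch of this bidirectional sync, with a plain variable standing in for the Vue ref (`MyConfig` and `SketchWidget` are illustrative shapes for this example, not real APIs):

```typescript
type MyConfig = { loras: string[] }

interface SketchWidget {
  callback?: (v: MyConfig) => void
  value: MyConfig | undefined
}

let internalValue: MyConfig | undefined

const widget: SketchWidget = {
  callback: undefined,
  get value() { return internalValue },
  set value(v) {
    internalValue = v            // setValue(): store the data
    if (v) widget.callback?.(v)  // built-in callback notifies the UI
  },
}

// "Vue side": derived UI state, kept in sync via widget.callback.
let uiState: MyConfig = { loras: [] }
widget.callback = (v) => { uiState = v }

// External -> UI: e.g. a workflow load assigns widget.value.
widget.value = { loras: ['styleA'] }

// UI -> external: user edits flow back through widget.value.
const edited: MyConfig = { loras: ['styleA', 'styleB'] }
widget.value = edited
```

Both directions funnel through the single `value` setter, which is what keeps `internalValue` and the UI state from drifting apart.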
---
## Common Mistakes
### ❌ Creating custom callback mechanisms
```typescript
// Wrong - unnecessary complexity
setValue(v: MyConfig) {
internalValue = v
widget.onSetValue?.(v) // Don't add this - use widget.callback instead
}
```
Use the built-in `widget.callback` instead.
### ❌ Using v-model for simple text inputs in DOM widgets
```html
<!-- Wrong - creates sync issues -->
<textarea v-model="textValue" />
<!-- Right for simple text widgets -->
<textarea ref="textareaRef" @input="onInput" />
```
### ❌ Watching props.widget.value
```typescript
// Wrong - creates race conditions
watch(() => props.widget.value, (newValue) => {
config.value = newValue
})
```
Use `widget.callback` instead - it's called at the right time in the lifecycle.
### ❌ Multiple sources of truth
```typescript
// Wrong - who is the source of truth?
let internalValue = '' // State 1
const textValue = ref('') // State 2
const domElement = textarea // State 3
props.widget.value // State 4
```
Choose ONE source of truth:
- **Simple widgets**: DOM element
- **Complex widgets**: `internalValue` (with Vue ref as derived UI state)
### ❌ Adding serializeValue for simple widgets
```typescript
// Wrong - getValue/setValue handle serialization
props.widget.serializeValue = async () => textValue.value
```
---
## Decision Guide
| Widget Type | Source of Truth | Use `widget.callback` | Example |
|-------------|-----------------|----------------------|---------|
| Simple text input | DOM element (`inputEl`) | Optional | AutocompleteTextWidget |
| Complex config | `internalValue` | Yes, for UI sync | LoraPoolWidget |
| Vue component widget | Vue ref + `internalValue` | Yes | RandomizerWidget |
---
## Testing Checklist
- [ ] Load workflow - value restores correctly
- [ ] Switch workflow - value persists
- [ ] Reload page - value persists
- [ ] UI interaction - value updates
- [ ] Undo/redo - value syncs with UI
- [ ] No console errors
---
## References
- ComfyUI DOMWidget implementation: `ComfyUI_frontend/src/scripts/domWidget.ts`
- Simple text widget example: `ComfyUI_frontend/src/renderer/extensions/vueNodes/widgets/composables/useStringWidget.ts`

View File

@@ -240,6 +240,8 @@ inputEl.addEventListener("change", () => {
});
```
> **⚠️ Important**: For Vue-based DOM widgets with text inputs, follow the [Value Persistence Best Practices](dom-widgets/value-persistence-best-practices.md) to avoid sync issues. Key takeaway: use the DOM element as the single source of truth, and avoid internal state variables and v-model.
### 5.3 The Restoration Mechanism (`configure`)
* **`configure(data)`**: When a Workflow is loaded, `LGraphNode` calls its `configure(data)` method.
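A simplified sketch of the positional restoration, assuming LiteGraph-style `widgets_values` (the `SketchNode` class is illustrative; the real `LGraphNode.configure` does considerably more):

```typescript
// Sketch of configure(data) restoring widget values on workflow load.
type Widget = { name: string; value: unknown }

class SketchNode {
  widgets: Widget[] = []

  configure(data: { widgets_values?: unknown[] }) {
    // widgets_values is positional: the serialized order must match
    // the order in which widgets were added to the node.
    data.widgets_values?.forEach((v, i) => {
      if (this.widgets[i]) this.widgets[i].value = v
    })
  }
}

const node = new SketchNode()
node.widgets = [
  { name: 'prompt', value: '' },
  { name: 'seed', value: 0 },
]
node.configure({ widgets_values: ['a cat', 42] })
```

Because restoration is positional, adding or reordering widgets between releases can silently misassign saved values; keeping widget order stable is part of the persistence contract.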

View File

@@ -0,0 +1,69 @@
# Danbooru/E621 Tag Categories Reference
Reference for category values used in `danbooru_e621_merged.csv` tag files.
## Category Value Mapping
### Danbooru Categories
| Value | Description |
|-------|-------------|
| 0 | General |
| 1 | Artist |
| 2 | *(unused)* |
| 3 | Copyright |
| 4 | Character |
| 5 | Meta |
### e621 Categories
| Value | Description |
|-------|-------------|
| 6 | *(unused)* |
| 7 | General |
| 8 | Artist |
| 9 | Contributor |
| 10 | Copyright |
| 11 | Character |
| 12 | Species |
| 13 | *(unused)* |
| 14 | Meta |
| 15 | Lore |
## Danbooru Category Colors
| Description | Normal Color | Hover Color |
|-------------|--------------|-------------|
| General | #009be6 | #4bb4ff |
| Artist | #ff8a8b | #ffc3c3 |
| Copyright | #c797ff | #ddc9fb |
| Character | #35c64a | #93e49a |
| Meta | #ead084 | #f7e7c3 |
## CSV Column Structure
Each row in the merged CSV file contains 4 columns:
| Column | Description | Example |
|--------|-------------|---------|
| 1 | Tag name | `1girl`, `highres`, `solo` |
| 2 | Category value (0-15) | `0`, `5`, `7` |
| 3 | Post count | `6008644`, `5256195` |
| 4 | Aliases (comma-separated, quoted; may be empty) | `"1girls,sole_female"` |
### Sample Data
```
1girl,0,6008644,"1girls,sole_female"
highres,5,5256195,"high_res,high_resolution,hires"
solo,0,5000954,"alone,female_solo,single,solo_female"
long_hair,0,4350743,"/lh,longhair"
mammal,12,3437444,"cetancodont,cetancodontamorph,feralmammal"
anthro,7,3381927,"adult_anthro,anhtro,antho,anthro_horse"
skirt,0,1557883,
```
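Because the alias column is quoted and can itself contain commas, a naive `split(',')` over the whole row mis-parses it. A minimal parsing sketch (the `parseTagRow` helper is hypothetical; it assumes only the alias column can contain commas, per the structure above):

```typescript
type TagRow = { tag: string; category: number; count: number; aliases: string[] }

function parseTagRow(line: string): TagRow {
  // Columns 1-3 never contain commas, so everything after the third
  // comma is the (optionally quoted) alias list.
  const [tag, cat, count, ...rest] = line.split(',')
  const aliasField = rest.join(',').replace(/^"|"$/g, '')
  return {
    tag,
    category: Number(cat),
    count: Number(count),
    aliases: aliasField ? aliasField.split(',') : [],
  }
}

const row = parseTagRow('1girl,0,6008644,"1girls,sole_female"')
const bare = parseTagRow('skirt,0,1557883,') // trailing comma, no aliases
```

For the quoted row this yields aliases `['1girls', 'sole_female']`; the bare row yields an empty alias list.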
## Source
- [PR #312: Add danbooru_e621_merged.csv](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/pull/312)
- [DraconicDragon/dbr-e621-lists-archive](https://github.com/DraconicDragon/dbr-e621-lists-archive)

View File

@@ -0,0 +1,191 @@
# Model Type Field Refactor - Remaining Work Checklist
> **Status**: Phases 1-4 complete | **Created**: 2026-01-30
> **Related files**: `py/utils/models.py`, `py/services/model_query.py`, `py/services/checkpoint_scanner.py`, etc.
---
## Overview
This refactor resolves the inconsistent semantics of the `model_type` field. The system has two levels of "type":
1. **Scanner Type** (`scanner_type`): the architectural category - `lora`, `checkpoint`, `embedding`
2. **Sub Type** (`sub_type`): the business-level subdivision - `lora`/`locon`/`dora`, `checkpoint`/`diffusion_model`, `embedding`
The goal is to use `sub_type` consistently for the subdivision, initially keeping `model_type` as a backward-compatible alias.
---
## Completed Work ✅
### Phase 1: Backend Field Rename
- [x] `CheckpointMetadata.model_type` → `sub_type`
- [x] `EmbeddingMetadata.model_type` → `sub_type`
- [x] `model_scanner.py` `_build_cache_entry()` handles both `sub_type` and `model_type`
### Phase 2: Query Logic Update
- [x] `model_query.py` adds `resolve_sub_type()` and `normalize_sub_type()`
- [x] ~~Keep backward-compatible aliases `resolve_civitai_model_type`, `normalize_civitai_model_type`~~ (removed in Phase 5)
- [x] `ModelFilterSet.apply()` updated to use the new resolver functions
### Phase 3: API Response Update
- [x] `LoraService.format_response()` returns `sub_type` ~~+ `model_type`~~ (`model_type` removed)
- [x] `CheckpointService.format_response()` returns `sub_type` ~~+ `model_type`~~ (`model_type` removed)
- [x] `EmbeddingService.format_response()` returns `sub_type` ~~+ `model_type`~~ (`model_type` removed)
### Phase 4: Frontend Update
- [x] `constants.js` adds `MODEL_SUBTYPE_DISPLAY_NAMES`
- [x] `MODEL_TYPE_DISPLAY_NAMES` kept as an alias
### Phase 5: Remove Deprecated Code ✅
- [x] Remove the `model_type` backward-compatibility code from `ModelScanner._build_cache_entry()`
- [x] Remove the `model_type` compatibility handling from `CheckpointScanner`
- [x] Remove the `resolve_civitai_model_type` and `normalize_civitai_model_type` aliases from `model_query.py`
- [x] Update frontend `FilterManager.js` to use `sub_type` (already using `MODEL_SUBTYPE_DISPLAY_NAMES`)
- [x] Update all affected tests
---
## Remaining Work ⏳
### Phase 5: Remove Deprecated Code ✅ **Done**
All Phase 5 cleanup work is complete:
#### 5.1 Remove the `model_type` backward-compatibility code ✅
- Removed the `model_type` assignment from `ModelScanner._build_cache_entry()`
- Only the `sub_type` field is set now
#### 5.2 Remove CheckpointScanner's model_type compatibility handling ✅
- `adjust_metadata()` now only handles `sub_type`
- `adjust_cached_entry()` now only sets `sub_type`
#### 5.3 Remove the backward-compatible aliases from model_query ✅
- Removed `resolve_civitai_model_type = resolve_sub_type`
- Removed `normalize_civitai_model_type = normalize_sub_type`
#### 5.4 Frontend Cleanup ✅
- `FilterManager.js` already uses `MODEL_SUBTYPE_DISPLAY_NAMES` (via the `MODEL_TYPE_DISPLAY_NAMES` alias)
- The API list endpoint now returns only `sub_type`, no longer `model_type`
- `ModelCard.js` now sets `card.dataset.sub_type` (shared across all model types)
- `CheckpointContextMenu.js` now reads `card.dataset.sub_type`
- `MoveManager.js` now handles `cache_entry.sub_type`
- `RecipeModal.js` now reads `checkpoint.sub_type`
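For illustration, the frontend display lookup boils down to something like the sketch below (the map entries and the `displaySubType` helper are assumptions for this example; the real map lives in `static/js/utils/constants.js`):

```typescript
// Illustrative sub_type -> display-name lookup with pass-through
// for unknown keys. Entries here are examples, not the real map.
const MODEL_SUBTYPE_DISPLAY_NAMES: Record<string, string> = {
  lora: 'LoRA',
  locon: 'LyCORIS',
  dora: 'DoRA',
  checkpoint: 'Checkpoint',
  diffusion_model: 'Diffusion Model',
}

function displaySubType(subType: string | undefined): string {
  if (!subType) return ''
  // Unknown sub_types fall through unchanged rather than erroring,
  // so new CivitAI types still render something sensible.
  return MODEL_SUBTYPE_DISPLAY_NAMES[subType] ?? subType
}

// e.g. a card element would carry card.dataset.sub_type = 'locon'
const label = displaySubType('locon')
const fallback = displaySubType('vae')
```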
---
## Database Migration Assessment
### Current State
- `persistent_model_cache.py` stores the raw CivitAI type in the `civitai_model_type` column
- The `sub_type` in cache entries is computed dynamically at runtime
- The database schema **needs no immediate change**
### Optional Future Optimization
```sql
-- Optional: add a sub_type column to the models table
-- (same data as civitai_model_type, but clearer semantics)
ALTER TABLE models ADD COLUMN sub_type TEXT;
-- Data migration
UPDATE models SET sub_type = civitai_model_type WHERE sub_type IS NULL;
```
**Recommendation**: if the `sub_type` column is added, do it together with Phase 5.
---
## Test Coverage
### New/Updated Test Files (all passing ✅)
| Test file | Count | Coverage |
|-----------|-------|----------|
| `tests/utils/test_models_sub_type.py` | 7 | Metadata sub_type field |
| `tests/services/test_model_query_sub_type.py` | 19 | sub_type resolution and filtering |
| `tests/services/test_checkpoint_scanner_sub_type.py` | 6 | CheckpointScanner sub_type |
| `tests/services/test_service_format_response_sub_type.py` | 6 | sub_type included in API responses |
| `tests/services/test_checkpoint_scanner.py` | 1 | Checkpoint cache sub_type |
| `tests/services/test_model_scanner.py` | 1 | adjust_cached_entry hook |
| `tests/services/test_download_manager.py` | 1 | Checkpoint download sub_type |
### Tests Still to Add (optional)
- [ ] Integration test: verify frontend filtering uses the sub_type field
- [ ] Database migration test (if the optional optimization is performed)
- [ ] Performance test: confirm the priority lookup in resolve_sub_type has no significant performance impact
---
## Compatibility Checklist
### Done ✅
- [x] Frontend code fully switched to the `sub_type` field
- [x] API list endpoint no longer returns `model_type`, only `sub_type`
- [x] Backend cache entries dropped `model_type`, keeping only `sub_type`
- [x] All tests updated and passing
- [x] Documentation updated
---
## Related Files
### Core Files
```
py/utils/models.py
py/utils/constants.py
py/services/model_scanner.py
py/services/model_query.py
py/services/checkpoint_scanner.py
py/services/base_model_service.py
py/services/lora_service.py
py/services/checkpoint_service.py
py/services/embedding_service.py
```
### Frontend Files
```
static/js/utils/constants.js
static/js/managers/FilterManager.js
static/js/managers/MoveManager.js
static/js/components/shared/ModelCard.js
static/js/components/ContextMenu/CheckpointContextMenu.js
static/js/components/RecipeModal.js
```
### Test Files
```
tests/utils/test_models_sub_type.py
tests/services/test_model_query_sub_type.py
tests/services/test_checkpoint_scanner_sub_type.py
tests/services/test_service_format_response_sub_type.py
```
---
## Risk Assessment
| Risk | Impact | Mitigation |
|------|--------|------------|
| ~~Third-party code depends on `model_type`~~ | ~~High~~ | ~~Keep the alias for at least 1 major version~~ ✅ Removed |
| ~~Database schema change~~ | ~~Medium~~ | ~~Defer the schema change; compute at runtime only~~ ✅ No change needed |
| ~~Frontend filtering breaks~~ | ~~Medium~~ | ~~Comprehensive integration test coverage~~ ✅ Tests passing |
| CivitAI API changes | Low | Keep the multi-source resolution strategy |
---
## Timeline
- **v1.x**: Phases 1-4 complete, backward compatibility retained
- **v2.0 (current)**: ✅ Phase 5 complete - `model_type` compatibility code removed
- API list endpoint returns only `sub_type`
- Cache entries keep only `sub_type`
- The `resolve_civitai_model_type` and `normalize_civitai_model_type` aliases are gone
---
## Notes
- During the refactor we concluded the `civitai_model_type` column name is acceptable as-is, but semantically it should be understood as storing the raw type value returned by the CivitAI API
- A checkpoint's `diffusion_model` sub_type cannot be obtained from the CivitAI API; it must be inferred from the file path (model root)
- A LoRA's sub_type (lora/locon/dora) comes directly from `version_info.model.type` in the CivitAI API

Binary file not shown (image added, 657 KiB).

File diff suppressed because one or more lines are too long

View File

@@ -10,7 +10,8 @@
"next": "Weiter",
"backToTop": "Nach oben",
"settings": "Einstellungen",
"help": "Hilfe"
"help": "Hilfe",
"add": "Hinzufügen"
},
"status": {
"loading": "Wird geladen...",
@@ -178,6 +179,7 @@
"recipes": "Rezepte",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "Statistiken"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "LoRAs suchen...",
"recipes": "Rezepte suchen...",
"checkpoints": "Checkpoints suchen...",
"embeddings": "Embeddings suchen..."
"embeddings": "Embeddings suchen...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "Suchoptionen",
"searchIn": "Suchen in:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "Modelle filtern",
"presets": "Voreinstellungen",
"savePreset": "Aktive Filter als neue Voreinstellung speichern.",
"savePresetDisabledActive": "Speichern nicht möglich: Eine Voreinstellung ist bereits aktiv. Ändern Sie die Filter, um eine neue Voreinstellung zu speichern",
"savePresetDisabledNoFilters": "Wählen Sie zuerst Filter aus, um als Voreinstellung zu speichern",
"savePresetPrompt": "Voreinstellungsname eingeben:",
"presetClickTooltip": "Voreinstellung \"{name}\" anwenden",
"presetDeleteTooltip": "Voreinstellung löschen",
"presetDeleteConfirm": "Voreinstellung \"{name}\" löschen?",
"presetDeleteConfirmClick": "Zum Bestätigen erneut klicken",
"presetOverwriteConfirm": "Voreinstellung \"{name}\" existiert bereits. Überschreiben?",
"presetNamePlaceholder": "Voreinstellungsname...",
"baseModel": "Basis-Modell",
"modelTags": "Tags (Top 20)",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Embedding-Modelle"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Stammverzeichnis",
"collapseAll": "Alle Ordner einklappen",
@@ -1092,6 +1116,10 @@
"title": "Statistiken werden initialisiert",
"message": "Modelldaten für Statistiken werden verarbeitet. Dies kann einige Minuten dauern..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Tipps & Tricks",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "Rezept zum Workflow hinzugefügt",
"recipeReplaced": "Rezept im Workflow ersetzt",
"recipeFailedToSend": "Fehler beim Senden des Rezepts an den Workflow",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "Keine kompatiblen Knoten im aktuellen Workflow verfügbar",
"noTargetNodeSelected": "Kein Zielknoten ausgewählt"
},
"nodeSelector": {
"recipe": "Rezept",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "Ersetzen",
"append": "Anhängen",
"selectTargetNode": "Zielknoten auswählen",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "Beispielbilder-Ordner geöffnet",
"openingFolder": "Beispielbilder-Ordner wird geöffnet",
"failedToOpen": "Fehler beim Öffnen des Beispielbilder-Ordners"
"failedToOpen": "Fehler beim Öffnen des Beispielbilder-Ordners",
"setupRequired": "Beispielbilder-Speicher",
"setupDescription": "Um benutzerdefinierte Beispielbilder hinzuzufügen, müssen Sie zuerst einen Download-Speicherort festlegen.",
"setupUsage": "Dieser Pfad wird sowohl für heruntergeladene als auch für benutzerdefinierte Beispielbilder verwendet.",
"openSettings": "Einstellungen öffnen"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "Nach Updates wird gesucht...",
"checkingMessage": "Bitte warten Sie, während wir nach der neuesten Version suchen.",
"showNotifications": "Update-Benachrichtigungen anzeigen",
"latestBadge": "Neueste",
"updateProgress": {
"preparing": "Update wird vorbereitet...",
"installing": "Update wird installiert...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "Filter gelöscht",
"noCustomFilterToClear": "Kein benutzerdefinierter Filter zum Löschen"
"noCustomFilterToClear": "Kein benutzerdefinierter Filter zum Löschen",
"noActiveFilters": "Keine aktiven Filter zum Speichern"
},
"presets": {
"created": "Voreinstellung \"{name}\" erstellt",
"deleted": "Voreinstellung \"{name}\" gelöscht",
"applied": "Voreinstellung \"{name}\" angewendet",
"overwritten": "Voreinstellung \"{name}\" überschrieben",
"restored": "Standard-Voreinstellungen wiederhergestellt"
},
"error": {
"presetNameEmpty": "Voreinstellungsname darf nicht leer sein",
"presetNameTooLong": "Voreinstellungsname darf maximal {max} Zeichen haben",
"presetNameInvalidChars": "Voreinstellungsname enthält ungültige Zeichen",
"presetNameExists": "Eine Voreinstellung mit diesem Namen existiert bereits",
"maxPresetsReached": "Maximal {max} Voreinstellungen erlaubt. Löschen Sie eine, um weitere hinzuzufügen.",
"presetNotFound": "Voreinstellung nicht gefunden",
"invalidPreset": "Ungültige Voreinstellungsdaten",
"deletePresetFailed": "Fehler beim Löschen der Voreinstellung",
"applyPresetFailed": "Fehler beim Anwenden der Voreinstellung"
},
"downloads": {
"imagesCompleted": "Beispielbilder {action} abgeschlossen",

View File

@@ -10,7 +10,8 @@
"next": "Next",
"backToTop": "Back to top",
"settings": "Settings",
"help": "Help"
"help": "Help",
"add": "Add"
},
"status": {
"loading": "Loading...",
@@ -178,6 +179,7 @@
"recipes": "Recipes",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "Misc",
"statistics": "Stats"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "Search LoRAs...",
"recipes": "Search recipes...",
"checkpoints": "Search checkpoints...",
"embeddings": "Search embeddings..."
"embeddings": "Search embeddings...",
"misc": "Search VAE/Upscaler models..."
},
"options": "Search Options",
"searchIn": "Search In:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "Filter Models",
"presets": "Presets",
"savePreset": "Save current active filters as a new preset.",
"savePresetDisabledActive": "Cannot save: A preset is already active. Modify filters to save new preset.",
"savePresetDisabledNoFilters": "Select filters first to save as preset",
"savePresetPrompt": "Enter preset name:",
"presetClickTooltip": "Click to apply preset \"{name}\"",
"presetDeleteTooltip": "Delete preset",
"presetDeleteConfirm": "Delete preset \"{name}\"?",
"presetDeleteConfirmClick": "Click again to confirm",
"presetOverwriteConfirm": "Preset \"{name}\" already exists. Overwrite?",
"presetNamePlaceholder": "Preset name...",
"baseModel": "Base Model",
"modelTags": "Tags (Top 20)",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Embedding Models"
},
"misc": {
"title": "VAE & Upscaler Models",
"modelTypes": {
"vae": "VAE",
"upscaler": "Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Root",
"collapseAll": "Collapse All Folders",
@@ -1092,6 +1116,10 @@
"title": "Initializing Statistics",
"message": "Processing model data for statistics. This may take a few minutes..."
},
"misc": {
"title": "Initializing Misc Model Manager",
"message": "Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Tips & Tricks",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "Recipe appended to workflow",
"recipeReplaced": "Recipe replaced in workflow",
"recipeFailedToSend": "Failed to send recipe to workflow",
"vaeUpdated": "VAE updated in workflow",
"vaeFailed": "Failed to update VAE in workflow",
"upscalerUpdated": "Upscaler updated in workflow",
"upscalerFailed": "Failed to update upscaler in workflow",
"noMatchingNodes": "No compatible nodes available in the current workflow",
"noTargetNodeSelected": "No target node selected"
},
"nodeSelector": {
"recipe": "Recipe",
"lora": "LoRA",
"vae": "VAE",
"upscaler": "Upscaler",
"replace": "Replace",
"append": "Append",
"selectTargetNode": "Select target node",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "Example images folder opened",
"openingFolder": "Opening example images folder",
"failedToOpen": "Failed to open example images folder"
"failedToOpen": "Failed to open example images folder",
"setupRequired": "Example Images Storage",
"setupDescription": "To add custom example images, you need to set a download location first.",
"setupUsage": "This path is used for both downloaded and custom example images.",
"openSettings": "Open Settings"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "Checking for updates...",
"checkingMessage": "Please wait while we check for the latest version.",
"showNotifications": "Show update notifications",
"latestBadge": "Latest",
"updateProgress": {
"preparing": "Preparing update...",
"installing": "Installing update...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "Filters cleared",
"noCustomFilterToClear": "No custom filter to clear"
"noCustomFilterToClear": "No custom filter to clear",
"noActiveFilters": "No active filters to save"
},
"presets": {
"created": "Preset \"{name}\" created",
"deleted": "Preset \"{name}\" deleted",
"applied": "Preset \"{name}\" applied",
"overwritten": "Preset \"{name}\" overwritten",
"restored": "Default presets restored"
},
"error": {
"presetNameEmpty": "Preset name cannot be empty",
"presetNameTooLong": "Preset name must be {max} characters or less",
"presetNameInvalidChars": "Preset name contains invalid characters",
"presetNameExists": "A preset with this name already exists",
"maxPresetsReached": "Maximum {max} presets allowed. Delete one to add more.",
"presetNotFound": "Preset not found",
"invalidPreset": "Invalid preset data",
"deletePresetFailed": "Failed to delete preset",
"applyPresetFailed": "Failed to apply preset"
},
"downloads": {
"imagesCompleted": "Example images {action} completed",

View File

@@ -10,7 +10,8 @@
"next": "Siguiente",
"backToTop": "Volver arriba",
"settings": "Configuración",
"help": "Ayuda"
"help": "Ayuda",
"add": "Añadir"
},
"status": {
"loading": "Cargando...",
@@ -178,6 +179,7 @@
"recipes": "Recetas",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "Estadísticas"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "Buscar LoRAs...",
"recipes": "Buscar recetas...",
"checkpoints": "Buscar checkpoints...",
"embeddings": "Buscar embeddings..."
"embeddings": "Buscar embeddings...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "Opciones de búsqueda",
"searchIn": "Buscar en:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "Filtrar modelos",
"presets": "Preajustes",
"savePreset": "Guardar filtros activos como nuevo preajuste.",
"savePresetDisabledActive": "No se puede guardar: Ya hay un preajuste activo. Modifique los filtros para guardar un nuevo preajuste",
"savePresetDisabledNoFilters": "Seleccione filtros primero para guardar como preajuste",
"savePresetPrompt": "Ingrese el nombre del preajuste:",
"presetClickTooltip": "Hacer clic para aplicar preajuste \"{name}\"",
"presetDeleteTooltip": "Eliminar preajuste",
"presetDeleteConfirm": "¿Eliminar preajuste \"{name}\"?",
"presetDeleteConfirmClick": "Haga clic de nuevo para confirmar",
"presetOverwriteConfirm": "El preset \"{name}\" ya existe. ¿Sobrescribir?",
"presetNamePlaceholder": "Nombre del preajuste...",
"baseModel": "Modelo base",
"modelTags": "Etiquetas (Top 20)",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Modelos embedding"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Raíz",
"collapseAll": "Colapsar todas las carpetas",
@@ -1092,6 +1116,10 @@
"title": "Inicializando estadísticas",
"message": "Procesando datos del modelo para estadísticas. Esto puede tomar unos minutos..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Consejos y trucos",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "Receta añadida al flujo de trabajo",
"recipeReplaced": "Receta reemplazada en el flujo de trabajo",
"recipeFailedToSend": "Error al enviar receta al flujo de trabajo",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "No hay nodos compatibles disponibles en el flujo de trabajo actual",
"noTargetNodeSelected": "No se ha seleccionado ningún nodo de destino"
},
"nodeSelector": {
"recipe": "Receta",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "Reemplazar",
"append": "Añadir",
"selectTargetNode": "Seleccionar nodo de destino",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "Carpeta de imágenes de ejemplo abierta",
"openingFolder": "Abriendo carpeta de imágenes de ejemplo",
"failedToOpen": "Error al abrir carpeta de imágenes de ejemplo"
"failedToOpen": "Error al abrir carpeta de imágenes de ejemplo",
"setupRequired": "Almacenamiento de imágenes de ejemplo",
"setupDescription": "Para agregar imágenes de ejemplo personalizadas, primero necesita establecer una ubicación de descarga.",
"setupUsage": "Esta ruta se utiliza tanto para imágenes de ejemplo descargadas como personalizadas.",
"openSettings": "Abrir configuración"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "Comprobando actualizaciones...",
"checkingMessage": "Por favor espera mientras comprobamos la última versión.",
"showNotifications": "Mostrar notificaciones de actualización",
"latestBadge": "Último",
"updateProgress": {
"preparing": "Preparando actualización...",
"installing": "Instalando actualización...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "Filtros limpiados",
"noCustomFilterToClear": "No hay filtro personalizado para limpiar"
"noCustomFilterToClear": "No hay filtro personalizado para limpiar",
"noActiveFilters": "No hay filtros activos para guardar"
},
"presets": {
"created": "Preajuste \"{name}\" creado",
"deleted": "Preajuste \"{name}\" eliminado",
"applied": "Preajuste \"{name}\" aplicado",
"overwritten": "Preset \"{name}\" sobrescrito",
"restored": "Presets predeterminados restaurados"
},
"error": {
"presetNameEmpty": "El nombre del preajuste no puede estar vacío",
"presetNameTooLong": "El nombre del preajuste debe tener {max} caracteres o menos",
"presetNameInvalidChars": "El nombre del preajuste contiene caracteres inválidos",
"presetNameExists": "Ya existe un preajuste con este nombre",
"maxPresetsReached": "Máximo {max} preajustes permitidos. Elimine uno para agregar más.",
"presetNotFound": "Preajuste no encontrado",
"invalidPreset": "Datos de preajuste inválidos",
"deletePresetFailed": "Error al eliminar el preajuste",
"applyPresetFailed": "Error al aplicar el preajuste"
},
"downloads": {
"imagesCompleted": "Imágenes de ejemplo {action} completadas",

View File

@@ -10,7 +10,8 @@
"next": "Suivant",
"backToTop": "Retour en haut",
"settings": "Paramètres",
"help": "Aide",
"add": "Ajouter"
},
"status": {
"loading": "Chargement...",
@@ -178,6 +179,7 @@
"recipes": "Recipes",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "Statistiques"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "Rechercher des LoRAs...",
"recipes": "Rechercher des recipes...",
"checkpoints": "Rechercher des checkpoints...",
"embeddings": "Rechercher des embeddings...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "Options de recherche",
"searchIn": "Rechercher dans :",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "Filtrer les modèles",
"presets": "Préréglages",
"savePreset": "Enregistrer les filtres actifs comme nouveau préréglage.",
"savePresetDisabledActive": "Impossible d'enregistrer : un préréglage est déjà actif. Modifiez les filtres pour enregistrer un nouveau préréglage",
"savePresetDisabledNoFilters": "Sélectionnez d'abord des filtres à enregistrer comme préréglage",
"savePresetPrompt": "Entrez le nom du préréglage :",
"presetClickTooltip": "Cliquer pour appliquer le préréglage \"{name}\"",
"presetDeleteTooltip": "Supprimer le préréglage",
"presetDeleteConfirm": "Supprimer le préréglage \"{name}\" ?",
"presetDeleteConfirmClick": "Cliquez à nouveau pour confirmer",
"presetOverwriteConfirm": "Le préréglage \"{name}\" existe déjà. Remplacer ?",
"presetNamePlaceholder": "Nom du préréglage...",
"baseModel": "Modèle de base",
"modelTags": "Tags (Top 20)",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Modèles Embedding"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Racine",
"collapseAll": "Réduire tous les dossiers",
@@ -1092,6 +1116,10 @@
"title": "Initialisation des statistiques",
"message": "Traitement des données de modèle pour les statistiques. Cela peut prendre quelques minutes..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Astuces et conseils",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "Recipe ajoutée au workflow",
"recipeReplaced": "Recipe remplacée dans le workflow",
"recipeFailedToSend": "Échec de l'envoi de la recipe au workflow",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "Aucun nœud compatible disponible dans le workflow actuel",
"noTargetNodeSelected": "Aucun nœud cible sélectionné"
},
"nodeSelector": {
"recipe": "Recipe",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "Remplacer",
"append": "Ajouter",
"selectTargetNode": "Sélectionner le nœud cible",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "Dossier d'images d'exemple ouvert",
"openingFolder": "Ouverture du dossier d'images d'exemple",
"failedToOpen": "Échec de l'ouverture du dossier d'images d'exemple",
"setupRequired": "Stockage d'images d'exemple",
"setupDescription": "Pour ajouter des images d'exemple personnalisées, vous devez d'abord définir un emplacement de téléchargement.",
"setupUsage": "Ce chemin est utilisé pour les images d'exemple téléchargées et personnalisées.",
"openSettings": "Ouvrir les paramètres"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "Vérification des mises à jour...",
"checkingMessage": "Veuillez patienter pendant la vérification de la dernière version.",
"showNotifications": "Afficher les notifications de mise à jour",
"latestBadge": "Dernier",
"updateProgress": {
"preparing": "Préparation de la mise à jour...",
"installing": "Installation de la mise à jour...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "Filtres effacés",
"noCustomFilterToClear": "Aucun filtre personnalisé à effacer",
"noActiveFilters": "Aucun filtre actif à enregistrer"
},
"presets": {
"created": "Préréglage \"{name}\" créé",
"deleted": "Préréglage \"{name}\" supprimé",
"applied": "Préréglage \"{name}\" appliqué",
"overwritten": "Préréglage \"{name}\" remplacé",
"restored": "Paramètres par défaut restaurés"
},
"error": {
"presetNameEmpty": "Le nom du préréglage ne peut pas être vide",
"presetNameTooLong": "Le nom du préréglage doit contenir au maximum {max} caractères",
"presetNameInvalidChars": "Le nom du préréglage contient des caractères invalides",
"presetNameExists": "Un préréglage avec ce nom existe déjà",
"maxPresetsReached": "Maximum {max} préréglages autorisés. Supprimez-en un pour en ajouter plus.",
"presetNotFound": "Préréglage non trouvé",
"invalidPreset": "Données de préréglage invalides",
"deletePresetFailed": "Échec de la suppression du préréglage",
"applyPresetFailed": "Échec de l'application du préréglage"
},
"downloads": {
"imagesCompleted": "Images d'exemple {action} terminées",

View File

@@ -10,7 +10,8 @@
"next": "הבא",
"backToTop": "חזור למעלה",
"settings": "הגדרות",
"help": "עזרה",
"add": "הוסף"
},
"status": {
"loading": "טוען...",
@@ -178,6 +179,7 @@
"recipes": "מתכונים",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "סטטיסטיקה"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "חפש LoRAs...",
"recipes": "חפש מתכונים...",
"checkpoints": "חפש checkpoints...",
"embeddings": "חפש embeddings...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "אפשרויות חיפוש",
"searchIn": "חפש ב:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "סנן מודלים",
"presets": "קביעות מראש",
"savePreset": "שמור מסננים פעילים כקביעה מראש חדשה.",
"savePresetDisabledActive": "לא ניתן לשמור: קביעה מראש כבר פעילה. שנה מסננים כדי לשמור קביעה מראש חדשה",
"savePresetDisabledNoFilters": "בחר מסננים תחילה כדי לשמור כקביעה מראש",
"savePresetPrompt": "הזן שם קביעה מראש:",
"presetClickTooltip": "לחץ כדי להפעיל קביעה מראש \"{name}\"",
"presetDeleteTooltip": "מחק קביעה מראש",
"presetDeleteConfirm": "למחוק קביעה מראש \"{name}\"?",
"presetDeleteConfirmClick": "לחץ שוב לאישור",
"presetOverwriteConfirm": "קביעה מראש \"{name}\" כבר קיימת. לדרוס?",
"presetNamePlaceholder": "שם קביעה מראש...",
"baseModel": "מודל בסיס",
"modelTags": "תגיות (20 המובילות)",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "מודלי Embedding"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "שורש",
"collapseAll": "כווץ את כל התיקיות",
@@ -1092,6 +1116,10 @@
"title": "מאתחל סטטיסטיקה",
"message": "מעבד נתוני מודלים עבור סטטיסטיקה. זה עשוי לקחת מספר דקות..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "טיפים וטריקים",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "מתכון נוסף ל-workflow",
"recipeReplaced": "מתכון הוחלף ב-workflow",
"recipeFailedToSend": "שליחת מתכון ל-workflow נכשלה",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "אין צמתים תואמים זמינים ב-workflow הנוכחי",
"noTargetNodeSelected": "לא נבחר צומת יעד"
},
"nodeSelector": {
"recipe": "מתכון",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "החלף",
"append": "הוסף",
"selectTargetNode": "בחר צומת יעד",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "תיקיית תמונות הדוגמה נפתחה",
"openingFolder": "פותח תיקיית תמונות דוגמה",
"failedToOpen": "פתיחת תיקיית תמונות הדוגמה נכשלה",
"setupRequired": "אחסון תמונות דוגמה",
"setupDescription": "כדי להוסיף תמונות דוגמה מותאמות אישית, עליך קודם להגדיר מיקום הורדה.",
"setupUsage": "נתיב זה משמש הן עבור תמונות דוגמה שהורדו והן עבור תמונות מותאמות אישית.",
"openSettings": "פתח הגדרות"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "בודק עדכונים...",
"checkingMessage": "אנא המתן בזמן שאנו בודקים את הגרסה האחרונה.",
"showNotifications": "הצג התראות עדכון",
"latestBadge": "אחרון",
"updateProgress": {
"preparing": "מכין עדכון...",
"installing": "מתקין עדכון...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "המסננים נוקו",
"noCustomFilterToClear": "אין מסנן מותאם אישית לניקוי",
"noActiveFilters": "אין מסננים פעילים לשמירה"
},
"presets": {
"created": "קביעה מראש \"{name}\" נוצרה",
"deleted": "קביעה מראש \"{name}\" נמחקה",
"applied": "קביעה מראש \"{name}\" הופעלה",
"overwritten": "קביעה מראש \"{name}\" נדרסה",
"restored": "ברירות המחדל שוחזרו"
},
"error": {
"presetNameEmpty": "שם קביעה מראש לא יכול להיות ריק",
"presetNameTooLong": "שם קביעה מראש חייב להיות {max} תווים או פחות",
"presetNameInvalidChars": "שם קביעה מראש מכיל תווים לא חוקיים",
"presetNameExists": "קביעה מראש עם שם זה כבר קיימת",
"maxPresetsReached": "מותר מקסימום {max} קביעות מראש. מחק אחת כדי להוסיף עוד.",
"presetNotFound": "קביעה מראש לא נמצאה",
"invalidPreset": "נתוני קביעה מראש לא חוקיים",
"deletePresetFailed": "מחיקת קביעה מראש נכשלה",
"applyPresetFailed": "הפעלת קביעה מראש נכשלה"
},
"downloads": {
"imagesCompleted": "{action} תמונות הדוגמה הושלם",

View File

@@ -10,7 +10,8 @@
"next": "次へ",
"backToTop": "トップに戻る",
"settings": "設定",
"help": "ヘルプ",
"add": "追加"
},
"status": {
"loading": "読み込み中...",
@@ -178,6 +179,7 @@
"recipes": "レシピ",
"checkpoints": "Checkpoint",
"embeddings": "Embedding",
"misc": "[TODO: Translate] Misc",
"statistics": "統計"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "LoRAを検索...",
"recipes": "レシピを検索...",
"checkpoints": "checkpointを検索...",
"embeddings": "embeddingを検索...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "検索オプション",
"searchIn": "検索対象:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "モデルをフィルタ",
"presets": "プリセット",
"savePreset": "現在のアクティブフィルタを新しいプリセットとして保存。",
"savePresetDisabledActive": "保存できません:プリセットがすでにアクティブです。フィルタを変更して新しいプリセットを保存してください",
"savePresetDisabledNoFilters": "先にフィルタを選択してからプリセットとして保存",
"savePresetPrompt": "プリセット名を入力:",
"presetClickTooltip": "プリセット \"{name}\" を適用するにはクリック",
"presetDeleteTooltip": "プリセットを削除",
"presetDeleteConfirm": "プリセット \"{name}\" を削除しますか?",
"presetDeleteConfirmClick": "もう一度クリックして確認",
"presetOverwriteConfirm": "プリセット「{name}」は既に存在します。上書きしますか?",
"presetNamePlaceholder": "プリセット名...",
"baseModel": "ベースモデル",
"modelTags": "タグ上位20",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Embeddingモデル"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "ルート",
"collapseAll": "すべてのフォルダを折りたたむ",
@@ -1092,6 +1116,10 @@
"title": "統計を初期化中",
"message": "統計用のモデルデータを処理中。数分かかる場合があります..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "ヒント&コツ",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "レシピがワークフローに追加されました",
"recipeReplaced": "レシピがワークフローで置換されました",
"recipeFailedToSend": "レシピをワークフローに送信できませんでした",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "現在のワークフローには互換性のあるノードがありません",
"noTargetNodeSelected": "ターゲットノードが選択されていません"
},
"nodeSelector": {
"recipe": "レシピ",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "置換",
"append": "追加",
"selectTargetNode": "ターゲットノードを選択",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "例画像フォルダが開かれました",
"openingFolder": "例画像フォルダを開いています",
"failedToOpen": "例画像フォルダを開くのに失敗しました",
"setupRequired": "例画像ストレージ",
"setupDescription": "カスタム例画像を追加するには、まずダウンロード場所を設定する必要があります。",
"setupUsage": "このパスは、ダウンロードした例画像とカスタム画像の両方に使用されます。",
"openSettings": "設定を開く"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "更新を確認中...",
"checkingMessage": "最新バージョンを確認しています。お待ちください。",
"showNotifications": "更新通知を表示",
"latestBadge": "最新",
"updateProgress": {
"preparing": "更新を準備中...",
"installing": "更新をインストール中...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "フィルタがクリアされました",
"noCustomFilterToClear": "クリアするカスタムフィルタがありません",
"noActiveFilters": "保存するアクティブフィルタがありません"
},
"presets": {
"created": "プリセット \"{name}\" が作成されました",
"deleted": "プリセット \"{name}\" が削除されました",
"applied": "プリセット \"{name}\" が適用されました",
"overwritten": "プリセット「{name}」を上書きしました",
"restored": "デフォルトのプリセットを復元しました"
},
"error": {
"presetNameEmpty": "プリセット名を入力してください",
"presetNameTooLong": "プリセット名は{max}文字以内にしてください",
"presetNameInvalidChars": "プリセット名に使用できない文字が含まれています",
"presetNameExists": "同じ名前のプリセットが既に存在します",
"maxPresetsReached": "プリセットは最大{max}個までです。追加するには既存のものを削除してください。",
"presetNotFound": "プリセットが見つかりません",
"invalidPreset": "無効なプリセットデータです",
"deletePresetFailed": "プリセットの削除に失敗しました",
"applyPresetFailed": "プリセットの適用に失敗しました"
},
"downloads": {
"imagesCompleted": "例画像 {action} が完了しました",

View File

@@ -10,7 +10,8 @@
"next": "다음",
"backToTop": "맨 위로",
"settings": "설정",
"help": "도움말",
"add": "추가"
},
"status": {
"loading": "로딩 중...",
@@ -178,6 +179,7 @@
"recipes": "레시피",
"checkpoints": "Checkpoint",
"embeddings": "Embedding",
"misc": "[TODO: Translate] Misc",
"statistics": "통계"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "LoRA 검색...",
"recipes": "레시피 검색...",
"checkpoints": "Checkpoint 검색...",
"embeddings": "Embedding 검색...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "검색 옵션",
"searchIn": "검색 범위:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "모델 필터",
"presets": "프리셋",
"savePreset": "현재 활성 필터를 새 프리셋으로 저장.",
"savePresetDisabledActive": "저장할 수 없음: 프리셋이 이미 활성화되어 있습니다. 필터를 수정한 후 새 프리셋을 저장하세요",
"savePresetDisabledNoFilters": "먼저 필터를 선택한 후 프리셋으로 저장",
"savePresetPrompt": "프리셋 이름 입력:",
"presetClickTooltip": "프리셋 \"{name}\"을(를) 적용하려면 클릭",
"presetDeleteTooltip": "프리셋 삭제",
"presetDeleteConfirm": "프리셋 \"{name}\" 삭제하시겠습니까?",
"presetDeleteConfirmClick": "다시 클릭하여 확인",
"presetOverwriteConfirm": "프리셋 \"{name}\"이(가) 이미 존재합니다. 덮어쓰시겠습니까?",
"presetNamePlaceholder": "프리셋 이름...",
"baseModel": "베이스 모델",
"modelTags": "태그 (상위 20개)",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Embedding 모델"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "루트",
"collapseAll": "모든 폴더 접기",
@@ -1092,6 +1116,10 @@
"title": "통계 초기화 중",
"message": "통계를 위한 모델 데이터를 처리하고 있습니다. 몇 분이 걸릴 수 있습니다..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "팁 & 요령",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "레시피가 워크플로에 추가되었습니다",
"recipeReplaced": "레시피가 워크플로에서 교체되었습니다",
"recipeFailedToSend": "레시피를 워크플로로 전송하지 못했습니다",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "현재 워크플로에서 호환되는 노드가 없습니다",
"noTargetNodeSelected": "대상 노드가 선택되지 않았습니다"
},
"nodeSelector": {
"recipe": "레시피",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "교체",
"append": "추가",
"selectTargetNode": "대상 노드 선택",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "예시 이미지 폴더가 열렸습니다",
"openingFolder": "예시 이미지 폴더를 여는 중",
"failedToOpen": "예시 이미지 폴더 열기 실패",
"setupRequired": "예시 이미지 저장소",
"setupDescription": "사용자 지정 예시 이미지를 추가하려면 먼저 다운로드 위치를 설정해야 합니다.",
"setupUsage": "이 경로는 다운로드한 예시 이미지와 사용자 지정 이미지 모두에 사용됩니다.",
"openSettings": "설정 열기"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "업데이트 확인 중...",
"checkingMessage": "최신 버전을 확인하는 동안 잠시 기다려주세요.",
"showNotifications": "업데이트 알림 표시",
"latestBadge": "최신",
"updateProgress": {
"preparing": "업데이트 준비 중...",
"installing": "업데이트 설치 중...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "필터가 지워졌습니다",
"noCustomFilterToClear": "지울 사용자 정의 필터가 없습니다",
"noActiveFilters": "저장할 활성 필터가 없습니다"
},
"presets": {
"created": "프리셋 \"{name}\" 생성됨",
"deleted": "프리셋 \"{name}\" 삭제됨",
"applied": "프리셋 \"{name}\" 적용됨",
"overwritten": "프리셋 \"{name}\" 덮어쓰기 완료",
"restored": "기본 프리셋 복원 완료"
},
"error": {
"presetNameEmpty": "프리셋 이름을 입력하세요",
"presetNameTooLong": "프리셋 이름은 {max}자 이하여야 합니다",
"presetNameInvalidChars": "프리셋 이름에 유효하지 않은 문자가 포함되어 있습니다",
"presetNameExists": "동일한 이름의 프리셋이 이미 존재합니다",
"maxPresetsReached": "최대 {max}개의 프리셋만 허용됩니다. 더 추가하려면 기존 것을 삭제하세요.",
"presetNotFound": "프리셋을 찾을 수 없습니다",
"invalidPreset": "잘못된 프리셋 데이터입니다",
"deletePresetFailed": "프리셋 삭제에 실패했습니다",
"applyPresetFailed": "프리셋 적용에 실패했습니다"
},
"downloads": {
"imagesCompleted": "예시 이미지 {action}이(가) 완료되었습니다",

View File

@@ -10,7 +10,8 @@
"next": "Далее",
"backToTop": "Наверх",
"settings": "Настройки",
"help": "Справка",
"add": "Добавить"
},
"status": {
"loading": "Загрузка...",
@@ -178,6 +179,7 @@
"recipes": "Рецепты",
"checkpoints": "Checkpoints",
"embeddings": "Embeddings",
"misc": "[TODO: Translate] Misc",
"statistics": "Статистика"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "Поиск LoRAs...",
"recipes": "Поиск рецептов...",
"checkpoints": "Поиск checkpoints...",
"embeddings": "Поиск embeddings...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "Опции поиска",
"searchIn": "Искать в:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "Фильтр моделей",
"presets": "Пресеты",
"savePreset": "Сохранить текущие активные фильтры как новый пресет.",
"savePresetDisabledActive": "Невозможно сохранить: Пресет уже активен. Измените фильтры, чтобы сохранить новый пресет",
"savePresetDisabledNoFilters": "Сначала выберите фильтры для сохранения как пресет",
"savePresetPrompt": "Введите имя пресета:",
"presetClickTooltip": "Нажмите, чтобы применить пресет \"{name}\"",
"presetDeleteTooltip": "Удалить пресет",
"presetDeleteConfirm": "Удалить пресет \"{name}\"?",
"presetDeleteConfirmClick": "Нажмите еще раз для подтверждения",
"presetOverwriteConfirm": "Пресет \"{name}\" уже существует. Перезаписать?",
"presetNamePlaceholder": "Имя пресета...",
"baseModel": "Базовая модель",
"modelTags": "Теги (Топ 20)",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Модели Embedding"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "Корень",
"collapseAll": "Свернуть все папки",
@@ -1092,6 +1116,10 @@
"title": "Инициализация статистики",
"message": "Обработка данных моделей для статистики. Это может занять несколько минут..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "Советы и хитрости",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "Рецепт добавлен в workflow",
"recipeReplaced": "Рецепт заменён в workflow",
"recipeFailedToSend": "Не удалось отправить рецепт в workflow",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "В текущем workflow нет совместимых узлов",
"noTargetNodeSelected": "Целевой узел не выбран"
},
"nodeSelector": {
"recipe": "Рецепт",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "Заменить",
"append": "Добавить",
"selectTargetNode": "Выберите целевой узел",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "Папка с примерами изображений открыта",
"openingFolder": "Открытие папки с примерами изображений",
"failedToOpen": "Не удалось открыть папку с примерами изображений",
"setupRequired": "Хранилище примеров изображений",
"setupDescription": "Чтобы добавить собственные примеры изображений, сначала нужно установить место загрузки.",
"setupUsage": "Этот путь используется как для загруженных, так и для пользовательских примеров изображений.",
"openSettings": "Открыть настройки"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "Проверка обновлений...",
"checkingMessage": "Пожалуйста, подождите, пока мы проверяем последнюю версию.",
"showNotifications": "Показывать уведомления об обновлениях",
"latestBadge": "Последний",
"updateProgress": {
"preparing": "Подготовка обновления...",
"installing": "Установка обновления...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "Фильтры очищены",
"noCustomFilterToClear": "Нет пользовательского фильтра для очистки",
"noActiveFilters": "Нет активных фильтров для сохранения"
},
"presets": {
"created": "Пресет \"{name}\" создан",
"deleted": "Пресет \"{name}\" удален",
"applied": "Пресет \"{name}\" применен",
"overwritten": "Пресет \"{name}\" перезаписан",
"restored": "Пресеты по умолчанию восстановлены"
},
"error": {
"presetNameEmpty": "Имя пресета не может быть пустым",
"presetNameTooLong": "Имя пресета должно содержать не более {max} символов",
"presetNameInvalidChars": "Имя пресета содержит недопустимые символы",
"presetNameExists": "Пресет с таким именем уже существует",
"maxPresetsReached": "Допустимо максимум {max} пресетов. Удалите один, чтобы добавить больше.",
"presetNotFound": "Пресет не найден",
"invalidPreset": "Недопустимые данные пресета",
"deletePresetFailed": "Не удалось удалить пресет",
"applyPresetFailed": "Не удалось применить пресет"
},
"downloads": {
"imagesCompleted": "Примеры изображений {action} завершены",

View File

@@ -10,7 +10,8 @@
"next": "下一步",
"backToTop": "返回顶部",
"settings": "设置",
"help": "帮助",
"add": "添加"
},
"status": {
"loading": "加载中...",
@@ -178,6 +179,7 @@
"recipes": "配方",
"checkpoints": "Checkpoint",
"embeddings": "Embedding",
"misc": "[TODO: Translate] Misc",
"statistics": "统计"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "搜索 LoRA...",
"recipes": "搜索配方...",
"checkpoints": "搜索 Checkpoint...",
"embeddings": "搜索 Embedding...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "搜索选项",
"searchIn": "搜索范围:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "筛选模型",
"presets": "预设",
"savePreset": "将当前激活的筛选器保存为新预设。",
"savePresetDisabledActive": "无法保存:已有预设处于激活状态。修改筛选器后可保存新预设",
"savePresetDisabledNoFilters": "先选择筛选器,然后保存为预设",
"savePresetPrompt": "输入预设名称:",
"presetClickTooltip": "点击应用预设 \"{name}\"",
"presetDeleteTooltip": "删除预设",
"presetDeleteConfirm": "删除预设 \"{name}\"?",
"presetDeleteConfirmClick": "再次点击确认",
"presetOverwriteConfirm": "预设 \"{name}\" 已存在。是否覆盖?",
"presetNamePlaceholder": "预设名称...",
"baseModel": "基础模型",
"modelTags": "标签前20",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Embedding 模型"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "根目录",
"collapseAll": "折叠所有文件夹",
@@ -1092,6 +1116,10 @@
"title": "初始化统计",
"message": "正在处理模型数据以生成统计信息。这可能需要几分钟..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "技巧与提示",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "配方已追加到工作流",
"recipeReplaced": "配方已替换到工作流",
"recipeFailedToSend": "发送配方到工作流失败",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "当前工作流中没有兼容的节点",
"noTargetNodeSelected": "未选择目标节点"
},
"nodeSelector": {
"recipe": "配方",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "替换",
"append": "追加",
"selectTargetNode": "选择目标节点",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "示例图片文件夹已打开",
"openingFolder": "正在打开示例图片文件夹",
"failedToOpen": "打开示例图片文件夹失败",
"setupRequired": "示例图片存储",
"setupDescription": "要添加自定义示例图片,您需要先设置下载位置。",
"setupUsage": "此路径用于存储下载的示例图片和自定义图片。",
"openSettings": "打开设置"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "正在检查更新...",
"checkingMessage": "请稍候,正在检查最新版本。",
"showNotifications": "显示更新通知",
"latestBadge": "最新",
"updateProgress": {
"preparing": "正在准备更新...",
"installing": "正在安装更新...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "筛选已清除",
"noCustomFilterToClear": "没有自定义筛选可清除",
"noActiveFilters": "没有可保存的激活筛选"
},
"presets": {
"created": "预设 \"{name}\" 已创建",
"deleted": "预设 \"{name}\" 已删除",
"applied": "预设 \"{name}\" 已应用",
"overwritten": "预设 \"{name}\" 已覆盖",
"restored": "默认预设已恢复"
},
"error": {
"presetNameEmpty": "预设名称不能为空",
"presetNameTooLong": "预设名称不能超过 {max} 个字符",
"presetNameInvalidChars": "预设名称包含无效字符",
"presetNameExists": "已存在同名预设",
"maxPresetsReached": "最多允许 {max} 个预设。删除一个以添加更多。",
"presetNotFound": "预设未找到",
"invalidPreset": "无效的预设数据",
"deletePresetFailed": "删除预设失败",
"applyPresetFailed": "应用预设失败"
},
"downloads": {
"imagesCompleted": "示例图片{action}完成",
@@ -1538,4 +1596,4 @@
"learnMore": "浏览器插件教程"
}
}
}

View File

@@ -10,7 +10,8 @@
"next": "下一步",
"backToTop": "回到頂部",
"settings": "設定",
"help": "說明",
"add": "新增"
},
"status": {
"loading": "載入中...",
@@ -178,6 +179,7 @@
"recipes": "配方",
"checkpoints": "Checkpoint",
"embeddings": "Embedding",
"misc": "[TODO: Translate] Misc",
"statistics": "統計"
},
"search": {
@@ -186,7 +188,8 @@
"loras": "搜尋 LoRA...",
"recipes": "搜尋配方...",
"checkpoints": "搜尋 checkpoint...",
"embeddings": "搜尋 embedding...",
"misc": "[TODO: Translate] Search VAE/Upscaler models..."
},
"options": "搜尋選項",
"searchIn": "搜尋範圍:",
@@ -204,6 +207,17 @@
},
"filter": {
"title": "篩選模型",
"presets": "預設",
"savePreset": "將目前啟用的篩選器儲存為新預設。",
"savePresetDisabledActive": "無法儲存:已有預設處於啟用狀態。修改篩選器後可儲存新預設",
"savePresetDisabledNoFilters": "先選擇篩選器,然後儲存為預設",
"savePresetPrompt": "輸入預設名稱:",
"presetClickTooltip": "點擊套用預設 \"{name}\"",
"presetDeleteTooltip": "刪除預設",
"presetDeleteConfirm": "刪除預設 \"{name}\"?",
"presetDeleteConfirmClick": "再次點擊確認",
"presetOverwriteConfirm": "預設 \"{name}\" 已存在。是否覆蓋?",
"presetNamePlaceholder": "預設名稱...",
"baseModel": "基礎模型",
"modelTags": "標籤(前 20)",
"modelTypes": "Model Types",
@@ -676,6 +690,16 @@
"embeddings": {
"title": "Embedding 模型"
},
"misc": {
"title": "[TODO: Translate] VAE & Upscaler Models",
"modelTypes": {
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler"
},
"contextMenu": {
"moveToOtherTypeFolder": "[TODO: Translate] Move to {otherType} Folder"
}
},
"sidebar": {
"modelRoot": "根目錄",
"collapseAll": "全部摺疊資料夾",
@@ -1092,6 +1116,10 @@
"title": "初始化統計",
"message": "正在處理模型資料以產生統計,可能需要幾分鐘..."
},
"misc": {
"title": "[TODO: Translate] Initializing Misc Model Manager",
"message": "[TODO: Translate] Scanning VAE and Upscaler models..."
},
"tips": {
"title": "小技巧",
"civitai": {
@@ -1151,12 +1179,18 @@
"recipeAdded": "配方已附加到工作流",
"recipeReplaced": "配方已取代於工作流",
"recipeFailedToSend": "傳送配方到工作流失敗",
"vaeUpdated": "[TODO: Translate] VAE updated in workflow",
"vaeFailed": "[TODO: Translate] Failed to update VAE in workflow",
"upscalerUpdated": "[TODO: Translate] Upscaler updated in workflow",
"upscalerFailed": "[TODO: Translate] Failed to update upscaler in workflow",
"noMatchingNodes": "目前工作流程中沒有相容的節點",
"noTargetNodeSelected": "未選擇目標節點"
},
"nodeSelector": {
"recipe": "配方",
"lora": "LoRA",
"vae": "[TODO: Translate] VAE",
"upscaler": "[TODO: Translate] Upscaler",
"replace": "取代",
"append": "附加",
"selectTargetNode": "選擇目標節點",
@@ -1165,7 +1199,11 @@
"exampleImages": {
"opened": "範例圖片資料夾已開啟",
"openingFolder": "正在開啟範例圖片資料夾",
"failedToOpen": "開啟範例圖片資料夾失敗",
"setupRequired": "範例圖片儲存",
"setupDescription": "要新增自訂範例圖片,您需要先設定下載位置。",
"setupUsage": "此路徑用於儲存下載的範例圖片和自訂圖片。",
"openSettings": "開啟設定"
}
},
"help": {
@@ -1214,6 +1252,7 @@
"checkingUpdates": "正在檢查更新...",
"checkingMessage": "請稍候,正在檢查最新版本。",
"showNotifications": "顯示更新通知",
"latestBadge": "最新",
"updateProgress": {
"preparing": "正在準備更新...",
"installing": "正在安裝更新...",
@@ -1414,7 +1453,26 @@
"filters": {
"applied": "{message}",
"cleared": "篩選已清除",
"noCustomFilterToClear": "無自訂篩選可清除",
"noActiveFilters": "沒有可儲存的啟用篩選"
},
"presets": {
"created": "預設 \"{name}\" 已建立",
"deleted": "預設 \"{name}\" 已刪除",
"applied": "預設 \"{name}\" 已套用",
"overwritten": "預設 \"{name}\" 已覆蓋",
"restored": "預設設定已恢復"
},
"error": {
"presetNameEmpty": "預設名稱不能為空",
"presetNameTooLong": "預設名稱不能超過 {max} 個字元",
"presetNameInvalidChars": "預設名稱包含無效字元",
"presetNameExists": "已存在同名預設",
"maxPresetsReached": "最多允許 {max} 個預設。刪除一個以新增更多。",
"presetNotFound": "預設未找到",
"invalidPreset": "無效的預設資料",
"deletePresetFailed": "刪除預設失敗",
"applyPresetFailed": "套用預設失敗"
},
"downloads": {
"imagesCompleted": "範例圖片{action}完成",

View File

@@ -9,6 +9,7 @@ import json
import urllib.parse
import time
from .utils.cache_paths import CacheType, get_cache_file_path, get_legacy_cache_paths
from .utils.settings_paths import ensure_settings_file, get_settings_dir, load_settings_template
# Use an environment variable to control standalone mode
@@ -88,8 +89,11 @@ class Config:
        self.checkpoints_roots = None
        self.unet_roots = None
        self.embeddings_roots = None
        self.vae_roots = None
        self.upscaler_roots = None
        self.base_models_roots = self._init_checkpoint_paths()
        self.embeddings_roots = self._init_embedding_paths()
        self.misc_roots = self._init_misc_paths()

        # Scan symbolic links during initialization
        self._initialize_symlink_mappings()
@@ -150,6 +154,8 @@
            'checkpoints': list(self.checkpoints_roots or []),
            'unet': list(self.unet_roots or []),
            'embeddings': list(self.embeddings_roots or []),
            'vae': list(self.vae_roots or []),
            'upscale_models': list(self.upscaler_roots or []),
        }

        normalized_target_paths = _normalize_folder_paths_for_comparison(target_folder_paths)
@@ -223,26 +229,64 @@ class Config:
logger.error(f"Error checking link status for {path}: {e}")
return False
def _entry_is_symlink(self, entry: os.DirEntry) -> bool:
"""Check if a directory entry is a symlink, including Windows junctions."""
if entry.is_symlink():
return True
if platform.system() == 'Windows':
try:
import ctypes
FILE_ATTRIBUTE_REPARSE_POINT = 0x400
attrs = ctypes.windll.kernel32.GetFileAttributesW(entry.path)
return attrs != -1 and (attrs & FILE_ATTRIBUTE_REPARSE_POINT)
except Exception:
pass
return False
def _normalize_path(self, path: str) -> str:
return os.path.normpath(path).replace(os.sep, '/')
def _get_symlink_cache_path(self) -> Path:
cache_dir = Path(get_settings_dir(create=True)) / "cache"
cache_dir.mkdir(parents=True, exist_ok=True)
return cache_dir / "symlink_map.json"
canonical_path = get_cache_file_path(CacheType.SYMLINK, create_dir=True)
return Path(canonical_path)
def _symlink_roots(self) -> List[str]:
roots: List[str] = []
roots.extend(self.loras_roots or [])
roots.extend(self.base_models_roots or [])
roots.extend(self.embeddings_roots or [])
roots.extend(self.misc_roots or [])
return roots
def _build_symlink_fingerprint(self) -> Dict[str, object]:
roots = [self._normalize_path(path) for path in self._symlink_roots() if path]
unique_roots = sorted(set(roots))
# Fingerprint now only contains the root paths to avoid sensitivity to folder content changes.
return {"roots": unique_roots}
# Include first-level symlinks in fingerprint for change detection.
# This ensures new symlinks under roots trigger a cache invalidation.
# Use lists (not tuples) for JSON serialization compatibility.
direct_symlinks: List[List[str]] = []
for root in unique_roots:
try:
if os.path.isdir(root):
with os.scandir(root) as it:
for entry in it:
if self._entry_is_symlink(entry):
try:
target = os.path.realpath(entry.path)
direct_symlinks.append([
self._normalize_path(entry.path),
self._normalize_path(target)
])
except OSError:
pass
except (OSError, PermissionError):
pass
return {
"roots": unique_roots,
"direct_symlinks": sorted(direct_symlinks)
}
def _initialize_symlink_mappings(self) -> None:
start = time.perf_counter()
@@ -255,15 +299,19 @@ class Config:
)
self._rebuild_preview_roots()
# Only rescan if target roots have changed.
# This is stable across file additions/deletions.
current_fingerprint = self._build_symlink_fingerprint()
cached_fingerprint = self._cached_fingerprint
if cached_fingerprint and current_fingerprint == cached_fingerprint:
# Check 1: First-level symlinks unchanged (catches new symlinks at root)
fingerprint_valid = cached_fingerprint and current_fingerprint == cached_fingerprint
# Check 2: All cached mappings still valid (catches changes at any depth)
mappings_valid = self._validate_cached_mappings() if fingerprint_valid else False
if fingerprint_valid and mappings_valid:
return
logger.info("Symlink root paths changed; rescanning symbolic links")
logger.info("Symlink configuration changed; rescanning symbolic links")
self.rebuild_symlink_cache()
logger.info(
@@ -280,14 +328,28 @@ class Config:
def _load_persisted_cache_into_mappings(self) -> bool:
"""Load the symlink cache and store its fingerprint for comparison."""
cache_path = self._get_symlink_cache_path()
if not cache_path.exists():
return False
try:
with cache_path.open("r", encoding="utf-8") as handle:
payload = json.load(handle)
except Exception as exc:
logger.info("Failed to load symlink cache %s: %s", cache_path, exc)
# Check canonical path first, then legacy paths for migration
paths_to_check = [cache_path]
legacy_paths = get_legacy_cache_paths(CacheType.SYMLINK)
paths_to_check.extend(Path(p) for p in legacy_paths if p != str(cache_path))
loaded_path = None
payload = None
for check_path in paths_to_check:
if not check_path.exists():
continue
try:
with check_path.open("r", encoding="utf-8") as handle:
payload = json.load(handle)
loaded_path = check_path
break
except Exception as exc:
logger.info("Failed to load symlink cache %s: %s", check_path, exc)
continue
if payload is None:
return False
if not isinstance(payload, dict):
@@ -307,7 +369,67 @@ class Config:
normalized_mappings[self._normalize_path(target)] = self._normalize_path(link)
self._path_mappings = normalized_mappings
logger.info("Symlink cache loaded with %d mappings", len(self._path_mappings))
# Log migration if loaded from legacy path
if loaded_path is not None and loaded_path != cache_path:
logger.info(
"Symlink cache migrated from %s (will save to %s)",
loaded_path,
cache_path,
)
try:
if loaded_path.exists():
loaded_path.unlink()
logger.info("Cleaned up legacy symlink cache: %s", loaded_path)
try:
parent_dir = loaded_path.parent
if parent_dir.name == "cache" and not any(parent_dir.iterdir()):
parent_dir.rmdir()
logger.info("Removed empty legacy cache directory: %s", parent_dir)
except Exception:
pass
except Exception as exc:
logger.warning(
"Failed to cleanup legacy symlink cache %s: %s",
loaded_path,
exc,
)
else:
logger.info("Symlink cache loaded with %d mappings", len(self._path_mappings))
return True
def _validate_cached_mappings(self) -> bool:
"""Verify all cached symlink mappings are still valid.
Returns True if all mappings are valid, False if rescan is needed.
This catches removed or retargeted symlinks at ANY depth.
"""
for target, link in self._path_mappings.items():
# Convert normalized paths back to OS paths
link_path = link.replace('/', os.sep)
# Check if symlink still exists
if not self._is_link(link_path):
logger.debug("Cached symlink no longer exists: %s", link_path)
return False
# Check if target is still the same
try:
actual_target = self._normalize_path(os.path.realpath(link_path))
if actual_target != target:
logger.debug(
"Symlink target changed: %s -> %s (cached: %s)",
link_path, actual_target, target
)
return False
except OSError:
logger.debug("Cannot resolve symlink: %s", link_path)
return False
return True
def _save_symlink_cache(self) -> None:
@@ -362,10 +484,9 @@ class Config:
with os.scandir(current_display) as it:
for entry in it:
try:
# 1. High speed detection using dirent data (is_symlink)
is_link = entry.is_symlink()
# On Windows, is_symlink handles reparse points
# 1. Detect symlinks including Windows junctions
is_link = self._entry_is_symlink(entry)
if is_link:
# Only resolve realpath when we actually find a link
target_path = os.path.realpath(entry.path)
@@ -484,6 +605,8 @@ class Config:
preview_roots.update(self._expand_preview_root(root))
for root in self.embeddings_roots or []:
preview_roots.update(self._expand_preview_root(root))
for root in self.misc_roots or []:
preview_roots.update(self._expand_preview_root(root))
for target, link in self._path_mappings.items():
preview_roots.update(self._expand_preview_root(target))
@@ -491,11 +614,12 @@ class Config:
self._preview_root_paths = {path for path in preview_roots if path.is_absolute()}
logger.debug(
"Preview roots rebuilt: %d paths from %d lora roots, %d checkpoint roots, %d embedding roots, %d symlink mappings",
"Preview roots rebuilt: %d paths from %d lora roots, %d checkpoint roots, %d embedding roots, %d misc roots, %d symlink mappings",
len(self._preview_root_paths),
len(self.loras_roots or []),
len(self.base_models_roots or []),
len(self.embeddings_roots or []),
len(self.misc_roots or []),
len(self._path_mappings),
)
@@ -654,6 +778,49 @@ class Config:
logger.warning(f"Error initializing embedding paths: {e}")
return []
def _init_misc_paths(self) -> List[str]:
"""Initialize and validate misc (VAE and upscaler) paths from ComfyUI settings"""
try:
raw_vae_paths = folder_paths.get_folder_paths("vae")
raw_upscaler_paths = folder_paths.get_folder_paths("upscale_models")
unique_paths = self._prepare_misc_paths(raw_vae_paths, raw_upscaler_paths)
logger.info("Found misc roots:" + ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]"))
if not unique_paths:
logger.warning("No valid VAE or upscaler folders found in ComfyUI configuration")
return []
return unique_paths
except Exception as e:
logger.warning(f"Error initializing misc paths: {e}")
return []
def _prepare_misc_paths(
self, vae_paths: Iterable[str], upscaler_paths: Iterable[str]
) -> List[str]:
vae_map = self._dedupe_existing_paths(vae_paths)
upscaler_map = self._dedupe_existing_paths(upscaler_paths)
merged_map: Dict[str, str] = {}
for real_path, original in {**vae_map, **upscaler_map}.items():
if real_path not in merged_map:
merged_map[real_path] = original
unique_paths = sorted(merged_map.values(), key=lambda p: p.lower())
vae_values = set(vae_map.values())
upscaler_values = set(upscaler_map.values())
self.vae_roots = [p for p in unique_paths if p in vae_values]
self.upscaler_roots = [p for p in unique_paths if p in upscaler_values]
for original_path in unique_paths:
real_path = os.path.normpath(os.path.realpath(original_path)).replace(os.sep, '/')
if real_path != original_path:
self.add_path_mapping(original_path, real_path)
return unique_paths
def get_preview_static_url(self, preview_path: str) -> str:
if not preview_path:
return ""

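The path merging that `_prepare_misc_paths` performs can be isolated for illustration: VAE and upscaler folders are keyed by their resolved real path so duplicates collapse, and the surviving original spellings are sorted case-insensitively. A minimal sketch — `dedupe_paths` is a hypothetical stand-in for `_dedupe_existing_paths` that skips the existence checks:

```python
import os
from typing import Dict, Iterable, List

def dedupe_paths(paths: Iterable[str]) -> Dict[str, str]:
    """Map each path's resolved real path to the first original spelling seen."""
    result: Dict[str, str] = {}
    for p in paths:
        real = os.path.normpath(os.path.realpath(p)).replace(os.sep, "/")
        result.setdefault(real, p)
    return result

def merge_misc_paths(vae_paths: Iterable[str], upscaler_paths: Iterable[str]) -> List[str]:
    """Merge VAE and upscaler folders, collapsing aliases of the same real path."""
    merged: Dict[str, str] = {}
    for real, original in {**dedupe_paths(vae_paths), **dedupe_paths(upscaler_paths)}.items():
        merged.setdefault(real, original)
    # Case-insensitive sort mirrors the `key=lambda p: p.lower()` in the diff
    return sorted(merged.values(), key=lambda p: p.lower())
```

Because deduplication happens on the real path, `models/vae` and `models/vae/.` count as one root, which is also why the diff registers a path mapping whenever the resolved path differs from the configured one.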

@@ -184,15 +184,17 @@ class LoraManager:
lora_scanner = await ServiceRegistry.get_lora_scanner()
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
misc_scanner = await ServiceRegistry.get_misc_scanner()
# Initialize recipe scanner if needed
recipe_scanner = await ServiceRegistry.get_recipe_scanner()
# Create low-priority initialization tasks
init_tasks = [
asyncio.create_task(lora_scanner.initialize_in_background(), name='lora_cache_init'),
asyncio.create_task(checkpoint_scanner.initialize_in_background(), name='checkpoint_cache_init'),
asyncio.create_task(embedding_scanner.initialize_in_background(), name='embedding_cache_init'),
asyncio.create_task(misc_scanner.initialize_in_background(), name='misc_cache_init'),
asyncio.create_task(recipe_scanner.initialize_in_background(), name='recipe_cache_init')
]
@@ -252,8 +254,9 @@ class LoraManager:
# Collect all model roots
all_roots = set()
all_roots.update(config.loras_roots)
all_roots.update(config.base_models_roots)
all_roots.update(config.embeddings_roots)
all_roots.update(config.misc_roots or [])
total_deleted = 0
total_size_freed = 0


@@ -714,10 +714,10 @@ NODE_EXTRACTORS = {
"UNETLoader": UNETLoaderExtractor, # Updated to use dedicated extractor
"UnetLoaderGGUF": UNETLoaderExtractor, # Updated to use dedicated extractor
"LoraLoader": LoraLoaderExtractor,
"LoraManagerLoader": LoraLoaderManagerExtractor,
"LoraLoaderLM": LoraLoaderManagerExtractor,
# Conditioning
"CLIPTextEncode": CLIPTextEncodeExtractor,
"PromptLoraManager": CLIPTextEncodeExtractor,
"PromptLM": CLIPTextEncodeExtractor,
"CLIPTextEncodeFlux": CLIPTextEncodeFluxExtractor, # Add CLIPTextEncodeFlux
"WAS_Text_to_Conditioning": CLIPTextEncodeExtractor,
"AdvancedCLIPTextEncode": CLIPTextEncodeExtractor, # From https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb


@@ -4,7 +4,7 @@ from ..metadata_collector.metadata_processor import MetadataProcessor
logger = logging.getLogger(__name__)
class DebugMetadata:
class DebugMetadataLM:
NAME = "Debug Metadata (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = "Debug node to verify metadata_processor functionality"

py/nodes/lora_cycler.py

@@ -0,0 +1,136 @@
"""
Lora Cycler Node - Sequentially cycles through LoRAs from a pool.
This node accepts an optional pool_config input to filter the available LoRAs and outputs
a LORA_STACK containing one LoRA at a time. It returns UI updates with current/next LoRA
info and tracks cycle progress, which persists across workflow save/load.
"""
import logging
import os
from ..utils.utils import get_lora_info
logger = logging.getLogger(__name__)
class LoraCyclerLM:
"""Node that sequentially cycles through LoRAs from a pool"""
NAME = "Lora Cycler (LoraManager)"
CATEGORY = "Lora Manager/randomizer"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"cycler_config": ("CYCLER_CONFIG", {}),
},
"optional": {
"pool_config": ("POOL_CONFIG", {}),
},
}
RETURN_TYPES = ("LORA_STACK",)
RETURN_NAMES = ("LORA_STACK",)
FUNCTION = "cycle"
OUTPUT_NODE = False
async def cycle(self, cycler_config, pool_config=None):
"""
Cycle through LoRAs based on configuration and pool filters.
Args:
cycler_config: Dict with cycler settings (current_index, model_strength, clip_strength, sort_by)
pool_config: Optional config from LoRA Pool node for filtering
Returns:
Dictionary with 'result' (LORA_STACK tuple) and 'ui' (for widget display)
"""
from ..services.service_registry import ServiceRegistry
from ..services.lora_service import LoraService
# Extract settings from cycler_config
current_index = cycler_config.get("current_index", 1) # 1-based
model_strength = float(cycler_config.get("model_strength", 1.0))
clip_strength = float(cycler_config.get("clip_strength", 1.0))
sort_by = "filename"
# Dual-index mechanism for batch queue synchronization
execution_index = cycler_config.get("execution_index") # Can be None
# next_index_from_config = cycler_config.get("next_index") # Not used on backend
# Get scanner and service
scanner = await ServiceRegistry.get_lora_scanner()
lora_service = LoraService(scanner)
# Get filtered and sorted LoRA list
lora_list = await lora_service.get_cycler_list(
pool_config=pool_config, sort_by=sort_by
)
total_count = len(lora_list)
if total_count == 0:
logger.warning("[LoraCyclerLM] No LoRAs available in pool")
return {
"result": ([],),
"ui": {
"current_index": [1],
"next_index": [1],
"total_count": [0],
"current_lora_name": [""],
"current_lora_filename": [""],
"error": ["No LoRAs available in pool"],
},
}
# Determine which index to use for this execution
# If execution_index is provided (batch queue case), use it
# Otherwise use current_index (first execution or non-batch case)
if execution_index is not None:
actual_index = execution_index
else:
actual_index = current_index
# Clamp index to valid range (1-based)
clamped_index = max(1, min(actual_index, total_count))
# Get LoRA at current index (convert to 0-based for list access)
current_lora = lora_list[clamped_index - 1]
# Build LORA_STACK with single LoRA
lora_path, _ = get_lora_info(current_lora["file_name"])
if not lora_path:
logger.warning(
f"[LoraCyclerLM] Could not find path for LoRA: {current_lora['file_name']}"
)
lora_stack = []
else:
# Normalize path separators
lora_path = lora_path.replace("/", os.sep)
lora_stack = [(lora_path, model_strength, clip_strength)]
# Calculate next index (wrap to 1 if at end)
next_index = clamped_index + 1
if next_index > total_count:
next_index = 1
# Get next LoRA for UI display (what will be used next generation)
next_lora = lora_list[next_index - 1]
next_display_name = next_lora["file_name"]
return {
"result": (lora_stack,),
"ui": {
"current_index": [clamped_index],
"next_index": [next_index],
"total_count": [total_count],
"current_lora_name": [
current_lora.get("model_name", current_lora["file_name"])
],
"current_lora_filename": [current_lora["file_name"]],
"next_lora_name": [next_display_name],
"next_lora_filename": [next_lora["file_name"]],
},
}

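The 1-based clamp-and-wrap arithmetic at the heart of `cycle` is easy to isolate; a sketch of just that step:

```python
def advance(index: int, total: int) -> tuple:
    """Clamp a 1-based index into [1, total] and compute the wrapped successor."""
    clamped = max(1, min(index, total))
    nxt = clamped + 1
    if nxt > total:
        nxt = 1  # wrap back to the first LoRA, as the node does
    return clamped, nxt
```

This is why an out-of-range `current_index` (e.g. after LoRAs are removed from the pool) still yields a valid selection rather than an IndexError.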

@@ -6,7 +6,7 @@ from .utils import FlexibleOptionalInputType, any_type, extract_lora_name, get_l
logger = logging.getLogger(__name__)
class LoraManagerLoader:
class LoraLoaderLM:
NAME = "Lora Loader (LoraManager)"
CATEGORY = "Lora Manager/loaders"
@@ -16,12 +16,9 @@ class LoraManagerLoader:
"required": {
"model": ("MODEL",),
# "clip": ("CLIP",),
"text": ("STRING", {
"multiline": True,
"pysssss.autocomplete": False,
"dynamicPrompts": True,
"text": ("AUTOCOMPLETE_TEXT_LORAS", {
"placeholder": "Search LoRAs to add...",
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
"placeholder": "LoRA syntax input: <lora:name:strength>"
}),
},
"optional": FlexibleOptionalInputType(any_type),
@@ -131,7 +128,7 @@ class LoraManagerLoader:
return (model, clip, trigger_words_text, formatted_loras_text)
class LoraManagerTextLoader:
class LoraTextLoaderLM:
NAME = "LoRA Text Loader (LoraManager)"
CATEGORY = "Lora Manager/loaders"


@@ -10,7 +10,7 @@ import logging
logger = logging.getLogger(__name__)
class LoraPoolNode:
class LoraPoolLM:
"""
A node that defines LoRA filter criteria through a Vue-based widget.
@@ -67,7 +67,7 @@ class LoraPoolNode:
filters = pool_config.get("filters", self._default_config()["filters"])
# Log for debugging
logger.debug(f"[LoraPoolNode] Processing filters: {filters}")
logger.debug(f"[LoraPoolLM] Processing filters: {filters}")
return (filters,)


@@ -15,7 +15,7 @@ from .utils import extract_lora_name
logger = logging.getLogger(__name__)
class LoraRandomizerNode:
class LoraRandomizerLM:
"""Node that randomly selects LoRAs from a pool"""
NAME = "Lora Randomizer (LoraManager)"
@@ -72,7 +72,7 @@ class LoraRandomizerNode:
loras = self._preprocess_loras_input(loras)
roll_mode = randomizer_config.get("roll_mode", "always")
logger.debug(f"[LoraRandomizerNode] roll_mode: {roll_mode}")
logger.debug(f"[LoraRandomizerLM] roll_mode: {roll_mode}")
# Dual seed mechanism for batch queue synchronization
# execution_seed: seed for generating execution_stack (= previous next_seed)
@@ -127,7 +127,7 @@ class LoraRandomizerNode:
lora_path, trigger_words = get_lora_info(lora["name"])
if not lora_path:
logger.warning(
f"[LoraRandomizerNode] Could not find path for LoRA: {lora['name']}"
f"[LoraRandomizerLM] Could not find path for LoRA: {lora['name']}"
)
continue


@@ -6,7 +6,7 @@ import logging
logger = logging.getLogger(__name__)
class LoraStacker:
class LoraStackerLM:
NAME = "Lora Stacker (LoraManager)"
CATEGORY = "Lora Manager/stackers"
@@ -14,12 +14,9 @@ class LoraStacker:
def INPUT_TYPES(cls):
return {
"required": {
"text": ("STRING", {
"multiline": True,
"pysssss.autocomplete": False,
"dynamicPrompts": True,
"text": ("AUTOCOMPLETE_TEXT_LORAS", {
"placeholder": "Search LoRAs to add...",
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
"placeholder": "LoRA syntax input: <lora:name:strength>"
}),
},
"optional": FlexibleOptionalInputType(any_type),


@@ -1,6 +1,6 @@
from typing import Any, Optional
class PromptLoraManager:
class PromptLM:
"""Encodes text (and optional trigger words) into CLIP conditioning."""
NAME = "Prompt (LoraManager)"
@@ -15,11 +15,10 @@ class PromptLoraManager:
return {
"required": {
"text": (
'STRING',
"AUTOCOMPLETE_TEXT_PROMPT,STRING",
{
"multiline": True,
"pysssss.autocomplete": False,
"dynamicPrompts": True,
"widgetType": "AUTOCOMPLETE_TEXT_PROMPT",
"placeholder": "Enter prompt... /char, /artist for quick tag search",
"tooltip": "The text to be encoded.",
},
),

py/nodes/text.py

@@ -0,0 +1,33 @@
class TextLM:
"""A simple text node with autocomplete support."""
NAME = "Text (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = (
"A simple text input node with autocomplete support for tags and styles."
)
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"text": (
"AUTOCOMPLETE_TEXT_PROMPT,STRING",
{
"widgetType": "AUTOCOMPLETE_TEXT_PROMPT",
"placeholder": "Enter text... /char, /artist for quick tag search",
"tooltip": "The text output.",
},
),
},
}
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("STRING",)
OUTPUT_TOOLTIPS = (
"The text output.",
)
FUNCTION = "process"
def process(self, text: str):
return (text,)


@@ -6,27 +6,36 @@ import logging
logger = logging.getLogger(__name__)
class TriggerWordToggle:
class TriggerWordToggleLM:
NAME = "TriggerWord Toggle (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = "Toggle trigger words on/off"
@classmethod
def INPUT_TYPES(cls):
return {
"required": {
"group_mode": ("BOOLEAN", {
"default": True,
"tooltip": "When enabled, treats each group of trigger words as a single toggleable unit."
}),
"default_active": ("BOOLEAN", {
"default": True,
"tooltip": "Sets the default initial state (active or inactive) when trigger words are added."
}),
"allow_strength_adjustment": ("BOOLEAN", {
"default": False,
"tooltip": "Enable mouse wheel adjustment of each trigger word's strength."
}),
"group_mode": (
"BOOLEAN",
{
"default": True,
"tooltip": "When enabled, treats each group of trigger words as a single toggleable unit.",
},
),
"default_active": (
"BOOLEAN",
{
"default": True,
"tooltip": "Sets the default initial state (active or inactive) when trigger words are added.",
},
),
"allow_strength_adjustment": (
"BOOLEAN",
{
"default": False,
"tooltip": "Enable mouse wheel adjustment of each trigger word's strength.",
},
),
},
"optional": FlexibleOptionalInputType(any_type),
"hidden": {
@@ -38,15 +47,15 @@ class TriggerWordToggle:
RETURN_NAMES = ("filtered_trigger_words",)
FUNCTION = "process_trigger_words"
def _get_toggle_data(self, kwargs, key='toggle_trigger_words'):
def _get_toggle_data(self, kwargs, key="toggle_trigger_words"):
"""Helper to extract data from either old or new kwargs format"""
if key not in kwargs:
return None
data = kwargs[key]
# Handle new format: {'key': {'__value__': ...}}
if isinstance(data, dict) and '__value__' in data:
return data['__value__']
if isinstance(data, dict) and "__value__" in data:
return data["__value__"]
# Handle old format: {'key': ...}
else:
return data
@@ -60,13 +69,25 @@ class TriggerWordToggle:
**kwargs,
):
# Handle both old and new formats for trigger_words
trigger_words_data = self._get_toggle_data(kwargs, 'orinalMessage')
trigger_words = trigger_words_data if isinstance(trigger_words_data, str) else ""
trigger_words_data = self._get_toggle_data(kwargs, "orinalMessage")
trigger_words = (
trigger_words_data if isinstance(trigger_words_data, str) else ""
)
filtered_triggers = trigger_words
# Check if trigger_words is provided and different from orinalMessage
trigger_words_override = self._get_toggle_data(kwargs, "trigger_words")
if (
trigger_words_override
and isinstance(trigger_words_override, str)
and trigger_words_override != trigger_words
):
filtered_triggers = trigger_words_override
return (filtered_triggers,)
# Get toggle data with support for both formats
trigger_data = self._get_toggle_data(kwargs, 'toggle_trigger_words')
trigger_data = self._get_toggle_data(kwargs, "toggle_trigger_words")
if trigger_data:
try:
# Convert to list if it's a JSON string
@@ -77,7 +98,9 @@ class TriggerWordToggle:
if group_mode:
if allow_strength_adjustment:
parsed_items = [
self._parse_trigger_item(item, allow_strength_adjustment)
self._parse_trigger_item(
item, allow_strength_adjustment
)
for item in trigger_data
]
filtered_groups = [
@@ -91,11 +114,14 @@ class TriggerWordToggle:
]
else:
filtered_groups = [
(item.get('text') or "").strip()
(item.get("text") or "").strip()
for item in trigger_data
if (item.get('text') or "").strip() and item.get('active', False)
if (item.get("text") or "").strip()
and item.get("active", False)
]
filtered_triggers = ', '.join(filtered_groups) if filtered_groups else ""
filtered_triggers = (
", ".join(filtered_groups) if filtered_groups else ""
)
else:
parsed_items = [
self._parse_trigger_item(item, allow_strength_adjustment)
@@ -110,28 +136,34 @@ class TriggerWordToggle:
for item in parsed_items
if item["text"] and item["active"]
]
filtered_triggers = ', '.join(filtered_words) if filtered_words else ""
filtered_triggers = (
", ".join(filtered_words) if filtered_words else ""
)
else:
# Fallback to original message parsing if data is not in the expected list format
if group_mode:
groups = re.split(r',{2,}', trigger_words)
groups = re.split(r",{2,}", trigger_words)
groups = [group.strip() for group in groups if group.strip()]
filtered_triggers = ', '.join(groups)
filtered_triggers = ", ".join(groups)
else:
words = [word.strip() for word in trigger_words.split(',') if word.strip()]
filtered_triggers = ', '.join(words)
words = [
word.strip()
for word in trigger_words.split(",")
if word.strip()
]
filtered_triggers = ", ".join(words)
except Exception as e:
logger.error(f"Error processing trigger words: {e}")
return (filtered_triggers,)
def _parse_trigger_item(self, item, allow_strength_adjustment):
text = (item.get('text') or "").strip()
active = bool(item.get('active', False))
strength = item.get('strength')
text = (item.get("text") or "").strip()
active = bool(item.get("active", False))
strength = item.get("strength")
strength_match = re.match(r'^\((.+):([\d.]+)\)$', text)
strength_match = re.match(r"^\((.+):([\d.]+)\)$", text)
if strength_match:
text = strength_match.group(1).strip()
if strength is None:

View File
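The strength regex that `_parse_trigger_item` applies recognizes the `(word:1.2)` form; extracted standalone:

```python
import re

STRENGTH_RE = re.compile(r"^\((.+):([\d.]+)\)$")

def split_strength(text: str):
    """Return (bare_text, strength) for '(word:1.2)' style input, else (text, None)."""
    m = STRENGTH_RE.match(text.strip())
    if m:
        return m.group(1).strip(), float(m.group(2))
    return text.strip(), None
```

A plain trigger word passes through unchanged, so the node can mix weighted and unweighted entries in one list.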

@@ -15,12 +15,9 @@ class WanVideoLoraSelectLM:
"required": {
"low_mem_load": ("BOOLEAN", {"default": False, "tooltip": "Load LORA models with less VRAM usage, slower loading. This affects ALL LoRAs, not just the current ones. No effect if merge_loras is False"}),
"merge_loras": ("BOOLEAN", {"default": True, "tooltip": "Merge LoRAs into the model, otherwise they are loaded on the fly. Always disabled for GGUF and scaled fp8 models. This affects ALL LoRAs, not just the current one"}),
"text": ("STRING", {
"multiline": True,
"pysssss.autocomplete": False,
"dynamicPrompts": True,
"text": ("AUTOCOMPLETE_TEXT_LORAS", {
"placeholder": "Search LoRAs to add...",
"tooltip": "Format: <lora:lora_name:strength> separated by spaces or punctuation",
"placeholder": "LoRA syntax input: <lora:name:strength>"
}),
},
"optional": FlexibleOptionalInputType(any_type),

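The `<lora:name:strength>` format the tooltip describes can be parsed with a small regex; a sketch (the actual parsing in the codebase may differ):

```python
import re

LORA_TAG_RE = re.compile(r"<lora:([^:>]+):([\d.]+)>")

def parse_lora_tags(text: str):
    """Extract (name, strength) pairs from '<lora:name:strength>' tags in free text."""
    return [(name, float(s)) for name, s in LORA_TAG_RE.findall(text)]
```

Tags separated by spaces or punctuation are all picked up, since `findall` scans the whole string rather than anchoring at the start.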

@@ -7,7 +7,7 @@ import logging
logger = logging.getLogger(__name__)
# Define the new node class
class WanVideoLoraSelectFromText:
class WanVideoLoraTextSelectLM:
# Name shown for the node in the UI
NAME = "WanVideo Lora Select From Text (LoraManager)"
# Category the node belongs to
@@ -115,11 +115,3 @@ class WanVideoLoraSelectFromText:
active_loras_text = " ".join(formatted_loras)
return (loras_list, trigger_words_text, active_loras_text)
NODE_CLASS_MAPPINGS = {
"WanVideoLoraSelectFromText": WanVideoLoraSelectFromText
}
NODE_DISPLAY_NAME_MAPPINGS = {
"WanVideoLoraSelectFromText": "WanVideo Lora Select From Text (LoraManager)"
}


@@ -231,6 +231,8 @@ class SettingsHandler:
"enable_metadata_archive_db",
"language",
"use_portable_settings",
"onboarding_completed",
"dismissed_banners",
"proxy_enabled",
"proxy_type",
"proxy_host",
@@ -253,6 +255,7 @@ class SettingsHandler:
"model_name_display",
"update_flag_strategy",
"auto_organize_exclusions",
"filter_presets",
)
_PROXY_KEYS = {
@@ -1201,6 +1204,80 @@ class FileSystemHandler:
return web.json_response({"success": False, "error": str(exc)}, status=500)
class CustomWordsHandler:
"""Handler for autocomplete via TagFTSIndex."""
def __init__(self) -> None:
from ...services.custom_words_service import get_custom_words_service
self._service = get_custom_words_service()
async def search_custom_words(self, request: web.Request) -> web.Response:
"""Search custom words with autocomplete.
Query parameters:
search: The search term to match against.
limit: Maximum number of results to return (default: 20).
category: Optional category filter. Can be:
- A category name (e.g., "character", "artist", "general")
- Comma-separated category IDs (e.g., "4,11" for character)
enriched: If "true", return enriched results with category and post_count
even without category filtering.
"""
try:
search_term = request.query.get("search", "")
limit = int(request.query.get("limit", "20"))
category_param = request.query.get("category", "")
enriched_param = request.query.get("enriched", "").lower() == "true"
# Parse category parameter
categories = None
if category_param:
categories = self._parse_category_param(category_param)
results = self._service.search_words(
search_term, limit, categories=categories, enriched=enriched_param
)
return web.json_response({
"success": True,
"words": results
})
except Exception as exc:
logger.error("Error searching custom words: %s", exc, exc_info=True)
return web.json_response({"error": str(exc)}, status=500)
def _parse_category_param(self, param: str) -> list[int] | None:
"""Parse category parameter into list of category IDs.
Args:
param: Category parameter value (name or comma-separated IDs).
Returns:
List of category IDs, or None if parsing fails.
"""
from ...services.tag_fts_index import CATEGORY_NAME_TO_IDS
param = param.strip().lower()
if not param:
return None
# Try to parse as category name first
if param in CATEGORY_NAME_TO_IDS:
return CATEGORY_NAME_TO_IDS[param]
# Try to parse as comma-separated integers
try:
category_ids = []
for part in param.split(","):
part = part.strip()
if part:
category_ids.append(int(part))
return category_ids if category_ids else None
except ValueError:
logger.debug("Invalid category parameter: %s", param)
return None
class NodeRegistryHandler:
def __init__(
self,
@@ -1427,6 +1504,7 @@ class MiscHandlerSet:
model_library: ModelLibraryHandler,
metadata_archive: MetadataArchiveHandler,
filesystem: FileSystemHandler,
custom_words: CustomWordsHandler,
) -> None:
self.health = health
self.settings = settings
@@ -1438,6 +1516,7 @@ class MiscHandlerSet:
self.model_library = model_library
self.metadata_archive = metadata_archive
self.filesystem = filesystem
self.custom_words = custom_words
def to_route_mapping(
self,
@@ -1465,6 +1544,7 @@ class MiscHandlerSet:
"get_model_versions_status": self.model_library.get_model_versions_status,
"open_file_location": self.filesystem.open_file_location,
"open_settings_location": self.filesystem.open_settings_location,
"search_custom_words": self.custom_words.search_custom_words,
}

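The two accepted shapes of the `category` query parameter (a known name, or comma-separated IDs) can be exercised in isolation; `CATEGORY_NAME_TO_IDS` below is a hypothetical stand-in for the real mapping in `tag_fts_index`:

```python
# Hypothetical stand-in for py/services/tag_fts_index.CATEGORY_NAME_TO_IDS
CATEGORY_NAME_TO_IDS = {"character": [4, 11], "artist": [1], "general": [0]}

def parse_category_param(param: str):
    """Resolve a category name or comma-separated IDs to a list of IDs, else None."""
    param = param.strip().lower()
    if not param:
        return None
    if param in CATEGORY_NAME_TO_IDS:
        return CATEGORY_NAME_TO_IDS[param]
    try:
        ids = [int(part.strip()) for part in param.split(",") if part.strip()]
        return ids or None
    except ValueError:
        return None
```

Returning `None` on any parse failure means an unrecognized filter simply degrades to an unfiltered search instead of erroring.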

@@ -41,10 +41,8 @@ class PreviewHandler:
raise web.HTTPBadRequest(text="Unable to resolve preview path") from exc
resolved_str = str(resolved)
# TODO: Temporarily disabled path validation due to issues #772 and #774
# Re-enable after fixing preview root path handling
# if not self._config.is_preview_path_allowed(resolved_str):
# raise web.HTTPForbidden(text="Preview path is not within an allowed directory")
if not self._config.is_preview_path_allowed(resolved_str):
raise web.HTTPForbidden(text="Preview path is not within an allowed directory")
if not resolved.is_file():
logger.debug("Preview file not found at %s", resolved_str)


@@ -63,6 +63,11 @@ class LoraRoutes(BaseModelRoutes):
"POST", "/api/lm/{prefix}/random-sample", prefix, self.get_random_loras
)
# Cycler routes
registrar.add_prefixed_route(
"POST", "/api/lm/{prefix}/cycler-list", prefix, self.get_cycler_list
)
# ComfyUI integration
registrar.add_prefixed_route(
"POST", "/api/lm/{prefix}/get_trigger_words", prefix, self.get_trigger_words
@@ -283,6 +288,29 @@ class LoraRoutes(BaseModelRoutes):
logger.error(f"Error getting random LoRAs: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
async def get_cycler_list(self, request: web.Request) -> web.Response:
"""Get filtered and sorted LoRA list for cycler widget"""
try:
json_data = await request.json()
# Parse parameters
pool_config = json_data.get("pool_config")
sort_by = json_data.get("sort_by", "filename")
# Get cycler list from service
lora_list = await self.service.get_cycler_list(
pool_config=pool_config,
sort_by=sort_by
)
return web.json_response(
{"success": True, "loras": lora_list, "count": len(lora_list)}
)
except Exception as e:
logger.error(f"Error getting cycler list: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
async def get_trigger_words(self, request: web.Request) -> web.Response:
"""Get trigger words for specified LoRA models"""
try:

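A client of the new cycler-list route only needs to post a small JSON body; a sketch (the `loras` prefix is an assumption — the actual prefix is supplied when `setup_routes` registers the route):

```python
import json

# Hypothetical request body for POST /api/lm/loras/cycler-list
payload = {
    "pool_config": None,    # optional filters from a LoRA Pool node
    "sort_by": "filename",  # the handler's default when omitted
}
body = json.dumps(payload)
# A successful response has the shape:
# {"success": true, "loras": [...], "count": <len(loras)>}
```

Both fields are optional on the server side, so an empty `{}` body is also valid and falls back to the defaults shown above.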

@@ -0,0 +1,112 @@
import logging
from typing import Dict
from aiohttp import web
from .base_model_routes import BaseModelRoutes
from .model_route_registrar import ModelRouteRegistrar
from ..services.misc_service import MiscService
from ..services.service_registry import ServiceRegistry
from ..config import config
logger = logging.getLogger(__name__)
class MiscModelRoutes(BaseModelRoutes):
"""Misc-specific route controller (VAE, Upscaler)"""
def __init__(self):
"""Initialize Misc routes with Misc service"""
super().__init__()
self.template_name = "misc.html"
async def initialize_services(self):
"""Initialize services from ServiceRegistry"""
misc_scanner = await ServiceRegistry.get_misc_scanner()
update_service = await ServiceRegistry.get_model_update_service()
self.service = MiscService(misc_scanner, update_service=update_service)
self.set_model_update_service(update_service)
# Attach service dependencies
self.attach_service(self.service)
def setup_routes(self, app: web.Application):
"""Setup Misc routes"""
# Schedule service initialization on app startup
app.on_startup.append(lambda _: self.initialize_services())
# Setup common routes with 'misc' prefix (includes page route)
super().setup_routes(app, 'misc')
def setup_specific_routes(self, registrar: ModelRouteRegistrar, prefix: str):
"""Setup Misc-specific routes"""
# Misc info by name
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/info/{name}', prefix, self.get_misc_info)
# VAE roots and Upscaler roots
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/vae_roots', prefix, self.get_vae_roots)
registrar.add_prefixed_route('GET', '/api/lm/{prefix}/upscaler_roots', prefix, self.get_upscaler_roots)
def _validate_civitai_model_type(self, model_type: str) -> bool:
"""Validate CivitAI model type for Misc (VAE or Upscaler)"""
return model_type.lower() in ['vae', 'upscaler']
def _get_expected_model_types(self) -> str:
"""Get expected model types string for error messages"""
return "VAE or Upscaler"
def _parse_specific_params(self, request: web.Request) -> Dict:
"""Parse Misc-specific parameters"""
params: Dict = {}
if 'misc_hash' in request.query:
params['hash_filters'] = {'single_hash': request.query['misc_hash'].lower()}
elif 'misc_hashes' in request.query:
params['hash_filters'] = {
'multiple_hashes': [h.lower() for h in request.query['misc_hashes'].split(',')]
}
return params
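The `_parse_specific_params` hunk above is a small pure transformation of the query string; a standalone sketch (the `parse_hash_filters` name is hypothetical, mirroring the handler's logic):

```python
from typing import Dict

def parse_hash_filters(query: Dict[str, str]) -> Dict:
    """Mirror of the misc hash query parsing: a single hash takes precedence over a list."""
    params: Dict = {}
    if 'misc_hash' in query:
        params['hash_filters'] = {'single_hash': query['misc_hash'].lower()}
    elif 'misc_hashes' in query:
        params['hash_filters'] = {
            'multiple_hashes': [h.lower() for h in query['misc_hashes'].split(',')]
        }
    return params

# Hashes are lowercased so lookups against the hash index are case-insensitive
print(parse_hash_filters({'misc_hashes': 'ABC123,def456'}))
```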
async def get_misc_info(self, request: web.Request) -> web.Response:
"""Get detailed information for a specific misc model by name"""
try:
name = request.match_info.get('name', '')
misc_info = await self.service.get_model_info_by_name(name)
if misc_info:
return web.json_response(misc_info)
else:
return web.json_response({"error": "Misc model not found"}, status=404)
except Exception as e:
logger.error(f"Error in get_misc_info: {e}", exc_info=True)
return web.json_response({"error": str(e)}, status=500)
async def get_vae_roots(self, request: web.Request) -> web.Response:
"""Return the list of VAE roots from config"""
try:
roots = config.vae_roots
return web.json_response({
"success": True,
"roots": roots
})
except Exception as e:
logger.error(f"Error getting VAE roots: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def get_upscaler_roots(self, request: web.Request) -> web.Response:
"""Return the list of upscaler roots from config"""
try:
roots = config.upscaler_roots
return web.json_response({
"success": True,
"roots": roots
})
except Exception as e:
logger.error(f"Error getting upscaler roots: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)

View File

@@ -42,6 +42,7 @@ MISC_ROUTE_DEFINITIONS: tuple[RouteDefinition, ...] = (
RouteDefinition("GET", "/api/lm/metadata-archive-status", "get_metadata_archive_status"),
RouteDefinition("GET", "/api/lm/model-versions-status", "get_model_versions_status"),
RouteDefinition("POST", "/api/lm/settings/open-location", "open_settings_location"),
RouteDefinition("GET", "/api/lm/custom-words/search", "search_custom_words"),
)

View File

@@ -18,6 +18,7 @@ from ..services.settings_manager import get_settings_manager
from ..services.downloader import get_downloader
from ..utils.usage_stats import UsageStats
from .handlers.misc_handlers import (
CustomWordsHandler,
FileSystemHandler,
HealthCheckHandler,
LoraCodeHandler,
@@ -117,6 +118,7 @@ class MiscRoutes:
service_registry=self._service_registry_adapter,
metadata_provider_factory=self._metadata_provider_factory,
)
custom_words = CustomWordsHandler()
return self._handler_set_factory(
health=health,
@@ -129,6 +131,7 @@ class MiscRoutes:
model_library=model_library,
metadata_archive=metadata_archive,
filesystem=filesystem,
custom_words=custom_words,
)

View File

@@ -45,8 +45,9 @@ class UpdateRoutes:
# Fetch remote version from GitHub
if nightly:
remote_version, changelog = await UpdateRoutes._get_nightly_version()
releases = None
else:
remote_version, changelog = await UpdateRoutes._get_remote_version()
remote_version, changelog, releases = await UpdateRoutes._get_remote_version()
# Compare versions
if nightly:
@@ -59,7 +60,7 @@ class UpdateRoutes:
remote_version.replace('v', '')
)
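The comparison above strips the leading `v` before handing versions to a helper that is outside this hunk. A minimal sketch of numeric tuple comparison, assuming plain `vX.Y.Z` tags (both helper names here are hypothetical):

```python
def version_tuple(v: str) -> tuple:
    # Strip a leading 'v' and compare dot-separated components numerically,
    # so 'v1.10.0' correctly sorts after 'v1.2.9'
    return tuple(int(part) for part in v.lstrip('v').split('.'))

def update_available(local: str, remote: str) -> bool:
    return version_tuple(remote) > version_tuple(local)

print(update_available('v1.2.9', 'v1.10.0'))
```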
return web.json_response({
response_data = {
'success': True,
'current_version': local_version,
'latest_version': remote_version,
@@ -67,7 +68,13 @@ class UpdateRoutes:
'changelog': changelog,
'git_info': git_info,
'nightly': nightly
})
}
# Include releases list for stable mode
if releases is not None:
response_data['releases'] = releases
return web.json_response(response_data)
except NETWORK_EXCEPTIONS as e:
logger.warning("Network unavailable during update check: %s", e)
@@ -443,42 +450,58 @@ class UpdateRoutes:
return git_info
@staticmethod
async def _get_remote_version() -> tuple[str, List[str]]:
async def _get_remote_version() -> tuple[str, List[str], List[Dict]]:
"""
Fetch remote version from GitHub
Returns:
tuple: (version string, changelog list)
tuple: (version string, changelog list, releases list)
"""
repo_owner = "willmiao"
repo_name = "ComfyUI-Lora-Manager"
# Use GitHub API to fetch the latest release
github_url = f"https://api.github.com/repos/{repo_owner}/{repo_name}/releases/latest"
# Use GitHub API to fetch the last 5 releases
github_url = f"https://api.github.com/repos/{repo_owner}/{repo_name}/releases?per_page=5"
try:
downloader = await get_downloader()
success, data = await downloader.make_request('GET', github_url, custom_headers={'Accept': 'application/vnd.github+json'})
if not success:
logger.warning(f"Failed to fetch GitHub release: {data}")
return "v0.0.0", []
logger.warning(f"Failed to fetch GitHub releases: {data}")
return "v0.0.0", [], []
version = data.get('tag_name', '')
if not version.startswith('v'):
version = f"v{version}"
# Parse releases
releases = []
for i, release in enumerate(data):
version = release.get('tag_name', '')
if not version.startswith('v'):
version = f"v{version}"
# Extract changelog from release notes
body = release.get('body', '')
changelog = UpdateRoutes._parse_changelog(body)
releases.append({
'version': version,
'changelog': changelog,
'published_at': release.get('published_at', ''),
'is_latest': i == 0
})
# Extract changelog from release notes
body = data.get('body', '')
changelog = UpdateRoutes._parse_changelog(body)
# Get latest version and its changelog
if releases:
latest_version = releases[0]['version']
latest_changelog = releases[0]['changelog']
return latest_version, latest_changelog, releases
return version, changelog
return "v0.0.0", [], []
except NETWORK_EXCEPTIONS as e:
logger.warning("Unable to reach GitHub for release info: %s", e)
return "v0.0.0", []
return "v0.0.0", [], []
except Exception as e:
logger.error(f"Error fetching remote version: {e}", exc_info=True)
return "v0.0.0", []
return "v0.0.0", [], []
@staticmethod
def _parse_changelog(release_notes: str) -> List[str]:

View File

@@ -5,7 +5,7 @@ import logging
import os
import time
from ..utils.constants import VALID_LORA_TYPES
from ..utils.constants import VALID_LORA_SUB_TYPES, VALID_CHECKPOINT_SUB_TYPES
from ..utils.models import BaseModelMetadata
from ..utils.metadata_manager import MetadataManager
from ..utils.usage_stats import UsageStats
@@ -15,8 +15,8 @@ from .model_query import (
ModelFilterSet,
SearchStrategy,
SettingsProvider,
normalize_civitai_model_type,
resolve_civitai_model_type,
normalize_sub_type,
resolve_sub_type,
)
from .settings_manager import get_settings_manager
@@ -568,16 +568,21 @@ class BaseModelService(ABC):
return await self.scanner.get_base_models(limit)
async def get_model_types(self, limit: int = 20) -> List[Dict[str, Any]]:
"""Get counts of normalized CivitAI model types present in the cache."""
"""Get counts of sub-types present in the cache."""
cache = await self.scanner.get_cached_data()
type_counts: Dict[str, int] = {}
for entry in cache.raw_data:
normalized_type = normalize_civitai_model_type(
resolve_civitai_model_type(entry)
)
if not normalized_type or normalized_type not in VALID_LORA_TYPES:
normalized_type = normalize_sub_type(resolve_sub_type(entry))
if not normalized_type:
continue
# Filter by valid sub-types based on scanner type
if self.model_type == "lora" and normalized_type not in VALID_LORA_SUB_TYPES:
continue
if self.model_type == "checkpoint" and normalized_type not in VALID_CHECKPOINT_SUB_TYPES:
continue
type_counts[normalized_type] = type_counts.get(normalized_type, 0) + 1
sorted_types = sorted(
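The counting loop in `get_model_types` can be sketched on its own; the sub-type set below is an illustrative subset, not the real `VALID_LORA_SUB_TYPES` constant:

```python
from typing import Dict, List

# Illustrative subset standing in for VALID_LORA_SUB_TYPES
VALID_LORA_SUB_TYPES = {'lora', 'locon', 'dora'}

def count_sub_types(entries: List[Dict], valid: set) -> Dict[str, int]:
    """Count normalized (lowercased) sub-types, skipping missing or invalid ones."""
    counts: Dict[str, int] = {}
    for entry in entries:
        sub_type = (entry.get('sub_type') or '').lower()
        if not sub_type or sub_type not in valid:
            continue
        counts[sub_type] = counts.get(sub_type, 0) + 1
    return counts

entries = [{'sub_type': 'LoCon'}, {'sub_type': 'lora'}, {'sub_type': 'vae'}, {}]
print(count_sub_types(entries, VALID_LORA_SUB_TYPES))
```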

View File

@@ -21,7 +21,8 @@ class CheckpointScanner(ModelScanner):
hash_index=ModelHashIndex()
)
def _resolve_model_type(self, root_path: Optional[str]) -> Optional[str]:
def _resolve_sub_type(self, root_path: Optional[str]) -> Optional[str]:
"""Resolve the sub-type based on the root path."""
if not root_path:
return None
@@ -34,18 +35,19 @@ class CheckpointScanner(ModelScanner):
return None
def adjust_metadata(self, metadata, file_path, root_path):
if hasattr(metadata, "model_type"):
model_type = self._resolve_model_type(root_path)
if model_type:
metadata.model_type = model_type
"""Adjust metadata during scanning to set sub_type."""
sub_type = self._resolve_sub_type(root_path)
if sub_type:
metadata.sub_type = sub_type
return metadata
def adjust_cached_entry(self, entry: Dict[str, Any]) -> Dict[str, Any]:
model_type = self._resolve_model_type(
"""Adjust entries loaded from the persisted cache to ensure sub_type is set."""
sub_type = self._resolve_sub_type(
self._find_root_for_file(entry.get("file_path"))
)
if model_type:
entry["model_type"] = model_type
if sub_type:
entry["sub_type"] = sub_type
return entry
def get_model_roots(self) -> List[str]:

View File

@@ -22,6 +22,9 @@ class CheckpointService(BaseModelService):
async def format_response(self, checkpoint_data: Dict) -> Dict:
"""Format Checkpoint data for API response"""
# Get sub_type from cache entry (new canonical field)
sub_type = checkpoint_data.get("sub_type", "checkpoint")
return {
"model_name": checkpoint_data["model_name"],
"file_name": checkpoint_data["file_name"],
@@ -37,7 +40,7 @@ class CheckpointService(BaseModelService):
"from_civitai": checkpoint_data.get("from_civitai", True),
"usage_count": checkpoint_data.get("usage_count", 0),
"notes": checkpoint_data.get("notes", ""),
"model_type": checkpoint_data.get("model_type", "checkpoint"),
"sub_type": sub_type,
"favorite": checkpoint_data.get("favorite", False),
"update_available": bool(checkpoint_data.get("update_available", False)),
"civitai": self.filter_civitai_data(checkpoint_data.get("civitai", {}), minimal=True)

View File

@@ -0,0 +1,91 @@
"""Service for managing autocomplete via TagFTSIndex.
This service provides full-text search capabilities for Danbooru/e621 tags
with category filtering and enriched results including post counts.
"""
from __future__ import annotations
import logging
from typing import List, Dict, Any, Optional
logger = logging.getLogger(__name__)
class CustomWordsService:
"""Service for autocomplete via TagFTSIndex.
This service:
- Uses TagFTSIndex for fast full-text search of Danbooru/e621 tags
- Supports category-based filtering
- Returns enriched results with category and post_count
- Provides sub-100ms search times for 221k+ tags
"""
_instance: Optional[CustomWordsService] = None
_initialized: bool = False
def __new__(cls) -> CustomWordsService:
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
def __init__(self) -> None:
if self._initialized:
return
self._tag_index: Optional[Any] = None
self._initialized = True
@classmethod
def get_instance(cls) -> CustomWordsService:
"""Get the singleton instance of CustomWordsService."""
if cls._instance is None:
cls._instance = cls()
return cls._instance
def _get_tag_index(self):
"""Get or create the TagFTSIndex instance (lazy initialization)."""
if self._tag_index is None:
try:
from .tag_fts_index import get_tag_fts_index
self._tag_index = get_tag_fts_index()
except Exception as e:
logger.warning(f"Failed to initialize TagFTSIndex: {e}")
self._tag_index = None
return self._tag_index
def search_words(
self,
search_term: str,
limit: int = 20,
categories: Optional[List[int]] = None,
enriched: bool = False
) -> List[Dict[str, Any]]:
"""Search tags using TagFTSIndex with category filtering.
Args:
search_term: The search term to match against.
limit: Maximum number of results to return.
categories: Optional list of category IDs to filter by.
enriched: Retained for backward compatibility; results are always
returned enriched with category and post_count.
Returns:
List of dicts with tag_name, category, and post_count.
"""
tag_index = self._get_tag_index()
if tag_index is not None:
results = tag_index.search(search_term, categories=categories, limit=limit)
return results
logger.debug("TagFTSIndex not available, returning empty results")
return []
def get_custom_words_service() -> CustomWordsService:
"""Factory function to get the CustomWordsService singleton."""
return CustomWordsService.get_instance()
__all__ = ["CustomWordsService", "get_custom_words_service"]

View File

@@ -9,7 +9,7 @@ from collections import OrderedDict
import uuid
from typing import Dict, List, Optional, Set, Tuple
from urllib.parse import urlparse
from ..utils.models import LoraMetadata, CheckpointMetadata, EmbeddingMetadata
from ..utils.models import LoraMetadata, CheckpointMetadata, EmbeddingMetadata, MiscMetadata
from ..utils.constants import CARD_PREVIEW_WIDTH, DIFFUSION_MODEL_BASE_MODELS, VALID_LORA_TYPES
from ..utils.civitai_utils import rewrite_preview_url
from ..utils.preview_selection import select_preview_media
@@ -60,6 +60,10 @@ class DownloadManager:
"""Get the checkpoint scanner from registry"""
return await ServiceRegistry.get_checkpoint_scanner()
async def _get_misc_scanner(self):
"""Get the misc scanner from registry"""
return await ServiceRegistry.get_misc_scanner()
async def download_from_civitai(
self,
model_id: int = None,
@@ -275,6 +279,7 @@ class DownloadManager:
lora_scanner = await self._get_lora_scanner()
checkpoint_scanner = await self._get_checkpoint_scanner()
embedding_scanner = await ServiceRegistry.get_embedding_scanner()
misc_scanner = await self._get_misc_scanner()
# Check lora scanner first
if await lora_scanner.check_model_version_exists(model_version_id):
@@ -299,6 +304,13 @@ class DownloadManager:
"error": "Model version already exists in embedding library",
}
# Check misc scanner (VAE, Upscaler)
if await misc_scanner.check_model_version_exists(model_version_id):
return {
"success": False,
"error": "Model version already exists in misc library",
}
# Use CivArchive provider directly when source is 'civarchive'
# This prioritizes CivArchive metadata (with mirror availability info) over Civitai
if source == "civarchive":
@@ -337,6 +349,10 @@ class DownloadManager:
model_type = "lora"
elif model_type_from_info == "textualinversion":
model_type = "embedding"
elif model_type_from_info == "vae":
model_type = "misc"
elif model_type_from_info == "upscaler":
model_type = "misc"
else:
return {
"success": False,
@@ -379,6 +395,14 @@ class DownloadManager:
"success": False,
"error": "Model version already exists in embedding library",
}
elif model_type == "misc":
# Check misc scanner (VAE, Upscaler)
misc_scanner = await self._get_misc_scanner()
if await misc_scanner.check_model_version_exists(version_id):
return {
"success": False,
"error": "Model version already exists in misc library",
}
# Handle use_default_paths
if use_default_paths:
@@ -413,6 +437,26 @@ class DownloadManager:
"error": "Default embedding root path not set in settings",
}
save_dir = default_path
elif model_type == "misc":
from ..config import config
civitai_type = version_info.get("model", {}).get("type", "").lower()
if civitai_type == "vae":
default_paths = config.vae_roots
error_msg = "VAE root path not configured"
elif civitai_type == "upscaler":
default_paths = config.upscaler_roots
error_msg = "Upscaler root path not configured"
else:
default_paths = config.misc_roots
error_msg = "Misc root path not configured"
if not default_paths:
return {
"success": False,
"error": error_msg,
}
save_dir = default_paths[0] if default_paths else ""
# Calculate relative path using template
relative_path = self._calculate_relative_path(version_info, model_type)
@@ -515,6 +559,11 @@ class DownloadManager:
version_info, file_info, save_path
)
logger.info(f"Creating EmbeddingMetadata for {file_name}")
elif model_type == "misc":
metadata = MiscMetadata.from_civitai_info(
version_info, file_info, save_path
)
logger.info(f"Creating MiscMetadata for {file_name}")
# 6. Start download process
result = await self._execute_download(
@@ -620,6 +669,8 @@ class DownloadManager:
scanner = await self._get_checkpoint_scanner()
elif model_type == "embedding":
scanner = await ServiceRegistry.get_embedding_scanner()
elif model_type == "misc":
scanner = await self._get_misc_scanner()
except Exception as exc:
logger.debug("Failed to acquire scanner for %s models: %s", model_type, exc)
@@ -1016,6 +1067,9 @@ class DownloadManager:
elif model_type == "embedding":
scanner = await ServiceRegistry.get_embedding_scanner()
logger.info(f"Updating embedding cache for {actual_file_paths[0]}")
elif model_type == "misc":
scanner = await self._get_misc_scanner()
logger.info(f"Updating misc cache for {actual_file_paths[0]}")
adjust_cached_entry = (
getattr(scanner, "adjust_cached_entry", None)
@@ -1125,6 +1179,14 @@ class DownloadManager:
".pkl",
".sft",
}
if model_type == "misc":
return {
".ckpt",
".pt",
".bin",
".pth",
".safetensors",
}
return {".safetensors"}
async def _extract_model_files_from_archive(

View File

@@ -22,6 +22,9 @@ class EmbeddingService(BaseModelService):
async def format_response(self, embedding_data: Dict) -> Dict:
"""Format Embedding data for API response"""
# Get sub_type from cache entry (new canonical field)
sub_type = embedding_data.get("sub_type", "embedding")
return {
"model_name": embedding_data["model_name"],
"file_name": embedding_data["file_name"],
@@ -37,7 +40,7 @@ class EmbeddingService(BaseModelService):
"from_civitai": embedding_data.get("from_civitai", True),
# "usage_count": embedding_data.get("usage_count", 0), # TODO: Enable when embedding usage tracking is implemented
"notes": embedding_data.get("notes", ""),
"model_type": embedding_data.get("model_type", "embedding"),
"sub_type": sub_type,
"favorite": embedding_data.get("favorite", False),
"update_available": bool(embedding_data.get("update_available", False)),
"civitai": self.filter_civitai_data(embedding_data.get("civitai", {}), minimal=True)

View File

@@ -3,6 +3,7 @@ import logging
from typing import Dict, List, Optional
from .base_model_service import BaseModelService
from .model_query import resolve_sub_type
from ..utils.models import LoraMetadata
from ..config import config
@@ -23,6 +24,10 @@ class LoraService(BaseModelService):
async def format_response(self, lora_data: Dict) -> Dict:
"""Format LoRA data for API response"""
# Resolve sub_type using priority: sub_type > model_type > civitai.model.type > default
# Normalize to lowercase for consistent API responses
sub_type = resolve_sub_type(lora_data).lower()
return {
"model_name": lora_data["model_name"],
"file_name": lora_data["file_name"],
@@ -43,6 +48,7 @@ class LoraService(BaseModelService):
"notes": lora_data.get("notes", ""),
"favorite": lora_data.get("favorite", False),
"update_available": bool(lora_data.get("update_available", False)),
"sub_type": sub_type,
"civitai": self.filter_civitai_data(
lora_data.get("civitai", {}), minimal=True
),
@@ -479,3 +485,49 @@ class LoraService(BaseModelService):
]
return available_loras
async def get_cycler_list(
self,
pool_config: Optional[Dict] = None,
sort_by: str = "filename"
) -> List[Dict]:
"""
Get filtered and sorted LoRA list for cycling.
Args:
pool_config: Optional pool config for filtering (filters dict)
sort_by: Sort field - 'filename' or 'model_name'
Returns:
List of LoRA dicts with file_name and model_name
"""
# Get cached data
cache = await self.scanner.get_cached_data(force_refresh=False)
available_loras = cache.raw_data if cache else []
# Apply pool filters if provided
if pool_config:
available_loras = await self._apply_pool_filters(
available_loras, pool_config
)
# Sort by specified field
if sort_by == "model_name":
available_loras = sorted(
available_loras,
key=lambda x: (x.get("model_name") or x.get("file_name", "")).lower()
)
else: # Default to filename
available_loras = sorted(
available_loras,
key=lambda x: x.get("file_name", "").lower()
)
# Return minimal data needed for cycling
return [
{
"file_name": lora["file_name"],
"model_name": lora.get("model_name", lora["file_name"]),
}
for lora in available_loras
]
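The two sort branches in `get_cycler_list` differ only in their key functions; a small sketch with sample data showing how the `model_name` key falls back to `file_name` when the name is missing:

```python
loras = [
    {"file_name": "a_char.safetensors", "model_name": "Zeta Character"},
    {"file_name": "b_style.safetensors", "model_name": "Alpha Style"},
    {"file_name": "c_untitled.safetensors", "model_name": None},
]

# model_name sort falls back to file_name when model_name is empty or None
by_model = sorted(loras, key=lambda x: (x.get("model_name") or x.get("file_name", "")).lower())
by_file = sorted(loras, key=lambda x: x.get("file_name", "").lower())

print([l["file_name"] for l in by_model])
print([l["file_name"] for l in by_file])
```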

View File

@@ -0,0 +1,55 @@
import logging
from typing import Any, Dict, List, Optional
from ..utils.models import MiscMetadata
from ..config import config
from .model_scanner import ModelScanner
from .model_hash_index import ModelHashIndex
logger = logging.getLogger(__name__)
class MiscScanner(ModelScanner):
"""Service for scanning and managing misc files (VAE, Upscaler)"""
def __init__(self):
# Define supported file extensions (combined from VAE and upscaler)
file_extensions = {'.safetensors', '.pt', '.bin', '.ckpt', '.pth'}
super().__init__(
model_type="misc",
model_class=MiscMetadata,
file_extensions=file_extensions,
hash_index=ModelHashIndex()
)
def _resolve_sub_type(self, root_path: Optional[str]) -> Optional[str]:
"""Resolve the sub-type based on the root path."""
if not root_path:
return None
if config.vae_roots and root_path in config.vae_roots:
return "vae"
if config.upscaler_roots and root_path in config.upscaler_roots:
return "upscaler"
return None
def adjust_metadata(self, metadata, file_path, root_path):
"""Adjust metadata during scanning to set sub_type."""
sub_type = self._resolve_sub_type(root_path)
if sub_type:
metadata.sub_type = sub_type
return metadata
def adjust_cached_entry(self, entry: Dict[str, Any]) -> Dict[str, Any]:
"""Adjust entries loaded from the persisted cache to ensure sub_type is set."""
sub_type = self._resolve_sub_type(
self._find_root_for_file(entry.get("file_path"))
)
if sub_type:
entry["sub_type"] = sub_type
return entry
def get_model_roots(self) -> List[str]:
"""Get misc root directories (VAE and upscaler)"""
return config.misc_roots
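The root-path classification in `MiscScanner._resolve_sub_type` can be sketched standalone; the root lists below are hypothetical stand-ins for `config.vae_roots` and `config.upscaler_roots`:

```python
from typing import List, Optional

# Hypothetical root lists standing in for config.vae_roots / config.upscaler_roots
VAE_ROOTS: List[str] = ["/models/vae"]
UPSCALER_ROOTS: List[str] = ["/models/upscale_models"]

def resolve_sub_type_for_root(root_path: Optional[str]) -> Optional[str]:
    """Classify a scan root as 'vae' or 'upscaler', mirroring MiscScanner._resolve_sub_type."""
    if not root_path:
        return None
    if VAE_ROOTS and root_path in VAE_ROOTS:
        return "vae"
    if UPSCALER_ROOTS and root_path in UPSCALER_ROOTS:
        return "upscaler"
    return None

print(resolve_sub_type_for_root("/models/vae"),
      resolve_sub_type_for_root("/models/upscale_models"),
      resolve_sub_type_for_root("/elsewhere"))
```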

View File

@@ -0,0 +1,55 @@
import os
import logging
from typing import Dict
from .base_model_service import BaseModelService
from ..utils.models import MiscMetadata
from ..config import config
logger = logging.getLogger(__name__)
class MiscService(BaseModelService):
"""Misc-specific service implementation (VAE, Upscaler)"""
def __init__(self, scanner, update_service=None):
"""Initialize Misc service
Args:
scanner: Misc scanner instance
update_service: Optional service for remote update tracking.
"""
super().__init__("misc", scanner, MiscMetadata, update_service=update_service)
async def format_response(self, misc_data: Dict) -> Dict:
"""Format Misc data for API response"""
# Get sub_type from cache entry (new canonical field)
sub_type = misc_data.get("sub_type", "vae")
return {
"model_name": misc_data["model_name"],
"file_name": misc_data["file_name"],
"preview_url": config.get_preview_static_url(misc_data.get("preview_url", "")),
"preview_nsfw_level": misc_data.get("preview_nsfw_level", 0),
"base_model": misc_data.get("base_model", ""),
"folder": misc_data["folder"],
"sha256": misc_data.get("sha256", ""),
"file_path": misc_data["file_path"].replace(os.sep, "/"),
"file_size": misc_data.get("size", 0),
"modified": misc_data.get("modified", ""),
"tags": misc_data.get("tags", []),
"from_civitai": misc_data.get("from_civitai", True),
"usage_count": misc_data.get("usage_count", 0),
"notes": misc_data.get("notes", ""),
"sub_type": sub_type,
"favorite": misc_data.get("favorite", False),
"update_available": bool(misc_data.get("update_available", False)),
"civitai": self.filter_civitai_data(misc_data.get("civitai", {}), minimal=True)
}
def find_duplicate_hashes(self) -> Dict:
"""Find Misc models with duplicate SHA256 hashes"""
return self.scanner._hash_index.get_duplicate_hashes()
def find_duplicate_filenames(self) -> Dict:
"""Find Misc models with conflicting filenames"""
return self.scanner._hash_index.get_duplicate_filenames()

View File

@@ -33,28 +33,42 @@ def _coerce_to_str(value: Any) -> Optional[str]:
return candidate if candidate else None
def normalize_civitai_model_type(value: Any) -> Optional[str]:
"""Return a lowercase string suitable for comparisons."""
def normalize_sub_type(value: Any) -> Optional[str]:
"""Return a lowercase string suitable for sub_type comparisons."""
candidate = _coerce_to_str(value)
return candidate.lower() if candidate else None
def resolve_civitai_model_type(entry: Mapping[str, Any]) -> str:
"""Extract the model type from CivitAI metadata, defaulting to LORA."""
def resolve_sub_type(entry: Mapping[str, Any]) -> str:
"""Extract the sub-type from metadata, checking multiple sources.
Priority:
1. entry['sub_type'] - new canonical field
2. entry['model_type'] - backward compatibility
3. civitai.model.type - CivitAI API data
4. DEFAULT_CIVITAI_MODEL_TYPE - fallback
"""
if not isinstance(entry, Mapping):
return DEFAULT_CIVITAI_MODEL_TYPE
# Priority 1: Check new canonical field 'sub_type'
sub_type = _coerce_to_str(entry.get("sub_type"))
if sub_type:
return sub_type
# Priority 2: Backward compatibility - check 'model_type' field
model_type = _coerce_to_str(entry.get("model_type"))
if model_type:
return model_type
# Priority 3: Extract from CivitAI metadata
civitai = entry.get("civitai")
if isinstance(civitai, Mapping):
civitai_model = civitai.get("model")
if isinstance(civitai_model, Mapping):
model_type = _coerce_to_str(civitai_model.get("type"))
if model_type:
return model_type
model_type = _coerce_to_str(entry.get("model_type"))
if model_type:
return model_type
civitai_type = _coerce_to_str(civitai_model.get("type"))
if civitai_type:
return civitai_type
return DEFAULT_CIVITAI_MODEL_TYPE
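The four-level priority documented above can be exercised with sample entries; this sketch inlines a simplified stand-in for `_coerce_to_str` and uses a hypothetical default:

```python
from typing import Any, Mapping

DEFAULT_SUB_TYPE = "lora"  # stand-in for DEFAULT_CIVITAI_MODEL_TYPE

def resolve_sub_type(entry: Any) -> str:
    """sub_type > legacy model_type > civitai.model.type > default."""
    if not isinstance(entry, Mapping):
        return DEFAULT_SUB_TYPE
    for key in ("sub_type", "model_type"):
        value = entry.get(key)
        if isinstance(value, str) and value.strip():
            return value.strip()
    civitai = entry.get("civitai")
    if isinstance(civitai, Mapping):
        model = civitai.get("model")
        if isinstance(model, Mapping):
            value = model.get("type")
            if isinstance(value, str) and value.strip():
                return value.strip()
    return DEFAULT_SUB_TYPE

print(resolve_sub_type({"sub_type": "LoCon"}))                      # priority 1
print(resolve_sub_type({"model_type": "lycoris"}))                  # priority 2
print(resolve_sub_type({"civitai": {"model": {"type": "LORA"}}}))   # priority 3
print(resolve_sub_type({}))                                         # fallback
```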
@@ -313,7 +327,7 @@ class ModelFilterSet:
normalized_model_types = {
model_type
for model_type in (
normalize_civitai_model_type(value) for value in model_types
normalize_sub_type(value) for value in model_types
)
if model_type
}
@@ -321,7 +335,7 @@ class ModelFilterSet:
items = [
item
for item in items
if normalize_civitai_model_type(resolve_civitai_model_type(item))
if normalize_sub_type(resolve_sub_type(item))
in normalized_model_types
]
model_types_duration = time.perf_counter() - t0

View File

@@ -275,9 +275,10 @@ class ModelScanner:
_, license_flags = resolve_license_info(license_source or {})
entry['license_flags'] = license_flags
model_type = get_value('model_type', None)
if model_type:
entry['model_type'] = model_type
# Handle sub_type (new canonical field)
sub_type = get_value('sub_type', None)
if sub_type:
entry['sub_type'] = sub_type
return entry

View File

@@ -118,19 +118,24 @@ class ModelServiceFactory:
def register_default_model_types():
"""Register the default model types (LoRA, Checkpoint, and Embedding)"""
"""Register the default model types (LoRA, Checkpoint, Embedding, and Misc)"""
from ..services.lora_service import LoraService
from ..services.checkpoint_service import CheckpointService
from ..services.embedding_service import EmbeddingService
from ..services.misc_service import MiscService
from ..routes.lora_routes import LoraRoutes
from ..routes.checkpoint_routes import CheckpointRoutes
from ..routes.embedding_routes import EmbeddingRoutes
from ..routes.misc_model_routes import MiscModelRoutes
# Register LoRA model type
ModelServiceFactory.register_model_type('lora', LoraService, LoraRoutes)
# Register Checkpoint model type
ModelServiceFactory.register_model_type('checkpoint', CheckpointService, CheckpointRoutes)
# Register Embedding model type
ModelServiceFactory.register_model_type('embedding', EmbeddingService, EmbeddingRoutes)
ModelServiceFactory.register_model_type('embedding', EmbeddingService, EmbeddingRoutes)
# Register Misc model type (VAE, Upscaler)
ModelServiceFactory.register_model_type('misc', MiscService, MiscModelRoutes)

View File

@@ -1,13 +1,12 @@
import json
import logging
import os
import re
import sqlite3
import threading
from dataclasses import dataclass
from typing import Dict, List, Mapping, Optional, Sequence, Tuple
from ..utils.settings_paths import get_project_root, get_settings_dir
from ..utils.cache_paths import CacheType, resolve_cache_path_with_migration
logger = logging.getLogger(__name__)
@@ -404,20 +403,12 @@ class PersistentModelCache:
# Internal helpers -------------------------------------------------
def _resolve_default_path(self, library_name: str) -> str:
override = os.environ.get("LORA_MANAGER_CACHE_DB")
if override:
return override
try:
settings_dir = get_settings_dir(create=True)
except Exception as exc: # pragma: no cover - defensive guard
logger.warning("Falling back to project directory for cache: %s", exc)
settings_dir = get_project_root()
safe_name = re.sub(r"[^A-Za-z0-9_.-]", "_", library_name or "default")
if safe_name.lower() in ("default", ""):
legacy_path = os.path.join(settings_dir, self._DEFAULT_FILENAME)
if os.path.exists(legacy_path):
return legacy_path
return os.path.join(settings_dir, "model_cache", f"{safe_name}.sqlite")
env_override = os.environ.get("LORA_MANAGER_CACHE_DB")
return resolve_cache_path_with_migration(
CacheType.MODEL,
library_name=library_name,
env_override=env_override,
)
def _initialize_schema(self) -> None:
with self._db_lock:

View File

@@ -0,0 +1,484 @@
"""SQLite-based persistent cache for recipe metadata.
This module provides fast recipe cache persistence using SQLite, enabling
quick startup by loading from cache instead of walking directories and
parsing JSON files.
"""
from __future__ import annotations
import json
import logging
import os
import sqlite3
import threading
from dataclasses import dataclass
from typing import Dict, List, Optional, Set, Tuple
from ..utils.cache_paths import CacheType, resolve_cache_path_with_migration
logger = logging.getLogger(__name__)
@dataclass
class PersistedRecipeData:
"""Lightweight structure returned by the persistent recipe cache."""
raw_data: List[Dict]
file_stats: Dict[str, Tuple[float, int]] # json_path -> (mtime, size)
class PersistentRecipeCache:
"""Persist recipe metadata in SQLite for fast startup."""
_DEFAULT_FILENAME = "recipe_cache.sqlite"
_RECIPE_COLUMNS: Tuple[str, ...] = (
"recipe_id",
"file_path",
"json_path",
"title",
"folder",
"base_model",
"fingerprint",
"created_date",
"modified",
"file_mtime",
"file_size",
"favorite",
"repair_version",
"preview_nsfw_level",
"loras_json",
"checkpoint_json",
"gen_params_json",
"tags_json",
)
_instances: Dict[str, "PersistentRecipeCache"] = {}
_instance_lock = threading.Lock()
def __init__(self, library_name: str = "default", db_path: Optional[str] = None) -> None:
self._library_name = library_name or "default"
self._db_path = db_path or self._resolve_default_path(self._library_name)
self._db_lock = threading.Lock()
self._schema_initialized = False
try:
directory = os.path.dirname(self._db_path)
if directory:
os.makedirs(directory, exist_ok=True)
except Exception as exc:
logger.warning("Could not create recipe cache directory %s: %s", directory, exc)
if self.is_enabled():
self._initialize_schema()
@classmethod
def get_default(cls, library_name: Optional[str] = None) -> "PersistentRecipeCache":
name = library_name or "default"
with cls._instance_lock:
if name not in cls._instances:
cls._instances[name] = cls(name)
return cls._instances[name]
@classmethod
def clear_instances(cls) -> None:
"""Clear all cached instances (useful for library switching)."""
with cls._instance_lock:
cls._instances.clear()
def is_enabled(self) -> bool:
return os.environ.get("LORA_MANAGER_DISABLE_PERSISTENT_CACHE", "0") != "1"
def get_database_path(self) -> str:
"""Expose the resolved SQLite database path."""
return self._db_path
def load_cache(self) -> Optional[PersistedRecipeData]:
"""Load all cached recipes from SQLite.
Returns:
PersistedRecipeData with raw_data and file_stats if cache exists,
None if cache is empty or unavailable.
"""
if not self.is_enabled():
return None
if not self._schema_initialized:
self._initialize_schema()
if not self._schema_initialized:
return None
try:
with self._db_lock:
conn = self._connect(readonly=True)
try:
# Load all recipes
columns_sql = ", ".join(self._RECIPE_COLUMNS)
rows = conn.execute(f"SELECT {columns_sql} FROM recipes").fetchall()
if not rows:
return None
finally:
conn.close()
except FileNotFoundError:
return None
except Exception as exc:
logger.warning("Failed to load persisted recipe cache: %s", exc)
return None
raw_data: List[Dict] = []
file_stats: Dict[str, Tuple[float, int]] = {}
for row in rows:
recipe = self._row_to_recipe(row)
raw_data.append(recipe)
json_path = row["json_path"]
if json_path:
file_stats[json_path] = (
row["file_mtime"] or 0.0,
row["file_size"] or 0,
)
return PersistedRecipeData(raw_data=raw_data, file_stats=file_stats)
def save_cache(self, recipes: List[Dict], json_paths: Optional[Dict[str, str]] = None) -> None:
"""Save all recipes to SQLite cache.
Args:
recipes: List of recipe dictionaries to persist.
json_paths: Optional mapping of recipe_id -> json_path for file stats.
"""
if not self.is_enabled():
return
if not self._schema_initialized:
self._initialize_schema()
if not self._schema_initialized:
return
try:
with self._db_lock:
conn = self._connect()
try:
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("BEGIN")
# Clear existing data
conn.execute("DELETE FROM recipes")
# Prepare and insert all rows
recipe_rows = []
for recipe in recipes:
recipe_id = str(recipe.get("id", ""))
if not recipe_id:
continue
json_path = ""
if json_paths:
json_path = json_paths.get(recipe_id, "")
row = self._prepare_recipe_row(recipe, json_path)
recipe_rows.append(row)
if recipe_rows:
placeholders = ", ".join(["?"] * len(self._RECIPE_COLUMNS))
columns = ", ".join(self._RECIPE_COLUMNS)
conn.executemany(
f"INSERT INTO recipes ({columns}) VALUES ({placeholders})",
recipe_rows,
)
conn.commit()
logger.debug("Persisted %d recipes to cache", len(recipe_rows))
finally:
conn.close()
except Exception as exc:
logger.warning("Failed to persist recipe cache: %s", exc)
def get_file_stats(self) -> Dict[str, Tuple[float, int]]:
"""Return stored file stats for all cached recipes.
Returns:
Dictionary mapping json_path -> (mtime, size).
"""
if not self.is_enabled() or not self._schema_initialized:
return {}
try:
with self._db_lock:
conn = self._connect(readonly=True)
try:
rows = conn.execute(
"SELECT json_path, file_mtime, file_size FROM recipes WHERE json_path IS NOT NULL"
).fetchall()
return {
row["json_path"]: (row["file_mtime"] or 0.0, row["file_size"] or 0)
for row in rows
if row["json_path"]
}
finally:
conn.close()
except Exception:
return {}
def update_recipe(self, recipe: Dict, json_path: Optional[str] = None) -> None:
"""Update or insert a single recipe in the cache.
Args:
recipe: The recipe dictionary to persist.
json_path: Optional path to the recipe JSON file.
"""
if not self.is_enabled() or not self._schema_initialized:
return
recipe_id = str(recipe.get("id", ""))
if not recipe_id:
return
try:
with self._db_lock:
conn = self._connect()
try:
row = self._prepare_recipe_row(recipe, json_path or "")
placeholders = ", ".join(["?"] * len(self._RECIPE_COLUMNS))
columns = ", ".join(self._RECIPE_COLUMNS)
conn.execute(
f"INSERT OR REPLACE INTO recipes ({columns}) VALUES ({placeholders})",
row,
)
conn.commit()
finally:
conn.close()
except Exception as exc:
logger.debug("Failed to update recipe %s in cache: %s", recipe_id, exc)
def remove_recipe(self, recipe_id: str) -> None:
"""Remove a recipe from the cache by ID.
Args:
recipe_id: The ID of the recipe to remove.
"""
if not self.is_enabled() or not self._schema_initialized:
return
if not recipe_id:
return
try:
with self._db_lock:
conn = self._connect()
try:
conn.execute("DELETE FROM recipes WHERE recipe_id = ?", (str(recipe_id),))
conn.commit()
finally:
conn.close()
except Exception as exc:
logger.debug("Failed to remove recipe %s from cache: %s", recipe_id, exc)
def get_indexed_recipe_ids(self) -> Set[str]:
"""Return all recipe IDs in the cache.
Returns:
Set of recipe ID strings.
"""
if not self.is_enabled() or not self._schema_initialized:
return set()
try:
with self._db_lock:
conn = self._connect(readonly=True)
try:
rows = conn.execute("SELECT recipe_id FROM recipes").fetchall()
return {row["recipe_id"] for row in rows if row["recipe_id"]}
finally:
conn.close()
except Exception:
return set()
def get_recipe_count(self) -> int:
"""Return the number of recipes in the cache."""
if not self.is_enabled() or not self._schema_initialized:
return 0
try:
with self._db_lock:
conn = self._connect(readonly=True)
try:
result = conn.execute("SELECT COUNT(*) FROM recipes").fetchone()
return result[0] if result else 0
finally:
conn.close()
except Exception:
return 0
# Internal helpers
def _resolve_default_path(self, library_name: str) -> str:
env_override = os.environ.get("LORA_MANAGER_RECIPE_CACHE_DB")
return resolve_cache_path_with_migration(
CacheType.RECIPE,
library_name=library_name,
env_override=env_override,
)
def _initialize_schema(self) -> None:
with self._db_lock:
if self._schema_initialized:
return
try:
with self._connect() as conn:
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript(
"""
CREATE TABLE IF NOT EXISTS recipes (
recipe_id TEXT PRIMARY KEY,
file_path TEXT,
json_path TEXT,
title TEXT,
folder TEXT,
base_model TEXT,
fingerprint TEXT,
created_date REAL,
modified REAL,
file_mtime REAL,
file_size INTEGER,
favorite INTEGER DEFAULT 0,
repair_version INTEGER DEFAULT 0,
preview_nsfw_level INTEGER DEFAULT 0,
loras_json TEXT,
checkpoint_json TEXT,
gen_params_json TEXT,
tags_json TEXT
);
CREATE INDEX IF NOT EXISTS idx_recipes_json_path ON recipes(json_path);
CREATE INDEX IF NOT EXISTS idx_recipes_fingerprint ON recipes(fingerprint);
CREATE TABLE IF NOT EXISTS cache_metadata (
key TEXT PRIMARY KEY,
value TEXT
);
"""
)
conn.commit()
self._schema_initialized = True
except Exception as exc:
logger.warning("Failed to initialize persistent recipe cache schema: %s", exc)
def _connect(self, readonly: bool = False) -> sqlite3.Connection:
uri = False
path = self._db_path
if readonly:
if not os.path.exists(path):
raise FileNotFoundError(path)
path = f"file:{path}?mode=ro"
uri = True
conn = sqlite3.connect(path, check_same_thread=False, uri=uri, detect_types=sqlite3.PARSE_DECLTYPES)
conn.row_factory = sqlite3.Row
return conn
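`_connect` relies on SQLite's URI syntax (`file:...?mode=ro` with `uri=True`) to get a genuinely read-only handle. A minimal standalone sketch of that behavior, using a throwaway database (file and table names are illustrative):

```python
import os
import sqlite3
import tempfile

# Build a throwaway database, then reopen it read-only via SQLite's URI syntax.
db_path = os.path.join(tempfile.mkdtemp(), "recipe_cache.sqlite")
conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE recipes (recipe_id TEXT PRIMARY KEY)")
conn.execute("INSERT INTO recipes VALUES ('abc123')")
conn.commit()
conn.close()

# mode=ro only takes effect with uri=True; SQLite then rejects all writes.
ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
ro.row_factory = sqlite3.Row  # allows row["recipe_id"] access, as in the cache code
row = ro.execute("SELECT recipe_id FROM recipes").fetchone()

write_rejected = False
try:
    ro.execute("INSERT INTO recipes VALUES ('def456')")
except sqlite3.OperationalError:
    write_rejected = True
ro.close()
```

Raising `FileNotFoundError` before attempting the read-only connect (as `_connect` does) matters because `mode=ro` against a missing file fails with a less specific `OperationalError`.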
def _prepare_recipe_row(self, recipe: Dict, json_path: str) -> Tuple:
"""Convert a recipe dict to a row tuple for SQLite insertion."""
loras = recipe.get("loras")
loras_json = json.dumps(loras) if loras else None
checkpoint = recipe.get("checkpoint")
checkpoint_json = json.dumps(checkpoint) if checkpoint else None
gen_params = recipe.get("gen_params")
gen_params_json = json.dumps(gen_params) if gen_params else None
tags = recipe.get("tags")
tags_json = json.dumps(tags) if tags else None
# Get file stats if json_path exists
file_mtime = 0.0
file_size = 0
if json_path and os.path.exists(json_path):
try:
stat = os.stat(json_path)
file_mtime = stat.st_mtime
file_size = stat.st_size
except OSError:
pass
return (
str(recipe.get("id", "")),
recipe.get("file_path"),
json_path,
recipe.get("title"),
recipe.get("folder"),
recipe.get("base_model"),
recipe.get("fingerprint"),
float(recipe.get("created_date") or 0.0),
float(recipe.get("modified") or 0.0),
file_mtime,
file_size,
1 if recipe.get("favorite") else 0,
int(recipe.get("repair_version") or 0),
int(recipe.get("preview_nsfw_level") or 0),
loras_json,
checkpoint_json,
gen_params_json,
tags_json,
)
def _row_to_recipe(self, row: sqlite3.Row) -> Dict:
"""Convert a SQLite row to a recipe dictionary."""
loras = []
if row["loras_json"]:
try:
loras = json.loads(row["loras_json"])
except json.JSONDecodeError:
pass
checkpoint = None
if row["checkpoint_json"]:
try:
checkpoint = json.loads(row["checkpoint_json"])
except json.JSONDecodeError:
pass
gen_params = {}
if row["gen_params_json"]:
try:
gen_params = json.loads(row["gen_params_json"])
except json.JSONDecodeError:
pass
tags = []
if row["tags_json"]:
try:
tags = json.loads(row["tags_json"])
except json.JSONDecodeError:
pass
recipe = {
"id": row["recipe_id"],
"file_path": row["file_path"] or "",
"title": row["title"] or "",
"folder": row["folder"] or "",
"base_model": row["base_model"] or "",
"fingerprint": row["fingerprint"] or "",
"created_date": row["created_date"] or 0.0,
"modified": row["modified"] or 0.0,
"favorite": bool(row["favorite"]),
"repair_version": row["repair_version"] or 0,
"preview_nsfw_level": row["preview_nsfw_level"] or 0,
"loras": loras,
"gen_params": gen_params,
}
if tags:
recipe["tags"] = tags
if checkpoint:
recipe["checkpoint"] = checkpoint
return recipe
def get_persistent_recipe_cache() -> PersistentRecipeCache:
"""Get the default persistent recipe cache instance for the active library."""
from .settings_manager import get_settings_manager
library_name = get_settings_manager().get_active_library_name()
return PersistentRecipeCache.get_default(library_name)
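The save/load pair above serializes nested recipe fields (`loras`, `gen_params`, ...) into JSON text columns and bulk-inserts with `executemany`. A self-contained sketch of that round-trip, with the table trimmed to three columns for brevity:

```python
import json
import sqlite3

COLUMNS = ("recipe_id", "title", "loras_json")

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute(
    "CREATE TABLE recipes (recipe_id TEXT PRIMARY KEY, title TEXT, loras_json TEXT)"
)

recipes = [
    {"id": "r1", "title": "Portrait mix", "loras": [{"file_name": "detail", "strength": 0.8}]},
    {"id": "r2", "title": "Scenery mix", "loras": []},
]

# Mirror _prepare_recipe_row: JSON-encode nested structures, storing NULL when empty.
rows = [
    (str(r["id"]), r.get("title"), json.dumps(r["loras"]) if r.get("loras") else None)
    for r in recipes
]
placeholders = ", ".join(["?"] * len(COLUMNS))
conn.executemany(
    f"INSERT INTO recipes ({', '.join(COLUMNS)}) VALUES ({placeholders})", rows
)
conn.commit()

# Mirror _row_to_recipe: decode JSON columns back into Python structures.
loaded = []
for row in conn.execute("SELECT recipe_id, title, loras_json FROM recipes ORDER BY recipe_id"):
    loaded.append({
        "id": row["recipe_id"],
        "title": row["title"],
        "loras": json.loads(row["loras_json"]) if row["loras_json"] else [],
    })
conn.close()
```

Storing nested data as JSON text keeps the schema flat and the insert path a single `executemany`, at the cost of not being able to query inside `loras_json` without SQLite's JSON functions.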

View File

@@ -15,7 +15,7 @@ import threading
import time
from typing import Any, Dict, List, Optional, Set
from ..utils.settings_paths import get_settings_dir
from ..utils.cache_paths import CacheType, resolve_cache_path_with_migration
logger = logging.getLogger(__name__)
@@ -67,17 +67,11 @@ class RecipeFTSIndex:
def _resolve_default_path(self) -> str:
"""Resolve the default database path."""
override = os.environ.get("LORA_MANAGER_RECIPE_FTS_DB")
if override:
return override
try:
settings_dir = get_settings_dir(create=True)
except Exception as exc:
logger.warning("Falling back to current directory for FTS index: %s", exc)
settings_dir = "."
return os.path.join(settings_dir, self._DEFAULT_FILENAME)
env_override = os.environ.get("LORA_MANAGER_RECIPE_FTS_DB")
return resolve_cache_path_with_migration(
CacheType.RECIPE_FTS,
env_override=env_override,
)
def get_database_path(self) -> str:
"""Return the resolved database path."""
@@ -403,6 +397,78 @@ class RecipeFTSIndex:
except Exception:
return 0
def get_indexed_recipe_ids(self) -> Set[str]:
"""Return all recipe IDs currently in the index.
Returns:
Set of recipe ID strings.
"""
if not self._schema_initialized:
self.initialize()
if not self._schema_initialized:
return set()
try:
with self._lock:
conn = self._connect(readonly=True)
try:
cursor = conn.execute("SELECT recipe_id FROM recipe_fts")
return {row[0] for row in cursor.fetchall() if row[0]}
finally:
conn.close()
except FileNotFoundError:
return set()
except Exception as exc:
logger.debug("Failed to get indexed recipe IDs: %s", exc)
return set()
def validate_index(self, recipe_count: int, recipe_ids: Set[str]) -> bool:
"""Check if the FTS index matches the expected recipes.
This method validates whether the existing FTS index can be reused
without a full rebuild. It checks:
1. The index has been initialized
2. The count matches
3. The recipe IDs match
Args:
recipe_count: Expected number of recipes.
recipe_ids: Expected set of recipe IDs.
Returns:
True if the index is valid and can be reused, False otherwise.
"""
if not self._schema_initialized:
self.initialize()
if not self._schema_initialized:
return False
try:
indexed_count = self.get_indexed_count()
if indexed_count != recipe_count:
logger.debug(
"FTS index count mismatch: indexed=%d, expected=%d",
indexed_count, recipe_count
)
return False
indexed_ids = self.get_indexed_recipe_ids()
if indexed_ids != recipe_ids:
missing = recipe_ids - indexed_ids
extra = indexed_ids - recipe_ids
if missing:
logger.debug("FTS index missing %d recipe IDs", len(missing))
if extra:
logger.debug("FTS index has %d extra recipe IDs", len(extra))
return False
return True
except Exception as exc:
logger.debug("FTS index validation failed: %s", exc)
return False
# Internal helpers
def _connect(self, readonly: bool = False) -> sqlite3.Connection:
@@ -509,21 +575,20 @@ class RecipeFTSIndex:
if not fields:
return term_expr
# Build field-restricted query with OR between fields
# Build field-restricted query where ALL words must match within at least one field
field_clauses = []
for field in fields:
if field in self.FIELD_MAP:
cols = self.FIELD_MAP[field]
for col in cols:
# FTS5 column filter syntax: column:term
# Need to handle multiple terms properly
for term in prefix_terms:
field_clauses.append(f'{col}:{term}')
# Create clause where ALL terms must match in this column (implicit AND)
col_terms = [f'{col}:{term}' for term in prefix_terms]
field_clauses.append('(' + ' '.join(col_terms) + ')')
if not field_clauses:
return term_expr
# Combine field clauses with OR
# Any field matching all terms is acceptable (OR between field clauses)
return ' OR '.join(field_clauses)
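The corrected grouping (all terms AND-ed within one column, OR across columns) can be exercised directly against an FTS5 table. A small sketch, assuming the local sqlite3 build ships the FTS5 extension (table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE recipe_fts USING fts5(title, tags)")
conn.executemany(
    "INSERT INTO recipe_fts (title, tags) VALUES (?, ?)",
    [
        ("epic fantasy portrait", "painterly"),      # both terms in title
        ("fantasy landscape", "epic fantasy style"), # both terms in tags only
        ("cute cat", "photo"),                       # no match
    ],
)

terms = ["epic*", "fantasy*"]
# Every term must match within a single column (implicit AND inside the
# parentheses); any one column satisfying all terms is enough (OR between).
clauses = []
for col in ("title", "tags"):
    clauses.append("(" + " ".join(f"{col}:{t}" for t in terms) + ")")
query = " OR ".join(clauses)
# -> (title:epic* title:fantasy*) OR (tags:epic* tags:fantasy*)

rows = conn.execute(
    "SELECT title FROM recipe_fts WHERE recipe_fts MATCH ?", (query,)
).fetchall()
titles = sorted(r[0] for r in rows)
conn.close()
```

The second row matches because both prefixes occur in `tags`; under the old per-term OR it would also have matched with only one of the terms present, which is what the fix removes.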
def _escape_fts_query(self, text: str) -> str:

View File

@@ -9,6 +9,7 @@ from typing import Any, Callable, Dict, Iterable, List, Optional, Set, Tuple
from ..config import config
from .recipe_cache import RecipeCache
from .recipe_fts_index import RecipeFTSIndex
from .persistent_recipe_cache import PersistentRecipeCache, get_persistent_recipe_cache
from .service_registry import ServiceRegistry
from .lora_scanner import LoraScanner
from .metadata_service import get_default_metadata_provider
@@ -78,6 +79,9 @@ class RecipeScanner:
# FTS index for fast search
self._fts_index: Optional[RecipeFTSIndex] = None
self._fts_index_task: Optional[asyncio.Task] = None
# Persistent cache for fast startup
self._persistent_cache: Optional[PersistentRecipeCache] = None
self._json_path_map: Dict[str, str] = {} # recipe_id -> json_path
if lora_scanner:
self._lora_scanner = lora_scanner
if checkpoint_scanner:
@@ -109,6 +113,11 @@ class RecipeScanner:
self._fts_index.clear()
self._fts_index = None
# Reset persistent cache instance for new library
self._persistent_cache = None
self._json_path_map = {}
PersistentRecipeCache.clear_instances()
self._cache = None
self._initialization_task = None
self._is_initializing = False
@@ -321,12 +330,17 @@ class RecipeScanner:
with open(recipe_json_path, 'w', encoding='utf-8') as f:
json.dump(recipe, f, indent=4, ensure_ascii=False)
# 4. Update EXIF if image exists
# 4. Update persistent SQLite cache
if self._persistent_cache:
self._persistent_cache.update_recipe(recipe, recipe_json_path)
self._json_path_map[str(recipe_id)] = recipe_json_path
# 5. Update EXIF if image exists
image_path = recipe.get('file_path')
if image_path and os.path.exists(image_path):
from ..utils.exif_utils import ExifUtils
ExifUtils.append_recipe_metadata(image_path, recipe)
return True
except Exception as e:
logger.error(f"Error persisting recipe {recipe_id}: {e}")
@@ -408,117 +422,268 @@ class RecipeScanner:
logger.error(f"Recipe Scanner: Error initializing cache in background: {e}")
def _initialize_recipe_cache_sync(self):
"""Synchronous version of recipe cache initialization for thread pool execution"""
"""Synchronous version of recipe cache initialization for thread pool execution.
Uses persistent cache for fast startup when available:
1. Try to load from persistent SQLite cache
2. Reconcile with filesystem (check mtime/size for changes)
3. Fall back to full directory scan if cache miss or reconciliation fails
4. Persist results for next startup
"""
try:
# Create a new event loop for this thread
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
# Create a synchronous method to bypass the async lock
def sync_initialize_cache():
# We need to implement scan_all_recipes logic synchronously here
# instead of calling the async method to avoid event loop issues
recipes = []
recipes_dir = self.recipes_dir
if not recipes_dir or not os.path.exists(recipes_dir):
logger.warning(f"Recipes directory not found: {recipes_dir}")
return recipes
# Get all recipe JSON files in the recipes directory
recipe_files = []
for root, _, files in os.walk(recipes_dir):
recipe_count = sum(1 for f in files if f.lower().endswith('.recipe.json'))
if recipe_count > 0:
for file in files:
if file.lower().endswith('.recipe.json'):
recipe_files.append(os.path.join(root, file))
# Process each recipe file
for recipe_path in recipe_files:
try:
with open(recipe_path, 'r', encoding='utf-8') as f:
recipe_data = json.load(f)
# Validate recipe data
if not recipe_data or not isinstance(recipe_data, dict):
logger.warning(f"Invalid recipe data in {recipe_path}")
continue
# Ensure required fields exist
required_fields = ['id', 'file_path', 'title']
if not all(field in recipe_data for field in required_fields):
logger.warning(f"Missing required fields in {recipe_path}")
continue
# Ensure the image file exists and prioritize local siblings
image_path = recipe_data.get('file_path')
if image_path:
recipe_dir = os.path.dirname(recipe_path)
image_filename = os.path.basename(image_path)
local_sibling_path = os.path.normpath(os.path.join(recipe_dir, image_filename))
# If local sibling exists and stored path is different, prefer local
if os.path.exists(local_sibling_path) and os.path.normpath(image_path) != local_sibling_path:
recipe_data['file_path'] = local_sibling_path
# Persist the repair
try:
with open(recipe_path, 'w', encoding='utf-8') as f:
json.dump(recipe_data, f, indent=4, ensure_ascii=False)
logger.info(f"Updated recipe image path to local sibling: {local_sibling_path}")
except Exception as e:
logger.warning(f"Failed to persist repair for {recipe_path}: {e}")
elif not os.path.exists(image_path):
logger.warning(f"Recipe image not found and no local sibling: {image_path}")
# Ensure loras array exists
if 'loras' not in recipe_data:
recipe_data['loras'] = []
# Ensure gen_params exists
if 'gen_params' not in recipe_data:
recipe_data['gen_params'] = {}
# Add to list without async operations
recipes.append(recipe_data)
except Exception as e:
logger.error(f"Error loading recipe file {recipe_path}: {e}")
import traceback
traceback.print_exc(file=sys.stderr)
# Update cache with the collected data
self._cache.raw_data = recipes
self._update_folder_metadata(self._cache)
# Create a simplified resort function that doesn't use await
if hasattr(self._cache, "resort"):
try:
# Sort by name
self._cache.sorted_by_name = natsorted(
self._cache.raw_data,
key=lambda x: x.get('title', '').lower()
)
# Sort by date (modified or created)
self._cache.sorted_by_date = sorted(
self._cache.raw_data,
key=lambda x: x.get('modified', x.get('created_date', 0)),
reverse=True
)
except Exception as e:
logger.error(f"Error sorting recipe cache: {e}")
# Initialize persistent cache
if self._persistent_cache is None:
self._persistent_cache = get_persistent_recipe_cache()
recipes_dir = self.recipes_dir
if not recipes_dir or not os.path.exists(recipes_dir):
logger.warning(f"Recipes directory not found: {recipes_dir}")
return self._cache
# Run our sync initialization that avoids lock conflicts
return sync_initialize_cache()
# Try to load from persistent cache first
persisted = self._persistent_cache.load_cache()
if persisted:
recipes, changed, json_paths = self._reconcile_recipe_cache(persisted, recipes_dir)
self._json_path_map = json_paths
if not changed:
# Fast path: use cached data directly
logger.info("Recipe cache hit: loaded %d recipes from persistent cache", len(recipes))
self._cache.raw_data = recipes
self._update_folder_metadata(self._cache)
self._sort_cache_sync()
return self._cache
else:
# Partial update: some files changed
logger.info("Recipe cache partial hit: reconciled %d recipes with filesystem", len(recipes))
self._cache.raw_data = recipes
self._update_folder_metadata(self._cache)
self._sort_cache_sync()
# Persist updated cache
self._persistent_cache.save_cache(recipes, json_paths)
return self._cache
# Fall back to full directory scan
logger.info("Recipe cache miss: performing full directory scan")
recipes, json_paths = self._full_directory_scan_sync(recipes_dir)
self._json_path_map = json_paths
# Update cache with the collected data
self._cache.raw_data = recipes
self._update_folder_metadata(self._cache)
self._sort_cache_sync()
# Persist for next startup
self._persistent_cache.save_cache(recipes, json_paths)
return self._cache
except Exception as e:
logger.error(f"Error in thread-based recipe cache initialization: {e}")
import traceback
traceback.print_exc(file=sys.stderr)
return self._cache if hasattr(self, '_cache') else None
finally:
# Clean up the event loop
loop.close()
def _reconcile_recipe_cache(
self,
persisted: "PersistedRecipeData",
recipes_dir: str,
) -> Tuple[List[Dict], bool, Dict[str, str]]:
"""Reconcile persisted cache with current filesystem state.
Args:
persisted: The persisted recipe data from SQLite cache.
recipes_dir: Path to the recipes directory.
Returns:
Tuple of (recipes list, changed flag, json_paths dict).
"""
recipes: List[Dict] = []
json_paths: Dict[str, str] = {}
changed = False
# Build set of current recipe files
current_files: Dict[str, Tuple[float, int]] = {}
for root, _, files in os.walk(recipes_dir):
for file in files:
if file.lower().endswith('.recipe.json'):
file_path = os.path.join(root, file)
try:
stat = os.stat(file_path)
current_files[file_path] = (stat.st_mtime, stat.st_size)
except OSError:
continue
# Build lookup of persisted recipes by json_path
persisted_by_path: Dict[str, Dict] = {}
for recipe in persisted.raw_data:
recipe_id = str(recipe.get('id', ''))
if recipe_id:
# Find the json_path from file_stats
for json_path, (mtime, size) in persisted.file_stats.items():
if os.path.basename(json_path).startswith(recipe_id):
persisted_by_path[json_path] = recipe
break
# Process current files
for file_path, (current_mtime, current_size) in current_files.items():
cached_stats = persisted.file_stats.get(file_path)
if cached_stats:
cached_mtime, cached_size = cached_stats
# Check if file is unchanged
if abs(current_mtime - cached_mtime) < 1.0 and current_size == cached_size:
# Use cached data
cached_recipe = persisted_by_path.get(file_path)
if cached_recipe:
recipe_id = str(cached_recipe.get('id', ''))
# Track folder from file path
cached_recipe['folder'] = cached_recipe.get('folder') or self._calculate_folder(file_path)
recipes.append(cached_recipe)
json_paths[recipe_id] = file_path
continue
# File is new or changed - need to re-read
changed = True
recipe_data = self._load_recipe_file_sync(file_path)
if recipe_data:
recipe_id = str(recipe_data.get('id', ''))
recipes.append(recipe_data)
json_paths[recipe_id] = file_path
# Check for deleted files
for json_path in persisted.file_stats.keys():
if json_path not in current_files:
changed = True
logger.debug("Recipe file deleted: %s", json_path)
return recipes, changed, json_paths
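The stat comparison at the heart of `_reconcile_recipe_cache` treats a file as unchanged when its mtime is within a 1-second tolerance and its size matches exactly. A standalone sketch of that check against a real temp file:

```python
import os
import tempfile

def is_unchanged(path, cached_mtime, cached_size, tolerance=1.0):
    """Return True when the file's stat still matches the cached snapshot."""
    try:
        stat = os.stat(path)
    except OSError:
        return False  # deleted or unreadable -> treat as changed
    return abs(stat.st_mtime - cached_mtime) < tolerance and stat.st_size == cached_size

# Snapshot a temp recipe file, then append a byte to invalidate the snapshot.
with tempfile.NamedTemporaryFile("w", suffix=".recipe.json", delete=False) as f:
    f.write('{"id": "r1"}')
    path = f.name

stat = os.stat(path)
hit_before = is_unchanged(path, stat.st_mtime, stat.st_size)

with open(path, "a") as f:
    f.write("\n")  # size grows by one byte, so the cached snapshot no longer matches
hit_after = is_unchanged(path, stat.st_mtime, stat.st_size)
os.remove(path)
```

The mtime tolerance guards against filesystems that store coarse timestamps; the exact size check catches same-second rewrites the tolerance would otherwise let through.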
def _full_directory_scan_sync(self, recipes_dir: str) -> Tuple[List[Dict], Dict[str, str]]:
"""Perform a full synchronous directory scan for recipes.
Args:
recipes_dir: Path to the recipes directory.
Returns:
Tuple of (recipes list, json_paths dict).
"""
recipes: List[Dict] = []
json_paths: Dict[str, str] = {}
# Get all recipe JSON files
recipe_files = []
for root, _, files in os.walk(recipes_dir):
for file in files:
if file.lower().endswith('.recipe.json'):
recipe_files.append(os.path.join(root, file))
# Process each recipe file
for recipe_path in recipe_files:
recipe_data = self._load_recipe_file_sync(recipe_path)
if recipe_data:
recipe_id = str(recipe_data.get('id', ''))
recipes.append(recipe_data)
json_paths[recipe_id] = recipe_path
return recipes, json_paths
def _load_recipe_file_sync(self, recipe_path: str) -> Optional[Dict]:
"""Load a single recipe file synchronously.
Args:
recipe_path: Path to the recipe JSON file.
Returns:
Recipe dictionary if valid, None otherwise.
"""
try:
with open(recipe_path, 'r', encoding='utf-8') as f:
recipe_data = json.load(f)
# Validate recipe data
if not recipe_data or not isinstance(recipe_data, dict):
logger.warning(f"Invalid recipe data in {recipe_path}")
return None
# Ensure required fields exist
required_fields = ['id', 'file_path', 'title']
if not all(field in recipe_data for field in required_fields):
logger.warning(f"Missing required fields in {recipe_path}")
return None
# Ensure the image file exists and prioritize local siblings
image_path = recipe_data.get('file_path')
path_updated = False
if image_path:
recipe_dir = os.path.dirname(recipe_path)
image_filename = os.path.basename(image_path)
local_sibling_path = os.path.normpath(os.path.join(recipe_dir, image_filename))
# If local sibling exists and stored path is different, prefer local
if os.path.exists(local_sibling_path) and os.path.normpath(image_path) != local_sibling_path:
recipe_data['file_path'] = local_sibling_path
path_updated = True
logger.info(f"Updated recipe image path to local sibling: {local_sibling_path}")
elif not os.path.exists(image_path):
logger.warning(f"Recipe image not found and no local sibling: {image_path}")
if path_updated:
try:
with open(recipe_path, 'w', encoding='utf-8') as f:
json.dump(recipe_data, f, indent=4, ensure_ascii=False)
except Exception as e:
logger.warning(f"Failed to persist repair for {recipe_path}: {e}")
# Track folder placement relative to recipes directory
recipe_data['folder'] = recipe_data.get('folder') or self._calculate_folder(recipe_path)
# Ensure loras array exists
if 'loras' not in recipe_data:
recipe_data['loras'] = []
# Ensure gen_params exists
if 'gen_params' not in recipe_data:
recipe_data['gen_params'] = {}
return recipe_data
except Exception as e:
logger.error(f"Error loading recipe file {recipe_path}: {e}")
import traceback
traceback.print_exc(file=sys.stderr)
return None
def _sort_cache_sync(self) -> None:
"""Sort cache data synchronously."""
try:
# Sort by name
self._cache.sorted_by_name = natsorted(
self._cache.raw_data,
key=lambda x: x.get('title', '').lower()
)
# Sort by date (modified or created)
self._cache.sorted_by_date = sorted(
self._cache.raw_data,
key=lambda x: (x.get('modified', x.get('created_date', 0)), x.get('file_path', '')),
reverse=True
)
except Exception as e:
logger.error(f"Error sorting recipe cache: {e}")
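`_sort_cache_sync` breaks ties on equal `modified` timestamps with `file_path`, so the date ordering is deterministic across runs. A small sketch of that compound key (plain `sorted` stands in for `natsorted` to stay stdlib-only):

```python
recipes = [
    {"title": "B", "modified": 100.0, "file_path": "/recipes/b.recipe.json"},
    {"title": "a", "modified": 100.0, "file_path": "/recipes/a.recipe.json"},
    {"title": "C", "modified": 50.0, "file_path": "/recipes/c.recipe.json"},
]

# Case-insensitive title sort, matching the name ordering above.
by_name = sorted(recipes, key=lambda x: x.get("title", "").lower())

# Date sort with file_path as tie-breaker, newest first.
by_date = sorted(
    recipes,
    key=lambda x: (x.get("modified", x.get("created_date", 0)), x.get("file_path", "")),
    reverse=True,
)

names = [r["title"] for r in by_name]
dates = [r["title"] for r in by_date]
```

Without the tie-breaker, two recipes sharing a timestamp could swap positions between startups, which makes cached pagination unstable.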
async def _wait_for_lora_scanner(self) -> None:
"""Ensure the LoRA scanner has initialized before recipe enrichment."""
@@ -570,7 +735,10 @@ class RecipeScanner:
self._post_scan_task = loop.create_task(_run_enrichment(), name="recipe_cache_enrichment")
def _schedule_fts_index_build(self) -> None:
"""Build FTS index in background without blocking."""
"""Build FTS index in background without blocking.
Validates existing index first and reuses it if valid.
"""
if self._fts_index_task and not self._fts_index_task.done():
return # Already running
@@ -587,7 +755,25 @@ class RecipeScanner:
try:
self._fts_index = RecipeFTSIndex()
# Run in thread pool (SQLite is blocking)
# Check if existing index is valid
recipe_ids = {str(r.get('id', '')) for r in self._cache.raw_data if r.get('id')}
recipe_count = len(self._cache.raw_data)
# Run validation in thread pool
is_valid = await loop.run_in_executor(
None,
self._fts_index.validate_index,
recipe_count,
recipe_ids
)
if is_valid:
logger.info("FTS index validated, reusing existing index with %d recipes", recipe_count)
self._fts_index._ready.set()
return
# Only rebuild if validation fails
logger.info("FTS index invalid or outdated, rebuilding...")
await loop.run_in_executor(
None,
self._fts_index.build_index,
@@ -632,7 +818,12 @@ class RecipeScanner:
fields = None
try:
return self._fts_index.search(search, fields)
result = self._fts_index.search(search, fields)
# Empty FTS results may stem from query syntax quirks; return None to trigger the fuzzy fallback
if not result:
return None
return result
except Exception as exc:
logger.debug("FTS search failed, falling back to fuzzy search: %s", exc)
return None
@@ -870,6 +1061,12 @@ class RecipeScanner:
# Update FTS index
self._update_fts_index_for_recipe(recipe_data, 'add')
# Persist to SQLite cache
if self._persistent_cache:
recipe_id = str(recipe_data.get('id', ''))
json_path = self._json_path_map.get(recipe_id, '')
self._persistent_cache.update_recipe(recipe_data, json_path)
async def remove_recipe(self, recipe_id: str) -> bool:
"""Remove a recipe from the cache by ID."""
@@ -886,6 +1083,12 @@ class RecipeScanner:
# Update FTS index
self._update_fts_index_for_recipe(recipe_id, 'remove')
# Remove from SQLite cache
if self._persistent_cache:
self._persistent_cache.remove_recipe(recipe_id)
self._json_path_map.pop(recipe_id, None)
return True
async def bulk_remove(self, recipe_ids: Iterable[str]) -> int:
@@ -895,9 +1098,13 @@ class RecipeScanner:
removed = await cache.bulk_remove(recipe_ids, resort=False)
if removed:
self._schedule_resort()
# Update FTS index for each removed recipe
for recipe_id in (str(r.get('id', '')) for r in removed):
# Update FTS index and persistent cache for each removed recipe
for recipe in removed:
recipe_id = str(recipe.get('id', ''))
self._update_fts_index_for_recipe(recipe_id, 'remove')
if self._persistent_cache:
self._persistent_cache.remove_recipe(recipe_id)
self._json_path_map.pop(recipe_id, None)
return len(removed)
async def scan_all_recipes(self) -> List[Dict]:
@@ -1690,11 +1897,11 @@ class RecipeScanner:
async def update_recipe_metadata(self, recipe_id: str, metadata: dict) -> bool:
"""Update recipe metadata (like title and tags) in both file system and cache
Args:
recipe_id: The ID of the recipe to update
metadata: Dictionary containing metadata fields to update (title, tags, etc.)
Returns:
bool: True if successful, False otherwise
"""
@@ -1702,16 +1909,16 @@ class RecipeScanner:
recipe_json_path = await self.get_recipe_json_path(recipe_id)
if not recipe_json_path or not os.path.exists(recipe_json_path):
return False
try:
# Load existing recipe data
with open(recipe_json_path, 'r', encoding='utf-8') as f:
recipe_data = json.load(f)
# Update fields
for key, value in metadata.items():
recipe_data[key] = value
# Save updated recipe
with open(recipe_json_path, 'w', encoding='utf-8') as f:
json.dump(recipe_data, f, indent=4, ensure_ascii=False)
@@ -1724,6 +1931,11 @@ class RecipeScanner:
# Update FTS index
self._update_fts_index_for_recipe(recipe_data, 'update')
# Update persistent SQLite cache
if self._persistent_cache:
self._persistent_cache.update_recipe(recipe_data, recipe_json_path)
self._json_path_map[recipe_id] = recipe_json_path
# If the recipe has an image, update its EXIF metadata
from ..utils.exif_utils import ExifUtils
image_path = recipe_data.get('file_path')
@@ -1795,6 +2007,11 @@ class RecipeScanner:
# Update FTS index
self._update_fts_index_for_recipe(recipe_data, 'update')
# Update persistent SQLite cache
if self._persistent_cache:
self._persistent_cache.update_recipe(recipe_data, recipe_json_path)
self._json_path_map[recipe_id] = recipe_json_path
updated_lora = dict(lora_entry)
if target_lora is not None:
preview_url = target_lora.get('preview_url')
@@ -1918,26 +2135,31 @@ class RecipeScanner:
if not recipes_to_update:
return 0, 0
# Persist changes to disk
# Persist changes to disk and SQLite cache
async with self._mutation_lock:
for recipe in recipes_to_update:
recipe_id = recipe.get('id')
recipe_id = str(recipe.get('id', ''))
if not recipe_id:
continue
recipe_path = os.path.join(self.recipes_dir, f"{recipe_id}.recipe.json")
try:
self._write_recipe_file(recipe_path, recipe)
file_updated_count += 1
logger.info(f"Updated file_name in recipe {recipe_path}: -> {new_file_name}")
# Update persistent SQLite cache
if self._persistent_cache:
self._persistent_cache.update_recipe(recipe, recipe_path)
self._json_path_map[recipe_id] = recipe_path
except Exception as e:
logger.error(f"Error updating recipe file {recipe_path}: {e}")
# A resort isn't strictly required, since LoRA file_name isn't a sort key,
# but renaming a dependency can still change search results when searching
# by LoRA name, so schedule a resort to keep derived state consistent.
self._schedule_resort()
return file_updated_count, cache_updated_count
async def find_recipes_by_fingerprint(self, fingerprint: str) -> list:

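The recipe hunks above repeat one write-then-refresh pattern: merge new fields into the recipe dict, rewrite the `.recipe.json` file, then push the same dict into the FTS index and the persistent SQLite cache. A standalone sketch of that flow, with hypothetical names (`update_recipe_metadata`, the `cache_hooks` callbacks) standing in for the scanner's real collaborators:

```python
import json
import os
import tempfile


def update_recipe_metadata(recipe_json_path, metadata, cache_hooks=()):
    """Merge metadata into a recipe JSON file, then notify cache layers."""
    if not os.path.exists(recipe_json_path):
        return False
    with open(recipe_json_path, "r", encoding="utf-8") as f:
        recipe_data = json.load(f)
    recipe_data.update(metadata)  # update fields in place
    with open(recipe_json_path, "w", encoding="utf-8") as f:
        json.dump(recipe_data, f, indent=4, ensure_ascii=False)
    # Mirror the change into every registered cache (FTS index, SQLite, ...)
    for hook in cache_hooks:
        hook(recipe_data, recipe_json_path)
    return True


# Demo against a throwaway file
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "abc.recipe.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"id": "abc", "title": "old"}, f)
    seen = []
    update_recipe_metadata(path, {"title": "new"},
                           [lambda r, p: seen.append(r["title"])])
    with open(path, encoding="utf-8") as f:
        print(json.load(f)["title"], seen)  # -> new ['new']
```

Keeping the on-disk JSON as the source of truth and treating the caches as derived state is what lets each hunk simply re-send the full dict to `update_recipe` after every write.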
View File

@@ -233,23 +233,44 @@ class ServiceRegistry:
async def get_embedding_scanner(cls):
"""Get or create Embedding scanner instance"""
service_name = "embedding_scanner"
if service_name in cls._services:
return cls._services[service_name]
async with cls._get_lock(service_name):
# Double-check after acquiring lock
if service_name in cls._services:
return cls._services[service_name]
# Import here to avoid circular imports
from .embedding_scanner import EmbeddingScanner
scanner = await EmbeddingScanner.get_instance()
cls._services[service_name] = scanner
logger.debug(f"Created and registered {service_name}")
return scanner
@classmethod
async def get_misc_scanner(cls):
"""Get or create Misc scanner instance (VAE, Upscaler)"""
service_name = "misc_scanner"
if service_name in cls._services:
return cls._services[service_name]
async with cls._get_lock(service_name):
# Double-check after acquiring lock
if service_name in cls._services:
return cls._services[service_name]
# Import here to avoid circular imports
from .misc_scanner import MiscScanner
scanner = await MiscScanner.get_instance()
cls._services[service_name] = scanner
logger.debug(f"Created and registered {service_name}")
return scanner
@classmethod
def clear_services(cls):
"""Clear all registered services - mainly for testing"""

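Both scanner getters above use the same async double-checked locking shape: a lock-free fast path, then a re-check under a per-service lock before constructing the instance. A minimal self-contained sketch of that pattern (the `Registry`/`factory` names are illustrative, not the project's `ServiceRegistry`):

```python
import asyncio


class Registry:
    _services = {}
    _locks = {}

    @classmethod
    def _get_lock(cls, name):
        # One asyncio.Lock per service name, created on first use
        if name not in cls._locks:
            cls._locks[name] = asyncio.Lock()
        return cls._locks[name]

    @classmethod
    async def get_service(cls, name, factory):
        # Fast path: skip locking entirely once the service exists
        if name in cls._services:
            return cls._services[name]
        async with cls._get_lock(name):
            # Double-check: another task may have built it while we waited
            if name in cls._services:
                return cls._services[name]
            cls._services[name] = await factory()
            return cls._services[name]


async def demo():
    calls = []

    async def factory():
        await asyncio.sleep(0)  # yield, so concurrency is actually exercised
        calls.append(1)
        return object()

    a, b = await asyncio.gather(
        Registry.get_service("scanner", factory),
        Registry.get_service("scanner", factory),
    )
    return a is b, len(calls)


result = asyncio.run(demo())
print(result)  # -> (True, 1)
```

The double-check matters because the fast path is unlocked: two tasks can both miss the cache, and without the re-check the second one through the lock would build a duplicate scanner.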
View File

@@ -35,6 +35,8 @@ DEFAULT_SETTINGS: Dict[str, Any] = {
"hash_chunk_size_mb": DEFAULT_HASH_CHUNK_SIZE_MB,
"language": "en",
"show_only_sfw": False,
"onboarding_completed": False,
"dismissed_banners": [],
"enable_metadata_archive_db": False,
"proxy_enabled": False,
"proxy_host": "",

View File

@@ -0,0 +1,680 @@
"""SQLite FTS5-based full-text search index for tags.
This module provides fast tag search using SQLite's FTS5 extension,
enabling sub-100ms search times for 221k+ Danbooru/e621 tags.
Supports alias search: when a user searches for an alias (e.g., "miku"),
the system returns the canonical tag (e.g., "hatsune_miku") and indicates
which alias was matched.
"""
from __future__ import annotations
import csv
import logging
import os
import re
import sqlite3
import threading
import time
from pathlib import Path
from typing import Dict, List, Optional, Set
from ..utils.cache_paths import CacheType, resolve_cache_path_with_migration
logger = logging.getLogger(__name__)
# Schema version for tracking migrations
SCHEMA_VERSION = 2 # Version 2: Added aliases support
# Category definitions for Danbooru and e621
CATEGORY_NAMES = {
# Danbooru categories
0: "general",
1: "artist",
3: "copyright",
4: "character",
5: "meta",
# e621 categories
7: "general",
8: "artist",
10: "copyright",
11: "character",
12: "species",
14: "meta",
15: "lore",
}
# Map category names to their IDs (for filtering)
CATEGORY_NAME_TO_IDS = {
"general": [0, 7],
"artist": [1, 8],
"copyright": [3, 10],
"character": [4, 11],
"meta": [5, 14],
"species": [12],
"lore": [15],
}
class TagFTSIndex:
"""SQLite FTS5-based full-text search index for tags.
Provides fast prefix-based search across the Danbooru/e621 tag database.
Supports category-based filtering and returns enriched results with
post counts and category information.
"""
_DEFAULT_FILENAME = "tag_fts.sqlite"
_CSV_FILENAME = "danbooru_e621_merged.csv"
def __init__(self, db_path: Optional[str] = None, csv_path: Optional[str] = None) -> None:
"""Initialize the FTS index.
Args:
db_path: Optional path to the SQLite database file.
If not provided, uses the default location in settings directory.
csv_path: Optional path to the CSV file containing tag data.
If not provided, looks in the refs/ directory.
"""
self._db_path = db_path or self._resolve_default_db_path()
self._csv_path = csv_path or self._resolve_default_csv_path()
self._lock = threading.Lock()
self._ready = threading.Event()
self._indexing_in_progress = False
self._schema_initialized = False
self._warned_not_ready = False
# Ensure the database directory exists; compute the path outside the
# try block so the except handler can always reference it
directory = os.path.dirname(self._db_path)
try:
if directory:
os.makedirs(directory, exist_ok=True)
except Exception as exc:
logger.warning("Could not create FTS index directory %s: %s", directory, exc)
def _resolve_default_db_path(self) -> str:
"""Resolve the default database path."""
env_override = os.environ.get("LORA_MANAGER_TAG_FTS_DB")
return resolve_cache_path_with_migration(
CacheType.TAG_FTS,
env_override=env_override,
)
def _resolve_default_csv_path(self) -> str:
"""Resolve the default CSV file path."""
# Look for the CSV in the refs/ directory relative to the package
package_dir = Path(__file__).parent.parent.parent
csv_path = package_dir / "refs" / self._CSV_FILENAME
return str(csv_path)
def get_database_path(self) -> str:
"""Return the resolved database path."""
return self._db_path
def get_csv_path(self) -> str:
"""Return the resolved CSV path."""
return self._csv_path
def is_ready(self) -> bool:
"""Check if the FTS index is ready for queries."""
return self._ready.is_set()
def is_indexing(self) -> bool:
"""Check if indexing is currently in progress."""
return self._indexing_in_progress
def initialize(self) -> None:
"""Initialize the database schema."""
if self._schema_initialized:
return
with self._lock:
if self._schema_initialized:
return
try:
conn = self._connect()
try:
conn.execute("PRAGMA journal_mode=WAL")
# Check if we need to migrate from old schema
needs_rebuild = self._check_and_migrate_schema(conn)
conn.executescript("""
-- FTS5 virtual table for full-text search
-- searchable_text contains "tag_name alias1 alias2 ..." for alias matching
CREATE VIRTUAL TABLE IF NOT EXISTS tag_fts USING fts5(
searchable_text,
tokenize='unicode61 remove_diacritics 2'
);
-- Tags table with metadata and aliases
CREATE TABLE IF NOT EXISTS tags (
rowid INTEGER PRIMARY KEY,
tag_name TEXT UNIQUE NOT NULL,
category INTEGER NOT NULL DEFAULT 0,
post_count INTEGER NOT NULL DEFAULT 0,
aliases TEXT DEFAULT ''
);
-- Indexes for efficient filtering
CREATE INDEX IF NOT EXISTS idx_tags_category ON tags(category);
CREATE INDEX IF NOT EXISTS idx_tags_post_count ON tags(post_count DESC);
-- Index version tracking
CREATE TABLE IF NOT EXISTS fts_metadata (
key TEXT PRIMARY KEY,
value TEXT
);
""")
# Set schema version
conn.execute(
"INSERT OR REPLACE INTO fts_metadata (key, value) VALUES (?, ?)",
("schema_version", str(SCHEMA_VERSION))
)
conn.commit()
self._schema_initialized = True
self._needs_rebuild = needs_rebuild
logger.debug("Tag FTS index schema initialized at %s", self._db_path)
finally:
conn.close()
except Exception as exc:
logger.error("Failed to initialize tag FTS schema: %s", exc)
def _check_and_migrate_schema(self, conn: sqlite3.Connection) -> bool:
"""Check schema version and migrate if necessary.
Returns:
True if the index needs to be rebuilt, False otherwise.
"""
try:
# Check if fts_metadata table exists
cursor = conn.execute(
"SELECT name FROM sqlite_master WHERE type='table' AND name='fts_metadata'"
)
if not cursor.fetchone():
return False # Fresh database, no migration needed
# Check schema version
cursor = conn.execute(
"SELECT value FROM fts_metadata WHERE key='schema_version'"
)
row = cursor.fetchone()
if not row:
# Old schema without version, needs rebuild
logger.info("Migrating tag FTS index to schema version %d (adding alias support)", SCHEMA_VERSION)
self._drop_old_tables(conn)
return True
current_version = int(row[0])
if current_version < SCHEMA_VERSION:
logger.info("Migrating tag FTS index from version %d to %d", current_version, SCHEMA_VERSION)
self._drop_old_tables(conn)
return True
return False
except Exception as exc:
logger.warning("Error checking schema version: %s", exc)
return False
def _drop_old_tables(self, conn: sqlite3.Connection) -> None:
"""Drop old tables for schema migration."""
try:
conn.executescript("""
DROP TABLE IF EXISTS tag_fts;
DROP TABLE IF EXISTS tags;
""")
conn.commit()
except Exception as exc:
logger.warning("Error dropping old tables: %s", exc)
def build_index(self) -> None:
"""Build the FTS index from the CSV file.
This method parses the danbooru_e621_merged.csv file and creates
the FTS index for fast searching. The CSV format is:
tag_name,category,post_count,aliases
Where aliases is a comma-separated string (e.g., "miku,vocaloid_miku,39").
"""
if self._indexing_in_progress:
logger.warning("Tag FTS indexing already in progress, skipping")
return
if not os.path.exists(self._csv_path):
logger.warning("CSV file not found at %s, cannot build tag index", self._csv_path)
return
self._indexing_in_progress = True
self._ready.clear()
start_time = time.time()
try:
self.initialize()
if not self._schema_initialized:
logger.error("Cannot build tag FTS index: schema not initialized")
return
with self._lock:
conn = self._connect()
try:
conn.execute("BEGIN")
# Clear existing data
conn.execute("DELETE FROM tag_fts")
conn.execute("DELETE FROM tags")
# Parse CSV and insert in batches
batch_size = 500
rows = []
total_inserted = 0
tags_with_aliases = 0
with open(self._csv_path, "r", encoding="utf-8") as f:
reader = csv.reader(f)
for row in reader:
if len(row) < 3:
continue
tag_name = row[0].strip()
if not tag_name:
continue
try:
category = int(row[1])
except (ValueError, IndexError):
category = 0
try:
post_count = int(row[2])
except (ValueError, IndexError):
post_count = 0
# Parse aliases from column 4 (if present)
aliases = row[3].strip() if len(row) >= 4 else ""
if aliases:
tags_with_aliases += 1
rows.append((tag_name, category, post_count, aliases))
if len(rows) >= batch_size:
self._insert_batch(conn, rows)
total_inserted += len(rows)
rows = []
# Insert remaining rows
if rows:
self._insert_batch(conn, rows)
total_inserted += len(rows)
# Update metadata
conn.execute(
"INSERT OR REPLACE INTO fts_metadata (key, value) VALUES (?, ?)",
("last_build_time", str(time.time()))
)
conn.execute(
"INSERT OR REPLACE INTO fts_metadata (key, value) VALUES (?, ?)",
("tag_count", str(total_inserted))
)
conn.execute(
"INSERT OR REPLACE INTO fts_metadata (key, value) VALUES (?, ?)",
("schema_version", str(SCHEMA_VERSION))
)
conn.commit()
elapsed = time.time() - start_time
logger.info(
"Tag FTS index built: %d tags indexed (%d with aliases) in %.2fs",
total_inserted, tags_with_aliases, elapsed
)
finally:
conn.close()
self._ready.set()
except Exception as exc:
logger.error("Failed to build tag FTS index: %s", exc, exc_info=True)
finally:
self._indexing_in_progress = False
def _insert_batch(self, conn: sqlite3.Connection, rows: List[tuple]) -> None:
"""Insert a batch of rows into the database.
Each row is a tuple of (tag_name, category, post_count, aliases).
The FTS searchable_text is built as "tag_name alias1 alias2 ..." for alias matching.
"""
# Insert into tags table (with aliases)
conn.executemany(
"INSERT OR IGNORE INTO tags (tag_name, category, post_count, aliases) VALUES (?, ?, ?, ?)",
rows
)
# Build a map of tag_name -> aliases for FTS insertion
aliases_map = {row[0]: row[3] for row in rows}
# Get rowids and insert into FTS table with explicit rowid
# to ensure tags.rowid matches tag_fts.rowid for JOINs
tag_names = [row[0] for row in rows]
placeholders = ",".join("?" * len(tag_names))
cursor = conn.execute(
f"SELECT rowid, tag_name FROM tags WHERE tag_name IN ({placeholders})",
tag_names
)
# Build FTS rows with (rowid, searchable_text) = (tags.rowid, "tag_name alias1 alias2 ...")
fts_rows = []
for rowid, tag_name in cursor.fetchall():
aliases = aliases_map.get(tag_name, "")
if aliases:
# Replace commas with spaces to create searchable text
# Strip "/" prefix from aliases as it's an FTS5 special character
alias_parts = []
for alias in aliases.split(","):
alias = alias.strip()
if alias.startswith("/"):
alias = alias[1:] # Remove leading slash
if alias:
alias_parts.append(alias)
searchable_text = f"{tag_name} {' '.join(alias_parts)}" if alias_parts else tag_name
else:
searchable_text = tag_name
fts_rows.append((rowid, searchable_text))
if fts_rows:
conn.executemany("INSERT INTO tag_fts (rowid, searchable_text) VALUES (?, ?)", fts_rows)
def ensure_ready(self) -> bool:
"""Ensure the index is ready, building if necessary.
Returns:
True if the index is ready, False otherwise.
"""
if self.is_ready():
return True
# Check if index already exists and has data
self.initialize()
if self._schema_initialized:
# Check if schema migration requires rebuild
if getattr(self, "_needs_rebuild", False):
logger.info("Schema migration requires index rebuild")
self._needs_rebuild = False
self.build_index()
return self.is_ready()
count = self.get_indexed_count()
if count > 0:
self._ready.set()
logger.debug("Tag FTS index already populated with %d tags", count)
return True
# Build the index
self.build_index()
return self.is_ready()
def search(
self,
query: str,
categories: Optional[List[int]] = None,
limit: int = 20
) -> List[Dict]:
"""Search tags using FTS5 with prefix matching.
Supports alias search: if the query matches an alias rather than
the tag_name, the result will include a "matched_alias" field.
Args:
query: The search query string.
categories: Optional list of category IDs to filter by.
limit: Maximum number of results to return.
Returns:
List of dictionaries with tag_name, category, post_count,
and optionally matched_alias.
"""
# Ensure index is ready (lazy initialization)
if not self.ensure_ready():
if not self._warned_not_ready:
logger.debug("Tag FTS index not ready, returning empty results")
self._warned_not_ready = True
return []
if not query or not query.strip():
return []
fts_query = self._build_fts_query(query)
if not fts_query:
return []
try:
with self._lock:
conn = self._connect(readonly=True)
try:
# Build the SQL query - now also fetch aliases for matched_alias detection
# Use subquery for category filter to ensure FTS is evaluated first
if categories:
placeholders = ",".join("?" * len(categories))
sql = f"""
SELECT t.tag_name, t.category, t.post_count, t.aliases
FROM tags t
WHERE t.rowid IN (
SELECT rowid FROM tag_fts WHERE searchable_text MATCH ?
)
AND t.category IN ({placeholders})
ORDER BY t.post_count DESC
LIMIT ?
"""
params = [fts_query] + categories + [limit]
else:
sql = """
SELECT t.tag_name, t.category, t.post_count, t.aliases
FROM tag_fts f
JOIN tags t ON f.rowid = t.rowid
WHERE f.searchable_text MATCH ?
ORDER BY t.post_count DESC
LIMIT ?
"""
params = [fts_query, limit]
cursor = conn.execute(sql, params)
results = []
for row in cursor.fetchall():
result = {
"tag_name": row[0],
"category": row[1],
"post_count": row[2],
}
# Check if search matched an alias rather than the tag_name
matched_alias = self._find_matched_alias(query, row[0], row[3])
if matched_alias:
result["matched_alias"] = matched_alias
results.append(result)
return results
finally:
conn.close()
except Exception as exc:
logger.debug("Tag FTS search error for query '%s': %s", query, exc)
return []
def _find_matched_alias(self, query: str, tag_name: str, aliases_str: str) -> Optional[str]:
"""Find which alias matched the query, if any.
Args:
query: The original search query.
tag_name: The canonical tag name.
aliases_str: Comma-separated string of aliases.
Returns:
The matched alias string, or None if the query matched the tag_name directly.
"""
query_lower = query.lower().strip()
if not query_lower:
return None
# Strip leading "/" from query if present (FTS index strips these)
query_normalized = query_lower.lstrip("/")
# Check if query matches tag_name prefix (direct match, no alias needed)
if tag_name.lower().startswith(query_normalized):
return None
# Check aliases first - if query matches an alias or a word within an alias, return it
if aliases_str:
for alias in aliases_str.split(","):
alias = alias.strip()
if not alias:
continue
# Normalize alias for comparison (strip leading slash)
alias_normalized = alias.lower().lstrip("/")
# Check if alias starts with query
if alias_normalized.startswith(query_normalized):
return alias # Return original alias (with "/" if present)
# Check if any word within the alias starts with query
# (mirrors FTS5 tokenization which splits on underscores)
alias_words = alias_normalized.replace("_", " ").split()
for word in alias_words:
if word.startswith(query_normalized):
return alias
# If no alias matched, check if query matches a word in tag_name
# (handles cases like "long_hair" matching "long" - no alias indicator needed)
tag_words = tag_name.lower().replace("_", " ").split()
for word in tag_words:
if word.startswith(query_normalized):
return None
# Query matched via FTS but not tag_name words or aliases
# This shouldn't normally happen, but return None for safety
return None
def get_indexed_count(self) -> int:
"""Return the number of tags currently indexed."""
if not self._schema_initialized:
return 0
try:
with self._lock:
conn = self._connect(readonly=True)
try:
cursor = conn.execute("SELECT COUNT(*) FROM tags")
result = cursor.fetchone()
return result[0] if result else 0
finally:
conn.close()
except Exception:
return 0
def clear(self) -> bool:
"""Clear all data from the FTS index.
Returns:
True if successful, False otherwise.
"""
try:
with self._lock:
conn = self._connect()
try:
conn.execute("DELETE FROM tag_fts")
conn.execute("DELETE FROM tags")
conn.commit()
self._ready.clear()
return True
finally:
conn.close()
except Exception as exc:
logger.error("Failed to clear tag FTS index: %s", exc)
return False
# Internal helpers
def _connect(self, readonly: bool = False) -> sqlite3.Connection:
"""Create a database connection."""
uri = False
path = self._db_path
if readonly:
if not os.path.exists(path):
raise FileNotFoundError(path)
path = f"file:{path}?mode=ro"
uri = True
conn = sqlite3.connect(path, check_same_thread=False, uri=uri)
conn.row_factory = sqlite3.Row
return conn
def _build_fts_query(self, query: str) -> str:
"""Build an FTS5 query string with prefix matching.
Args:
query: The user's search query.
Returns:
FTS5 query string.
"""
# Split query into words and clean them
words = query.lower().split()
if not words:
return ""
# Escape and add prefix wildcard to each word
prefix_terms = []
for word in words:
escaped = self._escape_fts_query(word)
if escaped:
# Add prefix wildcard for substring-like matching
prefix_terms.append(f"{escaped}*")
if not prefix_terms:
return ""
# Combine terms with implicit AND (all words must match)
return " ".join(prefix_terms)
def _escape_fts_query(self, text: str) -> str:
"""Escape special FTS5 query characters.
Characters such as " ( ) * : ^ - have meaning in FTS5 query syntax
(":" is the column-filter operator), and "/" is not a valid bareword
character. The prefix "*" is appended by the caller, so it is
stripped here as well.
"""
if not text:
return ""
# Replace FTS5 query-syntax characters with spaces so each remaining
# word is a plain bareword term
special = ['"', "(", ")", "*", ":", "^", "-", "{", "}", "[", "]", "/"]
result = text
for char in special:
result = result.replace(char, " ")
# Collapse multiple spaces and strip
result = re.sub(r"\s+", " ", result).strip()
return result
# Singleton instance
_tag_fts_index: Optional[TagFTSIndex] = None
_tag_fts_lock = threading.Lock()
def get_tag_fts_index() -> TagFTSIndex:
"""Get the singleton TagFTSIndex instance."""
global _tag_fts_index
if _tag_fts_index is None:
with _tag_fts_lock:
if _tag_fts_index is None:
_tag_fts_index = TagFTSIndex()
return _tag_fts_index
__all__ = [
"TagFTSIndex",
"get_tag_fts_index",
"CATEGORY_NAMES",
"CATEGORY_NAME_TO_IDS",
]
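To see the `searchable_text` trick in isolation: aliases are folded into a single indexed column so a prefix query like `miku*` surfaces the canonical `hatsune_miku`. A toy sketch against an in-memory database (table and column names mirror the module above, but the data is invented and this is not the module itself; it assumes an SQLite build with FTS5, which standard CPython ships with):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE VIRTUAL TABLE tag_fts USING fts5(searchable_text, tokenize='unicode61');
CREATE TABLE tags (rowid INTEGER PRIMARY KEY, tag_name TEXT, post_count INTEGER);
""")
rows = [
    # (rowid, canonical tag, "tag_name alias1 alias2 ...", post_count)
    (1, "hatsune_miku", "hatsune_miku miku vocaloid_miku", 500),
    (2, "long_hair", "long_hair", 900),
]
for rowid, tag, searchable, count in rows:
    conn.execute("INSERT INTO tags (rowid, tag_name, post_count) VALUES (?, ?, ?)",
                 (rowid, tag, count))
    conn.execute("INSERT INTO tag_fts (rowid, searchable_text) VALUES (?, ?)",
                 (rowid, searchable))

# unicode61 splits on "_", so "miku*" matches a token inside searchable_text
cur = conn.execute("""
SELECT t.tag_name FROM tag_fts f JOIN tags t ON f.rowid = t.rowid
WHERE f.searchable_text MATCH ? ORDER BY t.post_count DESC
""", ("miku*",))
print([r[0] for r in cur])  # -> ['hatsune_miku']
```

Sharing `rowid` between `tags` and `tag_fts` is what makes the JOIN cheap: FTS5 returns rowids, and the metadata lookup is a primary-key fetch.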

py/utils/cache_paths.py Normal file
View File

@@ -0,0 +1,421 @@
"""Centralized cache path resolution with automatic migration support.
This module provides a unified interface for resolving cache file paths,
with automatic migration from legacy locations to the new organized
cache directory structure.
Target structure:
{settings_dir}/
└── cache/
├── symlink/
│ └── symlink_map.json
├── model/
│ └── {library_name}.sqlite
├── recipe/
│ └── {library_name}.sqlite
└── fts/
├── recipe_fts.sqlite
└── tag_fts.sqlite
"""
from __future__ import annotations
import logging
import os
import re
import shutil
from enum import Enum
from typing import List, Optional
from .settings_paths import get_project_root, get_settings_dir
logger = logging.getLogger(__name__)
class CacheType(Enum):
"""Types of cache files managed by the cache path resolver."""
MODEL = "model"
RECIPE = "recipe"
RECIPE_FTS = "recipe_fts"
TAG_FTS = "tag_fts"
SYMLINK = "symlink"
# Subdirectory structure for each cache type
_CACHE_SUBDIRS = {
CacheType.MODEL: "model",
CacheType.RECIPE: "recipe",
CacheType.RECIPE_FTS: "fts",
CacheType.TAG_FTS: "fts",
CacheType.SYMLINK: "symlink",
}
# Filename patterns for each cache type
_CACHE_FILENAMES = {
CacheType.MODEL: "{library_name}.sqlite",
CacheType.RECIPE: "{library_name}.sqlite",
CacheType.RECIPE_FTS: "recipe_fts.sqlite",
CacheType.TAG_FTS: "tag_fts.sqlite",
CacheType.SYMLINK: "symlink_map.json",
}
def get_cache_base_dir(create: bool = True) -> str:
"""Return the base cache directory path.
Args:
create: Whether to create the directory if it does not exist.
Returns:
The absolute path to the cache base directory ({settings_dir}/cache/).
"""
settings_dir = get_settings_dir(create=create)
cache_dir = os.path.join(settings_dir, "cache")
if create:
os.makedirs(cache_dir, exist_ok=True)
return cache_dir
def _sanitize_library_name(library_name: Optional[str]) -> str:
"""Sanitize a library name for use in filenames.
Args:
library_name: The library name to sanitize.
Returns:
A sanitized version safe for use in filenames.
"""
name = library_name or "default"
return re.sub(r"[^A-Za-z0-9_.-]", "_", name)
def get_cache_file_path(
cache_type: CacheType,
library_name: Optional[str] = None,
create_dir: bool = True,
) -> str:
"""Get the canonical path for a cache file.
Args:
cache_type: The type of cache file.
library_name: The library name (only used for MODEL and RECIPE types).
create_dir: Whether to create the parent directory if it does not exist.
Returns:
The absolute path to the cache file in its canonical location.
"""
cache_base = get_cache_base_dir(create=create_dir)
subdir = _CACHE_SUBDIRS[cache_type]
cache_dir = os.path.join(cache_base, subdir)
if create_dir:
os.makedirs(cache_dir, exist_ok=True)
filename_template = _CACHE_FILENAMES[cache_type]
safe_name = _sanitize_library_name(library_name)
filename = filename_template.format(library_name=safe_name)
return os.path.join(cache_dir, filename)
def get_legacy_cache_paths(
cache_type: CacheType,
library_name: Optional[str] = None,
) -> List[str]:
"""Get a list of legacy cache file paths to check for migration.
The paths are returned in order of priority (most recent first).
Args:
cache_type: The type of cache file.
library_name: The library name (only used for MODEL and RECIPE types).
Returns:
A list of potential legacy paths to check, in order of preference.
"""
try:
settings_dir = get_settings_dir(create=False)
except Exception:
settings_dir = get_project_root()
safe_name = _sanitize_library_name(library_name)
legacy_paths: List[str] = []
if cache_type == CacheType.MODEL:
# Legacy per-library path: {settings_dir}/model_cache/{library}.sqlite
legacy_paths.append(
os.path.join(settings_dir, "model_cache", f"{safe_name}.sqlite")
)
# Legacy root-level single cache (for "default" library only)
if safe_name.lower() in ("default", ""):
legacy_paths.append(os.path.join(settings_dir, "model_cache.sqlite"))
elif cache_type == CacheType.RECIPE:
# Legacy per-library path: {settings_dir}/recipe_cache/{library}.sqlite
legacy_paths.append(
os.path.join(settings_dir, "recipe_cache", f"{safe_name}.sqlite")
)
# Legacy root-level single cache (for "default" library only)
if safe_name.lower() in ("default", ""):
legacy_paths.append(os.path.join(settings_dir, "recipe_cache.sqlite"))
elif cache_type == CacheType.RECIPE_FTS:
# Legacy root-level path
legacy_paths.append(os.path.join(settings_dir, "recipe_fts.sqlite"))
elif cache_type == CacheType.TAG_FTS:
# Legacy root-level path
legacy_paths.append(os.path.join(settings_dir, "tag_fts.sqlite"))
elif cache_type == CacheType.SYMLINK:
# Current location in cache/ but without subdirectory
legacy_paths.append(
os.path.join(settings_dir, "cache", "symlink_map.json")
)
return legacy_paths
def _cleanup_legacy_file_after_migration(
legacy_path: str,
canonical_path: str,
) -> bool:
"""Safely remove a legacy file after successful migration.
Args:
legacy_path: The legacy file path to remove.
canonical_path: The canonical path where the file was copied to.
Returns:
True if cleanup succeeded, False otherwise.
"""
try:
if not os.path.exists(canonical_path):
logger.warning(
"Skipping cleanup of %s: canonical file not found at %s",
legacy_path,
canonical_path,
)
return False
legacy_size = os.path.getsize(legacy_path)
canonical_size = os.path.getsize(canonical_path)
if legacy_size != canonical_size:
logger.warning(
"Skipping cleanup of %s: file size mismatch (legacy=%d, canonical=%d)",
legacy_path,
legacy_size,
canonical_size,
)
return False
os.remove(legacy_path)
logger.info("Cleaned up legacy cache file: %s", legacy_path)
_cleanup_empty_legacy_directories(legacy_path)
return True
except Exception as exc:
logger.warning(
"Failed to cleanup legacy cache file %s: %s",
legacy_path,
exc,
)
return False
def _cleanup_empty_legacy_directories(legacy_path: str) -> None:
"""Remove empty parent directories of a legacy file.
This function only removes directories if they are empty,
using os.rmdir() which fails on non-empty directories.
Args:
legacy_path: The legacy file path whose parent directories should be cleaned.
"""
try:
parent_dir = os.path.dirname(legacy_path)
legacy_dir_names = ("model_cache", "recipe_cache")
current = parent_dir
while current:
base_name = os.path.basename(current)
if base_name in legacy_dir_names:
if os.path.isdir(current) and not os.listdir(current):
try:
os.rmdir(current)
logger.info("Removed empty legacy directory: %s", current)
except Exception:
pass
parent = os.path.dirname(current)
if parent == current:
break
current = parent
except Exception as exc:
logger.debug("Failed to cleanup empty legacy directories: %s", exc)
def resolve_cache_path_with_migration(
cache_type: CacheType,
library_name: Optional[str] = None,
env_override: Optional[str] = None,
) -> str:
"""Resolve the cache file path, migrating from legacy locations if needed.
This function performs lazy migration: on first access, it checks if the
file exists at the canonical location. If not, it looks for legacy files
and copies them to the new location. After successful migration, the
legacy file is automatically removed.
Args:
cache_type: The type of cache file.
library_name: The library name (only used for MODEL and RECIPE types).
env_override: Optional environment variable value that overrides all
path resolution. When set, returns this path directly without
any migration.
Returns:
The resolved path to use for the cache file.
"""
# Environment override bypasses all migration logic
if env_override:
return env_override
canonical_path = get_cache_file_path(cache_type, library_name, create_dir=True)
# If file already exists at canonical location, use it
if os.path.exists(canonical_path):
return canonical_path
# Check legacy paths for migration
legacy_paths = get_legacy_cache_paths(cache_type, library_name)
for legacy_path in legacy_paths:
if os.path.exists(legacy_path):
try:
shutil.copy2(legacy_path, canonical_path)
logger.info(
"Migrated %s cache from %s to %s",
cache_type.value,
legacy_path,
canonical_path,
)
_cleanup_legacy_file_after_migration(legacy_path, canonical_path)
return canonical_path
except Exception as exc:
logger.warning(
"Failed to migrate %s cache from %s: %s",
cache_type.value,
legacy_path,
exc,
)
# No legacy file found; return canonical path (will be created fresh)
return canonical_path
def get_legacy_cache_files_for_cleanup() -> List[str]:
"""Get a list of legacy cache files that can be removed after migration.
This function returns files that exist in legacy locations and have
corresponding files in the new canonical locations.
Returns:
A list of legacy file paths that are safe to remove.
"""
files_to_remove: List[str] = []
try:
settings_dir = get_settings_dir(create=False)
except Exception:
return files_to_remove
# Check each cache type for migrated legacy files
for cache_type in CacheType:
# For MODEL and RECIPE, we need to check each library
if cache_type in (CacheType.MODEL, CacheType.RECIPE):
# Check default library
_check_legacy_for_cleanup(cache_type, "default", files_to_remove)
# Check for any per-library caches in legacy directories
legacy_dir_name = "model_cache" if cache_type == CacheType.MODEL else "recipe_cache"
legacy_dir = os.path.join(settings_dir, legacy_dir_name)
if os.path.isdir(legacy_dir):
try:
for filename in os.listdir(legacy_dir):
if filename.endswith(".sqlite"):
library_name = filename[:-7] # Remove .sqlite
_check_legacy_for_cleanup(cache_type, library_name, files_to_remove)
except Exception:
pass
else:
_check_legacy_for_cleanup(cache_type, None, files_to_remove)
return files_to_remove
def _check_legacy_for_cleanup(
cache_type: CacheType,
library_name: Optional[str],
files_to_remove: List[str],
) -> None:
"""Check if a legacy cache file can be removed after migration.
Args:
cache_type: The type of cache file.
library_name: The library name (only used for MODEL and RECIPE types).
files_to_remove: List to append removable files to.
"""
canonical_path = get_cache_file_path(cache_type, library_name, create_dir=False)
if not os.path.exists(canonical_path):
return
legacy_paths = get_legacy_cache_paths(cache_type, library_name)
for legacy_path in legacy_paths:
if os.path.exists(legacy_path) and legacy_path not in files_to_remove:
files_to_remove.append(legacy_path)
def cleanup_legacy_cache_files(dry_run: bool = True) -> List[str]:
"""Remove legacy cache files that have been migrated.
Args:
dry_run: If True, only return the list of files that would be removed
without actually removing them.
Returns:
A list of files that were (or would be) removed.
"""
files = get_legacy_cache_files_for_cleanup()
if dry_run or not files:
return files
removed: List[str] = []
for file_path in files:
try:
os.remove(file_path)
removed.append(file_path)
logger.info("Removed legacy cache file: %s", file_path)
except Exception as exc:
logger.warning("Failed to remove legacy cache file %s: %s", file_path, exc)
# Try to remove empty legacy directories
try:
settings_dir = get_settings_dir(create=False)
for legacy_dir_name in ("model_cache", "recipe_cache"):
legacy_dir = os.path.join(settings_dir, legacy_dir_name)
if os.path.isdir(legacy_dir) and not os.listdir(legacy_dir):
os.rmdir(legacy_dir)
logger.info("Removed empty legacy directory: %s", legacy_dir)
except Exception:
pass
return removed
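The core of `resolve_cache_path_with_migration` — copy the legacy file to the canonical location, then delete the original only after a byte-size check confirms the copy landed — can be exercised on its own. A condensed sketch (the `migrate_file` helper is hypothetical; the real function also walks a prioritized list of legacy paths and logs each step):

```python
import os
import shutil
import tempfile


def migrate_file(legacy_path, canonical_path):
    """Copy legacy -> canonical, then remove legacy only if sizes match."""
    if os.path.exists(canonical_path):
        return canonical_path  # already migrated, nothing to do
    if os.path.exists(legacy_path):
        shutil.copy2(legacy_path, canonical_path)
        # Guard the delete: only remove the legacy file once the copy
        # verifiably landed with the same byte size
        if os.path.getsize(legacy_path) == os.path.getsize(canonical_path):
            os.remove(legacy_path)
    return canonical_path


with tempfile.TemporaryDirectory() as d:
    old = os.path.join(d, "model_cache.sqlite")
    new = os.path.join(d, "cache_model.sqlite")
    with open(old, "wb") as f:
        f.write(b"cache-bytes")
    migrate_file(old, new)
    print(os.path.exists(old), os.path.exists(new))  # -> False True
```

Doing copy-then-verify-then-delete (rather than a bare rename) means a failure at any step leaves at least one intact copy of the cache on disk.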

View File

@@ -45,8 +45,14 @@ SUPPORTED_MEDIA_EXTENSIONS = {
"videos": [".mp4", ".webm"],
}
# Valid Lora types
VALID_LORA_TYPES = ["lora", "locon", "dora"]
# Valid sub-types for each scanner type
VALID_LORA_SUB_TYPES = ["lora", "locon", "dora"]
VALID_CHECKPOINT_SUB_TYPES = ["checkpoint", "diffusion_model"]
VALID_EMBEDDING_SUB_TYPES = ["embedding"]
VALID_MISC_SUB_TYPES = ["vae", "upscaler"]
# Backward compatibility alias
VALID_LORA_TYPES = VALID_LORA_SUB_TYPES
# Supported Civitai model types for user model queries (case-insensitive)
CIVITAI_USER_MODEL_TYPES = [
@@ -89,6 +95,7 @@ DEFAULT_PRIORITY_TAG_CONFIG = {
"lora": ", ".join(CIVITAI_MODEL_TAGS),
"checkpoint": ", ".join(CIVITAI_MODEL_TAGS),
"embedding": ", ".join(CIVITAI_MODEL_TAGS),
"misc": ", ".join(CIVITAI_MODEL_TAGS),
}
# baseModel values from CivitAI that should be treated as diffusion models (unet)

View File

@@ -173,14 +173,14 @@ class LoraMetadata(BaseModelMetadata):
@dataclass
class CheckpointMetadata(BaseModelMetadata):
"""Represents the metadata structure for a Checkpoint model"""
model_type: str = "checkpoint" # Model type (checkpoint, diffusion_model, etc.)
sub_type: str = "checkpoint" # Model sub-type (checkpoint, diffusion_model, etc.)
@classmethod
def from_civitai_info(cls, version_info: Dict, file_info: Dict, save_path: str) -> 'CheckpointMetadata':
"""Create CheckpointMetadata instance from Civitai version info"""
file_name = file_info['name']
base_model = determine_base_model(version_info.get('baseModel', ''))
model_type = version_info.get('type', 'checkpoint')
sub_type = version_info.get('type', 'checkpoint')
# Extract tags and description if available
tags = []
@@ -203,7 +203,7 @@ class CheckpointMetadata(BaseModelMetadata):
preview_nsfw_level=0,
from_civitai=True,
civitai=version_info,
model_type=model_type,
sub_type=sub_type,
tags=tags,
modelDescription=description
)
@@ -211,15 +211,15 @@ class CheckpointMetadata(BaseModelMetadata):
@dataclass
class EmbeddingMetadata(BaseModelMetadata):
"""Represents the metadata structure for an Embedding model"""
model_type: str = "embedding" # Model type (embedding, textual_inversion, etc.)
sub_type: str = "embedding"
@classmethod
def from_civitai_info(cls, version_info: Dict, file_info: Dict, save_path: str) -> 'EmbeddingMetadata':
"""Create EmbeddingMetadata instance from Civitai version info"""
file_name = file_info['name']
base_model = determine_base_model(version_info.get('baseModel', ''))
model_type = version_info.get('type', 'embedding')
sub_type = version_info.get('type', 'embedding')
# Extract tags and description if available
tags = []
description = ""
@@ -228,7 +228,7 @@ class EmbeddingMetadata(BaseModelMetadata):
tags = version_info['model']['tags']
if 'description' in version_info['model']:
description = version_info['model']['description']
return cls(
file_name=os.path.splitext(file_name)[0],
model_name=version_info.get('model', {}).get('name', os.path.splitext(file_name)[0]),
@@ -241,7 +241,53 @@ class EmbeddingMetadata(BaseModelMetadata):
preview_nsfw_level=0,
from_civitai=True,
civitai=version_info,
model_type=model_type,
sub_type=sub_type,
tags=tags,
modelDescription=description
)
@dataclass
class MiscMetadata(BaseModelMetadata):
"""Represents the metadata structure for a Misc model (VAE, Upscaler)"""
sub_type: str = "vae"
@classmethod
def from_civitai_info(cls, version_info: Dict, file_info: Dict, save_path: str) -> 'MiscMetadata':
"""Create MiscMetadata instance from Civitai version info"""
file_name = file_info['name']
base_model = determine_base_model(version_info.get('baseModel', ''))
# Determine sub_type from CivitAI model type
civitai_type = version_info.get('model', {}).get('type', '').lower()
if civitai_type == 'vae':
sub_type = 'vae'
elif civitai_type == 'upscaler':
sub_type = 'upscaler'
else:
sub_type = 'vae' # Default to vae
# Extract tags and description if available
tags = []
description = ""
if 'model' in version_info:
if 'tags' in version_info['model']:
tags = version_info['model']['tags']
if 'description' in version_info['model']:
description = version_info['model']['description']
return cls(
file_name=os.path.splitext(file_name)[0],
model_name=version_info.get('model', {}).get('name', os.path.splitext(file_name)[0]),
file_path=save_path.replace(os.sep, '/'),
size=file_info.get('sizeKB', 0) * 1024,
modified=datetime.now().timestamp(),
sha256=file_info['hashes'].get('SHA256', '').lower(),
base_model=base_model,
preview_url=None, # Will be updated after preview download
preview_nsfw_level=0,
from_civitai=True,
civitai=version_info,
sub_type=sub_type,
tags=tags,
modelDescription=description
)
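The `sub_type` branching in `MiscMetadata.from_civitai_info` reduces to a small lookup; a standalone sketch that mirrors the default-to-`vae` fallback above (`resolve_misc_sub_type` is a hypothetical helper, not part of the diff):

```python
def resolve_misc_sub_type(version_info: dict) -> str:
    """Map a CivitAI model type onto a misc sub-type, defaulting to 'vae'."""
    civitai_type = version_info.get("model", {}).get("type", "").lower()
    return civitai_type if civitai_type in ("vae", "upscaler") else "vae"
```

The case-insensitive comparison matters because CivitAI reports types like `"Upscaler"` with mixed casing.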

refs/danbooru_e621_merged.csv (new file, 221,787 lines)

File diff suppressed because one or more lines are too long

View File

@@ -296,6 +296,19 @@
min-height: 20px;
}
/* Gradient overlay on right side for icon readability */
.card-header::after {
content: '';
position: absolute;
top: 0;
right: 0;
width: 100px;
height: 100%;
background: linear-gradient(to left, oklch(0% 0 0 / 0.4) 0%, transparent 100%);
pointer-events: none;
border-top-right-radius: var(--border-radius);
}
.card-header-info {
display: flex;
align-items: center;
@@ -426,15 +439,39 @@
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
display: inline-block;
display: inline-flex;
align-items: center;
gap: 4px;
color: white;
text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.5);
background: rgba(255, 255, 255, 0.2);
padding: 2px var(--space-1);
background: rgba(255, 255, 255, 0.12);
padding: 2px 6px;
border-radius: var(--border-radius-xs);
backdrop-filter: blur(2px);
font-size: 0.85em;
backdrop-filter: blur(4px);
font-size: 0.8em;
line-height: 1.2;
font-weight: 500;
}
/* Subtle separator between sub-type and base model */
.model-separator {
width: 1px;
height: 0.6em;
background: rgba(255, 255, 255, 0.25);
flex-shrink: 0;
}
/* Sub-type abbreviation styling */
.model-sub-type {
opacity: 0.9;
flex-shrink: 0;
}
/* Base model abbreviation styling */
.model-base-type {
flex-shrink: 1;
overflow: hidden;
text-overflow: ellipsis;
}
/* Style for version name */
@@ -596,18 +633,22 @@
}
.model-update-badge {
display: inline-flex;
align-items: center;
gap: 6px;
padding: 2px 10px;
border-radius: var(--border-radius-xs);
width: 18px;
height: 18px;
padding: 0;
border-radius: 4px;
background: var(--badge-update-bg);
color: var(--badge-update-text);
font-size: 0.7rem;
font-weight: 600;
letter-spacing: 0.04em;
text-transform: uppercase;
box-shadow: 0 4px 12px var(--badge-update-glow);
display: inline-flex;
align-items: center;
justify-content: center;
flex-shrink: 0;
box-shadow: 0 2px 6px var(--badge-update-glow);
border: 1px solid color-mix(in oklab, var(--badge-update-bg) 55%, transparent);
white-space: nowrap;
}
.model-update-badge i {
margin-left: 1px;
line-height: 1;
}

View File

@@ -482,3 +482,75 @@
[data-theme="dark"] .import-container {
background: rgba(255, 255, 255, 0.03);
}
/* Setup Guidance State - When example images path is not configured */
.import-container--needs-setup {
cursor: default;
border-style: solid;
border-color: var(--border-color);
background: var(--lora-surface);
}
.import-container--needs-setup:hover {
border-color: var(--border-color);
transform: none;
}
.import-setup-guidance {
display: flex;
flex-direction: column;
align-items: center;
gap: var(--space-2);
padding: var(--space-3) var(--space-2);
text-align: center;
}
.setup-icon {
width: 64px;
height: 64px;
border-radius: 50%;
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
display: flex;
align-items: center;
justify-content: center;
margin-bottom: var(--space-1);
}
.setup-icon i {
font-size: 1.75rem;
color: var(--lora-accent);
}
.import-setup-guidance h3 {
margin: 0;
font-size: 1.2rem;
font-weight: 600;
color: var(--text-color);
}
.setup-description {
margin: 0;
color: var(--text-color);
opacity: 0.9;
max-width: 320px;
line-height: 1.5;
}
.setup-usage {
margin: 0;
font-size: 0.85em;
color: var(--text-color);
opacity: 0.6;
font-style: italic;
}
.setup-settings-btn {
margin-top: var(--space-2);
background: var(--lora-accent) !important;
color: var(--lora-text) !important;
}
.setup-settings-btn:hover {
opacity: 0.9;
transform: translateY(-1px);
}

View File

@@ -1,4 +1,146 @@
/* Update Modal specific styles */
/* Update Modal Styles */
.update-header {
display: flex;
align-items: center;
gap: var(--space-2);
margin-bottom: var(--space-3);
padding-bottom: var(--space-2);
border-bottom: 1px solid var(--lora-border);
}
.notification-tabs {
display: flex;
gap: var(--space-2);
margin-bottom: var(--space-3);
}
.notification-tab {
display: inline-flex;
align-items: center;
gap: var(--space-1);
padding: 0.5rem 0.75rem;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
color: var(--text-color);
cursor: pointer;
transition: background 0.2s ease, border-color 0.2s ease;
font-weight: 500;
}
.notification-tab:hover,
.notification-tab.active {
background: var(--lora-accent-light, rgba(0, 148, 255, 0.12));
border-color: var(--lora-accent);
color: var(--lora-accent-text, var(--text-color));
}
.notification-tab-badge {
display: none;
min-width: 1.25rem;
height: 1.25rem;
padding: 0 0.4rem;
border-radius: 999px;
background: var(--lora-accent);
color: #fff;
font-size: 0.75rem;
font-weight: 600;
align-items: center;
justify-content: center;
}
.notification-tab-badge.is-dot {
min-width: 0.5rem;
width: 0.5rem;
height: 0.5rem;
padding: 0;
border-radius: 50%;
}
.notification-tab-badge.visible {
display: inline-flex;
}
.notification-panels {
display: flex;
flex-direction: column;
gap: var(--space-3);
}
.notification-panel {
display: none;
}
.notification-panel.active {
display: block;
}
.update-icon {
font-size: 1.8em;
color: var(--lora-accent);
animation: bounce 1.5s infinite;
}
@keyframes bounce {
0%, 100% {
transform: translateY(0);
}
50% {
transform: translateY(-5px);
}
}
.update-content {
display: flex;
flex-direction: column;
gap: var(--space-3);
}
.update-info {
display: flex;
justify-content: space-between;
align-items: center;
border-radius: var(--border-radius-sm);
padding: var(--space-3);
}
.update-info .version-info {
display: flex;
flex-direction: column;
gap: 8px;
}
.current-version, .new-version {
display: flex;
align-items: center;
gap: 10px;
}
.label {
font-size: 0.9em;
color: var(--text-color);
opacity: 0.8;
}
.version-number {
font-family: monospace;
font-weight: 600;
}
.new-version .version-number {
color: var(--lora-accent);
}
/* Add styling for git info display */
.git-info {
font-size: 0.85em;
opacity: 0.7;
margin-top: 4px;
font-family: monospace;
color: var(--text-color);
}
/* Update actions container */
.update-actions {
display: flex;
flex-direction: column;
@@ -7,17 +149,24 @@
flex-wrap: nowrap;
}
/* GitHub link button styling */
.update-link {
color: var(--lora-accent);
text-decoration: none;
display: flex;
align-items: center;
gap: 8px;
font-size: 0.95em;
padding: 8px 16px;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
text-decoration: none;
color: var(--text-color);
transition: all 0.2s ease;
}
.update-link:hover {
text-decoration: underline;
background: var(--lora-accent);
color: white;
transform: translateY(-2px);
}
/* Update progress styles */
@@ -83,15 +232,96 @@
background-color: var(--lora-error);
}
/* Add styles for markdown elements in changelog */
/* Changelog section */
.changelog-section {
background: rgba(0, 0, 0, 0.02);
border: 1px solid rgba(0, 0, 0, 0.08);
border-radius: var(--border-radius-sm);
padding: var(--space-3);
}
.changelog-section h3 {
margin-top: 0;
margin-bottom: var(--space-2);
color: var(--lora-accent);
font-size: 1.1em;
}
.changelog-content {
max-height: 550px;
overflow-y: auto;
padding-left: var(--space-3);
}
.changelog-item {
margin-bottom: var(--space-2);
padding-bottom: var(--space-2);
border-bottom: 1px solid var(--lora-border);
}
.changelog-item:last-child {
margin-bottom: 0;
padding-bottom: 0;
border-bottom: none;
}
/* Multiple releases styling */
.changelog-content > .changelog-item + .changelog-item {
margin-top: var(--space-4);
padding-top: var(--space-4);
border-top: 1px solid var(--lora-border);
}
.changelog-item h4 {
display: flex;
flex-wrap: wrap;
align-items: center;
gap: var(--space-2);
margin-bottom: var(--space-2);
font-size: 1em;
margin-top: 0;
color: var(--text-color);
}
.changelog-item .latest-badge {
background-color: var(--lora-accent);
color: white;
padding: 2px 8px;
border-radius: 12px;
font-size: 0.75em;
font-weight: 500;
text-transform: uppercase;
}
.changelog-item .publish-date {
font-size: 0.85em;
color: var(--text-color);
opacity: 0.6;
font-weight: normal;
}
.changelog-item.latest {
background-color: rgba(66, 153, 225, 0.05);
border-radius: var(--border-radius-sm);
padding: var(--space-2);
border: 1px solid rgba(66, 153, 225, 0.2);
}
[data-theme="dark"] .changelog-item.latest {
background-color: rgba(66, 153, 225, 0.1);
border-color: rgba(66, 153, 225, 0.3);
}
/* Changelog markdown styles */
.changelog-item ul {
margin: 0;
padding-left: 20px;
margin-top: 8px;
}
.changelog-item li {
margin-bottom: 6px;
line-height: 1.4;
color: var(--text-color);
}
.changelog-item strong {
@@ -121,4 +351,191 @@
.changelog-item a:hover {
text-decoration: underline;
}
}
/* Update preferences section */
.update-preferences {
border-top: 1px solid var(--lora-border);
margin-top: var(--space-2);
padding-top: var(--space-2);
display: flex;
align-items: center;
justify-content: flex-start;
}
/* Override toggle switch styles for update preferences */
.update-preferences .toggle-switch {
position: relative;
display: inline-flex;
align-items: center;
width: auto;
height: 24px;
cursor: pointer;
}
.update-preferences .toggle-slider {
position: relative;
display: inline-block;
width: 50px;
height: 24px;
flex-shrink: 0;
margin-right: 10px;
}
.update-preferences .toggle-label {
margin-left: 0;
white-space: nowrap;
line-height: 24px;
}
/* Banner history */
.banner-history {
display: flex;
flex-direction: column;
gap: var(--space-2);
}
.banner-history h3 {
margin: 0;
font-size: 1.05rem;
color: var(--lora-accent);
}
.banner-history-empty {
margin: 0;
padding: var(--space-3);
background: var(--lora-surface);
border: 1px dashed var(--lora-border);
border-radius: var(--border-radius-sm);
text-align: center;
color: var(--text-muted, rgba(0, 0, 0, 0.6));
}
.banner-history-list {
list-style: none;
margin: 0;
padding: 0;
display: flex;
flex-direction: column;
gap: var(--space-2);
}
.banner-history-item {
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
padding: var(--space-2);
background: var(--card-bg, #fff);
display: flex;
flex-direction: column;
gap: var(--space-1);
}
[data-theme="dark"] .banner-history-item {
background: rgba(255, 255, 255, 0.03);
}
.banner-history-title {
margin: 0;
font-size: 1rem;
}
.banner-history-description {
margin: 0;
color: var(--text-color);
opacity: 0.85;
}
.banner-history-meta {
display: flex;
gap: var(--space-2);
font-size: 0.85rem;
color: var(--text-muted, rgba(0, 0, 0, 0.6));
flex-wrap: wrap;
}
.banner-history-time {
display: inline-flex;
align-items: center;
}
.banner-history-status {
display: inline-flex;
align-items: center;
gap: 0.35rem;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
}
.banner-history-status.active {
color: var(--lora-success);
}
.banner-history-status.dismissed {
color: var(--lora-error);
}
.banner-history-actions {
display: flex;
flex-wrap: wrap;
gap: var(--space-2);
margin-top: var(--space-1);
}
.banner-history-action {
display: inline-flex;
align-items: center;
gap: 0.35rem;
padding: 0.35rem 0.65rem;
border-radius: var(--border-radius-sm);
border: 1px solid var(--lora-border);
text-decoration: none;
font-size: 0.85rem;
transition: background 0.2s ease, color 0.2s ease, border-color 0.2s ease;
}
.banner-history-action i {
font-size: 0.9rem;
}
.banner-history-action.banner-history-action-primary {
background: var(--lora-accent);
border-color: var(--lora-accent);
color: #fff;
}
.banner-history-action.banner-history-action-secondary {
background: var(--lora-surface);
color: var(--text-color);
}
.banner-history-action.banner-history-action-tertiary {
background: transparent;
border-style: dashed;
}
.banner-history-action:hover {
background: var(--lora-accent-light, rgba(0, 148, 255, 0.12));
border-color: var(--lora-accent);
color: var(--lora-accent-text, var(--text-color));
}
@media (max-width: 480px) {
.update-info {
flex-direction: column;
gap: var(--space-2);
}
.version-info {
width: 100%;
}
.update-preferences {
flex-direction: row;
flex-wrap: wrap;
}
.update-preferences .toggle-label {
margin-top: 5px;
}
}

View File

@@ -466,6 +466,202 @@
border-color: var(--lora-accent);
}
/* Presets Section Styles */
.presets-section {
border-bottom: 1px solid var(--border-color);
padding-bottom: 16px;
margin-bottom: 16px;
}
.presets-section h4 {
margin: 0 0 8px 0;
}
.filter-presets {
display: flex;
flex-wrap: wrap;
gap: 6px;
}
.filter-preset {
display: inline-flex;
align-items: center;
gap: 8px;
padding: 4px 4px 4px 10px;
border-radius: var(--border-radius-sm);
background-color: var(--lora-surface);
border: 1px solid var(--border-color);
transition: all 0.2s ease;
cursor: pointer;
}
.filter-preset:hover {
background-color: var(--lora-surface-hover);
border-color: var(--lora-accent);
}
.filter-preset.active {
background-color: var(--lora-accent);
border-color: var(--lora-accent);
}
.filter-preset.active .preset-name {
color: white;
font-weight: 600;
}
.filter-preset.active .preset-delete-btn {
color: white;
opacity: 0.8;
}
.filter-preset.active .preset-delete-btn:hover {
opacity: 1;
color: white;
}
.preset-name {
cursor: pointer;
font-size: 14px;
color: var(--text-color);
user-select: none;
display: flex;
align-items: center;
gap: 6px;
white-space: nowrap;
}
.preset-delete-btn {
background: none;
border: none;
color: var(--text-color);
opacity: 0.5;
cursor: pointer;
padding: 4px;
display: flex;
align-items: center;
justify-content: center;
font-size: 11px;
transition: all 0.2s ease;
margin-left: auto;
}
.preset-delete-btn:hover {
opacity: 1;
color: var(--lora-error, #e74c3c);
}
.add-preset-btn {
background-color: transparent;
border: 1px dashed var(--border-color);
color: var(--text-color);
opacity: 0.85;
display: inline-flex;
align-items: center;
gap: 6px;
padding: 4px 10px;
font-size: 14px;
border-radius: var(--border-radius-sm);
cursor: pointer;
transition: all 0.25s ease;
}
/* Enabled state - visual cue that button is actionable */
.add-preset-btn:not(.disabled) {
border-color: var(--lora-accent);
border-style: solid;
background-color: rgba(66, 153, 225, 0.08);
}
.add-preset-btn:hover:not(.disabled) {
opacity: 1;
background-color: rgba(66, 153, 225, 0.15);
color: var(--lora-accent);
transform: translateY(-1px);
box-shadow: 0 2px 6px rgba(66, 153, 225, 0.2);
}
/* Disabled state - clear "unavailable" visual language */
.add-preset-btn.disabled {
opacity: 0.35;
cursor: not-allowed;
background-color: rgba(128, 128, 128, 0.05);
border-style: dashed;
border-color: var(--border-color);
color: var(--text-muted);
}
.add-preset-btn i {
font-size: 12px;
transition: transform 0.2s ease;
}
/* Inline preset naming input */
.preset-inline-input-container {
display: inline-flex;
align-items: center;
gap: 4px;
padding: 2px;
background-color: var(--lora-surface);
border: 1px solid var(--lora-accent);
border-radius: var(--border-radius-sm);
}
.preset-inline-input {
width: 120px;
padding: 4px 8px;
border: none;
background: transparent;
color: var(--text-color);
font-size: 13px;
outline: none;
}
.preset-inline-input::placeholder {
color: var(--text-color);
opacity: 0.5;
}
.preset-inline-btn {
background: none;
border: none;
color: var(--text-color);
cursor: pointer;
padding: 4px 6px;
display: flex;
align-items: center;
justify-content: center;
font-size: 12px;
transition: color 0.2s ease;
opacity: 0.7;
}
.preset-inline-btn:hover {
opacity: 1;
}
.preset-inline-btn.save:hover {
color: var(--lora-accent);
}
.preset-inline-btn.cancel:hover {
color: var(--lora-error, #e74c3c);
}
/* Two-step delete confirmation */
.preset-delete-btn.confirm {
color: var(--lora-accent);
opacity: 1;
animation: pulse-confirm 0.5s ease-in-out infinite alternate;
}
@keyframes pulse-confirm {
from { opacity: 0.7; }
to { opacity: 1; }
}
/* Mobile adjustments */
@media (max-width: 768px) {
.search-options-panel,

View File

@@ -1,400 +0,0 @@
/* Update Modal Styles */
.update-modal {
max-width: 600px;
}
.update-header {
display: flex;
align-items: center;
gap: var(--space-2);
margin-bottom: var(--space-3);
padding-bottom: var(--space-2);
border-bottom: 1px solid var(--lora-border);
}
.notification-tabs {
display: flex;
gap: var(--space-2);
margin-bottom: var(--space-3);
}
.notification-tab {
display: inline-flex;
align-items: center;
gap: var(--space-1);
padding: 0.5rem 0.75rem;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
color: var(--text-color);
cursor: pointer;
transition: background 0.2s ease, border-color 0.2s ease;
font-weight: 500;
}
.notification-tab:hover,
.notification-tab.active {
background: var(--lora-accent-light, rgba(0, 148, 255, 0.12));
border-color: var(--lora-accent);
color: var(--lora-accent-text, var(--text-color));
}
.notification-tab-badge {
display: none;
min-width: 1.25rem;
height: 1.25rem;
padding: 0 0.4rem;
border-radius: 999px;
background: var(--lora-accent);
color: #fff;
font-size: 0.75rem;
font-weight: 600;
align-items: center;
justify-content: center;
}
.notification-tab-badge.is-dot {
min-width: 0.5rem;
width: 0.5rem;
height: 0.5rem;
padding: 0;
border-radius: 50%;
}
.notification-tab-badge.visible {
display: inline-flex;
}
.notification-panels {
display: flex;
flex-direction: column;
gap: var(--space-3);
}
.notification-panel {
display: none;
}
.notification-panel.active {
display: block;
}
.update-icon {
font-size: 1.8em;
color: var(--lora-accent);
animation: bounce 1.5s infinite;
}
@keyframes bounce {
0%, 100% {
transform: translateY(0);
}
50% {
transform: translateY(-5px);
}
}
.update-content {
display: flex;
flex-direction: column;
gap: var(--space-3);
}
.update-info {
display: flex;
justify-content: space-between;
align-items: center;
border-radius: var(--border-radius-sm);
padding: var(--space-3);
}
.update-info .version-info {
display: flex;
flex-direction: column;
gap: 8px;
}
.current-version, .new-version {
display: flex;
align-items: center;
gap: 10px;
}
.label {
font-size: 0.9em;
color: var(--text-color);
opacity: 0.8;
}
.version-number {
font-family: monospace;
font-weight: 600;
}
.new-version .version-number {
color: var(--lora-accent);
}
/* Add styling for git info display */
.git-info {
font-size: 0.85em;
opacity: 0.7;
margin-top: 4px;
font-family: monospace;
color: var(--text-color);
}
.update-link {
display: flex;
align-items: center;
gap: 8px;
padding: 8px 16px;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
text-decoration: none;
color: var(--text-color);
transition: all 0.2s ease;
}
.update-link:hover {
background: var(--lora-accent);
color: white;
transform: translateY(-2px);
}
.changelog-section {
background: rgba(0, 0, 0, 0.02); /* subtle gray background */
border: 1px solid rgba(0, 0, 0, 0.08); /* more visible border */
border-radius: var(--border-radius-sm);
padding: var(--space-3);
}
.changelog-section h3 {
margin-top: 0;
margin-bottom: var(--space-2);
color: var(--lora-accent);
font-size: 1.1em;
}
.changelog-content {
max-height: 300px; /* Increased height since we removed instructions */
overflow-y: auto;
}
.changelog-item {
margin-bottom: var(--space-2);
padding-bottom: var(--space-2);
border-bottom: 1px solid var(--lora-border);
}
.changelog-item:last-child {
margin-bottom: 0;
padding-bottom: 0;
border-bottom: none;
}
.changelog-item h4 {
margin-top: 0;
margin-bottom: 8px;
font-size: 1em;
color: var(--text-color);
}
.changelog-item ul {
margin: 0;
padding-left: 20px;
}
.changelog-item li {
margin-bottom: 4px;
color: var(--text-color);
}
@media (max-width: 480px) {
.update-info {
flex-direction: column;
gap: var(--space-2);
}
.version-info {
width: 100%;
}
}
/* Update preferences section */
.update-preferences {
border-top: 1px solid var(--lora-border);
margin-top: var(--space-2);
padding-top: var(--space-2);
display: flex;
align-items: center;
justify-content: flex-start;
}
.banner-history {
display: flex;
flex-direction: column;
gap: var(--space-2);
}
.banner-history h3 {
margin: 0;
font-size: 1.05rem;
color: var(--lora-accent);
}
.banner-history-empty {
margin: 0;
padding: var(--space-3);
background: var(--lora-surface);
border: 1px dashed var(--lora-border);
border-radius: var(--border-radius-sm);
text-align: center;
color: var(--text-muted, rgba(0, 0, 0, 0.6));
}
.banner-history-list {
list-style: none;
margin: 0;
padding: 0;
display: flex;
flex-direction: column;
gap: var(--space-2);
}
.banner-history-item {
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
padding: var(--space-2);
background: var(--card-bg, #fff);
display: flex;
flex-direction: column;
gap: var(--space-1);
}
[data-theme="dark"] .banner-history-item {
background: rgba(255, 255, 255, 0.03);
}
.banner-history-title {
margin: 0;
font-size: 1rem;
}
.banner-history-description {
margin: 0;
color: var(--text-color);
opacity: 0.85;
}
.banner-history-meta {
display: flex;
gap: var(--space-2);
font-size: 0.85rem;
color: var(--text-muted, rgba(0, 0, 0, 0.6));
flex-wrap: wrap;
}
.banner-history-time {
display: inline-flex;
align-items: center;
}
.banner-history-status {
display: inline-flex;
align-items: center;
gap: 0.35rem;
font-weight: 600;
text-transform: uppercase;
letter-spacing: 0.05em;
}
.banner-history-status.active {
color: var(--lora-success);
}
.banner-history-status.dismissed {
color: var(--lora-error);
}
.banner-history-actions {
display: flex;
flex-wrap: wrap;
gap: var(--space-2);
margin-top: var(--space-1);
}
.banner-history-action {
display: inline-flex;
align-items: center;
gap: 0.35rem;
padding: 0.35rem 0.65rem;
border-radius: var(--border-radius-sm);
border: 1px solid var(--lora-border);
text-decoration: none;
font-size: 0.85rem;
transition: background 0.2s ease, color 0.2s ease, border-color 0.2s ease;
}
.banner-history-action i {
font-size: 0.9rem;
}
.banner-history-action.banner-history-action-primary {
background: var(--lora-accent);
border-color: var(--lora-accent);
color: #fff;
}
.banner-history-action.banner-history-action-secondary {
background: var(--lora-surface);
color: var(--text-color);
}
.banner-history-action.banner-history-action-tertiary {
background: transparent;
border-style: dashed;
}
.banner-history-action:hover {
background: var(--lora-accent-light, rgba(0, 148, 255, 0.12));
border-color: var(--lora-accent);
color: var(--lora-accent-text, var(--text-color));
}
/* Override toggle switch styles for update preferences */
.update-preferences .toggle-switch {
position: relative;
display: inline-flex;
align-items: center;
width: auto;
height: 24px;
cursor: pointer;
}
.update-preferences .toggle-slider {
position: relative;
display: inline-block;
width: 50px;
height: 24px;
flex-shrink: 0;
margin-right: 10px;
}
.update-preferences .toggle-label {
margin-left: 0;
white-space: nowrap;
line-height: 24px;
}
@media (max-width: 480px) {
.update-preferences {
flex-direction: row;
flex-wrap: wrap;
}
.update-preferences .toggle-label {
margin-top: 5px;
}
}

View File

@@ -20,7 +20,6 @@
@import 'components/toast.css';
@import 'components/loading.css';
@import 'components/menu.css';
@import 'components/update-modal.css';
@import 'components/lora-modal/lora-modal.css';
@import 'components/lora-modal/description.css';
@import 'components/lora-modal/tag.css';

View File

@@ -9,7 +9,8 @@ import { state } from '../state/index.js';
export const MODEL_TYPES = {
LORA: 'loras',
CHECKPOINT: 'checkpoints',
EMBEDDING: 'embeddings' // Future model type
EMBEDDING: 'embeddings',
MISC: 'misc'
};
// Base API configuration for each model type
@@ -40,6 +41,15 @@ export const MODEL_CONFIG = {
supportsBulkOperations: true,
supportsMove: true,
templateName: 'embeddings.html'
},
[MODEL_TYPES.MISC]: {
displayName: 'Misc',
singularName: 'misc',
defaultPageSize: 100,
supportsLetterFilter: false,
supportsBulkOperations: true,
supportsMove: true,
templateName: 'misc.html'
}
};
@@ -133,6 +143,11 @@ export const MODEL_SPECIFIC_ENDPOINTS = {
},
[MODEL_TYPES.EMBEDDING]: {
metadata: `/api/lm/${MODEL_TYPES.EMBEDDING}/metadata`,
},
[MODEL_TYPES.MISC]: {
metadata: `/api/lm/${MODEL_TYPES.MISC}/metadata`,
vae_roots: `/api/lm/${MODEL_TYPES.MISC}/vae_roots`,
upscaler_roots: `/api/lm/${MODEL_TYPES.MISC}/upscaler_roots`,
}
};

View File

@@ -59,6 +59,18 @@ export class BaseModelApiClient {
sort_by: pageState.sortBy
}, pageState);
// If params is null, it means wildcard resolved to no matches - return empty results
if (params === null) {
return {
items: [],
totalItems: 0,
totalPages: 0,
currentPage: page,
hasMore: false,
folders: []
};
}
const response = await fetch(`${this.apiConfig.endpoints.list}?${params}`);
if (!response.ok) {
throw new Error(`Failed to fetch ${this.apiConfig.config.displayName}s: ${response.statusText}`);
@@ -868,6 +880,13 @@ export class BaseModelApiClient {
}
if (pageState.filters.baseModel && pageState.filters.baseModel.length > 0) {
// Check for empty wildcard marker - if present, no models should match
const EMPTY_WILDCARD_MARKER = '__EMPTY_WILDCARD_RESULT__';
if (pageState.filters.baseModel.length === 1 &&
pageState.filters.baseModel[0] === EMPTY_WILDCARD_MARKER) {
// Wildcard resolved to no matches - return empty results
return null; // Signal to return empty results
}
pageState.filters.baseModel.forEach(model => {
params.append('base_model', model);
});
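The `EMPTY_WILDCARD_MARKER` contract above (shared by the recipe fetch further down) is an early-return sentinel: when a wildcard preset resolved to zero base models, the caller must short-circuit to an empty page instead of issuing an unfiltered request. A language-neutral sketch of that contract in Python (`build_base_model_params` is an illustrative name; only the marker string comes from the diff):

```python
from typing import List, Optional, Tuple

EMPTY_WILDCARD_MARKER = "__EMPTY_WILDCARD_RESULT__"

def build_base_model_params(base_models: List[str]) -> Optional[List[Tuple[str, str]]]:
    """Return query parameters, or None when the wildcard matched nothing."""
    if base_models == [EMPTY_WILDCARD_MARKER]:
        return None  # caller short-circuits to an empty result page
    return [("base_model", m) for m in base_models]
```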

static/js/api/miscApi.js (new file, 62 lines)
View File

@@ -0,0 +1,62 @@
import { BaseModelApiClient } from './baseModelApi.js';
import { getSessionItem } from '../utils/storageHelpers.js';
export class MiscApiClient extends BaseModelApiClient {
_addModelSpecificParams(params, pageState) {
const filterMiscHash = getSessionItem('recipe_to_misc_filterHash');
const filterMiscHashes = getSessionItem('recipe_to_misc_filterHashes');
if (filterMiscHash) {
params.append('misc_hash', filterMiscHash);
} else if (filterMiscHashes) {
try {
if (Array.isArray(filterMiscHashes) && filterMiscHashes.length > 0) {
params.append('misc_hashes', filterMiscHashes.join(','));
}
} catch (error) {
console.error('Error parsing misc hashes from session storage:', error);
}
}
if (pageState.subType) {
params.append('sub_type', pageState.subType);
}
}
async getMiscInfo(filePath) {
try {
const response = await fetch(this.apiConfig.endpoints.specific.info, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ file_path: filePath })
});
if (!response.ok) throw new Error('Failed to fetch misc info');
return await response.json();
} catch (error) {
console.error('Error fetching misc info:', error);
throw error;
}
}
async getVaeRoots() {
try {
const response = await fetch(this.apiConfig.endpoints.specific.vae_roots, { method: 'GET' });
if (!response.ok) throw new Error('Failed to fetch VAE roots');
return await response.json();
} catch (error) {
console.error('Error fetching VAE roots:', error);
throw error;
}
}
async getUpscalerRoots() {
try {
const response = await fetch(this.apiConfig.endpoints.specific.upscaler_roots, { method: 'GET' });
if (!response.ok) throw new Error('Failed to fetch upscaler roots');
return await response.json();
} catch (error) {
console.error('Error fetching upscaler roots:', error);
throw error;
}
}
}

View File

@@ -1,6 +1,7 @@
import { LoraApiClient } from './loraApi.js';
import { CheckpointApiClient } from './checkpointApi.js';
import { EmbeddingApiClient } from './embeddingApi.js';
import { MiscApiClient } from './miscApi.js';
import { MODEL_TYPES, isValidModelType } from './apiConfig.js';
import { state } from '../state/index.js';
@@ -12,6 +13,8 @@ export function createModelApiClient(modelType) {
return new CheckpointApiClient(MODEL_TYPES.CHECKPOINT);
case MODEL_TYPES.EMBEDDING:
return new EmbeddingApiClient(MODEL_TYPES.EMBEDDING);
case MODEL_TYPES.MISC:
return new MiscApiClient(MODEL_TYPES.MISC);
default:
throw new Error(`Unsupported model type: ${modelType}`);
}

View File

@@ -103,6 +103,19 @@ export async function fetchRecipesPage(page = 1, pageSize = 100) {
// Add base model filters
if (pageState.filters?.baseModel && pageState.filters.baseModel.length) {
// Check for empty wildcard marker - if present, no models should match
const EMPTY_WILDCARD_MARKER = '__EMPTY_WILDCARD_RESULT__';
if (pageState.filters.baseModel.length === 1 &&
pageState.filters.baseModel[0] === EMPTY_WILDCARD_MARKER) {
// Wildcard resolved to no matches - return empty results
return {
items: [],
totalItems: 0,
totalPages: 0,
currentPage: page,
hasMore: false
};
}
params.append('base_models', pageState.filters.baseModel.join(','));
}

View File

@@ -26,7 +26,7 @@ export class CheckpointContextMenu extends BaseContextMenu {
// Update the "Move to other root" label based on current model type
const moveOtherItem = this.menu.querySelector('[data-action="move-other"]');
if (moveOtherItem) {
const currentType = card.dataset.model_type || 'checkpoint';
const currentType = card.dataset.sub_type || 'checkpoint';
const otherType = currentType === 'checkpoint' ? 'diffusion_model' : 'checkpoint';
const typeLabel = i18n.t(`checkpoints.modelTypes.${otherType}`);
moveOtherItem.innerHTML = `<i class="fas fa-exchange-alt"></i> ${i18n.t('checkpoints.contextMenu.moveToOtherTypeFolder', { otherType: typeLabel })}`;
@@ -65,11 +65,11 @@ export class CheckpointContextMenu extends BaseContextMenu {
apiClient.refreshSingleModelMetadata(this.currentCard.dataset.filepath);
break;
case 'move':
moveManager.showMoveModal(this.currentCard.dataset.filepath, this.currentCard.dataset.model_type);
moveManager.showMoveModal(this.currentCard.dataset.filepath, this.currentCard.dataset.sub_type);
break;
case 'move-other':
{
const currentType = this.currentCard.dataset.model_type || 'checkpoint';
const currentType = this.currentCard.dataset.sub_type || 'checkpoint';
const otherType = currentType === 'checkpoint' ? 'diffusion_model' : 'checkpoint';
moveManager.showMoveModal(this.currentCard.dataset.filepath, otherType);
}
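The same checkpoint/diffusion-model toggle appears twice above (in `showMenu` and in the `move-other` action) and reduces to a one-line helper; a sketch, with the helper name being illustrative only:

```javascript
// Illustrative helper: the "other" root is whichever of the two checkpoint
// sub-types the card is not currently in. Any unknown value falls through
// to 'checkpoint' semantics, matching the `|| 'checkpoint'` default above.
function getOtherCheckpointType(currentType = 'checkpoint') {
  return currentType === 'checkpoint' ? 'diffusion_model' : 'checkpoint';
}
```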

View File

@@ -0,0 +1,85 @@
import { BaseContextMenu } from './BaseContextMenu.js';
import { ModelContextMenuMixin } from './ModelContextMenuMixin.js';
import { getModelApiClient, resetAndReload } from '../../api/modelApiFactory.js';
import { showDeleteModal, showExcludeModal } from '../../utils/modalUtils.js';
import { moveManager } from '../../managers/MoveManager.js';
import { i18n } from '../../i18n/index.js';
export class MiscContextMenu extends BaseContextMenu {
constructor() {
super('miscContextMenu', '.model-card');
this.nsfwSelector = document.getElementById('nsfwLevelSelector');
this.modelType = 'misc';
this.resetAndReload = resetAndReload;
this.initNSFWSelector();
}
// Implementation needed by the mixin
async saveModelMetadata(filePath, data) {
return getModelApiClient().saveModelMetadata(filePath, data);
}
showMenu(x, y, card) {
super.showMenu(x, y, card);
// Update the "Move to other root" label based on current model type
const moveOtherItem = this.menu.querySelector('[data-action="move-other"]');
if (moveOtherItem) {
const currentType = card.dataset.sub_type || 'vae';
const otherType = currentType === 'vae' ? 'upscaler' : 'vae';
const typeLabel = i18n.t(`misc.modelTypes.${otherType}`);
moveOtherItem.innerHTML = `<i class="fas fa-exchange-alt"></i> ${i18n.t('misc.contextMenu.moveToOtherTypeFolder', { otherType: typeLabel })}`;
}
}
handleMenuAction(action) {
// First try to handle with common actions
if (ModelContextMenuMixin.handleCommonMenuActions.call(this, action)) {
return;
}
const apiClient = getModelApiClient();
// Otherwise handle misc-specific actions
switch (action) {
case 'details':
// Show misc details
this.currentCard.click();
break;
case 'replace-preview':
// Add new action for replacing preview images
apiClient.replaceModelPreview(this.currentCard.dataset.filepath);
break;
case 'delete':
showDeleteModal(this.currentCard.dataset.filepath);
break;
case 'copyname':
// Copy misc model name
if (this.currentCard.querySelector('.fa-copy')) {
this.currentCard.querySelector('.fa-copy').click();
}
break;
case 'refresh-metadata':
// Refresh metadata from CivitAI
apiClient.refreshSingleModelMetadata(this.currentCard.dataset.filepath);
break;
case 'move':
moveManager.showMoveModal(this.currentCard.dataset.filepath, this.currentCard.dataset.sub_type);
break;
case 'move-other':
{
const currentType = this.currentCard.dataset.sub_type || 'vae';
const otherType = currentType === 'vae' ? 'upscaler' : 'vae';
moveManager.showMoveModal(this.currentCard.dataset.filepath, otherType);
}
break;
case 'exclude':
showExcludeModal(this.currentCard.dataset.filepath);
break;
}
}
}
// Mix in shared methods
Object.assign(MiscContextMenu.prototype, ModelContextMenuMixin);

View File

@@ -2,6 +2,7 @@ export { LoraContextMenu } from './LoraContextMenu.js';
export { RecipeContextMenu } from './RecipeContextMenu.js';
export { CheckpointContextMenu } from './CheckpointContextMenu.js';
export { EmbeddingContextMenu } from './EmbeddingContextMenu.js';
export { MiscContextMenu } from './MiscContextMenu.js';
export { GlobalContextMenu } from './GlobalContextMenu.js';
export { ModelContextMenuMixin } from './ModelContextMenuMixin.js';
@@ -9,6 +10,7 @@ import { LoraContextMenu } from './LoraContextMenu.js';
import { RecipeContextMenu } from './RecipeContextMenu.js';
import { CheckpointContextMenu } from './CheckpointContextMenu.js';
import { EmbeddingContextMenu } from './EmbeddingContextMenu.js';
import { MiscContextMenu } from './MiscContextMenu.js';
import { GlobalContextMenu } from './GlobalContextMenu.js';
// Factory method to create page-specific context menu instances
@@ -22,6 +24,8 @@ export function createPageContextMenu(pageType) {
return new CheckpointContextMenu();
case 'embeddings':
return new EmbeddingContextMenu();
case 'misc':
return new MiscContextMenu();
default:
return null;
}

View File

@@ -32,6 +32,7 @@ export class HeaderManager {
if (path.includes('/checkpoints')) return 'checkpoints';
if (path.includes('/embeddings')) return 'embeddings';
if (path.includes('/statistics')) return 'statistics';
if (path.includes('/misc')) return 'misc';
if (path.includes('/loras')) return 'loras';
return 'unknown';
}

View File

@@ -1075,7 +1075,7 @@ class RecipeModal {
const checkpointName = checkpoint.name || checkpoint.modelName || checkpoint.file_name || 'Checkpoint';
const versionLabel = checkpoint.version || checkpoint.modelVersionName || '';
const baseModel = checkpoint.baseModel || checkpoint.base_model || '';
const modelTypeRaw = (checkpoint.model_type || checkpoint.type || 'checkpoint').toLowerCase();
const modelTypeRaw = (checkpoint.sub_type || checkpoint.type || 'checkpoint').toLowerCase();
const modelTypeLabel = modelTypeRaw === 'diffusion_model' ? 'Diffusion Model' : 'Checkpoint';
const previewMedia = isPreviewVideo ? `
@@ -1172,7 +1172,7 @@ class RecipeModal {
return;
}
const modelType = (checkpoint.model_type || checkpoint.type || 'checkpoint').toLowerCase();
const modelType = (checkpoint.sub_type || checkpoint.type || 'checkpoint').toLowerCase();
const isDiffusionModel = modelType === 'diffusion_model' || modelType === 'unet';
const widgetName = isDiffusionModel ? 'unet_name' : 'ckpt_name';

View File

@@ -0,0 +1,119 @@
// MiscControls.js - Specific implementation for the Misc (VAE/Upscaler) page
import { PageControls } from './PageControls.js';
import { getModelApiClient, resetAndReload } from '../../api/modelApiFactory.js';
import { getSessionItem, removeSessionItem } from '../../utils/storageHelpers.js';
import { downloadManager } from '../../managers/DownloadManager.js';
/**
* MiscControls class - Extends PageControls for Misc-specific functionality
*/
export class MiscControls extends PageControls {
constructor() {
// Initialize with 'misc' page type
super('misc');
// Register API methods specific to the Misc page
this.registerMiscAPI();
// Check for custom filters (e.g., from recipe navigation)
this.checkCustomFilters();
}
/**
* Register Misc-specific API methods
*/
registerMiscAPI() {
const miscAPI = {
// Core API functions
loadMoreModels: async (resetPage = false, updateFolders = false) => {
return await getModelApiClient().loadMoreWithVirtualScroll(resetPage, updateFolders);
},
resetAndReload: async (updateFolders = false) => {
return await resetAndReload(updateFolders);
},
refreshModels: async (fullRebuild = false) => {
return await getModelApiClient().refreshModels(fullRebuild);
},
// Add fetch from Civitai functionality for misc models
fetchFromCivitai: async () => {
return await getModelApiClient().fetchCivitaiMetadata();
},
// Add show download modal functionality
showDownloadModal: () => {
downloadManager.showDownloadModal();
},
toggleBulkMode: () => {
if (window.bulkManager) {
window.bulkManager.toggleBulkMode();
} else {
console.error('Bulk manager not available');
}
},
clearCustomFilter: async () => {
await this.clearCustomFilter();
}
};
// Register the API
this.registerAPI(miscAPI);
}
/**
* Check for custom filters sent from other pages (e.g., recipe modal)
*/
checkCustomFilters() {
const filterMiscHash = getSessionItem('recipe_to_misc_filterHash');
const filterRecipeName = getSessionItem('filterMiscRecipeName');
if (filterMiscHash && filterRecipeName) {
const indicator = document.getElementById('customFilterIndicator');
const filterText = indicator?.querySelector('.customFilterText');
if (indicator && filterText) {
indicator.classList.remove('hidden');
const displayText = `Viewing misc model from: ${filterRecipeName}`;
filterText.textContent = this._truncateText(displayText, 30);
filterText.setAttribute('title', displayText);
const filterElement = indicator.querySelector('.filter-active');
if (filterElement) {
filterElement.classList.add('animate');
setTimeout(() => filterElement.classList.remove('animate'), 600);
}
}
}
}
/**
* Clear misc custom filter and reload
*/
async clearCustomFilter() {
removeSessionItem('recipe_to_misc_filterHash');
removeSessionItem('recipe_to_misc_filterHashes');
removeSessionItem('filterMiscRecipeName');
const indicator = document.getElementById('customFilterIndicator');
if (indicator) {
indicator.classList.add('hidden');
}
await resetAndReload();
}
/**
* Helper to truncate text with ellipsis
* @param {string} text
* @param {number} maxLength
* @returns {string}
*/
_truncateText(text, maxLength) {
return text.length > maxLength ? `${text.substring(0, maxLength - 3)}...` : text;
}
}
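`_truncateText` above reserves three characters for the literal `...`, so a truncated result is exactly `maxLength` characters long. A standalone copy showing the edge cases:

```javascript
// Standalone copy of the truncation helper above: text at or under the
// limit is returned unchanged; longer text is cut to maxLength - 3
// characters plus "...", yielding exactly maxLength characters.
function truncateText(text, maxLength) {
  return text.length > maxLength ? `${text.substring(0, maxLength - 3)}...` : text;
}
```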

View File

@@ -3,13 +3,14 @@ import { PageControls } from './PageControls.js';
import { LorasControls } from './LorasControls.js';
import { CheckpointsControls } from './CheckpointsControls.js';
import { EmbeddingsControls } from './EmbeddingsControls.js';
import { MiscControls } from './MiscControls.js';
// Export the classes
export { PageControls, LorasControls, CheckpointsControls, EmbeddingsControls };
export { PageControls, LorasControls, CheckpointsControls, EmbeddingsControls, MiscControls };
/**
* Factory function to create the appropriate controls based on page type
* @param {string} pageType - The type of page ('loras', 'checkpoints', or 'embeddings')
* @param {string} pageType - The type of page ('loras', 'checkpoints', 'embeddings', or 'misc')
* @returns {PageControls} - The appropriate controls instance
*/
export function createPageControls(pageType) {
@@ -19,6 +20,8 @@ export function createPageControls(pageType) {
return new CheckpointsControls();
} else if (pageType === 'embeddings') {
return new EmbeddingsControls();
} else if (pageType === 'misc') {
return new MiscControls();
} else {
console.error(`Unknown page type: ${pageType}`);
return null;

View File

@@ -4,7 +4,7 @@ import { showModelModal } from './ModelModal.js';
import { toggleShowcase } from './showcase/ShowcaseView.js';
import { bulkManager } from '../../managers/BulkManager.js';
import { modalManager } from '../../managers/ModalManager.js';
import { NSFW_LEVELS, getBaseModelAbbreviation } from '../../utils/constants.js';
import { NSFW_LEVELS, getBaseModelAbbreviation, getSubTypeAbbreviation, MODEL_SUBTYPE_DISPLAY_NAMES } from '../../utils/constants.js';
import { MODEL_TYPES } from '../../api/apiConfig.js';
import { getModelApiClient } from '../../api/modelApiFactory.js';
import { showDeleteModal } from '../../utils/modalUtils.js';
@@ -176,7 +176,7 @@ function handleSendToWorkflow(card, replaceMode, modelType) {
return;
}
const subtype = (card.dataset.model_type || 'checkpoint').toLowerCase();
const subtype = (card.dataset.sub_type || 'checkpoint').toLowerCase();
const isDiffusionModel = subtype === 'diffusion_model';
const widgetName = isDiffusionModel ? 'unet_name' : 'ckpt_name';
const actionTypeText = translate(
@@ -214,6 +214,52 @@ function handleSendToWorkflow(card, replaceMode, modelType) {
missingNodesMessage,
missingTargetMessage,
});
} else if (modelType === MODEL_TYPES.MISC) {
const modelPath = card.dataset.filepath;
if (!modelPath) {
const message = translate('modelCard.sendToWorkflow.missingPath', {}, 'Unable to determine model path for this card');
showToast(message, {}, 'error');
return;
}
const subtype = (card.dataset.sub_type || 'vae').toLowerCase();
const isVae = subtype === 'vae';
const widgetName = isVae ? 'vae_name' : 'model_name';
const actionTypeText = translate(
isVae ? 'uiHelpers.nodeSelector.vae' : 'uiHelpers.nodeSelector.upscaler',
{},
isVae ? 'VAE' : 'Upscaler'
);
const successMessage = translate(
isVae ? 'uiHelpers.workflow.vaeUpdated' : 'uiHelpers.workflow.upscalerUpdated',
{},
isVae ? 'VAE updated in workflow' : 'Upscaler updated in workflow'
);
const failureMessage = translate(
isVae ? 'uiHelpers.workflow.vaeFailed' : 'uiHelpers.workflow.upscalerFailed',
{},
isVae ? 'Failed to update VAE node' : 'Failed to update upscaler node'
);
const missingNodesMessage = translate(
'uiHelpers.workflow.noMatchingNodes',
{},
'No compatible nodes available in the current workflow'
);
const missingTargetMessage = translate(
'uiHelpers.workflow.noTargetNodeSelected',
{},
'No target node selected'
);
sendModelPathToWorkflow(modelPath, {
widgetName,
collectionType: MODEL_TYPES.MISC,
actionTypeText,
successMessage,
failureMessage,
missingNodesMessage,
missingTargetMessage,
});
} else {
showToast('modelCard.sendToWorkflow.checkpointNotImplemented', {}, 'info');
}
@@ -230,6 +276,10 @@ function handleCopyAction(card, modelType) {
} else if (modelType === MODEL_TYPES.EMBEDDING) {
const embeddingName = card.dataset.file_name;
copyToClipboard(embeddingName, 'Embedding name copied');
} else if (modelType === MODEL_TYPES.MISC) {
const miscName = card.dataset.file_name;
const message = translate('modelCard.actions.miscNameCopied', {}, 'Model name copied');
copyToClipboard(miscName, message);
}
}
@@ -453,9 +503,9 @@ export function createModelCard(model, modelType) {
card.dataset.usage_tips = model.usage_tips;
}
// checkpoint specific data
if (modelType === MODEL_TYPES.CHECKPOINT) {
card.dataset.model_type = model.model_type; // checkpoint or diffusion_model
// Set sub_type for all model types (lora/locon/dora, checkpoint/diffusion_model, embedding)
if (model.sub_type) {
card.dataset.sub_type = model.sub_type;
}
// Store metadata if available
@@ -580,6 +630,11 @@ export function createModelCard(model, modelType) {
const baseModelLabel = model.base_model || 'Unknown';
const baseModelAbbreviation = getBaseModelAbbreviation(baseModelLabel);
// Sub-type display (e.g., LoRA, LyCO, DoRA, CKPT, DM, EMB)
const subType = model.sub_type || '';
const subTypeAbbreviation = getSubTypeAbbreviation(subType);
const fullSubTypeName = MODEL_SUBTYPE_DISPLAY_NAMES[subType?.toLowerCase()] || subType || '';
card.innerHTML = `
<div class="card-preview ${shouldBlur ? 'blurred' : ''}">
${isVideo ?
@@ -592,12 +647,15 @@ export function createModelCard(model, modelType) {
<i class="fas fa-eye"></i>
</button>` : ''}
<div class="card-header-info">
<span class="base-model-label ${shouldBlur ? 'with-toggle' : ''}" title="${baseModelLabel}">
${baseModelAbbreviation}
<span class="base-model-label ${shouldBlur ? 'with-toggle' : ''}"
title="${fullSubTypeName ? fullSubTypeName + ' | ' : ''}${baseModelLabel}">
${subTypeAbbreviation ? `<span class="model-sub-type">${subTypeAbbreviation}</span>` : ''}
${subTypeAbbreviation ? `<span class="model-separator"></span>` : ''}
<span class="model-base-type">${baseModelAbbreviation}</span>
</span>
${hasUpdateAvailable ? `
<span class="model-update-badge" title="${updateBadgeTooltip}">
${updateBadgeLabel}
<i class="fas fa-arrow-up"></i>
</span>
` : ''}
</div>
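The helpers imported for the label above are not shown in this diff. A hypothetical sketch of their shape — the abbreviations follow the examples named in the hunk's comment (LoRA, LyCO, DoRA, CKPT, DM, EMB), but both lookup tables are assumptions and may differ from the real `utils/constants.js`:

```javascript
// Assumed lookup tables; the real definitions live in utils/constants.js.
const MODEL_SUBTYPE_DISPLAY_NAMES = {
  lora: 'LoRA', locon: 'LyCORIS', dora: 'DoRA',
  checkpoint: 'Checkpoint', diffusion_model: 'Diffusion Model', embedding: 'Embedding',
};
const SUBTYPE_ABBREVIATIONS = {
  lora: 'LoRA', locon: 'LyCO', dora: 'DoRA',
  checkpoint: 'CKPT', diffusion_model: 'DM', embedding: 'EMB',
};

function getSubTypeAbbreviation(subType) {
  return SUBTYPE_ABBREVIATIONS[(subType || '').toLowerCase()] || '';
}

// Mirrors the title attribute built above: "<full sub-type> | <base model>",
// degrading to just the base model when the sub-type is unknown or empty.
function buildLabelTitle(subType, baseModelLabel) {
  const full = MODEL_SUBTYPE_DISPLAY_NAMES[(subType || '').toLowerCase()] || subType || '';
  return `${full ? full + ' | ' : ''}${baseModelLabel}`;
}
```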

View File

@@ -382,6 +382,19 @@ export function setupTriggerWordsEditMode() {
this.value = ''; // Clear input after adding
}
});
// Auto-commit on blur to prevent data loss when clicking save
triggerWordInput.addEventListener('blur', function () {
if (this.value.trim()) {
// Small delay to avoid conflict with save button click
setTimeout(() => {
if (document.contains(this) && this.value.trim()) {
addNewTriggerWord(this.value.trim());
this.value = '';
}
}, 150);
}
});
}
// Set up save button
@@ -619,6 +632,14 @@ async function saveTriggerWords() {
const editBtn = document.querySelector('.edit-trigger-words-btn');
const filePath = editBtn.dataset.filePath;
const triggerWordsSection = editBtn.closest('.trigger-words');
// Auto-commit any pending input to prevent data loss
const input = triggerWordsSection.querySelector('.metadata-input');
if (input && input.value.trim()) {
addNewTriggerWord(input.value.trim());
input.value = '';
}
const triggerWordTags = triggerWordsSection.querySelectorAll('.trigger-word-tag');
const words = Array.from(triggerWordTags).map(tag => tag.dataset.word);
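The save-time auto-commit above can be reasoned about as a pure merge of the already-committed tags plus any text still sitting in the input box. A DOM-free sketch (the function name is illustrative):

```javascript
// DOM-free sketch of the auto-commit: any non-empty pending input is
// committed alongside the already-tagged words before saving, so text left
// in the box when "save" is clicked is not silently lost. Duplicates are
// skipped, matching the behavior of a tag list.
function collectTriggerWords(committedWords, pendingInput) {
  const words = [...committedWords];
  const pending = (pendingInput || '').trim();
  if (pending && !words.includes(pending)) {
    words.push(pending);
  }
  return words;
}
```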

View File

@@ -4,6 +4,8 @@
*/
import { showToast } from '../../../utils/uiHelpers.js';
import { state } from '../../../state/index.js';
import { modalManager } from '../../../managers/ModalManager.js';
import { translate } from '../../../utils/i18nHelpers.js';
import { NSFW_LEVELS } from '../../../utils/constants.js';
import {
initLazyLoading,
@@ -275,6 +277,40 @@ function findLocalFile(img, index, exampleFiles) {
* @returns {string} HTML content for import interface
*/
function renderImportInterface(isEmpty) {
// Check if example images path is configured
const exampleImagesPath = state.global.settings.example_images_path;
const isPathConfigured = exampleImagesPath && exampleImagesPath.trim() !== '';
// If path is not configured, show setup guidance
if (!isPathConfigured) {
const title = translate('uiHelpers.exampleImages.setupRequired', {}, 'Example Images Storage');
const description = translate('uiHelpers.exampleImages.setupDescription', {}, 'To add custom example images, you need to set a download location first.');
const usage = translate('uiHelpers.exampleImages.setupUsage', {}, 'This path is used for both downloaded and custom example images.');
const openSettings = translate('uiHelpers.exampleImages.openSettings', {}, 'Open Settings');
return `
<div class="example-import-area ${isEmpty ? 'empty' : ''}">
<div class="import-container import-container--needs-setup" id="exampleImportContainer">
<div class="import-setup-guidance">
<div class="setup-icon">
<i class="fas fa-folder-plus"></i>
</div>
<h3>${title}</h3>
<p class="setup-description">
${description}
</p>
<p class="setup-usage">
${usage}
</p>
<button class="select-files-btn setup-settings-btn" id="openExampleSettingsBtn">
<i class="fas fa-cog"></i> ${openSettings}
</button>
</div>
</div>
</div>
`;
}
return `
<div class="example-import-area ${isEmpty ? 'empty' : ''}">
<div class="import-container" id="exampleImportContainer">
@@ -300,6 +336,33 @@ function renderImportInterface(isEmpty) {
`;
}
/**
* Open settings modal and scroll to example images section
*/
function openSettingsForExampleImages() {
modalManager.showModal('settingsModal');
// Wait for modal to be visible, then scroll to example images section
setTimeout(() => {
const exampleImagesInput = document.getElementById('exampleImagesPath');
if (exampleImagesInput) {
// Find the parent settings-section
const section = exampleImagesInput.closest('.settings-section');
if (section) {
section.scrollIntoView({ behavior: 'smooth', block: 'center' });
// Add a brief highlight effect
section.style.transition = 'background-color 0.3s ease';
section.style.backgroundColor = 'rgba(66, 153, 225, 0.1)';
setTimeout(() => {
section.style.backgroundColor = '';
}, 1500);
}
// Focus the input
exampleImagesInput.focus();
}
}, 100);
}
/**
* Initialize the example import functionality
* @param {string} modelHash - The SHA256 hash of the model
@@ -311,6 +374,14 @@ export function initExampleImport(modelHash, container) {
const importContainer = container.querySelector('#exampleImportContainer');
const fileInput = container.querySelector('#exampleFilesInput');
const selectFilesBtn = container.querySelector('#selectExampleFilesBtn');
const openSettingsBtn = container.querySelector('#openExampleSettingsBtn');
// Set up "Open Settings" button for setup guidance state
if (openSettingsBtn) {
openSettingsBtn.addEventListener('click', () => {
openSettingsForExampleImages();
});
}
// Set up file selection button
if (selectFilesBtn) {

View File

@@ -46,7 +46,7 @@ export class AppCore {
state.loadingManager = new LoadingManager();
modalManager.initialize();
updateService.initialize();
bannerService.initialize();
await bannerService.initialize();
window.modalManager = modalManager;
window.settingsManager = settingsManager;
const exampleImagesManager = new ExampleImagesManager();
@@ -81,8 +81,8 @@ export class AppCore {
this.initialized = true;
// Start onboarding if needed (after everything is initialized)
setTimeout(() => {
onboardingManager.start();
setTimeout(async () => {
await onboardingManager.start();
}, 1000); // Small delay to ensure all elements are rendered
// Return the core instance for chaining
@@ -99,7 +99,7 @@ export class AppCore {
initializePageFeatures() {
const pageType = this.getPageType();
if (['loras', 'recipes', 'checkpoints', 'embeddings'].includes(pageType)) {
if (['loras', 'recipes', 'checkpoints', 'embeddings', 'misc'].includes(pageType)) {
this.initializeContextMenus(pageType);
initializeInfiniteScroll(pageType);
}

View File

@@ -36,7 +36,7 @@ class BannerService {
/**
* Initialize the banner service
*/
initialize() {
async initialize() {
if (this.initialized) return;
this.container = document.getElementById('banner-container');
@@ -45,6 +45,9 @@ class BannerService {
return;
}
// Load dismissed banners from backend first (for persistence across browser modes)
await this.loadDismissedBannersFromBackend();
// Register default banners
this.registerBanner('civitai-extension', {
id: 'civitai-extension',
@@ -76,10 +79,36 @@ class BannerService {
this.prepareCommunitySupportBanner();
this.showActiveBanners();
await this.showActiveBanners();
this.initialized = true;
}
/**
* Load dismissed banners from backend settings
* Falls back to localStorage if backend is unavailable
*/
async loadDismissedBannersFromBackend() {
try {
const response = await fetch('/api/lm/settings');
const data = await response.json();
if (data.success && data.settings && Array.isArray(data.settings.dismissed_banners)) {
// Merge backend dismissed banners with localStorage
const backendDismissed = data.settings.dismissed_banners;
const localDismissed = getStorageItem('dismissed_banners', []);
// Use Set to get unique banner IDs
const mergedDismissed = [...new Set([...backendDismissed, ...localDismissed])];
// Save merged list to localStorage as cache
if (mergedDismissed.length > 0) {
setStorageItem('dismissed_banners', mergedDismissed);
}
}
} catch (e) {
console.debug('Failed to fetch dismissed banners from backend, using localStorage:', e);
}
}
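The union performed above is order-preserving, so backend entries come first in the cached list. A minimal sketch of just the merge step:

```javascript
// Sketch of the merge above: backend and localStorage dismissal lists are
// unioned via a Set, so a banner dismissed in either store stays dismissed.
// Set iteration preserves first-insertion order.
function mergeDismissedBanners(backendDismissed, localDismissed) {
  return [...new Set([...backendDismissed, ...localDismissed])];
}
```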
/**
* Register a new banner
* @param {string} id - Unique banner ID
@@ -101,6 +130,7 @@ class BannerService {
* @returns {boolean}
*/
isBannerDismissed(bannerId) {
// Check localStorage (which is synced with backend on load)
const dismissedBanners = getStorageItem('dismissed_banners', []);
return dismissedBanners.includes(bannerId);
}
@@ -109,13 +139,16 @@ class BannerService {
* Dismiss a banner
* @param {string} bannerId - Banner ID
*/
dismissBanner(bannerId) {
async dismissBanner(bannerId) {
const dismissedBanners = getStorageItem('dismissed_banners', []);
const bannerAlreadyDismissed = dismissedBanners.includes(bannerId);
if (!bannerAlreadyDismissed) {
dismissedBanners.push(bannerId);
setStorageItem('dismissed_banners', dismissedBanners);
// Save to backend for persistence (survives incognito/private mode)
await this.saveDismissedBannersToBackend(dismissedBanners);
}
// Remove banner from DOM
@@ -139,10 +172,26 @@ class BannerService {
}
}
/**
* Save dismissed banners to backend settings
* @param {string[]} dismissedBanners - Array of dismissed banner IDs
*/
async saveDismissedBannersToBackend(dismissedBanners) {
try {
await fetch('/api/lm/settings', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ dismissed_banners: dismissedBanners })
});
} catch (e) {
console.error('Failed to save dismissed banners to backend:', e);
}
}
/**
* Show all active (non-dismissed) banners
*/
showActiveBanners() {
async showActiveBanners() {
if (!this.container) return;
const activeBanners = Array.from(this.banners.values())
@@ -177,7 +226,7 @@ class BannerService {
}).join('') : '';
const dismissButtonHtml = banner.dismissible ?
`<button class="banner-dismiss" onclick="bannerService.dismissBanner('${banner.id}')" title="Dismiss">
`<button class="banner-dismiss" onclick="bannerService.dismissBanner('${banner.id}').catch(console.error)" title="Dismiss">
<i class="fas fa-times"></i>
</button>` : '';
@@ -227,8 +276,20 @@ class BannerService {
/**
* Clear all dismissed banners (for testing/admin purposes)
*/
clearDismissedBanners() {
async clearDismissedBanners() {
setStorageItem('dismissed_banners', []);
// Also clear on backend
try {
await fetch('/api/lm/settings', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ dismissed_banners: [] })
});
} catch (e) {
console.error('Failed to clear dismissed banners on backend:', e);
}
location.reload();
}

View File

@@ -64,6 +64,17 @@ export class BulkManager {
deleteAll: true,
setContentRating: true
},
[MODEL_TYPES.MISC]: {
addTags: true,
sendToWorkflow: false,
copyAll: false,
refreshAll: true,
checkUpdates: true,
moveAll: true,
autoOrganize: true,
deleteAll: true,
setContentRating: true
},
recipes: {
addTags: false,
sendToWorkflow: false,

View File

@@ -4,6 +4,7 @@ import { getModelApiClient } from '../api/modelApiFactory.js';
import { removeStorageItem, setStorageItem, getStorageItem } from '../utils/storageHelpers.js';
import { MODEL_TYPE_DISPLAY_NAMES } from '../utils/constants.js';
import { translate } from '../utils/i18nHelpers.js';
import { FilterPresetManager, EMPTY_WILDCARD_MARKER } from './FilterPresetManager.js';
export class FilterManager {
constructor(options = {}) {
@@ -21,6 +22,12 @@ export class FilterManager {
this.activeFiltersCount = document.getElementById('activeFiltersCount');
this.tagsLoaded = false;
// Initialize preset manager
this.presetManager = new FilterPresetManager({
page: this.currentPage,
filterManager: this
});
this.initialize();
// Store this instance in the state
@@ -30,6 +37,17 @@ export class FilterManager {
}
}
// Accessor for backward compatibility with activePreset
get activePreset() {
return this.presetManager?.activePreset ?? null;
}
set activePreset(value) {
if (this.presetManager) {
this.presetManager.activePreset = value;
}
}
initialize() {
// Create base model filter tags if they exist
if (document.getElementById('baseModelTags')) {
@@ -109,12 +127,35 @@ export class FilterManager {
return;
}
// Collect existing tag names from the API response
const existingTagNames = new Set(tags.map(t => t.tag));
// Add any active filter tags that aren't in the top 20
if (this.filters.tags) {
Object.keys(this.filters.tags).forEach(tagName => {
// Skip special tags like __no_tags__
if (tagName.startsWith('__')) return;
if (!existingTagNames.has(tagName)) {
// Add this tag to the list with count 0 (unknown)
tags.push({ tag: tagName, count: 0 });
existingTagNames.add(tagName);
}
});
}
tags.forEach(tag => {
const tagEl = document.createElement('div');
tagEl.className = 'filter-tag tag-filter';
const tagName = tag.tag;
tagEl.dataset.tag = tagName;
tagEl.innerHTML = `${tagName} <span class="tag-count">${tag.count}</span>`;
// Show count only if it's > 0 (known count)
if (tag.count > 0) {
tagEl.innerHTML = `${tagName} <span class="tag-count">${tag.count}</span>`;
} else {
tagEl.textContent = tagName;
}
// Add click handler to cycle through tri-state filter and automatically apply
tagEl.addEventListener('click', async () => {
@@ -376,6 +417,9 @@ export class FilterManager {
this.loadTopTags();
this.tagsLoaded = true;
}
// Render presets
this.presetManager.renderPresets();
} else {
this.closeFilterPanel();
}
@@ -434,7 +478,9 @@ export class FilterManager {
const tagFilterCount = this.filters.tags ? Object.keys(this.filters.tags).length : 0;
const licenseFilterCount = this.filters.license ? Object.keys(this.filters.license).length : 0;
const modelTypeFilterCount = this.filters.modelTypes.length;
const totalActiveFilters = this.filters.baseModel.length + tagFilterCount + licenseFilterCount + modelTypeFilterCount;
// Exclude EMPTY_WILDCARD_MARKER from base model count
const baseModelCount = this.filters.baseModel.filter(m => m !== EMPTY_WILDCARD_MARKER).length;
const totalActiveFilters = baseModelCount + tagFilterCount + licenseFilterCount + modelTypeFilterCount;
if (this.activeFiltersCount) {
if (totalActiveFilters > 0) {
@@ -444,18 +490,32 @@ export class FilterManager {
this.activeFiltersCount.style.display = 'none';
}
}
// Update add button state when filters change
if (this.presetManager) {
this.presetManager.updateAddButtonState();
}
}
async applyFilters(showToastNotification = true) {
async applyFilters(showToastNotification = true, isPresetApply = false) {
const pageState = getCurrentPageState();
const storageKey = `${this.currentPage}_filters`;
// Save filters to localStorage
// Save filters to localStorage (exclude EMPTY_WILDCARD_MARKER)
const filtersSnapshot = this.cloneFilters();
// Don't persist EMPTY_WILDCARD_MARKER - it's a runtime-only marker
filtersSnapshot.baseModel = filtersSnapshot.baseModel.filter(m => m !== EMPTY_WILDCARD_MARKER);
setStorageItem(storageKey, filtersSnapshot);
// Update state with current filters
pageState.filters = filtersSnapshot;
pageState.filters = this.cloneFilters();
// Deactivate preset if this is a manual filter change (not from applying a preset)
if (!isPresetApply && this.activePreset) {
this.activePreset = null;
this.presetManager.saveActivePreset(); // Persist the cleared state
this.presetManager.renderPresets(); // Re-render to remove active state
}
// Call the appropriate manager's load method based on page type
if (this.currentPage === 'recipes' && window.recipeManager) {
@@ -492,6 +552,10 @@ export class FilterManager {
}
async clearFilters() {
// Clear active preset
this.activePreset = null;
this.presetManager.saveActivePreset(); // Persist the cleared state
// Clear all filters
this.filters = this.initializeFilters({
...this.filters,
@@ -508,6 +572,7 @@ export class FilterManager {
// Update UI
this.updateTagSelections();
this.updateActiveFiltersCount();
this.presetManager.renderPresets(); // Re-render to remove active state
// Remove from local Storage
const storageKey = `${this.currentPage}_filters`;
@@ -553,14 +618,19 @@ export class FilterManager {
console.error(`Error loading ${this.currentPage} filters from storage:`, error);
}
}
// Restore active preset after loading filters
this.presetManager.restoreActivePreset();
}
hasActiveFilters() {
const tagCount = this.filters.tags ? Object.keys(this.filters.tags).length : 0;
const licenseCount = this.filters.license ? Object.keys(this.filters.license).length : 0;
const modelTypeCount = this.filters.modelTypes.length;
// Exclude EMPTY_WILDCARD_MARKER from base model count
const baseModelCount = this.filters.baseModel.filter(m => m !== EMPTY_WILDCARD_MARKER).length;
return (
this.filters.baseModel.length > 0 ||
baseModelCount > 0 ||
tagCount > 0 ||
licenseCount > 0 ||
modelTypeCount > 0
@@ -695,4 +765,9 @@ export class FilterManager {
element.classList.add('exclude');
}
}
// Preset management delegation methods for backward compatibility
hasEmptyWildcardResult() {
return this.presetManager?.hasEmptyWildcardResult() ?? false;
}
}
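The marker exclusion applied in both `updateActiveFiltersCount` and `hasActiveFilters` above can be captured in one helper; a sketch, with the helper name being illustrative:

```javascript
// Sketch of the counting rule above: the runtime-only EMPTY_WILDCARD_MARKER
// is excluded from the base-model count so it never surfaces to the user as
// an active filter, while real selections in every category are summed.
const EMPTY_WILDCARD_MARKER = '__EMPTY_WILDCARD_RESULT__';

function countActiveFilters(filters) {
  const baseModelCount = (filters.baseModel || []).filter(m => m !== EMPTY_WILDCARD_MARKER).length;
  const tagCount = filters.tags ? Object.keys(filters.tags).length : 0;
  const licenseCount = filters.license ? Object.keys(filters.license).length : 0;
  const modelTypeCount = (filters.modelTypes || []).length;
  return baseModelCount + tagCount + licenseCount + modelTypeCount;
}
```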

View File

@@ -0,0 +1,815 @@
import { showToast } from '../utils/uiHelpers.js';
import { removeStorageItem, setStorageItem, getStorageItem } from '../utils/storageHelpers.js';
import { translate } from '../utils/i18nHelpers.js';
import { state } from '../state/index.js';
// Constants for preset management
const PRESETS_STORAGE_VERSION = 'v1';
const MAX_PRESET_NAME_LENGTH = 30;
const MAX_PRESETS_COUNT = 10;
// Marker for when wildcard patterns resolve to no matches
// This ensures we return empty results instead of all models
export const EMPTY_WILDCARD_MARKER = '__EMPTY_WILDCARD_RESULT__';
// Timeout for two-step delete confirmation (ms)
const DELETE_CONFIRM_TIMEOUT = 3000;
export class FilterPresetManager {
constructor(options = {}) {
this.currentPage = options.page || 'loras';
this.filterManager = options.filterManager || null;
this.activePreset = null;
// Race condition fix: track pending preset applications
this.applyPresetAbortController = null;
this.applyPresetRequestId = 0;
// UI state for two-step delete
this.pendingDeletePreset = null;
this.pendingDeleteTimeout = null;
// UI state for inline naming
this.isInlineNamingActive = false;
// Cache for presets to avoid repeated settings lookups
this._presetsCache = null;
}
// Storage key methods (legacy - for migration only)
getPresetsStorageKey() {
return `${this.currentPage}_filter_presets_${PRESETS_STORAGE_VERSION}`;
}
getActivePresetStorageKey() {
return `${this.currentPage}_active_preset`;
}
/**
* Get settings key for filter presets based on current page
*/
getSettingsKey() {
return `filter_presets`;
}
/**
* Get the filter presets object from settings
* Returns an object with page keys (loras, checkpoints, embeddings) containing presets arrays
*/
getPresetsFromSettings() {
const settings = state?.global?.settings;
const presets = settings?.filter_presets;
if (presets && typeof presets === 'object') {
return presets;
}
return {};
}
/**
* Save filter presets to backend settings
*/
async savePresetsToBackend(allPresets) {
try {
const response = await fetch('/api/lm/settings', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ filter_presets: allPresets })
});
if (!response.ok) {
throw new Error('Failed to save presets to backend');
}
const data = await response.json();
if (data.success === false) {
throw new Error(data.error || 'Failed to save presets to backend');
}
// Update local cache
this._presetsCache = allPresets;
// Update local settings state
if (state?.global?.settings) {
state.global.settings.filter_presets = allPresets;
}
return true;
} catch (error) {
console.error('Error saving presets to backend:', error);
showToast('Failed to save presets to backend', {}, 'error');
return false;
}
}
/**
* Save active preset name to localStorage
* Note: This is UI state only, not persisted to backend
*/
saveActivePreset() {
const key = this.getActivePresetStorageKey();
if (this.activePreset) {
setStorageItem(key, this.activePreset);
} else {
removeStorageItem(key);
}
}
/**
* Restore active preset from localStorage
* Note: This is UI state only, not synced from backend
*/
restoreActivePreset() {
const key = this.getActivePresetStorageKey();
const savedPresetName = getStorageItem(key);
if (savedPresetName) {
// Verify the preset still exists
const presets = this.loadPresets();
const preset = presets.find(p => p.name === savedPresetName);
if (preset) {
this.activePreset = savedPresetName;
} else {
// Preset no longer exists, clear the saved value
this.activePreset = null;
this.saveActivePreset();
}
}
}
/**
* Migrate presets from localStorage to backend settings
*/
async migratePresetsFromLocalStorage() {
const legacyKey = this.getPresetsStorageKey();
const legacyPresets = getStorageItem(legacyKey);
if (!legacyPresets || !Array.isArray(legacyPresets) || legacyPresets.length === 0) {
return false;
}
// Check if we already have presets in backend for this page
const allPresets = this.getPresetsFromSettings();
if (allPresets[this.currentPage] && allPresets[this.currentPage].length > 0) {
// Already migrated, clear localStorage
removeStorageItem(legacyKey);
return false;
}
// Migrate to backend
const validPresets = legacyPresets.filter(preset => {
if (!preset || typeof preset !== 'object') return false;
if (!preset.name || typeof preset.name !== 'string') return false;
if (!preset.filters || typeof preset.filters !== 'object') return false;
return true;
});
if (validPresets.length > 0) {
allPresets[this.currentPage] = validPresets;
const success = await this.savePresetsToBackend(allPresets);
if (success) {
removeStorageItem(legacyKey);
console.log(`Migrated ${validPresets.length} presets from localStorage to backend`);
}
return success;
}
return false;
}
loadPresets() {
// Get presets from settings
const allPresets = this.getPresetsFromSettings();
let presets = allPresets[this.currentPage];
// Fallback to localStorage if no presets in settings (migration)
if (!presets) {
const legacyKey = this.getPresetsStorageKey();
presets = getStorageItem(legacyKey);
// Trigger async migration
if (presets && Array.isArray(presets) && presets.length > 0) {
this.migratePresetsFromLocalStorage();
}
}
if (!presets) {
return [];
}
if (!Array.isArray(presets)) {
console.warn('Invalid presets data format: expected array');
return [];
}
const validPresets = presets.filter(preset => {
if (!preset || typeof preset !== 'object') return false;
if (!preset.name || typeof preset.name !== 'string') return false;
if (!preset.filters || typeof preset.filters !== 'object') return false;
return true;
});
return validPresets;
}
/**
* Resolve base model patterns to actual available models
* Supports exact matches and wildcard patterns (ending with *)
*
* @param {Array} patterns - Array of base model patterns
* @param {AbortSignal} signal - Optional abort signal for cancellation
* @returns {Promise<Array>} Resolved base model names
*/
async resolveBaseModelPatterns(patterns, signal = null) {
if (!patterns || patterns.length === 0) return [];
const hasWildcards = patterns.some(p => p.endsWith('*'));
try {
const fetchOptions = signal ? { signal } : {};
const response = await fetch(`/api/lm/${this.currentPage}/base-models`, fetchOptions);
if (!response.ok) throw new Error('Failed to fetch base models');
const data = await response.json();
if (!data.success || !Array.isArray(data.base_models)) {
const nonWildcards = patterns.filter(p => !p.endsWith('*'));
if (hasWildcards && nonWildcards.length === 0) {
return [EMPTY_WILDCARD_MARKER];
}
return nonWildcards;
}
const availableModels = data.base_models.map(m => m.name);
const resolvedModels = [];
for (const pattern of patterns) {
if (pattern.endsWith('*')) {
const prefix = pattern.slice(0, -1);
const matches = availableModels.filter(model =>
model.startsWith(prefix)
);
resolvedModels.push(...matches);
} else {
if (availableModels.includes(pattern)) {
resolvedModels.push(pattern);
}
}
}
const uniqueModels = [...new Set(resolvedModels)];
if (hasWildcards && uniqueModels.length === 0) {
return [EMPTY_WILDCARD_MARKER];
}
return uniqueModels;
} catch (error) {
// Rethrow abort errors so they can be handled properly
if (error.name === 'AbortError') {
throw error;
}
console.warn('Error resolving base model patterns:', error);
const nonWildcards = patterns.filter(p => !p.endsWith('*'));
if (hasWildcards && nonWildcards.length === 0) {
return [EMPTY_WILDCARD_MARKER];
}
return nonWildcards;
}
}
/**
* Check if the base model filter represents an empty wildcard result
*/
hasEmptyWildcardResult() {
const filters = this.filterManager?.filters;
return filters?.baseModel?.length === 1 &&
filters.baseModel[0] === EMPTY_WILDCARD_MARKER;
}
async savePresets(presets) {
const allPresets = this.getPresetsFromSettings();
allPresets[this.currentPage] = presets;
await this.savePresetsToBackend(allPresets);
}
validatePresetName(name) {
if (!name || !name.trim()) {
return { valid: false, message: translate('toast.error.presetNameEmpty', {}, 'Preset name cannot be empty') };
}
const trimmedName = name.trim();
if (trimmedName.length > MAX_PRESET_NAME_LENGTH) {
return {
valid: false,
message: translate('toast.error.presetNameTooLong', { max: MAX_PRESET_NAME_LENGTH }, `Preset name must be ${MAX_PRESET_NAME_LENGTH} characters or less`)
};
}
const htmlSpecialChars = /[<>'&]/;
if (htmlSpecialChars.test(trimmedName)) {
return { valid: false, message: translate('toast.error.presetNameInvalidChars', {}, 'Preset name contains invalid characters') };
}
const controlChars = /[\x00-\x1F\x7F-\x9F]/;
if (controlChars.test(trimmedName)) {
return { valid: false, message: translate('toast.error.presetNameInvalidChars', {}, 'Preset name contains invalid characters') };
}
return { valid: true, name: trimmedName };
}
async createPreset(name, options = {}) {
const validation = this.validatePresetName(name);
if (!validation.valid) {
showToast(validation.message, {}, 'error');
return false;
}
const trimmedName = validation.name;
let presets = this.loadPresets();
const existingIndex = presets.findIndex(p => p.name.toLowerCase() === trimmedName.toLowerCase());
const isDuplicate = existingIndex !== -1;
if (isDuplicate) {
if (options.overwrite) {
presets[existingIndex] = {
name: trimmedName,
filters: this.filterManager.cloneFilters(),
createdAt: Date.now()
};
await this.savePresets(presets);
this.renderPresets();
showToast(
translate('toast.presets.overwritten', { name: trimmedName }, `Preset "${trimmedName}" overwritten`),
{},
'success'
);
return true;
} else {
const confirmMsg = translate('header.filter.presetOverwriteConfirm', { name: trimmedName }, `Preset "${trimmedName}" already exists. Overwrite?`);
if (confirm(confirmMsg)) {
return this.createPreset(name, { overwrite: true });
}
return false;
}
}
if (presets.length >= MAX_PRESETS_COUNT) {
showToast(
translate('toast.error.maxPresetsReached', { max: MAX_PRESETS_COUNT }, `Maximum ${MAX_PRESETS_COUNT} presets allowed. Delete one to add more.`),
{},
'error'
);
return false;
}
const preset = {
name: trimmedName,
filters: this.filterManager.cloneFilters(),
createdAt: Date.now()
};
presets.push(preset);
await this.savePresets(presets);
// Auto-activate the newly created preset
this.activePreset = trimmedName;
this.saveActivePreset();
this.renderPresets();
showToast(
translate('toast.presets.created', { name: trimmedName }, `Preset "${trimmedName}" created`),
{},
'success'
);
return true;
}
async deletePreset(name) {
try {
let presets = this.loadPresets();
const filtered = presets.filter(p => p.name !== name);
if (filtered.length === 0) {
const allPresets = this.getPresetsFromSettings();
delete allPresets[this.currentPage];
await this.savePresetsToBackend(allPresets);
} else {
await this.savePresets(filtered);
}
if (this.activePreset === name) {
this.activePreset = null;
this.saveActivePreset();
}
this.renderPresets();
showToast(
translate('toast.presets.deleted', { name }, `Preset "${name}" deleted`),
{},
'success'
);
} catch (error) {
console.error('Error deleting preset:', error);
showToast(translate('toast.error.deletePresetFailed', {}, 'Failed to delete preset'), {}, 'error');
}
}
/**
* Apply a preset with race condition protection
* Cancels any pending preset application before starting a new one
*/
async applyPreset(name) {
// Cancel any pending preset application
if (this.applyPresetAbortController) {
this.applyPresetAbortController.abort();
}
this.applyPresetAbortController = new AbortController();
const signal = this.applyPresetAbortController.signal;
const requestId = ++this.applyPresetRequestId;
try {
const presets = this.loadPresets();
const preset = presets.find(p => p.name === name);
if (!preset) {
showToast(translate('toast.error.presetNotFound', {}, 'Preset not found'), {}, 'error');
return;
}
if (!preset.filters || typeof preset.filters !== 'object') {
showToast(translate('toast.error.invalidPreset', {}, 'Invalid preset data'), {}, 'error');
return;
}
// Check if aborted before expensive operations
if (signal.aborted) return;
// Resolve base model patterns (supports wildcards for default presets)
const resolvedBaseModels = await this.resolveBaseModelPatterns(
preset.filters.baseModel,
signal
);
// Check if request is still valid (another preset may have been selected)
if (requestId !== this.applyPresetRequestId) return;
if (signal.aborted) return;
// Set active preset AFTER successful resolution
this.activePreset = name;
this.saveActivePreset();
// Apply the preset filters with resolved base models
this.filterManager.filters = this.filterManager.initializeFilters({
...preset.filters,
baseModel: resolvedBaseModels
});
// Update state
const { getCurrentPageState } = await import('../state/index.js');
const pageState = getCurrentPageState();
pageState.filters = this.filterManager.cloneFilters();
// If tags haven't been loaded yet, load them first
if (!this.filterManager.tagsLoaded) {
await this.filterManager.loadTopTags();
this.filterManager.tagsLoaded = true;
}
// Check again after async operation
if (requestId !== this.applyPresetRequestId) return;
// Update UI
this.filterManager.updateTagSelections();
this.filterManager.updateActiveFiltersCount();
this.renderPresets();
// Apply filters (pass true for isPresetApply so it doesn't clear activePreset)
await this.filterManager.applyFilters(false, true);
showToast(
translate('toast.presets.applied', { name }, `Preset "${name}" applied`),
{},
'success'
);
} catch (error) {
// Silently handle abort errors
if (error.name === 'AbortError') return;
console.error('Error applying preset:', error);
showToast(translate('toast.error.applyPresetFailed', {}, 'Failed to apply preset'), {}, 'error');
}
}
hasUserCreatedPresets() {
// Check in settings first
const allPresets = this.getPresetsFromSettings();
const presets = allPresets[this.currentPage];
if (presets && Array.isArray(presets) && presets.length > 0) {
return true;
}
// Fallback to localStorage
const presetsKey = this.getPresetsStorageKey();
const localPresets = getStorageItem(presetsKey);
return Array.isArray(localPresets) && localPresets.length > 0;
}
/**
* Check if the add button should be disabled
* Returns true if no filters are active OR a preset is already active
*/
shouldDisableAddButton() {
return !this.filterManager?.hasActiveFilters() || this.activePreset !== null;
}
/**
* Update the add button's disabled state
*/
updateAddButtonState() {
const addBtn = document.querySelector('.add-preset-btn');
if (!addBtn) return;
const shouldDisable = this.shouldDisableAddButton();
if (shouldDisable) {
addBtn.classList.add('disabled');
// Update tooltip to explain why it's disabled
if (this.activePreset) {
addBtn.title = translate('header.filter.savePresetDisabledActive', {}, 'Cannot save: a preset is already active. Clear filters to save a new preset.');
} else {
addBtn.title = translate('header.filter.savePresetDisabledNoFilters', {}, 'Select filters first to save as preset');
}
} else {
addBtn.classList.remove('disabled');
addBtn.title = translate('header.filter.savePreset', {}, 'Save current filters as a new preset');
}
}
/**
* Initiate two-step delete process
*/
initiateDelete(presetName, deleteBtn) {
// If already pending for this preset, execute the delete
if (this.pendingDeletePreset === presetName) {
this.cancelPendingDelete();
this.deletePreset(presetName);
return;
}
// Cancel any previous pending delete
this.cancelPendingDelete();
// Set up new pending delete
this.pendingDeletePreset = presetName;
deleteBtn.classList.add('confirm');
deleteBtn.innerHTML = '<i class="fas fa-check"></i>';
deleteBtn.title = translate('header.filter.presetDeleteConfirmClick', {}, 'Click again to confirm');
// Auto-cancel after timeout
this.pendingDeleteTimeout = setTimeout(() => {
this.cancelPendingDelete();
}, DELETE_CONFIRM_TIMEOUT);
}
/**
* Cancel pending delete operation
*/
cancelPendingDelete() {
if (this.pendingDeleteTimeout) {
clearTimeout(this.pendingDeleteTimeout);
this.pendingDeleteTimeout = null;
}
if (this.pendingDeletePreset) {
// Reset all delete buttons to normal state
const deleteBtns = document.querySelectorAll('.preset-delete-btn.confirm');
deleteBtns.forEach(btn => {
btn.classList.remove('confirm');
btn.innerHTML = '<i class="fas fa-times"></i>';
btn.title = translate('header.filter.presetDeleteTooltip', {}, 'Delete preset');
});
this.pendingDeletePreset = null;
}
}
/**
* Show inline input for preset naming
*/
showInlineNamingInput() {
if (this.isInlineNamingActive) return;
// Check if there are any active filters
if (!this.filterManager?.hasActiveFilters()) {
showToast(translate('toast.filters.noActiveFilters', {}, 'No active filters to save'), {}, 'info');
return;
}
// Check max presets limit before showing input
const presets = this.loadPresets();
if (presets.length >= MAX_PRESETS_COUNT) {
showToast(
translate('toast.error.maxPresetsReached', { max: MAX_PRESETS_COUNT }, `Maximum ${MAX_PRESETS_COUNT} presets allowed. Delete one to add more.`),
{},
'error'
);
return;
}
this.isInlineNamingActive = true;
const presetsContainer = document.getElementById('filterPresets');
if (!presetsContainer) return;
// Find the add button and hide it
const addBtn = presetsContainer.querySelector('.add-preset-btn');
if (addBtn) {
addBtn.style.display = 'none';
}
// Create inline input container
const inputContainer = document.createElement('div');
inputContainer.className = 'preset-inline-input-container';
inputContainer.innerHTML = `
<input type="text"
class="preset-inline-input"
placeholder="${translate('header.filter.presetNamePlaceholder', {}, 'Preset name...')}"
maxlength="${MAX_PRESET_NAME_LENGTH}">
<button class="preset-inline-btn save" title="${translate('common.actions.save', {}, 'Save')}">
<i class="fas fa-check"></i>
</button>
<button class="preset-inline-btn cancel" title="${translate('common.actions.cancel', {}, 'Cancel')}">
<i class="fas fa-times"></i>
</button>
`;
presetsContainer.appendChild(inputContainer);
const input = inputContainer.querySelector('.preset-inline-input');
const saveBtn = inputContainer.querySelector('.preset-inline-btn.save');
const cancelBtn = inputContainer.querySelector('.preset-inline-btn.cancel');
// Focus input
input.focus();
// Handle save
const handleSave = async () => {
const name = input.value;
if (await this.createPreset(name)) {
this.hideInlineNamingInput();
}
};
// Handle cancel
const handleCancel = () => {
this.hideInlineNamingInput();
};
// Event listeners
saveBtn.addEventListener('click', (e) => {
e.stopPropagation();
handleSave();
});
cancelBtn.addEventListener('click', (e) => {
e.stopPropagation();
handleCancel();
});
input.addEventListener('keydown', (e) => {
if (e.key === 'Enter') {
e.preventDefault();
handleSave();
} else if (e.key === 'Escape') {
e.preventDefault();
handleCancel();
}
});
// Prevent clicks inside from bubbling
inputContainer.addEventListener('click', (e) => {
e.stopPropagation();
});
}
/**
* Hide inline input and restore add button
*/
hideInlineNamingInput() {
this.isInlineNamingActive = false;
const presetsContainer = document.getElementById('filterPresets');
if (!presetsContainer) return;
// Remove input container
const inputContainer = presetsContainer.querySelector('.preset-inline-input-container');
if (inputContainer) {
inputContainer.remove();
}
// Show add button
const addBtn = presetsContainer.querySelector('.add-preset-btn');
if (addBtn) {
addBtn.style.display = '';
}
}
renderPresets() {
const presetsContainer = document.getElementById('filterPresets');
if (!presetsContainer) return;
// Cancel any pending delete when re-rendering
this.cancelPendingDelete();
this.isInlineNamingActive = false;
const presets = this.loadPresets();
presetsContainer.innerHTML = '';
// Render existing presets
presets.forEach(preset => {
const presetEl = document.createElement('div');
presetEl.className = 'filter-preset';
const isActive = this.activePreset === preset.name;
if (isActive) {
presetEl.classList.add('active');
}
presetEl.addEventListener('click', (e) => {
e.stopPropagation();
});
const presetName = document.createElement('span');
presetName.className = 'preset-name';
if (isActive) {
presetName.innerHTML = `<i class="fas fa-check"></i> ${preset.name}`;
} else {
presetName.textContent = preset.name;
}
presetName.title = translate('header.filter.presetClickTooltip', { name: preset.name }, `Click to apply preset "${preset.name}"`);
const deleteBtn = document.createElement('button');
deleteBtn.className = 'preset-delete-btn';
deleteBtn.innerHTML = '<i class="fas fa-times"></i>';
deleteBtn.title = translate('header.filter.presetDeleteTooltip', {}, 'Delete preset');
// Apply preset on name click (toggle if already active)
presetName.addEventListener('click', async (e) => {
e.stopPropagation();
this.cancelPendingDelete();
if (this.activePreset === preset.name) {
await this.filterManager.clearFilters();
} else {
await this.applyPreset(preset.name);
}
});
// Two-step delete on delete button click
deleteBtn.addEventListener('click', (e) => {
e.stopPropagation();
this.initiateDelete(preset.name, deleteBtn);
});
presetEl.appendChild(presetName);
presetEl.appendChild(deleteBtn);
presetsContainer.appendChild(presetEl);
});
// Add the "Add new preset" button (always shown, unified style)
const addBtn = document.createElement('div');
addBtn.className = 'filter-preset add-preset-btn';
addBtn.innerHTML = `<i class="fas fa-plus"></i> ${translate('common.actions.add', {}, 'Add')}`;
addBtn.title = translate('header.filter.savePreset', {}, 'Save current filters as a new preset');
addBtn.addEventListener('click', (e) => {
e.stopPropagation();
this.cancelPendingDelete();
this.showInlineNamingInput();
});
presetsContainer.appendChild(addBtn);
// Update add button state (handles disabled state based on filters)
this.updateAddButtonState();
}
/**
* Legacy method for backward compatibility
* @deprecated Use showInlineNamingInput instead
*/
showSavePresetDialog() {
this.showInlineNamingInput();
}
}
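The matching rules inside `resolveBaseModelPatterns` can be sketched without the `fetch` call: a pattern ending in `*` is a prefix wildcard, exact patterns pass through only if they exist, and a wildcard-only query that matches nothing resolves to the sentinel marker so the filter yields no models rather than all of them. This is a simplified sketch assuming the base-model list is already in hand:

```javascript
const EMPTY_WILDCARD_MARKER = '__EMPTY_WILDCARD_RESULT__';

function resolvePatterns(patterns, availableModels) {
  const hasWildcards = patterns.some(p => p.endsWith('*'));
  const resolved = [];
  for (const pattern of patterns) {
    if (pattern.endsWith('*')) {
      // Prefix wildcard: keep every available model starting with the prefix.
      const prefix = pattern.slice(0, -1);
      resolved.push(...availableModels.filter(m => m.startsWith(prefix)));
    } else if (availableModels.includes(pattern)) {
      resolved.push(pattern);
    }
  }
  const unique = [...new Set(resolved)];
  // A wildcard that matched nothing must not silently mean "no filter".
  return hasWildcards && unique.length === 0 ? [EMPTY_WILDCARD_MARKER] : unique;
}

console.log(resolvePatterns(['SDXL*'], ['SDXL 1.0', 'SDXL Turbo', 'SD 1.5']));
// ['SDXL 1.0', 'SDXL Turbo']
console.log(resolvePatterns(['Pony*'], ['SD 1.5']));
// ['__EMPTY_WILDCARD_RESULT__']
```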

View File

@@ -340,9 +340,9 @@ class MoveManager {
folder: newRelativeFolder
};
// Only update model_type if it's present in the cache_entry
if (result.cache_entry && result.cache_entry.model_type) {
updateData.model_type = result.cache_entry.model_type;
// Only update sub_type if it's present in the cache_entry
if (result.cache_entry && result.cache_entry.sub_type) {
updateData.sub_type = result.cache_entry.sub_type;
}
state.virtualScroller.updateSingleItem(result.original_file_path, updateData);
@@ -374,9 +374,9 @@ class MoveManager {
folder: newRelativeFolder
};
// Only update model_type if it's present in the cache_entry
if (result.cache_entry && result.cache_entry.model_type) {
updateData.model_type = result.cache_entry.model_type;
// Only update sub_type if it's present in the cache_entry
if (result.cache_entry && result.cache_entry.sub_type) {
updateData.sub_type = result.cache_entry.sub_type;
}
state.virtualScroller.updateSingleItem(this.currentFilePath, updateData);

View File

@@ -82,7 +82,23 @@ export class OnboardingManager {
}
// Check if user should see onboarding
shouldShowOnboarding() {
// First checks backend settings (persistent), falls back to localStorage
async shouldShowOnboarding() {
// Try to get state from backend first (persistent across browser modes)
try {
const response = await fetch('/api/lm/settings');
const data = await response.json();
if (data.success && data.settings && data.settings.onboarding_completed === true) {
// Sync to localStorage as cache
setStorageItem('onboarding_completed', true);
return false;
}
} catch (e) {
// Backend unavailable, fall back to localStorage
console.debug('Failed to fetch onboarding state from backend, using localStorage');
}
// Fallback to localStorage (for backward compatibility)
const completed = getStorageItem('onboarding_completed');
const skipped = getStorageItem('onboarding_skipped');
return !completed && !skipped;
@@ -90,7 +106,8 @@ export class OnboardingManager {
// Start the onboarding process
async start() {
if (!this.shouldShowOnboarding()) {
const shouldShow = await this.shouldShowOnboarding();
if (!shouldShow) {
return;
}
@@ -159,9 +176,9 @@ export class OnboardingManager {
});
// Handle skip button - skip entire tutorial
document.getElementById('skipLanguageBtn').addEventListener('click', () => {
document.getElementById('skipLanguageBtn').addEventListener('click', async () => {
document.body.removeChild(modal);
this.skip(); // Skip entire tutorial instead of just language selection
await this.skip(); // Skip entire tutorial instead of just language selection
resolve();
});
@@ -205,11 +222,11 @@ export class OnboardingManager {
}
// Start the tutorial steps
startTutorial() {
async startTutorial() {
this.isActive = true;
this.currentStep = 0;
this.createOverlay();
this.showStep(0);
await this.showStep(0);
}
// Create overlay elements
@@ -231,9 +248,9 @@ export class OnboardingManager {
}
// Show specific step
showStep(stepIndex) {
async showStep(stepIndex) {
if (stepIndex >= this.steps.length) {
this.complete();
await this.complete();
return;
}
@@ -242,7 +259,7 @@ export class OnboardingManager {
if (!target && step.target !== 'body') {
// Skip this step if target not found
this.showStep(stepIndex + 1);
await this.showStep(stepIndex + 1);
return;
}
@@ -426,25 +443,48 @@ export class OnboardingManager {
}
// Navigate to next step
nextStep() {
this.showStep(this.currentStep + 1);
async nextStep() {
await this.showStep(this.currentStep + 1);
}
// Navigate to previous step
previousStep() {
async previousStep() {
if (this.currentStep > 0) {
this.showStep(this.currentStep - 1);
await this.showStep(this.currentStep - 1);
}
}
// Skip the tutorial
skip() {
async skip() {
// Save to backend for persistence (survives incognito/private mode)
try {
await fetch('/api/lm/settings', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ onboarding_completed: true })
});
} catch (e) {
console.error('Failed to save onboarding state to backend:', e);
}
// Also save to localStorage as cache and for backward compatibility
setStorageItem('onboarding_skipped', true);
setStorageItem('onboarding_completed', true);
this.cleanup();
}
// Complete the tutorial
complete() {
async complete() {
// Save to backend for persistence (survives incognito/private mode)
try {
await fetch('/api/lm/settings', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ onboarding_completed: true })
});
} catch (e) {
console.error('Failed to save onboarding state to backend:', e);
}
// Also save to localStorage as cache and for backward compatibility
setStorageItem('onboarding_completed', true);
this.cleanup();
}
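The read order `shouldShowOnboarding()` now follows can be reduced to a pure decision: the backend flag wins when present (it survives incognito/private browsing), and localStorage is only a fallback for older installs. A sketch under that assumption, with the settings and local-flag shapes inferred from the diff:

```javascript
function shouldShow(backendSettings, localFlags) {
  // Backend state is authoritative when it says onboarding is done.
  if (backendSettings && backendSettings.onboarding_completed === true) {
    return false;
  }
  // Fallback for installs that only ever wrote localStorage.
  return !localFlags.onboarding_completed && !localFlags.onboarding_skipped;
}

console.log(shouldShow(null, {}));                          // true (fresh install)
console.log(shouldShow({ onboarding_completed: true }, {})); // false
console.log(shouldShow(null, { onboarding_skipped: true })); // false
```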

View File

@@ -509,38 +509,90 @@ export class UpdateService {
}
// Update changelog content if available
if (this.updateInfo && this.updateInfo.changelog) {
if (this.updateInfo && (this.updateInfo.changelog || this.updateInfo.releases)) {
const changelogContent = modal.querySelector('.changelog-content');
if (changelogContent) {
changelogContent.innerHTML = ''; // Clear existing content
// Create changelog item
const changelogItem = document.createElement('div');
changelogItem.className = 'changelog-item';
const versionHeader = document.createElement('h4');
versionHeader.textContent = `${translate('common.status.version', {}, 'Version')} ${this.latestVersion}`;
changelogItem.appendChild(versionHeader);
// Create changelog list
const changelogList = document.createElement('ul');
if (this.updateInfo.changelog && this.updateInfo.changelog.length > 0) {
this.updateInfo.changelog.forEach(item => {
const listItem = document.createElement('li');
// Parse markdown in changelog items
listItem.innerHTML = this.parseMarkdown(item);
changelogList.appendChild(listItem);
// Check if we have multiple releases
const releases = this.updateInfo.releases;
if (releases && Array.isArray(releases) && releases.length > 0) {
// Display multiple releases (up to 5)
releases.forEach(release => {
const changelogItem = document.createElement('div');
changelogItem.className = 'changelog-item';
if (release.is_latest) {
changelogItem.classList.add('latest');
}
const versionHeader = document.createElement('h4');
if (release.is_latest) {
const badge = document.createElement('span');
badge.className = 'latest-badge';
badge.textContent = translate('update.latestBadge', {}, 'Latest');
versionHeader.appendChild(badge);
versionHeader.appendChild(document.createTextNode(' '));
}
const versionSpan = document.createElement('span');
versionSpan.className = 'version';
versionSpan.textContent = `${translate('common.status.version', {}, 'Version')} ${release.version}`;
versionHeader.appendChild(versionSpan);
if (release.published_at) {
const dateSpan = document.createElement('span');
dateSpan.className = 'publish-date';
dateSpan.textContent = this.formatRelativeTime(new Date(release.published_at).getTime());
versionHeader.appendChild(dateSpan);
}
changelogItem.appendChild(versionHeader);
// Create changelog list
const changelogList = document.createElement('ul');
if (release.changelog && release.changelog.length > 0) {
release.changelog.forEach(item => {
const listItem = document.createElement('li');
listItem.innerHTML = this.parseMarkdown(item);
changelogList.appendChild(listItem);
});
} else {
const listItem = document.createElement('li');
listItem.textContent = translate('update.noChangelogAvailable', {}, 'No detailed changelog available.');
changelogList.appendChild(listItem);
}
changelogItem.appendChild(changelogList);
changelogContent.appendChild(changelogItem);
});
} else {
// If no changelog items available
const listItem = document.createElement('li');
listItem.textContent = translate('update.noChangelogAvailable', {}, 'No detailed changelog available. Check GitHub for more information.');
changelogList.appendChild(listItem);
// Fallback: display single changelog (old behavior)
const changelogItem = document.createElement('div');
changelogItem.className = 'changelog-item';
const versionHeader = document.createElement('h4');
versionHeader.textContent = `${translate('common.status.version', {}, 'Version')} ${this.latestVersion}`;
changelogItem.appendChild(versionHeader);
const changelogList = document.createElement('ul');
if (this.updateInfo.changelog && this.updateInfo.changelog.length > 0) {
this.updateInfo.changelog.forEach(item => {
const listItem = document.createElement('li');
listItem.innerHTML = this.parseMarkdown(item);
changelogList.appendChild(listItem);
});
} else {
const listItem = document.createElement('li');
listItem.textContent = translate('update.noChangelogAvailable', {}, 'No detailed changelog available. Check GitHub for more information.');
changelogList.appendChild(listItem);
}
changelogItem.appendChild(changelogList);
changelogContent.appendChild(changelogItem);
}
changelogItem.appendChild(changelogList);
changelogContent.appendChild(changelogItem);
}
}
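The multi-release rendering above assumes a `releases` array whose objects carry `version`, `changelog`, `is_latest`, and `published_at` (field names inferred from the diff), with a per-release fallback line when a release ships no changelog items:

```javascript
// Pick the lines to render for one release; hypothetical helper mirroring
// the per-release fallback branch in the rendering code above.
function changelogLines(release) {
  if (release.changelog && release.changelog.length > 0) {
    return release.changelog;
  }
  return ['No detailed changelog available.'];
}

const releases = [
  { version: '1.4.0', is_latest: true, published_at: '2026-01-30T00:00:00Z',
    changelog: ['feat: VAE and Upscaler page'] },
  { version: '1.3.9', is_latest: false, changelog: [] },
];

console.log(changelogLines(releases[0])); // ['feat: VAE and Upscaler page']
console.log(changelogLines(releases[1])); // ['No detailed changelog available.']
```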

static/js/misc.js Normal file
View File

@@ -0,0 +1,51 @@
import { appCore } from './core.js';
import { confirmDelete, closeDeleteModal, confirmExclude, closeExcludeModal } from './utils/modalUtils.js';
import { createPageControls } from './components/controls/index.js';
import { ModelDuplicatesManager } from './components/ModelDuplicatesManager.js';
import { MODEL_TYPES } from './api/apiConfig.js';
// Initialize the Misc (VAE/Upscaler) page
export class MiscPageManager {
constructor() {
// Initialize page controls
this.pageControls = createPageControls(MODEL_TYPES.MISC);
// Initialize the ModelDuplicatesManager
this.duplicatesManager = new ModelDuplicatesManager(this, MODEL_TYPES.MISC);
// Expose only necessary functions to global scope
this._exposeRequiredGlobalFunctions();
}
_exposeRequiredGlobalFunctions() {
// Minimal set of functions that need to remain global
window.confirmDelete = confirmDelete;
window.closeDeleteModal = closeDeleteModal;
window.confirmExclude = confirmExclude;
window.closeExcludeModal = closeExcludeModal;
// Expose duplicates manager
window.modelDuplicatesManager = this.duplicatesManager;
}
async initialize() {
// Initialize common page features (including context menus)
appCore.initializePageFeatures();
console.log('Misc Manager initialized');
}
}
export async function initializeMiscPage() {
// Initialize core application
await appCore.initialize();
// Initialize misc page
const miscPage = new MiscPageManager();
await miscPage.initialize();
return miscPage;
}
// Initialize everything when DOM is ready
document.addEventListener('DOMContentLoaded', initializeMiscPage);

View File

@@ -177,6 +177,35 @@ export const state = {
showFavoritesOnly: false,
showUpdateAvailableOnly: false,
duplicatesMode: false,
},
[MODEL_TYPES.MISC]: {
currentPage: 1,
isLoading: false,
hasMore: true,
sortBy: 'name',
activeFolder: getStorageItem(`${MODEL_TYPES.MISC}_activeFolder`),
previewVersions: new Map(),
searchManager: null,
searchOptions: {
filename: true,
modelname: true,
creator: false,
recursive: getStorageItem(`${MODEL_TYPES.MISC}_recursiveSearch`, true),
},
filters: {
baseModel: [],
tags: {},
license: {},
modelTypes: []
},
bulkMode: false,
selectedModels: new Set(),
metadataCache: new Map(),
showFavoritesOnly: false,
showUpdateAvailableOnly: false,
duplicatesMode: false,
subType: 'vae'
}
},
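The new MISC slice above mirrors the existing per-page slices: one state object per model type, with persisted options resolved through `getStorageItem`. A minimal sketch of that keyed-state pattern, using an in-memory stand-in for `getStorageItem` (the real helper presumably reads `localStorage`; names and defaults here follow the hunk above but are otherwise illustrative):

```javascript
// In-memory stand-in for localStorage-backed settings.
const storage = new Map();

// Hypothetical stand-in for the app's getStorageItem(key, fallback) helper.
function getStorageItem(key, fallback = null) {
  return storage.has(key) ? storage.get(key) : fallback;
}

// Build a page-state slice keyed by model type, as the state module does.
function createPageState(modelType, extra = {}) {
  return {
    currentPage: 1,
    isLoading: false,
    hasMore: true,
    sortBy: "name",
    activeFolder: getStorageItem(`${modelType}_activeFolder`),
    searchOptions: {
      recursive: getStorageItem(`${modelType}_recursiveSearch`, true),
    },
    selectedModels: new Set(),
    ...extra, // page-specific fields, e.g. the MISC slice's subType
  };
}

const miscState = createPageState("misc", { subType: "vae" });
console.log(miscState.subType);                 // vae
console.log(miscState.searchOptions.recursive); // true (default applies)
```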


@@ -57,12 +57,45 @@ export const BASE_MODELS = {
UNKNOWN: "Other"
};
export const MODEL_TYPE_DISPLAY_NAMES = {
// Model sub-type display names (new canonical field: sub_type)
export const MODEL_SUBTYPE_DISPLAY_NAMES = {
// LoRA sub-types
lora: "LoRA",
locon: "LyCORIS",
dora: "DoRA",
// Checkpoint sub-types
checkpoint: "Checkpoint",
diffusion_model: "Diffusion Model",
// Embedding sub-types
embedding: "Embedding",
// Misc sub-types
vae: "VAE",
upscaler: "Upscaler",
};
// Backward compatibility alias
export const MODEL_TYPE_DISPLAY_NAMES = MODEL_SUBTYPE_DISPLAY_NAMES;
// Abbreviated sub-type names for compact display (e.g., in model card labels)
export const MODEL_SUBTYPE_ABBREVIATIONS = {
lora: "LoRA",
locon: "LyCO",
dora: "DoRA",
checkpoint: "CKPT",
diffusion_model: "DM",
embedding: "EMB",
vae: "VAE",
upscaler: "UP",
};
export function getSubTypeAbbreviation(subType) {
if (!subType || typeof subType !== 'string') {
return '';
}
const normalized = subType.toLowerCase();
return MODEL_SUBTYPE_ABBREVIATIONS[normalized] || subType.toUpperCase().slice(0, 4);
}
export const BASE_MODEL_ABBREVIATIONS = {
// Stable Diffusion 1.x models
[BASE_MODELS.SD_1_4]: 'SD1',
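Worth noting in the hunk above: `getSubTypeAbbreviation` truncates unknown sub-types rather than rejecting them. A self-contained copy of that logic so the three paths can be exercised (`controlnet` is just an illustrative unknown value, not a sub-type from this diff):

```javascript
// Copied from the MODEL_SUBTYPE_ABBREVIATIONS map added in this diff.
const MODEL_SUBTYPE_ABBREVIATIONS = {
  lora: "LoRA",
  locon: "LyCO",
  dora: "DoRA",
  checkpoint: "CKPT",
  diffusion_model: "DM",
  embedding: "EMB",
  vae: "VAE",
  upscaler: "UP",
};

function getSubTypeAbbreviation(subType) {
  if (!subType || typeof subType !== "string") {
    return ""; // guard path: null/undefined/non-string
  }
  const normalized = subType.toLowerCase();
  // Known sub-types map to a curated abbreviation; unknown ones fall
  // back to the first four characters, upper-cased.
  return MODEL_SUBTYPE_ABBREVIATIONS[normalized] || subType.toUpperCase().slice(0, 4);
}

console.log(getSubTypeAbbreviation("locon"));      // LyCO
console.log(getSubTypeAbbreviation("controlnet")); // CONT (fallback path)
console.log(getSubTypeAbbreviation(null));         // "" (guard path)
```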


@@ -28,10 +28,9 @@ async function getCardCreator(pageType) {
// Function to get the appropriate data fetcher based on page type
async function getDataFetcher(pageType) {
if (pageType === 'loras' || pageType === 'embeddings' || pageType === 'checkpoints') {
if (pageType === 'loras' || pageType === 'embeddings' || pageType === 'checkpoints' || pageType === 'misc') {
return (page = 1, pageSize = 100) => getModelApiClient().fetchModelsPage(page, pageSize);
} else if (pageType === 'recipes') {
// Import the recipeApi module and use the fetchRecipesPage function
const { fetchRecipesPage } = await import('../api/recipeApi.js');
return fetchRecipesPage;
}
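The change above folds `misc` into the shared model-page branch, so all four model pages reuse one API client while recipes keep their own fetcher. A minimal sketch of that dispatch pattern, with hypothetical stand-ins for `getModelApiClient().fetchModelsPage` and `fetchRecipesPage`:

```javascript
// Pages that share the generic model API client.
const MODEL_PAGES = new Set(["loras", "embeddings", "checkpoints", "misc"]);

// Hypothetical stand-ins for the real fetchers.
function fetchModelPage(page, pageSize) {
  return { kind: "models", page, pageSize };
}

function fetchRecipePage(page, pageSize) {
  return { kind: "recipes", page, pageSize };
}

function getDataFetcher(pageType) {
  if (MODEL_PAGES.has(pageType)) {
    // All model pages get the same closure; defaults mirror the diff above.
    return (page = 1, pageSize = 100) => fetchModelPage(page, pageSize);
  }
  if (pageType === "recipes") {
    return fetchRecipePage;
  }
  throw new Error(`Unknown page type: ${pageType}`);
}

console.log(getDataFetcher("misc")().kind);       // models
console.log(getDataFetcher("recipes")(2, 10).kind); // recipes
```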


@@ -13,6 +13,8 @@
{% set current_page = 'checkpoints' %}
{% elif current_path.startswith('/embeddings') %}
{% set current_page = 'embeddings' %}
{% elif current_path.startswith('/misc') %}
{% set current_page = 'misc' %}
{% elif current_path.startswith('/statistics') %}
{% set current_page = 'statistics' %}
{% else %}
@@ -38,6 +40,10 @@
id="embeddingsNavItem">
<i class="fas fa-code"></i> <span>{{ t('header.navigation.embeddings') }}</span>
</a>
<a href="/misc" class="nav-item{% if current_path.startswith('/misc') %} active{% endif %}"
id="miscNavItem">
<i class="fas fa-puzzle-piece"></i> <span>{{ t('header.navigation.misc') }}</span>
</a>
<a href="/statistics" class="nav-item{% if current_path.startswith('/statistics') %} active{% endif %}"
id="statisticsNavItem">
<i class="fas fa-chart-bar"></i> <span>{{ t('header.navigation.statistics') }}</span>
@@ -116,6 +122,11 @@
<div class="search-option-tag active" data-option="modelname">{{ t('header.search.filters.modelname') }}</div>
<div class="search-option-tag active" data-option="tags">{{ t('header.search.filters.tags') }}</div>
<div class="search-option-tag" data-option="creator">{{ t('header.search.filters.creator') }}</div>
{% elif request.path == '/misc' %}
<div class="search-option-tag active" data-option="filename">{{ t('header.search.filters.filename') }}</div>
<div class="search-option-tag active" data-option="modelname">{{ t('header.search.filters.modelname') }}</div>
<div class="search-option-tag active" data-option="tags">{{ t('header.search.filters.tags') }}</div>
<div class="search-option-tag" data-option="creator">{{ t('header.search.filters.creator') }}</div>
{% else %}
<!-- Default options for LoRAs page -->
<div class="search-option-tag active" data-option="filename">{{ t('header.search.filters.filename') }}</div>
@@ -135,6 +146,14 @@
<i class="fas fa-times"></i>
</button>
</div>
<!-- Presets Section -->
<div class="filter-section presets-section">
<h4>{{ t('header.filter.presets') }}</h4>
<div class="filter-presets" id="filterPresets">
</div>
</div>
<div class="filter-section">
<h4>{{ t('header.filter.baseModel') }}</h4>
<div class="filter-tags" id="baseModelTags">
@@ -148,7 +167,7 @@
<div class="tags-loading">{{ t('common.status.loading') }}</div>
</div>
</div>
{% if current_page == 'loras' %}
{% if current_page == 'loras' or current_page == 'checkpoints' or current_page == 'misc' %}
<div class="filter-section">
<h4>{{ t('header.filter.modelTypes') }}</h4>
<div class="filter-tags" id="modelTypeTags">

templates/misc.html Normal file

@@ -0,0 +1,45 @@
{% extends "base.html" %}
{% block title %}{{ t('misc.title') }}{% endblock %}
{% block page_id %}misc{% endblock %}
{% block init_title %}{{ t('initialization.misc.title') }}{% endblock %}
{% block init_message %}{{ t('initialization.misc.message') }}{% endblock %}
{% block init_check_url %}/api/lm/misc/list?page=1&page_size=1{% endblock %}
{% block additional_components %}
<div id="miscContextMenu" class="context-menu" style="display: none;">
<div class="context-menu-item" data-action="refresh-metadata"><i class="fas fa-sync"></i> {{ t('loras.contextMenu.refreshMetadata') }}</div>
<div class="context-menu-item" data-action="relink-civitai"><i class="fas fa-link"></i> {{ t('loras.contextMenu.relinkCivitai') }}</div>
<div class="context-menu-item" data-action="copyname"><i class="fas fa-copy"></i> {{ t('loras.contextMenu.copyFilename') }}</div>
<div class="context-menu-item" data-action="preview"><i class="fas fa-folder-open"></i> {{ t('loras.contextMenu.openExamples') }}</div>
<div class="context-menu-item" data-action="download-examples"><i class="fas fa-download"></i> {{ t('loras.contextMenu.downloadExamples') }}</div>
<div class="context-menu-item" data-action="replace-preview"><i class="fas fa-image"></i> {{ t('loras.contextMenu.replacePreview') }}</div>
<div class="context-menu-item" data-action="set-nsfw"><i class="fas fa-exclamation-triangle"></i> {{ t('loras.contextMenu.setContentRating') }}</div>
<div class="context-menu-separator"></div>
<div class="context-menu-item" data-action="move"><i class="fas fa-folder-open"></i> {{ t('loras.contextMenu.moveToFolder') }}</div>
<div class="context-menu-item" data-action="move-other"><i class="fas fa-exchange-alt"></i> {{ t('misc.contextMenu.moveToOtherTypeFolder', {otherType: '...'}) }}</div>
<div class="context-menu-item" data-action="exclude"><i class="fas fa-eye-slash"></i> {{ t('loras.contextMenu.excludeModel') }}</div>
<div class="context-menu-item delete-item" data-action="delete"><i class="fas fa-trash"></i> {{ t('loras.contextMenu.deleteModel') }}</div>
</div>
{% endblock %}
{% block content %}
{% include 'components/controls.html' %}
{% include 'components/duplicates_banner.html' %}
{% include 'components/folder_sidebar.html' %}
<!-- Misc cards container -->
<div class="card-grid" id="modelGrid">
<!-- Cards will be dynamically inserted here -->
</div>
{% endblock %}
{% block overlay %}
<div class="bulk-mode-overlay"></div>
{% endblock %}
{% block main_script %}
<script type="module" src="/loras_static/js/misc.js?v={{ version }}"></script>
{% endblock %}


@@ -4,6 +4,7 @@ import os
import pytest
from py import config as config_module
from py.utils import cache_paths as cache_paths_module
def _normalize(path: str) -> str:
@@ -28,9 +29,14 @@ def _setup_paths(monkeypatch: pytest.MonkeyPatch, tmp_path):
}
return mapping.get(kind, [])
def fake_get_settings_dir(create: bool = True) -> str:
return str(settings_dir)
monkeypatch.setattr(config_module.folder_paths, "get_folder_paths", fake_get_folder_paths)
monkeypatch.setattr(config_module, "standalone_mode", True)
monkeypatch.setattr(config_module, "get_settings_dir", lambda create=True: str(settings_dir))
monkeypatch.setattr(config_module, "get_settings_dir", fake_get_settings_dir)
# Also patch cache_paths module which has its own import of get_settings_dir
monkeypatch.setattr(cache_paths_module, "get_settings_dir", fake_get_settings_dir)
return loras_dir, settings_dir
@@ -57,7 +63,7 @@ def test_symlink_scan_skips_file_links(monkeypatch: pytest.MonkeyPatch, tmp_path
normalized_file_real = _normalize(os.path.realpath(file_target))
assert normalized_file_real not in cfg._path_mappings
cache_path = settings_dir / "cache" / "symlink_map.json"
cache_path = settings_dir / "cache" / "symlink" / "symlink_map.json"
assert cache_path.exists()
@@ -71,7 +77,7 @@ def test_symlink_cache_reuses_previous_scan(monkeypatch: pytest.MonkeyPatch, tmp
first_cfg = config_module.Config()
cached_mappings = dict(first_cfg._path_mappings)
cache_path = settings_dir / "cache" / "symlink_map.json"
cache_path = settings_dir / "cache" / "symlink" / "symlink_map.json"
assert cache_path.exists()
def fail_scan(self):
@@ -97,7 +103,7 @@ def test_symlink_cache_survives_noise_mtime(monkeypatch: pytest.MonkeyPatch, tmp
noise_file = recipes_dir / "touchme.txt"
first_cfg = config_module.Config()
cache_path = settings_dir / "cache" / "symlink_map.json"
cache_path = settings_dir / "cache" / "symlink" / "symlink_map.json"
assert cache_path.exists()
# Update a noisy path to bump parent directory mtime
@@ -112,7 +118,8 @@ def test_symlink_cache_survives_noise_mtime(monkeypatch: pytest.MonkeyPatch, tmp
assert second_cfg.map_path_to_link(str(target_dir)) == _normalize(str(dir_link))
def test_manual_rescan_refreshes_cache(monkeypatch: pytest.MonkeyPatch, tmp_path):
def test_retargeted_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Changing a symlink's target should trigger automatic cache invalidation."""
loras_dir, _ = _setup_paths(monkeypatch, tmp_path)
target_dir = loras_dir / "target"
@@ -122,22 +129,16 @@ def test_manual_rescan_refreshes_cache(monkeypatch: pytest.MonkeyPatch, tmp_path
# Build initial cache pointing at the first target
first_cfg = config_module.Config()
old_real = _normalize(os.path.realpath(target_dir))
assert first_cfg.map_path_to_link(str(target_dir)) == _normalize(str(dir_link))
# Retarget the symlink to a new directory without touching the cache file
# Retarget the symlink to a new directory
new_target = loras_dir / "target_v2"
new_target.mkdir()
dir_link.unlink()
dir_link.symlink_to(new_target, target_is_directory=True)
# Second config should automatically detect the change and rescan
second_cfg = config_module.Config()
# Cache still point at the old real path immediately after load
assert second_cfg.map_path_to_link(str(new_target)) == _normalize(str(new_target))
# Manual rescan should refresh the mapping to the new target
second_cfg.rebuild_symlink_cache()
new_real = _normalize(os.path.realpath(new_target))
assert second_cfg._path_mappings.get(new_real) == _normalize(str(dir_link))
assert second_cfg.map_path_to_link(str(new_target)) == _normalize(str(dir_link))
@@ -164,9 +165,14 @@ def test_symlink_roots_are_preserved(monkeypatch: pytest.MonkeyPatch, tmp_path):
}
return mapping.get(kind, [])
def fake_get_settings_dir(create: bool = True) -> str:
return str(settings_dir)
monkeypatch.setattr(config_module.folder_paths, "get_folder_paths", fake_get_folder_paths)
monkeypatch.setattr(config_module, "standalone_mode", True)
monkeypatch.setattr(config_module, "get_settings_dir", lambda create=True: str(settings_dir))
monkeypatch.setattr(config_module, "get_settings_dir", fake_get_settings_dir)
# Also patch cache_paths module which has its own import of get_settings_dir
monkeypatch.setattr(cache_paths_module, "get_settings_dir", fake_get_settings_dir)
cfg = config_module.Config()
@@ -174,6 +180,162 @@ def test_symlink_roots_are_preserved(monkeypatch: pytest.MonkeyPatch, tmp_path):
normalized_link = _normalize(str(loras_link))
assert cfg._path_mappings[normalized_real] == normalized_link
cache_path = settings_dir / "cache" / "symlink_map.json"
cache_path = settings_dir / "cache" / "symlink" / "symlink_map.json"
payload = json.loads(cache_path.read_text(encoding="utf-8"))
assert payload["path_mappings"][normalized_real] == normalized_link
def test_symlink_subfolder_to_external_location(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Symlink under root pointing outside root should be detected and allowed."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create external directory (outside loras_dir)
external_dir = tmp_path / "external_models"
external_dir.mkdir()
preview_file = external_dir / "model.preview.png"
preview_file.write_bytes(b"preview")
# Create symlink under loras_dir pointing to external location
symlink = loras_dir / "characters"
symlink.symlink_to(external_dir, target_is_directory=True)
cfg = config_module.Config()
# Verify symlink was detected
normalized_external = _normalize(str(external_dir))
normalized_link = _normalize(str(symlink))
assert cfg._path_mappings[normalized_external] == normalized_link
# Verify preview path is allowed
assert cfg.is_preview_path_allowed(str(preview_file))
def test_new_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Adding a new symlink should trigger cache invalidation."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Initial scan with no symlinks
first_cfg = config_module.Config()
assert len(first_cfg._path_mappings) == 0
# Create a symlink after initial cache
external_dir = tmp_path / "external"
external_dir.mkdir()
symlink = loras_dir / "new_link"
symlink.symlink_to(external_dir, target_is_directory=True)
# Second config should detect the change and rescan
second_cfg = config_module.Config()
normalized_external = _normalize(str(external_dir))
assert normalized_external in second_cfg._path_mappings
def test_removed_deep_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Removing a deep symlink should trigger cache invalidation."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create nested structure with deep symlink
subdir = loras_dir / "anime"
subdir.mkdir()
external_dir = tmp_path / "external"
external_dir.mkdir()
deep_symlink = subdir / "styles"
deep_symlink.symlink_to(external_dir, target_is_directory=True)
# Initial scan finds the deep symlink
first_cfg = config_module.Config()
normalized_external = _normalize(str(external_dir))
assert normalized_external in first_cfg._path_mappings
# Remove the deep symlink
deep_symlink.unlink()
# Second config should detect invalid cached mapping and rescan
second_cfg = config_module.Config()
assert normalized_external not in second_cfg._path_mappings
def test_retargeted_deep_symlink_triggers_rescan(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Changing a deep symlink's target should trigger cache invalidation."""
loras_dir, settings_dir = _setup_paths(monkeypatch, tmp_path)
# Create nested structure
subdir = loras_dir / "anime"
subdir.mkdir()
target_v1 = tmp_path / "external_v1"
target_v1.mkdir()
target_v2 = tmp_path / "external_v2"
target_v2.mkdir()
deep_symlink = subdir / "styles"
deep_symlink.symlink_to(target_v1, target_is_directory=True)
# Initial scan
first_cfg = config_module.Config()
assert _normalize(str(target_v1)) in first_cfg._path_mappings
# Retarget the symlink
deep_symlink.unlink()
deep_symlink.symlink_to(target_v2, target_is_directory=True)
# Second config should detect changed target and rescan
second_cfg = config_module.Config()
assert _normalize(str(target_v2)) in second_cfg._path_mappings
assert _normalize(str(target_v1)) not in second_cfg._path_mappings
def test_legacy_symlink_cache_automatic_cleanup(monkeypatch: pytest.MonkeyPatch, tmp_path):
"""Test that legacy symlink cache is automatically cleaned up after migration."""
settings_dir = tmp_path / "settings"
loras_dir = tmp_path / "loras"
loras_dir.mkdir()
checkpoint_dir = tmp_path / "checkpoints"
checkpoint_dir.mkdir()
embedding_dir = tmp_path / "embeddings"
embedding_dir.mkdir()
def fake_get_folder_paths(kind: str):
mapping = {
"loras": [str(loras_dir)],
"checkpoints": [str(checkpoint_dir)],
"unet": [],
"embeddings": [str(embedding_dir)],
}
return mapping.get(kind, [])
def fake_get_settings_dir(create: bool = True) -> str:
return str(settings_dir)
monkeypatch.setattr(config_module.folder_paths, "get_folder_paths", fake_get_folder_paths)
monkeypatch.setattr(config_module, "standalone_mode", True)
monkeypatch.setattr(config_module, "get_settings_dir", fake_get_settings_dir)
monkeypatch.setattr(cache_paths_module, "get_settings_dir", fake_get_settings_dir)
# Create legacy symlink cache at old location
settings_dir.mkdir(parents=True, exist_ok=True)
legacy_cache_dir = settings_dir / "cache"
legacy_cache_dir.mkdir(exist_ok=True)
legacy_cache_path = legacy_cache_dir / "symlink_map.json"
# Write some legacy cache data
legacy_data = {
"fingerprint": {"roots": []},
"path_mappings": {
"/legacy/target": "/legacy/link"
}
}
legacy_cache_path.write_text(json.dumps(legacy_data), encoding="utf-8")
# Verify legacy file exists
assert legacy_cache_path.exists()
# Initialize Config - this should trigger migration and automatic cleanup
cfg = config_module.Config()
# New canonical cache should exist
new_cache_path = settings_dir / "cache" / "symlink" / "symlink_map.json"
assert new_cache_path.exists()
# Legacy file should be automatically cleaned up
assert not legacy_cache_path.exists()
# Config should still work correctly
assert isinstance(cfg._path_mappings, dict)
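The invalidation tests above all exercise one idea: the cached symlink map carries a fingerprint of every discovered link and its resolved target, and any mismatch on load forces a rescan. A minimal sketch of that check, with illustrative names (the real logic lives in `Config` and gathers its current state more cheaply than a full rescan):

```javascript
// Stable, order-independent fingerprint of (link -> target) pairs.
function fingerprintOf(links) {
  return links
    .map(({ link, target }) => `${link}=>${target}`)
    .sort()
    .join("|");
}

// scanSymlinks() stands in for walking the model roots.
function loadMappings(cache, scanSymlinks) {
  const current = scanSymlinks();
  if (cache && cache.fingerprint === fingerprintOf(current)) {
    // Fingerprint matches: reuse the cached target -> link table.
    return { mappings: cache.mappings, fingerprint: cache.fingerprint, rescanned: false };
  }
  // New, removed, or retargeted symlink: rebuild the mapping table.
  const mappings = Object.fromEntries(
    current.map(({ link, target }) => [target, link])
  );
  return { mappings, fingerprint: fingerprintOf(current), rescanned: true };
}
```

Under this scheme a retargeted link changes the fingerprint, which is exactly what `test_retargeted_symlink_triggers_rescan` asserts on the real `Config`.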


@@ -37,6 +37,7 @@ vi.mock(APP_MODULE, () => ({
canvas: {
ds: { scale: 1 },
},
registerExtension: vi.fn(),
},
}));


@@ -35,7 +35,6 @@ vi.mock(API_MODULE, () => ({
const collectActiveLorasFromChain = vi.fn();
const updateConnectedTriggerWords = vi.fn();
const mergeLoras = vi.fn();
const setupInputWidgetWithAutocomplete = vi.fn();
const getAllGraphNodes = vi.fn();
const getNodeFromGraph = vi.fn();
@@ -43,7 +42,6 @@ vi.mock(UTILS_MODULE, () => ({
collectActiveLorasFromChain,
updateConnectedTriggerWords,
mergeLoras,
setupInputWidgetWithAutocomplete,
chainCallback: (proto, property, callback) => {
proto[property] = callback;
},
@@ -73,11 +71,6 @@ describe("Lora Loader trigger word updates", () => {
mergeLoras.mockClear();
mergeLoras.mockImplementation(() => [{ name: "Alpha", active: true }]);
setupInputWidgetWithAutocomplete.mockClear();
setupInputWidgetWithAutocomplete.mockImplementation(
(_node, _widget, originalCallback) => originalCallback
);
addLorasWidget.mockClear();
addLorasWidget.mockImplementation((_node, _name, _opts, callback) => ({
widget: { value: [], callback },
@@ -94,27 +87,31 @@ describe("Lora Loader trigger word updates", () => {
const nodeType = { comfyClass: "Lora Loader (LoraManager)", prototype: {} };
await extension.beforeRegisterNodeDef(nodeType, {}, {});
// Create mock widget (AUTOCOMPLETE_TEXT_LORAS type created by Vue widgets)
const inputWidget = {
value: "",
options: {},
callback: null, // Will be set by onNodeCreated
};
const node = {
comfyClass: "Lora Loader (LoraManager)",
widgets: [
{
value: "",
options: {},
inputEl: {},
},
],
widgets: [inputWidget],
addInput: vi.fn(),
graph: {},
};
nodeType.prototype.onNodeCreated.call(node);
expect(setupInputWidgetWithAutocomplete).toHaveBeenCalled();
// The widget is now the AUTOCOMPLETE_TEXT_LORAS type, created automatically by Vue widgets
expect(node.inputWidget).toBe(inputWidget);
expect(node.lorasWidget).toBeDefined();
const inputCallback = node.widgets[0].callback;
// The callback should have been set up by onNodeCreated
const inputCallback = inputWidget.callback;
expect(typeof inputCallback).toBe("function");
// Simulate typing in the input widget
inputCallback("<lora:Alpha:1.0>");
expect(mergeLoras).toHaveBeenCalledWith("<lora:Alpha:1.0>", []);
@@ -128,4 +125,3 @@ describe("Lora Loader trigger word updates", () => {
expect([...triggerWordSet]).toEqual(["Alpha"]);
});
});

Some files were not shown because too many files have changed in this diff.