Compare commits

344 Commits

Author SHA1 Message Date
Will Miao
82a2a6e669 chore: update version to 0.8.19 and add release notes for new features and enhancements 2025-06-28 08:04:16 +08:00
Will Miao
6376d60af5 Add temp debug console logging 2025-06-27 17:47:19 +08:00
Will Miao
b1e2e3831f fix: enhance model processing logic to skip already processed models only if their directories contain files. See #259 2025-06-27 13:09:19 +08:00
Will Miao
5de1c8aa82 feat: add node selector header with action mode indicator and instructions for improved user guidance 2025-06-27 12:39:20 +08:00
Will Miao
63dc5c2bdb fix: change overflow-y property to scroll for consistent vertical scrolling behavior 2025-06-27 11:44:43 +08:00
Will Miao
7f2d1670a0 feat: add startExpanded option to renderShowcaseContent for improved showcase interaction 2025-06-27 10:12:17 +08:00
Will Miao
53c8c337fc fix: remove unnecessary variable assignment for trigger words section in edit mode 2025-06-27 09:58:24 +08:00
Will Miao
5b4ec1b2a2 feat: implement disabled state for header search on statistics page with appropriate styling and functionality adjustments 2025-06-27 09:45:48 +08:00
Will Miao
64dd2ed141 feat: enhance node registration and management with support for multiple nodes and improved UI elements. Fixes #220 2025-06-26 23:00:55 +08:00
Will Miao
eb57e04e95 feat: implement thread-safe node registry and registration endpoints for Lora nodes 2025-06-26 18:31:14 +08:00
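The thread-safe registry named in this commit is a standard pattern; below is a minimal Python sketch of it with a lock guarding every mutation. Names are illustrative, not the project's actual API.

```python
# Minimal sketch of a thread-safe registry (illustrative names, not the
# project's actual API). A lock serializes mutations so that concurrent
# registration requests cannot corrupt the shared dict.
import threading

class NodeRegistry:
    def __init__(self):
        self._lock = threading.Lock()
        self._nodes = {}  # node_id -> registration info

    def register(self, node_id, info):
        with self._lock:
            self._nodes[node_id] = info

    def unregister(self, node_id):
        with self._lock:
            self._nodes.pop(node_id, None)

    def snapshot(self):
        with self._lock:
            return dict(self._nodes)  # return a copy so callers can iterate lock-free
```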
Will Miao
ae905c8630 fix: correct extension name format and update initialization method in usage stats 2025-06-26 16:57:26 +08:00
Will Miao
c157e794f0 feat: implement event delegation for checkpoint cards and enhance Civitai link handling 2025-06-26 11:42:43 +08:00
Will Miao
ed9bae6f6a feat: enhance recipe metadata handling with NSFW level updates and context menu actions. Fixes #247 2025-06-26 11:04:51 +08:00
Will Miao
9fe1ce19ad feat: add Patreon support section to the support modal with styling 2025-06-26 09:54:07 +08:00
Will Miao
6148236cbd fix: add missing patreon entry in FUNDING.yml 2025-06-26 08:23:12 +08:00
Will Miao
2471eb518a fix: correct key reference in process_trigger_words and update comment for widget values. Fixes #254 2025-06-25 20:57:12 +08:00
Will Miao
8931b41c76 feat: refactor API routes for renaming models and update related functions 2025-06-25 19:38:38 +08:00
Will Miao
7f523f167d fix: correct indentation for appending lora_entry in CivitaiApiMetadataParser. Fixes #253 2025-06-25 15:57:14 +08:00
Will Miao
446b6d6158 feat: sync saved example images path with backend on path update. Fixes #250 2025-06-25 15:34:25 +08:00
Will Miao
2ee057e19b feat: update metadata saving to ensure backup creation and support nested civitai structure 2025-06-25 11:50:10 +08:00
Will Miao
afc810f21f feat: prevent Ctrl+A behavior when search input is focused. See #251 2025-06-24 22:12:53 +08:00
pixelpaws
357052a903 Merge pull request #252 from willmiao/stats-page
Add statistics page with metrics, charts, and insights functionality
2025-06-24 21:37:06 +08:00
Will Miao
39d6d8d04a Add statistics page with metrics, charts, and insights functionality
- Implemented CSS styles for the statistics page layout and components.
- Developed JavaScript functionality for managing statistics, including data fetching, chart rendering, and tab navigation.
- Created HTML template for the statistics page, integrating dynamic content for metrics, charts, and insights.
- Added responsive design adjustments and loading states for better user experience.
2025-06-24 21:36:20 +08:00
Will Miao
888896c0c0 feat: add card info display setting with options for always visible or reveal on hover 2025-06-24 17:41:52 +08:00
Will Miao
ceee482ecc feat: refactor Lora handling by introducing chainCallback for improved node initialization and widget management. Fixes #176 2025-06-24 16:36:15 +08:00
Will Miao
d0ed1213d8 feat: enhance LoRA metadata handling by adding model IDs and updating recipe data structure. Fixes #246 2025-06-24 11:12:21 +08:00
Will Miao
f6ef428008 feat: update preview URL handling in RecipeRoutes and optimize recipe refresh logic in RecipeModal. Fixes #244 2025-06-23 15:29:22 +08:00
Will Miao
e726c4f442 feat: enhance metadata extraction for TSC samplers with vae_decode handling 2025-06-23 10:55:27 +08:00
Will Miao
402318e586 feat: enhance metadata processing and extraction for Efficient nodes with improved prompt handling and conditioning outputs. 2025-06-22 13:21:31 +08:00
Will Miao
b198cc2a6e feat: enhance metadata enrichment process to update file paths and preview URLs dynamically. See #113 2025-06-21 21:24:22 +08:00
Will Miao
c3dd4da11b feat: enhance theme toggle functionality with auto theme support and icon updates. Fix #243 2025-06-21 20:43:44 +08:00
Will Miao
ba2e42b06e feat: enhance LoraModal with notes hint and cleanup functionality on close 2025-06-21 20:04:57 +08:00
Will Miao
fa0902dc74 feat: add AdvancedCLIPTextEncode to NODE_EXTRACTORS for enhanced metadata extraction. See #234 2025-06-21 06:22:33 +08:00
Will Miao
8fcb6083dc feat: update release notes and version to 0.8.18 with new features and improvements 2025-06-20 18:25:15 +08:00
Will Miao
1ef88140e3 fix: adjust widget heights and padding for improved layout and text alignment 2025-06-20 17:21:31 +08:00
Will Miao
aa34c4c84c refactor: streamline prompt matching logic in MetadataProcessor 2025-06-20 17:00:23 +08:00
Will Miao
32d12bb334 feat: update API routes for version info and enhance version fetching functionality 2025-06-20 16:38:11 +08:00
Will Miao
1b2a02cb1a feat: add git information display in update modals and enhance version check functionality 2025-06-20 15:22:07 +08:00
Will Miao
2ff11a16c4 feat: implement DebugMetadata node with metadata display and update functionality 2025-06-20 14:17:39 +08:00
Will Miao
441af82dbd fix: update EXIF metadata extraction method for better compatibility with non-JPEG formats 2025-06-20 11:15:05 +08:00
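For context on the compatibility problem: Pillow's legacy `_getexif()` exists only on JPEG images, while the public `Image.getexif()` also reads EXIF from formats such as WebP and PNG. A hedged sketch of the portable approach (the commit's actual extraction code may differ):

```python
# Format-agnostic EXIF read with Pillow's public API; img._getexif()
# works only for JPEGs, img.getexif() works across supported formats.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path: str) -> dict:
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
```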
Will Miao
e09c09af6f feat: support GIF format for preview images. Fixes #236 2025-06-20 10:51:52 +08:00
Will Miao
3721fe226f Remove unused code 2025-06-20 10:43:02 +08:00
Will Miao
8ace0e11cf Update find_preview_file to include example extension from Civitai Helper for A1111. Fixes #225 2025-06-20 10:41:42 +08:00
Will Miao
5e249b0b59 fix: Update from_civitai flag to True in metadata creation for checkpoints and LoraMetadata. Fixes #238 2025-06-20 05:48:28 +08:00
Will Miao
4889955ecf feat: Add conditioning matching to prompts and update metadata handling in node extractors. See #235 2025-06-20 00:04:02 +08:00
pixelpaws
d840fd53da Merge pull request #231 from PredatorIWD/fix-crash-on-symlinks
Don't crash completely if a symlink resolve fails
2025-06-19 18:34:03 +08:00
pixelpaws
a61819cdb3 Merge branch 'main' into fix-crash-on-symlinks 2025-06-19 18:33:40 +08:00
Will Miao
e986fbb5fb refactor: Streamline progress file handling and enhance metadata extraction for images 2025-06-19 18:12:16 +08:00
Will Miao
8f4d575ec8 refactor: Improve metadata handling and streamline example image loading in modals 2025-06-19 17:07:28 +08:00
Will Miao
605a06317b feat: Enhance media handling by adding NSFW level support and improving preview image management 2025-06-19 15:19:24 +08:00
Will Miao
a7304ccf47 feat: Add deepMerge method for improved object merging in VirtualScroller 2025-06-19 12:46:50 +08:00
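The `deepMerge` added here lives in the JavaScript `VirtualScroller`; purely to illustrate the recursive-merge technique itself, a Python sketch:

```python
# Recursive merge: nested dicts are merged key by key, everything else
# in `override` wins. Illustrates the technique, not the JS code.
def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # merge nested dicts
        else:
            merged[key] = value  # override scalars and lists wholesale
    return merged

deep_merge({"a": {"x": 1}}, {"a": {"y": 2}})  # {'a': {'x': 1, 'y': 2}}
```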
Will Miao
374e2bd4b9 refactor: Add MediaRenderers, MediaUtils, MetadataPanel, and ShowcaseView components for enhanced media handling in showcase
- Implemented MediaRenderers.js to generate HTML for video and image wrappers, including NSFW handling and media controls.
- Created MediaUtils.js for utility functions to manage media loading, lazy loading, and metadata panel interactions.
- Developed MetadataPanel.js to generate metadata panels for media items, including prompts and generation parameters.
- Introduced ShowcaseView.js to render showcase content, manage media items, and handle file imports with drag-and-drop support.
2025-06-19 11:21:32 +08:00
Will Miao
09a3246ddb Add delete functionality for custom example images with API endpoint 2025-06-19 11:21:00 +08:00
Will Miao
a615603866 Prevent Ctrl+A behavior in modals by checking for open modals before handling the key event 2025-06-18 18:43:11 +08:00
Will Miao
1ca05808e1 Enhance preview image upload by deleting existing previews and updating UI state management 2025-06-18 18:37:13 +08:00
Will Miao
5febc2a805 Add update indicator and animation for updated cards in VirtualScroller 2025-06-18 17:30:49 +08:00
Will Miao
3c047bee58 Refactor example images handling by introducing migration logic, updating metadata structure, and enhancing image loading in the UI 2025-06-18 17:14:49 +08:00
Will Miao
022c6c157a Refactor example images code 2025-06-18 09:28:00 +08:00
Will Miao
fa587d5678 Refactor modal components by removing unused imports and commenting out cache management section in modals.html 2025-06-17 21:06:01 +08:00
Will Miao
afa5a42f5a Refactor metadata handling by introducing MetadataManager for centralized operations and improving error handling 2025-06-17 21:01:48 +08:00
Will Miao
71df8ba3e2 Refactor metadata handling by removing direct UI updates from saveModelMetadata and related functions 2025-06-17 20:25:39 +08:00
Will Miao
8764998e8c Update example images optimization message to clarify metadata preservation 2025-06-16 23:26:55 +08:00
Will Miao
2cb4f3aac8 Add example images access modal and API integration for checking image availability. Fixes #183 and #209 2025-06-16 21:33:49 +08:00
Will Miao
1ccaf33aac Refactor example images management by removing centralized examples settings and migration functionality 2025-06-16 18:29:37 +08:00
Will Miao
cb0a8e0413 Implement example image import functionality with UI and backend integration 2025-06-16 18:14:53 +08:00
Luka Celebic
8674168df4 Don't crash completely if a symlink resolve fails 2025-06-15 20:00:21 +02:00
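A sketch of the guarded-resolve idea the commit title describes (an assumed shape, not the actual patch): an unresolvable symlink is logged and skipped instead of aborting the whole scan.

```python
# Skip unresolvable paths (e.g. broken symlinks) instead of letting a
# single OSError crash the whole model scan. Assumed shape of the fix.
import logging
from pathlib import Path

def resolve_or_skip(path: Path):
    try:
        return path.resolve(strict=True)
    except OSError as exc:  # broken symlink, permission error, ...
        logging.warning("Skipping unresolvable path %s: %s", path, exc)
        return None
```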
Will Miao
2221653801 Add bulk selection functionality and limit thumbnail display in BulkManager. See #229 2025-06-15 22:21:21 +08:00
Will Miao
78bcdcef5d Enhance CivitAI metadata fetch handling and update virtual scroller item management. See #227 2025-06-15 08:34:22 +08:00
Will Miao
672fbe2ac0 Remove unused and outdated code to improve clarity 2025-06-15 06:18:47 +08:00
Will Miao
56a5970b44 Adjust NSFW warning styles for medium and compact density modes 2025-06-14 19:49:54 +08:00
Will Miao
a66cef7cfe Increase max-height for model names in medium and compact density modes to prevent text cutoff 2025-06-14 19:30:46 +08:00
Will Miao
c0b1c2e099 Remove commented-out Civitai context menu item from checkpoints and context menu templates 2025-06-14 18:13:37 +08:00
Will Miao
9e553bb87b Refactor card update functions to unify model and Lora card handling; remove unused metadata path update logic. See #228 2025-06-14 09:39:59 +08:00
Will Miao
f966514bc7 Add tag editing functionality and update compact tags rendering 2025-06-13 20:42:44 +08:00
Will Miao
dc0a49f96d Refactor trigger words and metadata editing styles
- Removed outdated styles from trigger words CSS and consolidated into a new shared edit-metadata CSS file.
- Updated JavaScript components for trigger words and model tags to utilize the new metadata styles.
- Adjusted class names and structure in the HTML to align with the new styling conventions.
- Enhanced the UI for editing tags and trigger words, ensuring consistency across components.
2025-06-13 20:19:10 +08:00
Will Miao
65c783c024 Refactor lora-modal.css into modular components 2025-06-13 15:10:26 +08:00
Will Miao
6395836fbb Add styles for empty tags and update tag rendering logic to always display container 2025-06-13 07:11:07 +08:00
Will Miao
a7207084ef Remove unused monitor cleanup logic from LoraManager and DownloadManager 2025-06-13 05:52:52 +08:00
Will Miao
27ef1f1e71 Refactor tag editing setup: improve event handler management for edit and save buttons 2025-06-13 05:46:53 +08:00
Will Miao
68fdb14cd6 Remove unused lora monitor retrieval and ignore path logic from ApiRoutes, DownloadManager, and ModelScanner. Fixes #226 2025-06-13 05:46:22 +08:00
Will Miao
c2af282a85 Add tag editing functionality: implement UI for editing model tags, including save and delete options, and integrate with existing modal structure. 2025-06-12 21:00:17 +08:00
Will Miao
92d48335cb Add endpoints and functionality for verifying duplicates in Lora and Checkpoints
- Implemented `/api/loras/verify-duplicates` and `/api/checkpoints/verify-duplicates` endpoints.
- Added `handle_verify_duplicates` method in `ModelRouteUtils` to process duplicate verification requests.
- Enhanced `ModelDuplicatesManager` to manage verification state and display results.
- Updated CSS for verification badges and hash mismatch indicators. Fixes #221
2025-06-12 12:06:01 +08:00
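Only the two route paths above come from the commit message; the handler below is a hedged aiohttp sketch of what a verify-duplicates endpoint could look like, with the request shape and `_sha256` helper as assumptions.

```python
# Hedged sketch of a verify-duplicates endpoint: re-hash each submitted
# file and compare against its cached hash. Route paths come from the
# commit message; the handler body and helpers are assumptions.
import asyncio
import hashlib
from aiohttp import web

def _sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

async def handle_verify_duplicates(request: web.Request) -> web.Response:
    data = await request.json()
    results = []
    for item in data.get("files", []):  # assumed: [{"path": ..., "sha256": ...}]
        digest = await asyncio.to_thread(_sha256, item["path"])
        results.append({"path": item["path"],
                        "verified": digest == item.get("sha256")})
    return web.json_response({"success": True, "results": results})

app = web.Application()
app.router.add_post("/api/loras/verify-duplicates", handle_verify_duplicates)
app.router.add_post("/api/checkpoints/verify-duplicates", handle_verify_duplicates)
```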
Will Miao
78cac2edc2 Add DoRA type support. move VALID_LORA_TYPES to utils.constants and update imports in recipe parsers and API routes. 2025-06-12 09:25:00 +08:00
Will Miao
26d105c439 Enhance Civitai model handling: add get_model_version method for detailed metadata retrieval, update routes to utilize new method, and improve URL handling in context menu for model re-linking. 2025-06-11 22:06:16 +08:00
Will Miao
7fec107b98 Refactor context menus to use ModelContextMenuMixin for shared functionality
- Introduced ModelContextMenuMixin to encapsulate shared methods for Lora and Checkpoint context menus.
- Updated CheckpointContextMenu to utilize the mixin for common actions and NSFW level handling.
- Simplified LoraContextMenu by integrating the mixin, removing redundant methods.
- Removed duplicated NSFW handling logic and centralized it in the mixin.
- Adjusted import/export statements to reflect the new structure and ensure proper functionality.
2025-06-11 20:52:45 +08:00
Will Miao
eb01ad3af9 Refactor model response inclusion to only include groups with multiple models; update model removal logic to accept hash value. See #221 2025-06-11 19:52:44 +08:00
Will Miao
e0d9880b32 Remove duplicate hash entries with a single path in get_duplicate_hashes method 2025-06-11 17:33:13 +08:00
Will Miao
e81e96f0ab Refactor file monitoring and model scanning; remove unused monitors and streamline model file deletion process. 2025-06-11 17:02:10 +08:00
Will Miao
06d5bd259c Refactor model file processing in ModelScanner to determine root paths and enhance error logging for missing roots. 2025-06-11 15:53:35 +08:00
Will Miao
14238b8d62 Update preview URL handling in load_metadata function to reflect model location changes. See #113 2025-06-11 15:43:12 +08:00
Will Miao
3b51886927 Add cache file control to ModelScanner; implement flags to enable/disable cache usage and clear cache files accordingly. See #222 2025-06-11 09:17:10 +08:00
Will Miao
a295ff2e06 Refactor video embed implementation to enhance privacy and user experience; replace iframe with a privacy-friendly video container and add external link buttons for YouTube access. 2025-06-10 06:44:08 +08:00
Will Miao
18cdaabf5e Update release notes and version to v0.8.17, adding new features including duplicate model detection, enhanced URL recipe imports, and improved trigger word control. 2025-06-09 19:07:53 +08:00
Will Miao
787e37b7c6 Add CivitAI re-linking functionality and related UI components. Fixes #216
- Implemented new API endpoints for re-linking models to CivitAI.
- Added context menu options for re-linking in both Lora and Checkpoint context menus.
- Created a modal for user confirmation and input for CivitAI model URL.
- Updated styles for the new modal and context menu items.
- Enhanced error handling and user feedback during the re-linking process.
2025-06-09 17:23:03 +08:00
Will Miao
4e5c8b2dd0 Add help modal functionality and update related UI components 2025-06-09 14:55:18 +08:00
Will Miao
d8ddacde38 Remove 'folder' field from model metadata before saving to file. See #211 2025-06-09 11:26:24 +08:00
Will Miao
bb1e42f0d3 Add restart required icon to example images download location label. See #212 2025-06-08 20:43:10 +08:00
pixelpaws
923669c495 Merge pull request #213 from willmiao/migrate-images
Migrate images
2025-06-08 20:11:37 +08:00
Will Miao
7a4139544c Add method to update model metadata from local example images. Fixes #211 2025-06-08 20:10:36 +08:00
Will Miao
4d6ea0236b Add centralized example images setting and update related UI components 2025-06-08 17:38:46 +08:00
Will Miao
e872a06f22 Refactor MiscRoutes and move example images related api to ExampleImagesRoutes 2025-06-08 14:40:30 +08:00
Will Miao
647bda2160 Add API endpoint and frontend integration for fetching example image files 2025-06-07 22:31:57 +08:00
Will Miao
c1e93d23f3 Merge branch 'migrate-images' of https://github.com/willmiao/ComfyUI-Lora-Manager into migrate-images 2025-06-07 11:32:55 +08:00
Will Miao
c96550cc68 Enhance migration and download processes: add backend path update and prevent duplicate completion toasts 2025-06-07 11:29:53 +08:00
Will Miao
b1015ecdc5 Add migration functionality for example images: implement API endpoint and UI controls 2025-06-07 11:27:25 +08:00
Will Miao
f1b928a037 Add migration functionality for example images: implement API endpoint and UI controls 2025-06-07 09:34:07 +08:00
Will Miao
16c312c90b Fix version description not showing. Fixes #210 2025-06-07 01:29:38 +08:00
Will Miao
110ffd0118 Refactor modal close behavior: ensure consistent handling of closeOnOutsideClick option across multiple modals. 2025-06-06 10:32:18 +08:00
Will Miao
35ad872419 Enhance duplicates management: add help tooltip for duplicate groups and improve responsive styling for banners and groups. 2025-06-05 15:06:53 +08:00
Will Miao
9b943cf2b8 Update custom node icon 2025-06-05 06:48:48 +08:00
Will Miao
9d1b357e64 Enhance cache validation logic: add logging for version and model type mismatches, and relax directory structure checks to improve cache validity. 2025-06-04 20:47:14 +08:00
Will Miao
9fc2fb4d17 Enhance model caching and exclusion functionality: update cache version, add excluded models to cache data, and ensure cache is saved to disk after model exclusion and deletion. 2025-06-04 18:38:45 +08:00
Will Miao
641fa8a3d9 Enhance duplicates mode functionality: add toggle for entering/exiting mode, improve exit button styling, and manage control button states during duplicates mode. 2025-06-04 16:46:57 +08:00
Will Miao
add9269706 Enhance duplicate mode exit logic: hide duplicates banner, clear model grid, and re-enable virtual scrolling. Improve spacer element handling in VirtualScroller by recreating it if not found in the DOM. 2025-06-04 16:05:57 +08:00
Will Miao
1a01c4a344 Refactor trigger words UI handling: improve event listener management, restore original words on cancel, and enhance dropdown update logic. See #147 2025-06-04 15:02:13 +08:00
Will Miao
b4e7feed06 Enhance trained words extraction and display: include class tokens in response and update UI accordingly. See #147 2025-06-04 12:04:38 +08:00
Will Miao
4b96c650eb Enhance example image handling: improve filename extraction and fallback for local images 2025-06-04 11:30:56 +08:00
Will Miao
107aef3785 Enhance SaveImage and TriggerWordToggle: add tooltips for parameters to improve user guidance 2025-06-03 19:40:01 +08:00
Will Miao
b49807824f Fix optimizeExampleImages setting in SettingsManager 2025-06-03 18:10:43 +08:00
Will Miao
e5ef2ef8b5 Add default_active parameter to TriggerWordToggle for controlling default state 2025-06-03 17:45:52 +08:00
Will Miao
88779ed56c Enhance Lora Manager widget: add configurable window size for Shift+Click behavior 2025-06-03 16:25:31 +08:00
Will Miao
8b59fb6adc Refactor ShowcaseView and uiHelpers for improved image/video handling
- Moved getLocalExampleImageUrl function to uiHelpers.js for better modularity.
- Updated ShowcaseView.js to utilize the new structure for local and fallback URLs.
- Enhanced lazy loading functions to support both primary and fallback URLs for images and videos.
- Simplified metadata panel generation in ShowcaseView.js.
- Improved showcase toggle functionality and added initialization for lazy loading and metadata handlers.
2025-06-03 16:06:54 +08:00
Will Miao
7945647b0b Refactor core application and recipe manager: remove lazy loading functionality and clean up imports in uiHelpers. 2025-06-03 15:40:51 +08:00
Will Miao
2d39b84806 Add CivitaiApiMetadataParser and improve recipe parsing logic for Civitai images. Also fixes #197
Additional info: Now prioritizes using the Civitai Images API to fetch image and generation metadata. Even NSFW images can now be imported via URL.
2025-06-03 14:58:43 +08:00
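The Civitai Images API mentioned here is `https://civitai.com/api/v1/images`; a hedged fetch sketch follows, where the exact lookup query parameters are an assumption:

```python
# Hedged sketch: fetch generation metadata for one image through the
# Civitai Images API. The imageId/nsfw query parameters are assumptions.
import requests

def fetch_image_meta(image_id: int):
    resp = requests.get(
        "https://civitai.com/api/v1/images",
        params={"imageId": image_id, "nsfw": "X"},  # assumed params
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    # Each item carries a `meta` dict with the prompt and generation settings.
    return items[0].get("meta") if items else None
```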
Will Miao
e151a19fcf Implement bulk operations for LoRAs: add send to workflow and bulk delete functionality with modal confirmation. 2025-06-03 07:44:52 +08:00
Will Miao
99d2ba26b9 Add API endpoint for fetching trained words and implement dropdown suggestions in the trigger words editor. See #147 2025-06-02 17:04:33 +08:00
Will Miao
396924f4cc Add badge for duplicate count and update logic in ModelDuplicatesManager and PageControls 2025-06-02 09:42:28 +08:00
Will Miao
7545312229 Add bulk delete endpoint for checkpoints and enhance ModelDuplicatesManager for better handling of model types 2025-06-02 08:54:31 +08:00
Will Miao
26f9779fbf Add bulk delete functionality for loras and implement model duplicates management. See #198
- Introduced a new API endpoint for bulk deleting loras.
- Added ModelDuplicatesManager to handle duplicate models for loras and checkpoints.
- Implemented UI components for displaying duplicates and managing selections.
- Enhanced controls with a button for finding duplicates.
- Updated templates to include a duplicates banner and associated actions.
2025-06-02 08:08:45 +08:00
Will Miao
0bd62eef3a Add endpoints for finding duplicate loras and filename conflicts; implement tracking for duplicates in ModelHashIndex and update ModelScanner to handle new data structures. 2025-05-31 20:50:51 +08:00
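A data-structure sketch of the duplicate tracking this commit introduces (illustrative, not the real `ModelHashIndex`): map each SHA-256 to every path that carries it, and treat only multi-path groups as duplicates.

```python
# Illustrative duplicate tracking: hashes with more than one file path
# form a duplicate group; single-path entries are not duplicates.
from collections import defaultdict

class HashIndex:
    def __init__(self):
        self._hash_to_paths = defaultdict(list)  # sha256 -> [paths]

    def add(self, sha256: str, path: str) -> None:
        if path not in self._hash_to_paths[sha256]:
            self._hash_to_paths[sha256].append(path)

    def get_duplicate_hashes(self):
        return {h: ps for h, ps in self._hash_to_paths.items() if len(ps) > 1}
```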
Will Miao
e06d15f508 Remove LoraHashIndex class and related functionality to streamline codebase. 2025-05-31 20:25:12 +08:00
Will Miao
aa1ee96bc9 Add versioning and history tracking to usage statistics. Implement backup and conversion for old stats format, enhancing data structure for checkpoints and loras. 2025-05-31 16:38:18 +08:00
Will Miao
355c73512d Enhance modal close behavior by tracking mouse events on the background. Implement logic to close modals only if mouseup occurs on the background after mousedown, improving user experience. 2025-05-31 08:53:20 +08:00
Will Miao
0daf9d92ff Update version to 0.8.16 and enhance release notes with new features, improvements, and bug fixes. 2025-05-30 21:04:24 +08:00
Will Miao
37de26ce25 Enhance Lora code update handling for browser and desktop modes. Implement broadcast support for Lora Loader nodes and improve node ID management in the workflow. 2025-05-30 20:12:38 +08:00
Will Miao
0eaef7e7a0 Refactor extension name for consistency in usage statistics tracking 2025-05-30 17:30:29 +08:00
Will Miao
8063cee3cd Add rename functionality for checkpoint and LoRA files with loading indicators 2025-05-30 16:38:18 +08:00
Will Miao
cbb25b4ac0 Enhance model metadata saving functionality with loading indicators and improved validation. Refactor editing logic for better user experience in both checkpoint and LoRA modals. Fixes #200 2025-05-30 16:30:01 +08:00
Will Miao
c62206a157 Add preprocessing for MessagePack serialization to handle large integers. See #201 2025-05-30 10:55:48 +08:00
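Background for this one: MessagePack integers are limited to 64 bits, so `msgpack.packb(2**64)` raises `OverflowError`. A plausible preprocessing pass (the commit's actual strategy may differ) converts out-of-range values before packing:

```python
# Recursively convert integers outside the 64-bit range to strings so
# the structure becomes MessagePack-serializable. Assumed approach.
import msgpack

MAX_UINT64 = 2**64 - 1
MIN_INT64 = -(2**63)

def sanitize(obj):
    if isinstance(obj, int) and not (MIN_INT64 <= obj <= MAX_UINT64):
        return str(obj)  # too large for a msgpack int
    if isinstance(obj, dict):
        return {k: sanitize(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [sanitize(v) for v in obj]
    return obj

packed = msgpack.packb(sanitize({"seed": 2**70}), use_bin_type=True)
```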
Will Miao
09832141d0 Add functionality to open example images folder for models 2025-05-30 09:42:36 +08:00
Will Miao
bf8e121a10 Add functionality to copy LoRA syntax and update event handling for copy action 2025-05-30 09:02:17 +08:00
Will Miao
68568073ec Refactor model caching logic to streamline adding models and ensure disk persistence 2025-05-30 07:34:39 +08:00
Will Miao
ec36524c35 Add Civitai image URL optimization and simplify image processing logic 2025-05-29 22:20:16 +08:00
Will Miao
67acd9fd2c Relax cache validation by removing strict modification time checks, allowing users to refresh the cache as needed. 2025-05-29 20:58:06 +08:00
Will Miao
f7be5c8d25 Change log level to info for cache save operation and ensure cache is saved to disk after updating preview URL 2025-05-29 20:09:58 +08:00
Will Miao
ceacac75e0 Increase minimum width of dropdown menu for improved usability 2025-05-29 15:55:14 +08:00
Will Miao
bae66f94e8 Add full rebuild option to model refresh functionality and enhance dropdown controls 2025-05-29 15:51:45 +08:00
Will Miao
ddf132bd78 Add cache management feature: implement clear cache API and modal confirmation 2025-05-29 14:36:13 +08:00
Will Miao
afb012029f Enhance get_cached_data method: improve cache rebuilding logic and ensure cache is saved after initialization 2025-05-29 08:50:17 +08:00
Will Miao
651e14c8c3 Enhance get_cached_data method: add rebuild_cache option for improved cache management 2025-05-29 08:36:18 +08:00
Will Miao
e7c626eb5f Add MessagePack support for efficient cache serialization and update dependencies 2025-05-28 22:30:06 +08:00
pixelpaws
a0b0d40a19 Update README.md 2025-05-27 22:28:26 +08:00
Will Miao
42e3ab9e27 Update tutorial links in README: replace outdated video links with the latest tutorial 2025-05-27 19:24:22 +08:00
Will Miao
6e5f333364 Enhance model file moving logic: support moving associated files and handle metadata paths 2025-05-27 05:41:39 +08:00
Will Miao
f33a9abe60 Limit Lora hash display to first 10 characters and improve WebP metadata handling 2025-05-22 16:29:12 +08:00
Will Miao
7f1bbdd615 Remove debug print statement for primary sampler ID in MetadataProcessor 2025-05-22 16:01:55 +08:00
Will Miao
d3bf8eaceb Add container padding properties to VirtualScroller and adjust card padding 2025-05-22 15:23:32 +08:00
Will Miao
b9c9d602de Enhance download modals: auto-focus on URL input and auto-select version if only one available 2025-05-22 11:07:52 +08:00
Will Miao
b25fbd6e24 Refactor modal styles: remove model name field and adjust margin for modal content header 2025-05-22 10:02:13 +08:00
Will Miao
6052608a4e Update version to 0.8.15-bugfix in pyproject.toml 2025-05-22 04:42:12 +08:00
Will Miao
a073b82751 Enhance WebP image saving: add EXIF data and workflow metadata support. Fixes #193 2025-05-21 19:17:12 +08:00
Will Miao
8250acdfb5 Add creator information display to Lora and Checkpoint modals. #186 2025-05-21 15:31:23 +08:00
Will Miao
8e1f73a34e Refactor display density settings: replace compact mode with display density option and update related UI components 2025-05-20 19:35:41 +08:00
Will Miao
50704bc882 Enhance error handling and input validation in fetch_and_update_model method 2025-05-20 13:57:22 +08:00
Will Miao
35d34e3513 Revert db0b49c427 Refactor load_metadata to use save_metadata for updating metadata files 2025-05-19 21:46:01 +08:00
Will Miao
ea834f3de6 Revert "Enhance metadata processing in ModelScanner: prevent intermediate writes, restore missing civitai data, and ensure base_model consistency. #185"
This reverts commit 99b36442bb.
2025-05-19 21:39:31 +08:00
Will Miao
11aedde72f Fix save_metadata call to await asynchronous execution in load_metadata function. Fixes #192 2025-05-19 15:01:56 +08:00
Will Miao
488654abc8 Improve card layout responsiveness and scrolling behavior 2025-05-18 07:49:39 +08:00
Will Miao
da1be0dc65 Merge branch 'main' of https://github.com/willmiao/ComfyUI-Lora-Manager 2025-05-17 15:40:23 +08:00
Will Miao
d0c728a339 Enhance node tracing logic and improve prompt handling in metadata processing. See #189 2025-05-17 15:40:05 +08:00
pixelpaws
66c66c4d9b Update README.md 2025-05-16 17:08:23 +08:00
Will Miao
4882721387 Update version to 0.8.15 and add release notes for enhanced features and improvements 2025-05-16 16:13:37 +08:00
Will Miao
06a8850c0c Add more wiki images 2025-05-16 15:54:52 +08:00
Will Miao
370aa06c67 Refactor duplicates banner styles for improved layout and responsiveness 2025-05-16 15:47:08 +08:00
Will Miao
c9fa0564e7 Update images 2025-05-16 11:36:37 +08:00
Will Miao
2ba7a0ceba Add keyboard navigation support and related styles for enhanced user experience 2025-05-15 20:17:57 +08:00
Will Miao
276aedfbb9 Set 'from_civitai' flag to True when updating local metadata with CivitAI data 2025-05-15 16:50:32 +08:00
Will Miao
c193c75674 Fix misleading error message for invalid civitai api key or early access deny 2025-05-15 13:46:46 +08:00
Will Miao
a562ba3746 Fix TriggerWord Toggle not updating when all LoRAs are disabled 2025-05-15 10:30:46 +08:00
Will Miao
2fedd572ff Add header drag functionality for proportional strength adjustment of LoRAs 2025-05-15 10:12:46 +08:00
Will Miao
db0b49c427 Refactor load_metadata to use save_metadata for updating metadata files 2025-05-15 09:49:30 +08:00
Will Miao
03a6f8111c Add functionality to copy and send LoRA/Recipe syntax to workflow
- Implemented copy functionality for LoRA and Recipe syntax in context menus.
- Added options to send LoRA and Recipe to workflow in both append and replace modes.
- Updated HTML templates to include new context menu items for sending actions.
2025-05-15 07:01:50 +08:00
Will Miao
925ad7b3e0 Add user-select: none to prevent text selection on cards and control elements 2025-05-15 05:36:56 +08:00
Will Miao
bf793d5b8b Refactor Lora and Recipe card event handling: replace copy functionality with direct send to ComfyUI workflow, update UI elements, and enhance sendLoraToWorkflow to support recipe syntax. 2025-05-14 23:51:00 +08:00
Will Miao
64a906ca5e Add Lora syntax send to comfyui functionality: implement API endpoint and frontend integration for sending and updating LoRA codes in ComfyUI nodes. 2025-05-14 21:09:36 +08:00
Will Miao
99b36442bb Enhance metadata processing in ModelScanner: prevent intermediate writes, restore missing civitai data, and ensure base_model consistency. #185 2025-05-14 19:16:58 +08:00
Will Miao
3c5164d510 Update screenshot 2025-05-13 22:56:51 +08:00
Will Miao
ec4b5a4d45 Update release notes and version to v0.8.14: add virtualized scrolling, compact display mode, and enhanced LoRA node functionality. 2025-05-13 22:50:32 +08:00
Will Miao
78e1901779 Add compact mode settings and styles for improved layout control. Fixes #33 2025-05-13 21:40:37 +08:00
Will Miao
cb539314de Ensure full LoRA node chain is considered when updating TriggerWord Toggle nodes 2025-05-13 20:33:52 +08:00
Will Miao
c7627fe0de Remove no longer needed ref files. 2025-05-13 17:57:59 +08:00
Will Miao
84bfad7ce5 Enhance model deletion handling in UI: integrate virtual scroller updates and remove legacy UI card removal logic. 2025-05-13 17:50:28 +08:00
Will Miao
3e06938b05 Add enableDataWindowing option to VirtualScroller for improved control over data fetching. (Disable data windowing for now) 2025-05-13 17:13:17 +08:00
Will Miao
4f712fec14 Reduce default delay in model processing from 0.2 to 0.1 seconds for improved responsiveness. 2025-05-13 15:30:09 +08:00
Will Miao
c5c9659c76 Update refreshModels to pass folder update flag to resetAndReloadFunction 2025-05-13 15:25:40 +08:00
Will Miao
d6e175c1f1 Add API endpoints for retrieving LoRA notes and trigger words; enhance context menu with copy options. Supports #177 2025-05-13 15:14:25 +08:00
Will Miao
88088e1071 Restructure the code of loras_widget into smaller, more manageable modules. 2025-05-13 14:42:28 +08:00
Will Miao
958ddbca86 Fix workaround for saved value retrieval in Loras widget to address custom nodes issue. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/176 2025-05-13 12:27:18 +08:00
Will Miao
6670fd28f4 Add sync functionality for clipStrength when collapsed in Loras widget. https://github.com/willmiao/ComfyUI-Lora-Manager/issues/176 2025-05-13 11:45:13 +08:00
pixelpaws
1e59c31de3 Merge pull request #184 from willmiao/vscroll
Add virtual scroll
2025-05-12 22:27:40 +08:00
Will Miao
c966dbbbbc Enhance DuplicatesManager and VirtualScroller to manage virtual scrolling state and improve rendering logic 2025-05-12 21:31:03 +08:00
Will Miao
af8f5ba04e Implement client-side placeholder handling for empty recipe grid and remove server-side conditional rendering 2025-05-12 21:20:28 +08:00
Will Miao
b741ed0b3b Refactor recipe and checkpoint management to implement virtual scrolling and improve state handling 2025-05-12 20:07:47 +08:00
Will Miao
01ba3c14f8 Implement virtual scrolling for model loading and checkpoint management 2025-05-12 17:47:57 +08:00
Will Miao
d13b1a83ad checkpoint 2025-05-12 16:44:45 +08:00
Will Miao
303477db70 update 2025-05-12 14:50:10 +08:00
Will Miao
311e89e9e7 checkpoint 2025-05-12 13:59:11 +08:00
Will Miao
8546cfe714 checkpoint 2025-05-12 10:25:58 +08:00
Will Miao
e6f4d84b9a Merge branch 'main' of https://github.com/willmiao/ComfyUI-Lora-Manager 2025-05-11 18:50:53 +08:00
Will Miao
ce7e422169 Revert "refactor: streamline LoraCard event handling and implement virtual scrolling for improved performance"
This reverts commit 5dd8d905fa.
2025-05-11 18:50:19 +08:00
pixelpaws
e5aec80984 Merge pull request #179 from jakerdy/patch-1
[Fix] `/api/chekcpoints/info/{name}` change misspelled method call
2025-05-11 17:10:40 +08:00
Jak Erdy
6d97817390 [Fix] /api/chekcpoints/info/{name} change misspelled method call
If you call:
`http://127.0.0.1:8188/api/checkpoints/info/some_name`
You will get an error that there is no method `get_checkpoint_info_by_name` in `scanner`.
Looks like it wasn't fixed after refactoring or something. Now it works as expected.
2025-05-10 17:38:10 +07:00
Will Miao
d516f22159 Merge branch 'main' of https://github.com/willmiao/ComfyUI-Lora-Manager 2025-05-10 07:34:06 +08:00
pixelpaws
e918c18ca2 Create FUNDING.yml 2025-05-09 20:17:35 +08:00
Will Miao
5dd8d905fa refactor: streamline LoraCard event handling and implement virtual scrolling for improved performance 2025-05-09 16:33:34 +08:00
Will Miao
1121d1ee6c Revert "update"
This reverts commit 4793f096af.
2025-05-09 16:14:10 +08:00
Will Miao
4793f096af update 2025-05-09 15:42:56 +08:00
Will Miao
7b5b4ce082 refactor: enhance CFGGuider handling and add CFGGuiderExtractor for improved metadata extraction. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/172 2025-05-09 13:50:22 +08:00
Will Miao
fa08c9c3e4 Update version to 0.8.13; enhance recipe management and source tracking features in release notes 2025-05-09 11:38:46 +08:00
pixelpaws
d0d5eb956a Merge pull request #174 from willmiao/dev
Dev
2025-05-09 11:06:47 +08:00
Will Miao
969f949330 refactor(lora-loader, lora-stacker, loras-widget): enhance handling of model and clip strengths; update formatting and UI interactions. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/171 2025-05-09 11:05:59 +08:00
Will Miao
9169bbd04d refactor(widget-serialization): remove dummy items from serialization which was a fix to ComfyUI issues 2025-05-08 20:25:26 +08:00
Will Miao
99463ad01c refactor(import-modal): remove outdated duplicate styles and clean up modal button layout 2025-05-08 20:16:25 +08:00
pixelpaws
f1d6b0feda Merge pull request #173 from willmiao/dev
Dev
2025-05-08 18:33:52 +08:00
Will Miao
e33da50278 refactor: update duplicate recipe management; simplify UI and remove deprecated functions 2025-05-08 18:33:19 +08:00
Will Miao
4034eb3221 feat: implement duplicate recipe detection and management; add UI for marking duplicates for deletion 2025-05-08 17:29:58 +08:00
Will Miao
75a95f0109 refactor: enhance recipe fingerprint calculation and return detailed recipe information; remove unnecessary console logs in import managers 2025-05-08 16:54:49 +08:00
Will Miao
92fdc16fe6 feat(modals): implement duplicate delete confirmation modal and enhance deletion workflow 2025-05-08 16:17:52 +08:00
Will Miao
23fa2995c8 refactor(import): Implement DownloadManager, FolderBrowser, ImageProcessor, and RecipeDataManager for enhanced recipe import functionality
- Added DownloadManager to handle saving recipes and downloading missing LoRAs.
- Introduced FolderBrowser for selecting LoRA root directories and managing folder navigation.
- Created ImageProcessor for handling image uploads and URL inputs for recipe analysis.
- Developed RecipeDataManager to manage recipe details, including metadata and LoRA information.
- Implemented ImportStepManager to control the flow of the import process and manage UI steps.
- Added utility function for formatting file sizes for better user experience.
2025-05-08 15:41:13 +08:00
Will Miao
59aefdff77 feat: implement duplicate detection and management features; add UI components and styles for duplicates 2025-05-08 15:13:14 +08:00
Will Miao
e92ab9e3cc refactor: add endpoints for finding duplicates and bulk deletion of recipes; enhance fingerprint calculation and handling 2025-05-07 19:34:27 +08:00
Will Miao
e3bf1f763c refactor: remove workflow parsing module and associated files for cleanup 2025-05-07 17:13:30 +08:00
Will Miao
1c6e9d0b69 refactor: enhance hash processing in AutomaticMetadataParser for improved key handling 2025-05-07 05:29:16 +08:00
Will Miao
bfd4eb3e11 refactor: update import paths for config in AutomaticMetadataParser and RecipeFormatParser. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/168 2025-05-07 04:39:06 +08:00
Will Miao
c9f902a8af Refactor recipe metadata parser package for ComfyUI-Lora-Manager
- Implemented the base class `RecipeMetadataParser` for parsing recipe metadata from user comments.
- Created a factory class `RecipeParserFactory` to instantiate appropriate parser based on user comment content.
- Developed multiple parser classes: `ComfyMetadataParser`, `AutomaticMetadataParser`, `MetaFormatParser`, and `RecipeFormatParser` to handle different metadata formats.
- Introduced constants for generation parameters and valid LoRA types.
- Enhanced error handling and logging throughout the parsing process.
- Added functionality to populate LoRA and checkpoint information from Civitai API responses.
- Structured the output of parsed metadata to include prompts, LoRAs, generation parameters, and model information.
2025-05-06 21:11:25 +08:00
Will Miao
0b67510ec9 refactor: remove StandardMetadataParser and ImageSaverMetadataParser, integrate AutomaticMetadataParser for improved metadata handling 2025-05-06 17:51:44 +08:00
Will Miao
b5cd320e8b Update 'natsort' to dependencies in pyproject.toml 2025-05-06 08:59:48 +08:00
pixelpaws
deb25b4987 Merge pull request #166 from Rauks/add-natural-sort
fix: use natural sorting when sorting by name
2025-05-06 08:58:19 +08:00
pixelpaws
4612da264a Merge pull request #167 from willmiao/dev
Dev
2025-05-06 08:28:20 +08:00
Karl Woditsch
59b67e1e10 fix: use natural sorting when sorting by name 2025-05-05 22:25:50 +02:00
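What the natural-sort fix changes, using the `natsort` package added to `pyproject.toml` in the commit above:

```python
# Lexicographic vs. natural ordering of model filenames.
from natsort import natsorted

names = ["model10.safetensors", "model2.safetensors", "model1.safetensors"]
sorted(names)     # ['model1...', 'model10...', 'model2...']  (10 before 2)
natsorted(names)  # ['model1...', 'model2...', 'model10...']
```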
Will Miao
5fad936b27 feat: implement recipe card update functionality after modal edits 2025-05-05 23:17:58 +08:00
Will Miao
e376a45dea refactor: remove unused source URL tooltip from RecipeModal component 2025-05-05 21:11:52 +08:00
Will Miao
fd593bb61d feat: add source URL functionality to recipe modal, including dynamic display and editing options 2025-05-05 20:50:32 +08:00
Will Miao
71b97d5974 fix: update recipe data structure to include source_path from metadata and improve loading messages 2025-05-05 18:15:59 +08:00
Will Miao
2b405ae164 fix: update load_metadata to set preview_nsfw_level based on civitai data. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/53 2025-05-05 15:46:37 +08:00
Will Miao
2fe4736b69 fix: update ImageSaverMetadataParser to improve metadata matching and parsing logic. https://github.com/willmiao/ComfyUI-Lora-Manager/issues/104 2025-05-05 14:41:56 +08:00
Will Miao
184f8ca6cf feat: add local image analysis functionality and update import modal for URL/local path input. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/140 2025-05-05 11:35:20 +08:00
Will Miao
1ff2019dde fix: update model type checks to include LoCon and lycoris in API routes. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/159 2025-05-05 07:48:08 +08:00
Will Miao
a3d8261686 fix: remove console log and update file extension handling for LoRA syntax. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/158 2025-05-04 08:52:35 +08:00
Will Miao
7d0600976e fix: enhance pointer event handling for progress panel visibility 2025-05-04 08:08:59 +08:00
Will Miao
e1e6e4f3dc feat: update version to 0.8.12 and enhance release notes in README 2025-05-03 17:21:21 +08:00
pixelpaws
fba2853773 Merge pull request #157 from willmiao/dev
Dev
2025-05-03 17:07:48 +08:00
Will Miao
48df7e1078 Refactor code structure for improved readability and maintainability 2025-05-03 17:06:57 +08:00
Will Miao
235dcd5fa6 feat: enhance metadata panel visibility handling in showcase view 2025-05-03 16:41:47 +08:00
Will Miao
2027db7411 feat: refactor model deletion functionality with confirmation modal 2025-05-03 16:31:17 +08:00
Will Miao
611dd33c75 feat: add model exclusion functionality frontend 2025-05-03 16:14:09 +08:00
Will Miao
ec1c92a714 feat: add model exclusion functionality with new API endpoints and metadata handling 2025-05-02 22:36:50 +08:00
Will Miao
6ac78156ac feat: comment out "View Details" option in context menus for checkpoints and recipes 2025-05-02 20:59:06 +08:00
pixelpaws
e94b74e92d Merge pull request #156 from willmiao/dev
Dev
2025-05-02 19:35:25 +08:00
Will Miao
2bbec47f63 feat: update WeChat and Alipay QR code to use WebP format for improved performance 2025-05-02 19:34:40 +08:00
pixelpaws
b5ddf4c953 Merge pull request #155 from Rauks/add-base-models
feat: Add "HiDream" and "LTXV" base models
2025-05-02 19:17:18 +08:00
Will Miao
44be75aeef feat: add WeChat and Alipay support section with QR code toggle functionality 2025-05-02 19:15:54 +08:00
Karl Woditsch
2c03759b5d feat: Add "HiDream" and "LTXV" base models 2025-05-02 11:56:10 +02:00
Will Miao
2e3da03723 feat: update metadata panel visibility logic to show on media hover and add rendering calculations 2025-05-02 17:53:15 +08:00
Will Miao
6e96fbcda7 feat: enhance alphabet bar with toggle functionality and visual indicators 2025-05-01 20:50:31 +08:00
Will Miao
d1fd5b7f27 feat: implement alphabet filtering feature with letter counts and UI components v1 2025-05-01 20:07:12 +08:00
Will Miao
9dbcc105e7 feat: add model metadata refresh functionality and enhance download progress tracking. https://github.com/willmiao/ComfyUI-Lora-Manager/issues/151 2025-05-01 18:57:29 +08:00
Will Miao
5cd5a82ddc feat: add creator information to model metadata handling 2025-05-01 15:56:57 +08:00
Will Miao
88c1892dc9 feat: enhance model metadata fetching to include creator information 2025-05-01 15:30:05 +08:00
Will Miao
3c1b181675 fix: enhance version comparison by ignoring suffixes in semantic version strings 2025-05-01 07:47:09 +08:00
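One way to ignore suffixes such as `-bugfix` when comparing semantic version strings (an assumed shape, not the actual fix):

```python
# Compare only the numeric core of a version string, dropping anything
# after it, so "0.8.11-bugfix" compares equal to "0.8.11".
import re

def version_tuple(version: str):
    core = re.match(r"(\d+(?:\.\d+)*)", version).group(1)
    return tuple(int(p) for p in core.split("."))

assert version_tuple("0.8.11-bugfix") == version_tuple("0.8.11")
assert version_tuple("0.8.12") > version_tuple("0.8.11-bugfix")
```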
Will Miao
6777dc16ca fix: update version to 0.8.11-bugfix in pyproject.toml 2025-05-01 06:19:03 +08:00
Will Miao
3833647dfe refactor: remove unused tkinter imports from misc_routes.py. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/150 2025-05-01 06:06:20 +08:00
Will Miao
b6c47f0cce feat: update version to 0.8.11 and add release notes for offline image support and download system improvements 2025-04-30 19:35:57 +08:00
Will Miao
d308c7ac60 feat: enhance A1111MetadataParser to improve metadata extraction and parsing logic. https://github.com/willmiao/ComfyUI-Lora-Manager/issues/148 2025-04-30 19:09:47 +08:00
Will Miao
947c757aa5 Revert the incorrect changes 2025-04-30 19:09:00 +08:00
pixelpaws
5ee5bd7d36 Merge pull request #149 from willmiao/dev
Dev
2025-04-30 16:05:38 +08:00
Will Miao
d9c4ae92cd Add GPL-3.0 license 2025-04-30 16:04:41 +08:00
Will Miao
e1efff19f0 feat: add mini progress circle to progress panel when collapsed 2025-04-30 15:42:01 +08:00
Will Miao
61f723a1f5 feat: add back-to-top button and update its positioning 2025-04-30 14:46:43 +08:00
Will Miao
b32756932b feat: initialize example images manager on app startup and streamline event listener setup 2025-04-30 14:17:39 +08:00
Will Miao
cb5e64d26b feat: enhance example images downloading by adding local file processing before remote download 2025-04-30 13:56:29 +08:00
Will Miao
f36febf10a fix: create independent session for downloading example images to prevent interference 2025-04-30 13:35:12 +08:00
Will Miao
26d9a9caa6 refactor: streamline example images download functionality and UI updates 2025-04-30 13:20:44 +08:00
Will Miao
cb876cf77e Implement saving model example images locally. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/88 2025-04-29 22:41:18 +08:00
Will Miao
4789711910 feat: enhance metadata processing by refining primary sampler selection and adding CLIPTextEncodeFlux extractor. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/146 2025-04-29 06:31:21 +08:00
Will Miao
4064980505 fix: update tutorial link for v0.8.10 release in README 2025-04-28 19:36:55 +08:00
pixelpaws
f9b8f2d22c Merge pull request #145 from mobedoor/main
Make workflow folder compatible with ComfyUI Browse Templates screen
2025-04-28 19:26:46 +08:00
mobedoor
6a95aadc53 Make workflow folder compatible with ComfyUI Browse Templates screen 2025-04-28 16:13:19 +05:00
Will Miao
f9f08f082d Update the installation instructions to include the one-click portable package option. 2025-04-28 18:38:24 +08:00
Will Miao
0817901bef feat: update README and pyproject.toml for v0.8.10 release; add standalone mode and portable edition features 2025-04-28 18:24:02 +08:00
Will Miao
ac22172e53 Update requirements for standalone mode 2025-04-28 15:14:11 +08:00
Will Miao
fd87fbf31e Update workflow 2025-04-28 07:08:35 +08:00
Will Miao
554be0908f feat: add dynamic filename format patterns for Save Image Node in README 2025-04-28 07:01:33 +08:00
Will Miao
eaec4e5f13 feat: update README and settings.json.example for standalone mode; enhance standalone.py to redirect status requests to loras page 2025-04-27 09:41:33 +08:00
Will Miao
0e7ba27a7d feat: enhance Civitai resource extraction in StandardMetadataParser for improved JSON handling. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/141 2025-04-26 22:12:40 +08:00
Will Miao
c551f5c23b feat: update README with standalone mode instructions and add settings.json.example file 2025-04-26 20:39:24 +08:00
pixelpaws
5159657ae5 Merge pull request #142 from willmiao/dev
Dev
2025-04-26 20:25:26 +08:00
Will Miao
d35db7df72 feat: add standalone mode for LoRA Manager with setup instructions 2025-04-26 20:23:27 +08:00
Will Miao
2b5399c559 feat: enhance folder path retrieval for diffusion models and improve warning messages 2025-04-26 20:08:00 +08:00
Will Miao
9e61bbbd8e feat: improve warning management by removing existing deleted LoRAs and early access warnings 2025-04-26 19:46:48 +08:00
Will Miao
7ce5857cd5 feat: implement standalone mode support with mock modules and path handling 2025-04-26 19:14:38 +08:00
Will Miao
38fbae99fd feat: limit maximum height of loras widget to accommodate up to 5 entries. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/109 2025-04-26 12:00:36 +08:00
Will Miao
b0a9d44b0c Add support for SamplerCustomAdvanced node in metadata extraction 2025-04-26 09:40:44 +08:00
Will Miao
b4e22cd375 feat: update release notes and version to 0.8.9 with new favorites system and UI enhancements 2025-04-25 22:13:16 +08:00
Will Miao
9bc92736a7 feat: enhance session management by ensuring freshness and optimizing connection parameters 2025-04-25 20:54:25 +08:00
pixelpaws
111b34d05c Merge pull request #138 from willmiao/dev
feat: implement theme management with auto-detection and user prefere…
2025-04-25 19:47:17 +08:00
Will Miao
07d9599a2f feat: implement theme management with auto-detection and user preference storage. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/137 2025-04-25 19:39:11 +08:00
pixelpaws
d8194f211d Merge pull request #136 from willmiao/dev
Dev
2025-04-25 17:56:26 +08:00
Will Miao
51a6374c33 feat: add favorites filtering functionality across models and UI components 2025-04-25 17:55:33 +08:00
Will Miao
aa6c6035b6 refactor: consolidate save model metadata functionality across APIs 2025-04-25 13:31:01 +08:00
Will Miao
44b4a7ffbb fix: update requirements to include 'toml' and correct pip install command in README. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/134 2025-04-25 10:26:01 +08:00
Will Miao
e5bb018d22 feat: integrate Font Awesome resources locally. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/131
- Replace CDN references with local resources
- Download and include Font Awesome CSS and webfonts in project
- Remove CDN preconnect as resources are now served locally
- Improve reliability for users with limited network access
2025-04-25 10:09:20 +08:00
Will Miao
79b8a6536e docs: Update README to clarify contribution guidelines and acknowledge project inspirations 2025-04-25 09:48:00 +08:00
Will Miao
3de31cd06a feat: Add functionality to move civitai.info file during model relocation 2025-04-25 09:41:23 +08:00
Will Miao
c579b54d40 fix: Preserve original path separators when mapping real paths in Config. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/132 2025-04-25 09:33:07 +08:00
Will Miao
0a52575e8b feat: Enhance model file retrieval by ensuring primary model is selected from files list. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/127 2025-04-25 05:45:29 +08:00
Will Miao
23c9a98f66 feat: Add endpoint for scanning and rebuilding recipe cache, and update UI to use new refresh method 2025-04-24 13:23:31 +08:00
Will Miao
796fc33b5b feat: Optimize TCP connection parameters and enhance logging for download operations 2025-04-22 19:43:37 +08:00
Will Miao
dc4c11ddd2 feat: Update release notes and version to 0.8.8 with new features and bug fixes 2025-04-22 13:29:00 +08:00
pixelpaws
d389e4d5d4 Merge pull request #122 from willmiao/dev
Dev
2025-04-22 09:40:05 +08:00
Will Miao
8cb78ad931 feat: Add route for retrieving current usage statistics 2025-04-22 09:39:00 +08:00
Will Miao
85f987d15c feat: Centralize clipboard functionality with copyToClipboard utility across components 2025-04-22 09:33:05 +08:00
Will Miao
b12079e0f6 feat: Implement usage statistics tracking with backend integration and route setup 2025-04-22 08:56:34 +08:00
pixelpaws
dcf5c6167a Merge pull request #121 from willmiao/dev
Dev
2025-04-21 15:44:23 +08:00
Will Miao
b395d3f487 fix: Update filename formatting in save_images method to ensure unique filenames for batch images 2025-04-21 15:42:49 +08:00
Will Miao
37662cad10 Update workflow 2025-04-21 15:42:49 +08:00
pixelpaws
aa1673063d Merge pull request #120 from willmiao/dev
feat: Enhance LoraManager by updating trigger words handling and dyna…
2025-04-21 06:52:16 +08:00
Will Miao
f51f49eb60 feat: Enhance LoraManager by updating trigger words handling and dynamically loading widget modules. 2025-04-21 06:49:51 +08:00
pixelpaws
54c9bac961 Merge pull request #119 from willmiao/dev
Dev
2025-04-20 22:29:28 +08:00
Will Miao
e70fd73bdd feat: Implement trigger words API and update frontend integration for LoraManager. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/43 2025-04-20 22:27:53 +08:00
Will Miao
9bb9e7b64d refactor: Extract common methods for Lora handling into utils.py and update references in lora_loader.py and lora_stacker.py 2025-04-20 21:35:36 +08:00
pixelpaws
f64c03543a Merge pull request #116 from matrunchyk/main
Prevent duplicates of root folders when using symlinks
2025-04-20 17:05:08 +08:00
Will Miao
51374de1a1 fix: Update version to 0.8.7-bugfix2 in pyproject.toml for clarity on bug fixes 2025-04-20 15:04:24 +08:00
Will Miao
afcc12f263 fix: Update populate_lora_from_civitai method to accept a tuple for Civitai API response. Fixes https://github.com/willmiao/ComfyUI-Lora-Manager/issues/117 2025-04-20 15:01:23 +08:00
Your Name
88c5482366 Merge branch 'main' of https://github.com/willmiao/ComfyUI-Lora-Manager 2025-04-19 21:47:41 +03:00
Your Name
bbf7295c32 Prevent duplicates of root folders when using symlinks 2025-04-19 21:42:01 +03:00
Will Miao
ca5e23e68c fix: Update version to 0.8.7-bugfix in pyproject.toml for clarity on bug fixes 2025-04-19 23:02:50 +08:00
Will Miao
eadb1487ae feat: Refactor metadata formatting to use helper function for conditional parameter addition 2025-04-19 23:00:09 +08:00
Will Miao
1faa70fc77 feat: Implement filename-based hash retrieval in LoraScanner and ModelScanner for improved compatibility 2025-04-19 21:12:26 +08:00
Will Miao
30d7c007de fix: Correct metadata restoration logic to ensure file info is fetched when metadata is missing 2025-04-19 20:51:23 +08:00
Will Miao
f54f6a4402 feat: Enhance metadata handling by restoring missing civitai data and extracting tags and descriptions from version info 2025-04-19 11:35:42 +08:00
Will Miao
7b41cdec65 feat: Add civitai_deleted attribute to BaseModelMetadata for tracking deletion status from Civitai 2025-04-19 09:30:43 +08:00
Will Miao
fb6a652a57 feat: Add checkpoint hash retrieval and enhance metadata formatting in SaveImage class 2025-04-18 23:55:45 +08:00
Will Miao
ea34d753c1 refactor: Remove unnecessary workflow data logging and streamline saveRecipeDirectly function for legacy loras widget 2025-04-18 21:52:26 +08:00
229 changed files with 30292 additions and 12305 deletions

.github/FUNDING.yml (vendored, new file, +5)

@@ -0,0 +1,5 @@
+# These are supported funding model platforms
+patreon: PixelPawsAI
+ko_fi: pixelpawsai
+custom: ['paypal.me/pixelpawsai']

.gitignore (vendored, +1)

@@ -3,3 +3,4 @@ settings.json
 output/*
 py/run_test.py
 .vscode/
+cache/

LICENSE (687 lines changed)

@@ -1,21 +1,674 @@
(Diff abridged: the full MIT License text, "Copyright (c) 2023 Will Miao", is removed and replaced by the complete text of the GNU General Public License, Version 3, 29 June 2007, Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>.)
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
ComfyUI Lora Manager - A ComfyUI custom node for managing models
Copyright (C) 2025 Will Miao
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
ComfyUI Lora Manager Copyright (C) 2025 Will Miao
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

229
README.md
View File

@@ -10,68 +10,86 @@ A comprehensive toolset that streamlines organizing, downloading, and applying L
![Interface Preview](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/static/images/screenshot.png)
One-click Integration:
![One-Click Integration](https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/static/images/one-click-send.jpg)
## 📺 Tutorial: One-Click LoRA Integration
Watch this quick tutorial to learn how to use the new one-click LoRA integration feature:
[![One-Click LoRA Integration Tutorial](https://img.youtube.com/vi/qS95OjX3e70/0.jpg)](https://youtu.be/qS95OjX3e70)
[![LoRA Manager v0.8.0 - New Recipe Feature & Bulk Operations](https://img.youtube.com/vi/noN7f_ER7yo/0.jpg)](https://youtu.be/noN7f_ER7yo)
[![One-Click LoRA Integration Tutorial](https://img.youtube.com/vi/hvKw31YpE-U/0.jpg)](https://youtu.be/hvKw31YpE-U)
---
## Release Notes
### v0.8.7
* **Enhanced Context Menu** - Added comprehensive context menu functionality to Recipes and Checkpoints pages for improved workflow
* **Interactive LoRA Strength Control** - Implemented drag functionality in LoRA Loader for intuitive strength adjustment
* **Metadata Collector Overhaul** - Rebuilt metadata collection system with optimized architecture for better performance
* **Improved Save Image Node** - Enhanced metadata capture and image saving performance with the new metadata collector
* **Streamlined Recipe Saving** - Optimized Save Recipe functionality to work independently without requiring Preview Image nodes
### v0.8.19
* **Analytics Dashboard** - Added new Statistics page providing comprehensive visual analysis of model collection and usage patterns for better library insights
* **Target Node Selection** - Enhanced workflow integration with intelligent target selection when sending LoRAs/recipes to workflows with multiple loader/stacker nodes; a visual selector now appears showing node color, type, ID, and title for precise targeting
* **Enhanced NSFW Controls** - Added support for setting NSFW levels on recipes with automatic content blurring based on user preferences
* **Customizable Card Display** - New display settings allowing users to choose whether card information and action buttons are always visible or only revealed on hover
* **Expanded Compatibility** - Added support for efficiency-nodes-comfyui in Save Recipe and Save Image nodes, plus fixed compatibility with ComfyUI_Custom_Nodes_AlekPet
### v0.8.18
* **Custom Example Images** - Added ability to import your own example images for LoRAs and checkpoints with automatic metadata extraction from embedded information
* **Enhanced Example Management** - New action buttons to set specific examples as previews or delete custom examples
* **Improved Duplicate Detection** - Enhanced "Find Duplicates" with hash verification feature to eliminate false positives when identifying duplicate models
* **Tag Management** - Added tag editing functionality allowing users to customize and manage model tags
* **Advanced Selection Controls** - Implemented Ctrl+A shortcut for quickly selecting all filtered LoRAs, automatically entering bulk mode when needed
* **Note**: Cache file functionality temporarily disabled pending rework
### v0.8.17
* **Duplicate Model Detection** - Added "Find Duplicates" functionality for LoRAs and checkpoints using model file hash detection, enabling convenient viewing and batch deletion of duplicate models
* **Enhanced URL Recipe Imports** - Optimized import recipe via URL functionality using CivitAI API calls instead of web scraping, now supporting all rated images (including NSFW) for recipe imports
* **Improved TriggerWord Control** - Enhanced TriggerWord Toggle node with new default_active switch to set the initial state (active/inactive) when trigger words are added
* **Centralized Example Management** - Added "Migrate Existing Example Images" feature to consolidate downloaded example images from model folders into central storage with customizable naming patterns
* **Intelligent Word Suggestions** - Implemented smart trigger word suggestions by reading class tokens and tag frequency from safetensors files, displaying recommendations when editing trigger words
* **Model Version Management** - Added "Re-link to CivitAI" context menu option for connecting models to different CivitAI versions when needed
### v0.8.16
* **Dramatic Startup Speed Improvement** - Added cache serialization mechanism for significantly faster loading times, especially beneficial for large model collections
* **Enhanced Refresh Options** - Extended functionality with "Full Rebuild (complete)" option alongside "Quick Refresh (incremental)" to fix potential memory cache issues without requiring application restart
* **Customizable Display Density** - Replaced compact mode with adjustable display density settings for personalized layout customization
* **Model Creator Information** - Added creator details to model information panels for better attribution
* **Improved WebP Support** - Enhanced Save Image node with workflow embedding capability for WebP format images
* **Direct Example Access** - Added "Open Example Images Folder" button to card interfaces for convenient browsing of downloaded model examples
* **Enhanced Compatibility** - Full ComfyUI Desktop support for "Send lora or recipe to workflow" functionality
* **Cache Management** - Added settings to clear existing cache files when needed
* **Bug Fixes & Stability** - Various improvements for overall reliability and performance
### v0.8.15
* **Enhanced One-Click Integration** - Replaced copy button with direct send button allowing LoRAs/recipes to be sent directly to your current ComfyUI workflow without needing to paste
* **Flexible Workflow Integration** - Click to append LoRAs/recipes to existing loader nodes or Shift+click to replace content, with additional right-click menu options for "Send to Workflow (Append)" or "Send to Workflow (Replace)"
* **Improved LoRA Loader Controls** - Added header drag functionality for proportional strength adjustment of all LoRAs simultaneously (including CLIP strengths when expanded)
* **Keyboard Navigation Support** - Implemented Page Up/Down for page scrolling, Home key to jump to top, and End key to jump to bottom for faster browsing through large collections
### v0.8.14
* **Virtualized Scrolling** - Completely rebuilt rendering mechanism for smooth browsing with no lag or freezing, now supporting virtually unlimited model collections with optimized layouts for large displays, improving space utilization and user experience
* **Compact Display Mode** - Added space-efficient view option that displays more cards per row (7 on 1080p, 8 on 2K, 10 on 4K)
* **Enhanced LoRA Node Functionality** - Comprehensive improvements to LoRA loader/stacker nodes including real-time trigger word updates (reflecting any change anywhere in the LoRA chain for precise updates) and expanded context menu with "Copy Notes" and "Copy Trigger Words" options for faster workflow
### v0.8.13
* **Enhanced Recipe Management** - Added "Find duplicates" feature to identify and batch delete duplicate recipes with duplicate detection notifications during imports
* **Improved Source Tracking** - Source URLs are now saved with recipes imported via URL, allowing users to view original content with one click or manually edit links
* **Advanced LoRA Control** - Double-click LoRAs in Loader/Stacker nodes to access expanded CLIP strength controls for more precise adjustments of model and CLIP strength separately
* **Lycoris Model Support** - Added compatibility with Lycoris models for expanded creative options
* **Bug Fixes & UX Improvements** - Resolved various issues and enhanced overall user experience with numerous optimizations
### v0.8.12
* **Enhanced Model Discovery** - Added alphabetical navigation bar to LoRAs page for faster browsing through large collections
* **Optimized Example Images** - Improved download logic to automatically refresh stale metadata before fetching example images
* **Model Exclusion System** - New right-click option to exclude specific LoRAs or checkpoints from management
* **Improved Showcase Experience** - Enhanced interaction in LoRA and checkpoint showcase areas for better usability
### v0.8.11
* **Offline Image Support** - Added functionality to download and save all model example images locally, ensuring access even when offline or if images are removed from CivitAI or the site is down
* **Resilient Download System** - Implemented pause/resume capability with checkpoint recovery that persists through restarts or unexpected exits
* **Bug Fixes & Stability** - Resolved various issues to enhance overall reliability and performance
### v0.8.6 Major Update
* **Checkpoint Management** - Added comprehensive management for model checkpoints including scanning, searching, filtering, and deletion
* **Enhanced Metadata Support** - New capabilities for retrieving and managing checkpoint metadata with improved operations
* **Improved Initial Loading** - Optimized cache initialization with visual progress indicators for better user experience
### v0.8.5
* **Enhanced LoRA & Recipe Connectivity** - Added Recipes tab in LoRA details to see all recipes using a specific LoRA
* **Improved Navigation** - New shortcuts to jump between related LoRAs and Recipes with one-click navigation
* **Video Preview Controls** - Added "Autoplay Videos on Hover" setting to optimize performance and reduce resource usage
* **UI Experience Refinements** - Smoother transitions between related content pages
### v0.8.4
* **Node Layout Improvements** - Fixed layout issues with LoRA Loader and Trigger Words Toggle nodes in newer ComfyUI frontend versions
* **Recipe LoRA Reconnection** - Added ability to reconnect deleted LoRAs in recipes by clicking the "deleted" badge in recipe details
* **Bug Fixes & Stability** - Resolved various issues for improved reliability
### v0.8.3
* **Enhanced Workflow Parser** - Rebuilt workflow analysis engine with improved support for ComfyUI core nodes and easier extensibility
* **Improved Recipe System** - Refined the experimental Save Recipe functionality with better workflow integration
* **New Save Image Node** - Added experimental node with metadata support for perfect CivitAI compatibility
* Supports dynamic filename prefixes with variables [1](https://github.com/nkchocoai/ComfyUI-SaveImageWithMetaData?tab=readme-ov-file#filename_prefix)
* **Default LoRA Root Setting** - Added configuration option for setting your preferred LoRA directory
### v0.8.2
* **Faster Initialization for Forge Users** - Improved first-run efficiency by utilizing existing `.json` and `.civitai.info` files from Forge's CivitAI helper extension, making migration smoother.
* **LoRA Filename Editing** - Added support for renaming LoRA files directly within LoRA Manager.
* **Recipe Editing** - Users can now edit recipe names and tags.
* **Retain Deleted LoRAs in Recipes** - Deleted LoRAs will remain listed in recipes, allowing future functionality to reconnect them once re-obtained.
* **Download Missing LoRAs from Recipes** - Easily fetch missing LoRAs associated with a recipe.
### v0.8.1
* **Base Model Correction** - Added support for modifying base model associations to fix incorrect metadata for non-CivitAI LoRAs
* **LoRA Loader Flexibility** - Made CLIP input optional for model-only workflows like Hunyuan video generation
* **Expanded Recipe Support** - Added compatibility with 3 additional recipe metadata formats
* **Enhanced Showcase Images** - Generation parameters now displayed alongside LoRA preview images
* **UI Improvements & Bug Fixes** - Various interface refinements and stability enhancements
### v0.8.0
* **Introduced LoRA Recipes** - Create, import, save, and share your favorite LoRA combinations
* **Recipe Management System** - Easily browse, search, and organize your LoRA recipes
* **Workflow Integration** - Save recipes directly from your workflow with generation parameters preserved
* **Simplified Workflow Application** - Quickly apply saved recipes to new projects
* **Enhanced UI & UX** - Improved interface design and user experience
* **Bug Fixes & Stability** - Resolved various issues and enhanced overall performance
### v0.8.10
* **Standalone Mode** - Run LoRA Manager independently from ComfyUI for a lightweight experience that works even with other stable diffusion interfaces
* **Portable Edition** - New one-click portable version for easy startup and updates in standalone mode
* **Enhanced Metadata Collection** - Added support for SamplerCustomAdvanced node in the metadata collector module
* **Improved UI Organization** - Optimized Lora Loader node height to display up to 5 LoRAs at once with scrolling capability for larger collections
[View Update History](./update_logs.md)
@@ -90,13 +108,6 @@ Watch this quick tutorial to learn how to use the new one-click LoRA integration
- 🚀 **High Performance**
- Fast model loading and browsing
- Smooth scrolling through large collections
- Real-time updates when files change
- 📂 **Advanced Organization**
- Quick search with fuzzy matching
- Folder-based categorization
- Move LoRAs between folders
- Sort by name or date
- 🌐 **Rich Model Integration**
- Direct download from CivitAI
@@ -128,19 +139,26 @@ Watch this quick tutorial to learn how to use the new one-click LoRA integration
## Installation
### Option 1: **ComfyUI Manager** (Recommended)
### Option 1: **ComfyUI Manager** (Recommended for ComfyUI users)
1. Open **ComfyUI**.
2. Go to **Manager > Custom Node Manager**.
3. Search for `lora-manager`.
4. Click **Install**.
### Option 2: **Manual Installation**
### Option 2: **Portable Standalone Edition** (No ComfyUI required)
1. Download the [Portable Package](https://github.com/willmiao/ComfyUI-Lora-Manager/releases/download/v0.8.15/lora_manager_portable.7z)
2. Copy the provided `settings.json.example` file to create a new file named `settings.json` in the `comfyui-lora-manager` folder
3. Edit `settings.json` to include your correct model folder paths and CivitAI API key
4. Run `run.bat`
### Option 3: **Manual Installation**
```bash
git clone https://github.com/willmiao/ComfyUI-Lora-Manager.git
cd ComfyUI-Lora-Manager
pip install requirements.txt
pip install -r requirements.txt
```
## Usage
@@ -161,29 +179,102 @@ pip install requirements.txt
- Paste into the Lora Loader node's text input
- The node will automatically apply preset strength and trigger words
### Filename Format Patterns for Save Image Node
The Save Image Node supports dynamic filename generation using pattern codes. You can customize how your images are named using the following format patterns:
#### Available Pattern Codes
- `%seed%` - Inserts the generation seed number
- `%width%` - Inserts the image width
- `%height%` - Inserts the image height
- `%pprompt:N%` - Inserts the positive prompt (limited to N characters)
- `%nprompt:N%` - Inserts the negative prompt (limited to N characters)
- `%model:N%` - Inserts the model/checkpoint name (limited to N characters)
- `%date%` - Inserts current date/time as "yyyyMMddhhmmss"
- `%date:FORMAT%` - Inserts date using custom format with:
- `yyyy` - 4-digit year
- `yy` - 2-digit year
- `MM` - 2-digit month
- `dd` - 2-digit day
- `hh` - 2-digit hour
- `mm` - 2-digit minute
- `ss` - 2-digit second
#### Examples
- `image_%seed%` → `image_1234567890`
- `gen_%width%x%height%` → `gen_512x768`
- `%model:10%_%seed%` → `dreamshape_1234567890`
- `%date:yyyy-MM-dd%` → `2025-04-28`
- `%pprompt:20%_%seed%` → `beautiful landscape_1234567890`
- `%model%_%date:yyMMdd%_%seed%` → `dreamshaper_v8_250428_1234567890`
You can combine multiple patterns to create detailed, organized filenames for your generated images.
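To make these rules concrete, here is a minimal sketch of how such a pattern expander could be implemented; `expand_filename_pattern`, its signature, and the token-to-strftime mapping are illustrative assumptions, not the node's actual code:

```python
import re
from datetime import datetime

def expand_filename_pattern(pattern: str, seed: int, width: int, height: int,
                            pprompt: str = "", nprompt: str = "", model: str = "") -> str:
    """Expand Save-Image-style pattern codes into a concrete filename (sketch)."""

    def date_str(fmt: str = "yyyyMMddhhmmss") -> str:
        # Map the documented tokens onto strftime codes, longest token first.
        for token, code in [("yyyy", "%Y"), ("yy", "%y"), ("MM", "%m"),
                            ("dd", "%d"), ("hh", "%H"), ("mm", "%M"), ("ss", "%S")]:
            fmt = fmt.replace(token, code)
        return datetime.now().strftime(fmt)

    def repl(match: re.Match) -> str:
        name, arg = match.group(1), match.group(2)
        if name == "seed":
            return str(seed)
        if name in ("width", "height"):
            return str(width if name == "width" else height)
        if name == "date":
            return date_str(arg) if arg else date_str()
        # %pprompt:N%, %nprompt:N%, %model:N% -- optionally truncated to N chars.
        text = {"pprompt": pprompt, "nprompt": nprompt, "model": model}[name]
        return text[:int(arg)] if arg else text

    return re.sub(r"%(seed|width|height|pprompt|nprompt|model|date)(?::([^%]+))?%",
                  repl, pattern)

# e.g. expand_filename_pattern("%model:10%_%seed%", seed=1234567890,
#                              width=512, height=768, model="dreamshaper_v8")
# -> "dreamshape_1234567890"
```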
### Standalone Mode
You can now run LoRA Manager independently from ComfyUI:
1. **For ComfyUI users**:
- Launch ComfyUI with LoRA Manager at least once to initialize the necessary path information in the `settings.json` file.
- Make sure dependencies are installed: `pip install -r requirements.txt`
- From your ComfyUI root directory, run:
```bash
python custom_nodes\comfyui-lora-manager\standalone.py
```
- Access the interface at: `http://localhost:8188/loras`
- You can specify a different host or port with arguments:
```bash
python custom_nodes\comfyui-lora-manager\standalone.py --host 127.0.0.1 --port 9000
```
2. **For non-ComfyUI users**:
- Copy the provided `settings.json.example` file to create a new file named `settings.json`
- Edit `settings.json` to include your correct model folder paths and CivitAI API key
- Install required dependencies: `pip install -r requirements.txt`
- Run standalone mode:
```bash
python standalone.py
```
- Access the interface through your browser at: `http://localhost:8188/loras`
This standalone mode provides a lightweight option for managing your model and recipe collection without needing to run the full ComfyUI environment, making it useful even for users who primarily use other stable diffusion interfaces.
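As a rough illustration of the settings file both setups rely on, the sketch below writes a minimal `settings.json`. The `folder_paths` structure mirrors what `config.py` persists (see its diff further down) and `example_images_path` is a key the route setup reads, but the API-key field name and all paths here are assumptions — check `settings.json.example` for the authoritative schema:

```python
import json

# Minimal illustrative settings.json for standalone mode.
settings = {
    "civitai_api_key": "your-api-key-here",      # assumed field name
    "example_images_path": "D:/sd/example_images",
    "folder_paths": {
        "loras": ["D:/sd/models/loras"],
        "checkpoints": ["D:/sd/models/checkpoints"],
        "diffusers": [],
        "unet": [],
    },
}

with open("settings.json", "w", encoding="utf-8") as f:
    json.dump(settings, f, indent=2)
```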
---
## Contributing
Thank you for your interest in contributing to ComfyUI LoRA Manager! As this project is currently in its early stages and undergoing rapid development and refactoring, we are temporarily not accepting pull requests.
However, your feedback and ideas are extremely valuable to us:
- Please feel free to open issues for any bugs you encounter
- Submit feature requests through GitHub issues
- Share your suggestions for improvements
We appreciate your understanding and look forward to potentially accepting code contributions once the project architecture stabilizes.
---
## Credits
This project has been inspired by and benefited from other excellent ComfyUI extensions:
- [ComfyUI-SaveImageWithMetaData](https://github.com/Comfy-Community/ComfyUI-SaveImageWithMetaData) - For the image metadata functionality
- [ComfyUI-SaveImageWithMetaData](https://github.com/nkchocoai/ComfyUI-SaveImageWithMetaData) - For the image metadata functionality
- [rgthree-comfy](https://github.com/rgthree/rgthree-comfy) - For the lora loader functionality
---
## Contributing
If you have suggestions, bug reports, or improvements, feel free to open an issue or contribute directly to the codebase. Pull requests are always welcome!
---
## ☕ Support
If you find this project helpful, consider supporting its development:
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/pixelpawsai)
[![Patreon](https://img.shields.io/badge/Become%20a%20Patron-F96854.svg?style=for-the-badge&logo=patreon&logoColor=white)](https://patreon.com/PixelPawsAI)
WeChat: [Click to view QR code](https://raw.githubusercontent.com/willmiao/ComfyUI-Lora-Manager/main/static/images/wechat-qr.webp)
## 💬 Community
Join our Discord community for support, discussions, and updates:

Binary file not shown (new image, 669 KiB).

File diff suppressed because one or more lines are too long

View File

@@ -3,6 +3,11 @@ import platform
import folder_paths # type: ignore
from typing import List
import logging
import sys
import json
# Check if running in standalone mode
standalone_mode = 'nodes' not in sys.modules
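# (ComfyUI imports its own top-level 'nodes' module before loading custom nodes,
# so its absence from sys.modules indicates a standalone launch.)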
logger = logging.getLogger(__name__)
@@ -18,9 +23,46 @@ class Config:
self._route_mappings = {}
self.loras_roots = self._init_lora_paths()
self.checkpoints_roots = self._init_checkpoint_paths()
self.temp_directory = folder_paths.get_temp_directory()
# Scan symbolic links during initialization
self._scan_symbolic_links()
if not standalone_mode:
# Save the paths to settings.json when running in ComfyUI mode
self.save_folder_paths_to_settings()
def save_folder_paths_to_settings(self):
"""Save folder paths to settings.json for standalone mode to use later"""
try:
# Check if we're running in ComfyUI mode (not standalone)
if hasattr(folder_paths, "get_folder_paths") and not isinstance(folder_paths, type):
# Get all relevant paths
lora_paths = folder_paths.get_folder_paths("loras")
checkpoint_paths = folder_paths.get_folder_paths("checkpoints")
diffuser_paths = folder_paths.get_folder_paths("diffusers")
unet_paths = folder_paths.get_folder_paths("unet")
# Load existing settings
settings_path = os.path.join(os.path.dirname(os.path.dirname(__file__)), 'settings.json')
settings = {}
if os.path.exists(settings_path):
with open(settings_path, 'r', encoding='utf-8') as f:
settings = json.load(f)
# Update settings with paths
settings['folder_paths'] = {
'loras': lora_paths,
'checkpoints': checkpoint_paths,
'diffusers': diffuser_paths,
'unet': unet_paths
}
# Save settings
with open(settings_path, 'w', encoding='utf-8') as f:
json.dump(settings, f, indent=2)
logger.info("Saved folder paths to settings.json")
except Exception as e:
logger.warning(f"Failed to save folder paths: {e}")
def _is_link(self, path: str) -> bool:
try:
@@ -103,50 +145,66 @@ class Config:
def _init_lora_paths(self) -> List[str]:
"""Initialize and validate LoRA paths from ComfyUI settings"""
paths = sorted(set(path.replace(os.sep, "/")
for path in folder_paths.get_folder_paths("loras")
if os.path.exists(path)), key=lambda p: p.lower())
print("Found LoRA roots:", "\n - " + "\n - ".join(paths))
if not paths:
raise ValueError("No valid loras folders found in ComfyUI configuration")
# Initialize path mappings
for path in paths:
real_path = os.path.normpath(os.path.realpath(path)).replace(os.sep, '/')
if real_path != path:
self.add_path_mapping(path, real_path)
return paths
try:
raw_paths = folder_paths.get_folder_paths("loras")
# Normalize and resolve symlinks, store mapping from resolved -> original
path_map = {}
for path in raw_paths:
if os.path.exists(path):
real_path = os.path.normpath(os.path.realpath(path)).replace(os.sep, '/')
path_map[real_path] = path_map.get(real_path, path.replace(os.sep, "/")) # preserve first seen
# Now sort and use only the deduplicated real paths
unique_paths = sorted(path_map.values(), key=lambda p: p.lower())
logger.info("Found LoRA roots:" + ("\n - " + "\n - ".join(unique_paths) if unique_paths else "[]"))
if not unique_paths:
logger.warning("No valid loras folders found in ComfyUI configuration")
return []
for original_path in unique_paths:
real_path = os.path.normpath(os.path.realpath(original_path)).replace(os.sep, '/')
if real_path != original_path:
self.add_path_mapping(original_path, real_path)
return unique_paths
except Exception as e:
logger.warning(f"Error initializing LoRA paths: {e}")
return []
def _init_checkpoint_paths(self) -> List[str]:
"""Initialize and validate checkpoint paths from ComfyUI settings"""
# Get checkpoint paths from folder_paths
checkpoint_paths = folder_paths.get_folder_paths("checkpoints")
diffusion_paths = folder_paths.get_folder_paths("diffusers")
unet_paths = folder_paths.get_folder_paths("unet")
# Combine all checkpoint-related paths
all_paths = checkpoint_paths + diffusion_paths + unet_paths
# Filter and normalize paths
paths = sorted(set(path.replace(os.sep, "/")
for path in all_paths
if os.path.exists(path)), key=lambda p: p.lower())
print("Found checkpoint roots:", paths)
if not paths:
logger.warning("No valid checkpoint folders found in ComfyUI configuration")
try:
# Get checkpoint paths from folder_paths
checkpoint_paths = folder_paths.get_folder_paths("checkpoints")
diffusion_paths = folder_paths.get_folder_paths("diffusers")
unet_paths = folder_paths.get_folder_paths("unet")
# Combine all checkpoint-related paths
all_paths = checkpoint_paths + diffusion_paths + unet_paths
# Filter and normalize paths
paths = sorted(set(path.replace(os.sep, "/")
for path in all_paths
if os.path.exists(path)), key=lambda p: p.lower())
logger.info("Found checkpoint roots:" + ("\n - " + "\n - ".join(paths) if paths else "[]"))
if not paths:
logger.warning("No valid checkpoint folders found in ComfyUI configuration")
return []
# Initialize path mappings, same approach as for LoRA paths
for path in paths:
real_path = os.path.normpath(os.path.realpath(path)).replace(os.sep, '/')
if real_path != path:
self.add_path_mapping(path, real_path)
return paths
except Exception as e:
logger.warning(f"Error initializing checkpoint paths: {e}")
return []
# Initialize path mappings, same approach as for LoRA paths
for path in paths:
real_path = os.path.normpath(os.path.realpath(path)).replace(os.sep, '/')
if real_path != path:
self.add_path_mapping(path, real_path)
return paths
def get_preview_static_url(self, preview_path: str) -> str:
"""Convert local preview path to static URL"""

View File

@@ -1,15 +1,28 @@
import asyncio
import sys
import os
import logging
from pathlib import Path
from server import PromptServer # type: ignore
from .config import config
from .routes.lora_routes import LoraRoutes
from .routes.api_routes import ApiRoutes
from .routes.recipe_routes import RecipeRoutes
from .routes.checkpoints_routes import CheckpointsRoutes
from .routes.stats_routes import StatsRoutes
from .routes.update_routes import UpdateRoutes
from .routes.misc_routes import MiscRoutes
from .routes.example_images_routes import ExampleImagesRoutes
from .services.service_registry import ServiceRegistry
from .services.settings_manager import settings
from .utils.example_images_migration import ExampleImagesMigration
logger = logging.getLogger(__name__)
# Check if we're in standalone mode
STANDALONE_MODE = 'nodes' not in sys.modules
class LoraManager:
"""Main entry point for LoRA Manager plugin"""
@@ -18,8 +31,18 @@ class LoraManager:
"""Initialize and register all routes"""
app = PromptServer.instance.app
# Configure aiohttp access logger to be less verbose
logging.getLogger('aiohttp.access').setLevel(logging.WARNING)
added_targets = set() # Track already added target paths
# Add static route for example images if the path exists in settings
example_images_path = settings.get('example_images_path')
logger.info(f"Example images path: {example_images_path}")
if example_images_path and os.path.exists(example_images_path):
app.router.add_static('/example_images_static', example_images_path)
logger.info(f"Added static route for example images: /example_images_static -> {example_images_path}")
# Add static routes for each lora root
for idx, root in enumerate(config.loras_roots, start=1):
preview_path = f'/loras_static/root{idx}/preview'
@@ -75,10 +98,14 @@ class LoraManager:
route_path = f'/loras_static/link_{link_idx["lora"]}/preview'
link_idx["lora"] += 1
try:
app.router.add_static(route_path, Path(target_path).resolve(strict=False))
logger.info(f"Added static route for link target {route_path} -> {target_path}")
config.add_route_mapping(target_path, route_path)
added_targets.add(target_path)
except Exception as e:
logger.warning(f"Failed to add static route on initialization for {target_path}: {e}")
continue
# Add static route for plugin assets
app.router.add_static('/loras_static', config.static_path)
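# For reference, the aiohttp static-route pattern above reduces to this
# standalone sketch (the prefix and directory below are assumptions for
# illustration, not the plugin's real configuration):
from pathlib import Path
from aiohttp import web

def make_app(preview_dir: str) -> web.Application:
    app = web.Application()
    # Files under preview_dir become reachable at /loras_static/root1/preview/<name>
    app.router.add_static('/loras_static/root1/preview', Path(preview_dir).resolve(strict=False))
    return app

# web.run_app(make_app('/path/to/previews'), port=8188)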
@@ -86,12 +113,17 @@ class LoraManager:
# Setup feature routes
lora_routes = LoraRoutes()
checkpoints_routes = CheckpointsRoutes()
stats_routes = StatsRoutes()
# Initialize routes
lora_routes.setup_routes(app)
checkpoints_routes.setup_routes(app)
stats_routes.setup_routes(app) # Add statistics routes
ApiRoutes.setup_routes(app)
RecipeRoutes.setup_routes(app)
UpdateRoutes.setup_routes(app)
MiscRoutes.setup_routes(app) # Register miscellaneous routes
ExampleImagesRoutes.setup_routes(app) # Register example images routes
# Schedule service initialization
app.on_startup.append(lambda app: cls._initialize_services())
@@ -104,27 +136,17 @@ class LoraManager:
async def _initialize_services(cls):
"""Initialize all services using the ServiceRegistry"""
try:
# Ensure aiohttp access logger is configured with reduced verbosity
logging.getLogger('aiohttp.access').setLevel(logging.WARNING)
# Initialize CivitaiClient first to ensure it's ready for other services
await ServiceRegistry.get_civitai_client()
# Register DownloadManager with ServiceRegistry
await ServiceRegistry.get_download_manager()
# Initialize WebSocket manager
await ServiceRegistry.get_websocket_manager()
# Initialize scanners in background
lora_scanner = await ServiceRegistry.get_lora_scanner()
@@ -133,10 +155,18 @@ class LoraManager:
# Initialize recipe scanner if needed
recipe_scanner = await ServiceRegistry.get_recipe_scanner()
# Initialize metadata collector if not in standalone mode
if not STANDALONE_MODE:
from .metadata_collector import init as init_metadata
init_metadata()
logger.debug("Metadata collector initialized")
# Create low-priority initialization tasks
asyncio.create_task(lora_scanner.initialize_in_background(), name='lora_cache_init')
asyncio.create_task(checkpoint_scanner.initialize_in_background(), name='checkpoint_cache_init')
asyncio.create_task(recipe_scanner.initialize_in_background(), name='recipe_cache_init')
await ExampleImagesMigration.check_and_run_migrations()
logger.info("LoRA Manager: All services initialized and background tasks scheduled")
@@ -148,17 +178,6 @@ class LoraManager:
"""Cleanup resources using ServiceRegistry"""
try:
logger.info("LoRA Manager: Cleaning up services")
# Get monitors from ServiceRegistry
lora_monitor = await ServiceRegistry.get_service("lora_monitor")
if lora_monitor:
lora_monitor.stop()
logger.info("Stopped LoRA monitor")
checkpoint_monitor = await ServiceRegistry.get_service("checkpoint_monitor")
if checkpoint_monitor:
checkpoint_monitor.stop()
logger.info("Stopped checkpoint monitor")
# Close CivitaiClient gracefully
civitai_client = await ServiceRegistry.get_service("civitai_client")

View File

@@ -1,18 +1,32 @@
import os
import importlib
import sys
# Check if running in standalone mode
standalone_mode = 'nodes' not in sys.modules
if not standalone_mode:
from .metadata_hook import MetadataHook
from .metadata_registry import MetadataRegistry
def init():
# Install hooks to collect metadata during execution
MetadataHook.install()
# Initialize registry
registry = MetadataRegistry()
print("ComfyUI Metadata Collector initialized")
def get_metadata(prompt_id=None):
"""Helper function to get metadata from the registry"""
registry = MetadataRegistry()
return registry.get_metadata(prompt_id)
else:
# Standalone mode - provide dummy implementations
def init():
print("ComfyUI Metadata Collector disabled in standalone mode")
def get_metadata(prompt_id=None):
"""Dummy implementation for standalone mode"""
return {}
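# The gating above hinges on 'nodes' (a ComfyUI core module) being present in
# sys.modules only when ComfyUI itself imported this package. The same
# technique in generic form:
import sys

RUNNING_IN_HOST = 'nodes' in sys.modules

def probe():
    if RUNNING_IN_HOST:
        print("host detected: enabling integration hooks")
    else:
        print("standalone: hooks disabled")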

View File

@@ -1,12 +1,13 @@
"""Constants used by the metadata collector"""
# Metadata categories
MODELS = "models"
PROMPTS = "prompts"
SAMPLING = "sampling"
LORAS = "loras"
SIZE = "size"
IMAGES = "images" # Added new category for image results
IMAGES = "images"
IS_SAMPLER = "is_sampler" # New constant to mark sampler nodes
# Complete list of categories to track
METADATA_CATEGORIES = [MODELS, PROMPTS, SAMPLING, LORAS, SIZE, IMAGES]
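# A consumer of these constants would typically seed one bucket per category;
# a plausible initializer (not necessarily the collector's exact code):
def new_metadata():
    return {category: {} for category in METADATA_CATEGORIES}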

View File

@@ -1,38 +1,135 @@
import json
import sys
from .constants import IMAGES
# Check if running in standalone mode
standalone_mode = 'nodes' not in sys.modules
from .constants import MODELS, PROMPTS, SAMPLING, LORAS, SIZE, IS_SAMPLER
class MetadataProcessor:
"""Process and format collected metadata"""
@staticmethod
def find_primary_sampler(metadata, downstream_id=None):
"""
Find the primary KSampler node that executed before the given downstream node
Parameters:
- metadata: The workflow metadata
- downstream_id: Optional ID of a downstream node to help identify the specific primary sampler
"""
if downstream_id is None:
if IMAGES in metadata and "first_decode" in metadata[IMAGES]:
downstream_id = metadata[IMAGES]["first_decode"]["node_id"]
# If we have a downstream_id and execution_order, use it to narrow down potential samplers
if downstream_id and "execution_order" in metadata:
execution_order = metadata["execution_order"]
# Find the index of the downstream node in the execution order
if downstream_id in execution_order:
downstream_index = execution_order.index(downstream_id)
# Extract all sampler nodes that executed before the downstream node
candidate_samplers = {}
for i in range(downstream_index):
node_id = execution_order[i]
# Use IS_SAMPLER flag to identify true sampler nodes
if node_id in metadata.get(SAMPLING, {}) and metadata[SAMPLING][node_id].get(IS_SAMPLER, False):
candidate_samplers[node_id] = metadata[SAMPLING][node_id]
# If we found candidate samplers, apply primary sampler logic to these candidates only
if candidate_samplers:
# Collect potential primary samplers based on different criteria
custom_advanced_samplers = []
advanced_add_noise_samplers = []
high_denoise_samplers = []
max_denoise = -1
high_denoise_id = None
# First, check for SamplerCustomAdvanced among candidates
prompt = metadata.get("current_prompt")
if prompt and prompt.original_prompt:
for node_id in candidate_samplers:
node_info = prompt.original_prompt.get(node_id, {})
if node_info.get("class_type") == "SamplerCustomAdvanced":
custom_advanced_samplers.append(node_id)
# Next, check for KSamplerAdvanced with add_noise="enable" among candidates
for node_id, sampler_info in candidate_samplers.items():
parameters = sampler_info.get("parameters", {})
add_noise = parameters.get("add_noise")
if add_noise == "enable":
advanced_add_noise_samplers.append(node_id)
# Find the sampler with highest denoise value among candidates
for node_id, sampler_info in candidate_samplers.items():
parameters = sampler_info.get("parameters", {})
denoise = parameters.get("denoise")
if denoise is not None and denoise > max_denoise:
max_denoise = denoise
high_denoise_id = node_id
if high_denoise_id:
high_denoise_samplers.append(high_denoise_id)
# Combine all potential primary samplers
potential_samplers = custom_advanced_samplers + advanced_add_noise_samplers + high_denoise_samplers
# Find the most recent potential primary sampler (closest to downstream node)
for i in range(downstream_index - 1, -1, -1):
node_id = execution_order[i]
if node_id in potential_samplers:
return node_id, candidate_samplers[node_id]
# If no potential sampler found from our criteria, return the most recent sampler
if candidate_samplers:
for i in range(downstream_index - 1, -1, -1):
node_id = execution_order[i]
if node_id in candidate_samplers:
return node_id, candidate_samplers[node_id]
# If no downstream_id provided or no suitable sampler found, fall back to original logic
primary_sampler = None
primary_sampler_id = None
max_denoise = -1
# First, check for SamplerCustomAdvanced
prompt = metadata.get("current_prompt")
if prompt and prompt.original_prompt:
for node_id, node_info in prompt.original_prompt.items():
if node_info.get("class_type") == "SamplerCustomAdvanced":
# Check if the node is in SAMPLING and has IS_SAMPLER flag
if node_id in metadata.get(SAMPLING, {}) and metadata[SAMPLING][node_id].get(IS_SAMPLER, False):
return node_id, metadata[SAMPLING][node_id]
# Next, check for KSamplerAdvanced with add_noise="enable" using IS_SAMPLER flag
for node_id, sampler_info in metadata.get(SAMPLING, {}).items():
# Skip if not marked as a sampler
if not sampler_info.get(IS_SAMPLER, False):
continue
parameters = sampler_info.get("parameters", {})
add_noise = parameters.get("add_noise")
# If add_noise is "enable", this is likely the primary sampler for KSamplerAdvanced
if add_noise == "enable":
primary_sampler = sampler_info
primary_sampler_id = node_id
break
# If no specialized sampler found, find the sampler with highest denoise value
if primary_sampler is None:
for node_id, sampler_info in metadata.get(SAMPLING, {}).items():
# Skip if not marked as a sampler
if not sampler_info.get(IS_SAMPLER, False):
continue
parameters = sampler_info.get("parameters", {})
denoise = parameters.get("denoise")
if denoise is not None and denoise > max_denoise:
max_denoise = denoise
primary_sampler = sampler_info
primary_sampler_id = node_id
return primary_sampler_id, primary_sampler
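# Hypothetical call site: resolve the sampler that actually produced the image
# decoded by node "42" (the node id and metadata source are illustrative):
#
#   sampler_id, sampler_info = MetadataProcessor.find_primary_sampler(metadata, downstream_id="42")
#   if sampler_info:
#       steps = sampler_info.get("parameters", {}).get("steps")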
@@ -60,13 +157,18 @@ class MetadataProcessor:
current_node_id = node_id
current_input = input_name
# If we're just tracing to origin (no target_class), keep track of the last valid node
last_valid_node = None
while current_depth < max_depth:
if current_node_id not in prompt.original_prompt:
return last_valid_node if not target_class else None
node_inputs = prompt.original_prompt[current_node_id].get("inputs", {})
if current_input not in node_inputs:
# We've reached a node without the specified input - this is our origin node
# if we're not looking for a specific target_class
return current_node_id if not target_class else None
input_value = node_inputs[current_input]
# Input connections are formatted as [node_id, output_index]
@@ -77,9 +179,9 @@ class MetadataProcessor:
if target_class and prompt.original_prompt[found_node_id].get("class_type") == target_class:
return found_node_id
# If we're not looking for a specific class, update the last valid node
if not target_class:
last_valid_node = found_node_id
# Continue tracing through intermediate nodes
current_node_id = found_node_id
@@ -87,16 +189,17 @@ class MetadataProcessor:
if "conditioning" in prompt.original_prompt[current_node_id].get("inputs", {}):
current_input = "conditioning"
else:
# If there's no "conditioning" input, we can't trace further
# If there's no "conditioning" input, return the current node
# if we're not looking for a specific target_class
return found_node_id if not target_class else None
else:
# We've reached a node with no further connections
return last_valid_node if not target_class else None
current_depth += 1
# If we've reached max depth without finding target_class
return last_valid_node if not target_class else None
@staticmethod
def find_primary_checkpoint(metadata):
@@ -112,8 +215,60 @@ class MetadataProcessor:
return None
@staticmethod
def match_conditioning_to_prompts(metadata, sampler_id):
"""
Match conditioning objects from a sampler to prompts in metadata
Parameters:
- metadata: The workflow metadata
- sampler_id: ID of the sampler node to match
Returns:
- Dictionary with 'prompt' and 'negative_prompt' if found
"""
result = {
"prompt": "",
"negative_prompt": ""
}
# Check if we have stored conditioning objects for this sampler
if sampler_id in metadata.get(PROMPTS, {}) and (
"pos_conditioning" in metadata[PROMPTS][sampler_id] or
"neg_conditioning" in metadata[PROMPTS][sampler_id]):
pos_conditioning = metadata[PROMPTS][sampler_id].get("pos_conditioning")
neg_conditioning = metadata[PROMPTS][sampler_id].get("neg_conditioning")
# Try to match conditioning objects with those stored by CLIPTextEncodeExtractor
for prompt_node_id, prompt_data in metadata[PROMPTS].items():
# For nodes with single conditioning output
if "conditioning" in prompt_data:
if pos_conditioning is not None and id(prompt_data["conditioning"]) == id(pos_conditioning):
result["prompt"] = prompt_data.get("text", "")
if neg_conditioning is not None and id(prompt_data["conditioning"]) == id(neg_conditioning):
result["negative_prompt"] = prompt_data.get("text", "")
# For nodes with separate pos_conditioning and neg_conditioning outputs (like TSC_EfficientLoader)
if "positive_encoded" in prompt_data:
if pos_conditioning is not None and id(prompt_data["positive_encoded"]) == id(pos_conditioning):
result["prompt"] = prompt_data.get("positive_text", "")
if "negative_encoded" in prompt_data:
if neg_conditioning is not None and id(prompt_data["negative_encoded"]) == id(neg_conditioning):
result["negative_prompt"] = prompt_data.get("negative_text", "")
return result
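# The matching above is identity-based: the very object an encoder returned is
# what the sampler later received, so id() pairs them without inspecting
# tensor contents. A stripped-down demonstration:
encoded = object()  # stand-in for a conditioning object
prompts = {"7": {"conditioning": encoded, "text": "a cat"}}
sampler_inputs = {"pos_conditioning": encoded}

for demo_node_id, data in prompts.items():
    if id(data["conditioning"]) == id(sampler_inputs["pos_conditioning"]):
        print("positive prompt:", data["text"])  # -> positive prompt: a cat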
@staticmethod
def extract_generation_params(metadata, id=None):
"""
Extract generation parameters from metadata using node relationships
Parameters:
- metadata: The workflow metadata
- id: Optional ID of a downstream node to help identify the specific primary sampler
"""
params = {
"prompt": "",
"negative_prompt": "",
@@ -133,13 +288,20 @@ class MetadataProcessor:
prompt = metadata.get("current_prompt")
# Find the primary KSampler node
primary_sampler_id, primary_sampler = MetadataProcessor.find_primary_sampler(metadata, id)
# Directly get checkpoint from metadata instead of tracing
checkpoint = MetadataProcessor.find_primary_checkpoint(metadata)
if checkpoint:
params["checkpoint"] = checkpoint
# Check if guidance parameter exists in any sampling node
for node_id, sampler_info in metadata.get(SAMPLING, {}).items():
parameters = sampler_info.get("parameters", {})
if "guidance" in parameters and parameters["guidance"] is not None:
params["guidance"] = parameters["guidance"]
break
if primary_sampler:
# Extract sampling parameters
sampling_params = primary_sampler.get("parameters", {})
@@ -150,64 +312,83 @@ class MetadataProcessor:
params["sampler"] = sampling_params.get("sampler_name")
params["scheduler"] = sampling_params.get("scheduler")
# Trace connections from the primary sampler
if prompt and primary_sampler_id:
# Check if this is a SamplerCustomAdvanced node
is_custom_advanced = False
if prompt.original_prompt and primary_sampler_id in prompt.original_prompt:
is_custom_advanced = prompt.original_prompt[primary_sampler_id].get("class_type") == "SamplerCustomAdvanced"
if is_custom_advanced:
# For SamplerCustomAdvanced, trace specific inputs
# 1. Trace sigmas input to find BasicScheduler
scheduler_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "sigmas", "BasicScheduler", max_depth=5)
if scheduler_node_id and scheduler_node_id in metadata.get(SAMPLING, {}):
scheduler_params = metadata[SAMPLING][scheduler_node_id].get("parameters", {})
params["steps"] = scheduler_params.get("steps")
params["scheduler"] = scheduler_params.get("scheduler")
# 2. Trace sampler input to find KSamplerSelect
sampler_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "sampler", "KSamplerSelect", max_depth=5)
if sampler_node_id and sampler_node_id in metadata.get(SAMPLING, {}):
sampler_params = metadata[SAMPLING][sampler_node_id].get("parameters", {})
params["sampler"] = sampler_params.get("sampler_name")
# 3. Trace guider input for CFGGuider and CLIPTextEncode
guider_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "guider", max_depth=5)
if guider_node_id and guider_node_id in prompt.original_prompt:
# Check if the guider node is a CFGGuider
if prompt.original_prompt[guider_node_id].get("class_type") == "CFGGuider":
# Extract cfg value from the CFGGuider
if guider_node_id in metadata.get(SAMPLING, {}):
cfg_params = metadata[SAMPLING][guider_node_id].get("parameters", {})
params["cfg_scale"] = cfg_params.get("cfg")
# Find CLIPTextEncode for positive prompt
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "positive", "CLIPTextEncode", max_depth=10)
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")
# Find CLIPTextEncode for negative prompt
negative_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "negative", "CLIPTextEncode", max_depth=10)
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")
else:
positive_node_id = MetadataProcessor.trace_node_input(prompt, guider_node_id, "conditioning", max_depth=10)
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")
else:
# For standard samplers, match conditioning objects to prompts
prompt_results = MetadataProcessor.match_conditioning_to_prompts(metadata, primary_sampler_id)
params["prompt"] = prompt_results["prompt"]
params["negative_prompt"] = prompt_results["negative_prompt"]
# If prompts were still not found, fall back to tracing connections
if not params["prompt"]:
# Original tracing for standard samplers
# Trace positive prompt - look specifically for CLIPTextEncode
positive_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "positive", max_depth=10)
if positive_node_id and positive_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_node_id].get("text", "")
else:
# If CLIPTextEncode is not found, try to find CLIPTextEncodeFlux
positive_flux_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "positive", "CLIPTextEncodeFlux", max_depth=10)
if positive_flux_node_id and positive_flux_node_id in metadata.get(PROMPTS, {}):
params["prompt"] = metadata[PROMPTS][positive_flux_node_id].get("text", "")
# Trace negative prompt - look specifically for CLIPTextEncode
negative_node_id = MetadataProcessor.trace_node_input(prompt, primary_sampler_id, "negative", max_depth=10)
if negative_node_id and negative_node_id in metadata.get(PROMPTS, {}):
params["negative_prompt"] = metadata[PROMPTS][negative_node_id].get("text", "")
# Size extraction is same for all sampler types
# Check if the sampler itself has size information (from latent_image)
if primary_sampler_id in metadata.get(SIZE, {}):
width = metadata[SIZE][primary_sampler_id].get("width")
height = metadata[SIZE][primary_sampler_id].get("height")
if width and height:
params["size"] = f"{width}x{height}"
# Extract LoRAs using the standardized format
lora_parts = []
@@ -227,9 +408,19 @@ class MetadataProcessor:
return params
@staticmethod
def to_dict(metadata, id=None):
"""
Convert extracted metadata to the ComfyUI output.json format
Parameters:
- metadata: The workflow metadata
- id: Optional ID of a downstream node to help identify the specific primary sampler
"""
if standalone_mode:
# Return empty dictionary in standalone mode
return {}
params = MetadataProcessor.extract_generation_params(metadata, id)
# Convert all values to strings to match output.json format
for key in params:
@@ -239,7 +430,7 @@ class MetadataProcessor:
return params
@staticmethod
def to_json(metadata, id=None):
"""Convert metadata to JSON string"""
params = MetadataProcessor.to_dict(metadata, id)
return json.dumps(params, indent=4)
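# Hypothetical end-to-end use from a node's execution context (the node id is
# illustrative):
#
#   from ..metadata_collector import get_metadata
#   json_str = MetadataProcessor.to_json(get_metadata(), id="12")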

View File

@@ -1,6 +1,6 @@
import os
from .constants import MODELS, PROMPTS, SAMPLING, LORAS, SIZE, IMAGES, IS_SAMPLER
class NodeMetadataExtractor:
@@ -35,7 +35,70 @@ class CheckpointLoaderExtractor(NodeMetadataExtractor):
"type": "checkpoint",
"node_id": node_id
}
class TSCCheckpointLoaderExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "ckpt_name" not in inputs:
return
model_name = inputs.get("ckpt_name")
if model_name:
metadata[MODELS][node_id] = {
"name": model_name,
"type": "checkpoint",
"node_id": node_id
}
# For loader nodes that have a lora_stack input, like Efficient Loader from Efficiency Nodes
active_loras = []
# Process lora_stack if available
if "lora_stack" in inputs:
lora_stack = inputs.get("lora_stack", [])
for lora_path, model_strength, clip_strength in lora_stack:
# Extract lora name from path (following the format in lora_loader.py)
lora_name = os.path.splitext(os.path.basename(lora_path))[0]
active_loras.append({
"name": lora_name,
"strength": model_strength
})
if active_loras:
metadata[LORAS][node_id] = {
"lora_list": active_loras,
"node_id": node_id
}
# Extract positive and negative prompt text if available
positive_text = inputs.get("positive", "")
negative_text = inputs.get("negative", "")
if positive_text or negative_text:
if node_id not in metadata[PROMPTS]:
metadata[PROMPTS][node_id] = {"node_id": node_id}
# Store both positive and negative text
metadata[PROMPTS][node_id]["positive_text"] = positive_text
metadata[PROMPTS][node_id]["negative_text"] = negative_text
@staticmethod
def update(node_id, outputs, metadata):
# Handle conditioning outputs from TSC_EfficientLoader
# outputs is a list with [(model, positive_encoded, negative_encoded, {"samples":latent}, vae, clip, dependencies,)]
if outputs and isinstance(outputs, list) and len(outputs) > 0:
first_output = outputs[0]
if isinstance(first_output, tuple) and len(first_output) >= 3:
positive_conditioning = first_output[1]
negative_conditioning = first_output[2]
# Save both conditioning objects in metadata
if node_id not in metadata[PROMPTS]:
metadata[PROMPTS][node_id] = {"node_id": node_id}
metadata[PROMPTS][node_id]["positive_encoded"] = positive_conditioning
metadata[PROMPTS][node_id]["negative_encoded"] = negative_conditioning
class CLIPTextEncodeExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
@@ -47,6 +110,13 @@ class CLIPTextEncodeExtractor(NodeMetadataExtractor):
"text": text,
"node_id": node_id
}
@staticmethod
def update(node_id, outputs, metadata):
if outputs and isinstance(outputs, list) and len(outputs) > 0:
if isinstance(outputs[0], tuple) and len(outputs[0]) > 0:
conditioning = outputs[0][0]
metadata[PROMPTS][node_id]["conditioning"] = conditioning
class SamplerExtractor(NodeMetadataExtractor):
@staticmethod
@@ -61,8 +131,21 @@ class SamplerExtractor(NodeMetadataExtractor):
metadata[SAMPLING][node_id] = {
"parameters": sampling_params,
"node_id": node_id
"node_id": node_id,
IS_SAMPLER: True # Add sampler flag
}
# Store the conditioning objects directly in metadata for later matching
pos_conditioning = inputs.get("positive", None)
neg_conditioning = inputs.get("negative", None)
# Save conditioning objects in metadata for later matching
if pos_conditioning is not None or neg_conditioning is not None:
if node_id not in metadata[PROMPTS]:
metadata[PROMPTS][node_id] = {"node_id": node_id}
metadata[PROMPTS][node_id]["pos_conditioning"] = pos_conditioning
metadata[PROMPTS][node_id]["neg_conditioning"] = neg_conditioning
# Extract latent image dimensions if available
if "latent_image" in inputs and inputs["latent_image"] is not None:
@@ -98,9 +181,22 @@ class KSamplerAdvancedExtractor(NodeMetadataExtractor):
metadata[SAMPLING][node_id] = {
"parameters": sampling_params,
"node_id": node_id
"node_id": node_id,
IS_SAMPLER: True # Add sampler flag
}
# Store the conditioning objects directly in metadata for later matching
pos_conditioning = inputs.get("positive", None)
neg_conditioning = inputs.get("negative", None)
# Save conditioning objects in metadata for later matching
if pos_conditioning is not None or neg_conditioning is not None:
if node_id not in metadata[PROMPTS]:
metadata[PROMPTS][node_id] = {"node_id": node_id}
metadata[PROMPTS][node_id]["pos_conditioning"] = pos_conditioning
metadata[PROMPTS][node_id]["neg_conditioning"] = neg_conditioning
# Extract latent image dimensions if available
if "latent_image" in inputs and inputs["latent_image"] is not None:
latent = inputs["latent_image"]
@@ -122,6 +218,81 @@ class KSamplerAdvancedExtractor(NodeMetadataExtractor):
"node_id": node_id
}
class TSCSamplerBaseExtractor(NodeMetadataExtractor):
"""Base extractor for handling TSC sampler node outputs"""
@staticmethod
def extract(node_id, inputs, outputs, metadata):
# Store vae_decode setting for later use in update
if inputs and "vae_decode" in inputs:
if SAMPLING not in metadata:
metadata[SAMPLING] = {}
if node_id not in metadata[SAMPLING]:
metadata[SAMPLING][node_id] = {"parameters": {}, "node_id": node_id}
# Store the vae_decode setting
metadata[SAMPLING][node_id]["vae_decode"] = inputs["vae_decode"]
@staticmethod
def update(node_id, outputs, metadata):
# Check if vae_decode was set to "true"
should_save_image = True
if SAMPLING in metadata and node_id in metadata[SAMPLING]:
vae_decode = metadata[SAMPLING][node_id].get("vae_decode")
if vae_decode is not None:
should_save_image = (vae_decode == "true")
# Skip image saving if vae_decode isn't "true"
if not should_save_image:
return
# Ensure IMAGES category exists
if IMAGES not in metadata:
metadata[IMAGES] = {}
# Extract output_images from the TSC sampler format
# outputs = [{"ui": {"images": preview_images}, "result": result}]
# where result = (original_model, original_positive, original_negative, latent_list, optional_vae, output_images,)
if outputs and isinstance(outputs, list) and len(outputs) > 0:
# Get the first item in the list
output_item = outputs[0]
if isinstance(output_item, dict) and "result" in output_item:
result = output_item["result"]
if isinstance(result, tuple) and len(result) >= 6:
# The output_images is the last element in the result tuple
output_images = (result[5],)
# Save image data under node ID index to be captured by caching mechanism
metadata[IMAGES][node_id] = {
"node_id": node_id,
"image": output_images
}
# Only set first_decode if it hasn't been recorded yet
if "first_decode" not in metadata[IMAGES]:
metadata[IMAGES]["first_decode"] = metadata[IMAGES][node_id]
class TSCKSamplerExtractor(SamplerExtractor, TSCSamplerBaseExtractor):
"""Extractor for TSC_KSampler nodes"""
@staticmethod
def extract(node_id, inputs, outputs, metadata):
# Call parent extract methods
SamplerExtractor.extract(node_id, inputs, outputs, metadata)
TSCSamplerBaseExtractor.extract(node_id, inputs, outputs, metadata)
# Update method is inherited from TSCSamplerBaseExtractor
class TSCKSamplerAdvancedExtractor(KSamplerAdvancedExtractor, TSCSamplerBaseExtractor):
"""Extractor for TSC_KSamplerAdvanced nodes"""
@staticmethod
def extract(node_id, inputs, outputs, metadata):
# Call parent extract methods
KSamplerAdvancedExtractor.extract(node_id, inputs, outputs, metadata)
TSCSamplerBaseExtractor.extract(node_id, inputs, outputs, metadata)
# Update method is inherited from TSCSamplerBaseExtractor
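# Pattern note: because these extractors use @staticmethod, super() cooperation
# is unavailable, so each combined class invokes both bases explicitly. The
# shape of that composition, reduced to a toy example:
class _Base1:
    @staticmethod
    def extract(ctx):
        ctx.append("base1")

class _Base2:
    @staticmethod
    def extract(ctx):
        ctx.append("base2")

class _Combined(_Base1, _Base2):
    @staticmethod
    def extract(ctx):
        _Base1.extract(ctx)
        _Base2.extract(ctx)

_trace = []
_Combined.extract(_trace)
assert _trace == ["base1", "base2"]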
class LoraLoaderExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
@@ -257,23 +428,177 @@ class VAEDecodeExtractor(NodeMetadataExtractor):
if "first_decode" not in metadata[IMAGES]:
metadata[IMAGES]["first_decode"] = metadata[IMAGES][node_id]
class KSamplerSelectExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "sampler_name" not in inputs:
return
sampling_params = {}
if "sampler_name" in inputs:
sampling_params["sampler_name"] = inputs["sampler_name"]
metadata[SAMPLING][node_id] = {
"parameters": sampling_params,
"node_id": node_id,
IS_SAMPLER: False # Mark as non-primary sampler
}
class BasicSchedulerExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
sampling_params = {}
for key in ["scheduler", "steps", "denoise"]:
if key in inputs:
sampling_params[key] = inputs[key]
metadata[SAMPLING][node_id] = {
"parameters": sampling_params,
"node_id": node_id,
IS_SAMPLER: False # Mark as non-primary sampler
}
class SamplerCustomAdvancedExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs:
return
sampling_params = {}
# Handle noise.seed as seed
if "noise" in inputs and inputs["noise"] is not None and hasattr(inputs["noise"], "seed"):
noise = inputs["noise"]
sampling_params["seed"] = noise.seed
metadata[SAMPLING][node_id] = {
"parameters": sampling_params,
"node_id": node_id,
IS_SAMPLER: True # Add sampler flag
}
# Extract latent image dimensions if available
if "latent_image" in inputs and inputs["latent_image"] is not None:
latent = inputs["latent_image"]
if isinstance(latent, dict) and "samples" in latent:
# Extract dimensions from latent tensor
samples = latent["samples"]
if hasattr(samples, "shape") and len(samples.shape) >= 3:
# Correct shape interpretation: [batch_size, channels, height/8, width/8]
# Multiply by 8 to get actual pixel dimensions
height = int(samples.shape[2] * 8)
width = int(samples.shape[3] * 8)
if SIZE not in metadata:
metadata[SIZE] = {}
metadata[SIZE][node_id] = {
"width": width,
"height": height,
"node_id": node_id
}
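# The x8 factor reflects the VAE's spatial downsampling in SD-family models: a
# latent shaped [batch, channels, H/8, W/8] maps back to pixel size by
# multiplying the last two dims by 8. Quick arithmetic check with a dummy shape:
latent_shape = (1, 4, 64, 96)  # illustrative, not from a real tensor
demo_height, demo_width = latent_shape[2] * 8, latent_shape[3] * 8
assert (demo_width, demo_height) == (768, 512)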
import json
class CLIPTextEncodeFluxExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "clip_l" not in inputs or "t5xxl" not in inputs:
return
clip_l_text = inputs.get("clip_l", "")
t5xxl_text = inputs.get("t5xxl", "")
# If both are empty, use empty string
if not clip_l_text and not t5xxl_text:
combined_text = ""
# If one is empty, use the non-empty one
elif not clip_l_text:
combined_text = t5xxl_text
elif not t5xxl_text:
combined_text = clip_l_text
# If both have content, use JSON format
else:
combined_text = json.dumps({
"T5": t5xxl_text,
"CLIP-L": clip_l_text
})
metadata[PROMPTS][node_id] = {
"text": combined_text,
"node_id": node_id
}
# Extract guidance value if available
if "guidance" in inputs:
guidance_value = inputs.get("guidance")
# Store the guidance value in SAMPLING category
if SAMPLING not in metadata:
metadata[SAMPLING] = {}
if node_id not in metadata[SAMPLING]:
metadata[SAMPLING][node_id] = {"parameters": {}, "node_id": node_id}
metadata[SAMPLING][node_id]["parameters"]["guidance"] = guidance_value
@staticmethod
def update(node_id, outputs, metadata):
if outputs and isinstance(outputs, list) and len(outputs) > 0:
if isinstance(outputs[0], tuple) and len(outputs[0]) > 0:
conditioning = outputs[0][0]
metadata[PROMPTS][node_id]["conditioning"] = conditioning
class CFGGuiderExtractor(NodeMetadataExtractor):
@staticmethod
def extract(node_id, inputs, outputs, metadata):
if not inputs or "cfg" not in inputs:
return
cfg_value = inputs.get("cfg")
# Store the cfg value in SAMPLING category
if SAMPLING not in metadata:
metadata[SAMPLING] = {}
if node_id not in metadata[SAMPLING]:
metadata[SAMPLING][node_id] = {"parameters": {}, "node_id": node_id}
metadata[SAMPLING][node_id]["parameters"]["cfg"] = cfg_value
# Registry of node-specific extractors
# Keys are node class names
NODE_EXTRACTORS = {
# Sampling
"KSampler": SamplerExtractor,
"KSamplerAdvanced": KSamplerAdvancedExtractor, # Add KSamplerAdvanced
"SamplerCustomAdvanced": SamplerExtractor, # Add SamplerCustomAdvanced
"KSamplerAdvanced": KSamplerAdvancedExtractor,
"SamplerCustomAdvanced": SamplerCustomAdvancedExtractor,
"TSC_KSampler": TSCKSamplerExtractor, # Efficient Nodes
"TSC_KSamplerAdvanced": TSCKSamplerAdvancedExtractor, # Efficient Nodes
# Sampling Selectors
"KSamplerSelect": KSamplerSelectExtractor, # Add KSamplerSelect
"BasicScheduler": BasicSchedulerExtractor, # Add BasicScheduler
# Loaders
"CheckpointLoaderSimple": CheckpointLoaderExtractor,
"comfyLoader": CheckpointLoaderExtractor, # easy comfyLoader
"TSC_EfficientLoader": TSCCheckpointLoaderExtractor, # Efficient Nodes
"UNETLoader": UNETLoaderExtractor, # Updated to use dedicated extractor
"UnetLoaderGGUF": UNETLoaderExtractor, # Updated to use dedicated extractor
"LoraLoader": LoraLoaderExtractor,
"LoraManagerLoader": LoraLoaderManagerExtractor,
# Conditioning
"CLIPTextEncode": CLIPTextEncodeExtractor,
"CLIPTextEncodeFlux": CLIPTextEncodeFluxExtractor, # Add CLIPTextEncodeFlux
"WAS_Text_to_Conditioning": CLIPTextEncodeExtractor,
"AdvancedCLIPTextEncode": CLIPTextEncodeExtractor, # From https://github.com/BlenderNeko/ComfyUI_ADV_CLIP_emb
# Latent
"EmptyLatentImage": ImageSizeExtractor,
# Flux
"FluxGuidance": FluxGuidanceExtractor, # Add FluxGuidance
"CFGGuider": CFGGuiderExtractor, # Add CFGGuider
# Image
"VAEDecode": VAEDecodeExtractor, # Added VAEDecode extractor
# Add other nodes as needed
}
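# Dispatch sketch: the collector can look a node's class_type up in this
# registry and skip unknown nodes (illustrative driver, not the actual hook):
def run_extractor(class_type, node_id, inputs, outputs, metadata):
    extractor = NODE_EXTRACTORS.get(class_type)
    if extractor is not None:
        extractor.extract(node_id, inputs, outputs, metadata)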

View File

@@ -1,4 +1,5 @@
import logging
from server import PromptServer # type: ignore
from ..metadata_collector.metadata_processor import MetadataProcessor
logger = logging.getLogger(__name__)
@@ -7,6 +8,7 @@ class DebugMetadata:
NAME = "Debug Metadata (LoraManager)"
CATEGORY = "Lora Manager/utils"
DESCRIPTION = "Debug node to verify metadata_processor functionality"
OUTPUT_NODE = True
@classmethod
def INPUT_TYPES(cls):
@@ -14,22 +16,30 @@ class DebugMetadata:
"required": {
"images": ("IMAGE",),
},
"hidden": {
"id": "UNIQUE_ID",
},
}
RETURN_TYPES = ("STRING",)
RETURN_NAMES = ("metadata_json",)
RETURN_TYPES = ()
FUNCTION = "process_metadata"
def process_metadata(self, images, id):
try:
# Get the current execution context's metadata
from ..metadata_collector import get_metadata
metadata = get_metadata()
# Use the MetadataProcessor to convert it to JSON string
metadata_json = MetadataProcessor.to_json(metadata, id)
# Send metadata to frontend for display
PromptServer.instance.send_sync("metadata_update", {
"id": id,
"metadata": metadata_json
})
except Exception as e:
logger.error(f"Error processing metadata: {e}")
return ("{}",) # Return empty JSON object in case of error
return ()

View File

@@ -1,11 +1,8 @@
import logging
from nodes import LoraLoader
from comfy.comfy_types import IO # type: ignore
import asyncio
import os
from .utils import FlexibleOptionalInputType, any_type, get_lora_info, extract_lora_name, get_loras_list
logger = logging.getLogger(__name__)
@@ -32,48 +29,6 @@ class LoraManagerLoader:
RETURN_TYPES = ("MODEL", "CLIP", IO.STRING, IO.STRING)
RETURN_NAMES = ("MODEL", "CLIP", "trigger_words", "loaded_loras")
FUNCTION = "load_loras"
async def get_lora_info(self, lora_name):
"""Get the lora path and trigger words from cache"""
scanner = await LoraScanner.get_instance()
cache = await scanner.get_cached_data()
for item in cache.raw_data:
if item.get('file_name') == lora_name:
file_path = item.get('file_path')
if file_path:
for root in config.loras_roots:
root = root.replace(os.sep, '/')
if file_path.startswith(root):
relative_path = os.path.relpath(file_path, root).replace(os.sep, '/')
# Get trigger words from civitai metadata
civitai = item.get('civitai', {})
trigger_words = civitai.get('trainedWords', []) if civitai else []
return relative_path, trigger_words
return lora_name, [] # Fallback if not found
def extract_lora_name(self, lora_path):
"""Extract the lora name from a lora path (e.g., 'IL\\aorunIllstrious.safetensors' -> 'aorunIllstrious')"""
# Get the basename without extension
basename = os.path.basename(lora_path)
return os.path.splitext(basename)[0]
def _get_loras_list(self, kwargs):
"""Helper to extract loras list from either old or new kwargs format"""
if 'loras' not in kwargs:
return []
loras_data = kwargs['loras']
# Handle new format: {'loras': {'__value__': [...]}}
if isinstance(loras_data, dict) and '__value__' in loras_data:
return loras_data['__value__']
# Handle old format: {'loras': [...]}
elif isinstance(loras_data, list):
return loras_data
# Unexpected format
else:
logger.warning(f"Unexpected loras format: {type(loras_data)}")
return []
def load_loras(self, model, text, **kwargs):
"""Loads multiple LoRAs based on the kwargs input and lora_stack."""
@@ -89,27 +44,38 @@ class LoraManagerLoader:
model, clip = LoraLoader().load_lora(model, clip, lora_path, model_strength, clip_strength)
# Extract lora name for trigger words lookup
lora_name = extract_lora_name(lora_path)
_, trigger_words = asyncio.run(get_lora_info(lora_name))
all_trigger_words.extend(trigger_words)
loaded_loras.append(f"{lora_name}: {model_strength}")
# Add clip strength to output if different from model strength
if abs(model_strength - clip_strength) > 0.001:
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
else:
loaded_loras.append(f"{lora_name}: {model_strength}")
# Then process loras from kwargs with support for both old and new formats
loras_list = get_loras_list(kwargs)
for lora in loras_list:
if not lora.get('active', False):
continue
lora_name = lora['name']
model_strength = float(lora['strength'])
# Get clip strength - use model strength as default if not specified
clip_strength = float(lora.get('clipStrength', model_strength))
# Get lora path and trigger words
lora_path, trigger_words = asyncio.run(get_lora_info(lora_name))
# Apply the LoRA using the resolved path with separate strengths
model, clip = LoraLoader().load_lora(model, clip, lora_path, model_strength, clip_strength)
# Include clip strength in output if different from model strength
if abs(model_strength - clip_strength) > 0.001:
loaded_loras.append(f"{lora_name}: {model_strength},{clip_strength}")
else:
loaded_loras.append(f"{lora_name}: {model_strength}")
# Add trigger words to collection
all_trigger_words.extend(trigger_words)
@@ -117,8 +83,23 @@ class LoraManagerLoader:
# use ',, ' to separate trigger words for group mode
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Format loaded_loras as <lora:lora_name:strength> separated by spaces
formatted_loras = " ".join([f"<lora:{name.split(':')[0].strip()}:{str(strength).strip()}>"
for name, strength in [item.split(':') for item in loaded_loras]])
# Format loaded_loras with support for both formats
formatted_loras = []
for item in loaded_loras:
parts = item.split(":")
lora_name = parts[0].strip()
strength_parts = parts[1].strip().split(",")
if len(strength_parts) > 1:
# Different model and clip strengths
model_str = strength_parts[0].strip()
clip_str = strength_parts[1].strip()
formatted_loras.append(f"<lora:{lora_name}:{model_str}:{clip_str}>")
else:
# Same strength for both
model_str = strength_parts[0].strip()
formatted_loras.append(f"<lora:{lora_name}:{model_str}>")
formatted_loras_text = " ".join(formatted_loras)
return (model, clip, trigger_words_text, formatted_loras_text)
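# The emitted tags use the A1111-style syntax <lora:name:model[:clip]>. A
# companion parser for that format (the helper below is illustrative, not part
# of the plugin):
import re

def parse_lora_tags(text):
    tags = []
    for name, model, _, clip in re.findall(r"<lora:([^:>]+):([\d.]+)(:([\d.]+))?>", text):
        tags.append((name, float(model), float(clip) if clip else float(model)))
    return tags

print(parse_lora_tags("<lora:foo:0.8> <lora:bar:0.6:0.4>"))
# -> [('foo', 0.8, 0.8), ('bar', 0.6, 0.4)]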

View File

@@ -3,7 +3,7 @@ from ..services.lora_scanner import LoraScanner
from ..config import config
import asyncio
import os
from .utils import FlexibleOptionalInputType, any_type, get_lora_info, extract_lora_name, get_loras_list
import logging
logger = logging.getLogger(__name__)
@@ -29,48 +29,6 @@ class LoraStacker:
RETURN_TYPES = ("LORA_STACK", IO.STRING, IO.STRING)
RETURN_NAMES = ("LORA_STACK", "trigger_words", "active_loras")
FUNCTION = "stack_loras"
async def get_lora_info(self, lora_name):
"""Get the lora path and trigger words from cache"""
scanner = await LoraScanner.get_instance()
cache = await scanner.get_cached_data()
for item in cache.raw_data:
if item.get('file_name') == lora_name:
file_path = item.get('file_path')
if file_path:
for root in config.loras_roots:
root = root.replace(os.sep, '/')
if file_path.startswith(root):
relative_path = os.path.relpath(file_path, root).replace(os.sep, '/')
# Get trigger words from civitai metadata
civitai = item.get('civitai', {})
trigger_words = civitai.get('trainedWords', []) if civitai else []
return relative_path, trigger_words
return lora_name, [] # Fallback if not found
def extract_lora_name(self, lora_path):
"""Extract the lora name from a lora path (e.g., 'IL\\aorunIllstrious.safetensors' -> 'aorunIllstrious')"""
# Get the basename without extension
basename = os.path.basename(lora_path)
return os.path.splitext(basename)[0]
def _get_loras_list(self, kwargs):
"""Helper to extract loras list from either old or new kwargs format"""
if 'loras' not in kwargs:
return []
loras_data = kwargs['loras']
# Handle new format: {'loras': {'__value__': [...]}}
if isinstance(loras_data, dict) and '__value__' in loras_data:
return loras_data['__value__']
# Handle old format: {'loras': [...]}
elif isinstance(loras_data, list):
return loras_data
# Unexpected format
else:
logger.warning(f"Unexpected loras format: {type(loras_data)}")
return []
def stack_loras(self, text, **kwargs):
"""Stacks multiple LoRAs based on the kwargs input without loading them."""
@@ -80,39 +38,49 @@ class LoraStacker:
# Process existing lora_stack if available
lora_stack = kwargs.get('lora_stack', None)
if lora_stack:
stack.extend(lora_stack)
# Get trigger words from existing stack entries
for lora_path, _, _ in lora_stack:
lora_name = extract_lora_name(lora_path)
_, trigger_words = asyncio.run(get_lora_info(lora_name))
all_trigger_words.extend(trigger_words)
# Process loras from kwargs with support for both old and new formats
loras_list = get_loras_list(kwargs)
for lora in loras_list:
if not lora.get('active', False):
continue
lora_name = lora['name']
model_strength = float(lora['strength'])
# Get clip strength - use model strength as default if not specified
clip_strength = float(lora.get('clipStrength', model_strength))
# Get lora path and trigger words
lora_path, trigger_words = asyncio.run(get_lora_info(lora_name))
# Add to stack without loading
# replace '/' with os.sep to avoid different OS path format
stack.append((lora_path.replace('/', os.sep), model_strength, clip_strength))
active_loras.append((lora_name, model_strength, clip_strength))
# Add trigger words to collection
all_trigger_words.extend(trigger_words)
# use ',, ' to separate trigger words for group mode
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Format active_loras with support for both formats
formatted_loras = []
for name, model_strength, clip_strength in active_loras:
if abs(model_strength - clip_strength) > 0.001:
# Different model and clip strengths
formatted_loras.append(f"<lora:{name}:{str(model_strength).strip()}:{str(clip_strength).strip()}>")
else:
# Same strength for both
formatted_loras.append(f"<lora:{name}:{str(model_strength).strip()}>")
active_loras_text = " ".join(formatted_loras)
return (stack, trigger_words_text, active_loras_text)
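# A LORA_STACK is a list of (path, model_strength, clip_strength) tuples; a
# downstream loader applies them in order, e.g. (mirroring the loader node
# above; names are illustrative):
#
#   for lora_path, m_strength, c_strength in lora_stack:
#       model, clip = LoraLoader().load_lora(model, clip, lora_path, m_strength, c_strength)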

View File

@@ -5,6 +5,7 @@ import re
import numpy as np
import folder_paths # type: ignore
from ..services.lora_scanner import LoraScanner
from ..services.checkpoint_scanner import CheckpointScanner
from ..metadata_collector.metadata_processor import MetadataProcessor
from ..metadata_collector import get_metadata
from PIL import Image, PngImagePlugin
@@ -30,16 +31,36 @@ class SaveImage:
return {
"required": {
"images": ("IMAGE",),
"filename_prefix": ("STRING", {"default": "ComfyUI"}),
"file_format": (["png", "jpeg", "webp"],),
"filename_prefix": ("STRING", {
"default": "ComfyUI",
"tooltip": "Base filename for saved images. Supports format patterns like %seed%, %width%, %height%, %model%, etc."
}),
"file_format": (["png", "jpeg", "webp"], {
"tooltip": "Image format to save as. PNG preserves quality, JPEG is smaller, WebP balances size and quality."
}),
},
"optional": {
"lossless_webp": ("BOOLEAN", {"default": False}),
"quality": ("INT", {"default": 100, "min": 1, "max": 100}),
"embed_workflow": ("BOOLEAN", {"default": False}),
"add_counter_to_filename": ("BOOLEAN", {"default": True}),
"lossless_webp": ("BOOLEAN", {
"default": False,
"tooltip": "When enabled, saves WebP images with lossless compression. Results in larger files but no quality loss."
}),
"quality": ("INT", {
"default": 100,
"min": 1,
"max": 100,
"tooltip": "Compression quality for JPEG and lossy WebP formats (1-100). Higher values mean better quality but larger files."
}),
"embed_workflow": ("BOOLEAN", {
"default": False,
"tooltip": "Embeds the complete workflow data into the image metadata. Only works with PNG and WebP formats."
}),
"add_counter_to_filename": ("BOOLEAN", {
"default": True,
"tooltip": "Adds an incremental counter to filenames to prevent overwriting previous images."
}),
},
"hidden": {
"id": "UNIQUE_ID",
"prompt": "PROMPT",
"extra_pnginfo": "EXTRA_PNGINFO",
},
@@ -53,18 +74,55 @@ class SaveImage:
async def get_lora_hash(self, lora_name):
"""Get the lora hash from cache"""
scanner = await LoraScanner.get_instance()
# Use the new direct filename lookup method
hash_value = scanner.get_hash_by_filename(lora_name)
if hash_value:
return hash_value
# Fallback to old method for compatibility
cache = await scanner.get_cached_data()
for item in cache.raw_data:
if item.get('file_name') == lora_name:
return item.get('sha256')
return None
async def get_checkpoint_hash(self, checkpoint_path):
"""Get the checkpoint hash from cache"""
scanner = await CheckpointScanner.get_instance()
if not checkpoint_path:
return None
# Extract basename without extension
checkpoint_name = os.path.basename(checkpoint_path)
checkpoint_name = os.path.splitext(checkpoint_name)[0]
# Try direct filename lookup first
hash_value = scanner.get_hash_by_filename(checkpoint_name)
if hash_value:
return hash_value
# Fallback to old method for compatibility
cache = await scanner.get_cached_data()
normalized_path = checkpoint_path.replace('\\', '/')
for item in cache.raw_data:
if item.get('file_name') == checkpoint_name and item.get('file_path').endswith(normalized_path):
return item.get('sha256')
return None
async def format_metadata(self, metadata_dict):
"""Format metadata in the requested format similar to userComment example"""
if not metadata_dict:
return ""
# Helper function to only add parameter if value is not None
def add_param_if_not_none(param_list, label, value):
if value is not None:
param_list.append(f"{label}: {value}")
# Extract the prompt and negative prompt
prompt = metadata_dict.get('prompt', '')
negative_prompt = metadata_dict.get('negative_prompt', '')
@@ -100,7 +158,11 @@ class SaveImage:
# Add standard parameters in the correct order
if 'steps' in metadata_dict:
params.append(f"Steps: {metadata_dict.get('steps')}")
add_param_if_not_none(params, "Steps", metadata_dict.get('steps'))
# Combine sampler and scheduler information
sampler_name = None
scheduler_name = None
if 'sampler' in metadata_dict:
sampler = metadata_dict.get('sampler')
@@ -123,7 +185,6 @@ class SaveImage:
'ddim': 'DDIM'
}
sampler_name = sampler_mapping.get(sampler, sampler)
params.append(f"Sampler: {sampler_name}")
if 'scheduler' in metadata_dict:
scheduler = metadata_dict.get('scheduler')
@@ -135,44 +196,54 @@ class SaveImage:
'sgm_quadratic': 'SGM Quadratic'
}
scheduler_name = scheduler_mapping.get(scheduler, scheduler)
params.append(f"Schedule type: {scheduler_name}")
# CFG scale (cfg_scale in metadata_dict)
if 'cfg_scale' in metadata_dict:
params.append(f"CFG scale: {metadata_dict.get('cfg_scale')}")
# Add combined sampler and scheduler information
if sampler_name:
if scheduler_name:
params.append(f"Sampler: {sampler_name} {scheduler_name}")
else:
params.append(f"Sampler: {sampler_name}")
# CFG scale (Use guidance if available, otherwise fall back to cfg_scale or cfg)
if 'guidance' in metadata_dict:
add_param_if_not_none(params, "CFG scale", metadata_dict.get('guidance'))
elif 'cfg_scale' in metadata_dict:
add_param_if_not_none(params, "CFG scale", metadata_dict.get('cfg_scale'))
elif 'cfg' in metadata_dict:
params.append(f"CFG scale: {metadata_dict.get('cfg')}")
add_param_if_not_none(params, "CFG scale", metadata_dict.get('cfg'))
# Seed
if 'seed' in metadata_dict:
params.append(f"Seed: {metadata_dict.get('seed')}")
add_param_if_not_none(params, "Seed", metadata_dict.get('seed'))
# Size
if 'size' in metadata_dict:
params.append(f"Size: {metadata_dict.get('size')}")
add_param_if_not_none(params, "Size", metadata_dict.get('size'))
# Model info
if 'checkpoint' in metadata_dict:
# Ensure checkpoint is a string before processing
checkpoint = metadata_dict.get('checkpoint')
if checkpoint is not None:
# Handle both string and other types safely
if isinstance(checkpoint, str):
else:
# Convert non-string to string
checkpoint = str(checkpoint)
# Get model hash
model_hash = await self.get_checkpoint_hash(checkpoint)
params.append(f"Model: {checkpoint}")
# Extract basename without path
checkpoint_name = os.path.basename(checkpoint)
# Remove extension if present
checkpoint_name = os.path.splitext(checkpoint_name)[0]
# Add model hash if available
if model_hash:
params.append(f"Model hash: {model_hash[:10]}, Model: {checkpoint_name}")
else:
params.append(f"Model: {checkpoint_name}")
# Add LoRA hashes if available
if lora_hashes:
lora_hash_parts = []
for lora_name, hash_value in lora_hashes.items():
lora_hash_parts.append(f"{lora_name}: {hash_value}")
lora_hash_parts.append(f"{lora_name}: {hash_value[:10]}")
if lora_hash_parts:
params.append(f"Lora hashes: \"{', '.join(lora_hash_parts)}\"")
@@ -249,14 +320,14 @@ class SaveImage:
return filename
def save_images(self, images, filename_prefix, file_format, id, prompt=None, extra_pnginfo=None,
lossless_webp=True, quality=100, embed_workflow=False, add_counter_to_filename=True):
"""Save images with metadata"""
results = []
# Get metadata using the metadata collector
raw_metadata = get_metadata()
metadata_dict = MetadataProcessor.to_dict(raw_metadata, id)
# Get or create metadata asynchronously
metadata = asyncio.run(self.format_metadata(metadata_dict))
@@ -284,7 +355,7 @@ class SaveImage:
if add_counter_to_filename:
# Use counter + i to ensure unique filenames for all images in batch
current_counter = counter + i
base_filename += f"_{current_counter:05}"
base_filename += f"_{current_counter:05}_"
# Set file extension and prepare saving parameters
if file_format == "png":
@@ -327,14 +398,23 @@ class SaveImage:
print(f"Error adding EXIF data: {e}")
img.save(file_path, format="JPEG", **save_kwargs)
elif file_format == "webp":
# For WebP, also use piexif for metadata
if metadata:
try:
exif_dict = {'Exif': {piexif.ExifIFD.UserComment: b'UNICODE\0' + metadata.encode('utf-16be')}}
exif_bytes = piexif.dump(exif_dict)
save_kwargs["exif"] = exif_bytes
except Exception as e:
print(f"Error adding EXIF data: {e}")
try:
# For WebP, use piexif for metadata
exif_dict = {}
if metadata:
exif_dict['Exif'] = {piexif.ExifIFD.UserComment: b'UNICODE\0' + metadata.encode('utf-16be')}
# Add workflow if needed
if embed_workflow and extra_pnginfo is not None:
workflow_json = json.dumps(extra_pnginfo["workflow"])
exif_dict['0th'] = {piexif.ImageIFD.ImageDescription: "Workflow:" + workflow_json}
exif_bytes = piexif.dump(exif_dict)
save_kwargs["exif"] = exif_bytes
except Exception as e:
print(f"Error adding EXIF data: {e}")
img.save(file_path, format="WEBP", **save_kwargs)
results.append({
@@ -348,7 +428,7 @@ class SaveImage:
return results
def process_image(self, images, filename_prefix="ComfyUI", file_format="png", prompt=None, extra_pnginfo=None,
def process_image(self, images, id, filename_prefix="ComfyUI", file_format="png", prompt=None, extra_pnginfo=None,
lossless_webp=True, quality=100, embed_workflow=False, add_counter_to_filename=True):
"""Process and save image with metadata"""
# Make sure the output directory exists
@@ -365,6 +445,7 @@ class SaveImage:
images,
filename_prefix,
file_format,
id,
prompt,
extra_pnginfo,
lossless_webp,


@@ -16,11 +16,18 @@ class TriggerWordToggle:
def INPUT_TYPES(cls):
return {
"required": {
"group_mode": ("BOOLEAN", {"default": True}),
"group_mode": ("BOOLEAN", {
"default": True,
"tooltip": "When enabled, treats each group of trigger words as a single toggleable unit."
}),
"default_active": ("BOOLEAN", {
"default": True,
"tooltip": "Sets the default initial state (active or inactive) when trigger words are added."
}),
},
"optional": FlexibleOptionalInputType(any_type),
"hidden": {
"id": "UNIQUE_ID", # 会被 ComfyUI 自动替换为唯一ID
"id": "UNIQUE_ID",
},
}
@@ -41,17 +48,11 @@ class TriggerWordToggle:
else:
return data
def process_trigger_words(self, id, group_mode, **kwargs):
def process_trigger_words(self, id, group_mode, default_active, **kwargs):
# Handle both old and new formats for trigger_words
trigger_words_data = self._get_toggle_data(kwargs, 'trigger_words')
trigger_words_data = self._get_toggle_data(kwargs, 'orinalMessage')
trigger_words = trigger_words_data if isinstance(trigger_words_data, str) else ""
# Send trigger words to frontend
PromptServer.instance.send_sync("trigger_word_update", {
"id": id,
"message": trigger_words
})
filtered_triggers = trigger_words
# Get toggle data with support for both formats


@@ -30,4 +30,55 @@ class FlexibleOptionalInputType(dict):
return True
any_type = AnyType("*")
any_type = AnyType("*")
# Common methods extracted from lora_loader.py and lora_stacker.py
import os
import logging
import asyncio
from ..services.lora_scanner import LoraScanner
from ..config import config
logger = logging.getLogger(__name__)
async def get_lora_info(lora_name):
"""Get the lora path and trigger words from cache"""
scanner = await LoraScanner.get_instance()
cache = await scanner.get_cached_data()
for item in cache.raw_data:
if item.get('file_name') == lora_name:
file_path = item.get('file_path')
if file_path:
for root in config.loras_roots:
root = root.replace(os.sep, '/')
if file_path.startswith(root):
relative_path = os.path.relpath(file_path, root).replace(os.sep, '/')
# Get trigger words from civitai metadata
civitai = item.get('civitai', {})
trigger_words = civitai.get('trainedWords', []) if civitai else []
return relative_path, trigger_words
return lora_name, [] # Fallback if not found
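A minimal usage sketch, assuming the lora is present in the scanner cache (names illustrative):
    relative_path, trigger_words = await get_lora_info('aorunIllstrious')
    # e.g. ('IL/aorunIllstrious.safetensors', ['trigger phrase']) when found,
    # or ('aorunIllstrious', []) as the fallback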
def extract_lora_name(lora_path):
"""Extract the lora name from a lora path (e.g., 'IL\\aorunIllstrious.safetensors' -> 'aorunIllstrious')"""
# Get the basename without extension
basename = os.path.basename(lora_path)
return os.path.splitext(basename)[0]
def get_loras_list(kwargs):
"""Helper to extract loras list from either old or new kwargs format"""
if 'loras' not in kwargs:
return []
loras_data = kwargs['loras']
# Handle new format: {'loras': {'__value__': [...]}}
if isinstance(loras_data, dict) and '__value__' in loras_data:
return loras_data['__value__']
# Handle old format: {'loras': [...]}
elif isinstance(loras_data, list):
return loras_data
# Unexpected format
else:
logger.warning(f"Unexpected loras format: {type(loras_data)}")
return []
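All three calls below are handled (the entry keys are illustrative):
    get_loras_list({'loras': {'__value__': [{'name': 'myLora', 'strength': 0.8}]}})  # new format
    get_loras_list({'loras': [{'name': 'myLora', 'strength': 0.8}]})                 # old format
    get_loras_list({})  # no 'loras' key -> []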

py/recipes/__init__.py (new file, 24 lines)

@@ -0,0 +1,24 @@
"""Recipe metadata parser package for ComfyUI-Lora-Manager."""
from .base import RecipeMetadataParser
from .factory import RecipeParserFactory
from .constants import GEN_PARAM_KEYS, VALID_LORA_TYPES
from .parsers import (
RecipeFormatParser,
ComfyMetadataParser,
MetaFormatParser,
AutomaticMetadataParser,
CivitaiApiMetadataParser
)
__all__ = [
'RecipeMetadataParser',
'RecipeParserFactory',
'GEN_PARAM_KEYS',
'VALID_LORA_TYPES',
'RecipeFormatParser',
'ComfyMetadataParser',
'MetaFormatParser',
'AutomaticMetadataParser',
'CivitaiApiMetadataParser'
]

py/recipes/base.py (new file, 184 lines)

@@ -0,0 +1,184 @@
"""Base classes for recipe parsers."""
import json
import logging
import os
import re
from typing import Dict, List, Any, Optional, Tuple
from abc import ABC, abstractmethod
from ..config import config
from ..utils.constants import VALID_LORA_TYPES
logger = logging.getLogger(__name__)
class RecipeMetadataParser(ABC):
"""Interface for parsing recipe metadata from image user comments"""
METADATA_MARKER = None
@abstractmethod
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the metadata format"""
pass
@abstractmethod
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""
Parse metadata from user comment and return structured recipe data
Args:
user_comment: The EXIF UserComment string from the image
recipe_scanner: Optional recipe scanner instance for local LoRA lookup
civitai_client: Optional Civitai client for fetching model information
Returns:
Dict containing parsed recipe data with standardized format
"""
pass
async def populate_lora_from_civitai(self, lora_entry: Dict[str, Any], civitai_info_tuple: Tuple[Dict[str, Any], Optional[str]],
recipe_scanner=None, base_model_counts=None, hash_value=None) -> Optional[Dict[str, Any]]:
"""
Populate a lora entry with information from Civitai API response
Args:
lora_entry: The lora entry to populate
civitai_info_tuple: The response tuple from Civitai API (data, error_msg)
recipe_scanner: Optional recipe scanner for local file lookup
base_model_counts: Optional dict to track base model counts
hash_value: Optional hash value to use if not available in civitai_info
Returns:
The populated lora_entry dict if type is valid, None otherwise
"""
try:
# Unpack the tuple to get the actual data
civitai_info, error_msg = civitai_info_tuple if isinstance(civitai_info_tuple, tuple) else (civitai_info_tuple, None)
if not civitai_info or civitai_info.get("error") == "Model not found":
# Model not found or deleted
lora_entry['isDeleted'] = True
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'
return lora_entry
# Get model type and validate
model_type = civitai_info.get('model', {}).get('type', '').lower()
lora_entry['type'] = model_type
if model_type not in VALID_LORA_TYPES:
logger.debug(f"Skipping non-LoRA model type: {model_type}")
return None
# Check if this is an early access lora
if civitai_info.get('earlyAccessEndsAt'):
# Convert earlyAccessEndsAt to a human-readable date
early_access_date = civitai_info.get('earlyAccessEndsAt', '')
lora_entry['isEarlyAccess'] = True
lora_entry['earlyAccessEndsAt'] = early_access_date
# Update model name if available
if 'model' in civitai_info and 'name' in civitai_info['model']:
lora_entry['name'] = civitai_info['model']['name']
lora_entry['id'] = civitai_info.get('id')
lora_entry['modelId'] = civitai_info.get('modelId')
# Update version if available
if 'name' in civitai_info:
lora_entry['version'] = civitai_info.get('name', '')
# Get thumbnail URL from first image
if 'images' in civitai_info and civitai_info['images']:
lora_entry['thumbnailUrl'] = civitai_info['images'][0].get('url', '')
# Get base model
current_base_model = civitai_info.get('baseModel', '')
lora_entry['baseModel'] = current_base_model
# Update base model counts if tracking them
if base_model_counts is not None and current_base_model:
base_model_counts[current_base_model] = base_model_counts.get(current_base_model, 0) + 1
# Get download URL
lora_entry['downloadUrl'] = civitai_info.get('downloadUrl', '')
# Process file information if available
if 'files' in civitai_info:
# Find the primary model file (type="Model" and primary=true) in the files list
model_file = next((file for file in civitai_info.get('files', [])
if file.get('type') == 'Model' and file.get('primary') == True), None)
if model_file:
# Get size
lora_entry['size'] = model_file.get('sizeKB', 0) * 1024
# Get SHA256 hash
sha256 = model_file.get('hashes', {}).get('SHA256', hash_value)
if sha256:
lora_entry['hash'] = sha256.lower()
# Check if exists locally
if recipe_scanner and lora_entry['hash']:
lora_scanner = recipe_scanner._lora_scanner
exists_locally = lora_scanner.has_lora_hash(lora_entry['hash'])
if exists_locally:
try:
local_path = lora_scanner.get_lora_path_by_hash(lora_entry['hash'])
lora_entry['existsLocally'] = True
lora_entry['localPath'] = local_path
lora_entry['file_name'] = os.path.splitext(os.path.basename(local_path))[0]
# Get thumbnail from local preview if available
lora_cache = await lora_scanner.get_cached_data()
lora_item = next((item for item in lora_cache.raw_data
if item['sha256'].lower() == lora_entry['hash'].lower()), None)
if lora_item and 'preview_url' in lora_item:
lora_entry['thumbnailUrl'] = config.get_preview_static_url(lora_item['preview_url'])
except Exception as e:
logger.error(f"Error getting local lora path: {e}")
else:
# For missing LoRAs, get file_name from model_file.name
file_name = model_file.get('name', '')
lora_entry['file_name'] = os.path.splitext(file_name)[0] if file_name else ''
except Exception as e:
logger.error(f"Error populating lora from Civitai info: {e}")
return lora_entry
async def populate_checkpoint_from_civitai(self, checkpoint: Dict[str, Any], civitai_info: Dict[str, Any]) -> Dict[str, Any]:
"""
Populate checkpoint information from Civitai API response
Args:
checkpoint: The checkpoint entry to populate
civitai_info: The response from Civitai API
Returns:
The populated checkpoint dict
"""
try:
if civitai_info and civitai_info.get("error") != "Model not found":
# Update model name if available
if 'model' in civitai_info and 'name' in civitai_info['model']:
checkpoint['name'] = civitai_info['model']['name']
# Update version if available
if 'name' in civitai_info:
checkpoint['version'] = civitai_info.get('name', '')
# Get thumbnail URL from first image
if 'images' in civitai_info and civitai_info['images']:
checkpoint['thumbnailUrl'] = civitai_info['images'][0].get('url', '')
# Get base model
checkpoint['baseModel'] = civitai_info.get('baseModel', '')
# Get download URL
checkpoint['downloadUrl'] = civitai_info.get('downloadUrl', '')
else:
# Model not found or deleted
checkpoint['isDeleted'] = True
except Exception as e:
logger.error(f"Error populating checkpoint from Civitai info: {e}")
return checkpoint

py/recipes/constants.py (new file, 16 lines)

@@ -0,0 +1,16 @@
"""Constants used across recipe parsers."""
# Import VALID_LORA_TYPES from utils.constants
from ..utils.constants import VALID_LORA_TYPES
# Constants for generation parameters
GEN_PARAM_KEYS = [
'prompt',
'negative_prompt',
'steps',
'sampler',
'cfg_scale',
'seed',
'size',
'clip_skip',
]

py/recipes/factory.py (new file, 64 lines)

@@ -0,0 +1,64 @@
"""Factory for creating recipe metadata parsers."""
import logging
from .parsers import (
RecipeFormatParser,
ComfyMetadataParser,
MetaFormatParser,
AutomaticMetadataParser,
CivitaiApiMetadataParser
)
from .base import RecipeMetadataParser
logger = logging.getLogger(__name__)
class RecipeParserFactory:
"""Factory for creating recipe metadata parsers"""
@staticmethod
def create_parser(metadata) -> RecipeMetadataParser:
"""
Create appropriate parser based on the metadata content
Args:
metadata: The metadata from the image (dict or str)
Returns:
Appropriate RecipeMetadataParser implementation
"""
# First, try CivitaiApiMetadataParser for dict input
if isinstance(metadata, dict):
try:
if CivitaiApiMetadataParser().is_metadata_matching(metadata):
return CivitaiApiMetadataParser()
except Exception as e:
logger.debug(f"CivitaiApiMetadataParser check failed: {e}")
# Convert dict to string for other parsers that expect string input
try:
import json
metadata_str = json.dumps(metadata)
except Exception as e:
logger.debug(f"Failed to convert dict to JSON string: {e}")
return None
else:
metadata_str = metadata
# Try ComfyMetadataParser which requires valid JSON
try:
if ComfyMetadataParser().is_metadata_matching(metadata_str):
return ComfyMetadataParser()
except Exception:
# If JSON parsing fails, move on to other parsers
pass
# Check other parsers that expect string input
if RecipeFormatParser().is_metadata_matching(metadata_str):
return RecipeFormatParser()
elif AutomaticMetadataParser().is_metadata_matching(metadata_str):
return AutomaticMetadataParser()
elif MetaFormatParser().is_metadata_matching(metadata_str):
return MetaFormatParser()
else:
return None
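A minimal usage sketch, assuming an async context and optional scanner/client instances:
    parser = RecipeParserFactory.create_parser(user_comment)
    if parser is not None:
        recipe = await parser.parse_metadata(
            user_comment, recipe_scanner=recipe_scanner, civitai_client=civitai_client
        )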


@@ -0,0 +1,15 @@
"""Recipe parsers package."""
from .recipe_format import RecipeFormatParser
from .comfy import ComfyMetadataParser
from .meta_format import MetaFormatParser
from .automatic import AutomaticMetadataParser
from .civitai_image import CivitaiApiMetadataParser
__all__ = [
'RecipeFormatParser',
'ComfyMetadataParser',
'MetaFormatParser',
'AutomaticMetadataParser',
'CivitaiApiMetadataParser',
]


@@ -0,0 +1,304 @@
"""Parser for Automatic1111 metadata format."""
import re
import json
import logging
from typing import Dict, Any
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
logger = logging.getLogger(__name__)
class AutomaticMetadataParser(RecipeMetadataParser):
"""Parser for Automatic1111 metadata format"""
METADATA_MARKER = r"Steps: \d+"
# Regular expressions for extracting specific metadata
HASHES_REGEX = r', Hashes:\s*({[^}]+})'
LORA_HASHES_REGEX = r', Lora hashes:\s*"([^"]+)"'
CIVITAI_RESOURCES_REGEX = r', Civitai resources:\s*(\[\{.*?\}\])'
CIVITAI_METADATA_REGEX = r', Civitai metadata:\s*(\{.*?\})'
EXTRANETS_REGEX = r'<(lora|hypernet):([a-zA-Z0-9_\.\-]+):([0-9.]+)>'
MODEL_HASH_PATTERN = r'Model hash: ([a-zA-Z0-9]+)'
VAE_HASH_PATTERN = r'VAE hash: ([a-zA-Z0-9]+)'
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the Automatic1111 format"""
return re.search(self.METADATA_MARKER, user_comment) is not None
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from Automatic1111 format"""
try:
# Split on Negative prompt if it exists
if "Negative prompt:" in user_comment:
parts = user_comment.split('Negative prompt:', 1)
prompt = parts[0].strip()
negative_and_params = parts[1] if len(parts) > 1 else ""
else:
# No negative prompt section
param_start = re.search(self.METADATA_MARKER, user_comment)
if param_start:
prompt = user_comment[:param_start.start()].strip()
negative_and_params = user_comment[param_start.start():]
else:
prompt = user_comment.strip()
negative_and_params = ""
# Initialize metadata
metadata = {
"prompt": prompt,
"loras": []
}
# Extract negative prompt and parameters
if negative_and_params:
# If we split on "Negative prompt:", check for params section
if "Negative prompt:" in user_comment:
param_start = re.search(r'Steps: ', negative_and_params)
if param_start:
neg_prompt = negative_and_params[:param_start.start()].strip()
metadata["negative_prompt"] = neg_prompt
params_section = negative_and_params[param_start.start():]
else:
metadata["negative_prompt"] = negative_and_params.strip()
params_section = ""
else:
# No negative prompt, entire section is params
params_section = negative_and_params
# Extract generation parameters
if params_section:
# Extract Civitai resources
civitai_resources_match = re.search(self.CIVITAI_RESOURCES_REGEX, params_section)
if civitai_resources_match:
try:
civitai_resources = json.loads(civitai_resources_match.group(1))
metadata["civitai_resources"] = civitai_resources
params_section = params_section.replace(civitai_resources_match.group(0), '')
except json.JSONDecodeError:
logger.error("Error parsing Civitai resources JSON")
# Extract Hashes
hashes_match = re.search(self.HASHES_REGEX, params_section)
if hashes_match:
try:
hashes = json.loads(hashes_match.group(1))
# Process hash keys
processed_hashes = {}
for key, value in hashes.items():
# Convert Model: or LORA: prefix to lowercase if present
if ':' in key:
prefix, name = key.split(':', 1)
prefix = prefix.lower()
else:
prefix = ''
name = key
# Clean up the name part
if '/' in name:
name = name.split('/')[-1] # Get last part after /
if '.safetensors' in name:
name = name.split('.safetensors')[0] # Remove .safetensors
# Reconstruct the key
new_key = f"{prefix}:{name}" if prefix else name
processed_hashes[new_key] = value
metadata["hashes"] = processed_hashes
# Remove hashes from params section to not interfere with other parsing
params_section = params_section.replace(hashes_match.group(0), '')
except json.JSONDecodeError:
logger.error("Error parsing hashes JSON")
# Extract Lora hashes in alternative format
lora_hashes_match = re.search(self.LORA_HASHES_REGEX, params_section)
if not hashes_match and lora_hashes_match:
try:
lora_hashes_str = lora_hashes_match.group(1)
lora_hash_entries = lora_hashes_str.split(', ')
# Initialize hashes dict if it doesn't exist
if "hashes" not in metadata:
metadata["hashes"] = {}
# Parse each lora hash entry (format: "name: hash")
for entry in lora_hash_entries:
if ': ' in entry:
lora_name, lora_hash = entry.split(': ', 1)
# Add as lora type in the same format as regular hashes
metadata["hashes"][f"lora:{lora_name}"] = lora_hash.strip()
# Remove lora hashes from params section
params_section = params_section.replace(lora_hashes_match.group(0), '')
except Exception as e:
logger.error(f"Error parsing Lora hashes: {e}")
# Extract basic parameters
param_pattern = r'([A-Za-z\s]+): ([^,]+)'
params = re.findall(param_pattern, params_section)
gen_params = {}
for key, value in params:
clean_key = key.strip().lower().replace(' ', '_')
# Skip if not in recognized gen param keys
if clean_key not in GEN_PARAM_KEYS:
continue
# Convert numeric values
if clean_key in ['steps', 'seed']:
try:
gen_params[clean_key] = int(value.strip())
except ValueError:
gen_params[clean_key] = value.strip()
elif clean_key in ['cfg_scale']:
try:
gen_params[clean_key] = float(value.strip())
except ValueError:
gen_params[clean_key] = value.strip()
else:
gen_params[clean_key] = value.strip()
# Extract size if available and add to gen_params if a recognized key
size_match = re.search(r'Size: (\d+)x(\d+)', params_section)
if size_match and 'size' in GEN_PARAM_KEYS:
width, height = size_match.groups()
gen_params['size'] = f"{width}x{height}"
# Add prompt and negative_prompt to gen_params if they're in GEN_PARAM_KEYS
if 'prompt' in GEN_PARAM_KEYS and 'prompt' in metadata:
gen_params['prompt'] = metadata['prompt']
if 'negative_prompt' in GEN_PARAM_KEYS and 'negative_prompt' in metadata:
gen_params['negative_prompt'] = metadata['negative_prompt']
metadata["gen_params"] = gen_params
# Extract LoRA information
loras = []
base_model_counts = {}
# First use Civitai resources if available (more reliable source)
if metadata.get("civitai_resources"):
for resource in metadata.get("civitai_resources", []):
if resource.get("type") in ["lora", "lycoris", "hypernet"] and resource.get("modelVersionId"):
# Initialize lora entry
lora_entry = {
'id': resource.get("modelVersionId", 0),
'modelId': resource.get("modelId", 0),
'name': resource.get("modelName", "Unknown LoRA"),
'version': resource.get("modelVersionName", ""),
'type': resource.get("type", "lora"),
'weight': round(float(resource.get("weight", 1.0)), 2),
'existsLocally': False,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Get additional info from Civitai
if civitai_client:
try:
civitai_info = await civitai_client.get_model_version_info(resource.get("modelVersionId"))
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA {lora_entry['name']}: {e}")
loras.append(lora_entry)
# If no LoRAs were found via Civitai resources, fall back to metadata["hashes"]
if not loras:
# Extract lora weights from extranet tags in prompt (for later use)
lora_weights = {}
lora_matches = re.findall(self.EXTRANETS_REGEX, prompt)
for lora_type, lora_name, lora_weight in lora_matches:
key = f"{lora_type}:{lora_name}"
lora_weights[key] = round(float(lora_weight), 2)
# Use hashes from metadata as the primary source
if metadata.get("hashes"):
for hash_key, lora_hash in metadata.get("hashes", {}).items():
# Only process lora or hypernet types
if not hash_key.startswith(("lora:", "hypernet:")):
continue
lora_type, lora_name = hash_key.split(':', 1)
# Get weight from extranet tags if available, else default to 1.0
weight = lora_weights.get(hash_key, 1.0)
# Initialize lora entry
lora_entry = {
'name': lora_name,
'type': lora_type, # 'lora' or 'hypernet'
'weight': weight,
'hash': lora_hash,
'existsLocally': False,
'localPath': None,
'file_name': lora_name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Try to get info from Civitai
if civitai_client:
try:
if lora_hash:
# If we have hash, use it for lookup
civitai_info = await civitai_client.get_model_by_hash(lora_hash)
else:
civitai_info = None
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
lora_hash
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA {lora_name}: {e}")
loras.append(lora_entry)
# Try to get base model from resources or make educated guess
base_model = None
if base_model_counts:
# Use the most common base model from the loras
base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
# Prepare final result structure
# Make sure gen_params only contains recognized keys
filtered_gen_params = {}
for key in GEN_PARAM_KEYS:
if key in metadata.get("gen_params", {}):
filtered_gen_params[key] = metadata["gen_params"][key]
result = {
'base_model': base_model,
'loras': loras,
'gen_params': filtered_gen_params,
'from_automatic_metadata': True
}
return result
except Exception as e:
logger.error(f"Error parsing Automatic1111 metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}


@@ -0,0 +1,248 @@
"""Parser for Civitai image metadata format."""
import json
import logging
from typing import Dict, Any, Union
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
logger = logging.getLogger(__name__)
class CivitaiApiMetadataParser(RecipeMetadataParser):
"""Parser for Civitai image metadata format"""
def is_metadata_matching(self, metadata) -> bool:
"""Check if the metadata matches the Civitai image metadata format
Args:
metadata: The metadata from the image (dict)
Returns:
bool: True if this parser can handle the metadata
"""
if not metadata or not isinstance(metadata, dict):
return False
# Check for key markers specific to Civitai image metadata
return any([
"resources" in metadata,
"civitaiResources" in metadata,
"additionalResources" in metadata
])
async def parse_metadata(self, metadata, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from Civitai image format
Args:
metadata: The metadata from the image (dict)
recipe_scanner: Optional recipe scanner service
civitai_client: Optional Civitai API client
Returns:
Dict containing parsed recipe data
"""
try:
# Initialize result structure
result = {
'base_model': None,
'loras': [],
'gen_params': {},
'from_civitai_image': True
}
# Extract prompt and negative prompt
if "prompt" in metadata:
result["gen_params"]["prompt"] = metadata["prompt"]
if "negativePrompt" in metadata:
result["gen_params"]["negative_prompt"] = metadata["negativePrompt"]
# Extract other generation parameters
param_mapping = {
"steps": "steps",
"sampler": "sampler",
"cfgScale": "cfg_scale",
"seed": "seed",
"Size": "size",
"clipSkip": "clip_skip",
}
for civitai_key, our_key in param_mapping.items():
if civitai_key in metadata and our_key in GEN_PARAM_KEYS:
result["gen_params"][our_key] = metadata[civitai_key]
# Extract base model information - directly if available
if "baseModel" in metadata:
result["base_model"] = metadata["baseModel"]
elif "Model hash" in metadata and civitai_client:
model_hash = metadata["Model hash"]
model_info = await civitai_client.get_model_by_hash(model_hash)
if model_info:
result["base_model"] = model_info.get("baseModel", "")
elif "Model" in metadata and isinstance(metadata.get("resources"), list):
# Try to find base model in resources
for resource in metadata.get("resources", []):
if resource.get("type") == "model" and resource.get("name") == metadata.get("Model"):
# This is likely the checkpoint model
if civitai_client and resource.get("hash"):
model_info = await civitai_client.get_model_by_hash(resource.get("hash"))
if model_info:
result["base_model"] = model_info.get("baseModel", "")
base_model_counts = {}
# Process standard resources array
if "resources" in metadata and isinstance(metadata["resources"], list):
for resource in metadata["resources"]:
# Modified to process resources without a type field as potential LoRAs
if resource.get("type", "lora") == "lora":
lora_entry = {
'name': resource.get("name", "Unknown LoRA"),
'type': "lora",
'weight': float(resource.get("weight", 1.0)),
'hash': resource.get("hash", ""),
'existsLocally': False,
'localPath': None,
'file_name': resource.get("name", "Unknown"),
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Try to get info from Civitai if hash is available
if lora_entry['hash'] and civitai_client:
try:
lora_hash = lora_entry['hash']
civitai_info = await civitai_client.get_model_by_hash(lora_hash)
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
lora_hash
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {lora_entry['hash']}: {e}")
result["loras"].append(lora_entry)
# Process civitaiResources array
if "civitaiResources" in metadata and isinstance(metadata["civitaiResources"], list):
for resource in metadata["civitaiResources"]:
# Modified to process resources without a type field as potential LoRAs
if resource.get("type") in ["lora", "lycoris"] or "type" not in resource:
# Initialize lora entry with the same structure as in automatic.py
lora_entry = {
'id': resource.get("modelVersionId", 0),
'modelId': resource.get("modelId", 0),
'name': resource.get("modelName", "Unknown LoRA"),
'version': resource.get("modelVersionName", ""),
'type': resource.get("type", "lora"),
'weight': round(float(resource.get("weight", 1.0)), 2),
'existsLocally': False,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Try to get info from Civitai if modelVersionId is available
if resource.get('modelVersionId') and civitai_client:
try:
version_id = str(resource.get('modelVersionId'))
# Use get_model_version_info instead of get_model_version
civitai_info, error = await civitai_client.get_model_version_info(version_id)
if error:
logger.warning(f"Error getting model version info: {error}")
continue
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for model version {resource.get('modelVersionId')}: {e}")
result["loras"].append(lora_entry)
# Process additionalResources array
if "additionalResources" in metadata and isinstance(metadata["additionalResources"], list):
for resource in metadata["additionalResources"]:
# Modified to process resources without a type field as potential LoRAs
if resource.get("type") in ["lora", "lycoris"] or "type" not in resource:
lora_type = resource.get("type", "lora")
name = resource.get("name", "")
# Extract ID from URN format if available
model_id = None
if name and "civitai:" in name:
parts = name.split("@")
if len(parts) > 1:
model_id = parts[1]
lora_entry = {
'name': name,
'type': lora_type,
'weight': float(resource.get("strength", 1.0)),
'hash': "",
'existsLocally': False,
'localPath': None,
'file_name': name,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# If we have a model ID and civitai client, try to get more info
if model_id and civitai_client:
try:
# Use get_model_version_info with the model ID
civitai_info, error = await civitai_client.get_model_version_info(model_id)
if error:
logger.warning(f"Error getting model version info: {error}")
else:
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for model ID {model_id}: {e}")
result["loras"].append(lora_entry)
# If base model wasn't found earlier, use the most common one from LoRAs
if not result["base_model"] and base_model_counts:
result["base_model"] = max(base_model_counts.items(), key=lambda x: x[1])[0]
return result
except Exception as e:
logger.error(f"Error parsing Civitai image metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}

py/recipes/parsers/comfy.py (new file, 216 lines)

@@ -0,0 +1,216 @@
"""Parser for ComfyUI metadata format."""
import re
import json
import logging
from typing import Dict, Any
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
logger = logging.getLogger(__name__)
class ComfyMetadataParser(RecipeMetadataParser):
"""Parser for Civitai ComfyUI metadata JSON format"""
METADATA_MARKER = r"class_type"
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the ComfyUI metadata format"""
try:
data = json.loads(user_comment)
# Check if it contains class_type nodes typical of ComfyUI workflow
return isinstance(data, dict) and any(isinstance(v, dict) and 'class_type' in v for v in data.values())
except (json.JSONDecodeError, TypeError):
return False
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from Civitai ComfyUI metadata format"""
try:
data = json.loads(user_comment)
loras = []
# Find all LoraLoader nodes
lora_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and v.get('class_type') == 'LoraLoader'}
if not lora_nodes:
return {"error": "No LoRA information found in this ComfyUI workflow", "loras": []}
# Process each LoraLoader node
for node_id, node in lora_nodes.items():
if 'inputs' not in node or 'lora_name' not in node['inputs']:
continue
lora_name = node['inputs'].get('lora_name', '')
# Parse the URN to extract model ID and version ID
# Format: "urn:air:sdxl:lora:civitai:1107767@1253442"
lora_id_match = re.search(r'civitai:(\d+)@(\d+)', lora_name)
if not lora_id_match:
continue
model_id = lora_id_match.group(1)
model_version_id = lora_id_match.group(2)
# Get strength from node inputs
weight = node['inputs'].get('strength_model', 1.0)
# Initialize lora entry with default values
lora_entry = {
'id': model_version_id,
'modelId': model_id,
'name': f"Lora {model_id}", # Default name
'version': '',
'type': 'lora',
'weight': weight,
'existsLocally': False,
'localPath': None,
'file_name': '',
'hash': '',
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Get additional info from Civitai if client is available
if civitai_client:
try:
civitai_info_tuple = await civitai_client.get_model_version_info(model_version_id)
# Populate lora entry with Civitai info
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info_tuple,
recipe_scanner
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA: {e}")
loras.append(lora_entry)
# Find checkpoint info
checkpoint_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and v.get('class_type') == 'CheckpointLoaderSimple'}
checkpoint = None
checkpoint_id = None
checkpoint_version_id = None
if checkpoint_nodes:
# Get the first checkpoint node
checkpoint_node = next(iter(checkpoint_nodes.values()))
if 'inputs' in checkpoint_node and 'ckpt_name' in checkpoint_node['inputs']:
checkpoint_name = checkpoint_node['inputs']['ckpt_name']
# Parse checkpoint URN
checkpoint_match = re.search(r'civitai:(\d+)@(\d+)', checkpoint_name)
if checkpoint_match:
checkpoint_id = checkpoint_match.group(1)
checkpoint_version_id = checkpoint_match.group(2)
checkpoint = {
'id': checkpoint_version_id,
'modelId': checkpoint_id,
'name': f"Checkpoint {checkpoint_id}",
'version': '',
'type': 'checkpoint'
}
# Get additional checkpoint info from Civitai
if civitai_client:
try:
civitai_info_tuple = await civitai_client.get_model_version_info(checkpoint_version_id)
civitai_info, _ = civitai_info_tuple if isinstance(civitai_info_tuple, tuple) else (civitai_info_tuple, None)
# Populate checkpoint with Civitai info
checkpoint = await self.populate_checkpoint_from_civitai(checkpoint, civitai_info)
except Exception as e:
logger.error(f"Error fetching Civitai info for checkpoint: {e}")
# Extract generation parameters
gen_params = {}
# First try to get from extraMetadata
if 'extraMetadata' in data:
try:
# extraMetadata is a JSON string that needs to be parsed
extra_metadata = json.loads(data['extraMetadata'])
# Map fields from extraMetadata to our standard format
mapping = {
'prompt': 'prompt',
'negativePrompt': 'negative_prompt',
'steps': 'steps',
'sampler': 'sampler',
'cfgScale': 'cfg_scale',
'seed': 'seed'
}
for src_key, dest_key in mapping.items():
if src_key in extra_metadata:
gen_params[dest_key] = extra_metadata[src_key]
# If size info is available, format as "width x height"
if 'width' in extra_metadata and 'height' in extra_metadata:
gen_params['size'] = f"{extra_metadata['width']}x{extra_metadata['height']}"
except Exception as e:
logger.error(f"Error parsing extraMetadata: {e}")
# If extraMetadata doesn't have all the info, try to get from nodes
if not gen_params or len(gen_params) < 3: # At least we want prompt, negative_prompt, and steps
# Find positive prompt node
positive_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and
v.get('class_type', '').endswith('CLIPTextEncode') and
v.get('_meta', {}).get('title') == 'Positive'}
if positive_nodes:
positive_node = next(iter(positive_nodes.values()))
if 'inputs' in positive_node and 'text' in positive_node['inputs']:
gen_params['prompt'] = positive_node['inputs']['text']
# Find negative prompt node
negative_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and
v.get('class_type', '').endswith('CLIPTextEncode') and
v.get('_meta', {}).get('title') == 'Negative'}
if negative_nodes:
negative_node = next(iter(negative_nodes.values()))
if 'inputs' in negative_node and 'text' in negative_node['inputs']:
gen_params['negative_prompt'] = negative_node['inputs']['text']
# Find KSampler node for other parameters
ksampler_nodes = {k: v for k, v in data.items() if isinstance(v, dict) and v.get('class_type') == 'KSampler'}
if ksampler_nodes:
ksampler_node = next(iter(ksampler_nodes.values()))
if 'inputs' in ksampler_node:
inputs = ksampler_node['inputs']
if 'sampler_name' in inputs:
gen_params['sampler'] = inputs['sampler_name']
if 'steps' in inputs:
gen_params['steps'] = inputs['steps']
if 'cfg' in inputs:
gen_params['cfg_scale'] = inputs['cfg']
if 'seed' in inputs:
gen_params['seed'] = inputs['seed']
# Determine base model from loras info
base_model = None
if loras:
# Use the most common base model from loras
base_models = [lora['baseModel'] for lora in loras if lora.get('baseModel')]
if base_models:
from collections import Counter
base_model_counts = Counter(base_models)
base_model = base_model_counts.most_common(1)[0][0]
return {
'base_model': base_model,
'loras': loras,
'checkpoint': checkpoint,
'gen_params': gen_params,
'from_comfy_metadata': True
}
except Exception as e:
logger.error(f"Error parsing ComfyUI metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}


@@ -0,0 +1,174 @@
"""Parser for meta format (Lora_N Model hash) metadata."""
import re
import logging
from typing import Dict, Any
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
logger = logging.getLogger(__name__)
class MetaFormatParser(RecipeMetadataParser):
"""Parser for images with meta format metadata (Lora_N Model hash format)"""
METADATA_MARKER = r'Lora_\d+ Model hash:'
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the metadata format"""
return re.search(self.METADATA_MARKER, user_comment, re.IGNORECASE | re.DOTALL) is not None
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from images with meta format metadata"""
try:
# Extract prompt and negative prompt
parts = user_comment.split('Negative prompt:', 1)
prompt = parts[0].strip()
# Initialize metadata
metadata = {"prompt": prompt, "loras": []}
# Extract negative prompt and parameters if available
if len(parts) > 1:
negative_and_params = parts[1]
# Extract negative prompt - everything until the first parameter (usually "Steps:")
param_start = re.search(r'([A-Za-z]+): ', negative_and_params)
if param_start:
neg_prompt = negative_and_params[:param_start.start()].strip()
metadata["negative_prompt"] = neg_prompt
params_section = negative_and_params[param_start.start():]
else:
params_section = negative_and_params
# Extract key-value parameters (Steps, Sampler, Seed, etc.)
param_pattern = r'([A-Za-z_0-9 ]+): ([^,]+)'
params = re.findall(param_pattern, params_section)
for key, value in params:
clean_key = key.strip().lower().replace(' ', '_')
metadata[clean_key] = value.strip()
# Extract LoRA information
# Pattern to match lora entries: Lora_0 Model name: ArtVador I.safetensors, Lora_0 Model hash: 08f7133a58, etc.
lora_pattern = r'Lora_(\d+) Model name: ([^,]+), Lora_\1 Model hash: ([^,]+), Lora_\1 Strength model: ([^,]+), Lora_\1 Strength clip: ([^,]+)'
lora_matches = re.findall(lora_pattern, user_comment)
# If the regular pattern doesn't match, try a more flexible approach
if not lora_matches:
# First find all Lora indices
lora_indices = set(re.findall(r'Lora_(\d+)', user_comment))
# For each index, extract the information
for idx in lora_indices:
lora_info = {}
# Extract model name
name_match = re.search(f'Lora_{idx} Model name: ([^,]+)', user_comment)
if name_match:
lora_info['name'] = name_match.group(1).strip()
# Extract model hash
hash_match = re.search(f'Lora_{idx} Model hash: ([^,]+)', user_comment)
if hash_match:
lora_info['hash'] = hash_match.group(1).strip()
# Extract strength model
strength_model_match = re.search(f'Lora_{idx} Strength model: ([^,]+)', user_comment)
if strength_model_match:
lora_info['strength_model'] = float(strength_model_match.group(1).strip())
# Extract strength clip
strength_clip_match = re.search(f'Lora_{idx} Strength clip: ([^,]+)', user_comment)
if strength_clip_match:
lora_info['strength_clip'] = float(strength_clip_match.group(1).strip())
# Only add if we have at least name and hash
if 'name' in lora_info and 'hash' in lora_info:
lora_matches.append((idx, lora_info['name'], lora_info['hash'],
str(lora_info.get('strength_model', 1.0)),
str(lora_info.get('strength_clip', 1.0))))
# Process LoRAs
base_model_counts = {}
loras = []
for match in lora_matches:
if len(match) == 5: # Regular pattern match
idx, name, hash_value, strength_model, strength_clip = match
else: # Flexible approach match
continue # Should not happen now
# Clean up the values
name = name.strip()
if name.endswith('.safetensors'):
name = name[:-12] # Remove .safetensors extension
hash_value = hash_value.strip()
weight = float(strength_model) # Use model strength as weight
# Initialize lora entry with default values
lora_entry = {
'name': name,
'type': 'lora',
'weight': weight,
'existsLocally': False,
'localPath': None,
'file_name': name,
'hash': hash_value,
'thumbnailUrl': '/loras_static/images/no-preview.png',
'baseModel': '',
'size': 0,
'downloadUrl': '',
'isDeleted': False
}
# Get info from Civitai by hash if available
if civitai_client and hash_value:
try:
civitai_info = await civitai_client.get_model_by_hash(hash_value)
# Populate lora entry with Civitai info
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info,
recipe_scanner,
base_model_counts,
hash_value
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA hash {hash_value}: {e}")
loras.append(lora_entry)
# Extract model information
model = None
if 'model' in metadata:
model = metadata['model']
# Set base_model to the most common one from civitai_info
base_model = None
if base_model_counts:
base_model = max(base_model_counts.items(), key=lambda x: x[1])[0]
# Extract generation parameters for recipe metadata
gen_params = {}
for key in GEN_PARAM_KEYS:
if key in metadata:
gen_params[key] = metadata.get(key, '')
# Try to extract size information if available
if 'width' in metadata and 'height' in metadata:
gen_params['size'] = f"{metadata['width']}x{metadata['height']}"
return {
'base_model': base_model,
'loras': loras,
'gen_params': gen_params,
'raw_metadata': metadata,
'from_meta_format': True
}
except Exception as e:
logger.error(f"Error parsing meta format metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}


@@ -0,0 +1,114 @@
"""Parser for dedicated recipe metadata format."""
import re
import json
import logging
from typing import Dict, Any
from ...config import config
from ..base import RecipeMetadataParser
from ..constants import GEN_PARAM_KEYS
logger = logging.getLogger(__name__)
class RecipeFormatParser(RecipeMetadataParser):
"""Parser for images with dedicated recipe metadata format"""
# Regular expression pattern for extracting recipe metadata
METADATA_MARKER = r'Recipe metadata: (\{.*\})'
def is_metadata_matching(self, user_comment: str) -> bool:
"""Check if the user comment matches the metadata format"""
return re.search(self.METADATA_MARKER, user_comment, re.IGNORECASE | re.DOTALL) is not None
async def parse_metadata(self, user_comment: str, recipe_scanner=None, civitai_client=None) -> Dict[str, Any]:
"""Parse metadata from images with dedicated recipe metadata format"""
try:
# Extract recipe metadata from user comment
try:
# Look for recipe metadata section
recipe_match = re.search(self.METADATA_MARKER, user_comment, re.IGNORECASE | re.DOTALL)
if not recipe_match:
recipe_metadata = None
else:
recipe_json = recipe_match.group(1)
recipe_metadata = json.loads(recipe_json)
except Exception as e:
logger.error(f"Error extracting recipe metadata: {e}")
recipe_metadata = None
if not recipe_metadata:
return {"error": "No recipe metadata found", "loras": []}
# Process the recipe metadata
loras = []
for lora in recipe_metadata.get('loras', []):
# Convert recipe lora format to frontend format
lora_entry = {
'id': int(lora.get('modelVersionId', 0)),
'name': lora.get('modelName', ''),
'version': lora.get('modelVersionName', ''),
'type': 'lora',
'weight': lora.get('strength', 1.0),
'file_name': lora.get('file_name', ''),
'hash': lora.get('hash', '')
}
# Check if this LoRA exists locally by SHA256 hash
if lora.get('hash') and recipe_scanner:
lora_scanner = recipe_scanner._lora_scanner
exists_locally = lora_scanner.has_lora_hash(lora['hash'])
if exists_locally:
lora_cache = await lora_scanner.get_cached_data()
lora_item = next((item for item in lora_cache.raw_data if item['sha256'].lower() == lora['hash'].lower()), None)
if lora_item:
lora_entry['existsLocally'] = True
lora_entry['localPath'] = lora_item['file_path']
lora_entry['file_name'] = lora_item['file_name']
lora_entry['size'] = lora_item['size']
lora_entry['thumbnailUrl'] = config.get_preview_static_url(lora_item['preview_url'])
else:
lora_entry['existsLocally'] = False
lora_entry['localPath'] = None
# Try to get additional info from Civitai if we have a model version ID
if lora.get('modelVersionId') and civitai_client:
try:
civitai_info_tuple = await civitai_client.get_model_version_info(lora['modelVersionId'])
# Populate lora entry with Civitai info
populated_entry = await self.populate_lora_from_civitai(
lora_entry,
civitai_info_tuple,
recipe_scanner,
None, # No need to track base model counts
lora['hash']
)
if populated_entry is None:
continue # Skip invalid LoRA types
lora_entry = populated_entry
except Exception as e:
logger.error(f"Error fetching Civitai info for LoRA: {e}")
lora_entry['thumbnailUrl'] = '/loras_static/images/no-preview.png'
loras.append(lora_entry)
logger.info(f"Found {len(loras)} loras in recipe metadata")
# Filter gen_params to only include recognized keys
filtered_gen_params = {}
if 'gen_params' in recipe_metadata:
for key, value in recipe_metadata['gen_params'].items():
if key in GEN_PARAM_KEYS:
filtered_gen_params[key] = value
return {
'base_model': recipe_metadata.get('base_model', ''),
'loras': loras,
'gen_params': filtered_gen_params,
'tags': recipe_metadata.get('tags', []),
'title': recipe_metadata.get('title', ''),
'from_recipe_metadata': True
}
except Exception as e:
logger.error(f"Error parsing recipe format metadata: {e}", exc_info=True)
return {"error": str(e), "loras": []}


@@ -3,16 +3,18 @@ import json
import logging
from aiohttp import web
from typing import Dict
from server import PromptServer # type: ignore
from ..utils.routes_common import ModelRouteUtils
from ..nodes.utils import get_lora_info
from ..config import config
from ..services.websocket_manager import ws_manager
from ..services.settings_manager import settings
import asyncio
from .update_routes import UpdateRoutes
from ..utils.constants import PREVIEW_EXTENSIONS, CARD_PREVIEW_WIDTH
from ..utils.constants import PREVIEW_EXTENSIONS, CARD_PREVIEW_WIDTH, VALID_LORA_TYPES
from ..utils.exif_utils import ExifUtils
from ..utils.metadata_manager import MetadataManager
from ..services.service_registry import ServiceRegistry
logger = logging.getLogger(__name__)
@@ -41,7 +43,9 @@ class ApiRoutes:
app.on_startup.append(lambda _: routes.initialize_services())
app.router.add_post('/api/delete_model', routes.delete_model)
app.router.add_post('/api/loras/exclude', routes.exclude_model) # Add new exclude endpoint
app.router.add_post('/api/fetch-civitai', routes.fetch_civitai)
app.router.add_post('/api/relink-civitai', routes.relink_civitai) # Add new relink endpoint
app.router.add_post('/api/replace_preview', routes.replace_preview)
app.router.add_get('/api/loras', routes.get_loras)
app.router.add_post('/api/fetch-all-civitai', routes.fetch_all_civitai)
@@ -53,7 +57,6 @@ class ApiRoutes:
app.router.add_get('/api/civitai/model/version/{modelVersionId}', routes.get_civitai_model_by_version)
app.router.add_get('/api/civitai/model/hash/{hash}', routes.get_civitai_model_by_hash)
app.router.add_post('/api/download-lora', routes.download_lora)
app.router.add_post('/api/settings', routes.update_settings)
app.router.add_post('/api/move_model', routes.move_model)
app.router.add_get('/api/lora-model-description', routes.get_lora_model_description) # Add new route
app.router.add_post('/api/loras/save-metadata', routes.save_metadata)
@@ -62,23 +65,63 @@ class ApiRoutes:
app.router.add_get('/api/loras/top-tags', routes.get_top_tags) # Add new route for top tags
app.router.add_get('/api/loras/base-models', routes.get_base_models) # Add new route for base models
app.router.add_get('/api/lora-civitai-url', routes.get_lora_civitai_url) # Add new route for Civitai URL
app.router.add_post('/api/rename_lora', routes.rename_lora) # Add new route for renaming LoRA files
app.router.add_post('/api/loras/rename', routes.rename_lora) # Add new route for renaming LoRA files
app.router.add_get('/api/loras/scan', routes.scan_loras) # Add new route for scanning LoRA files
# Add the new trigger words route
app.router.add_post('/loramanager/get_trigger_words', routes.get_trigger_words)
# Add new endpoint for letter counts
app.router.add_get('/api/loras/letter-counts', routes.get_letter_counts)
# Add new endpoints for copying lora data
app.router.add_get('/api/loras/get-notes', routes.get_lora_notes)
app.router.add_get('/api/loras/get-trigger-words', routes.get_lora_trigger_words)
# Add update check routes
UpdateRoutes.setup_routes(app)
# Add new endpoints for finding duplicates
app.router.add_get('/api/loras/find-duplicates', routes.find_duplicate_loras)
app.router.add_get('/api/loras/find-filename-conflicts', routes.find_filename_conflicts)
# Add new endpoint for bulk deleting loras
app.router.add_post('/api/loras/bulk-delete', routes.bulk_delete_loras)
# Add new endpoint for verifying duplicates
app.router.add_post('/api/loras/verify-duplicates', routes.verify_duplicates)
async def delete_model(self, request: web.Request) -> web.Response:
"""Handle model deletion request"""
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
return await ModelRouteUtils.handle_delete_model(request, self.scanner)
async def exclude_model(self, request: web.Request) -> web.Response:
"""Handle model exclusion request"""
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
return await ModelRouteUtils.handle_exclude_model(request, self.scanner)
async def fetch_civitai(self, request: web.Request) -> web.Response:
"""Handle CivitAI metadata fetch request"""
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
return await ModelRouteUtils.handle_fetch_civitai(request, self.scanner)
response = await ModelRouteUtils.handle_fetch_civitai(request, self.scanner)
# If successful, format the metadata before returning
if response.status == 200:
data = json.loads(response.body.decode('utf-8'))
if data.get("success") and data.get("metadata"):
formatted_metadata = self._format_lora_response(data["metadata"])
return web.json_response({
"success": True,
"metadata": formatted_metadata
})
# Otherwise, return the original response
return response
async def replace_preview(self, request: web.Request) -> web.Response:
"""Handle preview image replacement request"""
@@ -88,8 +131,11 @@ class ApiRoutes:
async def scan_loras(self, request: web.Request) -> web.Response:
"""Force a rescan of LoRA files"""
try:
await self.scanner.get_cached_data(force_refresh=True)
try:
# Get full_rebuild parameter from query string, default to false
full_rebuild = request.query.get('full_rebuild', 'false').lower() == 'true'
await self.scanner.get_cached_data(force_refresh=True, rebuild_cache=full_rebuild)
return web.json_response({"status": "success", "message": "LoRA scan completed"})
except Exception as e:
logger.error(f"Error in scan_loras: {e}", exc_info=True)
@@ -120,6 +166,10 @@ class ApiRoutes:
# Get filter parameters
base_models = request.query.get('base_models', None)
tags = request.query.get('tags', None)
favorites_only = request.query.get('favorites_only', 'false').lower() == 'true' # New parameter
# New parameter for alphabet filtering
first_letter = request.query.get('first_letter', None)
# New parameters for recipe filtering
lora_hash = request.query.get('lora_hash', None)
@@ -150,7 +200,9 @@ class ApiRoutes:
base_models=filters.get('base_model', None),
tags=filters.get('tags', None),
search_options=search_options,
hash_filters=hash_filters
hash_filters=hash_filters,
favorites_only=favorites_only, # Pass favorites_only parameter
first_letter=first_letter # Pass the new first_letter parameter
)
# Get all available folders from cache
@@ -190,69 +242,10 @@ class ApiRoutes:
"from_civitai": lora.get("from_civitai", True),
"usage_tips": lora.get("usage_tips", ""),
"notes": lora.get("notes", ""),
"favorite": lora.get("favorite", False), # Include favorite status in response
"civitai": ModelRouteUtils.filter_civitai_data(lora.get("civitai", {}))
}
# Private helper methods
async def _read_preview_file(self, reader) -> tuple[bytes, str]:
"""Read preview file and content type from multipart request"""
field = await reader.next()
if field.name != 'preview_file':
raise ValueError("Expected 'preview_file' field")
content_type = field.headers.get('Content-Type', 'image/png')
return await field.read(), content_type
async def _read_model_path(self, reader) -> str:
"""Read model path from multipart request"""
field = await reader.next()
if field.name != 'model_path':
raise ValueError("Expected 'model_path' field")
return (await field.read()).decode()
async def _save_preview_file(self, model_path: str, preview_data: bytes, content_type: str) -> str:
"""Save preview file and return its path"""
base_name = os.path.splitext(os.path.basename(model_path))[0]
folder = os.path.dirname(model_path)
# Determine if content is video or image
if content_type.startswith('video/'):
# For videos, keep original format and use .mp4 extension
extension = '.mp4'
optimized_data = preview_data
else:
# For images, optimize and convert to WebP
optimized_data, _ = ExifUtils.optimize_image(
image_data=preview_data,
target_width=CARD_PREVIEW_WIDTH,
format='webp',
quality=85,
preserve_metadata=False
)
extension = '.webp' # Use .webp without .preview part
preview_path = os.path.join(folder, base_name + extension).replace(os.sep, '/')
with open(preview_path, 'wb') as f:
f.write(optimized_data)
return preview_path
async def _update_preview_metadata(self, model_path: str, preview_path: str):
"""Update preview path in metadata"""
metadata_path = os.path.splitext(model_path)[0] + '.metadata.json'
if os.path.exists(metadata_path):
try:
with open(metadata_path, 'r', encoding='utf-8') as f:
metadata = json.load(f)
# Update preview_url directly in the metadata dict
metadata['preview_url'] = preview_path
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata, f, indent=2, ensure_ascii=False)
except Exception as e:
logger.error(f"Error updating metadata: {e}")
async def fetch_all_civitai(self, request: web.Request) -> web.Response:
"""Fetch CivitAI metadata for all loras in the background"""
try:
@@ -365,10 +358,10 @@ class ApiRoutes:
versions = response.get('modelVersions', [])
model_type = response.get('type', '')
# Check model type - should be LORA
if model_type.lower() != 'lora':
# Check model type - should be LORA, LoCon, or DORA
if model_type.lower() not in VALID_LORA_TYPES:
return web.json_response({
'error': f"Model type mismatch. Expected LORA, got {model_type}"
'error': f"Model type mismatch. Expected LORA or LoCon, got {model_type}"
}, status=400)
# Check local availability for each version
@@ -487,7 +480,7 @@ class ApiRoutes:
logger.warning(f"Early access download failed: {error_message}")
return web.Response(
status=401, # Use 401 status code to match Civitai's response
text=f"Early Access Restriction: {error_message}"
text=error_message
)
return web.Response(status=500, text=error_message)
@@ -507,21 +500,6 @@ class ApiRoutes:
logger.error(f"Error downloading LoRA: {error_message}")
return web.Response(status=500, text=error_message)
async def update_settings(self, request: web.Request) -> web.Response:
"""Update application settings"""
try:
data = await request.json()
# Validate and update settings
if 'civitai_api_key' in data:
settings.set('civitai_api_key', data['civitai_api_key'])
if 'show_only_sfw' in data:
settings.set('show_only_sfw', data['show_only_sfw'])
return web.json_response({'success': True})
except Exception as e:
logger.error(f"Error updating settings: {e}", exc_info=True)
return web.Response(status=500, text=str(e))
async def move_model(self, request: web.Request) -> web.Response:
"""Handle model move request"""
@@ -603,8 +581,7 @@ class ApiRoutes:
metadata[key] = value
# Save updated metadata
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(file_path, metadata)
# Update cache
await self.scanner.update_single_model_cache(file_path, file_path, metadata)
@@ -789,11 +766,13 @@ class ApiRoutes:
# Check if we already have the description stored in metadata
description = None
tags = []
creator = {}
if file_path:
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
description = metadata.get('modelDescription')
tags = metadata.get('tags', [])
creator = metadata.get('creator', {})
# If description is not in metadata, fetch from CivitAI
if not description:
@@ -803,6 +782,7 @@ class ApiRoutes:
if (model_metadata):
description = model_metadata.get('description')
tags = model_metadata.get('tags', [])
creator = model_metadata.get('creator', {})
# Save the metadata to file if we have a file path and got metadata
if file_path:
@@ -812,17 +792,21 @@ class ApiRoutes:
metadata['modelDescription'] = description
metadata['tags'] = tags
# Ensure the civitai dict exists
if 'civitai' not in metadata:
metadata['civitai'] = {}
# Store creator in the civitai nested structure
metadata['civitai']['creator'] = creator
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata, f, indent=2, ensure_ascii=False)
logger.info(f"Saved model metadata to file for {file_path}")
await MetadataManager.save_metadata(file_path, metadata, True)
except Exception as e:
logger.error(f"Error saving model metadata: {e}")
return web.json_response({
'success': True,
'description': description or "<p>No model description available.</p>",
'tags': tags
'tags': tags,
'creator': creator
})
except Exception as e:
@@ -889,136 +873,257 @@ class ApiRoutes:
async def rename_lora(self, request: web.Request) -> web.Response:
"""Handle renaming a LoRA file and its associated files"""
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
return await ModelRouteUtils.handle_rename_model(request, self.scanner)
async def get_trigger_words(self, request: web.Request) -> web.Response:
"""Get trigger words for specified LoRA models"""
try:
json_data = await request.json()
lora_names = json_data.get("lora_names", [])
node_ids = json_data.get("node_ids", [])
all_trigger_words = []
for lora_name in lora_names:
_, trigger_words = await get_lora_info(lora_name)
all_trigger_words.extend(trigger_words)
# Format the trigger words
trigger_words_text = ",, ".join(all_trigger_words) if all_trigger_words else ""
# Send update to all connected trigger word toggle nodes
for node_id in node_ids:
PromptServer.instance.send_sync("trigger_word_update", {
"id": node_id,
"message": trigger_words_text
})
return web.json_response({"success": True})
except Exception as e:
logger.error(f"Error getting trigger words: {e}")
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
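
get_trigger_words aggregates trainedWords across the requested LoRAs and pushes the joined string to the listed nodes via PromptServer.send_sync. A hedged client sketch of the request shape; the body keys come from the handler above, but the route path and host are assumptions since the route registration is not shown in this diff:

import json, urllib.request

payload = {"lora_names": ["my_lora"], "node_ids": [12]}  # keys taken from the handler above
req = urllib.request.Request(
    "http://127.0.0.1:8188/api/loras/get-trigger-words",  # hypothetical path and host
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # {"success": true} on success
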
-            if self.download_manager is None:
-                self.download_manager = await ServiceRegistry.get_download_manager()
-            data = await request.json()
-            file_path = data.get('file_path')
-            new_file_name = data.get('new_file_name')
-            if not file_path or not new_file_name:
-                return web.json_response({
-                    'success': False,
-                    'error': 'File path and new file name are required'
-                }, status=400)
-            # Validate the new file name (no path separators or invalid characters)
-            invalid_chars = ['/', '\\', ':', '*', '?', '"', '<', '>', '|']
-            if any(char in new_file_name for char in invalid_chars):
-                return web.json_response({
-                    'success': False,
-                    'error': 'Invalid characters in file name'
-                }, status=400)
-            # Get the directory and current file name
-            target_dir = os.path.dirname(file_path)
-            old_file_name = os.path.splitext(os.path.basename(file_path))[0]
-            # Check if the target file already exists
-            new_file_path = os.path.join(target_dir, f"{new_file_name}.safetensors").replace(os.sep, '/')
-            if os.path.exists(new_file_path):
-                return web.json_response({
-                    'success': False,
-                    'error': 'A file with this name already exists'
-                }, status=400)
-            # Define the patterns for associated files
-            patterns = [
-                f"{old_file_name}.safetensors",  # Required
-                f"{old_file_name}.metadata.json",
-            ]
-            # Add all preview file extensions
-            for ext in PREVIEW_EXTENSIONS:
-                patterns.append(f"{old_file_name}{ext}")
-            # Find all matching files
-            existing_files = []
-            for pattern in patterns:
-                path = os.path.join(target_dir, pattern)
-                if os.path.exists(path):
-                    existing_files.append((path, pattern))
-            # Get the hash from the main file to update hash index
-            hash_value = None
-            metadata = None
-            metadata_path = os.path.join(target_dir, f"{old_file_name}.metadata.json")
-            if os.path.exists(metadata_path):
-                metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
-                hash_value = metadata.get('sha256')
-            # Rename all files
-            renamed_files = []
-            new_metadata_path = None
-            # Notify file monitor to ignore these events
-            main_file_path = os.path.join(target_dir, f"{old_file_name}.safetensors")
-            if os.path.exists(main_file_path):
-                # Get lora monitor through ServiceRegistry instead of download_manager
-                lora_monitor = await ServiceRegistry.get_lora_monitor()
-                if lora_monitor:
-                    # Add old and new paths to ignore list
-                    file_size = os.path.getsize(main_file_path)
-                    lora_monitor.handler.add_ignore_path(main_file_path, file_size)
-                    lora_monitor.handler.add_ignore_path(new_file_path, file_size)
-            for old_path, pattern in existing_files:
-                # Get the file extension like .safetensors or .metadata.json
-                ext = ModelRouteUtils.get_multipart_ext(pattern)
-                # Create the new path
-                new_path = os.path.join(target_dir, f"{new_file_name}{ext}").replace(os.sep, '/')
-                # Rename the file
-                os.rename(old_path, new_path)
-                renamed_files.append(new_path)
-                # Keep track of metadata path for later update
-                if ext == '.metadata.json':
-                    new_metadata_path = new_path
-            # Update the metadata file with new file name and paths
-            if new_metadata_path and metadata:
-                # Update file_name, file_path and preview_url in metadata
-                metadata['file_name'] = new_file_name
-                metadata['file_path'] = new_file_path
-                # Update preview_url if it exists
-                if 'preview_url' in metadata and metadata['preview_url']:
-                    old_preview = metadata['preview_url']
-                    ext = ModelRouteUtils.get_multipart_ext(old_preview)
-                    new_preview = os.path.join(target_dir, f"{new_file_name}{ext}").replace(os.sep, '/')
-                    metadata['preview_url'] = new_preview
-                # Save updated metadata
-                with open(new_metadata_path, 'w', encoding='utf-8') as f:
-                    json.dump(metadata, f, indent=2, ensure_ascii=False)
-            # Update the scanner cache
-            if metadata:
-                await self.scanner.update_single_model_cache(file_path, new_file_path, metadata)
-            # Update recipe files and cache if hash is available
-            if hash_value:
-                recipe_scanner = await ServiceRegistry.get_recipe_scanner()
-                recipes_updated, cache_updated = await recipe_scanner.update_lora_filename_by_hash(hash_value, new_file_name)
-                logger.info(f"Updated {recipes_updated} recipe files and {cache_updated} cache entries for renamed LoRA")
-            return web.json_response({
-                'success': True,
-                'new_file_path': new_file_path,
-                'renamed_files': renamed_files,
-                'reload_required': False
-            })
-        except Exception as e:
-            logger.error(f"Error renaming LoRA: {e}", exc_info=True)
-            return web.json_response({
-                'success': False,
-                'error': str(e)
-            }, status=500)
+    async def get_letter_counts(self, request: web.Request) -> web.Response:
+        """Get count of loras for each letter of the alphabet"""
+        try:
+            if self.scanner is None:
+                self.scanner = await ServiceRegistry.get_lora_scanner()
+            # Get letter counts
+            letter_counts = await self.scanner.get_letter_counts()
+            return web.json_response({
+                'success': True,
+                'letter_counts': letter_counts
+            })
+        except Exception as e:
+            logger.error(f"Error getting letter counts: {e}")
+            return web.json_response({
+                'success': False,
+                'error': str(e)
+            }, status=500)
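
The per-letter counts come from the scanner, whose implementation is outside this diff. A rough sketch of the kind of bucketing it might perform over cached file names (the '#' bucket for digit-leading names is an assumption):

from collections import Counter

def letter_counts(file_names):
    """Bucket names by first character: 'a'-'z' plus an assumed '#' bucket for digits."""
    counts = Counter()
    for name in file_names:
        first = name[:1].lower()
        counts['#' if first.isdigit() else first] += 1
    return dict(counts)

print(letter_counts(["alpha.safetensors", "Beta", "42_style"]))
# {'a': 1, 'b': 1, '#': 1}
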
async def get_lora_notes(self, request: web.Request) -> web.Response:
"""Get notes for a specific LoRA file"""
try:
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
# Get lora file name from query parameters
lora_name = request.query.get('name')
if not lora_name:
return web.Response(text='Lora file name is required', status=400)
# Get cache data
cache = await self.scanner.get_cached_data()
# Search for the lora in cache data
for lora in cache.raw_data:
file_name = lora['file_name']
if file_name == lora_name:
notes = lora.get('notes', '')
return web.json_response({
'success': True,
'notes': notes
})
# If lora not found
return web.json_response({
'success': False,
'error': 'LoRA not found in cache'
}, status=404)
except Exception as e:
logger.error(f"Error getting lora notes: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_lora_trigger_words(self, request: web.Request) -> web.Response:
"""Get trigger words for a specific LoRA file"""
try:
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
# Get lora file name from query parameters
lora_name = request.query.get('name')
if not lora_name:
return web.Response(text='Lora file name is required', status=400)
# Get cache data
cache = await self.scanner.get_cached_data()
# Search for the lora in cache data
for lora in cache.raw_data:
file_name = lora['file_name']
if file_name == lora_name:
# Get trigger words from civitai data
civitai_data = lora.get('civitai', {})
trigger_words = civitai_data.get('trainedWords', [])
return web.json_response({
'success': True,
'trigger_words': trigger_words
})
# If lora not found
return web.json_response({
'success': False,
'error': 'LoRA not found in cache'
}, status=404)
except Exception as e:
logger.error(f"Error getting lora trigger words: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def find_duplicate_loras(self, request: web.Request) -> web.Response:
"""Find loras with duplicate SHA256 hashes"""
try:
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
# Get duplicate hashes from hash index
duplicates = self.scanner._hash_index.get_duplicate_hashes()
# Format the response
result = []
cache = await self.scanner.get_cached_data()
for sha256, paths in duplicates.items():
group = {
"hash": sha256,
"models": []
}
# Find matching models for each duplicate path
for path in paths:
model = next((m for m in cache.raw_data if m['file_path'] == path), None)
if model:
group["models"].append(self._format_lora_response(model))
# Add the primary model too
primary_path = self.scanner._hash_index.get_path(sha256)
if primary_path and primary_path not in paths:
primary_model = next((m for m in cache.raw_data if m['file_path'] == primary_path), None)
if primary_model:
group["models"].insert(0, self._format_lora_response(primary_model))
if len(group["models"]) > 1: # Only include if we found multiple models
result.append(group)
return web.json_response({
"success": True,
"duplicates": result,
"count": len(result)
})
except Exception as e:
logger.error(f"Error finding duplicate loras: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
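
Both duplicate finders lean on _hash_index helpers (get_duplicate_hashes, get_duplicate_filenames) that this diff does not define. A hypothetical sketch of an index that could expose both views:

from collections import defaultdict

class HashIndexSketch:
    """Hypothetical stand-in for the scanner's _hash_index."""
    def __init__(self):
        self._by_hash = defaultdict(list)   # sha256 -> [file_path, ...]
        self._by_name = defaultdict(list)   # file_name -> [file_path, ...]

    def add(self, sha256, file_path, file_name):
        self._by_hash[sha256].append(file_path)
        self._by_name[file_name].append(file_path)

    def get_duplicate_hashes(self):
        # Only hashes backed by more than one file are duplicates
        return {h: p for h, p in self._by_hash.items() if len(p) > 1}

    def get_duplicate_filenames(self):
        # Same file name in different folders is a conflict
        return {n: p for n, p in self._by_name.items() if len(p) > 1}
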
async def find_filename_conflicts(self, request: web.Request) -> web.Response:
"""Find loras with conflicting filenames"""
try:
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
# Get duplicate filenames from hash index
duplicates = self.scanner._hash_index.get_duplicate_filenames()
# Format the response
result = []
cache = await self.scanner.get_cached_data()
for filename, paths in duplicates.items():
group = {
"filename": filename,
"models": []
}
# Find matching models for each path
for path in paths:
model = next((m for m in cache.raw_data if m['file_path'] == path), None)
if model:
group["models"].append(self._format_lora_response(model))
# Find the model from the main index too
hash_val = self.scanner._hash_index.get_hash_by_filename(filename)
if hash_val:
main_path = self.scanner._hash_index.get_path(hash_val)
if main_path and main_path not in paths:
main_model = next((m for m in cache.raw_data if m['file_path'] == main_path), None)
if main_model:
group["models"].insert(0, self._format_lora_response(main_model))
if group["models"]: # Only include if we found models
result.append(group)
return web.json_response({
"success": True,
"conflicts": result,
"count": len(result)
})
except Exception as e:
logger.error(f"Error finding filename conflicts: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def bulk_delete_loras(self, request: web.Request) -> web.Response:
"""Handle bulk deletion of lora models"""
try:
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
return await ModelRouteUtils.handle_bulk_delete_models(request, self.scanner)
except Exception as e:
logger.error(f"Error in bulk delete loras: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def relink_civitai(self, request: web.Request) -> web.Response:
"""Handle CivitAI metadata re-linking request by model version ID for LoRAs"""
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
return await ModelRouteUtils.handle_relink_civitai(request, self.scanner)
async def verify_duplicates(self, request: web.Request) -> web.Response:
"""Handle verification of duplicate lora hashes"""
if self.scanner is None:
self.scanner = await ServiceRegistry.get_lora_scanner()
return await ModelRouteUtils.handle_verify_duplicates(request, self.scanner)

@@ -7,6 +7,7 @@ import asyncio
from ..utils.routes_common import ModelRouteUtils
from ..utils.constants import NSFW_LEVELS
from ..utils.metadata_manager import MetadataManager
from ..services.websocket_manager import ws_manager
from ..services.service_registry import ServiceRegistry
from ..config import config
@@ -49,14 +50,27 @@ class CheckpointsRoutes:
# Add new routes for model management similar to LoRA routes
app.router.add_post('/api/checkpoints/delete', self.delete_model)
app.router.add_post('/api/checkpoints/exclude', self.exclude_model) # Add new exclude endpoint
app.router.add_post('/api/checkpoints/fetch-civitai', self.fetch_civitai)
app.router.add_post('/api/checkpoints/relink-civitai', self.relink_civitai) # Add new relink endpoint
app.router.add_post('/api/checkpoints/replace-preview', self.replace_preview)
app.router.add_post('/api/checkpoints/download', self.download_checkpoint)
app.router.add_post('/api/checkpoints/save-metadata', self.save_metadata) # Add new route
app.router.add_post('/api/checkpoints/rename', self.rename_checkpoint) # Add new rename endpoint
# Add new WebSocket endpoint for checkpoint progress
app.router.add_get('/ws/checkpoint-progress', ws_manager.handle_checkpoint_connection)
# Add new routes for finding duplicates and filename conflicts
app.router.add_get('/api/checkpoints/find-duplicates', self.find_duplicate_checkpoints)
app.router.add_get('/api/checkpoints/find-filename-conflicts', self.find_filename_conflicts)
# Add new endpoint for bulk deleting checkpoints
app.router.add_post('/api/checkpoints/bulk-delete', self.bulk_delete_checkpoints)
# Add new endpoint for verifying duplicates
app.router.add_post('/api/checkpoints/verify-duplicates', self.verify_duplicates)
async def get_checkpoints(self, request):
"""Get paginated checkpoint data"""
try:
@@ -69,6 +83,7 @@ class CheckpointsRoutes:
fuzzy_search = request.query.get('fuzzy_search', 'false').lower() == 'true'
base_models = request.query.getall('base_model', [])
tags = request.query.getall('tag', [])
favorites_only = request.query.get('favorites_only', 'false').lower() == 'true' # Add favorites_only parameter
# Process search options
search_options = {
@@ -101,7 +116,8 @@ class CheckpointsRoutes:
base_models=base_models,
tags=tags,
search_options=search_options,
hash_filters=hash_filters
hash_filters=hash_filters,
favorites_only=favorites_only # Pass favorites_only parameter
)
# Format response items
@@ -123,7 +139,8 @@ class CheckpointsRoutes:
async def get_paginated_data(self, page, page_size, sort_by='name',
folder=None, search=None, fuzzy_search=False,
base_models=None, tags=None,
search_options=None, hash_filters=None):
search_options=None, hash_filters=None,
favorites_only=False): # Add favorites_only parameter with default False
"""Get paginated and filtered checkpoint data"""
cache = await self.scanner.get_cached_data()
@@ -181,6 +198,13 @@ class CheckpointsRoutes:
if not cp.get('preview_nsfw_level') or cp.get('preview_nsfw_level') < NSFW_LEVELS['R']
]
# Apply favorites filtering if enabled
if favorites_only:
filtered_data = [
cp for cp in filtered_data
if cp.get('favorite', False) is True
]
# Apply folder filtering
if folder is not None:
if search_options.get('recursive', False):
@@ -276,6 +300,7 @@ class CheckpointsRoutes:
"from_civitai": checkpoint.get("from_civitai", True),
"notes": checkpoint.get("notes", ""),
"model_type": checkpoint.get("model_type", "checkpoint"),
"favorite": checkpoint.get("favorite", False),
"civitai": ModelRouteUtils.filter_civitai_data(checkpoint.get("civitai", {}))
}
@@ -408,7 +433,10 @@ class CheckpointsRoutes:
async def scan_checkpoints(self, request):
"""Force a rescan of checkpoint files"""
try:
await self.scanner.get_cached_data(force_refresh=True)
# Get the full_rebuild parameter and convert to bool, default to False
full_rebuild = request.query.get('full_rebuild', 'false').lower() == 'true'
await self.scanner.get_cached_data(force_refresh=True, rebuild_cache=full_rebuild)
return web.json_response({"status": "success", "message": "Checkpoint scan completed"})
except Exception as e:
logger.error(f"Error in scan_checkpoints: {e}", exc_info=True)
@@ -418,7 +446,7 @@ class CheckpointsRoutes:
"""Get detailed information for a specific checkpoint by name"""
try:
name = request.match_info.get('name', '')
checkpoint_info = await self.scanner.get_checkpoint_info_by_name(name)
checkpoint_info = await self.scanner.get_model_info_by_name(name)
if checkpoint_info:
return web.json_response(checkpoint_info)
@@ -488,10 +516,27 @@ class CheckpointsRoutes:
async def delete_model(self, request: web.Request) -> web.Response:
"""Handle checkpoint model deletion request"""
return await ModelRouteUtils.handle_delete_model(request, self.scanner)
async def exclude_model(self, request: web.Request) -> web.Response:
"""Handle checkpoint model exclusion request"""
return await ModelRouteUtils.handle_exclude_model(request, self.scanner)
async def fetch_civitai(self, request: web.Request) -> web.Response:
"""Handle CivitAI metadata fetch request for checkpoints"""
return await ModelRouteUtils.handle_fetch_civitai(request, self.scanner)
response = await ModelRouteUtils.handle_fetch_civitai(request, self.scanner)
# If successful, format the metadata before returning
if response.status == 200:
data = json.loads(response.body.decode('utf-8'))
if data.get("success") and data.get("metadata"):
formatted_metadata = self._format_checkpoint_response(data["metadata"])
return web.json_response({
"success": True,
"metadata": formatted_metadata
})
# Otherwise, return the original response
return response
async def replace_preview(self, request: web.Request) -> web.Response:
"""Handle preview image replacement for checkpoints"""
@@ -607,8 +652,7 @@ class CheckpointsRoutes:
metadata.update(metadata_updates)
# Save updated metadata
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(file_path, metadata)
# Update cache
await self.scanner.update_single_model_cache(file_path, file_path, metadata)
@@ -642,7 +686,7 @@ class CheckpointsRoutes:
model_type = response.get('type', '')
# Check model type - should be Checkpoint
if model_type.lower() != 'checkpoint':
if (model_type.lower() != 'checkpoint'):
return web.json_response({
'error': f"Model type mismatch. Expected Checkpoint, got {model_type}"
}, status=400)
@@ -676,3 +720,124 @@ class CheckpointsRoutes:
except Exception as e:
logger.error(f"Error fetching checkpoint model versions: {e}")
return web.Response(status=500, text=str(e))
async def find_duplicate_checkpoints(self, request: web.Request) -> web.Response:
"""Find checkpoints with duplicate SHA256 hashes"""
try:
if self.scanner is None:
self.scanner = await ServiceRegistry.get_checkpoint_scanner()
# Get duplicate hashes from hash index
duplicates = self.scanner._hash_index.get_duplicate_hashes()
# Format the response
result = []
cache = await self.scanner.get_cached_data()
for sha256, paths in duplicates.items():
group = {
"hash": sha256,
"models": []
}
# Find matching models for each path
for path in paths:
model = next((m for m in cache.raw_data if m['file_path'] == path), None)
if model:
group["models"].append(self._format_checkpoint_response(model))
# Add the primary model too
primary_path = self.scanner._hash_index.get_path(sha256)
if primary_path and primary_path not in paths:
primary_model = next((m for m in cache.raw_data if m['file_path'] == primary_path), None)
if primary_model:
group["models"].insert(0, self._format_checkpoint_response(primary_model))
if len(group["models"]) > 1: # Only include if we found multiple models
result.append(group)
return web.json_response({
"success": True,
"duplicates": result,
"count": len(result)
})
except Exception as e:
logger.error(f"Error finding duplicate checkpoints: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def find_filename_conflicts(self, request: web.Request) -> web.Response:
"""Find checkpoints with conflicting filenames"""
try:
if self.scanner is None:
self.scanner = await ServiceRegistry.get_checkpoint_scanner()
# Get duplicate filenames from hash index
duplicates = self.scanner._hash_index.get_duplicate_filenames()
# Format the response
result = []
cache = await self.scanner.get_cached_data()
for filename, paths in duplicates.items():
group = {
"filename": filename,
"models": []
}
# Find matching models for each path
for path in paths:
model = next((m for m in cache.raw_data if m['file_path'] == path), None)
if model:
group["models"].append(self._format_checkpoint_response(model))
# Find the model from the main index too
hash_val = self.scanner._hash_index.get_hash_by_filename(filename)
if hash_val:
main_path = self.scanner._hash_index.get_path(hash_val)
if main_path and main_path not in paths:
main_model = next((m for m in cache.raw_data if m['file_path'] == main_path), None)
if main_model:
group["models"].insert(0, self._format_checkpoint_response(main_model))
if group["models"]:
result.append(group)
return web.json_response({
"success": True,
"conflicts": result,
"count": len(result)
})
except Exception as e:
logger.error(f"Error finding filename conflicts: {e}", exc_info=True)
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
async def bulk_delete_checkpoints(self, request: web.Request) -> web.Response:
"""Handle bulk deletion of checkpoint models"""
try:
if self.scanner is None:
self.scanner = await ServiceRegistry.get_checkpoint_scanner()
return await ModelRouteUtils.handle_bulk_delete_models(request, self.scanner)
except Exception as e:
logger.error(f"Error in bulk delete checkpoints: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def relink_civitai(self, request: web.Request) -> web.Response:
"""Handle CivitAI metadata re-linking request by model version ID for checkpoints"""
return await ModelRouteUtils.handle_relink_civitai(request, self.scanner)
async def verify_duplicates(self, request: web.Request) -> web.Response:
"""Handle verification of duplicate checkpoint hashes"""
return await ModelRouteUtils.handle_verify_duplicates(request, self.scanner)
async def rename_checkpoint(self, request: web.Request) -> web.Response:
"""Handle renaming a checkpoint file and its associated files"""
return await ModelRouteUtils.handle_rename_model(request, self.scanner)

@@ -0,0 +1,68 @@
import logging
from ..utils.example_images_download_manager import DownloadManager
from ..utils.example_images_processor import ExampleImagesProcessor
from ..utils.example_images_metadata import MetadataUpdater
from ..utils.example_images_file_manager import ExampleImagesFileManager
logger = logging.getLogger(__name__)
class ExampleImagesRoutes:
"""Routes for example images related functionality"""
@staticmethod
def setup_routes(app):
"""Register example images routes"""
app.router.add_post('/api/download-example-images', ExampleImagesRoutes.download_example_images)
app.router.add_post('/api/import-example-images', ExampleImagesRoutes.import_example_images)
app.router.add_get('/api/example-images-status', ExampleImagesRoutes.get_example_images_status)
app.router.add_post('/api/pause-example-images', ExampleImagesRoutes.pause_example_images)
app.router.add_post('/api/resume-example-images', ExampleImagesRoutes.resume_example_images)
app.router.add_post('/api/open-example-images-folder', ExampleImagesRoutes.open_example_images_folder)
app.router.add_get('/api/example-image-files', ExampleImagesRoutes.get_example_image_files)
app.router.add_get('/api/has-example-images', ExampleImagesRoutes.has_example_images)
app.router.add_post('/api/delete-example-image', ExampleImagesRoutes.delete_example_image)
@staticmethod
async def download_example_images(request):
"""Download example images for models from Civitai"""
return await DownloadManager.start_download(request)
@staticmethod
async def get_example_images_status(request):
"""Get the current status of example images download"""
return await DownloadManager.get_status(request)
@staticmethod
async def pause_example_images(request):
"""Pause the example images download"""
return await DownloadManager.pause_download(request)
@staticmethod
async def resume_example_images(request):
"""Resume the example images download"""
return await DownloadManager.resume_download(request)
@staticmethod
async def open_example_images_folder(request):
"""Open the example images folder for a specific model"""
return await ExampleImagesFileManager.open_folder(request)
@staticmethod
async def get_example_image_files(request):
"""Get list of example image files for a specific model"""
return await ExampleImagesFileManager.get_files(request)
@staticmethod
async def import_example_images(request):
"""Import local example images for a model"""
return await ExampleImagesProcessor.import_images(request)
@staticmethod
async def has_example_images(request):
"""Check if example images folder exists and is not empty for a model"""
return await ExampleImagesFileManager.has_images(request)
@staticmethod
async def delete_example_image(request):
"""Delete a custom example image for a model"""
return await ExampleImagesProcessor.delete_custom_image(request)

@@ -70,8 +70,7 @@ class LoraRoutes:
# It's initializing if the cache object doesn't exist yet,
# OR if the scanner explicitly says it's initializing (background task running).
is_initializing = (
self.scanner._cache is None or
(hasattr(self.scanner, '_is_initializing') and self.scanner._is_initializing)
self.scanner._cache is None or self.scanner.is_initializing()
)
if is_initializing:

py/routes/misc_routes.py (new file, 582 lines)

@@ -0,0 +1,582 @@
import logging
import os
import sys
import threading
import asyncio
from server import PromptServer # type: ignore
from aiohttp import web
from ..services.settings_manager import settings
from ..utils.usage_stats import UsageStats
from ..utils.lora_metadata import extract_trained_words
from ..config import config
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS, NODE_TYPES, DEFAULT_NODE_COLOR
import re
logger = logging.getLogger(__name__)
standalone_mode = 'nodes' not in sys.modules
# Node registry for tracking active workflow nodes
class NodeRegistry:
"""Thread-safe registry for tracking Lora nodes in active workflows"""
def __init__(self):
self._lock = threading.RLock()
self._nodes = {} # node_id -> node_info
self._registry_updated = threading.Event()
def register_nodes(self, nodes):
"""Register multiple nodes at once, replacing existing registry"""
with self._lock:
# Clear existing registry
self._nodes.clear()
# Register all new nodes
for node in nodes:
node_id = node['node_id']
node_type = node.get('type', '')
# Convert node type name to integer
type_id = NODE_TYPES.get(node_type, 0) # 0 for unknown types
# Handle null bgcolor with default color
bgcolor = node.get('bgcolor')
if bgcolor is None:
bgcolor = DEFAULT_NODE_COLOR
self._nodes[node_id] = {
'id': node_id,
'bgcolor': bgcolor,
'title': node.get('title'),
'type': type_id,
'type_name': node_type
}
logger.debug(f"Registered {len(nodes)} nodes in registry")
# Signal that registry has been updated
self._registry_updated.set()
def get_registry(self):
"""Get current registry information"""
with self._lock:
return {
'nodes': dict(self._nodes), # Return a copy
'node_count': len(self._nodes)
}
def clear_registry(self):
"""Clear the entire registry"""
with self._lock:
self._nodes.clear()
logger.info("Node registry cleared")
def wait_for_update(self, timeout=1.0):
"""Wait for registry update with timeout"""
self._registry_updated.clear()
return self._registry_updated.wait(timeout)
# Global registry instance
node_registry = NodeRegistry()
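
register_nodes replaces the registry wholesale and sets a threading.Event, which lets a concurrent get_registry call block until the frontend has re-registered. A small usage sketch against the NodeRegistry instance defined above (the timer stands in for the frontend's POST):

import threading

def frontend_reply():
    node_registry.register_nodes([
        {'node_id': 12, 'type': 'Lora Loader (LoraManager)', 'bgcolor': '#535', 'title': 'Loader'}
    ])

threading.Timer(0.1, frontend_reply).start()      # simulate the frontend posting back
if node_registry.wait_for_update(timeout=1.0):    # clears the event, then blocks for the reply
    print(node_registry.get_registry()['node_count'])  # 1
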
class MiscRoutes:
"""Miscellaneous routes for various utility functions"""
@staticmethod
def setup_routes(app):
"""Register miscellaneous routes"""
app.router.add_post('/api/settings', MiscRoutes.update_settings)
# Add new route for clearing cache
app.router.add_post('/api/clear-cache', MiscRoutes.clear_cache)
# Usage stats routes
app.router.add_post('/api/update-usage-stats', MiscRoutes.update_usage_stats)
app.router.add_get('/api/get-usage-stats', MiscRoutes.get_usage_stats)
# Lora code update endpoint
app.router.add_post('/api/update-lora-code', MiscRoutes.update_lora_code)
# Add new route for getting trained words
app.router.add_get('/api/trained-words', MiscRoutes.get_trained_words)
# Add new route for getting model example files
app.router.add_get('/api/model-example-files', MiscRoutes.get_model_example_files)
# Node registry endpoints
app.router.add_post('/api/register-nodes', MiscRoutes.register_nodes)
app.router.add_get('/api/get-registry', MiscRoutes.get_registry)
@staticmethod
async def clear_cache(request):
"""Clear all cache files from the cache folder"""
try:
# Get the cache folder path (relative to project directory)
project_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
cache_folder = os.path.join(project_dir, 'cache')
# Check if cache folder exists
if not os.path.exists(cache_folder):
logger.info("Cache folder does not exist, nothing to clear")
return web.json_response({'success': True, 'message': 'No cache folder found'})
# Get list of cache files before deleting for reporting
cache_files = [f for f in os.listdir(cache_folder) if os.path.isfile(os.path.join(cache_folder, f))]
deleted_files = []
# Delete each .msgpack file in the cache folder
for filename in cache_files:
if filename.endswith('.msgpack'):
file_path = os.path.join(cache_folder, filename)
try:
os.remove(file_path)
deleted_files.append(filename)
logger.info(f"Deleted cache file: {filename}")
except Exception as e:
logger.error(f"Failed to delete {filename}: {e}")
return web.json_response({
'success': False,
'error': f"Failed to delete {filename}: {str(e)}"
}, status=500)
return web.json_response({
'success': True,
'message': f"Successfully cleared {len(deleted_files)} cache files",
'deleted_files': deleted_files
})
except Exception as e:
logger.error(f"Error clearing cache files: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def update_settings(request):
"""Update application settings"""
try:
data = await request.json()
# Validate and update settings
for key, value in data.items():
# Special handling for example_images_path - verify path exists
if key == 'example_images_path' and value:
if not os.path.exists(value):
return web.json_response({
'success': False,
'error': f"Path does not exist: {value}"
})
# Path changed - server restart required for new path to take effect
old_path = settings.get('example_images_path')
if old_path != value:
logger.info(f"Example images path changed to {value} - server restart required")
# Save to settings
settings.set(key, value)
return web.json_response({'success': True})
except Exception as e:
logger.error(f"Error updating settings: {e}", exc_info=True)
return web.Response(status=500, text=str(e))
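
Settings updates are now a generic key/value loop with an existence check for example_images_path. A hedged client sketch against the /api/settings route registered above (host, port, and values are placeholders):

import json, urllib.request

payload = {"civitai_api_key": "xxxx", "example_images_path": "/data/example_images"}
req = urllib.request.Request(
    "http://127.0.0.1:8188/api/settings",  # path from setup_routes above; host assumed
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req)))  # {'success': True} on success
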
@staticmethod
async def update_usage_stats(request):
"""
Update usage statistics based on a prompt_id
Expects a JSON body with:
{
"prompt_id": "string"
}
"""
try:
# Parse the request body
data = await request.json()
prompt_id = data.get('prompt_id')
if not prompt_id:
return web.json_response({
'success': False,
'error': 'Missing prompt_id'
}, status=400)
# Call the UsageStats to process this prompt_id synchronously
usage_stats = UsageStats()
await usage_stats.process_execution(prompt_id)
return web.json_response({
'success': True
})
except Exception as e:
logger.error(f"Failed to update usage stats: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def get_usage_stats(request):
"""Get current usage statistics"""
try:
usage_stats = UsageStats()
stats = await usage_stats.get_stats()
# Add version information to help clients handle format changes
stats_response = {
'success': True,
'data': stats,
'format_version': 2 # Indicate this is the new format with history
}
return web.json_response(stats_response)
except Exception as e:
logger.error(f"Failed to get usage stats: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def update_lora_code(request):
"""
Update Lora code in ComfyUI nodes
Expects a JSON body with:
{
"node_ids": [123, 456], # Optional - List of node IDs to update (for browser mode)
"lora_code": "<lora:modelname:1.0>", # The Lora code to send
"mode": "append" # or "replace" - whether to append or replace existing code
}
"""
try:
# Parse the request body
data = await request.json()
node_ids = data.get('node_ids')
lora_code = data.get('lora_code', '')
mode = data.get('mode', 'append')
if not lora_code:
return web.json_response({
'success': False,
'error': 'Missing lora_code parameter'
}, status=400)
results = []
# Desktop mode: no specific node_ids provided
if node_ids is None:
try:
# Send broadcast message with id=-1 to all Lora Loader nodes
PromptServer.instance.send_sync("lora_code_update", {
"id": -1,
"lora_code": lora_code,
"mode": mode
})
results.append({
'node_id': 'broadcast',
'success': True
})
except Exception as e:
logger.error(f"Error broadcasting lora code: {e}")
results.append({
'node_id': 'broadcast',
'success': False,
'error': str(e)
})
else:
# Browser mode: send to specific nodes
for node_id in node_ids:
try:
# Send the message to the frontend
PromptServer.instance.send_sync("lora_code_update", {
"id": node_id,
"lora_code": lora_code,
"mode": mode
})
results.append({
'node_id': node_id,
'success': True
})
except Exception as e:
logger.error(f"Error sending lora code to node {node_id}: {e}")
results.append({
'node_id': node_id,
'success': False,
'error': str(e)
})
return web.json_response({
'success': True,
'results': results
})
except Exception as e:
logger.error(f"Failed to update lora code: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
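
The handler treats a missing node_ids as a desktop-mode broadcast (id -1) and a list as targeted browser-mode updates. A hedged client sketch of both request shapes; the /api/update-lora-code path comes from setup_routes above, while the host and port are assumptions:

import json, urllib.request

def post(url, body):
    req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

base = "http://127.0.0.1:8188"  # assumed ComfyUI address
# Broadcast to every Lora Loader node (desktop mode):
post(base + "/api/update-lora-code", {"lora_code": "<lora:modelname:1.0>", "mode": "append"})
# Target specific nodes (browser mode):
post(base + "/api/update-lora-code", {"node_ids": [12, 34], "lora_code": "<lora:modelname:1.0>", "mode": "replace"})
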
@staticmethod
async def get_trained_words(request):
"""
Get trained words from a safetensors file, sorted by frequency
Expects a query parameter:
file_path: Path to the safetensors file
"""
try:
# Get file path from query parameters
file_path = request.query.get('file_path')
if not file_path:
return web.json_response({
'success': False,
'error': 'Missing file_path parameter'
}, status=400)
# Check if file exists and is a safetensors file
if not os.path.exists(file_path):
return web.json_response({
'success': False,
'error': f"File not found: {file_path}"
}, status=404)
if not file_path.lower().endswith('.safetensors'):
return web.json_response({
'success': False,
'error': 'File is not a safetensors file'
}, status=400)
# Extract trained words and class_tokens
trained_words, class_tokens = await extract_trained_words(file_path)
# Return result with both trained words and class tokens
return web.json_response({
'success': True,
'trained_words': trained_words,
'class_tokens': class_tokens
})
except Exception as e:
logger.error(f"Failed to get trained words: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def get_model_example_files(request):
"""
Get list of example image files for a specific model based on file path
Expects:
- file_path in query parameters
Returns:
- List of image files with their paths as static URLs
"""
try:
# Get the model file path from query parameters
file_path = request.query.get('file_path')
if not file_path:
return web.json_response({
'success': False,
'error': 'Missing file_path parameter'
}, status=400)
# Extract directory and base filename
model_dir = os.path.dirname(file_path)
model_filename = os.path.basename(file_path)
model_name = os.path.splitext(model_filename)[0]
# Check if the directory exists
if not os.path.exists(model_dir):
return web.json_response({
'success': False,
'error': 'Model directory not found',
'files': []
}, status=404)
# Look for files matching the pattern modelname.example.<index>.<ext>
files = []
pattern = f"{model_name}.example."
for file in os.listdir(model_dir):
file_lower = file.lower()
if file_lower.startswith(pattern.lower()):
file_full_path = os.path.join(model_dir, file)
if os.path.isfile(file_full_path):
# Check if the file is a supported media file
file_ext = os.path.splitext(file)[1].lower()
if (file_ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or
file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']):
# Extract the index from the filename
try:
# Extract the part after '.example.' and before file extension
index_part = file[len(pattern):].split('.')[0]
# Try to parse it as an integer
index = int(index_part)
except (ValueError, IndexError):
# If we can't parse the index, use infinity to sort at the end
index = float('inf')
# Convert file path to static URL
static_url = config.get_preview_static_url(file_full_path)
files.append({
'name': file,
'path': static_url,
'extension': file_ext,
'is_video': file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos'],
'index': index
})
# Sort files by their index for consistent ordering
files.sort(key=lambda x: x['index'])
# Remove the index field as it's only used for sorting
for file in files:
file.pop('index', None)
return web.json_response({
'success': True,
'files': files
})
except Exception as e:
logger.error(f"Failed to get model example files: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
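
Example files follow the pattern <model>.example.<index>.<ext>, and the handler sorts by the parsed index so ordering stays stable. A small sketch of that parse, mirroring the try/except above:

import os

def example_index(filename, model_name):
    """Parse <model>.example.<index>.<ext>; unparsable indexes sort last."""
    prefix = f"{model_name}.example."
    try:
        return int(filename[len(prefix):].split('.')[0])
    except (ValueError, IndexError):
        return float('inf')

files = ["m.example.2.webp", "m.example.0.png", "m.example.final.mp4"]
print(sorted(files, key=lambda f: example_index(f, "m")))
# ['m.example.0.png', 'm.example.2.webp', 'm.example.final.mp4']
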
@staticmethod
async def register_nodes(request):
"""
Register multiple Lora nodes at once
Expects a JSON body with:
{
"nodes": [
{
"node_id": 123,
"bgcolor": "#535",
"title": "Lora Loader (LoraManager)"
},
...
]
}
"""
try:
data = await request.json()
# Validate required fields
nodes = data.get('nodes', [])
if not isinstance(nodes, list):
return web.json_response({
'success': False,
'error': 'nodes must be a list'
}, status=400)
# Validate each node
for i, node in enumerate(nodes):
if not isinstance(node, dict):
return web.json_response({
'success': False,
'error': f'Node {i} must be an object'
}, status=400)
node_id = node.get('node_id')
if node_id is None:
return web.json_response({
'success': False,
'error': f'Node {i} missing node_id parameter'
}, status=400)
# Validate node_id is an integer
try:
node['node_id'] = int(node_id)
except (ValueError, TypeError):
return web.json_response({
'success': False,
'error': f'Node {i} node_id must be an integer'
}, status=400)
# Register all nodes
node_registry.register_nodes(nodes)
return web.json_response({
'success': True,
'message': f'{len(nodes)} nodes registered successfully'
})
except Exception as e:
logger.error(f"Failed to register nodes: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def get_registry(request):
"""Get current node registry information by refreshing from frontend"""
try:
# Check if running in standalone mode
if standalone_mode:
logger.warning("Registry refresh not available in standalone mode")
return web.json_response({
'success': False,
'error': 'Standalone Mode Active',
'message': 'Cannot interact with ComfyUI in standalone mode.'
}, status=503)
# Send message to frontend to refresh registry
try:
PromptServer.instance.send_sync("lora_registry_refresh", {})
logger.debug("Sent registry refresh request to frontend")
except Exception as e:
logger.error(f"Failed to send registry refresh message: {e}")
return web.json_response({
'success': False,
'error': 'Communication Error',
'message': f'Failed to communicate with ComfyUI frontend: {str(e)}'
}, status=500)
# Wait for registry update with timeout
def wait_for_registry():
return node_registry.wait_for_update(timeout=1.0)
# Run the wait in a thread to avoid blocking the event loop
loop = asyncio.get_event_loop()
registry_updated = await loop.run_in_executor(None, wait_for_registry)
if not registry_updated:
logger.warning("Registry refresh timeout after 1 second")
return web.json_response({
'success': False,
'error': 'Timeout Error',
'message': 'Registry refresh timeout - ComfyUI frontend may not be responsive'
}, status=408)
# Get updated registry
registry_info = node_registry.get_registry()
return web.json_response({
'success': True,
'data': registry_info
})
except Exception as e:
logger.error(f"Failed to get registry: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': 'Internal Error',
'message': str(e)
}, status=500)
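
wait_for_update blocks a worker thread on a threading.Event, so get_registry pushes it through run_in_executor to keep the aiohttp event loop responsive. A minimal, self-contained illustration of that pattern:

import asyncio
import threading

def blocking_wait(event):
    return event.wait(timeout=1.0)  # would stall the event loop if called directly

async def main():
    event = threading.Event()
    threading.Timer(0.2, event.set).start()  # simulated frontend reply
    loop = asyncio.get_running_loop()
    ok = await loop.run_in_executor(None, blocking_wait, event)
    print("updated:", ok)  # updated: True

asyncio.run(main())
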

@@ -1,5 +1,6 @@
import os
import time
import base64
import numpy as np
from PIL import Image
import torch
@@ -10,16 +11,25 @@ from typing import Dict
import tempfile
import json
import asyncio
import sys
from ..utils.exif_utils import ExifUtils
from ..utils.recipe_parsers import RecipeParserFactory
from ..recipes import RecipeParserFactory
from ..utils.constants import CARD_PREVIEW_WIDTH
from ..config import config
from ..metadata_collector import get_metadata # Add MetadataCollector import
from ..metadata_collector.metadata_processor import MetadataProcessor # Add MetadataProcessor import
# Check if running in standalone mode
standalone_mode = 'nodes' not in sys.modules
from ..utils.utils import download_civitai_image
from ..services.service_registry import ServiceRegistry # Add ServiceRegistry import
from ..metadata_collector.metadata_registry import MetadataRegistry
# Only import MetadataRegistry in non-standalone mode
if not standalone_mode:
# Import metadata_collector functions and classes conditionally
from ..metadata_collector import get_metadata # Add MetadataCollector import
from ..metadata_collector.metadata_processor import MetadataProcessor # Add MetadataProcessor import
from ..metadata_collector.metadata_registry import MetadataRegistry
logger = logging.getLogger(__name__)
@@ -47,6 +57,7 @@ class RecipeRoutes:
app.router.add_get('/api/recipes', routes.get_recipes)
app.router.add_get('/api/recipe/{recipe_id}', routes.get_recipe_detail)
app.router.add_post('/api/recipes/analyze-image', routes.analyze_recipe_image)
app.router.add_post('/api/recipes/analyze-local-image', routes.analyze_local_image)
app.router.add_post('/api/recipes/save', routes.save_recipe)
app.router.add_delete('/api/recipe/{recipe_id}', routes.delete_recipe)
@@ -61,12 +72,18 @@ class RecipeRoutes:
# Add new endpoint for getting recipe syntax
app.router.add_get('/api/recipe/{recipe_id}/syntax', routes.get_recipe_syntax)
# Add new endpoint for updating recipe metadata (name and tags)
# Add new endpoint for updating recipe metadata (name, tags and source_path)
app.router.add_put('/api/recipe/{recipe_id}/update', routes.update_recipe)
# Add new endpoint for reconnecting deleted LoRAs
app.router.add_post('/api/recipe/lora/reconnect', routes.reconnect_lora)
# Add new endpoint for finding duplicate recipes
app.router.add_get('/api/recipes/find-duplicates', routes.find_duplicates)
# Add new endpoint for bulk deletion of recipes
app.router.add_post('/api/recipes/bulk-delete', routes.bulk_delete)
# Start cache initialization
app.on_startup.append(routes._init_cache)
@@ -74,6 +91,9 @@ class RecipeRoutes:
# Add route to get recipes for a specific Lora
app.router.add_get('/api/recipes/for-lora', routes.get_recipes_for_lora)
# Add new endpoint for scanning and rebuilding the recipe cache
app.router.add_get('/api/recipes/scan', routes.scan_recipes)
async def _init_cache(self, app):
"""Initialize cache on startup"""
@@ -234,6 +254,7 @@ class RecipeRoutes:
content_type = request.headers.get('Content-Type', '')
is_url_mode = False
metadata = None # Initialize metadata variable
if 'multipart/form-data' in content_type:
# Handle image upload
@@ -267,17 +288,63 @@ class RecipeRoutes:
"loras": []
}, status=400)
-                # Download image from URL
-                temp_path = download_civitai_image(url)
-                if not temp_path:
-                    return web.json_response({
-                        "error": "Failed to download image from URL",
-                        "loras": []
-                    }, status=400)
-                # Extract metadata from the image using ExifUtils
-                metadata = ExifUtils.extract_image_metadata(temp_path)
+                # Check if this is a Civitai image URL
+                import re
+                civitai_image_match = re.match(r'https://civitai\.com/images/(\d+)', url)
+                if civitai_image_match:
+                    # Extract image ID and fetch image info using get_image_info
+                    image_id = civitai_image_match.group(1)
+                    image_info = await self.civitai_client.get_image_info(image_id)
+                    if not image_info:
+                        return web.json_response({
+                            "error": "Failed to fetch image information from Civitai",
+                            "loras": []
+                        }, status=400)
+                    # Get image URL from response
+                    image_url = image_info.get('url')
+                    if not image_url:
+                        return web.json_response({
+                            "error": "No image URL found in Civitai response",
+                            "loras": []
+                        }, status=400)
+                    # Download image directly from URL
+                    session = await self.civitai_client.session
+                    # Create a temporary file to save the downloaded image
+                    with tempfile.NamedTemporaryFile(delete=False, suffix='.jpg') as temp_file:
+                        temp_path = temp_file.name
+                    async with session.get(image_url) as response:
+                        if response.status != 200:
+                            return web.json_response({
+                                "error": f"Failed to download image from URL: HTTP {response.status}",
+                                "loras": []
+                            }, status=400)
+                        with open(temp_path, 'wb') as f:
+                            f.write(await response.read())
+                    # Use meta field from image_info as metadata
+                    if 'meta' in image_info:
+                        metadata = image_info['meta']
+                else:
+                    # Not a Civitai image URL, use the original download method
+                    temp_path = download_civitai_image(url)
+                    if not temp_path:
+                        return web.json_response({
+                            "error": "Failed to download image from URL",
+                            "loras": []
+                        }, status=400)
+                # If metadata wasn't obtained from Civitai API, extract it from the image
+                if metadata is None:
+                    # Extract metadata from the image using ExifUtils
+                    metadata = ExifUtils.extract_image_metadata(temp_path)
# If no metadata found, return a more specific error
if not metadata:
@@ -288,7 +355,6 @@ class RecipeRoutes:
# For URL mode, include the image data as base64
if is_url_mode and temp_path:
import base64
with open(temp_path, "rb") as image_file:
result["image_base64"] = base64.b64encode(image_file.read()).decode('utf-8')
@@ -305,7 +371,6 @@ class RecipeRoutes:
# For URL mode, include the image data as base64
if is_url_mode and temp_path:
import base64
with open(temp_path, "rb") as image_file:
result["image_base64"] = base64.b64encode(image_file.read()).decode('utf-8')
@@ -320,7 +385,6 @@ class RecipeRoutes:
# For URL mode, include the image data as base64
if is_url_mode and temp_path:
import base64
with open(temp_path, "rb") as image_file:
result["image_base64"] = base64.b64encode(image_file.read()).decode('utf-8')
@@ -328,6 +392,21 @@ class RecipeRoutes:
if "error" in result and not result.get("loras"):
return web.json_response(result, status=200)
# Calculate fingerprint from parsed loras
from ..utils.utils import calculate_recipe_fingerprint
fingerprint = calculate_recipe_fingerprint(result.get("loras", []))
# Add fingerprint to result
result["fingerprint"] = fingerprint
# Find matching recipes with the same fingerprint
matching_recipes = []
if fingerprint:
matching_recipes = await self.recipe_scanner.find_recipes_by_fingerprint(fingerprint)
# Add matching recipes to result
result["matching_recipes"] = matching_recipes
return web.json_response(result)
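
calculate_recipe_fingerprint is imported from ..utils.utils and not shown in this diff; the duplicate matching only requires that the same LoRA set always yields the same string. A hypothetical sketch of such a fingerprint (scheme and names are assumptions, not the project's actual algorithm):

import hashlib

def recipe_fingerprint(loras):
    """Hypothetical: hash the sorted (hash, strength) pairs of non-deleted LoRAs."""
    parts = sorted(
        f"{l.get('hash', '').lower()}:{float(l.get('strength', 1.0)):.2f}"
        for l in loras if not l.get('isDeleted')
    )
    return hashlib.sha256("|".join(parts).encode()).hexdigest()
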
except Exception as e:
@@ -343,7 +422,100 @@ class RecipeRoutes:
os.unlink(temp_path)
except Exception as e:
logger.error(f"Error deleting temporary file: {e}")
async def analyze_local_image(self, request: web.Request) -> web.Response:
"""Analyze a local image file for recipe metadata"""
try:
# Ensure services are initialized
await self.init_services()
# Get JSON data from request
data = await request.json()
file_path = data.get('path')
if not file_path:
return web.json_response({
'error': 'No file path provided',
'loras': []
}, status=400)
# Normalize file path for cross-platform compatibility
file_path = os.path.normpath(file_path.strip('"').strip("'"))
# Validate that the file exists
if not os.path.isfile(file_path):
return web.json_response({
'error': 'File not found',
'loras': []
}, status=404)
# Extract metadata from the image using ExifUtils
metadata = ExifUtils.extract_image_metadata(file_path)
# If no metadata found, return error
if not metadata:
# Get base64 image data
with open(file_path, "rb") as image_file:
image_base64 = base64.b64encode(image_file.read()).decode('utf-8')
return web.json_response({
"error": "No metadata found in this image",
"loras": [], # Return empty loras array to prevent client-side errors
"image_base64": image_base64
}, status=200)
# Use the parser factory to get the appropriate parser
parser = RecipeParserFactory.create_parser(metadata)
if parser is None:
# Get base64 image data
with open(file_path, "rb") as image_file:
image_base64 = base64.b64encode(image_file.read()).decode('utf-8')
return web.json_response({
"error": "No parser found for this image",
"loras": [], # Return empty loras array to prevent client-side errors
"image_base64": image_base64
}, status=200)
# Parse the metadata
result = await parser.parse_metadata(
metadata,
recipe_scanner=self.recipe_scanner,
civitai_client=self.civitai_client
)
# Add base64 image data to result
with open(file_path, "rb") as image_file:
result["image_base64"] = base64.b64encode(image_file.read()).decode('utf-8')
# Check for errors
if "error" in result and not result.get("loras"):
return web.json_response(result, status=200)
# Calculate fingerprint from parsed loras
from ..utils.utils import calculate_recipe_fingerprint
fingerprint = calculate_recipe_fingerprint(result.get("loras", []))
# Add fingerprint to result
result["fingerprint"] = fingerprint
# Find matching recipes with the same fingerprint
matching_recipes = []
if fingerprint:
matching_recipes = await self.recipe_scanner.find_recipes_by_fingerprint(fingerprint)
# Add matching recipes to result
result["matching_recipes"] = matching_recipes
return web.json_response(result)
except Exception as e:
logger.error(f"Error analyzing local image: {e}", exc_info=True)
return web.json_response({
'error': str(e),
'loras': [] # Return empty loras array to prevent client-side errors
}, status=500)
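
analyze_local_image expects a JSON body with a server-local path. A hedged usage sketch; the /api/recipes/analyze-local-image route is registered above, while the host and port are assumptions:

import json, urllib.request

payload = {"path": r"C:\images\sample.png"}  # local file on the server machine
req = urllib.request.Request(
    "http://127.0.0.1:8188/api/recipes/analyze-local-image",  # assumed host/port
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
print(result.get("fingerprint"), len(result.get("loras", [])))
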
async def save_recipe(self, request: web.Request) -> web.Response:
"""Save a recipe to the recipes folder"""
@@ -413,7 +585,6 @@ class RecipeRoutes:
if not image:
if image_base64:
# Convert base64 to binary
import base64
try:
# Remove potential data URL prefix
if ',' in image_base64:
@@ -462,7 +633,7 @@ class RecipeRoutes:
with open(image_path, 'wb') as f:
f.write(optimized_image)
# Create the recipe JSON
# Create the recipe data structure
current_time = time.time()
# Format loras data according to the recipe.json format
@@ -477,7 +648,7 @@ class RecipeRoutes:
"file_name": lora.get("file_name", "") or os.path.splitext(os.path.basename(lora.get("localPath", "")))[0] if lora.get("localPath") else "",
"hash": lora.get("hash", "").lower() if lora.get("hash") else "",
"strength": float(lora.get("weight", 1.0)),
"modelVersionId": lora.get("id", ""),
"modelVersionId": lora.get("id", 0),
"modelName": lora.get("name", ""),
"modelVersionName": lora.get("version", ""),
"isDeleted": lora.get("isDeleted", False), # Preserve deletion status in saved recipe
@@ -502,6 +673,10 @@ class RecipeRoutes:
"clip_skip": raw_metadata.get("clip_skip", "")
}
# Calculate recipe fingerprint
from ..utils.utils import calculate_recipe_fingerprint
fingerprint = calculate_recipe_fingerprint(loras_data)
# Create the recipe data structure
recipe_data = {
"id": recipe_id,
@@ -511,13 +686,18 @@ class RecipeRoutes:
"created_date": current_time,
"base_model": metadata.get("base_model", ""),
"loras": loras_data,
"gen_params": gen_params
"gen_params": gen_params,
"fingerprint": fingerprint
}
# Add tags if provided
if tags:
recipe_data["tags"] = tags
# Add source_path if provided in metadata
if metadata.get("source_path"):
recipe_data["source_path"] = metadata.get("source_path")
# Save the recipe JSON
json_filename = f"{recipe_id}.recipe.json"
json_path = os.path.join(recipes_dir, json_filename)
@@ -527,6 +707,14 @@ class RecipeRoutes:
# Add recipe metadata to the image
ExifUtils.append_recipe_metadata(image_path, recipe_data)
# Check for duplicates
matching_recipes = []
if fingerprint:
matching_recipes = await self.recipe_scanner.find_recipes_by_fingerprint(fingerprint)
# Remove current recipe from matches
if recipe_id in matching_recipes:
matching_recipes.remove(recipe_id)
# Simplified cache update approach
# Instead of trying to update the cache directly, just set it to None
# to force a refresh on the next get_cached_data call
@@ -542,7 +730,8 @@ class RecipeRoutes:
'success': True,
'recipe_id': recipe_id,
'image_path': image_path,
'json_path': json_path
'json_path': json_path,
'matching_recipes': matching_recipes
})
except Exception as e:
@@ -801,10 +990,13 @@ class RecipeRoutes:
return web.json_response({"error": "No generation metadata found"}, status=400)
# Get the most recent image from metadata registry instead of temp directory
metadata_registry = MetadataRegistry()
latest_image = metadata_registry.get_first_decoded_image()
if not standalone_mode:
metadata_registry = MetadataRegistry()
latest_image = metadata_registry.get_first_decoded_image()
else:
latest_image = None
if not latest_image:
if latest_image is None:
return web.json_response({"error": "No recent images found to use for recipe. Try generating an image first."}, status=400)
# Convert the image data to bytes - handle tuple and tensor cases
@@ -915,7 +1107,7 @@ class RecipeRoutes:
"file_name": lora_name,
"hash": lora_info.get("sha256", "").lower() if lora_info else "",
"strength": float(lora_strength),
"modelVersionId": lora_info.get("civitai", {}).get("id", "") if lora_info else "",
"modelVersionId": lora_info.get("civitai", {}).get("id", 0) if lora_info else 0,
"modelName": lora_info.get("civitai", {}).get("model", {}).get("name", "") if lora_info else lora_name,
"modelVersionName": lora_info.get("civitai", {}).get("name", "") if lora_info else "",
"isDeleted": False
@@ -1074,9 +1266,9 @@ class RecipeRoutes:
data = await request.json()
# Validate required fields
if 'title' not in data and 'tags' not in data:
if 'title' not in data and 'tags' not in data and 'source_path' not in data and 'preview_nsfw_level' not in data:
return web.json_response({
"error": "At least one field to update must be provided (title or tags)"
"error": "At least one field to update must be provided (title or tags or source_path or preview_nsfw_level)"
}, status=400)
# Use the recipe scanner's update method
@@ -1104,7 +1296,7 @@ class RecipeRoutes:
data = await request.json()
# Validate required fields
required_fields = ['recipe_id', 'lora_data', 'target_name']
required_fields = ['recipe_id', 'lora_index', 'target_name']
for field in required_fields:
if field not in data:
return web.json_response({
@@ -1112,7 +1304,7 @@ class RecipeRoutes:
}, status=400)
recipe_id = data['recipe_id']
lora_data = data['lora_data']
lora_index = int(data['lora_index'])
target_name = data['target_name']
# Get recipe scanner
@@ -1132,52 +1324,37 @@ class RecipeRoutes:
# Load recipe data
with open(recipe_path, 'r', encoding='utf-8') as f:
recipe_data = json.load(f)
# Find the deleted LoRA in the recipe
found = False
updated_lora = None
lora = recipe_data.get("loras", [])[lora_index] if lora_index < len(recipe_data.get('loras', [])) else None
if lora is None:
return web.json_response({"error": "LoRA index out of range in recipe"}, status=404)
# Update LoRA data
lora['isDeleted'] = False
lora['exclude'] = False
lora['file_name'] = target_name
# Identification can be by hash, modelVersionId, or modelName
for i, lora in enumerate(recipe_data.get('loras', [])):
match_found = False
# Try to match by available identifiers
if 'hash' in lora and 'hash' in lora_data and lora['hash'] == lora_data['hash']:
match_found = True
elif 'modelVersionId' in lora and 'modelVersionId' in lora_data and lora['modelVersionId'] == lora_data['modelVersionId']:
match_found = True
elif 'modelName' in lora and 'modelName' in lora_data and lora['modelName'] == lora_data['modelName']:
match_found = True
if match_found:
# Update LoRA data
lora['isDeleted'] = False
lora['file_name'] = target_name
# Update with information from the target LoRA
if 'sha256' in target_lora:
lora['hash'] = target_lora['sha256'].lower()
if target_lora.get("civitai"):
lora['modelName'] = target_lora['civitai']['model']['name']
lora['modelVersionName'] = target_lora['civitai']['name']
lora['modelVersionId'] = target_lora['civitai']['id']
# Keep original fields for identification
# Mark as found and store updated lora
found = True
updated_lora = dict(lora) # Make a copy for response
break
if not found:
return web.json_response({"error": "Could not find matching deleted LoRA in recipe"}, status=404)
# Update with information from the target LoRA
if 'sha256' in target_lora:
lora['hash'] = target_lora['sha256'].lower()
if target_lora.get("civitai"):
lora['modelName'] = target_lora['civitai']['model']['name']
lora['modelVersionName'] = target_lora['civitai']['name']
lora['modelVersionId'] = target_lora['civitai']['id']
updated_lora = dict(lora) # Make a copy for response
# Recalculate recipe fingerprint after updating LoRA
from ..utils.utils import calculate_recipe_fingerprint
recipe_data['fingerprint'] = calculate_recipe_fingerprint(recipe_data.get('loras', []))
# Save updated recipe
with open(recipe_path, 'w', encoding='utf-8') as f:
json.dump(recipe_data, f, indent=4, ensure_ascii=False)
updated_lora['inLibrary'] = True
updated_lora['preview_url'] = target_lora['preview_url']
updated_lora['preview_url'] = config.get_preview_static_url(target_lora['preview_url'])
updated_lora['localPath'] = target_lora['file_path']
# Update in cache if it exists
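With the switch from lora_data matching to lora_index, a client addresses the LoRA to replace by its position instead of by hash or model name. A request body could look like the following — the values are placeholders; only the field names come from the required_fields validation above:

# Hypothetical request body; field names match required_fields above.
payload = {
    "recipe_id": "abc123",        # placeholder recipe ID
    "lora_index": 0,              # position in the recipe's loras array
    "target_name": "my_lora_v2",  # file_name of the replacement LoRA
}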
@@ -1186,6 +1363,8 @@ class RecipeRoutes:
if cache_item.get('id') == recipe_id:
# Replace loras array with updated version
cache_item['loras'] = recipe_data['loras']
# Update fingerprint in cache
cache_item['fingerprint'] = recipe_data['fingerprint']
# Resort the cache
asyncio.create_task(scanner._cache.resort())
@@ -1196,11 +1375,20 @@ class RecipeRoutes:
if image_path and os.path.exists(image_path):
from ..utils.exif_utils import ExifUtils
ExifUtils.append_recipe_metadata(image_path, recipe_data)
# Find other recipes with the same fingerprint
matching_recipes = []
if 'fingerprint' in recipe_data:
matching_recipes = await scanner.find_recipes_by_fingerprint(recipe_data['fingerprint'])
# Remove current recipe from matches
if recipe_id in matching_recipes:
matching_recipes.remove(recipe_id)
return web.json_response({
"success": True,
"recipe_id": recipe_id,
"updated_lora": updated_lora
"updated_lora": updated_lora,
"matching_recipes": matching_recipes
})
except Exception as e:
@@ -1255,3 +1443,171 @@ class RecipeRoutes:
except Exception as e:
logger.error(f"Error getting recipes for Lora: {str(e)}")
return web.json_response({'success': False, 'error': str(e)}, status=500)
async def scan_recipes(self, request: web.Request) -> web.Response:
"""API endpoint for scanning and rebuilding the recipe cache"""
try:
# Ensure services are initialized
await self.init_services()
# Force refresh the recipe cache
logger.info("Manually triggering recipe cache rebuild")
await self.recipe_scanner.get_cached_data(force_refresh=True)
return web.json_response({
'success': True,
'message': 'Recipe cache refreshed successfully'
})
except Exception as e:
logger.error(f"Error refreshing recipe cache: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def find_duplicates(self, request: web.Request) -> web.Response:
"""Find all duplicate recipes based on fingerprints"""
try:
# Ensure services are initialized
await self.init_services()
# Get all duplicate recipes
duplicate_groups = await self.recipe_scanner.find_all_duplicate_recipes()
# Create response data with additional recipe information
response_data = []
for fingerprint, recipe_ids in duplicate_groups.items():
# Skip groups with only one recipe (not duplicates)
if len(recipe_ids) <= 1:
continue
# Get recipe details for each recipe in the group
recipes = []
for recipe_id in recipe_ids:
recipe = await self.recipe_scanner.get_recipe_by_id(recipe_id)
if recipe:
# Add only needed fields to keep response size manageable
recipes.append({
'id': recipe.get('id'),
'title': recipe.get('title'),
'file_url': recipe.get('file_url') or self._format_recipe_file_url(recipe.get('file_path', '')),
'modified': recipe.get('modified'),
'created_date': recipe.get('created_date'),
'lora_count': len(recipe.get('loras', [])),
})
# Only include groups with at least 2 valid recipes
if len(recipes) >= 2:
# Sort recipes by modified date (newest first)
recipes.sort(key=lambda x: x.get('modified', 0), reverse=True)
response_data.append({
'fingerprint': fingerprint,
'count': len(recipes),
'recipes': recipes
})
# Sort groups by count (highest first)
response_data.sort(key=lambda x: x['count'], reverse=True)
return web.json_response({
'success': True,
'duplicate_groups': response_data
})
except Exception as e:
logger.error(f"Error finding duplicate recipes: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def bulk_delete(self, request: web.Request) -> web.Response:
"""Delete multiple recipes by ID"""
try:
# Ensure services are initialized
await self.init_services()
# Parse request data
data = await request.json()
recipe_ids = data.get('recipe_ids', [])
if not recipe_ids:
return web.json_response({
'success': False,
'error': 'No recipe IDs provided'
}, status=400)
# Get recipes directory
recipes_dir = self.recipe_scanner.recipes_dir
if not recipes_dir or not os.path.exists(recipes_dir):
return web.json_response({
'success': False,
'error': 'Recipes directory not found'
}, status=404)
# Track deleted and failed recipes
deleted_recipes = []
failed_recipes = []
# Process each recipe ID
for recipe_id in recipe_ids:
# Find recipe JSON file
recipe_json_path = os.path.join(recipes_dir, f"{recipe_id}.recipe.json")
if not os.path.exists(recipe_json_path):
failed_recipes.append({
'id': recipe_id,
'reason': 'Recipe not found'
})
continue
try:
# Load recipe data to get image path
with open(recipe_json_path, 'r', encoding='utf-8') as f:
recipe_data = json.load(f)
# Get image path
image_path = recipe_data.get('file_path')
# Delete recipe JSON file
os.remove(recipe_json_path)
# Delete recipe image if it exists
if image_path and os.path.exists(image_path):
os.remove(image_path)
deleted_recipes.append(recipe_id)
except Exception as e:
failed_recipes.append({
'id': recipe_id,
'reason': str(e)
})
# Update cache if any recipes were deleted
if deleted_recipes and self.recipe_scanner._cache is not None:
# Remove deleted recipes from raw_data
self.recipe_scanner._cache.raw_data = [
r for r in self.recipe_scanner._cache.raw_data
if r.get('id') not in deleted_recipes
]
# Resort the cache
asyncio.create_task(self.recipe_scanner._cache.resort())
logger.info(f"Removed {len(deleted_recipes)} recipes from cache")
return web.json_response({
'success': True,
'deleted': deleted_recipes,
'failed': failed_recipes,
'total_deleted': len(deleted_recipes),
'total_failed': len(failed_recipes)
})
except Exception as e:
logger.error(f"Error performing bulk delete: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
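For reference, a minimal client for the bulk-delete endpoint might look like this — the route path and port are assumptions; only the request and response shapes match the handler above:

import asyncio
import aiohttp

async def bulk_delete(recipe_ids):
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://127.0.0.1:8188/api/recipes/bulk-delete",  # assumed route
            json={"recipe_ids": recipe_ids},
        ) as resp:
            result = await resp.json()
    print(f"deleted {result['total_deleted']}, failed {result['total_failed']}")

asyncio.run(bulk_delete(["abc123", "def456"]))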

py/routes/stats_routes.py (new file, 438 lines)

@@ -0,0 +1,438 @@
import os
import json
import jinja2
from aiohttp import web
import logging
from datetime import datetime, timedelta
from collections import defaultdict, Counter
from typing import Dict, List, Any
from ..config import config
from ..services.settings_manager import settings
from ..services.service_registry import ServiceRegistry
from ..utils.usage_stats import UsageStats
logger = logging.getLogger(__name__)
class StatsRoutes:
"""Route handlers for Statistics page and API endpoints"""
def __init__(self):
self.lora_scanner = None
self.checkpoint_scanner = None
self.usage_stats = None
self.template_env = jinja2.Environment(
loader=jinja2.FileSystemLoader(config.templates_path),
autoescape=True
)
async def init_services(self):
"""Initialize services from ServiceRegistry"""
self.lora_scanner = await ServiceRegistry.get_lora_scanner()
self.checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
self.usage_stats = UsageStats()
async def handle_stats_page(self, request: web.Request) -> web.Response:
"""Handle GET /statistics request"""
try:
# Ensure services are initialized
await self.init_services()
# Check if scanners are initializing
lora_initializing = (
self.lora_scanner._cache is None or
(hasattr(self.lora_scanner, 'is_initializing') and self.lora_scanner.is_initializing())
)
checkpoint_initializing = (
self.checkpoint_scanner._cache is None or
(hasattr(self.checkpoint_scanner, '_is_initializing') and self.checkpoint_scanner._is_initializing)
)
is_initializing = lora_initializing or checkpoint_initializing
template = self.template_env.get_template('statistics.html')
rendered = template.render(
is_initializing=is_initializing,
settings=settings,
request=request
)
return web.Response(
text=rendered,
content_type='text/html'
)
except Exception as e:
logger.error(f"Error handling statistics request: {e}", exc_info=True)
return web.Response(
text="Error loading statistics page",
status=500
)
async def get_collection_overview(self, request: web.Request) -> web.Response:
"""Get collection overview statistics"""
try:
await self.init_services()
# Get LoRA statistics
lora_cache = await self.lora_scanner.get_cached_data()
lora_count = len(lora_cache.raw_data)
lora_size = sum(lora.get('size', 0) for lora in lora_cache.raw_data)
# Get Checkpoint statistics
checkpoint_cache = await self.checkpoint_scanner.get_cached_data()
checkpoint_count = len(checkpoint_cache.raw_data)
checkpoint_size = sum(cp.get('size', 0) for cp in checkpoint_cache.raw_data)
# Get usage statistics
usage_data = await self.usage_stats.get_stats()
return web.json_response({
'success': True,
'data': {
'total_models': lora_count + checkpoint_count,
'lora_count': lora_count,
'checkpoint_count': checkpoint_count,
'total_size': lora_size + checkpoint_size,
'lora_size': lora_size,
'checkpoint_size': checkpoint_size,
'total_generations': usage_data.get('total_executions', 0),
'unused_loras': self._count_unused_models(lora_cache.raw_data, usage_data.get('loras', {})),
'unused_checkpoints': self._count_unused_models(checkpoint_cache.raw_data, usage_data.get('checkpoints', {}))
}
})
except Exception as e:
logger.error(f"Error getting collection overview: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_usage_analytics(self, request: web.Request) -> web.Response:
"""Get usage analytics data"""
try:
await self.init_services()
# Get usage statistics
usage_data = await self.usage_stats.get_stats()
# Get model data for enrichment
lora_cache = await self.lora_scanner.get_cached_data()
checkpoint_cache = await self.checkpoint_scanner.get_cached_data()
# Create hash to model mapping
lora_map = {lora['sha256']: lora for lora in lora_cache.raw_data}
checkpoint_map = {cp['sha256']: cp for cp in checkpoint_cache.raw_data}
# Prepare top used models
top_loras = self._get_top_used_models(usage_data.get('loras', {}), lora_map, 10)
top_checkpoints = self._get_top_used_models(usage_data.get('checkpoints', {}), checkpoint_map, 10)
# Prepare usage timeline (last 30 days)
timeline = self._get_usage_timeline(usage_data, 30)
return web.json_response({
'success': True,
'data': {
'top_loras': top_loras,
'top_checkpoints': top_checkpoints,
'usage_timeline': timeline,
'total_executions': usage_data.get('total_executions', 0)
}
})
except Exception as e:
logger.error(f"Error getting usage analytics: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_base_model_distribution(self, request: web.Request) -> web.Response:
"""Get base model distribution statistics"""
try:
await self.init_services()
# Get model data
lora_cache = await self.lora_scanner.get_cached_data()
checkpoint_cache = await self.checkpoint_scanner.get_cached_data()
# Count by base model
lora_base_models = Counter(lora.get('base_model', 'Unknown') for lora in lora_cache.raw_data)
checkpoint_base_models = Counter(cp.get('base_model', 'Unknown') for cp in checkpoint_cache.raw_data)
return web.json_response({
'success': True,
'data': {
'loras': dict(lora_base_models),
'checkpoints': dict(checkpoint_base_models)
}
})
except Exception as e:
logger.error(f"Error getting base model distribution: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_tag_analytics(self, request: web.Request) -> web.Response:
"""Get tag usage analytics"""
try:
await self.init_services()
# Get model data
lora_cache = await self.lora_scanner.get_cached_data()
checkpoint_cache = await self.checkpoint_scanner.get_cached_data()
# Count tag frequencies
all_tags = []
for lora in lora_cache.raw_data:
all_tags.extend(lora.get('tags', []))
for cp in checkpoint_cache.raw_data:
all_tags.extend(cp.get('tags', []))
tag_counts = Counter(all_tags)
# Get top 50 tags
top_tags = [{'tag': tag, 'count': count} for tag, count in tag_counts.most_common(50)]
return web.json_response({
'success': True,
'data': {
'top_tags': top_tags,
'total_unique_tags': len(tag_counts)
}
})
except Exception as e:
logger.error(f"Error getting tag analytics: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_storage_analytics(self, request: web.Request) -> web.Response:
"""Get storage usage analytics"""
try:
await self.init_services()
# Get usage statistics
usage_data = await self.usage_stats.get_stats()
# Get model data
lora_cache = await self.lora_scanner.get_cached_data()
checkpoint_cache = await self.checkpoint_scanner.get_cached_data()
# Create models with usage data
lora_storage = []
for lora in lora_cache.raw_data:
usage_count = 0
if lora['sha256'] in usage_data.get('loras', {}):
usage_count = usage_data['loras'][lora['sha256']].get('total', 0)
lora_storage.append({
'name': lora['model_name'],
'size': lora.get('size', 0),
'usage_count': usage_count,
'folder': lora.get('folder', ''),
'base_model': lora.get('base_model', 'Unknown')
})
checkpoint_storage = []
for cp in checkpoint_cache.raw_data:
usage_count = 0
if cp['sha256'] in usage_data.get('checkpoints', {}):
usage_count = usage_data['checkpoints'][cp['sha256']].get('total', 0)
checkpoint_storage.append({
'name': cp['model_name'],
'size': cp.get('size', 0),
'usage_count': usage_count,
'folder': cp.get('folder', ''),
'base_model': cp.get('base_model', 'Unknown')
})
# Sort by size
lora_storage.sort(key=lambda x: x['size'], reverse=True)
checkpoint_storage.sort(key=lambda x: x['size'], reverse=True)
return web.json_response({
'success': True,
'data': {
'loras': lora_storage[:20], # Top 20 by size
'checkpoints': checkpoint_storage[:20]
}
})
except Exception as e:
logger.error(f"Error getting storage analytics: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
async def get_insights(self, request: web.Request) -> web.Response:
"""Get smart insights about the collection"""
try:
await self.init_services()
# Get usage statistics
usage_data = await self.usage_stats.get_stats()
# Get model data
lora_cache = await self.lora_scanner.get_cached_data()
checkpoint_cache = await self.checkpoint_scanner.get_cached_data()
insights = []
# Calculate unused models
unused_loras = self._count_unused_models(lora_cache.raw_data, usage_data.get('loras', {}))
unused_checkpoints = self._count_unused_models(checkpoint_cache.raw_data, usage_data.get('checkpoints', {}))
total_loras = len(lora_cache.raw_data)
total_checkpoints = len(checkpoint_cache.raw_data)
if total_loras > 0:
unused_lora_percent = (unused_loras / total_loras) * 100
if unused_lora_percent > 50:
insights.append({
'type': 'warning',
'title': 'High Number of Unused LoRAs',
'description': f'{unused_lora_percent:.1f}% of your LoRAs ({unused_loras}/{total_loras}) have never been used.',
'suggestion': 'Consider organizing or archiving unused models to free up storage space.'
})
if total_checkpoints > 0:
unused_checkpoint_percent = (unused_checkpoints / total_checkpoints) * 100
if unused_checkpoint_percent > 30:
insights.append({
'type': 'warning',
'title': 'Unused Checkpoints Detected',
'description': f'{unused_checkpoint_percent:.1f}% of your checkpoints ({unused_checkpoints}/{total_checkpoints}) have never been used.',
'suggestion': 'Review and consider removing checkpoints you no longer need.'
})
# Storage insights
total_size = sum(lora.get('size', 0) for lora in lora_cache.raw_data) + \
sum(cp.get('size', 0) for cp in checkpoint_cache.raw_data)
if total_size > 100 * 1024 * 1024 * 1024: # 100GB
insights.append({
'type': 'info',
'title': 'Large Collection Detected',
'description': f'Your model collection is using {self._format_size(total_size)} of storage.',
'suggestion': 'Consider using external storage or cloud solutions for better organization.'
})
# Recent activity insight
if usage_data.get('total_executions', 0) > 100:
insights.append({
'type': 'success',
'title': 'Active User',
'description': f'You\'ve completed {usage_data["total_executions"]} generations so far!',
'suggestion': 'Keep exploring and creating amazing content with your models.'
})
return web.json_response({
'success': True,
'data': {
'insights': insights
}
})
except Exception as e:
logger.error(f"Error getting insights: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
def _count_unused_models(self, models: List[Dict], usage_data: Dict) -> int:
"""Count models that have never been used"""
used_hashes = set(usage_data.keys())
unused_count = 0
for model in models:
if model.get('sha256') not in used_hashes:
unused_count += 1
return unused_count
def _get_top_used_models(self, usage_data: Dict, model_map: Dict, limit: int) -> List[Dict]:
"""Get top used models with their metadata"""
sorted_usage = sorted(usage_data.items(), key=lambda x: x[1].get('total', 0), reverse=True)
top_models = []
for sha256, usage_info in sorted_usage[:limit]:
if sha256 in model_map:
model = model_map[sha256]
top_models.append({
'name': model['model_name'],
'usage_count': usage_info.get('total', 0),
'base_model': model.get('base_model', 'Unknown'),
'preview_url': config.get_preview_static_url(model.get('preview_url', '')),
'folder': model.get('folder', '')
})
return top_models
def _get_usage_timeline(self, usage_data: Dict, days: int) -> List[Dict]:
"""Get usage timeline for the past N days"""
timeline = []
today = datetime.now()
for i in range(days):
date = today - timedelta(days=i)
date_str = date.strftime('%Y-%m-%d')
lora_usage = 0
checkpoint_usage = 0
# Count usage for this date
for model_usage in usage_data.get('loras', {}).values():
if isinstance(model_usage, dict) and 'history' in model_usage:
lora_usage += model_usage['history'].get(date_str, 0)
for model_usage in usage_data.get('checkpoints', {}).values():
if isinstance(model_usage, dict) and 'history' in model_usage:
checkpoint_usage += model_usage['history'].get(date_str, 0)
timeline.append({
'date': date_str,
'lora_usage': lora_usage,
'checkpoint_usage': checkpoint_usage,
'total_usage': lora_usage + checkpoint_usage
})
return list(reversed(timeline)) # Oldest to newest
def _format_size(self, size_bytes: int) -> str:
"""Format file size in human readable format"""
for unit in ['B', 'KB', 'MB', 'GB', 'TB']:
if size_bytes < 1024.0:
return f"{size_bytes:.1f} {unit}"
size_bytes /= 1024.0
return f"{size_bytes:.1f} PB"
def setup_routes(self, app: web.Application):
"""Register routes with the application"""
# Add an app startup handler to initialize services
app.on_startup.append(self._on_startup)
# Register page route
app.router.add_get('/statistics', self.handle_stats_page)
# Register API routes
app.router.add_get('/api/stats/collection-overview', self.get_collection_overview)
app.router.add_get('/api/stats/usage-analytics', self.get_usage_analytics)
app.router.add_get('/api/stats/base-model-distribution', self.get_base_model_distribution)
app.router.add_get('/api/stats/tag-analytics', self.get_tag_analytics)
app.router.add_get('/api/stats/storage-analytics', self.get_storage_analytics)
app.router.add_get('/api/stats/insights', self.get_insights)
async def _on_startup(self, app):
"""Initialize services when the app starts"""
await self.init_services()
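Two quick illustrations of the helpers above. _format_size repeatedly divides by 1024 until the value drops under one unit, and each timeline entry aggregates per-day history counts:

# _format_size(512)           -> "512.0 B"
# _format_size(1536)          -> "1.5 KB"
# _format_size(100 * 1024**3) -> "100.0 GB"
#
# Shape of a _get_usage_timeline entry (counts here are hypothetical):
# {"date": "2025-01-31", "lora_usage": 3, "checkpoint_usage": 1, "total_usage": 4}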

View File

@@ -2,6 +2,8 @@ import os
import aiohttp
import logging
import toml
import subprocess
from datetime import datetime
from aiohttp import web
from typing import Dict, Any, List
@@ -13,7 +15,8 @@ class UpdateRoutes:
@staticmethod
def setup_routes(app):
"""Register update check routes"""
app.router.add_get('/loras/api/check-updates', UpdateRoutes.check_updates)
app.router.add_get('/api/check-updates', UpdateRoutes.check_updates)
app.router.add_get('/api/version-info', UpdateRoutes.get_version_info)
@staticmethod
async def check_updates(request):
@@ -24,6 +27,9 @@ class UpdateRoutes:
try:
# Read local version from pyproject.toml
local_version = UpdateRoutes._get_local_version()
# Get git info (commit hash, branch)
git_info = UpdateRoutes._get_git_info()
# Fetch remote version from GitHub
remote_version, changelog = await UpdateRoutes._get_remote_version()
@@ -39,7 +45,8 @@ class UpdateRoutes:
'current_version': local_version,
'latest_version': remote_version,
'update_available': update_available,
'changelog': changelog
'changelog': changelog,
'git_info': git_info
})
except Exception as e:
@@ -49,6 +56,34 @@ class UpdateRoutes:
'error': str(e)
})
@staticmethod
async def get_version_info(request):
"""
Returns the current version in the format 'version-short_hash'
"""
try:
# Read local version from pyproject.toml
local_version = UpdateRoutes._get_local_version().replace('v', '')
# Get git info (commit hash, branch)
git_info = UpdateRoutes._get_git_info()
short_hash = git_info['short_hash']
# Format: version-short_hash
version_string = f"{local_version}-{short_hash}"
return web.json_response({
'success': True,
'version': version_string
})
except Exception as e:
logger.error(f"Failed to get version info: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
})
@staticmethod
def _get_local_version() -> str:
"""Get local plugin version from pyproject.toml"""
@@ -72,6 +107,72 @@ class UpdateRoutes:
logger.error(f"Failed to get local version: {e}", exc_info=True)
return "v0.0.0"
@staticmethod
def _get_git_info() -> Dict[str, str]:
"""Get Git repository information"""
current_dir = os.path.dirname(os.path.abspath(__file__))
plugin_root = os.path.dirname(os.path.dirname(current_dir))
git_info = {
'commit_hash': 'unknown',
'short_hash': 'unknown',
'branch': 'unknown',
'commit_date': 'unknown'
}
try:
# Check if we're in a git repository
if not os.path.exists(os.path.join(plugin_root, '.git')):
return git_info
# Get current commit hash
result = subprocess.run(
['git', 'rev-parse', 'HEAD'],
cwd=plugin_root,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
check=False
)
if result.returncode == 0:
git_info['commit_hash'] = result.stdout.strip()
git_info['short_hash'] = git_info['commit_hash'][:7]
# Get current branch name
result = subprocess.run(
['git', 'rev-parse', '--abbrev-ref', 'HEAD'],
cwd=plugin_root,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
check=False
)
if result.returncode == 0:
git_info['branch'] = result.stdout.strip()
# Get commit date
result = subprocess.run(
['git', 'show', '-s', '--format=%ci', 'HEAD'],
cwd=plugin_root,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True,
check=False
)
if result.returncode == 0:
commit_date = result.stdout.strip()
# Format the date nicely if possible
try:
date_obj = datetime.strptime(commit_date, '%Y-%m-%d %H:%M:%S %z')
git_info['commit_date'] = date_obj.strftime('%Y-%m-%d')
except ValueError:  # strptime failed; fall back to the raw date string
git_info['commit_date'] = commit_date
except Exception as e:
logger.warning(f"Error getting git info: {e}")
return git_info
@staticmethod
async def _get_remote_version() -> tuple[str, List[str]]:
"""
@@ -150,11 +251,16 @@ class UpdateRoutes:
"""
Compare two semantic version strings
Returns True if version2 is newer than version1
Ignores any suffixes after '-' (e.g., -bugfix, -alpha)
"""
try:
# Clean version strings - remove any suffix after '-'
v1_clean = version1.split('-')[0]
v2_clean = version2.split('-')[0]
# Split versions into components
v1_parts = [int(x) for x in version1.split('.')]
v2_parts = [int(x) for x in version2.split('.')]
v1_parts = [int(x) for x in v1_clean.split('.')]
v2_parts = [int(x) for x in v2_clean.split('.')]
# Ensure both have 3 components (major.minor.patch)
while len(v1_parts) < 3:
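The hunk is truncated before the padding loop finishes. Under the natural completion (pad both lists to major.minor.patch, then compare), the whole check behaves like this sketch:

def is_newer(version1: str, version2: str) -> bool:
    # Strip any '-suffix', split into ints, pad to three components.
    v1 = [int(x) for x in version1.split('-')[0].split('.')]
    v2 = [int(x) for x in version2.split('-')[0].split('.')]
    v1 += [0] * (3 - len(v1))
    v2 += [0] * (3 - len(v2))
    return v2 > v1

assert is_newer("0.8.18", "0.8.19-bugfix")  # suffix is ignored
assert not is_newer("1.0.0", "1.0")         # "1.0" pads to 1.0.0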

View File

@@ -34,6 +34,7 @@ class CivitaiClient:
'User-Agent': 'ComfyUI-LoRA-Manager/1.0'
}
self._session = None
self._session_created_at = None
# Set default buffer size to 1MB for higher throughput
self.chunk_size = 1024 * 1024
@@ -44,8 +45,8 @@ class CivitaiClient:
# Optimize TCP connection parameters
connector = aiohttp.TCPConnector(
ssl=True,
limit=10, # Increase parallel connections
ttl_dns_cache=300, # DNS cache time
limit=3, # Further reduced from 5 to 3
ttl_dns_cache=0, # Disabled DNS caching completely
force_close=False, # Keep connections for reuse
enable_cleanup_closed=True
)
@@ -57,7 +58,18 @@ class CivitaiClient:
trust_env=trust_env,
timeout=timeout
)
self._session_created_at = datetime.now()
return self._session
async def _ensure_fresh_session(self):
"""Refresh session if it's been open too long"""
if self._session is not None:
if self._session_created_at is None or \
(datetime.now() - self._session_created_at).total_seconds() > 300: # 5 minutes
await self.close()
self._session = None
return await self.session
def _parse_content_disposition(self, header: str) -> str:
"""Parse filename from content-disposition header"""
@@ -103,13 +115,15 @@ class CivitaiClient:
Returns:
Tuple[bool, str]: (success, save_path or error message)
"""
session = await self.session
logger.debug(f"Resolving DNS for: {url}")
session = await self._ensure_fresh_session()
try:
headers = self._get_request_headers()
# Add Range header to allow resumable downloads
headers['Accept-Encoding'] = 'identity' # Disable compression for better chunked downloads
logger.debug(f"Starting download from: {url}")
async with session.get(url, headers=headers, allow_redirects=True) as response:
if response.status != 200:
# Handle 401 unauthorized responses
@@ -124,6 +138,7 @@ class CivitaiClient:
return False, "Access forbidden: You don't have permission to download this file."
# Generic error response for other status codes
logger.error(f"Download failed for {url} with status {response.status}")
return False, f"Download failed with status {response.status}"
# Get filename from content-disposition header
@@ -170,7 +185,7 @@ class CivitaiClient:
async def get_model_by_hash(self, model_hash: str) -> Optional[Dict]:
try:
session = await self.session
session = await self._ensure_fresh_session()
async with session.get(f"{self.base_url}/model-versions/by-hash/{model_hash}") as response:
if response.status == 200:
return await response.json()
@@ -181,7 +196,7 @@ class CivitaiClient:
async def download_preview_image(self, image_url: str, save_path: str):
try:
session = await self.session
session = await self._ensure_fresh_session()
async with session.get(image_url) as response:
if response.status == 200:
content = await response.read()
@@ -196,7 +211,7 @@ class CivitaiClient:
async def get_model_versions(self, model_id: str) -> List[Dict]:
"""Get all versions of a model with local availability info"""
try:
session = await self.session # wait to acquire the session
session = await self._ensure_fresh_session() # Use fresh session
async with session.get(f"{self.base_url}/models/{model_id}") as response:
if response.status != 200:
return None
@@ -209,6 +224,69 @@ class CivitaiClient:
except Exception as e:
logger.error(f"Error fetching model versions: {e}")
return None
async def get_model_version(self, model_id: str, version_id: str = "") -> Optional[Dict]:
"""Get specific model version with additional metadata
Args:
model_id: The Civitai model ID
version_id: Optional specific version ID to retrieve
Returns:
Optional[Dict]: The model version data with additional fields or None if not found
"""
try:
session = await self._ensure_fresh_session()
async with session.get(f"{self.base_url}/models/{model_id}") as response:
if response.status != 200:
return None
data = await response.json()
model_versions = data.get('modelVersions', [])
# Find matching version
matched_version = None
if version_id:
# If version_id provided, find exact match
for version in model_versions:
if str(version.get('id')) == str(version_id):
matched_version = version
break
else:
# If no version_id was provided, fall back to the first version
matched_version = model_versions[0] if model_versions else None
# If no match found, return None
if not matched_version:
return None
# Build result with modified fields
result = matched_version.copy() # Copy to avoid modifying original
# Replace index with modelId
if 'index' in result:
del result['index']
result['modelId'] = model_id
# Add model field with metadata from top level
result['model'] = {
"name": data.get("name"),
"type": data.get("type"),
"nsfw": data.get("nsfw", False),
"poi": data.get("poi", False),
"description": data.get("description"),
"tags": data.get("tags", [])
}
# Add creator field from top level
result['creator'] = data.get("creator")
return result
except Exception as e:
logger.error(f"Error fetching model version: {e}")
return None
async def get_model_version_info(self, version_id: str) -> Tuple[Optional[Dict], Optional[str]]:
"""Fetch model version metadata from Civitai
@@ -222,12 +300,14 @@ class CivitaiClient:
- An error message if there was an error, or None on success
"""
try:
session = await self.session
session = await self._ensure_fresh_session()
url = f"{self.base_url}/model-versions/{version_id}"
headers = self._get_request_headers()
logger.debug(f"Resolving DNS for model version info: {url}")
async with session.get(url, headers=headers) as response:
if response.status == 200:
logger.debug(f"Successfully fetched model version info for: {version_id}")
return await response.json(), None
# Handle specific error cases
@@ -242,6 +322,7 @@ class CivitaiClient:
return None, "Model not found (status 404)"
# Other error cases
logger.error(f"Failed to fetch model info for {version_id} (status {response.status})")
return None, f"Failed to fetch model info (status {response.status})"
except Exception as e:
error_msg = f"Error fetching model version info: {e}"
@@ -249,7 +330,7 @@ class CivitaiClient:
return None, error_msg
async def get_model_metadata(self, model_id: str) -> Tuple[Optional[Dict], int]:
"""Fetch model metadata (description and tags) from Civitai API
"""Fetch model metadata (description, tags, and creator info) from Civitai API
Args:
model_id: The Civitai model ID
@@ -260,7 +341,7 @@ class CivitaiClient:
- The HTTP status code from the request
"""
try:
session = await self.session
session = await self._ensure_fresh_session()
headers = self._get_request_headers()
url = f"{self.base_url}/models/{model_id}"
@@ -276,10 +357,14 @@ class CivitaiClient:
# Extract relevant metadata
metadata = {
"description": data.get("description") or "No model description available",
"tags": data.get("tags", [])
"tags": data.get("tags", []),
"creator": {
"username": data.get("creator", {}).get("username"),
"image": data.get("creator", {}).get("image")
}
}
if metadata["description"] or metadata["tags"]:
if metadata["description"] or metadata["tags"] or metadata["creator"]["username"]:
return metadata, status_code
else:
logger.warning(f"No metadata found for model {model_id}")
@@ -304,10 +389,11 @@ class CivitaiClient:
async def _get_hash_from_civitai(self, model_version_id: str) -> Optional[str]:
"""Get hash from Civitai API"""
try:
if not self._session:
session = await self._ensure_fresh_session()
if not session:
return None
version_info = await self._session.get(f"{self.base_url}/model-versions/{model_version_id}")
response = await session.get(f"{self.base_url}/model-versions/{model_version_id}")
version_info = await response.json()  # json() is a coroutine and must be awaited
if not version_info or not version_info.get('files'):
return None
@@ -323,3 +409,34 @@ class CivitaiClient:
except Exception as e:
logger.error(f"Error getting hash from Civitai: {e}")
return None
async def get_image_info(self, image_id: str) -> Optional[Dict]:
"""Fetch image information from Civitai API
Args:
image_id: The Civitai image ID
Returns:
Optional[Dict]: The image data or None if not found
"""
try:
session = await self._ensure_fresh_session()
headers = self._get_request_headers()
url = f"{self.base_url}/images?imageId={image_id}&nsfw=X"
logger.debug(f"Fetching image info for ID: {image_id}")
async with session.get(url, headers=headers) as response:
if response.status == 200:
data = await response.json()
if data and "items" in data and len(data["items"]) > 0:
logger.debug(f"Successfully fetched image info for ID: {image_id}")
return data["items"][0]
logger.warning(f"No image found with ID: {image_id}")
return None
logger.error(f"Failed to fetch image info for ID: {image_id} (status {response.status})")
return None
except Exception as e:
error_msg = f"Error fetching image info: {e}"
logger.error(error_msg)
return None
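A hypothetical call site for the new method — how the client instance is obtained is an assumption; the keys read from the result (url, nsfwLevel) appear elsewhere in this diff:

async def show_image(client, image_id: str):
    info = await client.get_image_info(image_id)
    if info:
        print(info.get("url"), info.get("nsfwLevel"))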

View File

@@ -2,11 +2,11 @@ import logging
import os
import json
import asyncio
from typing import Optional, Dict, Any
from .civitai_client import CivitaiClient
from typing import Dict
from ..utils.models import LoraMetadata, CheckpointMetadata
from ..utils.constants import CARD_PREVIEW_WIDTH
from ..utils.exif_utils import ExifUtils
from ..utils.metadata_manager import MetadataManager
from .service_registry import ServiceRegistry
# Download to temporary file first
@@ -39,14 +39,6 @@ class DownloadManager:
if self._civitai_client is None:
self._civitai_client = await ServiceRegistry.get_civitai_client()
return self._civitai_client
async def _get_lora_monitor(self):
"""Get the lora file monitor from registry"""
return await ServiceRegistry.get_lora_monitor()
async def _get_checkpoint_monitor(self):
"""Get the checkpoint file monitor from registry"""
return await ServiceRegistry.get_checkpoint_monitor()
async def _get_lora_scanner(self):
"""Get the lora scanner from registry"""
@@ -88,16 +80,16 @@ class DownloadManager:
version_info = None
error_msg = None
if download_url:
# Extract version ID from download URL
version_id = download_url.split('/')[-1]
version_info, error_msg = await civitai_client.get_model_version_info(version_id)
if model_hash:
# Get model by hash
version_info = await civitai_client.get_model_by_hash(model_hash)
elif model_version_id:
# Use model version ID directly
version_info, error_msg = await civitai_client.get_model_version_info(model_version_id)
elif model_hash:
# Get model by hash
version_info = await civitai_client.get_model_by_hash(model_hash)
elif download_url:
# Extract version ID from download URL
version_id = download_url.split('/')[-1]
version_info, error_msg = await civitai_client.get_model_version_info(version_id)
if not version_info:
@@ -136,15 +128,6 @@ class DownloadManager:
# 3. Prepare download
file_name = file_info['name']
save_path = os.path.join(save_dir, file_name)
file_size = file_info.get('sizeKB', 0) * 1024
# 4. Notify file monitor - use normalized path and file size
file_monitor = await self._get_lora_monitor() if model_type == "lora" else await self._get_checkpoint_monitor()
if file_monitor and file_monitor.handler:
file_monitor.handler.add_ignore_path(
save_path.replace(os.sep, '/'),
file_size
)
# 5. Prepare metadata based on model type
if model_type == "checkpoint":
@@ -154,7 +137,7 @@ class DownloadManager:
metadata = LoraMetadata.from_civitai_info(version_info, file_info, save_path)
logger.info(f"Creating LoraMetadata for {file_name}")
# 5.1 Get and update model tags and description
# 5.1 Get and update model tags, description and creator info
model_id = version_info.get('modelId')
if model_id:
model_metadata, _ = await civitai_client.get_model_metadata(str(model_id))
@@ -163,6 +146,8 @@ class DownloadManager:
metadata.tags = model_metadata.get("tags", [])
if model_metadata.get("description"):
metadata.modelDescription = model_metadata.get("description", "")
if model_metadata.get("creator"):
metadata.civitai["creator"] = model_metadata.get("creator")
# 6. Start download process
result = await self._execute_download(
@@ -214,8 +199,6 @@ class DownloadManager:
if await civitai_client.download_preview_image(images[0]['url'], preview_path):
metadata.preview_url = preview_path.replace(os.sep, '/')
metadata.preview_nsfw_level = images[0].get('nsfwLevel', 0)
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata.to_dict(), f, indent=2, ensure_ascii=False)
else:
# For images, use WebP format for better performance
with tempfile.NamedTemporaryFile(suffix='.png', delete=False) as temp_file:
@@ -242,8 +225,6 @@ class DownloadManager:
# Update metadata
metadata.preview_url = preview_path.replace(os.sep, '/')
metadata.preview_nsfw_level = images[0].get('nsfwLevel', 0)
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata.to_dict(), f, indent=2, ensure_ascii=False)
# Remove temporary file
try:
@@ -274,8 +255,7 @@ class DownloadManager:
metadata.update_file_info(save_path)
# 5. Final metadata update
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata.to_dict(), f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(save_path, metadata, True)
# 6. Update cache based on model type
if model_type == "checkpoint":
@@ -285,17 +265,11 @@ class DownloadManager:
scanner = await self._get_lora_scanner()
logger.info(f"Updating lora cache for {save_path}")
cache = await scanner.get_cached_data()
# Convert metadata to dictionary
metadata_dict = metadata.to_dict()
metadata_dict['folder'] = relative_path
cache.raw_data.append(metadata_dict)
await cache.resort()
all_folders = set(cache.folders)
all_folders.add(relative_path)
cache.folders = sorted(list(all_folders), key=lambda x: x.lower())
# Update the hash index with the new model entry
scanner._hash_index.add_entry(metadata_dict['sha256'], metadata_dict['file_path'])
# Add model to cache and save to disk in a single operation
await scanner.add_model_to_cache(metadata_dict, relative_path)
# Report 100% completion
if progress_callback:
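add_model_to_cache itself is not shown in this diff. A sketch reconstructed from the inline block it replaces — the signature and placement are assumptions:

async def add_model_to_cache(self, metadata_dict, relative_path):
    # Consolidates what the removed lines did one by one.
    cache = await self.get_cached_data()
    metadata_dict['folder'] = relative_path
    cache.raw_data.append(metadata_dict)
    await cache.resort()
    cache.folders = sorted(set(cache.folders) | {relative_path},
                           key=lambda x: x.lower())
    self._hash_index.add_entry(metadata_dict['sha256'],
                               metadata_dict['file_path'])
    # Per the comment above, it also saves the cache to disk in the same
    # operation; that persistence step is not visible in this diff.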

View File

@@ -1,542 +0,0 @@
import os
import logging
import asyncio
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
from typing import List, Dict, Set, Optional
from threading import Lock
from ..config import config
from .service_registry import ServiceRegistry
logger = logging.getLogger(__name__)
# Configuration constant to control file monitoring functionality
ENABLE_FILE_MONITORING = False
class BaseFileHandler(FileSystemEventHandler):
"""Base handler for file system events"""
def __init__(self, loop: asyncio.AbstractEventLoop):
self.loop = loop # Store event loop reference
self.pending_changes = set() # Pending changes
self.lock = Lock() # Thread-safe lock
self.update_task = None # Async update task
self._ignore_paths = set() # Paths to ignore
self._min_ignore_timeout = 5 # Minimum timeout in seconds
self._download_speed = 1024 * 1024 # Assume 1MB/s as base speed
# Track modified files with timestamps for debouncing
self.modified_files: Dict[str, float] = {}
self.debounce_timer = None
self.debounce_delay = 3.0 # Seconds to wait after last modification
# Track files already scheduled for processing
self.scheduled_files: Set[str] = set()
# File extensions to monitor - should be overridden by subclasses
self.file_extensions = set()
def _should_ignore(self, path: str) -> bool:
"""Check if path should be ignored"""
real_path = os.path.realpath(path) # Resolve any symbolic links
return real_path.replace(os.sep, '/') in self._ignore_paths
def add_ignore_path(self, path: str, file_size: int = 0):
"""Add path to ignore list with dynamic timeout based on file size"""
real_path = os.path.realpath(path) # Resolve any symbolic links
self._ignore_paths.add(real_path.replace(os.sep, '/'))
# Short timeout (e.g. 5 seconds) is sufficient to ignore the CREATE event
timeout = 5
self.loop.call_later(
timeout,
self._ignore_paths.discard,
real_path.replace(os.sep, '/')
)
def on_created(self, event):
if event.is_directory:
return
# Handle appropriate files based on extensions
file_ext = os.path.splitext(event.src_path)[1].lower()
if file_ext in self.file_extensions:
if self._should_ignore(event.src_path):
return
# Process this file directly and ignore subsequent modifications
normalized_path = os.path.realpath(event.src_path).replace(os.sep, '/')
if normalized_path not in self.scheduled_files:
logger.info(f"File created: {event.src_path}")
self.scheduled_files.add(normalized_path)
self._schedule_update('add', event.src_path)
# Ignore modifications for a short period after creation
self.loop.call_later(
self.debounce_delay * 2,
self.scheduled_files.discard,
normalized_path
)
def on_modified(self, event):
if event.is_directory:
return
# Only process files with supported extensions
file_ext = os.path.splitext(event.src_path)[1].lower()
if file_ext in self.file_extensions:
if self._should_ignore(event.src_path):
return
normalized_path = os.path.realpath(event.src_path).replace(os.sep, '/')
# Skip if this file is already scheduled for processing
if normalized_path in self.scheduled_files:
return
# Update the timestamp for this file
self.modified_files[normalized_path] = time.time()
# Cancel any existing timer
if self.debounce_timer:
self.debounce_timer.cancel()
# Set a new timer to process modified files after debounce period
self.debounce_timer = self.loop.call_later(
self.debounce_delay,
self.loop.call_soon_threadsafe,
self._process_modified_files
)
def _process_modified_files(self):
"""Process files that have been modified after debounce period"""
current_time = time.time()
files_to_process = []
# Find files that haven't been modified for debounce_delay seconds
for file_path, last_modified in list(self.modified_files.items()):
if current_time - last_modified >= self.debounce_delay:
# Only process if not already scheduled
if file_path not in self.scheduled_files:
files_to_process.append(file_path)
self.scheduled_files.add(file_path)
# Auto-remove from scheduled list after reasonable time
self.loop.call_later(
self.debounce_delay * 2,
self.scheduled_files.discard,
file_path
)
del self.modified_files[file_path]
# Process stable files
for file_path in files_to_process:
logger.info(f"Processing modified file: {file_path}")
self._schedule_update('add', file_path)
def on_deleted(self, event):
if event.is_directory:
return
file_ext = os.path.splitext(event.src_path)[1].lower()
if file_ext not in self.file_extensions:
return
if self._should_ignore(event.src_path):
return
# Remove from scheduled files if present
normalized_path = os.path.realpath(event.src_path).replace(os.sep, '/')
self.scheduled_files.discard(normalized_path)
logger.info(f"File deleted: {event.src_path}")
self._schedule_update('remove', event.src_path)
def on_moved(self, event):
"""Handle file move/rename events"""
src_ext = os.path.splitext(event.src_path)[1].lower()
dest_ext = os.path.splitext(event.dest_path)[1].lower()
# If destination has supported extension, treat as new file
if dest_ext in self.file_extensions:
if self._should_ignore(event.dest_path):
return
normalized_path = os.path.realpath(event.dest_path).replace(os.sep, '/')
# Only process if not already scheduled
if normalized_path not in self.scheduled_files:
logger.info(f"File renamed/moved to: {event.dest_path}")
self.scheduled_files.add(normalized_path)
self._schedule_update('add', event.dest_path)
# Auto-remove from scheduled list after reasonable time
self.loop.call_later(
self.debounce_delay * 2,
self.scheduled_files.discard,
normalized_path
)
# If source was a supported file, treat it as deleted
if src_ext in self.file_extensions:
if self._should_ignore(event.src_path):
return
normalized_path = os.path.realpath(event.src_path).replace(os.sep, '/')
self.scheduled_files.discard(normalized_path)
logger.info(f"File moved/renamed from: {event.src_path}")
self._schedule_update('remove', event.src_path)
def _schedule_update(self, action: str, file_path: str):
"""Schedule a cache update"""
with self.lock:
# Use config method to map path
mapped_path = config.map_path_to_link(file_path)
normalized_path = mapped_path.replace(os.sep, '/')
self.pending_changes.add((action, normalized_path))
self.loop.call_soon_threadsafe(self._create_update_task)
def _create_update_task(self):
"""Create update task in the event loop"""
if self.update_task is None or self.update_task.done():
self.update_task = asyncio.create_task(self._process_changes())
async def _process_changes(self, delay: float = 2.0):
"""Process pending changes with debouncing - should be implemented by subclasses"""
raise NotImplementedError("Subclasses must implement _process_changes")
class LoraFileHandler(BaseFileHandler):
"""Handler for LoRA file system events"""
def __init__(self, loop: asyncio.AbstractEventLoop):
super().__init__(loop)
# Set supported file extensions for LoRAs
self.file_extensions = {'.safetensors'}
async def _process_changes(self, delay: float = 2.0):
"""Process pending changes with debouncing"""
await asyncio.sleep(delay)
try:
with self.lock:
changes = self.pending_changes.copy()
self.pending_changes.clear()
if not changes:
return
logger.info(f"Processing {len(changes)} LoRA file changes")
# Get scanner through ServiceRegistry
scanner = await ServiceRegistry.get_lora_scanner()
cache = await scanner.get_cached_data()
needs_resort = False
new_folders = set()
for action, file_path in changes:
try:
if action == 'add':
# Check if file already exists in cache
existing = next((item for item in cache.raw_data if item['file_path'] == file_path), None)
if existing:
logger.info(f"File {file_path} already in cache, skipping")
continue
# Scan new file
model_data = await scanner.scan_single_model(file_path)
if model_data:
# Update tags count
for tag in model_data.get('tags', []):
scanner._tags_count[tag] = scanner._tags_count.get(tag, 0) + 1
cache.raw_data.append(model_data)
new_folders.add(model_data['folder'])
# Update hash index
if 'sha256' in model_data:
scanner._hash_index.add_entry(
model_data['sha256'],
model_data['file_path']
)
needs_resort = True
elif action == 'remove':
# Find the model to remove so we can update tags count
model_to_remove = next((item for item in cache.raw_data if item['file_path'] == file_path), None)
if model_to_remove:
# Update tags count by reducing counts
for tag in model_to_remove.get('tags', []):
if tag in scanner._tags_count:
scanner._tags_count[tag] = max(0, scanner._tags_count[tag] - 1)
if scanner._tags_count[tag] == 0:
del scanner._tags_count[tag]
# Remove from cache and hash index
logger.info(f"Removing {file_path} from cache")
scanner._hash_index.remove_by_path(file_path)
cache.raw_data = [
item for item in cache.raw_data
if item['file_path'] != file_path
]
needs_resort = True
except Exception as e:
logger.error(f"Error processing {action} for {file_path}: {e}")
if needs_resort:
await cache.resort()
# Update folder list
all_folders = set(cache.folders) | new_folders
cache.folders = sorted(list(all_folders), key=lambda x: x.lower())
except Exception as e:
logger.error(f"Error in process_changes for LoRA: {e}")
class CheckpointFileHandler(BaseFileHandler):
"""Handler for checkpoint file system events"""
def __init__(self, loop: asyncio.AbstractEventLoop):
super().__init__(loop)
# Set supported file extensions for checkpoints
self.file_extensions = {'.safetensors', '.ckpt', '.pt', '.pth', '.sft', '.gguf'}
async def _process_changes(self, delay: float = 2.0):
"""Process pending changes with debouncing for checkpoint files"""
await asyncio.sleep(delay)
try:
with self.lock:
changes = self.pending_changes.copy()
self.pending_changes.clear()
if not changes:
return
logger.info(f"Processing {len(changes)} checkpoint file changes")
# Get scanner through ServiceRegistry
scanner = await ServiceRegistry.get_checkpoint_scanner()
cache = await scanner.get_cached_data()
needs_resort = False
new_folders = set()
for action, file_path in changes:
try:
if action == 'add':
# Check if file already exists in cache
existing = next((item for item in cache.raw_data if item['file_path'] == file_path), None)
if existing:
logger.info(f"File {file_path} already in cache, skipping")
continue
# Scan new file
model_data = await scanner.scan_single_model(file_path)
if model_data:
# Update tags count if applicable
for tag in model_data.get('tags', []):
scanner._tags_count[tag] = scanner._tags_count.get(tag, 0) + 1
cache.raw_data.append(model_data)
new_folders.add(model_data['folder'])
# Update hash index
if 'sha256' in model_data:
scanner._hash_index.add_entry(
model_data['sha256'],
model_data['file_path']
)
needs_resort = True
elif action == 'remove':
# Find the model to remove so we can update tags count
model_to_remove = next((item for item in cache.raw_data if item['file_path'] == file_path), None)
if model_to_remove:
# Update tags count by reducing counts
for tag in model_to_remove.get('tags', []):
if tag in scanner._tags_count:
scanner._tags_count[tag] = max(0, scanner._tags_count[tag] - 1)
if scanner._tags_count[tag] == 0:
del scanner._tags_count[tag]
# Remove from cache and hash index
logger.info(f"Removing {file_path} from checkpoint cache")
scanner._hash_index.remove_by_path(file_path)
cache.raw_data = [
item for item in cache.raw_data
if item['file_path'] != file_path
]
needs_resort = True
except Exception as e:
logger.error(f"Error processing checkpoint {action} for {file_path}: {e}")
if needs_resort:
await cache.resort()
# Update folder list
all_folders = set(cache.folders) | new_folders
cache.folders = sorted(list(all_folders), key=lambda x: x.lower())
except Exception as e:
logger.error(f"Error in process_changes for checkpoint: {e}")
class BaseFileMonitor:
"""Base class for file monitoring"""
def __init__(self, monitor_paths: List[str]):
self.observer = Observer()
self.loop = asyncio.get_event_loop()
self.monitor_paths = set()
# Process monitor paths
for path in monitor_paths:
self.monitor_paths.add(os.path.realpath(path).replace(os.sep, '/'))
# Add mapped paths from config
for target_path in config._path_mappings.keys():
self.monitor_paths.add(target_path)
def start(self):
"""Start file monitoring"""
if not ENABLE_FILE_MONITORING:
logger.debug("File monitoring is disabled via ENABLE_FILE_MONITORING setting")
return
for path in self.monitor_paths:
try:
self.observer.schedule(self.handler, path, recursive=True)
logger.info(f"Started monitoring: {path}")
except Exception as e:
logger.error(f"Error monitoring {path}: {e}")
self.observer.start()
def stop(self):
"""Stop file monitoring"""
if not ENABLE_FILE_MONITORING:
return
self.observer.stop()
self.observer.join()
def rescan_links(self):
"""Rescan links when new ones are added"""
if not ENABLE_FILE_MONITORING:
return
# Find new paths not yet being monitored
new_paths = set()
for path in config._path_mappings.keys():
real_path = os.path.realpath(path).replace(os.sep, '/')
if real_path not in self.monitor_paths:
new_paths.add(real_path)
self.monitor_paths.add(real_path)
# Add new paths to monitoring
for path in new_paths:
try:
self.observer.schedule(self.handler, path, recursive=True)
logger.info(f"Added new monitoring path: {path}")
except Exception as e:
logger.error(f"Error adding new monitor for {path}: {e}")
class LoraFileMonitor(BaseFileMonitor):
"""Monitor for LoRA file changes"""
_instance = None
_lock = asyncio.Lock()
def __new__(cls, monitor_paths=None):
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
def __init__(self, monitor_paths=None):
if not hasattr(self, '_initialized'):
if monitor_paths is None:
from ..config import config
monitor_paths = config.loras_roots
super().__init__(monitor_paths)
self.handler = LoraFileHandler(self.loop)
self._initialized = True
@classmethod
async def get_instance(cls):
"""Get singleton instance with async support"""
async with cls._lock:
if cls._instance is None:
from ..config import config
cls._instance = cls(config.loras_roots)
return cls._instance
class CheckpointFileMonitor(BaseFileMonitor):
"""Monitor for checkpoint file changes"""
_instance = None
_lock = asyncio.Lock()
def __new__(cls, monitor_paths=None):
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
def __init__(self, monitor_paths=None):
if not hasattr(self, '_initialized'):
if monitor_paths is None:
# Get checkpoint roots from scanner
monitor_paths = []
# We'll initialize monitor paths later when scanner is available
super().__init__(monitor_paths or [])
self.handler = CheckpointFileHandler(self.loop)
self._initialized = True
@classmethod
async def get_instance(cls):
"""Get singleton instance with async support"""
async with cls._lock:
if cls._instance is None:
cls._instance = cls([])
# Now get checkpoint roots from scanner
from .checkpoint_scanner import CheckpointScanner
scanner = await CheckpointScanner.get_instance()
monitor_paths = scanner.get_model_roots()
# Update monitor paths - but don't actually monitor them
for path in monitor_paths:
real_path = os.path.realpath(path).replace(os.sep, '/')
cls._instance.monitor_paths.add(real_path)
return cls._instance
def start(self):
"""Override start to check global enable flag"""
if not ENABLE_FILE_MONITORING:
logger.debug("Checkpoint file monitoring is disabled via ENABLE_FILE_MONITORING setting")
return
logger.debug("Checkpoint file monitoring is temporarily disabled")
# Skip the actual monitoring setup
pass
async def initialize_paths(self):
"""Initialize monitor paths from scanner - currently disabled"""
if not ENABLE_FILE_MONITORING:
logger.debug("Checkpoint path initialization skipped (monitoring disabled)")
return
logger.debug("Checkpoint file path initialization skipped (monitoring disabled)")
pass

View File

@@ -2,6 +2,7 @@ import asyncio
from typing import List, Dict
from dataclasses import dataclass
from operator import itemgetter
from natsort import natsorted
@dataclass
class LoraCache:
@@ -17,7 +18,7 @@ class LoraCache:
async def resort(self, name_only: bool = False):
"""Resort all cached data views"""
async with self._lock:
self.sorted_by_name = sorted(
self.sorted_by_name = natsorted(
self.raw_data,
key=lambda x: x['model_name'].lower() # Case-insensitive sort
)
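The move from sorted to natsorted changes how numbered model names order; natural sort compares digit runs as numbers:

from natsort import natsorted

names = ["Model1", "model10", "model2"]
print(sorted(names, key=str.lower))     # ['Model1', 'model10', 'model2']
print(natsorted(names, key=str.lower))  # ['Model1', 'model2', 'model10']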

View File

@@ -1,54 +0,0 @@
from typing import Dict, Optional
import logging
from dataclasses import dataclass
logger = logging.getLogger(__name__)
@dataclass
class LoraHashIndex:
"""Index for mapping LoRA file hashes to their file paths"""
def __init__(self):
self._hash_to_path: Dict[str, str] = {}
def add_entry(self, sha256: str, file_path: str) -> None:
"""Add or update a hash -> path mapping"""
if not sha256 or not file_path:
return
# Always store lowercase hashes for consistency
self._hash_to_path[sha256.lower()] = file_path
def remove_entry(self, sha256: str) -> None:
"""Remove a hash entry"""
if sha256:
self._hash_to_path.pop(sha256.lower(), None)
def remove_by_path(self, file_path: str) -> None:
"""Remove entry by file path"""
for sha256, path in list(self._hash_to_path.items()):
if path == file_path:
del self._hash_to_path[sha256]
break
def get_path(self, sha256: str) -> Optional[str]:
"""Get file path for a given hash"""
if not sha256:
return None
return self._hash_to_path.get(sha256.lower())
def get_hash(self, file_path: str) -> Optional[str]:
"""Get hash for a given file path"""
for sha256, path in self._hash_to_path.items():
if path == file_path:
return sha256
return None
def has_hash(self, sha256: str) -> bool:
"""Check if hash exists in index"""
if not sha256:
return False
return sha256.lower() in self._hash_to_path
def clear(self) -> None:
"""Clear all entries"""
self._hash_to_path.clear()

View File

@@ -4,12 +4,13 @@ import logging
import asyncio
import shutil
import time
import re
from typing import List, Dict, Optional, Set
from ..utils.models import LoraMetadata
from ..config import config
from .model_scanner import ModelScanner
from .lora_hash_index import LoraHashIndex
from .model_hash_index import ModelHashIndex # Changed from LoraHashIndex to ModelHashIndex
from .settings_manager import settings
from ..utils.constants import NSFW_LEVELS
from ..utils.utils import fuzzy_match
@@ -35,12 +36,12 @@ class LoraScanner(ModelScanner):
# Define supported file extensions
file_extensions = {'.safetensors'}
# Initialize parent class
# Initialize parent class with ModelHashIndex
super().__init__(
model_type="lora",
model_class=LoraMetadata,
file_extensions=file_extensions,
hash_index=LoraHashIndex()
hash_index=ModelHashIndex() # Changed from LoraHashIndex to ModelHashIndex
)
self._initialized = True
@@ -122,7 +123,8 @@ class LoraScanner(ModelScanner):
async def get_paginated_data(self, page: int, page_size: int, sort_by: str = 'name',
folder: str = None, search: str = None, fuzzy_search: bool = False,
base_models: list = None, tags: list = None,
search_options: dict = None, hash_filters: dict = None) -> Dict:
search_options: dict = None, hash_filters: dict = None,
favorites_only: bool = False, first_letter: str = None) -> Dict:
"""Get paginated and filtered lora data
Args:
@@ -136,6 +138,8 @@ class LoraScanner(ModelScanner):
tags: List of tags to filter by
search_options: Dictionary with search options (filename, modelname, tags, recursive)
hash_filters: Dictionary with hash filtering options (single_hash or multiple_hashes)
favorites_only: Filter for favorite models only
first_letter: Filter by first letter of model name
"""
cache = await self.get_cached_data()
@@ -194,6 +198,17 @@ class LoraScanner(ModelScanner):
if not lora.get('preview_nsfw_level') or lora.get('preview_nsfw_level') < NSFW_LEVELS['R']
]
# Apply favorites filtering if enabled
if favorites_only:
filtered_data = [
lora for lora in filtered_data
if lora.get('favorite', False) is True
]
# Apply first letter filtering
if first_letter:
filtered_data = self._filter_by_first_letter(filtered_data, first_letter)
# Apply folder filtering
if folder is not None:
if search_options.get('recursive', False):
@@ -264,31 +279,100 @@ class LoraScanner(ModelScanner):
return result
async def _update_metadata_paths(self, metadata_path: str, lora_path: str) -> Dict:
"""Update file paths in metadata file"""
try:
with open(metadata_path, 'r', encoding='utf-8') as f:
metadata = json.load(f)
# Update file_path
metadata['file_path'] = lora_path.replace(os.sep, '/')
# Update preview_url if exists
if 'preview_url' in metadata:
preview_dir = os.path.dirname(lora_path)
preview_name = os.path.splitext(os.path.basename(metadata['preview_url']))[0]
preview_ext = os.path.splitext(metadata['preview_url'])[1]
new_preview_path = os.path.join(preview_dir, f"{preview_name}{preview_ext}")
metadata['preview_url'] = new_preview_path.replace(os.sep, '/')
# Save updated metadata
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata, f, indent=2, ensure_ascii=False)
return metadata
except Exception as e:
logger.error(f"Error updating metadata paths: {e}", exc_info=True)
def _filter_by_first_letter(self, data, letter):
"""Filter data by first letter of model name
Special handling:
- '#': Numbers (0-9)
- '@': Special characters (not alphanumeric)
- '漢': CJK characters
"""
filtered_data = []
for lora in data:
model_name = lora.get('model_name', '')
if not model_name:
continue
first_char = model_name[0].upper()
if letter == '#' and first_char.isdigit():
filtered_data.append(lora)
elif letter == '@' and not first_char.isalnum():
# Special characters (not alphanumeric)
filtered_data.append(lora)
elif letter == '漢' and self._is_cjk_character(first_char):
# CJK characters
filtered_data.append(lora)
elif letter.upper() == first_char:
# Regular alphabet matching
filtered_data.append(lora)
return filtered_data
def _is_cjk_character(self, char):
"""Check if character is a CJK character"""
# Define Unicode ranges for CJK characters
cjk_ranges = [
(0x4E00, 0x9FFF), # CJK Unified Ideographs
(0x3400, 0x4DBF), # CJK Unified Ideographs Extension A
(0x20000, 0x2A6DF), # CJK Unified Ideographs Extension B
(0x2A700, 0x2B73F), # CJK Unified Ideographs Extension C
(0x2B740, 0x2B81F), # CJK Unified Ideographs Extension D
(0x2B820, 0x2CEAF), # CJK Unified Ideographs Extension E
(0x2CEB0, 0x2EBEF), # CJK Unified Ideographs Extension F
(0x30000, 0x3134F), # CJK Unified Ideographs Extension G
(0xF900, 0xFAFF), # CJK Compatibility Ideographs
(0x3300, 0x33FF), # CJK Compatibility
(0x3200, 0x32FF), # Enclosed CJK Letters and Months
(0x3100, 0x312F), # Bopomofo
(0x31A0, 0x31BF), # Bopomofo Extended
(0x3040, 0x309F), # Hiragana
(0x30A0, 0x30FF), # Katakana
(0x31F0, 0x31FF), # Katakana Phonetic Extensions
(0xAC00, 0xD7AF), # Hangul Syllables
(0x1100, 0x11FF), # Hangul Jamo
(0xA960, 0xA97F), # Hangul Jamo Extended-A
(0xD7B0, 0xD7FF), # Hangul Jamo Extended-B
]
code_point = ord(char)
return any(start <= code_point <= end for start, end in cjk_ranges)
async def get_letter_counts(self):
"""Get count of models for each letter of the alphabet"""
cache = await self.get_cached_data()
data = cache.sorted_by_name
# Define letter categories
letters = {
'#': 0, # Numbers
'A': 0, 'B': 0, 'C': 0, 'D': 0, 'E': 0, 'F': 0, 'G': 0, 'H': 0,
'I': 0, 'J': 0, 'K': 0, 'L': 0, 'M': 0, 'N': 0, 'O': 0, 'P': 0,
'Q': 0, 'R': 0, 'S': 0, 'T': 0, 'U': 0, 'V': 0, 'W': 0, 'X': 0,
'Y': 0, 'Z': 0,
'@': 0, # Special characters
'漢': 0 # CJK characters
}
# Count models for each letter
for lora in data:
model_name = lora.get('model_name', '')
if not model_name:
continue
first_char = model_name[0].upper()
if first_char.isdigit():
letters['#'] += 1
elif first_char in letters:
letters[first_char] += 1
elif self._is_cjk_character(first_char):
letters['漢'] += 1
elif not first_char.isalnum():
letters['@'] += 1
return letters
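The bucketing rules above reduce to a small pure function. An illustrative standalone version (only a subset of the CJK ranges is checked here, and '漢' is used as the CJK bucket label, matching the methods above):

    def first_letter_bucket(name: str) -> str:
        first = name[0].upper()
        if first.isdigit():
            return '#'
        if 'A' <= first <= 'Z':
            return first
        cjk_ranges = [(0x4E00, 0x9FFF), (0x3040, 0x30FF), (0xAC00, 0xD7AF)]
        if any(start <= ord(first) <= end for start, end in cjk_ranges):
            return '漢'
        return '@'  # everything else counts as a special character

    print(first_letter_bucket('9mm style'))  # '#'
    print(first_letter_bucket('aesthetic'))  # 'A'
    print(first_letter_bucket('水墨'))        # '漢'
    print(first_letter_bucket('_scratch'))   # '@'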
# Lora-specific hash index functionality
def has_lora_hash(self, sha256: str) -> bool:

View File

@@ -2,6 +2,7 @@ import asyncio
from typing import List, Dict
from dataclasses import dataclass
from operator import itemgetter
from natsort import natsorted
@dataclass
class ModelCache:
@@ -17,7 +18,7 @@ class ModelCache:
async def resort(self, name_only: bool = False):
"""Resort all cached data views"""
async with self._lock:
self.sorted_by_name = sorted(
self.sorted_by_name = natsorted(
self.raw_data,
key=lambda x: x['model_name'].lower() # Case-insensitive sort
)
@@ -31,12 +32,13 @@ class ModelCache:
all_folders = set(l['folder'] for l in self.raw_data)
self.folders = sorted(list(all_folders), key=lambda x: x.lower())
async def update_preview_url(self, file_path: str, preview_url: str) -> bool:
async def update_preview_url(self, file_path: str, preview_url: str, preview_nsfw_level: int) -> bool:
"""Update preview_url for a specific model in all cached data
Args:
file_path: The file path of the model to update
preview_url: The new preview URL
preview_nsfw_level: The NSFW level of the preview
Returns:
bool: True if the update was successful, False if the model wasn't found
@@ -46,19 +48,9 @@ class ModelCache:
for item in self.raw_data:
if item['file_path'] == file_path:
item['preview_url'] = preview_url
item['preview_nsfw_level'] = preview_nsfw_level
break
else:
return False # Model not found
# Update in sorted lists (references to the same dict objects)
for item in self.sorted_by_name:
if item['file_path'] == file_path:
item['preview_url'] = preview_url
break
for item in self.sorted_by_date:
if item['file_path'] == file_path:
item['preview_url'] = preview_url
break
return True
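The deleted loops over sorted_by_name and sorted_by_date were redundant: sorted() copies the list, not the dicts inside it, so the sorted views hold references to the same objects as raw_data. A two-line demonstration (names here are illustrative):

    item = {'file_path': '/loras/a.safetensors', 'preview_url': 'old.webp'}
    raw_data = [item]
    sorted_by_name = sorted(raw_data, key=lambda x: x['file_path'])

    raw_data[0]['preview_url'] = 'new.webp'
    print(sorted_by_name[0]['preview_url'])  # prints 'new.webp' -- same object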

View File

@@ -1,11 +1,15 @@
from typing import Dict, Optional, Set
from typing import Dict, Optional, Set, List
import os
class ModelHashIndex:
"""Index for looking up models by hash or path"""
"""Index for looking up models by hash or filename"""
def __init__(self):
self._hash_to_path: Dict[str, str] = {}
self._path_to_hash: Dict[str, str] = {}
self._filename_to_hash: Dict[str, str] = {}
# New data structures for tracking duplicates
self._duplicate_hashes: Dict[str, List[str]] = {} # sha256 -> list of paths
self._duplicate_filenames: Dict[str, List[str]] = {} # filename -> list of paths
def add_entry(self, sha256: str, file_path: str) -> None:
"""Add or update hash index entry"""
@@ -15,38 +19,170 @@ class ModelHashIndex:
# Ensure hash is lowercase for consistency
sha256 = sha256.lower()
# Extract filename without extension
filename = self._get_filename_from_path(file_path)
# Track duplicates by hash
if sha256 in self._hash_to_path:
old_path = self._hash_to_path[sha256]
if old_path != file_path: # Only record if it's actually a different path
if sha256 not in self._duplicate_hashes:
self._duplicate_hashes[sha256] = [old_path]
if file_path not in self._duplicate_hashes.get(sha256, []):
self._duplicate_hashes.setdefault(sha256, []).append(file_path)
# Track duplicates by filename
if filename in self._filename_to_hash:
old_hash = self._filename_to_hash[filename]
if old_hash != sha256: # Different models with the same name
old_path = self._hash_to_path.get(old_hash)
if old_path:
if filename not in self._duplicate_filenames:
self._duplicate_filenames[filename] = [old_path]
if file_path not in self._duplicate_filenames.get(filename, []):
self._duplicate_filenames.setdefault(filename, []).append(file_path)
# Remove old path mapping if hash exists
if sha256 in self._hash_to_path:
old_path = self._hash_to_path[sha256]
if old_path in self._path_to_hash:
del self._path_to_hash[old_path]
old_filename = self._get_filename_from_path(old_path)
if old_filename in self._filename_to_hash:
del self._filename_to_hash[old_filename]
# Remove old hash mapping if path exists
if file_path in self._path_to_hash:
old_hash = self._path_to_hash[file_path]
# Remove old hash mapping if filename exists
if filename in self._filename_to_hash:
old_hash = self._filename_to_hash[filename]
if old_hash in self._hash_to_path:
del self._hash_to_path[old_hash]
# Add new mappings
self._hash_to_path[sha256] = file_path
self._path_to_hash[file_path] = sha256
self._filename_to_hash[filename] = sha256
def remove_by_path(self, file_path: str) -> None:
def _get_filename_from_path(self, file_path: str) -> str:
"""Extract filename without extension from path"""
return os.path.splitext(os.path.basename(file_path))[0]
def remove_by_path(self, file_path: str, hash_val: str = None) -> None:
"""Remove entry by file path"""
if file_path in self._path_to_hash:
hash_val = self._path_to_hash[file_path]
if hash_val in self._hash_to_path:
filename = self._get_filename_from_path(file_path)
# Find the hash for this file path
if hash_val is None:
for h, p in self._hash_to_path.items():
if p == file_path:
hash_val = h
break
# If we didn't find a hash, nothing to do
if not hash_val:
return
# Update duplicates tracking for hash
if hash_val in self._duplicate_hashes:
# Remove the current path from duplicates
self._duplicate_hashes[hash_val] = [p for p in self._duplicate_hashes[hash_val] if p != file_path]
# Update or remove hash mapping based on remaining duplicates
if len(self._duplicate_hashes[hash_val]) > 0:
# Replace with one of the remaining paths
new_path = self._duplicate_hashes[hash_val][0]
new_filename = self._get_filename_from_path(new_path)
# Update hash-to-path mapping
self._hash_to_path[hash_val] = new_path
# IMPORTANT: Update filename-to-hash mapping for consistency
# Remove old filename mapping if it points to this hash
if filename in self._filename_to_hash and self._filename_to_hash[filename] == hash_val:
del self._filename_to_hash[filename]
# Add new filename mapping
self._filename_to_hash[new_filename] = hash_val
# If only one duplicate left, remove from duplicates tracking
if len(self._duplicate_hashes[hash_val]) == 1:
del self._duplicate_hashes[hash_val]
else:
# No duplicates left, remove hash entry completely
del self._duplicate_hashes[hash_val]
del self._hash_to_path[hash_val]
del self._path_to_hash[file_path]
# Remove corresponding filename entry if it points to this hash
if filename in self._filename_to_hash and self._filename_to_hash[filename] == hash_val:
del self._filename_to_hash[filename]
else:
# No duplicates, simply remove the hash entry
del self._hash_to_path[hash_val]
# Remove corresponding filename entry if it points to this hash
if filename in self._filename_to_hash and self._filename_to_hash[filename] == hash_val:
del self._filename_to_hash[filename]
# Update duplicates tracking for filename
if filename in self._duplicate_filenames:
# Remove the current path from duplicates
self._duplicate_filenames[filename] = [p for p in self._duplicate_filenames[filename] if p != file_path]
# Update or remove filename mapping based on remaining duplicates
if len(self._duplicate_filenames[filename]) > 0:
# Get the hash for the first remaining duplicate path
first_dup_path = self._duplicate_filenames[filename][0]
first_dup_hash = None
for h, p in self._hash_to_path.items():
if p == first_dup_path:
first_dup_hash = h
break
# Update the filename to hash mapping if we found a hash
if first_dup_hash:
self._filename_to_hash[filename] = first_dup_hash
# If only one duplicate left, remove from duplicates tracking
if len(self._duplicate_filenames[filename]) == 1:
del self._duplicate_filenames[filename]
else:
# No duplicates left, remove filename entry completely
del self._duplicate_filenames[filename]
if filename in self._filename_to_hash:
del self._filename_to_hash[filename]
def remove_by_hash(self, sha256: str) -> None:
"""Remove entry by hash"""
sha256 = sha256.lower()
if sha256 in self._hash_to_path:
path = self._hash_to_path[sha256]
if path in self._path_to_hash:
del self._path_to_hash[path]
del self._hash_to_path[sha256]
if sha256 not in self._hash_to_path:
return
# Get the path and filename
path = self._hash_to_path[sha256]
filename = self._get_filename_from_path(path)
# Get all paths for this hash (including duplicates)
paths_to_remove = [path]
if sha256 in self._duplicate_hashes:
paths_to_remove.extend(self._duplicate_hashes[sha256])
del self._duplicate_hashes[sha256]
# Remove hash-to-path mapping
del self._hash_to_path[sha256]
# Update filename-to-hash and duplicate filenames for all paths
for path_to_remove in paths_to_remove:
fname = self._get_filename_from_path(path_to_remove)
# If this filename maps to the hash we're removing, remove it
if fname in self._filename_to_hash and self._filename_to_hash[fname] == sha256:
del self._filename_to_hash[fname]
# Update duplicate filenames tracking
if fname in self._duplicate_filenames:
self._duplicate_filenames[fname] = [p for p in self._duplicate_filenames[fname] if p != path_to_remove]
if not self._duplicate_filenames[fname]:
del self._duplicate_filenames[fname]
elif len(self._duplicate_filenames[fname]) == 1:
# If only one entry remains, it's no longer a duplicate
del self._duplicate_filenames[fname]
def has_hash(self, sha256: str) -> bool:
"""Check if hash exists in index"""
@@ -58,20 +194,37 @@ class ModelHashIndex:
def get_hash(self, file_path: str) -> Optional[str]:
"""Get hash for a file path"""
return self._path_to_hash.get(file_path)
filename = self._get_filename_from_path(file_path)
return self._filename_to_hash.get(filename)
def get_hash_by_filename(self, filename: str) -> Optional[str]:
"""Get hash for a filename without extension"""
# Strip extension if present to make the function more flexible
filename = os.path.splitext(filename)[0]
return self._filename_to_hash.get(filename)
def clear(self) -> None:
"""Clear all entries"""
self._hash_to_path.clear()
self._path_to_hash.clear()
self._filename_to_hash.clear()
self._duplicate_hashes.clear()
self._duplicate_filenames.clear()
def get_all_hashes(self) -> Set[str]:
"""Get all hashes in the index"""
return set(self._hash_to_path.keys())
def get_all_paths(self) -> Set[str]:
"""Get all file paths in the index"""
return set(self._path_to_hash.keys())
def get_all_filenames(self) -> Set[str]:
"""Get all filenames in the index"""
return set(self._filename_to_hash.keys())
def get_duplicate_hashes(self) -> Dict[str, List[str]]:
"""Get dictionary of duplicate hashes and their paths"""
return self._duplicate_hashes
def get_duplicate_filenames(self) -> Dict[str, List[str]]:
"""Get dictionary of duplicate filenames and their paths"""
return self._duplicate_filenames
def __len__(self) -> int:
"""Get number of entries"""

View File

@@ -5,10 +5,12 @@ import asyncio
import time
import shutil
from typing import List, Dict, Optional, Type, Set
import msgpack # Add MessagePack import for efficient serialization
from ..utils.models import BaseModelMetadata
from ..config import config
from ..utils.file_utils import load_metadata, get_file_info, find_preview_file, save_metadata
from ..utils.file_utils import find_preview_file
from ..utils.metadata_manager import MetadataManager
from .model_cache import ModelCache
from .model_hash_index import ModelHashIndex
from ..utils.constants import PREVIEW_EXTENSIONS
@@ -17,6 +19,13 @@ from .websocket_manager import ws_manager
logger = logging.getLogger(__name__)
# Define cache version to handle future format changes
# Version history:
# 1 - Initial version
# 2 - Added duplicate_filenames and duplicate_hashes tracking
# 3 - Added _excluded_models list to cache
CACHE_VERSION = 3
class ModelScanner:
"""Base service for scanning and managing model files"""
@@ -38,15 +47,204 @@ class ModelScanner:
self._hash_index = hash_index or ModelHashIndex()
self._tags_count = {} # Dictionary to store tag counts
self._is_initializing = False # Flag to track initialization state
self._excluded_models = [] # List to track excluded models
self._dirs_last_modified = {} # Track directory modification times
self._use_cache_files = False # Flag to control cache file usage, default to disabled
# Clear cache files if disabled
if not self._use_cache_files:
self._clear_cache_files()
# Register this service
asyncio.create_task(self._register_service())
def _clear_cache_files(self):
"""Clear existing cache files if they exist"""
try:
cache_path = self._get_cache_file_path()
if cache_path and os.path.exists(cache_path):
os.remove(cache_path)
logger.info(f"Cleared {self.model_type} cache file: {cache_path}")
except Exception as e:
logger.error(f"Error clearing {self.model_type} cache file: {e}")
async def _register_service(self):
"""Register this instance with the ServiceRegistry"""
service_name = f"{self.model_type}_scanner"
await ServiceRegistry.register_service(service_name, self)
def _get_cache_file_path(self) -> Optional[str]:
"""Get the path to the cache file"""
# Get the directory where this module is located
current_dir = os.path.dirname(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
# Create a cache directory within the project if it doesn't exist
cache_dir = os.path.join(current_dir, "cache")
os.makedirs(cache_dir, exist_ok=True)
# Create filename based on model type
cache_filename = f"lm_{self.model_type}_cache.msgpack"
return os.path.join(cache_dir, cache_filename)
def _prepare_for_msgpack(self, data):
"""Preprocess data to accommodate MessagePack serialization limitations
Converts integers exceeding safe range to strings
Args:
data: Any type of data structure
Returns:
Preprocessed data structure with large integers converted to strings
"""
if isinstance(data, dict):
return {k: self._prepare_for_msgpack(v) for k, v in data.items()}
elif isinstance(data, list):
return [self._prepare_for_msgpack(item) for item in data]
elif isinstance(data, int) and (data > 9007199254740991 or data < -9007199254740991):
# Convert integers exceeding JavaScript's safe integer range (2^53-1) to strings
return str(data)
else:
return data
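A standalone run of the same preprocessing (reimplemented here as a free function for illustration) shows the conversion in action:

    JS_MAX_SAFE_INT = 9007199254740991  # 2**53 - 1, JavaScript's Number.MAX_SAFE_INTEGER

    def prepare_for_msgpack(data):
        # Same recursion as the method above
        if isinstance(data, dict):
            return {k: prepare_for_msgpack(v) for k, v in data.items()}
        if isinstance(data, list):
            return [prepare_for_msgpack(item) for item in data]
        if isinstance(data, int) and abs(data) > JS_MAX_SAFE_INT:
            return str(data)
        return data

    print(prepare_for_msgpack({'hash': 2**60, 'count': 42}))
    # {'hash': '1152921504606846976', 'count': 42}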
async def _save_cache_to_disk(self) -> bool:
"""Save cache data to disk using MessagePack"""
if not self._use_cache_files:
logger.debug(f"Cache files disabled for {self.model_type}, skipping save")
return False
if self._cache is None or not self._cache.raw_data:
logger.debug(f"No {self.model_type} cache data to save")
return False
cache_path = self._get_cache_file_path()
if not cache_path:
logger.warning(f"Cannot determine {self.model_type} cache file location")
return False
try:
# Create cache data structure
cache_data = {
"version": CACHE_VERSION,
"timestamp": time.time(),
"model_type": self.model_type,
"raw_data": self._cache.raw_data,
"hash_index": {
"hash_to_path": self._hash_index._hash_to_path,
"filename_to_hash": self._hash_index._filename_to_hash, # Fix: changed from path_to_hash to filename_to_hash
"duplicate_hashes": self._hash_index._duplicate_hashes,
"duplicate_filenames": self._hash_index._duplicate_filenames
},
"tags_count": self._tags_count,
"dirs_last_modified": self._get_dirs_last_modified(),
"excluded_models": self._excluded_models # Add excluded_models to cache data
}
# Preprocess data to handle large integers
processed_cache_data = self._prepare_for_msgpack(cache_data)
# Write to temporary file first (atomic operation)
temp_path = f"{cache_path}.tmp"
with open(temp_path, 'wb') as f:
msgpack.pack(processed_cache_data, f)
# Replace the old file with the new one
if os.path.exists(cache_path):
os.replace(temp_path, cache_path)
else:
os.rename(temp_path, cache_path)
logger.info(f"Saved {self.model_type} cache with {len(self._cache.raw_data)} models to {cache_path}")
logger.debug(f"Hash index stats - hash_to_path: {len(self._hash_index._hash_to_path)}, filename_to_hash: {len(self._hash_index._filename_to_hash)}, duplicate_hashes: {len(self._hash_index._duplicate_hashes)}, duplicate_filenames: {len(self._hash_index._duplicate_filenames)}")
return True
except Exception as e:
logger.error(f"Error saving {self.model_type} cache to disk: {e}")
# Try to clean up temp file if it exists
if 'temp_path' in locals() and os.path.exists(temp_path):
try:
os.remove(temp_path)
except:
pass
return False
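The temp-file-then-swap sequence is what makes the cache save atomic: a crash mid-write leaves the old cache intact instead of a truncated file. Reduced to its essentials (function name is illustrative):

    import os
    import msgpack

    def atomic_msgpack_write(path, data):
        tmp = f"{path}.tmp"
        with open(tmp, 'wb') as f:
            msgpack.pack(data, f)
        os.replace(tmp, path)  # atomic swap; overwrites any existing file

As a design note, os.replace already overwrites an existing destination on every platform, so the exists/rename fallback in the method above is belt-and-braces rather than strictly necessary.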
def _get_dirs_last_modified(self) -> Dict[str, float]:
"""Get last modified time for all model directories"""
dirs_info = {}
for root in self.get_model_roots():
if os.path.exists(root):
dirs_info[root] = os.path.getmtime(root)
# Also check immediate subdirectories for changes
try:
with os.scandir(root) as it:
for entry in it:
if entry.is_dir(follow_symlinks=True):
dirs_info[entry.path] = entry.stat().st_mtime
except Exception as e:
logger.error(f"Error getting directory info for {root}: {e}")
return dirs_info
def _is_cache_valid(self, cache_data: Dict) -> bool:
"""Validate if the loaded cache is still valid"""
if not cache_data or cache_data.get("version") != CACHE_VERSION:
logger.info(f"Cache invalid - version mismatch. Got: {cache_data.get('version')}, Expected: {CACHE_VERSION}")
return False
if cache_data.get("model_type") != self.model_type:
logger.info(f"Cache invalid - model type mismatch. Got: {cache_data.get('model_type')}, Expected: {self.model_type}")
return False
return True
async def _load_cache_from_disk(self) -> bool:
"""Load cache data from disk using MessagePack"""
if not self._use_cache_files:
logger.info(f"Cache files disabled for {self.model_type}, skipping load")
return False
start_time = time.time()
cache_path = self._get_cache_file_path()
if not cache_path or not os.path.exists(cache_path):
return False
try:
with open(cache_path, 'rb') as f:
cache_data = msgpack.unpack(f)
# Validate cache data
if not self._is_cache_valid(cache_data):
logger.info(f"{self.model_type.capitalize()} cache file found but invalid or outdated")
return False
# Load data into memory
self._cache = ModelCache(
raw_data=cache_data["raw_data"],
sorted_by_name=[],
sorted_by_date=[],
folders=[]
)
# Load hash index
hash_index_data = cache_data.get("hash_index", {})
self._hash_index._hash_to_path = hash_index_data.get("hash_to_path", {})
self._hash_index._filename_to_hash = hash_index_data.get("filename_to_hash", {}) # Fix: changed from path_to_hash to filename_to_hash
self._hash_index._duplicate_hashes = hash_index_data.get("duplicate_hashes", {})
self._hash_index._duplicate_filenames = hash_index_data.get("duplicate_filenames", {})
# Load tags count
self._tags_count = cache_data.get("tags_count", {})
# Load excluded models
self._excluded_models = cache_data.get("excluded_models", [])
# Resort the cache
await self._cache.resort()
logger.info(f"Loaded {self.model_type} cache from disk with {len(self._cache.raw_data)} models in {time.time() - start_time:.2f} seconds")
return True
except Exception as e:
logger.error(f"Error loading {self.model_type} cache from disk: {e}")
return False
async def initialize_in_background(self) -> None:
"""Initialize cache in background using thread pool"""
try:
@@ -65,7 +263,31 @@ class ModelScanner:
# Determine the page type based on model type
page_type = 'loras' if self.model_type == 'lora' else 'checkpoints'
# First, count all model files to track progress
# First, try to load from cache
await ws_manager.broadcast_init_progress({
'stage': 'loading_cache',
'progress': 0,
'details': f"Loading {self.model_type} cache...",
'scanner_type': self.model_type,
'pageType': page_type
})
cache_loaded = await self._load_cache_from_disk()
if cache_loaded:
# Cache loaded successfully, broadcast complete message
await ws_manager.broadcast_init_progress({
'stage': 'finalizing',
'progress': 100,
'status': 'complete',
'details': f"Loaded {len(self._cache.raw_data)} {self.model_type} files from cache.",
'scanner_type': self.model_type,
'pageType': page_type
})
self._is_initializing = False
return
# If cache loading failed, proceed with full scan
await ws_manager.broadcast_init_progress({
'stage': 'scan_folders',
'progress': 0,
@@ -110,6 +332,9 @@ class ModelScanner:
logger.info(f"{self.model_type.capitalize()} cache initialized in {time.time() - start_time:.2f} seconds. Found {len(self._cache.raw_data)} models")
# Save the cache to disk after initialization
await self._save_cache_to_disk()
# Send completion message
await asyncio.sleep(0.5) # Small delay to ensure final progress message is sent
await ws_manager.broadcast_init_progress({
@@ -279,8 +504,13 @@ class ModelScanner:
# Clean up the event loop
loop.close()
async def get_cached_data(self, force_refresh: bool = False) -> ModelCache:
"""Get cached model data, refresh if needed"""
async def get_cached_data(self, force_refresh: bool = False, rebuild_cache: bool = False) -> ModelCache:
"""Get cached model data, refresh if needed
Args:
force_refresh: Whether to refresh the cache
rebuild_cache: Whether to completely rebuild the cache by reloading from disk first
"""
# If cache is not initialized, return an empty cache
# Actual initialization should be done via initialize_in_background
if self._cache is None and not force_refresh:
@@ -293,9 +523,24 @@ class ModelScanner:
# If force refresh is requested, initialize the cache directly
if force_refresh:
# If rebuild_cache is True, try to reload from disk before reconciliation
if rebuild_cache:
logger.info(f"{self.model_type.capitalize()} Scanner: Attempting to rebuild cache from disk...")
cache_loaded = await self._load_cache_from_disk()
if cache_loaded:
logger.info(f"{self.model_type.capitalize()} Scanner: Successfully reloaded cache from disk")
else:
logger.info(f"{self.model_type.capitalize()} Scanner: Could not reload cache from disk, proceeding with complete rebuild")
# If loading from disk failed, do a complete rebuild and save to disk
await self._initialize_cache()
await self._save_cache_to_disk()
return self._cache
if self._cache is None:
# For initial creation, do a full initialization
await self._initialize_cache()
# Save the newly built cache
await self._save_cache_to_disk()
else:
# For subsequent refreshes, use fast reconciliation
await self._reconcile_cache()
@@ -394,6 +639,9 @@ class ModelScanner:
if file_path in cached_paths:
found_paths.add(file_path)
continue
if file_path in self._excluded_models:
continue
# Try case-insensitive match on Windows
if os.name == 'nt':
@@ -406,7 +654,7 @@ class ModelScanner:
break
if matched:
continue
# This is a new file to process
new_files.append(file_path)
@@ -422,26 +670,33 @@ class ModelScanner:
batch = new_files[i:i+batch_size]
for path in batch:
try:
model_data = await self.scan_single_model(path)
if model_data:
# Add to cache
self._cache.raw_data.append(model_data)
# Update hash index if available
if 'sha256' in model_data and 'file_path' in model_data:
self._hash_index.add_entry(model_data['sha256'].lower(), model_data['file_path'])
# Update tags count
if 'tags' in model_data and model_data['tags']:
for tag in model_data['tags']:
self._tags_count[tag] = self._tags_count.get(tag, 0) + 1
total_added += 1
# Find the appropriate root path for this file
root_path = None
for potential_root in self.get_model_roots():
if path.startswith(potential_root):
root_path = potential_root
break
if root_path:
model_data = await self._process_model_file(path, root_path)
if model_data:
# Add to cache
self._cache.raw_data.append(model_data)
# Update hash index if available
if 'sha256' in model_data and 'file_path' in model_data:
self._hash_index.add_entry(model_data['sha256'].lower(), model_data['file_path'])
# Update tags count
if 'tags' in model_data and model_data['tags']:
for tag in model_data['tags']:
self._tags_count[tag] = self._tags_count.get(tag, 0) + 1
total_added += 1
else:
logger.error(f"Could not determine root path for {path}")
except Exception as e:
logger.error(f"Error adding {path} to cache: {e}")
# Yield control after each batch
await asyncio.sleep(0)
# Find missing files (in cache but not in filesystem)
missing_files = cached_paths - found_paths
@@ -480,6 +735,9 @@ class ModelScanner:
# Resort cache
await self._cache.resort()
# Save updated cache to disk
await self._save_cache_to_disk()
logger.info(f"{self.model_type.capitalize()} Scanner: Cache reconciliation completed in {time.time() - start_time:.2f} seconds. Added {total_added}, removed {total_removed} models.")
except Exception as e:
logger.error(f"{self.model_type.capitalize()} Scanner: Error reconciling cache: {e}", exc_info=True)
@@ -491,36 +749,17 @@ class ModelScanner:
"""Scan all model directories and return metadata"""
raise NotImplementedError("Subclasses must implement scan_all_models")
def is_initializing(self) -> bool:
"""Check if the scanner is currently initializing"""
return self._is_initializing
def get_model_roots(self) -> List[str]:
"""Get model root directories"""
raise NotImplementedError("Subclasses must implement get_model_roots")
async def scan_single_model(self, file_path: str) -> Optional[Dict]:
"""Scan a single model file and return its metadata"""
try:
if not os.path.exists(os.path.realpath(file_path)):
return None
# Get basic file info
metadata = await self._get_file_info(file_path)
if not metadata:
return None
folder = self._calculate_folder(file_path)
# Ensure folder field exists
metadata_dict = metadata.to_dict()
metadata_dict['folder'] = folder or ''
return metadata_dict
except Exception as e:
logger.error(f"Error scanning {file_path}: {e}")
return None
async def _get_file_info(self, file_path: str) -> Optional[BaseModelMetadata]:
async def _create_default_metadata(self, file_path: str) -> Optional[BaseModelMetadata]:
"""Get model file info and metadata (extensible for different model types)"""
return await get_file_info(file_path, self.model_class)
return await MetadataManager.create_default_metadata(file_path, self.model_class)
def _calculate_folder(self, file_path: str) -> str:
"""Calculate the folder path for a model file"""
@@ -533,7 +772,7 @@ class ModelScanner:
# Common methods shared between scanners
async def _process_model_file(self, file_path: str, root_path: str) -> Dict:
"""Process a single model file and return its metadata"""
metadata = await load_metadata(file_path, self.model_class)
metadata = await MetadataManager.load_metadata(file_path, self.model_class)
if metadata is None:
civitai_info_path = f"{os.path.splitext(file_path)[0]}.civitai.info"
@@ -549,16 +788,48 @@ class ModelScanner:
metadata = self.model_class.from_civitai_info(version_info, file_info, file_path)
metadata.preview_url = find_preview_file(file_name, os.path.dirname(file_path))
await save_metadata(file_path, metadata)
await MetadataManager.save_metadata(file_path, metadata, True)
logger.debug(f"Created metadata from .civitai.info for {file_path}")
except Exception as e:
logger.error(f"Error creating metadata from .civitai.info for {file_path}: {e}")
else:
# Check if metadata exists but civitai field is empty - try to restore from civitai.info
if metadata.civitai is None or metadata.civitai == {}:
civitai_info_path = f"{os.path.splitext(file_path)[0]}.civitai.info"
if os.path.exists(civitai_info_path):
try:
with open(civitai_info_path, 'r', encoding='utf-8') as f:
version_info = json.load(f)
logger.debug(f"Restoring missing civitai data from .civitai.info for {file_path}")
metadata.civitai = version_info
# Ensure tags are also updated if they're missing
if (not metadata.tags or len(metadata.tags) == 0) and 'model' in version_info:
if 'tags' in version_info['model']:
metadata.tags = version_info['model']['tags']
# Also restore description if missing
if (not metadata.modelDescription or metadata.modelDescription == "") and 'model' in version_info:
if 'description' in version_info['model']:
metadata.modelDescription = version_info['model']['description']
# Save the updated metadata
await MetadataManager.save_metadata(file_path, metadata, True)
logger.debug(f"Updated metadata with civitai info for {file_path}")
except Exception as e:
logger.error(f"Error restoring civitai data from .civitai.info for {file_path}: {e}")
if metadata is None:
metadata = await self._get_file_info(file_path)
if metadata is None:
metadata = await self._create_default_metadata(file_path)
model_data = metadata.to_dict()
# Skip excluded models
if model_data.get('exclude', False):
self._excluded_models.append(model_data['file_path'])
return None
await self._fetch_missing_metadata(file_path, model_data)
rel_path = os.path.relpath(file_path, root_path)
folder = os.path.dirname(rel_path)
@@ -583,7 +854,10 @@ class ModelScanner:
model_id = str(model_id)
tags_missing = not model_data.get('tags') or len(model_data.get('tags', [])) == 0
desc_missing = not model_data.get('modelDescription') or model_data.get('modelDescription') in (None, "")
needs_metadata_update = tags_missing or desc_missing
# TODO: not for now, but later we should check if the creator is missing
# creator_missing = not model_data.get('civitai', {}).get('creator')
creator_missing = False
needs_metadata_update = tags_missing or desc_missing or creator_missing
if needs_metadata_update and model_id:
logger.debug(f"Fetching missing metadata for {file_path} with model ID {model_id}")
@@ -597,9 +871,7 @@ class ModelScanner:
logger.warning(f"Model {model_id} appears to be deleted from Civitai (404 response)")
model_data['civitai_deleted'] = True
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(model_data, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(file_path, model_data)
elif model_metadata:
logger.debug(f"Updating metadata for {file_path} with model ID {model_id}")
@@ -609,10 +881,10 @@ class ModelScanner:
if model_metadata.get('description') and (not model_data.get('modelDescription') or model_data.get('modelDescription') in (None, "")):
model_data['modelDescription'] = model_metadata['description']
model_data['civitai']['creator'] = model_metadata['creator']
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(model_data, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(file_path, model_data, True)
except Exception as e:
logger.error(f"Failed to update metadata from Civitai for {file_path}: {e}")
@@ -657,6 +929,44 @@ class ModelScanner:
models_list.append(result)
except Exception as e:
logger.error(f"Error processing {file_path}: {e}")
async def add_model_to_cache(self, metadata_dict: Dict, folder: str = '') -> bool:
"""Add a model to the cache and save to disk
Args:
metadata_dict: The model metadata dictionary
folder: The relative folder path for the model
Returns:
bool: True if successful, False otherwise
"""
try:
if self._cache is None:
await self.get_cached_data()
# Update folder in metadata
metadata_dict['folder'] = folder
# Add to cache
self._cache.raw_data.append(metadata_dict)
# Resort cache data
await self._cache.resort()
# Update folders list
all_folders = set(self._cache.folders)
all_folders.add(folder)
self._cache.folders = sorted(list(all_folders), key=lambda x: x.lower())
# Update the hash index
self._hash_index.add_entry(metadata_dict['sha256'], metadata_dict['file_path'])
# Save to disk
await self._save_cache_to_disk()
return True
except Exception as e:
logger.error(f"Error adding model to cache: {e}")
return False
async def move_model(self, source_path: str, target_path: str) -> bool:
"""Move a model and its associated files to a new location"""
@@ -680,41 +990,42 @@ class ModelScanner:
real_source = os.path.realpath(source_path)
real_target = os.path.realpath(target_file)
file_size = os.path.getsize(real_source)
# Get the appropriate file monitor through ServiceRegistry
if self.model_type == "lora":
monitor = await ServiceRegistry.get_lora_monitor()
elif self.model_type == "checkpoint":
monitor = await ServiceRegistry.get_checkpoint_monitor()
else:
monitor = None
if monitor:
monitor.handler.add_ignore_path(
real_source,
file_size
)
monitor.handler.add_ignore_path(
real_target,
file_size
)
shutil.move(real_source, real_target)
source_metadata = os.path.join(source_dir, f"{base_name}.metadata.json")
metadata = None
if os.path.exists(source_metadata):
target_metadata = os.path.join(target_path, f"{base_name}.metadata.json")
shutil.move(source_metadata, target_metadata)
metadata = await self._update_metadata_paths(target_metadata, target_file)
# Move all associated files with the same base name
source_metadata = None
moved_metadata_path = None
for ext in PREVIEW_EXTENSIONS:
source_preview = os.path.join(source_dir, f"{base_name}{ext}")
if os.path.exists(source_preview):
target_preview = os.path.join(target_path, f"{base_name}{ext}")
shutil.move(source_preview, target_preview)
break
# Find all files with the same base name in the source directory
files_to_move = []
try:
for file in os.listdir(source_dir):
if file.startswith(base_name + ".") and file != os.path.basename(source_path):
source_file_path = os.path.join(source_dir, file)
# Store metadata file path for special handling
if file == f"{base_name}.metadata.json":
source_metadata = source_file_path
moved_metadata_path = os.path.join(target_path, file)
else:
files_to_move.append((source_file_path, os.path.join(target_path, file)))
except Exception as e:
logger.error(f"Error listing files in {source_dir}: {e}")
# Move all associated files
metadata = None
for source_file, target_file_path in files_to_move:
try:
shutil.move(source_file, target_file_path)
except Exception as e:
logger.error(f"Error moving associated file {source_file}: {e}")
# Handle metadata file specially to update paths
if source_metadata and os.path.exists(source_metadata):
try:
shutil.move(source_metadata, moved_metadata_path)
metadata = await self._update_metadata_paths(moved_metadata_path, target_file)
except Exception as e:
logger.error(f"Error moving metadata file: {e}")
await self.update_single_model_cache(source_path, target_file, metadata)
@@ -732,15 +1043,14 @@ class ModelScanner:
metadata['file_path'] = model_path.replace(os.sep, '/')
if 'preview_url' in metadata:
if 'preview_url' in metadata and metadata['preview_url']:
preview_dir = os.path.dirname(model_path)
preview_name = os.path.splitext(os.path.basename(metadata['preview_url']))[0]
preview_ext = os.path.splitext(metadata['preview_url'])[1]
new_preview_path = os.path.join(preview_dir, f"{preview_name}{preview_ext}")
metadata['preview_url'] = new_preview_path.replace(os.sep, '/')
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(metadata_path, metadata)
return metadata
@@ -792,6 +1102,9 @@ class ModelScanner:
await cache.resort()
# Save the updated cache
await self._save_cache_to_disk()
return True
def has_hash(self, sha256: str) -> bool:
@@ -805,6 +1118,10 @@ class ModelScanner:
def get_hash_by_path(self, file_path: str) -> Optional[str]:
"""Get hash for a model by its file path"""
return self._hash_index.get_hash(file_path)
def get_hash_by_filename(self, filename: str) -> Optional[str]:
"""Get hash for a model by its filename without path"""
return self._hash_index.get_hash_by_filename(filename)
# TODO: Adjust this method to use metadata instead of finding the file
def get_preview_url_by_hash(self, sha256: str) -> Optional[str]:
@@ -863,12 +1180,17 @@ class ModelScanner:
logger.error(f"Error getting model info by name: {e}", exc_info=True)
return None
async def update_preview_in_cache(self, file_path: str, preview_url: str) -> bool:
def get_excluded_models(self) -> List[str]:
"""Get list of excluded model file paths"""
return self._excluded_models.copy()
async def update_preview_in_cache(self, file_path: str, preview_url: str, preview_nsfw_level: int) -> bool:
"""Update preview URL in cache for a specific lora
Args:
file_path: The file path of the lora to update
preview_url: The new preview URL
preview_nsfw_level: The NSFW level of the preview
Returns:
bool: True if the update was successful, False if cache doesn't exist or lora wasn't found
@@ -876,4 +1198,167 @@ class ModelScanner:
if self._cache is None:
return False
return await self._cache.update_preview_url(file_path, preview_url)
updated = await self._cache.update_preview_url(file_path, preview_url, preview_nsfw_level)
if updated:
# Save updated cache to disk
await self._save_cache_to_disk()
return updated
async def bulk_delete_models(self, file_paths: List[str]) -> Dict:
"""Delete multiple models and update cache in a batch operation
Args:
file_paths: List of file paths to delete
Returns:
Dict containing results of the operation
"""
try:
if not file_paths:
return {
'success': False,
'error': 'No file paths provided for deletion',
'results': []
}
# Keep track of success and failures
results = []
total_deleted = 0
cache_updated = False
# Get cache data
cache = await self.get_cached_data()
# Track deleted models to update cache once
deleted_models = []
for file_path in file_paths:
try:
target_dir = os.path.dirname(file_path)
file_name = os.path.splitext(os.path.basename(file_path))[0]
# Delete all associated files for the model
from ..utils.routes_common import ModelRouteUtils
deleted_files = await ModelRouteUtils.delete_model_files(
target_dir,
file_name
)
if deleted_files:
deleted_models.append(file_path)
results.append({
'file_path': file_path,
'success': True,
'deleted_files': deleted_files
})
total_deleted += 1
else:
results.append({
'file_path': file_path,
'success': False,
'error': 'No files deleted'
})
except Exception as e:
logger.error(f"Error deleting file {file_path}: {e}")
results.append({
'file_path': file_path,
'success': False,
'error': str(e)
})
# Batch update cache if any models were deleted
if deleted_models:
# Update the cache in a batch operation
cache_updated = await self._batch_update_cache_for_deleted_models(deleted_models)
return {
'success': True,
'total_deleted': total_deleted,
'total_attempted': len(file_paths),
'cache_updated': cache_updated,
'results': results
}
except Exception as e:
logger.error(f"Error in bulk delete: {e}", exc_info=True)
return {
'success': False,
'error': str(e),
'results': []
}
async def _batch_update_cache_for_deleted_models(self, file_paths: List[str]) -> bool:
"""Update cache after multiple models have been deleted
Args:
file_paths: List of file paths that were deleted
Returns:
bool: True if cache was updated and saved successfully
"""
if not file_paths or self._cache is None:
return False
try:
# Get all models that need to be removed from cache
models_to_remove = [item for item in self._cache.raw_data if item['file_path'] in file_paths]
if not models_to_remove:
return False
# Update tag counts
for model in models_to_remove:
for tag in model.get('tags', []):
if tag in self._tags_count:
self._tags_count[tag] = max(0, self._tags_count[tag] - 1)
if self._tags_count[tag] == 0:
del self._tags_count[tag]
# Update hash index
for model in models_to_remove:
file_path = model['file_path']
if hasattr(self, '_hash_index') and self._hash_index:
# Get the hash and filename before removal for duplicate checking
file_name = os.path.splitext(os.path.basename(file_path))[0]
hash_val = model.get('sha256', '').lower()
# Remove from hash index
self._hash_index.remove_by_path(file_path, hash_val)
# Check and clean up duplicates
self._cleanup_duplicates_after_removal(hash_val, file_name)
# Update cache data
self._cache.raw_data = [item for item in self._cache.raw_data if item['file_path'] not in file_paths]
# Resort cache
await self._cache.resort()
# Save updated cache to disk
await self._save_cache_to_disk()
return True
except Exception as e:
logger.error(f"Error updating cache after bulk delete: {e}", exc_info=True)
return False
def _cleanup_duplicates_after_removal(self, hash_val: str, file_name: str) -> None:
"""Clean up duplicate entries in hash index after removing a model
Args:
hash_val: SHA256 hash of the removed model
file_name: File name of the removed model without extension
"""
if not hash_val or not file_name or not hasattr(self, '_hash_index'):
return
# Clean up hash duplicates if only 0 or 1 entries remain
if hash_val in self._hash_index._duplicate_hashes:
if len(self._hash_index._duplicate_hashes[hash_val]) <= 1:
del self._hash_index._duplicate_hashes[hash_val]
# Clean up filename duplicates if only 0 or 1 entries remain
if file_name in self._hash_index._duplicate_filenames:
if len(self._hash_index._duplicate_filenames[file_name]) <= 1:
del self._hash_index._duplicate_filenames[file_name]
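The tag-count bookkeeping used during bulk deletion also reads more clearly in isolation. A small runnable sketch of the same decrement-and-prune logic (names here are illustrative):

    tags_count = {'style': 2, 'anime': 1}

    def remove_model_tags(tags):
        for tag in tags:
            if tag in tags_count:
                tags_count[tag] = max(0, tags_count[tag] - 1)
                if tags_count[tag] == 0:
                    del tags_count[tag]  # prune tags with no remaining models

    remove_model_tags(['style', 'anime'])
    print(tags_count)  # {'style': 1}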

View File

@@ -2,6 +2,7 @@ import asyncio
from typing import List, Dict
from dataclasses import dataclass
from operator import itemgetter
from natsort import natsorted
@dataclass
class RecipeCache:
@@ -16,7 +17,7 @@ class RecipeCache:
async def resort(self, name_only: bool = False):
"""Resort all cached data views"""
async with self._lock:
self.sorted_by_name = sorted(
self.sorted_by_name = natsorted(
self.raw_data,
key=lambda x: x.get('title', '').lower() # Case-insensitive sort
)

View File

@@ -9,6 +9,7 @@ from .recipe_cache import RecipeCache
from .service_registry import ServiceRegistry
from .lora_scanner import LoraScanner
from ..utils.utils import fuzzy_match
from natsort import natsorted
import sys
logger = logging.getLogger(__name__)
@@ -164,7 +165,7 @@ class RecipeScanner:
if hasattr(self._cache, "resort"):
try:
# Sort by name
self._cache.sorted_by_name = sorted(
self._cache.sorted_by_name = natsorted(
self._cache.raw_data,
key=lambda x: x.get('title', '').lower()
)
@@ -321,6 +322,20 @@ class RecipeScanner:
# Update lora information with local paths and availability
await self._update_lora_information(recipe_data)
# Calculate and update fingerprint if missing
if 'loras' in recipe_data and 'fingerprint' not in recipe_data:
from ..utils.utils import calculate_recipe_fingerprint
fingerprint = calculate_recipe_fingerprint(recipe_data['loras'])
recipe_data['fingerprint'] = fingerprint
# Write updated recipe data back to file
try:
with open(recipe_path, 'w', encoding='utf-8') as f:
json.dump(recipe_data, f, indent=4, ensure_ascii=False)
logger.info(f"Added fingerprint to recipe: {recipe_path}")
except Exception as e:
logger.error(f"Error writing updated recipe with fingerprint: {e}")
return recipe_data
except Exception as e:
@@ -801,3 +816,60 @@ class RecipeScanner:
logger.info(f"Resorted recipe cache after updating {cache_updated_count} items")
return file_updated_count, cache_updated_count
async def find_recipes_by_fingerprint(self, fingerprint: str) -> list:
"""Find recipes with a matching fingerprint
Args:
fingerprint: The recipe fingerprint to search for
Returns:
List of recipe details that match the fingerprint
"""
if not fingerprint:
return []
# Get all recipes from cache
cache = await self.get_cached_data()
# Find recipes with matching fingerprint
matching_recipes = []
for recipe in cache.raw_data:
if recipe.get('fingerprint') == fingerprint:
recipe_details = {
'id': recipe.get('id'),
'title': recipe.get('title'),
'file_url': self._format_file_url(recipe.get('file_path')),
'modified': recipe.get('modified'),
'created_date': recipe.get('created_date'),
'lora_count': len(recipe.get('loras', []))
}
matching_recipes.append(recipe_details)
return matching_recipes
async def find_all_duplicate_recipes(self) -> dict:
"""Find all recipe duplicates based on fingerprints
Returns:
Dictionary where keys are fingerprints and values are lists of recipe IDs
"""
# Get all recipes from cache
cache = await self.get_cached_data()
# Group recipes by fingerprint
fingerprint_groups = {}
for recipe in cache.raw_data:
fingerprint = recipe.get('fingerprint')
if not fingerprint:
continue
if fingerprint not in fingerprint_groups:
fingerprint_groups[fingerprint] = []
fingerprint_groups[fingerprint].append(recipe.get('id'))
# Filter to only include groups with more than one recipe
duplicate_groups = {k: v for k, v in fingerprint_groups.items() if len(v) > 1}
return duplicate_groups
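The duplicate detection is a straightforward bucket-then-filter pass over the cached recipes. A minimal sketch of the same grouping, assuming hypothetical recipe dicts:

    recipes = [
        {'id': 'r1', 'fingerprint': 'f-abc'},
        {'id': 'r2', 'fingerprint': 'f-abc'},
        {'id': 'r3', 'fingerprint': 'f-xyz'},
    ]
    groups = {}
    for recipe in recipes:
        fp = recipe.get('fingerprint')
        if fp:
            groups.setdefault(fp, []).append(recipe['id'])
    duplicates = {fp: ids for fp, ids in groups.items() if len(ids) > 1}
    print(duplicates)  # {'f-abc': ['r1', 'r2']}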

View File

@@ -58,26 +58,6 @@ class ServiceRegistry:
scanner = await CheckpointScanner.get_instance()
await cls.register_service("checkpoint_scanner", scanner)
return scanner
@classmethod
async def get_lora_monitor(cls):
"""Get the LoraFileMonitor instance"""
from .file_monitor import LoraFileMonitor
monitor = await cls.get_service("lora_monitor")
if monitor is None:
monitor = await LoraFileMonitor.get_instance()
await cls.register_service("lora_monitor", monitor)
return monitor
@classmethod
async def get_checkpoint_monitor(cls):
"""Get the CheckpointFileMonitor instance"""
from .file_monitor import CheckpointFileMonitor
monitor = await cls.get_service("checkpoint_monitor")
if monitor is None:
monitor = await CheckpointFileMonitor.get_instance()
await cls.register_service("checkpoint_monitor", monitor)
return monitor
@classmethod
async def get_civitai_client(cls):
@@ -95,7 +75,6 @@ class ServiceRegistry:
from .download_manager import DownloadManager
manager = await cls.get_service("download_manager")
if manager is None:
# We'll let DownloadManager.get_instance handle file_monitor parameter
manager = await DownloadManager.get_instance()
await cls.register_service("download_manager", manager)
return manager

View File

@@ -7,19 +7,42 @@ NSFW_LEVELS = {
"Blocked": 32, # Probably not actually visible through the API without being logged in on model owner account?
}
# Node type constants
NODE_TYPES = {
"Lora Loader (LoraManager)": 1,
"Lora Stacker (LoraManager)": 2
}
# Default ComfyUI node color when bgcolor is null
DEFAULT_NODE_COLOR = "#353535"
# preview extensions
PREVIEW_EXTENSIONS = [
'.webp',
'.preview.webp',
'.preview.png',
'.preview.jpeg',
'.preview.jpg',
'.preview.png',
'.preview.jpeg',
'.preview.jpg',
'.preview.mp4',
'.png',
'.jpeg',
'.jpg',
'.mp4'
'.png',
'.jpeg',
'.jpg',
'.mp4',
'.gif',
'.webm'
]
# Card preview image width
CARD_PREVIEW_WIDTH = 480
CARD_PREVIEW_WIDTH = 480
# Width for optimized example images
EXAMPLE_IMAGE_WIDTH = 832
# Supported media extensions for example downloads
SUPPORTED_MEDIA_EXTENSIONS = {
'images': ['.jpg', '.jpeg', '.png', '.webp', '.gif'],
'videos': ['.mp4', '.webm']
}
# Valid Lora types
VALID_LORA_TYPES = ['lora', 'locon', 'dora']
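A hypothetical helper showing how SUPPORTED_MEDIA_EXTENSIONS would be consulted when classifying a downloaded example file (the constant is repeated here so the snippet runs standalone):

    import os

    SUPPORTED_MEDIA_EXTENSIONS = {
        'images': ['.jpg', '.jpeg', '.png', '.webp', '.gif'],
        'videos': ['.mp4', '.webm'],
    }

    def classify_example_file(filename: str):
        ext = os.path.splitext(filename)[1].lower()
        for kind, exts in SUPPORTED_MEDIA_EXTENSIONS.items():
            if ext in exts:
                return kind
        return None  # unsupported media type

    print(classify_example_file('preview.webm'))  # 'videos'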

View File

@@ -0,0 +1,404 @@
import logging
import os
import asyncio
import json
import time
import aiohttp
from aiohttp import web
from ..services.service_registry import ServiceRegistry
from .example_images_processor import ExampleImagesProcessor
from .example_images_metadata import MetadataUpdater
logger = logging.getLogger(__name__)
# Download status tracking
download_task = None
is_downloading = False
download_progress = {
'total': 0,
'completed': 0,
'current_model': '',
'status': 'idle', # idle, running, paused, completed, error
'errors': [],
'last_error': None,
'start_time': None,
'end_time': None,
'processed_models': set(), # Track models that have been processed
'refreshed_models': set() # Track models that had metadata refreshed
}
class DownloadManager:
"""Manages downloading example images for models"""
@staticmethod
async def start_download(request):
"""
Start downloading example images for models
Expects a JSON body with:
{
"output_dir": "path/to/output", # Base directory to save example images
"optimize": true, # Whether to optimize images (default: true)
"model_types": ["lora", "checkpoint"], # Model types to process (default: both)
"delay": 1.0 # Delay between downloads to avoid rate limiting (default: 1.0)
}
"""
global download_task, is_downloading, download_progress
if is_downloading:
# Create a copy for JSON serialization
response_progress = download_progress.copy()
response_progress['processed_models'] = list(download_progress['processed_models'])
response_progress['refreshed_models'] = list(download_progress['refreshed_models'])
return web.json_response({
'success': False,
'error': 'Download already in progress',
'status': response_progress
}, status=400)
try:
# Parse the request body
data = await request.json()
output_dir = data.get('output_dir')
optimize = data.get('optimize', True)
model_types = data.get('model_types', ['lora', 'checkpoint'])
delay = float(data.get('delay', 0.2)) # Default to 0.2 seconds
if not output_dir:
return web.json_response({
'success': False,
'error': 'Missing output_dir parameter'
}, status=400)
# Create the output directory
os.makedirs(output_dir, exist_ok=True)
# Initialize progress tracking
download_progress['total'] = 0
download_progress['completed'] = 0
download_progress['current_model'] = ''
download_progress['status'] = 'running'
download_progress['errors'] = []
download_progress['last_error'] = None
download_progress['start_time'] = time.time()
download_progress['end_time'] = None
# Get the processed models list from a file if it exists
progress_file = os.path.join(output_dir, '.download_progress.json')
if os.path.exists(progress_file):
try:
with open(progress_file, 'r', encoding='utf-8') as f:
saved_progress = json.load(f)
download_progress['processed_models'] = set(saved_progress.get('processed_models', []))
logger.info(f"Loaded previous progress, {len(download_progress['processed_models'])} models already processed")
except Exception as e:
logger.error(f"Failed to load progress file: {e}")
download_progress['processed_models'] = set()
else:
download_progress['processed_models'] = set()
# Start the download task
is_downloading = True
download_task = asyncio.create_task(
DownloadManager._download_all_example_images(
output_dir,
optimize,
model_types,
delay
)
)
# Create a copy for JSON serialization
response_progress = download_progress.copy()
response_progress['processed_models'] = list(download_progress['processed_models'])
response_progress['refreshed_models'] = list(download_progress['refreshed_models'])
return web.json_response({
'success': True,
'message': 'Download started',
'status': response_progress
})
except Exception as e:
logger.error(f"Failed to start example images download: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def get_status(request):
"""Get the current status of example images download"""
global download_progress
# Create a copy of the progress dict with the set converted to a list for JSON serialization
response_progress = download_progress.copy()
response_progress['processed_models'] = list(download_progress['processed_models'])
response_progress['refreshed_models'] = list(download_progress['refreshed_models'])
return web.json_response({
'success': True,
'is_downloading': is_downloading,
'status': response_progress
})
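The repeated copy-and-convert dance around processed_models exists because Python sets are not JSON serializable. A minimal reproduction:

    import json

    progress = {'completed': 2, 'processed_models': {'hash-a', 'hash-b'}}
    # json.dumps(progress) would raise TypeError: Object of type set is not JSON serializable
    serializable = progress.copy()
    serializable['processed_models'] = sorted(serializable['processed_models'])
    print(json.dumps(serializable))
    # {"completed": 2, "processed_models": ["hash-a", "hash-b"]}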
@staticmethod
async def pause_download(request):
"""Pause the example images download"""
global download_progress
if not is_downloading:
return web.json_response({
'success': False,
'error': 'No download in progress'
}, status=400)
download_progress['status'] = 'paused'
return web.json_response({
'success': True,
'message': 'Download paused'
})
@staticmethod
async def resume_download(request):
"""Resume the example images download"""
global download_progress
if not is_downloading:
return web.json_response({
'success': False,
'error': 'No download in progress'
}, status=400)
if download_progress['status'] == 'paused':
download_progress['status'] = 'running'
return web.json_response({
'success': True,
'message': 'Download resumed'
})
else:
return web.json_response({
'success': False,
'error': f"Download is in '{download_progress['status']}' state, cannot resume"
}, status=400)
@staticmethod
async def _download_all_example_images(output_dir, optimize, model_types, delay):
"""Download example images for all models"""
global is_downloading, download_progress
# Create independent download session
connector = aiohttp.TCPConnector(
ssl=True,
limit=3,
force_close=False,
enable_cleanup_closed=True
)
timeout = aiohttp.ClientTimeout(total=None, connect=60, sock_read=60)
independent_session = aiohttp.ClientSession(
connector=connector,
trust_env=True,
timeout=timeout
)
try:
# Get scanners
scanners = []
if 'lora' in model_types:
lora_scanner = await ServiceRegistry.get_lora_scanner()
scanners.append(('lora', lora_scanner))
if 'checkpoint' in model_types:
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
scanners.append(('checkpoint', checkpoint_scanner))
# Get all models
all_models = []
for scanner_type, scanner in scanners:
cache = await scanner.get_cached_data()
if cache and cache.raw_data:
for model in cache.raw_data:
if model.get('sha256'):
all_models.append((scanner_type, model, scanner))
# Update total count
download_progress['total'] = len(all_models)
logger.info(f"Found {download_progress['total']} models to process")
# Process each model
for i, (scanner_type, model, scanner) in enumerate(all_models):
                # Process this model; the actual operations are delegated to helper classes
was_remote_download = await DownloadManager._process_model(
scanner_type, model, scanner,
output_dir, optimize, independent_session
)
# Update progress
download_progress['completed'] += 1
                # Only add a delay after a remote download, and not after the last model
if was_remote_download and i < len(all_models) - 1 and download_progress['status'] == 'running':
await asyncio.sleep(delay)
# Mark as completed
download_progress['status'] = 'completed'
download_progress['end_time'] = time.time()
logger.info(f"Example images download completed: {download_progress['completed']}/{download_progress['total']} models processed")
except Exception as e:
error_msg = f"Error during example images download: {str(e)}"
logger.error(error_msg, exc_info=True)
download_progress['errors'].append(error_msg)
download_progress['last_error'] = error_msg
download_progress['status'] = 'error'
download_progress['end_time'] = time.time()
finally:
# Close the independent session
try:
await independent_session.close()
except Exception as e:
logger.error(f"Error closing download session: {e}")
# Save final progress to file
try:
DownloadManager._save_progress(output_dir)
except Exception as e:
logger.error(f"Failed to save progress file: {e}")
# Set download status to not downloading
is_downloading = False
@staticmethod
async def _process_model(scanner_type, model, scanner, output_dir, optimize, independent_session):
"""Process a single model download"""
global download_progress
# Check if download is paused
while download_progress['status'] == 'paused':
await asyncio.sleep(1)
# Check if download should continue
if download_progress['status'] != 'running':
logger.info(f"Download stopped: {download_progress['status']}")
return False # Return False to indicate no remote download happened
model_hash = model.get('sha256', '').lower()
model_name = model.get('model_name', 'Unknown')
model_file_path = model.get('file_path', '')
model_file_name = model.get('file_name', '')
try:
# Update current model info
download_progress['current_model'] = f"{model_name} ({model_hash[:8]})"
# Skip if already processed AND directory exists with files
if model_hash in download_progress['processed_models']:
model_dir = os.path.join(output_dir, model_hash)
has_files = os.path.exists(model_dir) and any(os.listdir(model_dir))
if has_files:
logger.debug(f"Skipping already processed model: {model_name}")
return False
else:
logger.info(f"Model {model_name} marked as processed but folder empty or missing, reprocessing")
# Create model directory
model_dir = os.path.join(output_dir, model_hash)
os.makedirs(model_dir, exist_ok=True)
# First check for local example images - local processing doesn't need delay
local_images_processed = await ExampleImagesProcessor.process_local_examples(
model_file_path, model_file_name, model_name, model_dir, optimize
)
# If we processed local images, update metadata
if local_images_processed:
await MetadataUpdater.update_metadata_from_local_examples(
model_hash, model, scanner_type, scanner, model_dir
)
download_progress['processed_models'].add(model_hash)
return False # Return False to indicate no remote download happened
# If no local images, try to download from remote
elif model.get('civitai') and model.get('civitai', {}).get('images'):
images = model.get('civitai', {}).get('images', [])
success, is_stale = await ExampleImagesProcessor.download_model_images(
model_hash, model_name, images, model_dir, optimize, independent_session
)
# If metadata is stale, try to refresh it
if is_stale and model_hash not in download_progress['refreshed_models']:
await MetadataUpdater.refresh_model_metadata(
model_hash, model_name, scanner_type, scanner
)
# Get the updated model data
updated_model = await MetadataUpdater.get_updated_model(
model_hash, scanner
)
if updated_model and updated_model.get('civitai', {}).get('images'):
# Retry download with updated metadata
updated_images = updated_model.get('civitai', {}).get('images', [])
success, _ = await ExampleImagesProcessor.download_model_images(
model_hash, model_name, updated_images, model_dir, optimize, independent_session
)
# Only mark as processed if all images were downloaded successfully
if success:
download_progress['processed_models'].add(model_hash)
return True # Return True to indicate a remote download happened
# Save progress periodically
if download_progress['completed'] % 10 == 0 or download_progress['completed'] == download_progress['total'] - 1:
DownloadManager._save_progress(output_dir)
return False # Default return if no conditions met
except Exception as e:
error_msg = f"Error processing model {model.get('model_name')}: {str(e)}"
logger.error(error_msg, exc_info=True)
download_progress['errors'].append(error_msg)
download_progress['last_error'] = error_msg
return False # Return False on exception
@staticmethod
def _save_progress(output_dir):
"""Save download progress to file"""
global download_progress
try:
progress_file = os.path.join(output_dir, '.download_progress.json')
# Read existing progress file if it exists
existing_data = {}
if os.path.exists(progress_file):
try:
with open(progress_file, 'r', encoding='utf-8') as f:
existing_data = json.load(f)
except Exception as e:
logger.warning(f"Failed to read existing progress file: {e}")
# Create new progress data
progress_data = {
'processed_models': list(download_progress['processed_models']),
'refreshed_models': list(download_progress['refreshed_models']),
'completed': download_progress['completed'],
'total': download_progress['total'],
'last_update': time.time()
}
# Preserve existing fields (especially naming_version)
for key, value in existing_data.items():
if key not in progress_data:
progress_data[key] = value
# Write updated progress data
with open(progress_file, 'w', encoding='utf-8') as f:
json.dump(progress_data, f, indent=2)
except Exception as e:
logger.error(f"Failed to save progress file: {e}")


@@ -0,0 +1,201 @@
import logging
import os
import re
import sys
import subprocess
from aiohttp import web
from ..services.settings_manager import settings
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS
logger = logging.getLogger(__name__)
class ExampleImagesFileManager:
"""Manages access and operations for example image files"""
@staticmethod
async def open_folder(request):
"""
Open the example images folder for a specific model
Expects a JSON request body with:
{
"model_hash": "sha256_hash" # SHA256 hash of the model
}
"""
try:
# Parse request body
data = await request.json()
model_hash = data.get('model_hash')
if not model_hash:
return web.json_response({
'success': False,
'error': 'Missing model_hash parameter'
}, status=400)
# Get example images path from settings
example_images_path = settings.get('example_images_path')
if not example_images_path:
return web.json_response({
'success': False,
'error': 'No example images path configured. Please set it in the settings panel first.'
}, status=400)
# Construct folder path for this model
model_folder = os.path.join(example_images_path, model_hash)
# Check if folder exists
if not os.path.exists(model_folder):
return web.json_response({
'success': False,
'error': 'No example images found for this model. Download example images first.'
}, status=404)
# Open folder in file explorer
if os.name == 'nt': # Windows
os.startfile(model_folder)
elif os.name == 'posix': # macOS and Linux
if sys.platform == 'darwin': # macOS
subprocess.Popen(['open', model_folder])
else: # Linux
subprocess.Popen(['xdg-open', model_folder])
return web.json_response({
'success': True,
'message': f'Opened example images folder for model {model_hash}'
})
except Exception as e:
logger.error(f"Failed to open example images folder: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def get_files(request):
"""
Get the list of example image files for a specific model
Expects:
- model_hash in query parameters
Returns:
- List of image files and their paths
"""
try:
# Get model_hash from query parameters
model_hash = request.query.get('model_hash')
if not model_hash:
return web.json_response({
'success': False,
'error': 'Missing model_hash parameter'
}, status=400)
# Get example images path from settings
example_images_path = settings.get('example_images_path')
if not example_images_path:
return web.json_response({
'success': False,
'error': 'No example images path configured'
}, status=400)
# Construct folder path for this model
model_folder = os.path.join(example_images_path, model_hash)
# Check if folder exists
if not os.path.exists(model_folder):
return web.json_response({
'success': False,
'error': 'No example images found for this model',
'files': []
}, status=404)
# Get list of files in the folder
files = []
for file in os.listdir(model_folder):
file_path = os.path.join(model_folder, file)
if os.path.isfile(file_path):
# Check if file is a supported media file
file_ext = os.path.splitext(file)[1].lower()
if (file_ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or
file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']):
files.append({
'name': file,
'path': f'/example_images_static/{model_hash}/{file}',
'extension': file_ext,
'is_video': file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']
})
return web.json_response({
'success': True,
'files': files
})
except Exception as e:
logger.error(f"Failed to get example image files: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def has_images(request):
"""
Check if the example images folder for a model exists and is not empty
Expects:
- model_hash in query parameters
Returns:
- Boolean indicating whether the folder exists and contains images/videos
"""
try:
# Get model_hash from query parameters
model_hash = request.query.get('model_hash')
if not model_hash:
return web.json_response({
'success': False,
'error': 'Missing model_hash parameter'
}, status=400)
# Get example images path from settings
example_images_path = settings.get('example_images_path')
if not example_images_path:
return web.json_response({
'has_images': False
})
# Construct folder path for this model
model_folder = os.path.join(example_images_path, model_hash)
# Check if folder exists
if not os.path.exists(model_folder) or not os.path.isdir(model_folder):
return web.json_response({
'has_images': False
})
# Check if folder contains any supported media files
for file in os.listdir(model_folder):
file_path = os.path.join(model_folder, file)
if os.path.isfile(file_path):
file_ext = os.path.splitext(file)[1].lower()
if (file_ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or
file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']):
return web.json_response({
'has_images': True
})
# If reached here, folder exists but has no supported media files
return web.json_response({
'has_images': False
})
except Exception as e:
logger.error(f"Failed to check example images folder: {e}", exc_info=True)
return web.json_response({
'has_images': False,
'error': str(e)
})
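
A rough sketch of how a front end might call the has_images handler. The route path and port are hypothetical (the actual route registration is not part of this diff); only the model_hash query parameter and the has_images response field come from the handler above:

import asyncio
import aiohttp

async def model_has_examples(model_hash: str) -> bool:
    # Hypothetical route; substitute the one from the real route registration.
    url = "http://127.0.0.1:8188/api/lm/example-images/has-images"
    async with aiohttp.ClientSession() as session:
        async with session.get(url, params={"model_hash": model_hash}) as resp:
            data = await resp.json()
            return data.get("has_images", False)

# Example: asyncio.run(model_has_examples("<sha256 of a model>"))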


@@ -0,0 +1,390 @@
import logging
import os
import re
from ..utils.metadata_manager import MetadataManager
from ..utils.routes_common import ModelRouteUtils
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS
from ..utils.exif_utils import ExifUtils
from ..recipes.constants import GEN_PARAM_KEYS
logger = logging.getLogger(__name__)
class MetadataUpdater:
"""Handles updating model metadata related to example images"""
@staticmethod
async def refresh_model_metadata(model_hash, model_name, scanner_type, scanner):
"""Refresh model metadata from CivitAI
Args:
model_hash: SHA256 hash of the model
model_name: Model name (for logging)
scanner_type: Scanner type ('lora' or 'checkpoint')
scanner: Scanner instance for this model type
Returns:
bool: True if metadata was successfully refreshed, False otherwise
"""
from ..utils.example_images_download_manager import download_progress
try:
# Find the model in the scanner cache
cache = await scanner.get_cached_data()
model_data = None
for item in cache.raw_data:
if item.get('sha256') == model_hash:
model_data = item
break
if not model_data:
logger.warning(f"Model {model_name} with hash {model_hash} not found in cache")
return False
file_path = model_data.get('file_path')
if not file_path:
logger.warning(f"Model {model_name} has no file path")
return False
# Track that we're refreshing this model
download_progress['refreshed_models'].add(model_hash)
# Use ModelRouteUtils to refresh metadata
async def update_cache_func(old_path, new_path, metadata):
return await scanner.update_single_model_cache(old_path, new_path, metadata)
success = await ModelRouteUtils.fetch_and_update_model(
model_hash,
file_path,
model_data,
update_cache_func
)
if success:
logger.info(f"Successfully refreshed metadata for {model_name}")
return True
else:
logger.warning(f"Failed to refresh metadata for {model_name}")
return False
except Exception as e:
error_msg = f"Error refreshing metadata for {model_name}: {str(e)}"
logger.error(error_msg, exc_info=True)
download_progress['errors'].append(error_msg)
download_progress['last_error'] = error_msg
return False
@staticmethod
async def get_updated_model(model_hash, scanner):
"""Get updated model data
Args:
model_hash: SHA256 hash of the model
scanner: Scanner instance
Returns:
dict: Updated model data or None if not found
"""
cache = await scanner.get_cached_data()
for item in cache.raw_data:
if item.get('sha256') == model_hash:
return item
return None
@staticmethod
async def update_metadata_from_local_examples(model_hash, model, scanner_type, scanner, model_dir):
"""Update model metadata with local example image information
Args:
model_hash: SHA256 hash of the model
model: Model data dictionary
scanner_type: Scanner type ('lora' or 'checkpoint')
scanner: Scanner instance for this model type
model_dir: Model images directory
Returns:
bool: True if metadata was successfully updated, False otherwise
"""
try:
# Collect local image paths
local_images_paths = []
if os.path.exists(model_dir):
for file in os.listdir(model_dir):
file_path = os.path.join(model_dir, file)
if os.path.isfile(file_path):
file_ext = os.path.splitext(file)[1].lower()
is_supported = (file_ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or
file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos'])
if is_supported:
local_images_paths.append(file_path)
# Check if metadata update is needed (no civitai field or empty images)
needs_update = not model.get('civitai') or not model.get('civitai', {}).get('images')
if needs_update and local_images_paths:
logger.debug(f"Found {len(local_images_paths)} local example images for {model.get('model_name')}, updating metadata")
# Create or get civitai field
if not model.get('civitai'):
model['civitai'] = {}
# Create images array
images = []
# Generate metadata for each local image/video
for path in local_images_paths:
# Determine if video or image
file_ext = os.path.splitext(path)[1].lower()
is_video = file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']
# Create image metadata entry
image_entry = {
"url": "", # Empty URL as required
"nsfwLevel": 0,
"width": 720, # Default dimensions
"height": 1280,
"type": "video" if is_video else "image",
"meta": None,
"hasMeta": False,
"hasPositivePrompt": False
}
# If it's an image, try to get actual dimensions (optional enhancement)
try:
from PIL import Image
if not is_video and os.path.exists(path):
with Image.open(path) as img:
image_entry["width"], image_entry["height"] = img.size
                    except Exception:
# If PIL fails or is unavailable, use default dimensions
pass
images.append(image_entry)
# Update the model's civitai.images field
model['civitai']['images'] = images
# Save metadata to .metadata.json file
file_path = model.get('file_path')
try:
# Create a copy of model data without 'folder' field
model_copy = model.copy()
model_copy.pop('folder', None)
# Write metadata to file
await MetadataManager.save_metadata(file_path, model_copy)
logger.info(f"Saved metadata for {model.get('model_name')}")
except Exception as e:
logger.error(f"Failed to save metadata for {model.get('model_name')}: {str(e)}")
# Save updated metadata to scanner cache
success = await scanner.update_single_model_cache(file_path, file_path, model)
if success:
logger.info(f"Successfully updated metadata for {model.get('model_name')} with {len(images)} local examples")
return True
else:
logger.warning(f"Failed to update metadata for {model.get('model_name')}")
return False
except Exception as e:
logger.error(f"Error updating metadata from local examples: {str(e)}", exc_info=True)
return False
@staticmethod
async def update_metadata_after_import(model_hash, model_data, scanner, newly_imported_paths):
"""Update model metadata after importing example images
Args:
model_hash: SHA256 hash of the model
model_data: Model data dictionary
scanner: Scanner instance (lora or checkpoint)
newly_imported_paths: List of paths to newly imported files
Returns:
tuple: (regular_images, custom_images) - Both image arrays
"""
try:
# Ensure civitai field exists in model_data
if not model_data.get('civitai'):
model_data['civitai'] = {}
# Ensure customImages array exists
if not model_data['civitai'].get('customImages'):
model_data['civitai']['customImages'] = []
# Get current customImages array
custom_images = model_data['civitai']['customImages']
# Add new image entry for each imported file
for path_tuple in newly_imported_paths:
path, short_id = path_tuple
# Determine if video or image
file_ext = os.path.splitext(path)[1].lower()
is_video = file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']
# Create image metadata entry
image_entry = {
"url": "", # Empty URL as requested
"id": short_id,
"nsfwLevel": 0,
"width": 720, # Default dimensions
"height": 1280,
"type": "video" if is_video else "image",
"meta": None,
"hasMeta": False,
"hasPositivePrompt": False
}
# Extract and parse metadata if this is an image
if not is_video:
try:
# Extract metadata from image
extracted_metadata = ExifUtils.extract_image_metadata(path)
if extracted_metadata:
# Parse the extracted metadata to get generation parameters
parsed_meta = MetadataUpdater._parse_image_metadata(extracted_metadata)
if parsed_meta:
image_entry["meta"] = parsed_meta
image_entry["hasMeta"] = True
image_entry["hasPositivePrompt"] = bool(parsed_meta.get("prompt", ""))
logger.debug(f"Extracted metadata from {os.path.basename(path)}")
except Exception as e:
logger.warning(f"Failed to extract metadata from {os.path.basename(path)}: {e}")
# If it's an image, try to get actual dimensions
try:
from PIL import Image
if not is_video and os.path.exists(path):
with Image.open(path) as img:
image_entry["width"], image_entry["height"] = img.size
except:
# If PIL fails or is unavailable, use default dimensions
pass
# Append to existing customImages array
custom_images.append(image_entry)
# Save metadata to .metadata.json file
file_path = model_data.get('file_path')
if file_path:
try:
# Create a copy of model data without 'folder' field
model_copy = model_data.copy()
model_copy.pop('folder', None)
# Write metadata to file
await MetadataManager.save_metadata(file_path, model_copy)
logger.info(f"Saved metadata for {model_data.get('model_name')}")
except Exception as e:
logger.error(f"Failed to save metadata: {str(e)}")
# Save updated metadata to scanner cache
if file_path:
await scanner.update_single_model_cache(file_path, file_path, model_data)
# Get regular images array (might be None)
regular_images = model_data['civitai'].get('images', [])
# Return both image arrays
return regular_images, custom_images
except Exception as e:
logger.error(f"Failed to update metadata after import: {e}", exc_info=True)
return [], []
@staticmethod
def _parse_image_metadata(user_comment):
"""Parse metadata from image to extract generation parameters
Args:
user_comment: Metadata string extracted from image
Returns:
dict: Parsed metadata with generation parameters
"""
if not user_comment:
return None
try:
# Initialize metadata dictionary
metadata = {}
# Split on Negative prompt if it exists
if "Negative prompt:" in user_comment:
parts = user_comment.split('Negative prompt:', 1)
prompt = parts[0].strip()
negative_and_params = parts[1] if len(parts) > 1 else ""
else:
# No negative prompt section
param_start = re.search(r'Steps: \d+', user_comment)
if param_start:
prompt = user_comment[:param_start.start()].strip()
negative_and_params = user_comment[param_start.start():]
else:
prompt = user_comment.strip()
negative_and_params = ""
# Add prompt if it's in GEN_PARAM_KEYS
if 'prompt' in GEN_PARAM_KEYS:
metadata['prompt'] = prompt
# Extract negative prompt and parameters
if negative_and_params:
# If we split on "Negative prompt:", check for params section
if "Negative prompt:" in user_comment:
param_start = re.search(r'Steps: ', negative_and_params)
if param_start:
neg_prompt = negative_and_params[:param_start.start()].strip()
if 'negative_prompt' in GEN_PARAM_KEYS:
metadata['negative_prompt'] = neg_prompt
params_section = negative_and_params[param_start.start():]
else:
if 'negative_prompt' in GEN_PARAM_KEYS:
metadata['negative_prompt'] = negative_and_params.strip()
params_section = ""
else:
# No negative prompt, entire section is params
params_section = negative_and_params
# Extract generation parameters
if params_section:
# Extract basic parameters
param_pattern = r'([A-Za-z\s]+): ([^,]+)'
params = re.findall(param_pattern, params_section)
for key, value in params:
clean_key = key.strip().lower().replace(' ', '_')
# Skip if not in recognized gen param keys
if clean_key not in GEN_PARAM_KEYS:
continue
# Convert numeric values
if clean_key in ['steps', 'seed']:
try:
metadata[clean_key] = int(value.strip())
except ValueError:
metadata[clean_key] = value.strip()
elif clean_key in ['cfg_scale']:
try:
metadata[clean_key] = float(value.strip())
except ValueError:
metadata[clean_key] = value.strip()
else:
metadata[clean_key] = value.strip()
# Extract size if available and add if a recognized key
size_match = re.search(r'Size: (\d+)x(\d+)', params_section)
if size_match and 'size' in GEN_PARAM_KEYS:
width, height = size_match.groups()
metadata['size'] = f"{width}x{height}"
# Return metadata if we have any entries
return metadata if metadata else None
except Exception as e:
logger.error(f"Error parsing image metadata: {e}", exc_info=True)
return None
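
To make the parsing rules concrete, here is an illustrative A1111-style parameter string and the rough result. The sample text is made up, and which keys survive depends on GEN_PARAM_KEYS:

sample = (
    "a portrait of a knight, ornate armor\n"
    "Negative prompt: blurry, lowres\n"
    "Steps: 30, Sampler: Euler a, CFG scale: 7.5, Seed: 12345, Size: 512x768"
)
parsed = MetadataUpdater._parse_image_metadata(sample)
# Roughly, assuming all of these keys are listed in GEN_PARAM_KEYS:
# {'prompt': 'a portrait of a knight, ornate armor',
#  'negative_prompt': 'blurry, lowres',
#  'steps': 30, 'cfg_scale': 7.5, 'seed': 12345, 'size': '512x768'}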


@@ -0,0 +1,318 @@
import asyncio
import logging
import os
import re
import json
from ..services.settings_manager import settings
from ..services.service_registry import ServiceRegistry
from ..utils.metadata_manager import MetadataManager
from ..utils.example_images_processor import ExampleImagesProcessor
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS
logger = logging.getLogger(__name__)
CURRENT_NAMING_VERSION = 2 # Increment this when naming conventions change
class ExampleImagesMigration:
"""Handles migrations for example images naming conventions"""
@staticmethod
async def check_and_run_migrations():
"""Check if migrations are needed and run them in background"""
example_images_path = settings.get('example_images_path')
if not example_images_path or not os.path.exists(example_images_path):
logger.debug("No example images path configured or path doesn't exist, skipping migrations")
return
# Check current version from progress file
current_version = 0
progress_file = os.path.join(example_images_path, '.download_progress.json')
if os.path.exists(progress_file):
try:
with open(progress_file, 'r', encoding='utf-8') as f:
progress_data = json.load(f)
current_version = progress_data.get('naming_version', 0)
except Exception as e:
logger.error(f"Failed to load progress file for migration check: {e}")
# If current version is less than target version, start migration
if current_version < CURRENT_NAMING_VERSION:
logger.info(f"Starting example images naming migration from v{current_version} to v{CURRENT_NAMING_VERSION}")
# Start migration in background task
asyncio.create_task(
ExampleImagesMigration.run_migrations(example_images_path, current_version, CURRENT_NAMING_VERSION)
)
@staticmethod
async def run_migrations(example_images_path, from_version, to_version):
"""Run necessary migrations based on version difference"""
try:
# Get all model folders
model_folders = []
for item in os.listdir(example_images_path):
item_path = os.path.join(example_images_path, item)
if os.path.isdir(item_path) and len(item) == 64: # SHA256 hash is 64 chars
model_folders.append(item_path)
logger.info(f"Found {len(model_folders)} model folders to check for migration")
# Apply migrations sequentially
if from_version < 1 and to_version >= 1:
await ExampleImagesMigration._migrate_to_v1(model_folders)
if from_version < 2 and to_version >= 2:
await ExampleImagesMigration._migrate_to_v2(model_folders)
# Update version in progress file
progress_file = os.path.join(example_images_path, '.download_progress.json')
try:
progress_data = {}
if os.path.exists(progress_file):
with open(progress_file, 'r', encoding='utf-8') as f:
progress_data = json.load(f)
progress_data['naming_version'] = to_version
with open(progress_file, 'w', encoding='utf-8') as f:
json.dump(progress_data, f, indent=2)
logger.info(f"Example images naming migration to v{to_version} completed")
except Exception as e:
logger.error(f"Failed to update version in progress file: {e}")
except Exception as e:
logger.error(f"Error during migration: {e}", exc_info=True)
@staticmethod
async def _migrate_to_v1(model_folders):
"""Migrate from 1-based to 0-based indexing"""
count = 0
for folder in model_folders:
has_one_based = False
has_zero_based = False
files_to_rename = []
# Check naming pattern in this folder
for file in os.listdir(folder):
if re.match(r'image_1\.\w+$', file):
has_one_based = True
if re.match(r'image_0\.\w+$', file):
has_zero_based = True
# Only migrate folders with 1-based indexing and no 0-based
if has_one_based and not has_zero_based:
# Create rename mapping
for file in os.listdir(folder):
match = re.match(r'image_(\d+)\.(\w+)$', file)
if match:
index = int(match.group(1))
ext = match.group(2)
if index > 0: # Only rename if index is positive
files_to_rename.append((
file,
f"image_{index-1}.{ext}"
))
# Use temporary names to avoid conflicts
for old_name, new_name in files_to_rename:
old_path = os.path.join(folder, old_name)
temp_path = os.path.join(folder, f"temp_{old_name}")
try:
os.rename(old_path, temp_path)
except Exception as e:
logger.error(f"Failed to rename {old_path} to {temp_path}: {e}")
# Rename from temporary names to final names
for old_name, new_name in files_to_rename:
temp_path = os.path.join(folder, f"temp_{old_name}")
new_path = os.path.join(folder, new_name)
try:
os.rename(temp_path, new_path)
logger.debug(f"Renamed {old_name} to {new_name} in {folder}")
except Exception as e:
logger.error(f"Failed to rename {temp_path} to {new_path}: {e}")
count += 1
# Give other tasks a chance to run
if count % 10 == 0:
await asyncio.sleep(0)
logger.info(f"Migrated {count} folders from 1-based to 0-based indexing")
@staticmethod
async def _migrate_to_v2(model_folders):
"""
Migrate to v2 naming scheme:
- Move custom examples from images array to customImages array
- Rename files from image_<index>.<ext> to custom_<short_id>.<ext>
- Add id field to each custom image entry
"""
count = 0
updated_models = 0
migration_errors = 0
# Get scanner instances
lora_scanner = await ServiceRegistry.get_lora_scanner()
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
# Wait until scanners are initialized
scanners = [lora_scanner, checkpoint_scanner]
for scanner in scanners:
if scanner.is_initializing():
logger.info("Waiting for scanners to complete initialization before starting migration...")
initialized = False
retry_count = 0
while not initialized and retry_count < 120: # Wait up to 120 seconds
await asyncio.sleep(1)
initialized = not scanner.is_initializing()
retry_count += 1
if not initialized:
logger.warning("Scanner initialization timeout - proceeding with migration anyway")
logger.info(f"Starting migration to v2 naming scheme for {len(model_folders)} model folders")
for folder in model_folders:
try:
# Extract model hash from folder name
model_hash = os.path.basename(folder)
if not model_hash or len(model_hash) != 64:
continue
# Find the model in scanner cache
model_data = None
scanner = None
for scan_obj in scanners:
if scan_obj.has_hash(model_hash):
cache = await scan_obj.get_cached_data()
for item in cache.raw_data:
if item.get('sha256') == model_hash:
model_data = item
scanner = scan_obj
break
if model_data:
break
if not model_data or not scanner:
logger.debug(f"Model with hash {model_hash} not found in cache, skipping migration")
continue
# Clone model data to avoid modifying the cache directly
model_metadata = model_data.copy()
# Check if model has civitai metadata
if not model_metadata.get('civitai'):
continue
# Get images array
images = model_metadata.get('civitai', {}).get('images', [])
if not images:
continue
# Initialize customImages array if it doesn't exist
if not model_metadata['civitai'].get('customImages'):
model_metadata['civitai']['customImages'] = []
# Find custom examples (entries with empty url)
custom_indices = []
for i, image in enumerate(images):
if image.get('url') == "":
custom_indices.append(i)
if not custom_indices:
continue
logger.debug(f"Found {len(custom_indices)} custom examples in {model_hash}")
# Process each custom example
for index in custom_indices:
try:
image_entry = images[index]
# Determine media type based on the entry type
media_type = 'videos' if image_entry.get('type') == 'video' else 'images'
extensions_to_try = SUPPORTED_MEDIA_EXTENSIONS[media_type]
# Find the image file by trying possible extensions
old_path = None
old_filename = None
found = False
for ext in extensions_to_try:
test_path = os.path.join(folder, f"image_{index}{ext}")
if os.path.exists(test_path):
old_path = test_path
old_filename = f"image_{index}{ext}"
found = True
break
if not found:
logger.warning(f"Could not find file for index {index} in {model_hash}, skipping")
continue
# Generate short ID for the custom example
short_id = ExampleImagesProcessor.generate_short_id()
# Get file extension
file_ext = os.path.splitext(old_path)[1]
# Create new filename
new_filename = f"custom_{short_id}{file_ext}"
new_path = os.path.join(folder, new_filename)
# Rename the file
try:
os.rename(old_path, new_path)
logger.debug(f"Renamed {old_filename} to {new_filename} in {folder}")
except Exception as e:
logger.error(f"Failed to rename {old_path} to {new_path}: {e}")
continue
# Create a copy of the image entry with the id field
custom_entry = image_entry.copy()
custom_entry['id'] = short_id
# Add to customImages array
model_metadata['civitai']['customImages'].append(custom_entry)
count += 1
except Exception as e:
logger.error(f"Error migrating custom example at index {index} for {model_hash}: {e}")
# Remove custom examples from the original images array
model_metadata['civitai']['images'] = [
img for i, img in enumerate(images) if i not in custom_indices
]
# Save the updated metadata
file_path = model_data.get('file_path')
if file_path:
try:
# Create a copy of model data without 'folder' field
model_copy = model_metadata.copy()
model_copy.pop('folder', None)
# Save metadata to file
await MetadataManager.save_metadata(file_path, model_copy)
# Update scanner cache
await scanner.update_single_model_cache(file_path, file_path, model_metadata)
updated_models += 1
except Exception as e:
logger.error(f"Failed to save metadata for {model_hash}: {e}")
migration_errors += 1
# Give other tasks a chance to run
if count % 10 == 0:
await asyncio.sleep(0)
except Exception as e:
logger.error(f"Error migrating folder {folder}: {e}")
migration_errors += 1
logger.info(f"Migration to v2 complete: migrated {count} custom examples across {updated_models} models with {migration_errors} errors")


@@ -0,0 +1,494 @@
import logging
import os
import re
import tempfile
import random
import shutil
import string
from aiohttp import web
from ..utils.constants import SUPPORTED_MEDIA_EXTENSIONS
from ..services.service_registry import ServiceRegistry
from ..services.settings_manager import settings
from .example_images_metadata import MetadataUpdater
from ..utils.metadata_manager import MetadataManager
logger = logging.getLogger(__name__)
class ExampleImagesProcessor:
"""Processes and manipulates example images"""
@staticmethod
def generate_short_id(length=8):
"""Generate a short random alphanumeric identifier"""
chars = string.ascii_lowercase + string.digits
return ''.join(random.choice(chars) for _ in range(length))
@staticmethod
def get_civitai_optimized_url(image_url):
"""Convert Civitai image URL to its optimized WebP version"""
base_pattern = r'(https://image\.civitai\.com/[^/]+/[^/]+)'
match = re.match(base_pattern, image_url)
if match:
base_url = match.group(1)
return f"{base_url}/optimized=true/image.webp"
return image_url
@staticmethod
async def download_model_images(model_hash, model_name, model_images, model_dir, optimize, independent_session):
"""Download images for a single model
Returns:
tuple: (success, is_stale_metadata) - whether download was successful, whether metadata is stale
"""
model_success = True
for i, image in enumerate(model_images):
image_url = image.get('url')
if not image_url:
continue
# Get image filename from URL
image_filename = os.path.basename(image_url.split('?')[0])
image_ext = os.path.splitext(image_filename)[1].lower()
# Handle images and videos
is_image = image_ext in SUPPORTED_MEDIA_EXTENSIONS['images']
is_video = image_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']
if not (is_image or is_video):
logger.debug(f"Skipping unsupported file type: {image_filename}")
continue
# Use 0-based indexing instead of 1-based indexing
save_filename = f"image_{i}{image_ext}"
# If optimizing images and this is a Civitai image, use their pre-optimized WebP version
if is_image and optimize and 'civitai.com' in image_url:
image_url = ExampleImagesProcessor.get_civitai_optimized_url(image_url)
save_filename = f"image_{i}.webp"
# Check if already downloaded
save_path = os.path.join(model_dir, save_filename)
if os.path.exists(save_path):
logger.debug(f"File already exists: {save_path}")
continue
# Download the file
try:
logger.debug(f"Downloading {save_filename} for {model_name}")
# Download directly using the independent session
async with independent_session.get(image_url, timeout=60) as response:
if response.status == 200:
with open(save_path, 'wb') as f:
async for chunk in response.content.iter_chunked(8192):
if chunk:
f.write(chunk)
elif response.status == 404:
error_msg = f"Failed to download file: {image_url}, status code: 404 - Model metadata might be stale"
logger.warning(error_msg)
model_success = False # Mark the model as failed due to 404 error
# Return early to trigger metadata refresh attempt
return False, True # (success, is_metadata_stale)
else:
error_msg = f"Failed to download file: {image_url}, status code: {response.status}"
logger.warning(error_msg)
model_success = False # Mark the model as failed
except Exception as e:
error_msg = f"Error downloading file {image_url}: {str(e)}"
logger.error(error_msg)
model_success = False # Mark the model as failed
return model_success, False # (success, is_metadata_stale)
@staticmethod
async def process_local_examples(model_file_path, model_file_name, model_name, model_dir, optimize):
"""Process local example images
Returns:
bool: True if local images were processed successfully, False otherwise
"""
try:
if not model_file_path or not os.path.exists(os.path.dirname(model_file_path)):
return False
model_dir_path = os.path.dirname(model_file_path)
local_images = []
# Look for files with pattern: filename.example.*.ext
if model_file_name:
example_prefix = f"{model_file_name}.example."
if os.path.exists(model_dir_path):
for file in os.listdir(model_dir_path):
file_lower = file.lower()
if file_lower.startswith(example_prefix.lower()):
file_ext = os.path.splitext(file_lower)[1]
is_supported = (file_ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or
file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos'])
if is_supported:
local_images.append(os.path.join(model_dir_path, file))
# Process local images if found
if local_images:
logger.info(f"Found {len(local_images)} local example images for {model_name}")
for local_image_path in local_images:
# Extract index from filename
file_name = os.path.basename(local_image_path)
example_prefix = f"{model_file_name}.example."
try:
# Extract the part between '.example.' and the file extension
index_part = file_name[len(example_prefix):].split('.')[0]
# Try to parse it as an integer
index = int(index_part)
local_ext = os.path.splitext(local_image_path)[1].lower()
save_filename = f"image_{index}{local_ext}"
except (ValueError, IndexError):
# If we can't parse the index, fall back to sequential numbering
logger.warning(f"Could not extract index from {file_name}, using sequential numbering")
local_ext = os.path.splitext(local_image_path)[1].lower()
save_filename = f"image_{len(local_images)}{local_ext}"
save_path = os.path.join(model_dir, save_filename)
# Skip if already exists in output directory
if os.path.exists(save_path):
logger.debug(f"File already exists in output: {save_path}")
continue
# Copy the file
with open(local_image_path, 'rb') as src_file:
with open(save_path, 'wb') as dst_file:
dst_file.write(src_file.read())
return True
return False
except Exception as e:
logger.error(f"Error processing local examples for {model_name}: {str(e)}")
return False
@staticmethod
async def import_images(request):
"""
Import local example images
Accepts:
- multipart/form-data form with model_hash and files fields
or
- JSON request with model_hash and file_paths
Returns:
- Success status and list of imported files
"""
try:
model_hash = None
files_to_import = []
temp_files_to_cleanup = []
# Check if it's a multipart form-data request (direct file upload)
if request.content_type and 'multipart/form-data' in request.content_type:
reader = await request.multipart()
# First get model_hash
field = await reader.next()
if field.name == 'model_hash':
model_hash = await field.text()
# Then process all files
while True:
field = await reader.next()
if field is None:
break
if field.name == 'files':
# Create a temporary file with appropriate suffix for type detection
file_name = field.filename
file_ext = os.path.splitext(file_name)[1].lower()
with tempfile.NamedTemporaryFile(suffix=file_ext, delete=False) as tmp_file:
temp_path = tmp_file.name
temp_files_to_cleanup.append(temp_path) # Track for cleanup
# Write chunks to the temporary file
while True:
chunk = await field.read_chunk()
if not chunk:
break
tmp_file.write(chunk)
# Add to the list of files to process
files_to_import.append(temp_path)
else:
# Parse JSON request (legacy method using file paths)
data = await request.json()
model_hash = data.get('model_hash')
files_to_import = data.get('file_paths', [])
if not model_hash:
return web.json_response({
'success': False,
'error': 'Missing model_hash parameter'
}, status=400)
if not files_to_import:
return web.json_response({
'success': False,
'error': 'No files provided to import'
}, status=400)
# Get example images path
example_images_path = settings.get('example_images_path')
if not example_images_path:
return web.json_response({
'success': False,
'error': 'No example images path configured'
}, status=400)
# Find the model and get current metadata
lora_scanner = await ServiceRegistry.get_lora_scanner()
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
model_data = None
scanner = None
# Check both scanners to find the model
for scan_obj in [lora_scanner, checkpoint_scanner]:
cache = await scan_obj.get_cached_data()
for item in cache.raw_data:
if item.get('sha256') == model_hash:
model_data = item
scanner = scan_obj
break
if model_data:
break
if not model_data:
return web.json_response({
'success': False,
'error': f"Model with hash {model_hash} not found in cache"
}, status=404)
# Create model folder
model_folder = os.path.join(example_images_path, model_hash)
os.makedirs(model_folder, exist_ok=True)
imported_files = []
errors = []
newly_imported_paths = []
# Process each file path
for file_path in files_to_import:
try:
# Ensure the file exists
if not os.path.isfile(file_path):
errors.append(f"File not found: {file_path}")
continue
# Check if file type is supported
file_ext = os.path.splitext(file_path)[1].lower()
if not (file_ext in SUPPORTED_MEDIA_EXTENSIONS['images'] or
file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']):
errors.append(f"Unsupported file type: {file_path}")
continue
# Generate new filename using short ID instead of UUID
short_id = ExampleImagesProcessor.generate_short_id()
new_filename = f"custom_{short_id}{file_ext}"
dest_path = os.path.join(model_folder, new_filename)
                    # Copy the file into the model folder
                    shutil.copy2(file_path, dest_path)
# Store both the dest_path and the short_id
newly_imported_paths.append((dest_path, short_id))
# Add to imported files list
imported_files.append({
'name': new_filename,
'path': f'/example_images_static/{model_hash}/{new_filename}',
'extension': file_ext,
'is_video': file_ext in SUPPORTED_MEDIA_EXTENSIONS['videos']
})
except Exception as e:
errors.append(f"Error importing {file_path}: {str(e)}")
# Update metadata with new example images
regular_images, custom_images = await MetadataUpdater.update_metadata_after_import(
model_hash,
model_data,
scanner,
newly_imported_paths
)
return web.json_response({
'success': len(imported_files) > 0,
'message': f'Successfully imported {len(imported_files)} files' +
(f' with {len(errors)} errors' if errors else ''),
'files': imported_files,
'errors': errors,
'regular_images': regular_images,
'custom_images': custom_images,
"model_file_path": model_data.get('file_path', ''),
})
except Exception as e:
logger.error(f"Failed to import example images: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
finally:
# Clean up temporary files
for temp_file in temp_files_to_cleanup:
try:
os.remove(temp_file)
except Exception as e:
logger.error(f"Failed to remove temporary file {temp_file}: {e}")
@staticmethod
async def delete_custom_image(request):
"""
Delete a custom example image for a model
Accepts:
- JSON request with model_hash and short_id
Returns:
- Success status and updated image lists
"""
try:
# Parse request data
data = await request.json()
model_hash = data.get('model_hash')
short_id = data.get('short_id')
if not model_hash or not short_id:
return web.json_response({
'success': False,
'error': 'Missing required parameters: model_hash and short_id'
}, status=400)
# Get example images path
example_images_path = settings.get('example_images_path')
if not example_images_path:
return web.json_response({
'success': False,
'error': 'No example images path configured'
}, status=400)
# Find the model and get current metadata
lora_scanner = await ServiceRegistry.get_lora_scanner()
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
model_data = None
scanner = None
# Check both scanners to find the model
for scan_obj in [lora_scanner, checkpoint_scanner]:
if scan_obj.has_hash(model_hash):
cache = await scan_obj.get_cached_data()
for item in cache.raw_data:
if item.get('sha256') == model_hash:
model_data = item
scanner = scan_obj
break
if model_data:
break
if not model_data:
return web.json_response({
'success': False,
'error': f"Model with hash {model_hash} not found in cache"
}, status=404)
# Check if model has custom images
if not model_data.get('civitai', {}).get('customImages'):
return web.json_response({
'success': False,
'error': f"Model has no custom images"
}, status=404)
# Find the custom image with matching short_id
custom_images = model_data['civitai']['customImages']
matching_image = None
new_custom_images = []
for image in custom_images:
if image.get('id') == short_id:
matching_image = image
else:
new_custom_images.append(image)
if not matching_image:
return web.json_response({
'success': False,
'error': f"Custom image with id {short_id} not found"
}, status=404)
# Find and delete the actual file
model_folder = os.path.join(example_images_path, model_hash)
file_deleted = False
if os.path.exists(model_folder):
for filename in os.listdir(model_folder):
if f"custom_{short_id}" in filename:
file_path = os.path.join(model_folder, filename)
try:
os.remove(file_path)
file_deleted = True
logger.info(f"Deleted custom example file: {file_path}")
break
except Exception as e:
return web.json_response({
'success': False,
'error': f"Failed to delete file: {str(e)}"
}, status=500)
if not file_deleted:
logger.warning(f"File for custom example with id {short_id} not found, but metadata will still be updated")
# Update metadata
model_data['civitai']['customImages'] = new_custom_images
# Save updated metadata to file
file_path = model_data.get('file_path')
if file_path:
try:
# Create a copy of model data without 'folder' field
model_copy = model_data.copy()
model_copy.pop('folder', None)
# Write metadata to file
await MetadataManager.save_metadata(file_path, model_copy)
logger.debug(f"Saved updated metadata for {model_data.get('model_name')}")
except Exception as e:
logger.error(f"Failed to save metadata: {str(e)}")
return web.json_response({
'success': False,
'error': f"Failed to save metadata: {str(e)}"
}, status=500)
# Update cache
await scanner.update_single_model_cache(file_path, file_path, model_data)
# Get regular images array (might be None)
regular_images = model_data['civitai'].get('images', [])
return web.json_response({
'success': True,
'regular_images': regular_images,
'custom_images': new_custom_images,
'model_file_path': model_data.get('file_path', '')
})
except Exception as e:
logger.error(f"Failed to delete custom example image: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
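
As a quick illustration of the URL rewriting in get_civitai_optimized_url (the URL below is made up but follows the image.civitai.com pattern the regex expects):

src = "https://image.civitai.com/container-id/image-id/width=450/example.jpeg"
ExampleImagesProcessor.get_civitai_optimized_url(src)
# -> "https://image.civitai.com/container-id/image-id/optimized=true/image.webp"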


@@ -31,7 +31,7 @@ class ExifUtils:
# Method 2: Check EXIF UserComment field
if img.format not in ['JPEG', 'TIFF', 'WEBP']:
# For non-JPEG/TIFF/WEBP images, try to get EXIF through PIL
-                exif = img._getexif()
+                exif = img.getexif()
if exif and piexif.ExifIFD.UserComment in exif:
user_comment = exif[piexif.ExifIFD.UserComment]
if isinstance(user_comment, bytes):
@@ -147,7 +147,7 @@ class ExifUtils:
"file_name": lora.get("file_name", ""),
"hash": lora.get("hash", "").lower() if lora.get("hash") else "",
"strength": float(lora.get("strength", 1.0)),
"modelVersionId": lora.get("modelVersionId", ""),
"modelVersionId": lora.get("modelVersionId", 0),
"modelName": lora.get("modelName", ""),
"modelVersionName": lora.get("modelVersionName", ""),
}


@@ -1,13 +1,7 @@
import logging
import os
import hashlib
import json
import time
from typing import Dict, Optional, Type
from .model_utils import determine_base_model
from .lora_metadata import extract_lora_metadata, extract_checkpoint_metadata
from .models import BaseModelMetadata, LoraMetadata, CheckpointMetadata
from .constants import PREVIEW_EXTENSIONS, CARD_PREVIEW_WIDTH
from .exif_utils import ExifUtils
@@ -24,7 +18,12 @@ async def calculate_sha256(file_path: str) -> str:
def find_preview_file(base_name: str, dir_path: str) -> str:
"""Find preview file for given base name in directory"""
-    for ext in PREVIEW_EXTENSIONS:
+    temp_extensions = PREVIEW_EXTENSIONS.copy()
+    # Add example extension for compatibility
+    # https://github.com/willmiao/ComfyUI-Lora-Manager/issues/225
+    # The preview image will be optimized to lora-name.webp, so it won't affect other logic
+    for ext in temp_extensions:
full_pattern = os.path.join(dir_path, f"{base_name}{ext}")
if os.path.exists(full_pattern):
# Check if this is an image and not already webp
@@ -42,7 +41,7 @@ def find_preview_file(base_name: str, dir_path: str) -> str:
target_width=CARD_PREVIEW_WIDTH,
format='webp',
quality=85,
-        preserve_metadata=False # Changed from True to False
+        preserve_metadata=False
)
# Save the optimized webp file
@@ -63,188 +62,4 @@ def find_preview_file(base_name: str, dir_path: str) -> str:
def normalize_path(path: str) -> str:
"""Normalize file path to use forward slashes"""
return path.replace(os.sep, "/") if path else path
async def get_file_info(file_path: str, model_class: Type[BaseModelMetadata] = LoraMetadata) -> Optional[BaseModelMetadata]:
"""Get basic file information as a model metadata object"""
# First check if file actually exists and resolve symlinks
try:
real_path = os.path.realpath(file_path)
if not os.path.exists(real_path):
return None
except Exception as e:
logger.error(f"Error checking file existence for {file_path}: {e}")
return None
base_name = os.path.splitext(os.path.basename(file_path))[0]
dir_path = os.path.dirname(file_path)
preview_url = find_preview_file(base_name, dir_path)
# Check if a .json file exists with SHA256 hash to avoid recalculation
json_path = f"{os.path.splitext(file_path)[0]}.json"
sha256 = None
if os.path.exists(json_path):
try:
with open(json_path, 'r', encoding='utf-8') as f:
json_data = json.load(f)
if 'sha256' in json_data:
sha256 = json_data['sha256'].lower()
logger.debug(f"Using SHA256 from .json file for {file_path}")
except Exception as e:
logger.error(f"Error reading .json file for {file_path}: {e}")
# If SHA256 is still not found, check for a .sha256 file
if sha256 is None:
sha256_file = f"{os.path.splitext(file_path)[0]}.sha256"
if os.path.exists(sha256_file):
try:
with open(sha256_file, 'r', encoding='utf-8') as f:
sha256 = f.read().strip().lower()
logger.debug(f"Using SHA256 from .sha256 file for {file_path}")
except Exception as e:
logger.error(f"Error reading .sha256 file for {file_path}: {e}")
try:
# If we didn't get SHA256 from the .json file, calculate it
if not sha256:
start_time = time.time()
sha256 = await calculate_sha256(real_path)
logger.debug(f"Calculated SHA256 for {file_path} in {time.time() - start_time:.2f} seconds")
# Create default metadata based on model class
if model_class == CheckpointMetadata:
metadata = CheckpointMetadata(
file_name=base_name,
model_name=base_name,
file_path=normalize_path(file_path),
size=os.path.getsize(real_path),
modified=os.path.getmtime(real_path),
sha256=sha256,
base_model="Unknown", # Will be updated later
preview_url=normalize_path(preview_url),
tags=[],
modelDescription="",
model_type="checkpoint"
)
# Extract checkpoint-specific metadata
# model_info = await extract_checkpoint_metadata(real_path)
# metadata.base_model = model_info['base_model']
# if 'model_type' in model_info:
# metadata.model_type = model_info['model_type']
else: # Default to LoraMetadata
metadata = LoraMetadata(
file_name=base_name,
model_name=base_name,
file_path=normalize_path(file_path),
size=os.path.getsize(real_path),
modified=os.path.getmtime(real_path),
sha256=sha256,
base_model="Unknown", # Will be updated later
usage_tips="{}",
preview_url=normalize_path(preview_url),
tags=[],
modelDescription=""
)
# Extract lora-specific metadata
model_info = await extract_lora_metadata(real_path)
metadata.base_model = model_info['base_model']
# Save metadata to file
await save_metadata(file_path, metadata)
return metadata
except Exception as e:
logger.error(f"Error getting file info for {file_path}: {e}")
return None
async def save_metadata(file_path: str, metadata: BaseModelMetadata) -> None:
"""Save metadata to .metadata.json file"""
metadata_path = f"{os.path.splitext(file_path)[0]}.metadata.json"
try:
metadata_dict = metadata.to_dict()
metadata_dict['file_path'] = normalize_path(metadata_dict['file_path'])
metadata_dict['preview_url'] = normalize_path(metadata_dict['preview_url'])
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata_dict, f, indent=2, ensure_ascii=False)
except Exception as e:
print(f"Error saving metadata to {metadata_path}: {str(e)}")
async def load_metadata(file_path: str, model_class: Type[BaseModelMetadata] = LoraMetadata) -> Optional[BaseModelMetadata]:
"""Load metadata from .metadata.json file"""
metadata_path = f"{os.path.splitext(file_path)[0]}.metadata.json"
try:
if os.path.exists(metadata_path):
with open(metadata_path, 'r', encoding='utf-8') as f:
data = json.load(f)
needs_update = False
# Check and normalize base model name
normalized_base_model = determine_base_model(data['base_model'])
if data['base_model'] != normalized_base_model:
data['base_model'] = normalized_base_model
needs_update = True
# Compare paths without extensions
stored_path_base = os.path.splitext(data['file_path'])[0]
current_path_base = os.path.splitext(normalize_path(file_path))[0]
if stored_path_base != current_path_base:
data['file_path'] = normalize_path(file_path)
needs_update = True
# TODO: optimize preview image to webp format if not already done
preview_url = data.get('preview_url', '')
if not preview_url or not os.path.exists(preview_url):
base_name = os.path.splitext(os.path.basename(file_path))[0]
dir_path = os.path.dirname(file_path)
new_preview_url = normalize_path(find_preview_file(base_name, dir_path))
if new_preview_url != preview_url:
data['preview_url'] = new_preview_url
needs_update = True
else:
# Compare preview paths without extensions
stored_preview_base = os.path.splitext(preview_url)[0]
current_preview_base = os.path.splitext(normalize_path(preview_url))[0]
if stored_preview_base != current_preview_base:
data['preview_url'] = normalize_path(preview_url)
needs_update = True
# Ensure all fields are present
if 'tags' not in data:
data['tags'] = []
needs_update = True
if 'modelDescription' not in data:
data['modelDescription'] = ""
needs_update = True
# For checkpoint metadata
if model_class == CheckpointMetadata and 'model_type' not in data:
data['model_type'] = "checkpoint"
needs_update = True
# For lora metadata
if model_class == LoraMetadata and 'usage_tips' not in data:
data['usage_tips'] = "{}"
needs_update = True
if needs_update:
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(data, f, indent=2, ensure_ascii=False)
return model_class.from_dict(data)
except Exception as e:
print(f"Error loading metadata from {metadata_path}: {str(e)}")
return None
async def update_civitai_metadata(file_path: str, civitai_data: Dict) -> None:
"""Update metadata file with Civitai data"""
metadata = await load_metadata(file_path)
metadata['civitai'] = civitai_data
await save_metadata(file_path, metadata)

View File

@@ -1,7 +1,11 @@
from safetensors import safe_open
from typing import Dict
from typing import Dict, List, Tuple
from .model_utils import determine_base_model
import os
import logging
import json
logger = logging.getLogger(__name__)
async def extract_lora_metadata(file_path: str) -> Dict:
"""Extract essential metadata from safetensors file"""
@@ -77,4 +81,53 @@ async def extract_checkpoint_metadata(file_path: str) -> dict:
except Exception as e:
logger.error(f"Error extracting checkpoint metadata for {file_path}: {e}")
# Return default values
        return {'base_model': 'Unknown', 'model_type': 'checkpoint'}
async def extract_trained_words(file_path: str) -> Tuple[List[Tuple[str, int]], str]:
"""Extract trained words from a safetensors file and sort by frequency
Args:
file_path: Path to the safetensors file
Returns:
Tuple of:
- List of (word, frequency) tuples sorted by frequency (highest first)
- class_tokens value (or None if not found)
"""
class_tokens = None
try:
with safe_open(file_path, framework="pt", device="cpu") as f:
metadata = f.metadata()
# Extract class_tokens from ss_datasets if present
if metadata and "ss_datasets" in metadata:
try:
datasets_data = json.loads(metadata["ss_datasets"])
# Look for class_tokens in the first subset
if datasets_data and isinstance(datasets_data, list) and datasets_data[0].get("subsets"):
subsets = datasets_data[0].get("subsets", [])
if subsets and isinstance(subsets, list) and len(subsets) > 0:
class_tokens = subsets[0].get("class_tokens")
except Exception as e:
logger.error(f"Error parsing ss_datasets for class_tokens: {str(e)}")
# Extract tag frequency as before
if metadata and "ss_tag_frequency" in metadata:
# Parse the JSON string into a dictionary
tag_data = json.loads(metadata["ss_tag_frequency"])
# The structure may have an outer key (like "image_dir" or "img")
# We need to get the inner dictionary with the actual word frequencies
if tag_data:
# Get the first key (usually "image_dir" or "img")
first_key = list(tag_data.keys())[0]
words_dict = tag_data[first_key]
# Sort words by frequency (highest first)
sorted_words = sorted(words_dict.items(), key=lambda x: x[1], reverse=True)
return sorted_words, class_tokens
except Exception as e:
logger.error(f"Error extracting trained words from {file_path}: {str(e)}")
return [], class_tokens
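A short driver for the extractor above; the import path and file name are assumptions:

import asyncio

from py.utils.lora_metadata import extract_trained_words  # path assumed

async def main():
    words, class_tokens = await extract_trained_words("loras/example_lora.safetensors")
    print(f"class_tokens: {class_tokens}")
    # Ten most frequent training tags, highest first.
    for word, freq in words[:10]:
        print(f"{freq:6d}  {word}")

asyncio.run(main())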

View File

@@ -0,0 +1,292 @@
import os
import json
import shutil
import logging
from typing import Dict, Optional, Type, Union
from .models import BaseModelMetadata, LoraMetadata
from .file_utils import normalize_path, find_preview_file, calculate_sha256
from .lora_metadata import extract_lora_metadata, extract_checkpoint_metadata
logger = logging.getLogger(__name__)
class MetadataManager:
"""
Centralized manager for all metadata operations.
This class is responsible for:
1. Loading metadata safely with fallback mechanisms
2. Saving metadata with atomic operations and backups
3. Creating default metadata for models
4. Handling unknown fields gracefully
"""
@staticmethod
async def load_metadata(file_path: str, model_class: Type[BaseModelMetadata] = LoraMetadata) -> Optional[BaseModelMetadata]:
"""
Load metadata with robust error handling and data preservation.
Args:
file_path: Path to the model file
model_class: Class to instantiate (LoraMetadata, CheckpointMetadata, etc.)
Returns:
BaseModelMetadata instance or None if file doesn't exist
"""
metadata_path = f"{os.path.splitext(file_path)[0]}.metadata.json"
backup_path = f"{metadata_path}.bak"
# Try loading the main metadata file
if os.path.exists(metadata_path):
try:
with open(metadata_path, 'r', encoding='utf-8') as f:
data = json.load(f)
# Create model instance
metadata = model_class.from_dict(data)
# Normalize paths
await MetadataManager._normalize_metadata_paths(metadata, file_path)
return metadata
except json.JSONDecodeError:
# JSON parsing error - try to restore from backup
logger.warning(f"Invalid JSON in metadata file: {metadata_path}")
return await MetadataManager._restore_from_backup(backup_path, file_path, model_class)
except Exception as e:
# Other errors might be due to unknown fields or schema changes
logger.error(f"Error loading metadata from {metadata_path}: {str(e)}")
return await MetadataManager._restore_from_backup(backup_path, file_path, model_class)
return None
@staticmethod
async def _restore_from_backup(backup_path: str, file_path: str, model_class: Type[BaseModelMetadata]) -> Optional[BaseModelMetadata]:
"""
Try to restore metadata from backup file
Args:
backup_path: Path to backup file
file_path: Path to the original model file
model_class: Class to instantiate
Returns:
BaseModelMetadata instance or None if restoration fails
"""
if os.path.exists(backup_path):
try:
logger.info(f"Attempting to restore metadata from backup: {backup_path}")
with open(backup_path, 'r', encoding='utf-8') as f:
data = json.load(f)
# Process data similarly to normal loading
metadata = model_class.from_dict(data)
await MetadataManager._normalize_metadata_paths(metadata, file_path)
return metadata
except Exception as e:
logger.error(f"Failed to restore from backup: {str(e)}")
return None
@staticmethod
async def save_metadata(path: str, metadata: Union[BaseModelMetadata, Dict], create_backup: bool = False) -> bool:
"""
Save metadata with atomic write operations and backup creation.
Args:
path: Path to the model file or directly to the metadata file
metadata: Metadata to save (either BaseModelMetadata object or dict)
create_backup: Force a fresh backup of an existing metadata file; even when False, a backup is created if none exists yet
Returns:
bool: Success or failure
"""
# Determine if the input is a metadata path or a model file path
if path.endswith('.metadata.json'):
metadata_path = path
else:
# Use existing logic for model file paths
file_path = path
metadata_path = f"{os.path.splitext(file_path)[0]}.metadata.json"
temp_path = f"{metadata_path}.tmp"
backup_path = f"{metadata_path}.bak"
try:
# Create backup if file exists and either:
# 1. create_backup is True, OR
# 2. backup file doesn't already exist
if os.path.exists(metadata_path) and (create_backup or not os.path.exists(backup_path)):
try:
shutil.copy2(metadata_path, backup_path)
logger.debug(f"Created metadata backup at: {backup_path}")
except Exception as e:
logger.warning(f"Failed to create metadata backup: {str(e)}")
# Convert to dict if needed
if isinstance(metadata, BaseModelMetadata):
metadata_dict = metadata.to_dict()
# Preserve unknown fields if present
if hasattr(metadata, '_unknown_fields'):
metadata_dict.update(metadata._unknown_fields)
else:
metadata_dict = metadata.copy()
# Normalize paths
if 'file_path' in metadata_dict:
metadata_dict['file_path'] = normalize_path(metadata_dict['file_path'])
if 'preview_url' in metadata_dict:
metadata_dict['preview_url'] = normalize_path(metadata_dict['preview_url'])
# Write to temporary file first
with open(temp_path, 'w', encoding='utf-8') as f:
json.dump(metadata_dict, f, indent=2, ensure_ascii=False)
# Atomic rename operation
os.replace(temp_path, metadata_path)
return True
except Exception as e:
logger.error(f"Error saving metadata to {metadata_path}: {str(e)}")
# Clean up temporary file if it exists
if os.path.exists(temp_path):
try:
os.remove(temp_path)
except OSError:
pass  # best-effort cleanup; ignore temp-file removal failures
return False
@staticmethod
async def create_default_metadata(file_path: str, model_class: Type[BaseModelMetadata] = LoraMetadata) -> Optional[BaseModelMetadata]:
"""
Create basic metadata structure for a model file.
This replaces the old get_file_info function with a more appropriately named method.
Args:
file_path: Path to the model file
model_class: Class to instantiate
Returns:
BaseModelMetadata instance or None if file doesn't exist
"""
# First check if file actually exists and resolve symlinks
try:
real_path = os.path.realpath(file_path)
if not os.path.exists(real_path):
return None
except Exception as e:
logger.error(f"Error checking file existence for {file_path}: {e}")
return None
try:
base_name = os.path.splitext(os.path.basename(file_path))[0]
dir_path = os.path.dirname(file_path)
# Find preview image
preview_url = find_preview_file(base_name, dir_path)
# Calculate file hash
sha256 = await calculate_sha256(real_path)
# Create instance based on model type
if model_class.__name__ == "CheckpointMetadata":
metadata = model_class(
file_name=base_name,
model_name=base_name,
file_path=normalize_path(file_path),
size=os.path.getsize(real_path),
modified=os.path.getmtime(real_path),
sha256=sha256,
base_model="Unknown",
preview_url=normalize_path(preview_url),
tags=[],
modelDescription="",
model_type="checkpoint",
from_civitai=True
)
else: # Default to LoraMetadata
metadata = model_class(
file_name=base_name,
model_name=base_name,
file_path=normalize_path(file_path),
size=os.path.getsize(real_path),
modified=os.path.getmtime(real_path),
sha256=sha256,
base_model="Unknown",
preview_url=normalize_path(preview_url),
tags=[],
modelDescription="",
from_civitai=True,
usage_tips="{}"
)
# Try to extract model-specific metadata
await MetadataManager._enrich_metadata(metadata, real_path)
# Save the created metadata
await MetadataManager.save_metadata(file_path, metadata, create_backup=False)
return metadata
except Exception as e:
logger.error(f"Error creating default metadata for {file_path}: {e}")
return None
@staticmethod
async def _enrich_metadata(metadata: BaseModelMetadata, file_path: str) -> None:
"""
Enrich metadata with model-specific information
Args:
metadata: Metadata to enrich
file_path: Path to the model file
"""
try:
if metadata.__class__.__name__ == "LoraMetadata":
model_info = await extract_lora_metadata(file_path)
metadata.base_model = model_info['base_model']
# elif metadata.__class__.__name__ == "CheckpointMetadata":
# model_info = await extract_checkpoint_metadata(file_path)
# metadata.base_model = model_info['base_model']
# if 'model_type' in model_info:
# metadata.model_type = model_info['model_type']
except Exception as e:
logger.error(f"Error enriching metadata: {str(e)}")
@staticmethod
async def _normalize_metadata_paths(metadata: BaseModelMetadata, file_path: str) -> None:
"""
Normalize paths in metadata object
Args:
metadata: Metadata object to update
file_path: Current file path for the model
"""
need_update = False
# Check if file path is different from what's in metadata
if normalize_path(file_path) != metadata.file_path:
metadata.file_path = normalize_path(file_path)
need_update = True
# Check if preview exists at the current location
preview_url = metadata.preview_url
if preview_url:
# Get directory parts of both paths
file_dir = os.path.dirname(file_path)
preview_dir = os.path.dirname(preview_url)
# Update preview if it doesn't exist OR if model and preview are in different directories
if not os.path.exists(preview_url) or file_dir != preview_dir:
base_name = os.path.splitext(os.path.basename(file_path))[0]
dir_path = os.path.dirname(file_path)
new_preview_url = find_preview_file(base_name, dir_path)
if new_preview_url:
metadata.preview_url = normalize_path(new_preview_url)
need_update = True
# If path attributes were changed, save the metadata back to disk
if need_update:
await MetadataManager.save_metadata(file_path, metadata, create_backup=False)
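A minimal sketch of the load/create/save cycle this class implements, assuming the module paths shown in this diff and an example file name:

import asyncio

from py.utils.metadata_manager import MetadataManager
from py.utils.models import LoraMetadata

async def main():
    path = "loras/example_lora.safetensors"

    # Reads example_lora.metadata.json; on corrupt JSON it falls back to the
    # .bak file, and returns None when neither can be loaded.
    meta = await MetadataManager.load_metadata(path, LoraMetadata)
    if meta is None:
        # Compute defaults (hash, size, preview lookup) and persist them.
        meta = await MetadataManager.create_default_metadata(path, LoraMetadata)

    if meta is not None:
        meta.favorite = True
        # Atomic write: .tmp file plus os.replace, with an optional .bak copy.
        ok = await MetadataManager.save_metadata(path, meta, create_backup=True)
        print("saved:", ok)

asyncio.run(main())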

View File

@@ -1,5 +1,5 @@
from dataclasses import dataclass, asdict
from typing import Dict, Optional, List
from dataclasses import dataclass, asdict, field
from typing import Dict, Optional, List, Any
from datetime import datetime
import os
from .model_utils import determine_base_model
@@ -21,6 +21,10 @@ class BaseModelMetadata:
civitai: Optional[Dict] = None # Civitai API data if available
tags: List[str] = None # Model tags
modelDescription: str = "" # Full model description
civitai_deleted: bool = False # Whether deleted from Civitai
favorite: bool = False # Whether the model is a favorite
exclude: bool = False # Whether to exclude this model from the cache
_unknown_fields: Dict[str, Any] = field(default_factory=dict, repr=False, compare=False) # Store unknown fields
def __post_init__(self):
# Initialize empty lists to avoid mutable default parameter issue
@@ -31,11 +35,43 @@ class BaseModelMetadata:
def from_dict(cls, data: Dict) -> 'BaseModelMetadata':
"""Create instance from dictionary"""
data_copy = data.copy()
return cls(**data_copy)
# Use cached fields if available, otherwise compute them
if not hasattr(cls, '_known_fields_cache'):
known_fields = set()
for c in cls.mro():
if hasattr(c, '__annotations__'):
known_fields.update(c.__annotations__.keys())
cls._known_fields_cache = known_fields
known_fields = cls._known_fields_cache
# Extract fields that match our class attributes
fields_to_use = {k: v for k, v in data_copy.items() if k in known_fields}
# Store unknown fields separately
unknown_fields = {k: v for k, v in data_copy.items() if k not in known_fields and not k.startswith('_')}
# Create instance with known fields
instance = cls(**fields_to_use)
# Add unknown fields as a separate attribute
instance._unknown_fields = unknown_fields
return instance
def to_dict(self) -> Dict:
"""Convert to dictionary for JSON serialization"""
return asdict(self)
result = asdict(self)
# Remove private fields
result = {k: v for k, v in result.items() if not k.startswith('_')}
# Add back unknown fields if they exist
if hasattr(self, '_unknown_fields'):
result.update(self._unknown_fields)
return result
@property
def modified_datetime(self) -> datetime:
@@ -64,6 +100,15 @@ class LoraMetadata(BaseModelMetadata):
file_name = file_info['name']
base_model = determine_base_model(version_info.get('baseModel', ''))
# Extract tags and description if available
tags = []
description = ""
if 'model' in version_info:
if 'tags' in version_info['model']:
tags = version_info['model']['tags']
if 'description' in version_info['model']:
description = version_info['model']['description']
return cls(
file_name=os.path.splitext(file_name)[0],
model_name=version_info.get('model').get('name', os.path.splitext(file_name)[0]),
@@ -75,7 +120,9 @@ class LoraMetadata(BaseModelMetadata):
preview_url=None, # Will be updated after preview download
preview_nsfw_level=0, # Will be updated after preview download
from_civitai=True,
civitai=version_info
civitai=version_info,
tags=tags,
modelDescription=description
)
@dataclass
@@ -90,6 +137,15 @@ class CheckpointMetadata(BaseModelMetadata):
base_model = determine_base_model(version_info.get('baseModel', ''))
model_type = version_info.get('type', 'checkpoint')
# Extract tags and description if available
tags = []
description = ""
if 'model' in version_info:
if 'tags' in version_info['model']:
tags = version_info['model']['tags']
if 'description' in version_info['model']:
description = version_info['model']['description']
return cls(
file_name=os.path.splitext(file_name)[0],
model_name=version_info.get('model').get('name', os.path.splitext(file_name)[0]),
@@ -102,6 +158,8 @@ class CheckpointMetadata(BaseModelMetadata):
preview_nsfw_level=0,
from_civitai=True,
civitai=version_info,
model_type=model_type
model_type=model_type,
tags=tags,
modelDescription=description
)
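The from_dict/to_dict pair above makes the schema forward-compatible: keys the dataclass does not declare are parked in _unknown_fields and merged back on serialization. A sketch, assuming the required constructor fields match those used in create_default_metadata:

from py.utils.models import LoraMetadata  # path assumed

data = {
    "file_name": "example_lora",
    "model_name": "Example LoRA",
    "file_path": "loras/example_lora.safetensors",
    "size": 151_000_000,
    "modified": 1_719_500_000.0,
    "sha256": "abc123",
    "base_model": "SDXL 1.0",
    "preview_url": "loras/example_lora.webp",
    "preview_nsfw_level": 0,
    "from_civitai": False,
    "usage_tips": "{}",
    # Written by a hypothetical newer release; unknown to this schema.
    "some_future_field": {"added_by": "v-next"},
}

meta = LoraMetadata.from_dict(data)
assert meta._unknown_fields == {"some_future_field": {"added_by": "v-next"}}
# to_dict() drops private fields, then merges the unknown keys back in.
assert meta.to_dict()["some_future_field"] == {"added_by": "v-next"}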

File diff suppressed because it is too large

View File

@@ -9,6 +9,7 @@ from .constants import PREVIEW_EXTENSIONS, CARD_PREVIEW_WIDTH
from ..config import config
from ..services.civitai_client import CivitaiClient
from ..utils.exif_utils import ExifUtils
from ..utils.metadata_manager import MetadataManager
from ..services.download_manager import DownloadManager
logger = logging.getLogger(__name__)
@@ -32,27 +33,61 @@ class ModelRouteUtils:
async def handle_not_found_on_civitai(metadata_path: str, local_metadata: Dict) -> None:
"""Handle case when model is not found on CivitAI"""
local_metadata['from_civitai'] = False
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(local_metadata, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(metadata_path, local_metadata)
@staticmethod
async def update_model_metadata(metadata_path: str, local_metadata: Dict,
civitai_metadata: Dict, client: CivitaiClient) -> None:
"""Update local metadata with CivitAI data"""
local_metadata['civitai'] = civitai_metadata
# Save existing trainedWords and customImages if they exist
existing_civitai = local_metadata.get('civitai') or {} # Use empty dict if None
# Create a new civitai metadata by updating existing with new
merged_civitai = existing_civitai.copy()
merged_civitai.update(civitai_metadata)
# Special handling for trainedWords - ensure we don't lose any existing trained words
if 'trainedWords' in existing_civitai:
existing_trained_words = existing_civitai.get('trainedWords', [])
new_trained_words = civitai_metadata.get('trainedWords', [])
# Use a set to combine words without duplicates, then convert back to list
merged_trained_words = list(set(existing_trained_words + new_trained_words))
merged_civitai['trainedWords'] = merged_trained_words
# Update local metadata with merged civitai data
local_metadata['civitai'] = merged_civitai
local_metadata['from_civitai'] = True
# Update model name if available
if 'model' in civitai_metadata:
if civitai_metadata.get('model', {}).get('name'):
local_metadata['model_name'] = civitai_metadata['model']['name']
# Fetch additional model metadata (description and tags) if we have model ID
model_id = civitai_metadata['modelId']
if model_id:
model_metadata, _ = await client.get_model_metadata(str(model_id))
if model_metadata:
local_metadata['modelDescription'] = model_metadata.get('description', '')
local_metadata['tags'] = model_metadata.get('tags', [])
# Extract model metadata directly from civitai_metadata if available
model_metadata = None
if 'model' in civitai_metadata and civitai_metadata.get('model'):
# Data is already available in the response from get_model_version
model_metadata = {
'description': civitai_metadata.get('model', {}).get('description', ''),
'tags': civitai_metadata.get('model', {}).get('tags', []),
'creator': civitai_metadata.get('creator', {})
}
# If we have modelId and don't have enough metadata, fetch additional data
if not model_metadata or not model_metadata.get('description'):
model_id = civitai_metadata.get('modelId')
if model_id:
fetched_metadata, _ = await client.get_model_metadata(str(model_id))
if fetched_metadata:
model_metadata = fetched_metadata
# Update local metadata with the model information
if model_metadata:
local_metadata['modelDescription'] = model_metadata.get('description', '')
local_metadata['tags'] = model_metadata.get('tags', [])
if 'creator' in model_metadata and model_metadata['creator']:
local_metadata['civitai']['creator'] = model_metadata['creator']
# Update base model
local_metadata['base_model'] = determine_base_model(civitai_metadata.get('baseModel'))
@@ -60,7 +95,7 @@ class ModelRouteUtils:
# Update preview if needed
if not local_metadata.get('preview_url') or not os.path.exists(local_metadata['preview_url']):
first_preview = next((img for img in civitai_metadata.get('images', [])), None)
if first_preview:
# Determine if content is video or image
is_video = first_preview['type'] == 'video'
@@ -119,8 +154,7 @@ class ModelRouteUtils:
local_metadata['preview_nsfw_level'] = first_preview.get('nsfwLevel', 0)
# Save updated metadata
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(local_metadata, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(metadata_path, local_metadata, True)
@staticmethod
async def fetch_and_update_model(
@@ -142,6 +176,11 @@ class ModelRouteUtils:
"""
client = CivitaiClient()
try:
# Validate input parameters
if not isinstance(model_data, dict):
logger.error(f"Invalid model_data type: {type(model_data)}")
return False
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
# Check if model metadata exists
@@ -153,8 +192,7 @@ class ModelRouteUtils:
# Mark as not from CivitAI if not found
local_metadata['from_civitai'] = False
model_data['from_civitai'] = False
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(local_metadata, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(file_path, local_metadata)
return False
# Update metadata
@@ -165,21 +203,25 @@ class ModelRouteUtils:
client
)
# Update cache object directly
model_data.update({
# Update cache object directly using safe .get() method
update_dict = {
'model_name': local_metadata.get('model_name'),
'preview_url': local_metadata.get('preview_url'),
'from_civitai': True,
'civitai': civitai_metadata
})
}
model_data.update(update_dict)
# Update cache using the provided function
await update_cache_func(file_path, file_path, local_metadata)
return True
except KeyError as e:
logger.error(f"Error fetching CivitAI data - Missing key: {e} in model_data={model_data}")
return False
except Exception as e:
logger.error(f"Error fetching CivitAI data: {e}")
logger.error(f"Error fetching CivitAI data: {str(e)}", exc_info=True) # Include stack trace
return False
finally:
await client.close()
@@ -193,18 +235,17 @@ class ModelRouteUtils:
fields = [
"id", "modelId", "name", "createdAt", "updatedAt",
"publishedAt", "trainedWords", "baseModel", "description",
"model", "images"
"model", "images", "customImages", "creator"
]
return {k: data[k] for k in fields if k in data}
@staticmethod
async def delete_model_files(target_dir: str, file_name: str, file_monitor=None) -> List[str]:
async def delete_model_files(target_dir: str, file_name: str) -> List[str]:
"""Delete model and associated files
Args:
target_dir: Directory containing the model files
file_name: Base name of the model file without extension
file_monitor: Optional file monitor to ignore delete events
Returns:
List of deleted file paths
@@ -222,11 +263,7 @@ class ModelRouteUtils:
main_file = patterns[0]
main_path = os.path.join(target_dir, main_file).replace(os.sep, '/')
if os.path.exists(main_path):
# Notify file monitor to ignore delete event if available
if file_monitor:
file_monitor.handler.add_ignore_path(main_path, 0)
if os.path.exists(main_path):
# Delete file
os.remove(main_path)
deleted.append(main_path)
@@ -247,10 +284,12 @@ class ModelRouteUtils:
@staticmethod
def get_multipart_ext(filename):
"""Get extension that may have multiple parts like .metadata.json"""
"""Get extension that may have multiple parts like .metadata.json or .metadata.json.bak"""
parts = filename.split(".")
if len(parts) > 2: # If contains multi-part extension
if len(parts) == 3: # Two-part extension
return "." + ".".join(parts[-2:]) # Take the last two parts, like ".metadata.json"
elif len(parts) >= 4: # Three-part (or longer) extension
return "." + ".".join(parts[-3:]) # Take the last three parts, like ".metadata.json.bak"
return os.path.splitext(filename)[1] # Otherwise take the regular extension, like ".safetensors"
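A standalone re-implementation of the rule above, for illustration. One caveat carried over from the original: a base name that itself contains dots (e.g. "v1.5.model.safetensors") also trips the multi-part branches.

import os

def get_multipart_ext(filename: str) -> str:
    parts = filename.split(".")
    if len(parts) == 3:        # e.g. "model.metadata.json"
        return "." + ".".join(parts[-2:])
    elif len(parts) >= 4:      # e.g. "model.metadata.json.bak"
        return "." + ".".join(parts[-3:])
    return os.path.splitext(filename)[1]

assert get_multipart_ext("model.safetensors") == ".safetensors"
assert get_multipart_ext("model.metadata.json") == ".metadata.json"
assert get_multipart_ext("model.metadata.json.bak") == ".metadata.json.bak"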
# New common endpoint handlers
@@ -275,13 +314,9 @@ class ModelRouteUtils:
target_dir = os.path.dirname(file_path)
file_name = os.path.splitext(os.path.basename(file_path))[0]
# Get the file monitor from the scanner if available
file_monitor = getattr(scanner, 'file_monitor', None)
deleted_files = await ModelRouteUtils.delete_model_files(
target_dir,
file_name,
file_monitor
file_name
)
# Remove from cache
@@ -292,6 +327,8 @@ class ModelRouteUtils:
# Update hash index if available
if hasattr(scanner, '_hash_index') and scanner._hash_index:
scanner._hash_index.remove_by_path(file_path)
await scanner._save_cache_to_disk()
return web.json_response({
'success': True,
@@ -311,7 +348,7 @@ class ModelRouteUtils:
scanner: The model scanner instance with cache management methods
Returns:
web.Response: The HTTP response
web.Response: The HTTP response with metadata on success
"""
try:
data = await request.json()
@@ -336,7 +373,8 @@ class ModelRouteUtils:
# Update the cache
await scanner.update_single_model_cache(data['file_path'], data['file_path'], local_metadata)
return web.json_response({"success": True})
# Return the updated metadata along with success status
return web.json_response({"success": True, "metadata": local_metadata})
finally:
await client.close()
@@ -346,15 +384,7 @@ class ModelRouteUtils:
@staticmethod
async def handle_replace_preview(request: web.Request, scanner) -> web.Response:
"""Handle preview image replacement request
Args:
request: The aiohttp request
scanner: The model scanner instance with methods to update cache
Returns:
web.Response: The HTTP response
"""
"""Handle preview image replacement request"""
try:
reader = await request.multipart()
@@ -363,6 +393,15 @@ class ModelRouteUtils:
if field.name != 'preview_file':
raise ValueError("Expected 'preview_file' field")
content_type = field.headers.get('Content-Type', 'image/png')
# Try to get original filename if available
content_disposition = field.headers.get('Content-Disposition', '')
original_filename = None
import re
filename_match = re.search(r'filename="(.*?)"', content_disposition)
if filename_match:
original_filename = filename_match.group(1)
preview_data = await field.read()
# Read model path
@@ -371,17 +410,47 @@ class ModelRouteUtils:
raise ValueError("Expected 'model_path' field")
model_path = (await field.read()).decode()
# Read NSFW level
nsfw_level = 0 # Default to 0 (unknown)
field = await reader.next()
if field and field.name == 'nsfw_level':
try:
nsfw_level = int((await field.read()).decode())
except (ValueError, TypeError):
logger.warning("Invalid NSFW level format, using default 0")
# Save preview file
base_name = os.path.splitext(os.path.basename(model_path))[0]
folder = os.path.dirname(model_path)
# Determine if content is video or image
# Determine format based on content type and original filename
is_gif = False
if original_filename and original_filename.lower().endswith('.gif'):
is_gif = True
elif content_type.lower() == 'image/gif':
is_gif = True
# Determine if content is video or image and handle specific formats
if content_type.startswith('video/'):
# For videos, keep original format and use .mp4 extension
extension = '.mp4'
# For videos, preserve original format if possible
if original_filename:
extension = os.path.splitext(original_filename)[1].lower()
# Default to .mp4 if no extension or unrecognized
if not extension or extension not in ['.mp4', '.webm', '.mov', '.avi']:
extension = '.mp4'
else:
# Try to determine extension from content type
if 'webm' in content_type:
extension = '.webm'
else:
extension = '.mp4' # Default
optimized_data = preview_data # No optimization for videos
elif is_gif:
# Preserve GIF format without optimization
extension = '.gif'
optimized_data = preview_data
else:
# For images, optimize and convert to WebP
# For other images, optimize and convert to WebP
optimized_data, _ = ExifUtils.optimize_image(
image_data=preview_data,
target_width=CARD_PREVIEW_WIDTH,
@@ -389,41 +458,111 @@ class ModelRouteUtils:
quality=85,
preserve_metadata=False
)
extension = '.webp' # Use .webp without .preview part
extension = '.webp'
# Delete any existing preview files for this model
for ext in PREVIEW_EXTENSIONS:
existing_preview = os.path.join(folder, base_name + ext)
if os.path.exists(existing_preview):
try:
os.remove(existing_preview)
logger.debug(f"Deleted existing preview: {existing_preview}")
except Exception as e:
logger.warning(f"Failed to delete existing preview {existing_preview}: {e}")
preview_path = os.path.join(folder, base_name + extension).replace(os.sep, '/')
with open(preview_path, 'wb') as f:
f.write(optimized_data)
# Update preview path in metadata
# Update preview path and NSFW level in metadata
metadata_path = os.path.splitext(model_path)[0] + '.metadata.json'
if os.path.exists(metadata_path):
try:
with open(metadata_path, 'r', encoding='utf-8') as f:
metadata = json.load(f)
# Update preview_url directly in the metadata dict
# Update preview_url and preview_nsfw_level in the metadata dict
metadata['preview_url'] = preview_path
metadata['preview_nsfw_level'] = nsfw_level
with open(metadata_path, 'w', encoding='utf-8') as f:
json.dump(metadata, f, indent=2, ensure_ascii=False)
await MetadataManager.save_metadata(model_path, metadata)
except Exception as e:
logger.error(f"Error updating metadata: {e}")
# Update preview URL in scanner cache
if hasattr(scanner, 'update_preview_in_cache'):
await scanner.update_preview_in_cache(model_path, preview_path)
await scanner.update_preview_in_cache(model_path, preview_path, nsfw_level)
return web.json_response({
"success": True,
"preview_url": config.get_preview_static_url(preview_path)
"preview_url": config.get_preview_static_url(preview_path),
"preview_nsfw_level": nsfw_level
})
except Exception as e:
logger.error(f"Error replacing preview: {e}", exc_info=True)
return web.Response(text=str(e), status=500)
@staticmethod
async def handle_exclude_model(request: web.Request, scanner) -> web.Response:
"""Handle model exclusion request
Args:
request: The aiohttp request
scanner: The model scanner instance with cache management methods
Returns:
web.Response: The HTTP response
"""
try:
data = await request.json()
file_path = data.get('file_path')
if not file_path:
return web.Response(text='Model path is required', status=400)
# Update metadata to mark as excluded
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
metadata['exclude'] = True
# Save updated metadata
await MetadataManager.save_metadata(file_path, metadata)
# Update cache
cache = await scanner.get_cached_data()
# Find and remove model from cache
model_to_remove = next((item for item in cache.raw_data if item['file_path'] == file_path), None)
if model_to_remove:
# Update tags count
for tag in model_to_remove.get('tags', []):
if tag in scanner._tags_count:
scanner._tags_count[tag] = max(0, scanner._tags_count[tag] - 1)
if scanner._tags_count[tag] == 0:
del scanner._tags_count[tag]
# Remove from hash index if available
if hasattr(scanner, '_hash_index') and scanner._hash_index:
scanner._hash_index.remove_by_path(file_path)
# Remove from cache data
cache.raw_data = [item for item in cache.raw_data if item['file_path'] != file_path]
await cache.resort()
# Add to excluded models list
scanner._excluded_models.append(file_path)
await scanner._save_cache_to_disk()
return web.json_response({
'success': True,
'message': f"Model {os.path.basename(file_path)} excluded"
})
except Exception as e:
logger.error(f"Error excluding model: {e}", exc_info=True)
return web.Response(text=str(e), status=500)
@staticmethod
async def handle_download_model(request: web.Request, download_manager: DownloadManager, model_type="lora") -> web.Response:
"""Handle model download request
@@ -500,4 +639,329 @@ class ModelRouteUtils:
)
logger.error(f"Error downloading {model_type}: {error_message}")
return web.Response(status=500, text=error_message)
@staticmethod
async def handle_bulk_delete_models(request: web.Request, scanner) -> web.Response:
"""Handle bulk deletion of models
Args:
request: The aiohttp request
scanner: The model scanner instance with cache management methods
Returns:
web.Response: The HTTP response
"""
try:
data = await request.json()
file_paths = data.get('file_paths', [])
if not file_paths:
return web.json_response({
'success': False,
'error': 'No file paths provided for deletion'
}, status=400)
# Use the scanner's bulk delete method to handle all cache and file operations
result = await scanner.bulk_delete_models(file_paths)
return web.json_response({
'success': result.get('success', False),
'total_deleted': result.get('total_deleted', 0),
'total_attempted': result.get('total_attempted', len(file_paths)),
'results': result.get('results', [])
})
except Exception as e:
logger.error(f"Error in bulk delete: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
@staticmethod
async def handle_relink_civitai(request: web.Request, scanner) -> web.Response:
"""Handle CivitAI metadata re-linking request by model ID and/or version ID
Args:
request: The aiohttp request
scanner: The model scanner instance with cache management methods
Returns:
web.Response: The HTTP response
"""
try:
data = await request.json()
file_path = data.get('file_path')
model_id = data.get('model_id')
model_version_id = data.get('model_version_id')
if not file_path or not model_id:
return web.json_response({"success": False, "error": "Both file_path and model_id are required"}, status=400)
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
# Check if model metadata exists
local_metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
# Create a client for fetching from Civitai
client = await CivitaiClient.get_instance()
try:
# Fetch metadata using get_model_version which includes more comprehensive data
civitai_metadata = await client.get_model_version(model_id, model_version_id)
if not civitai_metadata:
error_msg = f"Model version not found on CivitAI for ID: {model_id}"
if model_version_id:
error_msg += f" with version: {model_version_id}"
return web.json_response({"success": False, "error": error_msg}, status=404)
# Try to find the primary model file to get the SHA256 hash
primary_model_file = None
for file in civitai_metadata.get('files', []):
if file.get('primary', False) and file.get('type') == 'Model':
primary_model_file = file
break
# Update the SHA256 hash in local metadata if available
if primary_model_file and primary_model_file.get('hashes', {}).get('SHA256'):
local_metadata['sha256'] = primary_model_file['hashes']['SHA256'].lower()
# Update metadata with CivitAI information
await ModelRouteUtils.update_model_metadata(metadata_path, local_metadata, civitai_metadata, client)
# Update the cache
await scanner.update_single_model_cache(file_path, file_path, local_metadata)
return web.json_response({
"success": True,
"message": f"Model successfully re-linked to Civitai model {model_id}" +
(f" version {model_version_id}" if model_version_id else ""),
"hash": local_metadata.get('sha256', '')
})
finally:
await client.close()
except Exception as e:
logger.error(f"Error re-linking to CivitAI: {e}", exc_info=True)
return web.json_response({"success": False, "error": str(e)}, status=500)
@staticmethod
async def handle_verify_duplicates(request: web.Request, scanner) -> web.Response:
"""Handle verification of duplicate model hashes
Args:
request: The aiohttp request
scanner: The model scanner instance with cache management methods
Returns:
web.Response: The HTTP response with verification results
"""
try:
data = await request.json()
file_paths = data.get('file_paths', [])
if not file_paths:
return web.json_response({
'success': False,
'error': 'No file paths provided for verification'
}, status=400)
# Results tracking
results = {
'verified_as_duplicates': True, # Start true, set to false if any mismatch
'mismatched_files': [],
'new_hash_map': {}
}
# Get expected hash from the first file's metadata
expected_hash = None
first_metadata_path = os.path.splitext(file_paths[0])[0] + '.metadata.json'
first_metadata = await ModelRouteUtils.load_local_metadata(first_metadata_path)
if first_metadata and 'sha256' in first_metadata:
expected_hash = first_metadata['sha256'].lower()
# Process each file
for file_path in file_paths:
# Skip files that don't exist
if not os.path.exists(file_path):
continue
# Calculate actual hash
try:
from .file_utils import calculate_sha256
actual_hash = await calculate_sha256(file_path)
# Get metadata
metadata_path = os.path.splitext(file_path)[0] + '.metadata.json'
metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
# Compare hashes
stored_hash = metadata.get('sha256', '').lower()
# Set expected hash from first file if not yet set
if not expected_hash:
expected_hash = stored_hash
# Check if hash matches expected hash
if actual_hash != expected_hash:
results['verified_as_duplicates'] = False
results['mismatched_files'].append(file_path)
results['new_hash_map'][file_path] = actual_hash
# Check if stored hash needs updating
if actual_hash != stored_hash:
# Update metadata with actual hash
metadata['sha256'] = actual_hash
# Save updated metadata
await MetadataManager.save_metadata(file_path, metadata)
# Update cache
await scanner.update_single_model_cache(file_path, file_path, metadata)
except Exception as e:
logger.error(f"Error verifying hash for {file_path}: {e}")
results['mismatched_files'].append(file_path)
results['new_hash_map'][file_path] = "error_calculating_hash"
results['verified_as_duplicates'] = False
return web.json_response({
'success': True,
**results
})
except Exception as e:
logger.error(f"Error verifying duplicate models: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
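A client-side sketch of driving the handler above. The route path is an assumption; only the handler, not its registration, appears in this diff:

import asyncio
import aiohttp

async def main():
    payload = {
        "file_paths": [
            "loras/copy_a.safetensors",
            "loras/copy_b.safetensors",
        ]
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://127.0.0.1:8188/api/loras/verify-duplicates",  # assumed route
            json=payload,
        ) as resp:
            result = await resp.json()
    # Response shape per the handler: verified_as_duplicates plus
    # mismatched_files and new_hash_map keyed by file path.
    print(result["verified_as_duplicates"], result["mismatched_files"])

asyncio.run(main())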
@staticmethod
async def handle_rename_model(request: web.Request, scanner) -> web.Response:
"""Handle renaming a model file and its associated files
Args:
request: The aiohttp request
scanner: The model scanner instance
Returns:
web.Response: The HTTP response
"""
try:
data = await request.json()
file_path = data.get('file_path')
new_file_name = data.get('new_file_name')
if not file_path or not new_file_name:
return web.json_response({
'success': False,
'error': 'File path and new file name are required'
}, status=400)
# Validate the new file name (no path separators or invalid characters)
invalid_chars = ['/', '\\', ':', '*', '?', '"', '<', '>', '|']
if any(char in new_file_name for char in invalid_chars):
return web.json_response({
'success': False,
'error': 'Invalid characters in file name'
}, status=400)
# Get the directory and current file name
target_dir = os.path.dirname(file_path)
old_file_name = os.path.splitext(os.path.basename(file_path))[0]
# Check if the target file already exists
new_file_path = os.path.join(target_dir, f"{new_file_name}.safetensors").replace(os.sep, '/')
if os.path.exists(new_file_path):
return web.json_response({
'success': False,
'error': 'A file with this name already exists'
}, status=400)
# Define the patterns for associated files
patterns = [
f"{old_file_name}.safetensors", # Required
f"{old_file_name}.metadata.json",
f"{old_file_name}.metadata.json.bak",
]
# Add all preview file extensions
for ext in PREVIEW_EXTENSIONS:
patterns.append(f"{old_file_name}{ext}")
# Find all matching files
existing_files = []
for pattern in patterns:
path = os.path.join(target_dir, pattern)
if os.path.exists(path):
existing_files.append((path, pattern))
# Get the hash from the main file to update hash index
hash_value = None
metadata = None
metadata_path = os.path.join(target_dir, f"{old_file_name}.metadata.json")
if os.path.exists(metadata_path):
metadata = await ModelRouteUtils.load_local_metadata(metadata_path)
hash_value = metadata.get('sha256')
# Rename all files
renamed_files = []
new_metadata_path = None
for old_path, pattern in existing_files:
# Get the file extension like .safetensors or .metadata.json
ext = ModelRouteUtils.get_multipart_ext(pattern)
# Create the new path
new_path = os.path.join(target_dir, f"{new_file_name}{ext}").replace(os.sep, '/')
# Rename the file
os.rename(old_path, new_path)
renamed_files.append(new_path)
# Keep track of metadata path for later update
if ext == '.metadata.json':
new_metadata_path = new_path
# Update the metadata file with new file name and paths
if new_metadata_path and metadata:
# Update file_name, file_path and preview_url in metadata
metadata['file_name'] = new_file_name
metadata['file_path'] = new_file_path
# Update preview_url if it exists
if 'preview_url' in metadata and metadata['preview_url']:
old_preview = metadata['preview_url']
ext = ModelRouteUtils.get_multipart_ext(old_preview)
new_preview = os.path.join(target_dir, f"{new_file_name}{ext}").replace(os.sep, '/')
metadata['preview_url'] = new_preview
# Save updated metadata
await MetadataManager.save_metadata(new_file_path, metadata)
# Update the scanner cache
if metadata:
await scanner.update_single_model_cache(file_path, new_file_path, metadata)
# Update recipe files and cache if hash is available and recipe_scanner exists
if hash_value and hasattr(scanner, 'update_lora_filename_by_hash'):
recipe_scanner = await ServiceRegistry.get_recipe_scanner()
if recipe_scanner:
recipes_updated, cache_updated = await recipe_scanner.update_lora_filename_by_hash(hash_value, new_file_name)
logger.info(f"Updated {recipes_updated} recipe files and {cache_updated} cache entries for renamed model")
return web.json_response({
'success': True,
'new_file_path': new_file_path,
'renamed_files': renamed_files,
'reload_required': False
})
except Exception as e:
logger.error(f"Error renaming model: {e}", exc_info=True)
return web.json_response({
'success': False,
'error': str(e)
}, status=500)
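And the matching call for the rename handler; again the route path is assumed:

import asyncio
import aiohttp

async def main():
    payload = {
        "file_path": "loras/old_name.safetensors",
        "new_file_name": "new_name",  # must not contain path separators
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "http://127.0.0.1:8188/api/loras/rename",  # assumed route
            json=payload,
        ) as resp:
            result = await resp.json()
    if result["success"]:
        print("renamed to", result["new_file_path"])
        print("touched files:", result["renamed_files"])

asyncio.run(main())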

py/utils/usage_stats.py (new file)
View File

@@ -0,0 +1,376 @@
import os
import json
import sys
import time
import asyncio
import logging
import datetime
import shutil
from typing import Dict, Set
from ..config import config
from ..services.service_registry import ServiceRegistry
# Check if running in standalone mode
standalone_mode = 'nodes' not in sys.modules
if not standalone_mode:
from ..metadata_collector.metadata_registry import MetadataRegistry
from ..metadata_collector.constants import MODELS, LORAS
logger = logging.getLogger(__name__)
class UsageStats:
"""Track usage statistics for models and save to JSON"""
_instance = None
_lock = asyncio.Lock() # For thread safety
# Default stats file name
STATS_FILENAME = "lora_manager_stats.json"
BACKUP_SUFFIX = ".backup"
def __new__(cls):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._instance._initialized = False
return cls._instance
def __init__(self):
if self._initialized:
return
# Initialize stats storage
self.stats = {
"checkpoints": {}, # sha256 -> { total: count, history: { date: count } }
"loras": {}, # sha256 -> { total: count, history: { date: count } }
"total_executions": 0,
"last_save_time": 0
}
# Queue for prompt_ids to process
self.pending_prompt_ids = set()
# Load existing stats if available
self._stats_file_path = self._get_stats_file_path()
self._load_stats()
# Save interval in seconds
self.save_interval = 90 # 1.5 minutes
# Start background task to process queued prompt_ids
self._bg_task = asyncio.create_task(self._background_processor())
self._initialized = True
logger.info("Usage statistics tracker initialized")
def _get_stats_file_path(self) -> str:
"""Get the path to the stats JSON file"""
if not config.loras_roots or len(config.loras_roots) == 0:
# Fallback to temporary directory if no lora roots
return os.path.join(config.temp_directory, self.STATS_FILENAME)
# Use the first lora root
return os.path.join(config.loras_roots[0], self.STATS_FILENAME)
def _backup_old_stats(self):
"""Backup the old stats file before conversion"""
if os.path.exists(self._stats_file_path):
backup_path = f"{self._stats_file_path}{self.BACKUP_SUFFIX}"
try:
shutil.copy2(self._stats_file_path, backup_path)
logger.info(f"Backed up old stats file to {backup_path}")
return True
except Exception as e:
logger.error(f"Failed to backup stats file: {e}")
return False
def _convert_old_format(self, old_stats):
"""Convert old stats format to new format with history"""
new_stats = {
"checkpoints": {},
"loras": {},
"total_executions": old_stats.get("total_executions", 0),
"last_save_time": old_stats.get("last_save_time", time.time())
}
# Get today's date in YYYY-MM-DD format
today = datetime.datetime.now().strftime("%Y-%m-%d")
# Convert checkpoint stats
if "checkpoints" in old_stats and isinstance(old_stats["checkpoints"], dict):
for hash_id, count in old_stats["checkpoints"].items():
new_stats["checkpoints"][hash_id] = {
"total": count,
"history": {
today: count
}
}
# Convert lora stats
if "loras" in old_stats and isinstance(old_stats["loras"], dict):
for hash_id, count in old_stats["loras"].items():
new_stats["loras"][hash_id] = {
"total": count,
"history": {
today: count
}
}
logger.info("Successfully converted stats from old format to new format with history")
return new_stats
def _is_old_format(self, stats):
"""Check if the stats are in the old format (direct count values)"""
# Check if any lora or checkpoint entry is a direct number instead of an object
if "loras" in stats and isinstance(stats["loras"], dict):
for hash_id, data in stats["loras"].items():
if isinstance(data, (int, float)):
return True
if "checkpoints" in stats and isinstance(stats["checkpoints"], dict):
for hash_id, data in stats["checkpoints"].items():
if isinstance(data, (int, float)):
return True
return False
def _load_stats(self):
"""Load existing statistics from file"""
try:
if os.path.exists(self._stats_file_path):
with open(self._stats_file_path, 'r', encoding='utf-8') as f:
loaded_stats = json.load(f)
# Check if old format and needs conversion
if self._is_old_format(loaded_stats):
logger.info("Detected old stats format, performing conversion")
self._backup_old_stats()
self.stats = self._convert_old_format(loaded_stats)
else:
# Update our stats with loaded data (already in new format)
if isinstance(loaded_stats, dict):
# Update individual sections to maintain structure
if "checkpoints" in loaded_stats and isinstance(loaded_stats["checkpoints"], dict):
self.stats["checkpoints"] = loaded_stats["checkpoints"]
if "loras" in loaded_stats and isinstance(loaded_stats["loras"], dict):
self.stats["loras"] = loaded_stats["loras"]
if "total_executions" in loaded_stats:
self.stats["total_executions"] = loaded_stats["total_executions"]
if "last_save_time" in loaded_stats:
self.stats["last_save_time"] = loaded_stats["last_save_time"]
logger.info(f"Loaded usage statistics from {self._stats_file_path}")
except Exception as e:
logger.error(f"Error loading usage statistics: {e}")
async def save_stats(self, force=False):
"""Save statistics to file"""
try:
# Only save if it's been at least save_interval since last save or force is True
current_time = time.time()
if not force and (current_time - self.stats.get("last_save_time", 0)) < self.save_interval:
return False
# Use a lock to prevent concurrent writes
async with self._lock:
# Update last save time
self.stats["last_save_time"] = current_time
# Create directory if it doesn't exist
os.makedirs(os.path.dirname(self._stats_file_path), exist_ok=True)
# Write to a temporary file first, then move it to avoid corruption
temp_path = f"{self._stats_file_path}.tmp"
with open(temp_path, 'w', encoding='utf-8') as f:
json.dump(self.stats, f, indent=2, ensure_ascii=False)
# Replace the old file with the new one
os.replace(temp_path, self._stats_file_path)
logger.debug(f"Saved usage statistics to {self._stats_file_path}")
return True
except Exception as e:
logger.error(f"Error saving usage statistics: {e}", exc_info=True)
return False
def register_execution(self, prompt_id):
"""Register a completed execution by prompt_id for later processing"""
if prompt_id:
self.pending_prompt_ids.add(prompt_id)
async def _background_processor(self):
"""Background task to process queued prompt_ids"""
try:
while True:
# Wait a short interval before checking for new prompt_ids
await asyncio.sleep(5) # Check every 5 seconds
# Process any pending prompt_ids
if self.pending_prompt_ids:
async with self._lock:
# Get a copy of the set and clear original
prompt_ids = self.pending_prompt_ids.copy()
self.pending_prompt_ids.clear()
# Process each prompt_id
registry = MetadataRegistry()
for prompt_id in prompt_ids:
try:
metadata = registry.get_metadata(prompt_id)
await self._process_metadata(metadata)
except Exception as e:
logger.error(f"Error processing prompt_id {prompt_id}: {e}")
# Periodically save stats
await self.save_stats()
except asyncio.CancelledError:
# Task was cancelled, clean up
await self.save_stats(force=True)
except Exception as e:
logger.error(f"Error in background processing task: {e}", exc_info=True)
# Restart the task after a delay if it fails
asyncio.create_task(self._restart_background_task())
async def _restart_background_task(self):
"""Restart the background task after a delay"""
await asyncio.sleep(30) # Wait 30 seconds before restarting
self._bg_task = asyncio.create_task(self._background_processor())
async def _process_metadata(self, metadata):
"""Process metadata from an execution"""
if not metadata or not isinstance(metadata, dict):
return
# Increment total executions count
self.stats["total_executions"] += 1
# Get today's date in YYYY-MM-DD format
today = datetime.datetime.now().strftime("%Y-%m-%d")
# Process checkpoints
if MODELS in metadata and isinstance(metadata[MODELS], dict):
await self._process_checkpoints(metadata[MODELS], today)
# Process loras
if LORAS in metadata and isinstance(metadata[LORAS], dict):
await self._process_loras(metadata[LORAS], today)
async def _process_checkpoints(self, models_data, today_date):
"""Process checkpoint models from metadata"""
try:
# Get checkpoint scanner service
checkpoint_scanner = await ServiceRegistry.get_checkpoint_scanner()
if not checkpoint_scanner:
logger.warning("Checkpoint scanner not available for usage tracking")
return
for node_id, model_info in models_data.items():
if not isinstance(model_info, dict):
continue
# Check if this is a checkpoint model
model_type = model_info.get("type")
if model_type == "checkpoint":
model_name = model_info.get("name")
if not model_name:
continue
# Clean up filename (remove extension if present)
model_filename = os.path.splitext(os.path.basename(model_name))[0]
# Get hash for this checkpoint
model_hash = checkpoint_scanner.get_hash_by_filename(model_filename)
if model_hash:
# Update stats for this checkpoint with date tracking
if model_hash not in self.stats["checkpoints"]:
self.stats["checkpoints"][model_hash] = {
"total": 0,
"history": {}
}
# Increment total count
self.stats["checkpoints"][model_hash]["total"] += 1
# Increment today's count
if today_date not in self.stats["checkpoints"][model_hash]["history"]:
self.stats["checkpoints"][model_hash]["history"][today_date] = 0
self.stats["checkpoints"][model_hash]["history"][today_date] += 1
except Exception as e:
logger.error(f"Error processing checkpoint usage: {e}", exc_info=True)
async def _process_loras(self, loras_data, today_date):
"""Process LoRA models from metadata"""
try:
# Get LoRA scanner service
lora_scanner = await ServiceRegistry.get_lora_scanner()
if not lora_scanner:
logger.warning("LoRA scanner not available for usage tracking")
return
for node_id, lora_info in loras_data.items():
if not isinstance(lora_info, dict):
continue
# Get the list of LoRAs from standardized format
lora_list = lora_info.get("lora_list", [])
for lora in lora_list:
if not isinstance(lora, dict):
continue
lora_name = lora.get("name")
if not lora_name:
continue
# Get hash for this LoRA
lora_hash = lora_scanner.get_hash_by_filename(lora_name)
if lora_hash:
# Update stats for this LoRA with date tracking
if lora_hash not in self.stats["loras"]:
self.stats["loras"][lora_hash] = {
"total": 0,
"history": {}
}
# Increment total count
self.stats["loras"][lora_hash]["total"] += 1
# Increment today's count
if today_date not in self.stats["loras"][lora_hash]["history"]:
self.stats["loras"][lora_hash]["history"][today_date] = 0
self.stats["loras"][lora_hash]["history"][today_date] += 1
except Exception as e:
logger.error(f"Error processing LoRA usage: {e}", exc_info=True)
async def get_stats(self):
"""Get current usage statistics"""
return self.stats
async def get_model_usage_count(self, model_type, sha256):
"""Get usage count for a specific model by hash"""
if model_type == "checkpoint":
if sha256 in self.stats["checkpoints"]:
return self.stats["checkpoints"][sha256]["total"]
elif model_type == "lora":
if sha256 in self.stats["loras"]:
return self.stats["loras"][sha256]["total"]
return 0
async def process_execution(self, prompt_id):
"""Process a prompt execution immediately (synchronous approach)"""
if not prompt_id:
return
try:
# Process metadata for this prompt_id
registry = MetadataRegistry()
metadata = registry.get_metadata(prompt_id)
if metadata:
await self._process_metadata(metadata)
# Save stats if needed
await self.save_stats()
except Exception as e:
logger.error(f"Error processing prompt_id {prompt_id}: {e}", exc_info=True)

View File

@@ -114,3 +114,49 @@ def fuzzy_match(text: str, pattern: str, threshold: float = 0.7) -> bool:
# All words found either as substrings or fuzzy matches
return True
def calculate_recipe_fingerprint(loras):
"""
Calculate a unique fingerprint for a recipe based on its LoRAs.
The fingerprint is created by sorting LoRA hashes, filtering invalid entries,
normalizing strength values to 2 decimal places, and joining in format:
hash1:strength1|hash2:strength2|...
Args:
loras (list): List of LoRA dictionaries with hash and strength values
Returns:
str: The calculated fingerprint
"""
if not loras:
return ""
# Filter valid entries and extract hash and strength
valid_loras = []
for lora in loras:
# Skip excluded loras
if lora.get("exclude", False):
continue
# Get the hash - use modelVersionId as fallback if hash is empty
hash_value = lora.get("hash", "").lower()
if not hash_value and lora.get("isDeleted", False) and lora.get("modelVersionId"):
hash_value = str(lora.get("modelVersionId"))
# Skip entries without a valid hash
if not hash_value:
continue
# Normalize strength to 2 decimal places (check both strength and weight fields)
strength = round(float(lora.get("strength", lora.get("weight", 1.0))), 2)
valid_loras.append((hash_value, strength))
# Sort by hash
valid_loras.sort()
# Join in format hash1:strength1|hash2:strength2|...
fingerprint = "|".join([f"{hash_value}:{strength}" for hash_value, strength in valid_loras])
return fingerprint
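A worked example of the rule above: excluded entries are dropped, a deleted LoRA falls back to its modelVersionId, hashes are lower-cased and sorted, and strengths are rounded to two decimals. Import path assumed:

from py.utils.utils import calculate_recipe_fingerprint  # path assumed

loras = [
    {"hash": "BBBB", "strength": 0.8},                      # lower-cased to "bbbb"
    {"hash": "aaaa", "weight": 1.0},                        # falls back to "weight"
    {"hash": "", "isDeleted": True, "modelVersionId": 42},  # id stands in for hash
    {"hash": "cccc", "exclude": True},                      # skipped entirely
]

print(calculate_recipe_fingerprint(loras))
# -> 42:1.0|aaaa:1.0|bbbb:0.8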

View File

@@ -1,3 +0,0 @@
"""
ComfyUI workflow parsing module to extract generation parameters
"""

View File

@@ -1,58 +0,0 @@
"""
Command-line interface for the ComfyUI workflow parser
"""
import argparse
import json
import os
import logging
import sys
from .parser import parse_workflow
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[logging.StreamHandler()]
)
logger = logging.getLogger(__name__)
def main():
"""Entry point for the CLI"""
parser = argparse.ArgumentParser(description='Parse ComfyUI workflow files')
parser.add_argument('input', help='Input workflow JSON file path')
parser.add_argument('-o', '--output', help='Output JSON file path')
parser.add_argument('-p', '--pretty', action='store_true', help='Pretty print JSON output')
parser.add_argument('--debug', action='store_true', help='Enable debug logging')
args = parser.parse_args()
# Set logging level
if args.debug:
logging.getLogger().setLevel(logging.DEBUG)
# Validate input file
if not os.path.isfile(args.input):
logger.error(f"Input file not found: {args.input}")
sys.exit(1)
# Parse workflow
try:
result = parse_workflow(args.input, args.output)
# Print result to console if output file not specified
if not args.output:
if args.pretty:
print(json.dumps(result, indent=4))
else:
print(json.dumps(result))
else:
logger.info(f"Output saved to: {args.output}")
except Exception as e:
logger.error(f"Error parsing workflow: {e}")
if args.debug:
import traceback
traceback.print_exc()
sys.exit(1)
if __name__ == "__main__":
main()

View File

@@ -1,3 +0,0 @@
"""
Extension directory for custom node mappers
"""

View File

@@ -1,285 +0,0 @@
"""
ComfyUI Core nodes mappers extension for workflow parsing
"""
import logging
from typing import Dict, Any, List
logger = logging.getLogger(__name__)
# =============================================================================
# Transform Functions
# =============================================================================
def transform_random_noise(inputs: Dict) -> Dict:
"""Transform function for RandomNoise node"""
return {"seed": str(inputs.get("noise_seed", ""))}
def transform_ksampler_select(inputs: Dict) -> Dict:
"""Transform function for KSamplerSelect node"""
return {"sampler": inputs.get("sampler_name", "")}
def transform_basic_scheduler(inputs: Dict) -> Dict:
"""Transform function for BasicScheduler node"""
result = {
"scheduler": inputs.get("scheduler", ""),
"denoise": str(inputs.get("denoise", "1.0"))
}
# Get steps from inputs or steps input
if "steps" in inputs:
if isinstance(inputs["steps"], str):
result["steps"] = inputs["steps"]
elif isinstance(inputs["steps"], dict) and "value" in inputs["steps"]:
result["steps"] = str(inputs["steps"]["value"])
else:
result["steps"] = str(inputs["steps"])
return result
def transform_basic_guider(inputs: Dict) -> Dict:
"""Transform function for BasicGuider node"""
result = {}
# Process conditioning
if "conditioning" in inputs:
if isinstance(inputs["conditioning"], str):
result["prompt"] = inputs["conditioning"]
elif isinstance(inputs["conditioning"], dict):
result["conditioning"] = inputs["conditioning"]
# Get model information if needed
if "model" in inputs and isinstance(inputs["model"], dict):
result["model"] = inputs["model"]
return result
def transform_model_sampling_flux(inputs: Dict) -> Dict:
"""Transform function for ModelSamplingFlux - mostly a pass-through node"""
# This node is primarily used for routing, so we mostly pass through values
return inputs["model"]
def transform_sampler_custom_advanced(inputs: Dict) -> Dict:
"""Transform function for SamplerCustomAdvanced node"""
result = {}
# Extract seed from noise
if "noise" in inputs and isinstance(inputs["noise"], dict):
result["seed"] = str(inputs["noise"].get("seed", ""))
# Extract sampler info
if "sampler" in inputs and isinstance(inputs["sampler"], dict):
sampler = inputs["sampler"].get("sampler", "")
if sampler:
result["sampler"] = sampler
# Extract scheduler, steps, denoise from sigmas
if "sigmas" in inputs and isinstance(inputs["sigmas"], dict):
sigmas = inputs["sigmas"]
result["scheduler"] = sigmas.get("scheduler", "")
result["steps"] = str(sigmas.get("steps", ""))
result["denoise"] = str(sigmas.get("denoise", "1.0"))
# Extract prompt and guidance from guider
if "guider" in inputs and isinstance(inputs["guider"], dict):
guider = inputs["guider"]
# Get prompt from conditioning
if "conditioning" in guider and isinstance(guider["conditioning"], str):
result["prompt"] = guider["conditioning"]
elif "conditioning" in guider and isinstance(guider["conditioning"], dict):
result["guidance"] = guider["conditioning"].get("guidance", "")
result["prompt"] = guider["conditioning"].get("prompt", "")
if "model" in guider and isinstance(guider["model"], dict):
result["checkpoint"] = guider["model"].get("checkpoint", "")
result["loras"] = guider["model"].get("loras", "")
result["clip_skip"] = str(int(guider["model"].get("clip_skip", "-1")) * -1)
# Extract dimensions from latent_image
if "latent_image" in inputs and isinstance(inputs["latent_image"], dict):
latent = inputs["latent_image"]
width = latent.get("width", 0)
height = latent.get("height", 0)
if width and height:
result["width"] = width
result["height"] = height
result["size"] = f"{width}x{height}"
return result
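To make the expected shapes concrete, here is a hedged sketch of what this transform receives once its upstream nodes have been resolved; all values are illustrative, not from a real workflow:

resolved = {
    "noise": {"seed": 42},
    "sampler": {"sampler": "euler"},
    "sigmas": {"scheduler": "simple", "steps": 20, "denoise": 1.0},
    "guider": {
        "conditioning": {"guidance": "3.5", "prompt": "a castle at dusk"},
        "model": {"checkpoint": "flux1-dev", "loras": "<lora:detail:0.8>", "clip_skip": "-1"},
    },
    "latent_image": {"width": 1024, "height": 1024},
}
print(transform_sampler_custom_advanced(resolved))
# {'seed': '42', 'sampler': 'euler', 'scheduler': 'simple', 'steps': '20',
#  'denoise': '1.0', 'guidance': '3.5', 'prompt': 'a castle at dusk',
#  'checkpoint': 'flux1-dev', 'loras': '<lora:detail:0.8>', 'clip_skip': '1',
#  'width': 1024, 'height': 1024, 'size': '1024x1024'}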
def transform_ksampler(inputs: Dict) -> Dict:
"""Transform function for KSampler nodes"""
result = {
"seed": str(inputs.get("seed", "")),
"steps": str(inputs.get("steps", "")),
"cfg": str(inputs.get("cfg", "")),
"sampler": inputs.get("sampler_name", ""),
"scheduler": inputs.get("scheduler", ""),
}
# Process positive prompt
if "positive" in inputs:
result["prompt"] = inputs["positive"]
# Process negative prompt
if "negative" in inputs:
result["negative_prompt"] = inputs["negative"]
# Get dimensions from latent image
if "latent_image" in inputs and isinstance(inputs["latent_image"], dict):
width = inputs["latent_image"].get("width", 0)
height = inputs["latent_image"].get("height", 0)
if width and height:
result["size"] = f"{width}x{height}"
# Add clip_skip if present
if "clip_skip" in inputs:
result["clip_skip"] = str(inputs.get("clip_skip", ""))
# Add guidance if present
if "guidance" in inputs:
result["guidance"] = str(inputs.get("guidance", ""))
# Add model if present
if "model" in inputs:
result["checkpoint"] = inputs.get("model", {}).get("checkpoint", "")
result["loras"] = inputs.get("model", {}).get("loras", "")
result["clip_skip"] = str(inputs.get("model", {}).get("clip_skip", -1) * -1)
return result
def transform_empty_latent(inputs: Dict) -> Dict:
"""Transform function for EmptyLatentImage nodes"""
width = inputs.get("width", 0)
height = inputs.get("height", 0)
return {"width": width, "height": height, "size": f"{width}x{height}"}
def transform_clip_text(inputs: Dict) -> Any:
"""Transform function for CLIPTextEncode nodes"""
return inputs.get("text", "")
def transform_flux_guidance(inputs: Dict) -> Dict:
"""Transform function for FluxGuidance nodes"""
result = {}
if "guidance" in inputs:
result["guidance"] = inputs["guidance"]
if "conditioning" in inputs:
conditioning = inputs["conditioning"]
if isinstance(conditioning, str):
result["prompt"] = conditioning
else:
result["prompt"] = "Unknown prompt"
return result
def transform_unet_loader(inputs: Dict) -> Dict:
"""Transform function for UNETLoader node"""
unet_name = inputs.get("unet_name", "")
return {"checkpoint": unet_name} if unet_name else {}
def transform_checkpoint_loader(inputs: Dict) -> Dict:
"""Transform function for CheckpointLoaderSimple node"""
ckpt_name = inputs.get("ckpt_name", "")
return {"checkpoint": ckpt_name} if ckpt_name else {}
def transform_latent_upscale_by(inputs: Dict) -> Dict:
"""Transform function for LatentUpscaleBy node"""
result = {}
width = inputs["samples"].get("width", 0) * inputs["scale_by"]
height = inputs["samples"].get("height", 0) * inputs["scale_by"]
result["width"] = width
result["height"] = height
result["size"] = f"{width}x{height}"
return result
def transform_clip_set_last_layer(inputs: Dict) -> Dict:
"""Transform function for CLIPSetLastLayer node"""
result = {}
if "stop_at_clip_layer" in inputs:
result["clip_skip"] = inputs["stop_at_clip_layer"]
return result
# =============================================================================
# Node Mapper Definitions
# =============================================================================
# Define the mappers for ComfyUI core nodes not in main mapper
NODE_MAPPERS_EXT = {
# KSamplers
"SamplerCustomAdvanced": {
"inputs_to_track": ["noise", "guider", "sampler", "sigmas", "latent_image"],
"transform_func": transform_sampler_custom_advanced
},
"KSampler": {
"inputs_to_track": [
"seed", "steps", "cfg", "sampler_name", "scheduler",
"denoise", "positive", "negative", "latent_image",
"model", "clip_skip"
],
"transform_func": transform_ksampler
},
# ComfyUI core nodes
"EmptyLatentImage": {
"inputs_to_track": ["width", "height", "batch_size"],
"transform_func": transform_empty_latent
},
"EmptySD3LatentImage": {
"inputs_to_track": ["width", "height", "batch_size"],
"transform_func": transform_empty_latent
},
"CLIPTextEncode": {
"inputs_to_track": ["text", "clip"],
"transform_func": transform_clip_text
},
"FluxGuidance": {
"inputs_to_track": ["guidance", "conditioning"],
"transform_func": transform_flux_guidance
},
"RandomNoise": {
"inputs_to_track": ["noise_seed"],
"transform_func": transform_random_noise
},
"KSamplerSelect": {
"inputs_to_track": ["sampler_name"],
"transform_func": transform_ksampler_select
},
"BasicScheduler": {
"inputs_to_track": ["scheduler", "steps", "denoise", "model"],
"transform_func": transform_basic_scheduler
},
"BasicGuider": {
"inputs_to_track": ["model", "conditioning"],
"transform_func": transform_basic_guider
},
"ModelSamplingFlux": {
"inputs_to_track": ["max_shift", "base_shift", "width", "height", "model"],
"transform_func": transform_model_sampling_flux
},
"UNETLoader": {
"inputs_to_track": ["unet_name"],
"transform_func": transform_unet_loader
},
"CheckpointLoaderSimple": {
"inputs_to_track": ["ckpt_name"],
"transform_func": transform_checkpoint_loader
},
"LatentUpscale": {
"inputs_to_track": ["width", "height"],
"transform_func": transform_empty_latent
},
"LatentUpscaleBy": {
"inputs_to_track": ["samples", "scale_by"],
"transform_func": transform_latent_upscale_by
},
"CLIPSetLastLayer": {
"inputs_to_track": ["clip", "stop_at_clip_layer"],
"transform_func": transform_clip_set_last_layer
}
}

View File

@@ -1,74 +0,0 @@
"""
KJNodes mappers extension for ComfyUI workflow parsing
"""
import logging
import re
from typing import Dict, Any
logger = logging.getLogger(__name__)
# =============================================================================
# Transform Functions
# =============================================================================
def transform_join_strings(inputs: Dict) -> str:
"""Transform function for JoinStrings nodes"""
string1 = inputs.get("string1", "")
string2 = inputs.get("string2", "")
delimiter = inputs.get("delimiter", "")
return f"{string1}{delimiter}{string2}"
def transform_string_constant(inputs: Dict) -> str:
"""Transform function for StringConstant nodes"""
return inputs.get("string", "")
def transform_empty_latent_presets(inputs: Dict) -> Dict:
"""Transform function for EmptyLatentImagePresets nodes"""
dimensions = inputs.get("dimensions", "")
invert = inputs.get("invert", False)
# Extract width and height from dimensions string
# Expected format: "width x height (ratio)" or similar
width = 0
height = 0
if dimensions:
# Try to extract dimensions using regex
match = re.search(r'(\d+)\s*x\s*(\d+)', dimensions)
if match:
width = int(match.group(1))
height = int(match.group(2))
# If invert is True, swap width and height
if invert and width and height:
width, height = height, width
return {"width": width, "height": height, "size": f"{width}x{height}"}
def transform_int_constant(inputs: Dict) -> int:
"""Transform function for INTConstant nodes"""
return inputs.get("value", 0)
# =============================================================================
# Node Mapper Definitions
# =============================================================================
# Define the mappers for KJNodes
NODE_MAPPERS_EXT = {
"JoinStrings": {
"inputs_to_track": ["string1", "string2", "delimiter"],
"transform_func": transform_join_strings
},
"StringConstantMultiline": {
"inputs_to_track": ["string"],
"transform_func": transform_string_constant
},
"EmptyLatentImagePresets": {
"inputs_to_track": ["dimensions", "invert", "batch_size"],
"transform_func": transform_empty_latent_presets
},
"INTConstant": {
"inputs_to_track": ["value"],
"transform_func": transform_int_constant
}
}

View File

@@ -1,37 +0,0 @@
"""
Main entry point for the workflow parser module
"""
import os
import sys
import logging
from typing import Dict, Optional, Union
# Add the parent directory to sys.path to enable imports
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.abspath(os.path.join(SCRIPT_DIR, '..', '..'))
sys.path.insert(0, os.path.dirname(SCRIPT_DIR))
from .parser import parse_workflow
logger = logging.getLogger(__name__)
def parse_comfyui_workflow(
workflow_path: str,
output_path: Optional[str] = None
) -> Dict:
"""
Parse a ComfyUI workflow file and extract generation parameters
Args:
workflow_path: Path to the workflow JSON file
output_path: Optional path to save the output JSON
Returns:
Dictionary containing extracted parameters
"""
return parse_workflow(workflow_path, output_path)
if __name__ == "__main__":
# If run directly, use the CLI
from .cli import main
main()
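A minimal usage sketch for the programmatic entry point; it assumes the package is importable as workflow (the actual package name may differ) and uses placeholder paths:

from workflow import parse_comfyui_workflow  # hypothetical import path

params = parse_comfyui_workflow("my_workflow.json", output_path="parsed.json")
print(params.get("prompt", ""), params.get("size", ""))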

View File

@@ -1,282 +0,0 @@
"""
Node mappers for ComfyUI workflow parsing
"""
import logging
import os
import importlib.util
import inspect
from typing import Dict, List, Any, Optional, Union, Type, Callable, Tuple
logger = logging.getLogger(__name__)
# Global mapper registry
_MAPPER_REGISTRY: Dict[str, Dict] = {}
# =============================================================================
# Mapper Definition Functions
# =============================================================================
def create_mapper(
node_type: str,
inputs_to_track: List[str],
transform_func: Callable[[Dict], Any] = None
) -> Dict:
"""Create a mapper definition for a node type"""
mapper = {
"node_type": node_type,
"inputs_to_track": inputs_to_track,
"transform": transform_func or (lambda inputs: inputs)
}
return mapper
def register_mapper(mapper: Dict) -> None:
"""Register a node mapper in the global registry"""
_MAPPER_REGISTRY[mapper["node_type"]] = mapper
logger.debug(f"Registered mapper for node type: {mapper['node_type']}")
def get_mapper(node_type: str) -> Optional[Dict]:
"""Get a mapper for the specified node type"""
return _MAPPER_REGISTRY.get(node_type)
def get_all_mappers() -> Dict[str, Dict]:
"""Get all registered mappers"""
return _MAPPER_REGISTRY.copy()
# =============================================================================
# Node Processing Function
# =============================================================================
def process_node(node_id: str, node_data: Dict, workflow: Dict, parser: 'WorkflowParser') -> Any: # type: ignore
"""Process a node using its mapper and extract relevant information"""
node_type = node_data.get("class_type")
mapper = get_mapper(node_type)
if not mapper:
logger.warning(f"No mapper found for node type: {node_type}")
return None
result = {}
# Extract inputs based on the mapper's tracked inputs
for input_name in mapper["inputs_to_track"]:
if input_name in node_data.get("inputs", {}):
input_value = node_data["inputs"][input_name]
# Check if input is a reference to another node's output
if isinstance(input_value, list) and len(input_value) == 2:
try:
# Format is [node_id, output_slot]
ref_node_id, output_slot = input_value
# Convert node_id to string if it's an integer
if isinstance(ref_node_id, int):
ref_node_id = str(ref_node_id)
# Recursively process the referenced node
ref_value = parser.process_node(ref_node_id, workflow)
if ref_value is not None:
result[input_name] = ref_value
else:
# If we couldn't get a value from the reference, store the raw value
result[input_name] = input_value
except Exception as e:
logger.error(f"Error processing reference in node {node_id}, input {input_name}: {e}")
result[input_name] = input_value
else:
# Direct value
result[input_name] = input_value
# Apply the transform function
try:
return mapper["transform"](result)
except Exception as e:
logger.error(f"Error in transform function for node {node_id} of type {node_type}: {e}")
return result
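For clarity, the [node_id, output_slot] convention handled above looks like this in a prompt-format workflow (values illustrative). When process_node hits ["4", 0] it recursively resolves node 4 via the parser and substitutes its transformed result for the raw reference:

workflow = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sdxl_base.safetensors"}},
    "3": {"class_type": "KSampler",
          "inputs": {"seed": 7, "model": ["4", 0]}},  # "model" references node 4, slot 0
}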
# =============================================================================
# Transform Functions
# =============================================================================
def transform_lora_loader(inputs: Dict) -> Dict:
"""Transform function for LoraLoader nodes"""
loras_data = inputs.get("loras", [])
lora_stack = inputs.get("lora_stack", {}).get("lora_stack", [])
lora_texts = []
# Process loras array
if isinstance(loras_data, dict) and "__value__" in loras_data:
loras_list = loras_data["__value__"]
elif isinstance(loras_data, list):
loras_list = loras_data
else:
loras_list = []
# Process each active lora entry
for lora in loras_list:
if isinstance(lora, dict) and lora.get("active", False):
lora_name = lora.get("name", "")
strength = lora.get("strength", 1.0)
lora_texts.append(f"<lora:{lora_name}:{strength}>")
# Process lora_stack if valid
if lora_stack and isinstance(lora_stack, list):
# Skip when this is still an unresolved [node_id, output_slot] reference rather than real stack entries
if not (len(lora_stack) == 2 and isinstance(lora_stack[0], (str, int)) and isinstance(lora_stack[1], int)):
for stack_entry in lora_stack:
lora_name = stack_entry[0]
strength = stack_entry[1]
lora_texts.append(f"<lora:{lora_name}:{strength}>")
result = {
"checkpoint": inputs.get("model", {}).get("checkpoint", ""),
"loras": " ".join(lora_texts)
}
if "clip" in inputs and isinstance(inputs["clip"], dict):
result["clip_skip"] = inputs["clip"].get("clip_skip", "-1")
return result
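A hedged input/output sketch for this transform, with illustrative values (the __value__ wrapper mirrors how the widget serializes its lora list):

resolved = {
    "model": {"checkpoint": "illustriousXL"},
    "clip": {"clip_skip": "-2"},
    "loras": {"__value__": [
        {"name": "neon", "strength": 0.8, "active": True},
        {"name": "unused", "strength": 1.0, "active": False},  # inactive: skipped
    ]},
    "lora_stack": {"lora_stack": [("aorun", 1.0)]},
}
print(transform_lora_loader(resolved))
# {'checkpoint': 'illustriousXL', 'loras': '<lora:neon:0.8> <lora:aorun:1.0>', 'clip_skip': '-2'}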
def transform_lora_stacker(inputs: Dict) -> Dict:
"""Transform function for LoraStacker nodes"""
loras_data = inputs.get("loras", [])
result_stack = []
# Handle existing stack entries
existing_stack = []
lora_stack_input = inputs.get("lora_stack", [])
if isinstance(lora_stack_input, dict) and "lora_stack" in lora_stack_input:
existing_stack = lora_stack_input["lora_stack"]
elif isinstance(lora_stack_input, list):
# Treat the list as stack entries only if it is not an unresolved [node_id, output_slot] reference
if not (len(lora_stack_input) == 2 and isinstance(lora_stack_input[0], (str, int)) and
isinstance(lora_stack_input[1], int)):
existing_stack = lora_stack_input
# Add existing entries
if existing_stack:
result_stack.extend(existing_stack)
# Process new loras
if isinstance(loras_data, dict) and "__value__" in loras_data:
loras_list = loras_data["__value__"]
elif isinstance(loras_data, list):
loras_list = loras_data
else:
loras_list = []
for lora in loras_list:
if isinstance(lora, dict) and lora.get("active", False):
lora_name = lora.get("name", "")
strength = float(lora.get("strength", 1.0))
result_stack.append((lora_name, strength))
return {"lora_stack": result_stack}
def transform_trigger_word_toggle(inputs: Dict) -> str:
"""Transform function for TriggerWordToggle nodes"""
toggle_data = inputs.get("toggle_trigger_words", [])
if isinstance(toggle_data, dict) and "__value__" in toggle_data:
toggle_words = toggle_data["__value__"]
elif isinstance(toggle_data, list):
toggle_words = toggle_data
else:
toggle_words = []
# Filter active trigger words
active_words = []
for item in toggle_words:
if isinstance(item, dict) and item.get("active", False):
word = item.get("text", "")
if word and not word.startswith("__dummy"):
active_words.append(word)
return ", ".join(active_words)
# =============================================================================
# Node Mapper Definitions
# =============================================================================
# Central definition of all supported node types and their configurations
NODE_MAPPERS = {
# LoraManager nodes
"Lora Loader (LoraManager)": {
"inputs_to_track": ["model", "clip", "loras", "lora_stack"],
"transform_func": transform_lora_loader
},
"Lora Stacker (LoraManager)": {
"inputs_to_track": ["loras", "lora_stack"],
"transform_func": transform_lora_stacker
},
"TriggerWord Toggle (LoraManager)": {
"inputs_to_track": ["toggle_trigger_words"],
"transform_func": transform_trigger_word_toggle
}
}
def register_all_mappers() -> None:
"""Register all mappers from the NODE_MAPPERS dictionary"""
for node_type, config in NODE_MAPPERS.items():
mapper = create_mapper(
node_type=node_type,
inputs_to_track=config["inputs_to_track"],
transform_func=config["transform_func"]
)
register_mapper(mapper)
logger.info(f"Registered {len(NODE_MAPPERS)} node mappers")
# =============================================================================
# Extension Loading
# =============================================================================
def load_extensions(ext_dir: str = None) -> None:
"""
Load mapper extensions from the specified directory
Extension files should define a NODE_MAPPERS_EXT dictionary containing mapper configurations.
These will be added to the global NODE_MAPPERS dictionary and registered automatically.
"""
# Use default path if none provided
if ext_dir is None:
# Get the directory of this file
current_dir = os.path.dirname(os.path.abspath(__file__))
ext_dir = os.path.join(current_dir, 'ext')
# Ensure the extension directory exists
if not os.path.exists(ext_dir):
os.makedirs(ext_dir, exist_ok=True)
logger.info(f"Created extension directory: {ext_dir}")
return
# Load each Python file in the extension directory
for filename in os.listdir(ext_dir):
if filename.endswith('.py') and not filename.startswith('_'):
module_path = os.path.join(ext_dir, filename)
module_name = f"workflow.ext.{filename[:-3]}" # Remove .py
try:
# Load the module
spec = importlib.util.spec_from_file_location(module_name, module_path)
if spec and spec.loader:
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
# Check if the module defines NODE_MAPPERS_EXT
if hasattr(module, 'NODE_MAPPERS_EXT'):
# Add the extension mappers to the global NODE_MAPPERS dictionary
NODE_MAPPERS.update(module.NODE_MAPPERS_EXT)
logger.info(f"Added {len(module.NODE_MAPPERS_EXT)} mappers from extension: {filename}")
else:
logger.warning(f"Extension {filename} does not define NODE_MAPPERS_EXT dictionary")
except Exception as e:
logger.warning(f"Error loading extension {filename}: {e}")
# Re-register all mappers after loading extensions
register_all_mappers()
# Initialize the registry with default mappers
# register_default_mappers()
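For illustration, a minimal extension file that load_extensions() would pick up from the ext/ directory; the node type, inputs, and filename are hypothetical:

# ext/my_nodes.py (hypothetical)
NODE_MAPPERS_EXT = {
    "MyCustomSampler": {
        "inputs_to_track": ["seed", "steps"],
        "transform_func": lambda inputs: {
            "seed": str(inputs.get("seed", "")),
            "steps": str(inputs.get("steps", "")),
        },
    }
}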

View File

@@ -1,181 +0,0 @@
"""
Main workflow parser implementation for ComfyUI
"""
import json
import logging
from typing import Dict, List, Any, Optional, Union, Set
from .mappers import get_mapper, get_all_mappers, load_extensions, process_node
from .utils import (
load_workflow, save_output, find_node_by_type,
trace_model_path
)
logger = logging.getLogger(__name__)
class WorkflowParser:
"""Parser for ComfyUI workflows"""
def __init__(self):
"""Initialize the parser with mappers"""
self.processed_nodes: Set[str] = set() # Track processed nodes to avoid cycles
self.node_results_cache: Dict[str, Any] = {} # Cache for processed node results
# Load extensions
load_extensions()
def process_node(self, node_id: str, workflow: Dict) -> Any:
"""Process a single node and extract relevant information"""
# Return cached result if available
if node_id in self.node_results_cache:
return self.node_results_cache[node_id]
# Check if we're in a cycle
if node_id in self.processed_nodes:
return None
# Mark this node as being processed (to detect cycles)
self.processed_nodes.add(node_id)
if node_id not in workflow:
self.processed_nodes.remove(node_id)
return None
node_data = workflow[node_id]
node_type = node_data.get("class_type")
result = None
if get_mapper(node_type):
try:
result = process_node(node_id, node_data, workflow, self)
# Cache the result
self.node_results_cache[node_id] = result
except Exception as e:
logger.error(f"Error processing node {node_id} of type {node_type}: {e}", exc_info=True)
# Fall back to an empty result so the rest of the workflow can still be parsed
result = {}
# Remove node from processed set to allow it to be processed again in a different context
self.processed_nodes.remove(node_id)
return result
def find_primary_sampler_node(self, workflow: Dict) -> Optional[str]:
"""
Find the primary sampler node in the workflow.
Priority:
1. First try to find a SamplerCustomAdvanced node
2. If not found, look for KSampler nodes with denoise=1.0
3. If still not found, use the first KSampler node
Args:
workflow: The workflow data as a dictionary
Returns:
The node ID of the primary sampler node, or None if not found
"""
# First check for SamplerCustomAdvanced nodes
sampler_advanced_nodes = []
ksampler_nodes = []
# Scan workflow for sampler nodes
for node_id, node_data in workflow.items():
node_type = node_data.get("class_type")
if node_type == "SamplerCustomAdvanced":
sampler_advanced_nodes.append(node_id)
elif node_type == "KSampler":
ksampler_nodes.append(node_id)
# If we found SamplerCustomAdvanced nodes, return the first one
if sampler_advanced_nodes:
logger.debug(f"Found SamplerCustomAdvanced node: {sampler_advanced_nodes[0]}")
return sampler_advanced_nodes[0]
# If we have KSampler nodes, look for one with denoise=1.0
if ksampler_nodes:
for node_id in ksampler_nodes:
node_data = workflow[node_id]
inputs = node_data.get("inputs", {})
denoise = inputs.get("denoise", 0)
# Check if denoise is 1.0 (allowing for small floating point differences)
if abs(float(denoise) - 1.0) < 0.001:
logger.debug(f"Found KSampler node with denoise=1.0: {node_id}")
return node_id
# If no KSampler with denoise=1.0 found, use the first one
logger.debug(f"No KSampler with denoise=1.0 found, using first KSampler: {ksampler_nodes[0]}")
return ksampler_nodes[0]
# No sampler nodes found
logger.warning("No sampler nodes found in workflow")
return None
def parse_workflow(self, workflow_data: Union[str, Dict], output_path: Optional[str] = None) -> Dict:
"""
Parse the workflow and extract generation parameters
Args:
workflow_data: The workflow data as a dictionary or a file path
output_path: Optional path to save the output JSON
Returns:
Dictionary containing extracted parameters
"""
# Load workflow from file if needed
if isinstance(workflow_data, str):
workflow = load_workflow(workflow_data)
else:
workflow = workflow_data
# Reset the processed nodes tracker and cache
self.processed_nodes = set()
self.node_results_cache = {}
# Find the primary sampler node
sampler_node_id = self.find_primary_sampler_node(workflow)
if not sampler_node_id:
logger.warning("No suitable sampler node found in workflow")
return {}
# Process sampler node to extract parameters
sampler_result = self.process_node(sampler_node_id, workflow)
if not sampler_result:
return {}
# Return the sampler result directly - it's already in the format we need
# This simplifies the structure and makes it easier to use in recipe_routes.py
# Handle standard ComfyUI names vs our output format
if "cfg" in sampler_result:
sampler_result["cfg_scale"] = sampler_result.pop("cfg")
# Add clip_skip = 1 to match reference output if not already present
if "clip_skip" not in sampler_result:
sampler_result["clip_skip"] = "1"
# Ensure the prompt is a string and not a nested dictionary
if "prompt" in sampler_result and isinstance(sampler_result["prompt"], dict):
if "prompt" in sampler_result["prompt"]:
sampler_result["prompt"] = sampler_result["prompt"]["prompt"]
# Save the result if requested
if output_path:
save_output(sampler_result, output_path)
return sampler_result
def parse_workflow(workflow_path: str, output_path: Optional[str] = None) -> Dict:
"""
Parse a ComfyUI workflow file and extract generation parameters
Args:
workflow_path: Path to the workflow JSON file
output_path: Optional path to save the output JSON
Returns:
Dictionary containing extracted parameters
"""
parser = WorkflowParser()
return parser.parse_workflow(workflow_path, output_path)
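A small sketch contrasting the two entry points: the module-level function builds a fresh WorkflowParser (and reloads extensions) on every call, so reusing one instance is cheaper when parsing many files. Paths are placeholders:

parser = WorkflowParser()
for path in ["workflow_a.json", "workflow_b.json"]:  # placeholder files
    params = parser.parse_workflow(path)
    print(path, params.get("size", ""))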

View File

@@ -1,63 +0,0 @@
"""
Test script for the ComfyUI workflow parser
"""
import os
import json
import logging
from .parser import parse_workflow
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
handlers=[logging.StreamHandler()]
)
logger = logging.getLogger(__name__)
# Configure paths
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
ROOT_DIR = os.path.abspath(os.path.join(SCRIPT_DIR, '..', '..'))
REFS_DIR = os.path.join(ROOT_DIR, 'refs')
OUTPUT_DIR = os.path.join(ROOT_DIR, 'output')
def test_parse_flux_workflow():
"""Test parsing the flux example workflow"""
# Ensure output directory exists
os.makedirs(OUTPUT_DIR, exist_ok=True)
# Define input and output paths
input_path = os.path.join(REFS_DIR, 'flux_prompt.json')
output_path = os.path.join(OUTPUT_DIR, 'parsed_flux_output.json')
# Parse workflow
logger.info(f"Parsing workflow: {input_path}")
result = parse_workflow(input_path, output_path)
# Print result summary
logger.info(f"Output saved to: {output_path}")
logger.info(f"Parsing completed. Result summary:")
logger.info(f" LoRAs: {result.get('loras', '')}")
gen_params = result.get('gen_params', {})
logger.info(f" Prompt: {gen_params.get('prompt', '')[:50]}...")
logger.info(f" Steps: {gen_params.get('steps', '')}")
logger.info(f" Sampler: {gen_params.get('sampler', '')}")
logger.info(f" Size: {gen_params.get('size', '')}")
# Compare with reference output
ref_output_path = os.path.join(REFS_DIR, 'flux_output.json')
try:
with open(ref_output_path, 'r') as f:
ref_output = json.load(f)
# Simple validation
loras_match = result.get('loras', '') == ref_output.get('loras', '')
prompt_match = gen_params.get('prompt', '') == ref_output.get('gen_params', {}).get('prompt', '')
logger.info(f"Validation against reference:")
logger.info(f" LoRAs match: {loras_match}")
logger.info(f" Prompt match: {prompt_match}")
except Exception as e:
logger.warning(f"Failed to compare with reference output: {e}")
if __name__ == "__main__":
test_parse_flux_workflow()

View File

@@ -1,120 +0,0 @@
"""
Utility functions for ComfyUI workflow parsing
"""
import json
import os
import logging
from typing import Dict, List, Any, Optional, Union, Set, Tuple
logger = logging.getLogger(__name__)
def load_workflow(workflow_path: str) -> Dict:
"""Load a workflow from a JSON file"""
try:
with open(workflow_path, 'r', encoding='utf-8') as f:
return json.load(f)
except Exception as e:
logger.error(f"Error loading workflow from {workflow_path}: {e}")
raise
def save_output(output: Dict, output_path: str) -> None:
"""Save the parsed output to a JSON file"""
os.makedirs(os.path.dirname(os.path.abspath(output_path)), exist_ok=True)
try:
with open(output_path, 'w', encoding='utf-8') as f:
json.dump(output, f, indent=4)
except Exception as e:
logger.error(f"Error saving output to {output_path}: {e}")
raise
def find_node_by_type(workflow: Dict, node_type: str) -> Optional[str]:
"""Find a node of the specified type in the workflow"""
for node_id, node_data in workflow.items():
if node_data.get("class_type") == node_type:
return node_id
return None
def find_nodes_by_type(workflow: Dict, node_type: str) -> List[str]:
"""Find all nodes of the specified type in the workflow"""
return [node_id for node_id, node_data in workflow.items()
if node_data.get("class_type") == node_type]
def get_input_node_ids(workflow: Dict, node_id: str) -> Dict[str, Tuple[str, int]]:
"""
Get the node IDs for all inputs of the given node
Returns a dictionary mapping input names to (node_id, output_slot) tuples
"""
result = {}
if node_id not in workflow:
return result
node_data = workflow[node_id]
for input_name, input_value in node_data.get("inputs", {}).items():
# Check if this input is connected to another node
if isinstance(input_value, list) and len(input_value) == 2:
# Input is connected to another node's output
# Format: [node_id, output_slot]
ref_node_id, output_slot = input_value
result[input_name] = (str(ref_node_id), output_slot)
return result
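Illustrative call: connected inputs appear as [node_id, output_slot] lists and come back as tuples, while literal inputs (like seed below) are skipped:

wf = {"3": {"class_type": "KSampler",
            "inputs": {"seed": 7, "model": ["4", 0]}}}
print(get_input_node_ids(wf, "3"))  # {'model': ('4', 0)}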
def trace_model_path(workflow: Dict, start_node_id: str) -> List[str]:
"""
Trace the model path backward from KSampler to find all LoRA nodes
Args:
workflow: The workflow data
start_node_id: The starting node ID (usually KSampler)
Returns:
List of node IDs in the model path
"""
model_path_nodes = []
# Get the model input from the start node
if start_node_id not in workflow:
return model_path_nodes
# Track visited nodes to avoid cycles
visited = set()
# Stack for depth-first search
stack = []
# Get model input reference if available
start_node = workflow[start_node_id]
if "inputs" in start_node and "model" in start_node["inputs"] and isinstance(start_node["inputs"]["model"], list):
model_ref = start_node["inputs"]["model"]
stack.append(str(model_ref[0]))
# Perform depth-first search
while stack:
node_id = stack.pop()
# Skip if already visited
if node_id in visited:
continue
# Mark as visited
visited.add(node_id)
# Skip if node doesn't exist
if node_id not in workflow:
continue
node = workflow[node_id]
node_type = node.get("class_type", "")
# Add current node to result list if it's a LoRA node
if "Lora" in node_type:
model_path_nodes.append(node_id)
# Add all input nodes that have a "model" or "lora_stack" output to the stack
if "inputs" in node:
for input_name, input_value in node["inputs"].items():
if input_name in ["model", "lora_stack"] and isinstance(input_value, list) and len(input_value) == 2:
stack.append(str(input_value[0]))
return model_path_nodes
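Illustrative chain for the traversal above: KSampler (3) -> Lora Loader (56) -> CheckpointLoaderSimple (4); only the LoRA node is collected:

wf = {
    "3": {"class_type": "KSampler", "inputs": {"model": ["56", 0]}},
    "56": {"class_type": "Lora Loader (LoraManager)", "inputs": {"model": ["4", 0]}},
    "4": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": "base.safetensors"}},
}
print(trace_model_path(wf, "3"))  # ['56']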

View File

@@ -1,7 +1,7 @@
[project]
name = "comfyui-lora-manager"
description = "LoRA Manager for ComfyUI - Access it at http://localhost:8188/loras for managing LoRA models with previews and metadata integration."
version = "0.8.7"
version = "0.8.19"
license = {file = "LICENSE"}
dependencies = [
"aiohttp",
@@ -12,7 +12,10 @@ dependencies = [
"piexif",
"Pillow",
"olefile", # for getting rid of warning message
"requests"
"requests",
"toml",
"natsort",
"msgpack"
]
[project.urls]
@@ -22,4 +25,4 @@ Repository = "https://github.com/willmiao/ComfyUI-Lora-Manager"
[tool.comfy]
PublisherId = "willmiao"
DisplayName = "ComfyUI-Lora-Manager"
Icon = ""
Icon = "https://github.com/willmiao/ComfyUI-Lora-Manager/blob/main/static/images/android-chrome-512x512.png?raw=true"

View File

@@ -1,11 +1,258 @@
{
"loras": "<lora:ck-neon-retrowave-IL-000012:0.8> <lora:aorunIllstrious:1> <lora:ck-shadow-circuit-IL-000012:0.78> <lora:MoriiMee_Gothic_Niji_Style_Illustrious_r1:0.45> <lora:ck-nc-cyberpunk-IL-000011:0.4>",
"prompt": "in the style of ck-rw, aorun, scales, makeup, bare shoulders, pointy ears, dress, claws, in the style of cksc, artist:moriimee, in the style of cknc, masterpiece, best quality, good quality, very aesthetic, absurdres, newest, 8K, depth of field, focused subject, close up, stylized, in gold and neon shades, wabi sabi, 1girl, rainbow angel wings, looking at viewer, dynamic angle, from below, from side, relaxing",
"negative_prompt": "bad quality, worst quality, worst detail, sketch ,signature, watermark, patreon logo, nsfw",
"steps": "20",
"sampler": "euler_ancestral",
"cfg_scale": "8",
"seed": "241",
"size": "832x1216",
"clip_skip": "2"
"id": 649516,
"name": "Cynthia -シロナ - Pokemon Diamond and Pearl - PDXL LORA",
"description": "<p><strong>Warning: Without Adetailer eyes are fucked (rainbow color and artefact)</strong></p><p><span style=\"color:rgb(193, 194, 197)\">Trained on </span><a target=\"_blank\" rel=\"ugc\" href=\"https://civitai.com/models/257749/horsefucker-diffusion-v6-xl\"><strong>Pony Diffusion V6 XL</strong></a> with 63 pictures.<br />Best result with weight between : 0.8-1.</p><p><span style=\"color:rgb(193, 194, 197)\">Basic prompts : </span><code>1girl, cynthia \\(pokemon\\), blonde hair, hair over one eye, very long hair, grey eyes, eyelashes, hair ornament</code> <br /><span style=\"color:rgb(193, 194, 197)\">Outfit prompts : </span><code>fur collar, black coat, fur-trimmed coat, long sleeves, black pants, black shirt, high heels</code></p><p>Reviews are really appreciated, i love to see the community use my work, that's why I share it.<br />If you like my work, you can tip me <a target=\"_blank\" rel=\"ugc\" href=\"https://ko-fi.com/konan49773\"><strong>here.</strong></a></p><p>Got a specific request ? I'm open for commission on my <a target=\"_blank\" rel=\"ugc\" href=\"https://ko-fi.com/konan49773/commissions\"><strong>kofi</strong></a> or<strong> </strong><a target=\"_blank\" rel=\"ugc\" href=\"https://www.fiverr.com/konanai/create-lora-model-for-you\"><strong>fiverr gig</strong></a> *! If you provide enough data, OCs are accepted</p>",
"allowNoCredit": true,
"allowCommercialUse": [
"Image",
"RentCivit"
],
"allowDerivatives": true,
"allowDifferentLicense": true,
"type": "LORA",
"minor": false,
"sfwOnly": false,
"poi": false,
"nsfw": false,
"nsfwLevel": 29,
"availability": "Public",
"cosmetic": null,
"supportsGeneration": true,
"stats": {
"downloadCount": 811,
"favoriteCount": 0,
"thumbsUpCount": 175,
"thumbsDownCount": 0,
"commentCount": 4,
"ratingCount": 0,
"rating": 0,
"tippedAmountCount": 10
},
"creator": {
"username": "Konan",
"image": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7cd552a1-60fe-4baf-a0e4-f7d5d5381711/width=96/Konan.jpeg"
},
"tags": [
"anime",
"character",
"cynthia",
"woman",
"pokemon",
"pokegirl"
],
"modelVersions": [
{
"id": 726676,
"index": 0,
"name": "v1.0",
"baseModel": "Pony",
"createdAt": "2024-08-16T01:13:16.099Z",
"publishedAt": "2024-08-16T01:14:44.984Z",
"status": "Published",
"availability": "Public",
"nsfwLevel": 29,
"trainedWords": [
"1girl, cynthia \\(pokemon\\), blonde hair, hair over one eye, very long hair, grey eyes, eyelashes, hair ornament",
"fur collar, black coat, fur-trimmed coat, long sleeves, black pants, black shirt, high heels"
],
"covered": true,
"stats": {
"downloadCount": 811,
"ratingCount": 0,
"rating": 0,
"thumbsUpCount": 175,
"thumbsDownCount": 0
},
"files": [
{
"id": 641092,
"sizeKB": 56079.65234375,
"name": "CynthiaXL.safetensors",
"type": "Model",
"pickleScanResult": "Success",
"pickleScanMessage": "No Pickle imports",
"virusScanResult": "Success",
"virusScanMessage": null,
"scannedAt": "2024-08-16T01:17:19.087Z",
"metadata": {
"format": "SafeTensor"
},
"hashes": {},
"downloadUrl": "https://civitai.com/api/download/models/726676",
"primary": true
}
],
"images": [
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/b346d757-2b59-4aeb-9f09-3bee2724519d/width=1248/24511993.jpeg",
"nsfwLevel": 1,
"width": 1248,
"height": 1824,
"hash": "UqNc==RP.9s+~pxvIst7kWWBWBjY%MWBt7WB",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/fc132ac0-cc1c-4b68-a1d7-5b97b0996ac2/width=1248/24511997.jpeg",
"nsfwLevel": 1,
"width": 1248,
"height": 1824,
"hash": "UMGSS+?tTw.60MIX9cbb~WxHRRR-NEtLRiR%",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/7b3237d1-e672-466a-85d0-cc5dd42ab130/width=1160/24512001.jpeg",
"nsfwLevel": 4,
"width": 1160,
"height": 1696,
"hash": "U9NA6f~o00%h00wvIYt74:ER-=D%5600DiE1",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/ccd7d11d-4fa9-4434-85a1-fb999312e60d/width=1248/24511991.jpeg",
"nsfwLevel": 1,
"width": 1248,
"height": 1824,
"hash": "UyNTg.j?~qxu?aoLRkj]%MfkM{jZaya}a#ax",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/1743be6d-7fe5-4b55-9f19-c931618fa259/width=1248/24511996.jpeg",
"nsfwLevel": 4,
"width": 1248,
"height": 1824,
"hash": "UGOC~n^+?w~6Tx_4oM^$yYEkMds74:9F#*xY",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/91693c98-d037-4489-882c-100eb26019a0/width=1160/24512010.jpeg",
"nsfwLevel": 4,
"width": 1160,
"height": 1696,
"hash": "UJI}kp^-Kl%hXAIX4;Nf^+M|9GRP0Mt8%L%2",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/49c7a294-ac5b-4832-98e5-2acd0f1a8782/width=1248/24512017.jpeg",
"nsfwLevel": 4,
"width": 1248,
"height": 1824,
"hash": "UML;8Qn|9G%3mnWA4nWFMf%N?Hae~qog-oNF",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d7b442f2-6ead-4a7a-9578-54d9ec2ff148/width=1248/24512015.jpeg",
"nsfwLevel": 1,
"width": 1248,
"height": 1824,
"hash": "UPGR#kt8xw%M0LWC9bWC?wxtR*NLM^jrxWM|",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/d840f1e9-3dd3-4531-b83a-1ba2c6b7feaa/width=1160/24512004.jpeg",
"nsfwLevel": 8,
"width": 1160,
"height": 1696,
"hash": "ULNm1i_39wi^*I%hDiM_tlo#xuV?^kNIxCs,",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/520387ae-c176-43e3-92bd-5cd2a672475e/width=1248/24512012.jpeg",
"nsfwLevel": 4,
"width": 1248,
"height": 1824,
"hash": "URM%l.%M.9Ip~poIkExu_3V@M|xuD%oJM{D*",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9ea28b94-f326-4776-83ff-851cc203c627/width=1248/24511988.jpeg",
"nsfwLevel": 1,
"width": 1248,
"height": 1824,
"hash": "U-PZloog_Nxut6j]WXWB-;j?IVa#ofaxj]j]",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
},
{
"url": "https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/2e749dbb-7d5a-48f1-8e29-fea5022a5fe9/width=1248/24522268.jpeg",
"nsfwLevel": 16,
"width": 1248,
"height": 1824,
"hash": "UPLgtm9Z0z=|0yRRE2-A9rWAoNE1~DwOr=t7",
"type": "image",
"minor": false,
"poi": false,
"hasMeta": true,
"hasPositivePrompt": true,
"onSite": false,
"remixOfId": null
}
],
"downloadUrl": "https://civitai.com/api/download/models/726676"
}
]
}

View File

@@ -1,294 +0,0 @@
Loading workflow from D:\Workspace\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\refs\prompt.json
Expected output from D:\Workspace\ComfyUI\custom_nodes\ComfyUI-Lora-Manager\refs\output.json
Expected output:
{
"loras": "<lora:ck-neon-retrowave-IL-000012:0.8> <lora:aorunIllstrious:1> <lora:ck-shadow-circuit-IL-000012:0.78> <lora:MoriiMee_Gothic_Niji_Style_Illustrious_r1:0.45> <lora:ck-nc-cyberpunk-IL-000011:0.4>",
"gen_params": {
"prompt": "in the style of ck-rw, aorun, scales, makeup, bare shoulders, pointy ears, dress, claws, in the style of cksc, artist:moriimee, in the style of cknc, masterpiece, best quality, good quality, very aesthetic, absurdres, newest, 8K, depth of field, focused subject, close up, stylized, in gold and neon shades, wabi sabi, 1girl, rainbow angel wings, looking at viewer, dynamic angle, from below, from side, relaxing",
"negative_prompt": "bad quality, worst quality, worst detail, sketch ,signature, watermark, patreon logo, nsfw",
"steps": "20",
"sampler": "euler_ancestral",
"cfg_scale": "8",
"seed": "241",
"size": "832x1216",
"clip_skip": "2"
}
}
Sampler node:
{
"inputs": {
"seed": 241,
"steps": 20,
"cfg": 8,
"sampler_name": "euler_ancestral",
"scheduler": "karras",
"denoise": 1,
"model": [
"56",
0
],
"positive": [
"6",
0
],
"negative": [
"7",
0
],
"latent_image": [
"5",
0
]
},
"class_type": "KSampler",
"_meta": {
"title": "KSampler"
}
}
Extracted parameters:
seed: 241
steps: 20
cfg_scale: 8
Positive node (6):
{
"inputs": {
"text": [
"22",
0
],
"clip": [
"56",
1
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
}
Text node (22):
{
"inputs": {
"string1": [
"55",
0
],
"string2": [
"21",
0
],
"delimiter": ", "
},
"class_type": "JoinStrings",
"_meta": {
"title": "Join Strings"
}
}
String1 node (55):
{
"inputs": {
"group_mode": true,
"toggle_trigger_words": [
{
"text": "in the style of ck-rw",
"active": true
},
{
"text": "aorun, scales, makeup, bare shoulders, pointy ears",
"active": true
},
{
"text": "dress",
"active": true
},
{
"text": "claws",
"active": true
},
{
"text": "in the style of cksc",
"active": true
},
{
"text": "artist:moriimee",
"active": true
},
{
"text": "in the style of cknc",
"active": true
},
{
"text": "__dummy_item__",
"active": false,
"_isDummy": true
},
{
"text": "__dummy_item__",
"active": false,
"_isDummy": true
}
],
"orinalMessage": "in the style of ck-rw,, aorun, scales, makeup, bare shoulders, pointy ears,, dress,, claws,, in the style of cksc,, artist:moriimee,, in the style of cknc",
"trigger_words": [
"56",
2
]
},
"class_type": "TriggerWord Toggle (LoraManager)",
"_meta": {
"title": "TriggerWord Toggle (LoraManager)"
}
}
String2 node (21):
{
"inputs": {
"string": "masterpiece, best quality, good quality, very aesthetic, absurdres, newest, 8K, depth of field, focused subject, close up, stylized, in gold and neon shades, wabi sabi, 1girl, rainbow angel wings, looking at viewer, dynamic angle, from below, from side, relaxing",
"strip_newlines": false
},
"class_type": "StringConstantMultiline",
"_meta": {
"title": "positive"
}
}
Negative node (7):
{
"inputs": {
"text": "bad quality, worst quality, worst detail, sketch ,signature, watermark, patreon logo, nsfw",
"clip": [
"56",
1
]
},
"class_type": "CLIPTextEncode",
"_meta": {
"title": "CLIP Text Encode (Prompt)"
}
}
LoRA nodes (3):
LoRA node 56:
{
"inputs": {
"text": "<lora:ck-shadow-circuit-IL-000012:0.78> <lora:MoriiMee_Gothic_Niji_Style_Illustrious_r1:0.45> <lora:ck-nc-cyberpunk-IL-000011:0.4>",
"loras": [
{
"name": "ck-shadow-circuit-IL-000012",
"strength": 0.78,
"active": true
},
{
"name": "MoriiMee_Gothic_Niji_Style_Illustrious_r1",
"strength": 0.45,
"active": true
},
{
"name": "ck-nc-cyberpunk-IL-000011",
"strength": 0.4,
"active": true
},
{
"name": "__dummy_item1__",
"strength": 0,
"active": false,
"_isDummy": true
},
{
"name": "__dummy_item2__",
"strength": 0,
"active": false,
"_isDummy": true
}
],
"model": [
"4",
0
],
"clip": [
"4",
1
],
"lora_stack": [
"57",
0
]
},
"class_type": "Lora Loader (LoraManager)",
"_meta": {
"title": "Lora Loader (LoraManager)"
}
}
LoRA node 57:
{
"inputs": {
"text": "<lora:aorunIllstrious:1>",
"loras": [
{
"name": "aorunIllstrious",
"strength": "0.90",
"active": true
},
{
"name": "__dummy_item1__",
"strength": 0,
"active": false,
"_isDummy": true
},
{
"name": "__dummy_item2__",
"strength": 0,
"active": false,
"_isDummy": true
}
],
"lora_stack": [
"59",
0
]
},
"class_type": "Lora Stacker (LoraManager)",
"_meta": {
"title": "Lora Stacker (LoraManager)"
}
}
LoRA node 59:
{
"inputs": {
"text": "<lora:ck-neon-retrowave-IL-000012:0.8>",
"loras": [
{
"name": "ck-neon-retrowave-IL-000012",
"strength": 0.8,
"active": true
},
{
"name": "__dummy_item1__",
"strength": 0,
"active": false,
"_isDummy": true
},
{
"name": "__dummy_item2__",
"strength": 0,
"active": false,
"_isDummy": true
}
]
},
"class_type": "Lora Stacker (LoraManager)",
"_meta": {
"title": "Lora Stacker (LoraManager)"
}
}
Test completed.

View File

@@ -6,4 +6,9 @@ beautifulsoup4
piexif
Pillow
olefile
requests
requests
toml
numpy
torch
natsort
msgpack

14
settings.json.example Normal file
View File

@@ -0,0 +1,14 @@
{
"civitai_api_key": "your_civitai_api_key_here",
"show_only_sfw": false,
"folder_paths": {
"loras": [
"C:/path/to/your/loras_folder",
"C:/path/to/another/loras_folder"
],
"checkpoints": [
"C:/path/to/your/checkpoints_folder",
"C:/path/to/another/checkpoints_folder"
]
}
}
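A hedged sanity-check sketch for this file; the key names mirror the example above plus the optional ones the standalone server reads (unet, diffusers, example_images_path):

import json
import os

with open("settings.json", "r", encoding="utf-8") as f:
    settings = json.load(f)

for kind, paths in settings.get("folder_paths", {}).items():
    for p in paths:
        if not os.path.isdir(p):
            print(f"warning: {kind} path does not exist: {p}")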

368
standalone.py Normal file
View File

@@ -0,0 +1,368 @@
from pathlib import Path
import os
import sys
import json
# Create mock folder_paths module BEFORE any other imports
class MockFolderPaths:
@staticmethod
def get_folder_paths(folder_name):
# Load paths from settings.json
settings_path = os.path.join(os.path.dirname(__file__), 'settings.json')
try:
if os.path.exists(settings_path):
with open(settings_path, 'r', encoding='utf-8') as f:
settings = json.load(f)
# For diffusion_models, combine unet and diffusers paths
if folder_name == "diffusion_models":
paths = []
if 'folder_paths' in settings:
if 'unet' in settings['folder_paths']:
paths.extend(settings['folder_paths']['unet'])
if 'diffusers' in settings['folder_paths']:
paths.extend(settings['folder_paths']['diffusers'])
# Filter out paths that don't exist
valid_paths = [p for p in paths if os.path.exists(p)]
if valid_paths:
return valid_paths
else:
print(f"Warning: No valid paths found for {folder_name}")
# For other folder names, return their paths directly
elif 'folder_paths' in settings and folder_name in settings['folder_paths']:
paths = settings['folder_paths'][folder_name]
valid_paths = [p for p in paths if os.path.exists(p)]
if valid_paths:
return valid_paths
else:
print(f"Warning: No valid paths found for {folder_name}")
except Exception as e:
print(f"Error loading folder paths from settings: {e}")
# Fallback to empty list if no paths found
return []
@staticmethod
def get_temp_directory():
return os.path.join(os.path.dirname(__file__), 'temp')
@staticmethod
def set_temp_directory(path):
os.makedirs(path, exist_ok=True)
return path
# Create mock server module with PromptServer
class MockPromptServer:
def __init__(self):
self.app = None
def send_sync(self, *args, **kwargs):
pass
# Create mock metadata_collector module
class MockMetadataCollector:
def init(self):
pass
def get_metadata(self, prompt_id=None):
return {}
# Initialize basic mocks before any imports
sys.modules['folder_paths'] = MockFolderPaths()
sys.modules['server'] = type('server', (), {'PromptServer': MockPromptServer()})
sys.modules['py.metadata_collector'] = MockMetadataCollector()
# Now we can safely import modules that depend on folder_paths and server
import argparse
import asyncio
import logging
from aiohttp import web
# Setup logging
logging.basicConfig(level=logging.INFO,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger("lora-manager-standalone")
# Configure aiohttp access logger to be less verbose
logging.getLogger('aiohttp.access').setLevel(logging.WARNING)
# Now we can import the global config from our local modules
from py.config import config
class StandaloneServer:
"""Server implementation for standalone mode"""
def __init__(self):
self.app = web.Application(logger=logger)
self.instance = self # Make it compatible with PromptServer.instance pattern
# Ensure the app's access logger is configured to reduce verbosity
self.app._subapps = [] # Ensure this exists to avoid AttributeError
# Configure access logging for the app
self.app.on_startup.append(self._configure_access_logger)
async def _configure_access_logger(self, app):
"""Configure access logger to reduce verbosity"""
logging.getLogger('aiohttp.access').setLevel(logging.WARNING)
# If using aiohttp>=3.8.0, configure access logger through app directly
if hasattr(app, 'access_logger'):
app.access_logger.setLevel(logging.WARNING)
async def setup(self):
"""Set up the standalone server"""
# Create placeholders for compatibility with ComfyUI's implementation
self.last_prompt_id = None
self.last_node_id = None
self.client_id = None
# Set up routes
self.setup_routes()
# Add startup and shutdown handlers
self.app.on_startup.append(self.on_startup)
self.app.on_shutdown.append(self.on_shutdown)
def setup_routes(self):
"""Set up basic routes"""
# Add a simple status endpoint
self.app.router.add_get('/', self.handle_status)
# Add static route for example images if the path exists in settings
settings_path = os.path.join(os.path.dirname(__file__), 'settings.json')
if os.path.exists(settings_path):
with open(settings_path, 'r', encoding='utf-8') as f:
settings = json.load(f)
example_images_path = settings.get('example_images_path')
logger.info(f"Example images path: {example_images_path}")
if example_images_path and os.path.exists(example_images_path):
self.app.router.add_static('/example_images_static', example_images_path)
logger.info(f"Added static route for example images: /example_images_static -> {example_images_path}")
async def handle_status(self, request):
"""Handle status request by redirecting to loras page"""
# Redirect to loras page instead of showing status
raise web.HTTPFound('/loras')
# Original JSON response (commented out)
# return web.json_response({
# "status": "running",
# "mode": "standalone",
# "loras_roots": config.loras_roots,
# "checkpoints_roots": config.checkpoints_roots
# })
async def on_startup(self, app):
"""Startup handler"""
logger.info("LoRA Manager standalone server starting...")
async def on_shutdown(self, app):
"""Shutdown handler"""
logger.info("LoRA Manager standalone server shutting down...")
def send_sync(self, event_type, data, sid=None):
"""Stub for compatibility with PromptServer"""
# In standalone mode, we don't have the same websocket system
pass
async def start(self, host='127.0.0.1', port=8188):
"""Start the server"""
runner = web.AppRunner(self.app)
await runner.setup()
site = web.TCPSite(runner, host, port)
await site.start()
# Log the server address with a clickable localhost URL regardless of the actual binding
logger.info(f"Server started at http://127.0.0.1:{port}")
# Keep the server running
while True:
await asyncio.sleep(3600) # Sleep for a long time
async def publish_loop(self):
"""Stub for compatibility with PromptServer"""
# This method exists in ComfyUI's server but we don't need it
pass
# After all mocks are in place, import LoraManager
from py.lora_manager import LoraManager
class StandaloneLoraManager(LoraManager):
"""Extended LoraManager for standalone mode"""
@classmethod
def add_routes(cls, server_instance):
"""Initialize and register all routes for standalone mode"""
app = server_instance.app
# Store app in a global-like location for compatibility
sys.modules['server'].PromptServer.instance = server_instance
# Configure aiohttp access logger to be less verbose
logging.getLogger('aiohttp.access').setLevel(logging.WARNING)
added_targets = set() # Track already added target paths
# Add static routes for each lora root
for idx, root in enumerate(config.loras_roots, start=1):
if not os.path.exists(root):
logger.warning(f"Lora root path does not exist: {root}")
continue
preview_path = f'/loras_static/root{idx}/preview'
# Check if this root is a link path in the mappings
real_root = root
for target, link in config._path_mappings.items():
if os.path.normpath(link) == os.path.normpath(root):
# If so, route should point to the target (real path)
real_root = target
break
# Normalize and standardize path display for consistency
display_root = real_root.replace('\\', '/')
# Add static route for original path - use the normalized path
app.router.add_static(preview_path, real_root)
logger.info(f"Added static route {preview_path} -> {display_root}")
# Record route mapping with normalized path
config.add_route_mapping(real_root, preview_path)
added_targets.add(os.path.normpath(real_root))
# Add static routes for each checkpoint root
for idx, root in enumerate(config.checkpoints_roots, start=1):
if not os.path.exists(root):
logger.warning(f"Checkpoint root path does not exist: {root}")
continue
preview_path = f'/checkpoints_static/root{idx}/preview'
# Check if this root is a link path in the mappings
real_root = root
for target, link in config._path_mappings.items():
if os.path.normpath(link) == os.path.normpath(root):
# If so, route should point to the target (real path)
real_root = target
break
# Normalize and standardize path display for consistency
display_root = real_root.replace('\\', '/')
# Add static route for original path
app.router.add_static(preview_path, real_root)
logger.info(f"Added static route {preview_path} -> {display_root}")
# Record route mapping
config.add_route_mapping(real_root, preview_path)
added_targets.add(os.path.normpath(real_root))
# Add static routes for symlink target paths that aren't already covered
link_idx = {
'lora': 1,
'checkpoint': 1
}
for target_path, link_path in config._path_mappings.items():
norm_target = os.path.normpath(target_path)
if norm_target not in added_targets:
# Determine if this is a checkpoint or lora link based on path
is_checkpoint = any(os.path.normpath(cp_root) in os.path.normpath(link_path) for cp_root in config.checkpoints_roots)
is_checkpoint = is_checkpoint or any(os.path.normpath(cp_root) in norm_target for cp_root in config.checkpoints_roots)
if is_checkpoint:
route_path = f'/checkpoints_static/link_{link_idx["checkpoint"]}/preview'
link_idx["checkpoint"] += 1
else:
route_path = f'/loras_static/link_{link_idx["lora"]}/preview'
link_idx["lora"] += 1
# Display path with forward slashes for consistency
display_target = target_path.replace('\\', '/')
try:
app.router.add_static(route_path, Path(target_path).resolve(strict=False))
logger.info(f"Added static route for link target {route_path} -> {display_target}")
config.add_route_mapping(target_path, route_path)
added_targets.add(norm_target)
except Exception as e:
logger.warning(f"Failed to add static route on initialization for {target_path}: {e}")
continue
# Add static route for plugin assets
app.router.add_static('/loras_static', config.static_path)
# Setup feature routes
from py.routes.lora_routes import LoraRoutes
from py.routes.api_routes import ApiRoutes
from py.routes.recipe_routes import RecipeRoutes
from py.routes.checkpoints_routes import CheckpointsRoutes
from py.routes.update_routes import UpdateRoutes
from py.routes.misc_routes import MiscRoutes
from py.routes.example_images_routes import ExampleImagesRoutes
from py.routes.stats_routes import StatsRoutes
lora_routes = LoraRoutes()
checkpoints_routes = CheckpointsRoutes()
stats_routes = StatsRoutes()
# Initialize routes
lora_routes.setup_routes(app)
checkpoints_routes.setup_routes(app)
stats_routes.setup_routes(app)
ApiRoutes.setup_routes(app)
RecipeRoutes.setup_routes(app)
UpdateRoutes.setup_routes(app)
MiscRoutes.setup_routes(app)
ExampleImagesRoutes.setup_routes(app)
# Schedule service initialization
app.on_startup.append(lambda app: cls._initialize_services())
# Add cleanup
app.on_shutdown.append(cls._cleanup)
app.on_shutdown.append(ApiRoutes.cleanup)
def parse_args():
"""Parse command line arguments"""
parser = argparse.ArgumentParser(description="LoRA Manager Standalone Server")
parser.add_argument("--host", type=str, default="0.0.0.0",
help="Host address to bind the server to (default: 0.0.0.0)")
parser.add_argument("--port", type=int, default=8188,
help="Port to bind the server to (default: 8188, access via http://localhost:8188/loras)")
# parser.add_argument("--loras", type=str, nargs="+",
# help="Additional paths to LoRA model directories (optional if settings.json has paths)")
# parser.add_argument("--checkpoints", type=str, nargs="+",
# help="Additional paths to checkpoint model directories (optional if settings.json has paths)")
parser.add_argument("--log-level", type=str, default="INFO",
choices=["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"],
help="Logging level")
return parser.parse_args()
async def main():
"""Main entry point for standalone mode"""
args = parse_args()
# Set log level
logging.getLogger().setLevel(getattr(logging, args.log_level))
# Explicitly configure aiohttp access logger regardless of selected log level
logging.getLogger('aiohttp.access').setLevel(logging.WARNING)
# Create the server instance
server = StandaloneServer()
# Initialize routes via the standalone lora manager
StandaloneLoraManager.add_routes(server)
# Set up and start the server
await server.setup()
await server.start(host=args.host, port=args.port)
if __name__ == "__main__":
try:
# Run the main function
asyncio.run(main())
except KeyboardInterrupt:
logger.info("Server stopped by user")

View File

@@ -29,16 +29,29 @@ html, body {
:root {
--bg-color: #ffffff;
--text-color: #333333;
--text-muted: #6c757d;
--card-bg: #ffffff;
--border-color: #e0e0e0;
/* Color System */
--lora-accent: oklch(68% 0.28 256);
/* Color Components */
--lora-accent-l: 68%;
--lora-accent-c: 0.28;
--lora-accent-h: 256;
--lora-warning-l: 75%;
--lora-warning-c: 0.25;
--lora-warning-h: 80;
--lora-success-l: 70%;
--lora-success-c: 0.2;
--lora-success-h: 140;
/* Composed Colors */
--lora-accent: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
--lora-surface: oklch(100% 0 0 / 0.98);
--lora-border: oklch(90% 0.02 256 / 0.15);
--lora-text: oklch(95% 0.02 256);
--lora-error: oklch(75% 0.32 29);
--lora-warning: oklch(75% 0.25 80); /* Add warning color for deleted LoRAs */
--lora-warning: oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h)); /* Recomposed from channel variables so alpha variants can be derived */
--lora-success: oklch(var(--lora-success-l) var(--lora-success-c) var(--lora-success-h)); /* Green success color */
/* Spacing Scale */
--space-1: calc(8px * 1);
@@ -59,9 +72,20 @@ html, body {
--scrollbar-width: 8px; /* Scrollbar width variable */
}
html[data-theme="dark"] {
background-color: #1a1a1a !important;
color-scheme: dark;
}
html[data-theme="light"] {
background-color: #ffffff !important;
color-scheme: light;
}
[data-theme="dark"] {
--bg-color: #1a1a1a;
--text-color: #e0e0e0;
--text-muted: #a0a0a0;
--card-bg: #2d2d2d;
--border-color: #404040;
@@ -69,7 +93,7 @@ html, body {
--lora-surface: oklch(25% 0.02 256 / 0.98);
--lora-border: oklch(90% 0.02 256 / 0.15);
--lora-text: oklch(98% 0.02 256);
--lora-warning: oklch(75% 0.25 80); /* Add warning color for dark theme too */
--lora-warning: oklch(75% 0.25 80); /* Warning color for the dark theme */
}
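The point of splitting the accent color into separate -l/-c/-h component variables is that derived colors (tints, hover shades) can be composed without restating the base color. A small sketch of the pattern, using the variables defined above with an illustrative class name:

.example-chip {
/* Full-strength accent, composed from the components */
color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
/* Same hue at 10% opacity for a tinted background */
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
/* Slightly darker hover variant via calc() on the lightness component */
border-color: oklch(calc(var(--lora-accent-l) - 5%) var(--lora-accent-c) var(--lora-accent-h));
}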
body {

View File

@@ -0,0 +1,165 @@
/* Alphabet Bar Component */
.alphabet-bar-container {
position: fixed;
left: 0;
top: 50%;
transform: translateY(-50%);
z-index: 100;
display: flex;
transition: transform 0.3s ease;
}
.alphabet-bar-container.collapsed {
transform: translateY(-50%) translateX(-90%);
}
/* New visual indicator for when a letter is active and bar is collapsed */
.alphabet-bar-container.collapsed .toggle-alphabet-bar.has-active-letter {
border-color: var(--lora-accent);
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.15);
}
.alphabet-bar-container.collapsed .toggle-alphabet-bar.has-active-letter::after {
content: '';
position: absolute;
top: 7px;
right: 7px;
width: 8px;
height: 8px;
background-color: var(--lora-accent);
border-radius: 50%;
animation: pulse-active 2s infinite;
}
@keyframes pulse-active {
0% { transform: scale(0.8); opacity: 0.7; }
50% { transform: scale(1.1); opacity: 1; }
100% { transform: scale(0.8); opacity: 0.7; }
}
.alphabet-bar {
background: var(--card-bg);
border: 1px solid var(--border-color);
border-radius: 0 var(--border-radius-xs) var(--border-radius-xs) 0;
padding: 8px 4px;
display: flex;
flex-direction: column;
gap: 6px;
align-items: center;
box-shadow: 2px 0 8px rgba(0, 0, 0, 0.1);
max-height: 80vh;
overflow-y: auto;
scrollbar-width: thin;
}
.alphabet-bar::-webkit-scrollbar {
width: 4px;
}
.alphabet-bar::-webkit-scrollbar-thumb {
background: var(--border-color);
border-radius: 4px;
}
.toggle-alphabet-bar {
background: var(--card-bg);
border: 1px solid var(--border-color);
border-left: none;
border-radius: 0 var(--border-radius-xs) var(--border-radius-xs) 0;
padding: 8px 4px;
cursor: pointer;
display: flex;
align-items: center;
justify-content: center;
color: var(--text-color);
width: 20px;
height: 40px;
align-self: center;
box-shadow: 2px 0 8px rgba(0, 0, 0, 0.1);
}
.toggle-alphabet-bar:hover {
background: var(--bg-hover);
}
.toggle-alphabet-bar i {
transition: transform 0.3s ease;
}
.alphabet-bar-container.collapsed .toggle-alphabet-bar i {
transform: rotate(180deg);
}
.letter-chip {
padding: 4px 2px;
border-radius: var(--border-radius-xs);
background: var(--bg-color);
color: var(--text-color);
cursor: pointer;
min-width: 24px;
text-align: center;
font-size: 0.85em;
transition: all 0.2s ease;
border: 1px solid var(--border-color);
}
.letter-chip:hover {
background: var(--lora-accent);
color: white;
transform: scale(1.1);
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
.letter-chip.active {
background: var(--lora-accent);
color: white;
border-color: var(--lora-accent);
}
.letter-chip.disabled {
opacity: 0.5;
pointer-events: none;
cursor: default;
}
/* Hide the count by default, only show in tooltip */
.letter-chip .count {
display: none;
}
.alphabet-bar-title {
font-size: 0.75em;
color: var(--text-color);
opacity: 0.7;
margin-bottom: 6px;
writing-mode: vertical-lr;
transform: rotate(180deg);
white-space: nowrap;
}
@media (max-width: 768px) {
.alphabet-bar-container {
transform: translateY(-50%) translateX(-90%);
}
.alphabet-bar-container.active {
transform: translateY(-50%) translateX(0);
}
.letter-chip {
padding: 3px 1px;
min-width: 20px;
font-size: 0.75em;
}
}
/* Keyframe animations for the active letter */
@keyframes pulse {
0% { transform: scale(1); }
50% { transform: scale(1.1); }
100% { transform: scale(1); }
}
.letter-chip.active {
animation: pulse 1s ease-in-out 1;
}

View File

@@ -60,6 +60,18 @@
border-color: var(--lora-accent);
}
/* Danger button style - updated to use proper theme variables */
.bulk-operations-actions button.danger-btn {
background: oklch(70% 0.2 29); /* Light red background that works in both themes */
color: oklch(98% 0.01 0); /* Almost white text for good contrast */
border-color: var(--lora-error);
}
.bulk-operations-actions button.danger-btn:hover {
background: var(--lora-error);
color: oklch(100% 0 0); /* Pure white text on hover for maximum contrast */
}
/* Style for selected cards */
.lora-card.selected {
box-shadow: 0 0 0 2px var(--lora-accent);
@@ -262,83 +274,6 @@
background: var(--lora-accent);
}
/* NSFW Level Selector */
.nsfw-level-selector {
position: fixed;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
background: var(--card-bg);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-base);
padding: 16px;
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.2);
z-index: var(--z-modal);
width: 300px;
display: none;
}
.nsfw-level-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 16px;
}
.nsfw-level-header h3 {
margin: 0;
font-size: 16px;
font-weight: 500;
}
.close-nsfw-selector {
background: transparent;
border: none;
color: var(--text-color);
cursor: pointer;
padding: 4px;
border-radius: var(--border-radius-xs);
}
.close-nsfw-selector:hover {
background: var(--border-color);
}
.current-level {
margin-bottom: 12px;
padding: 8px;
background: var(--bg-color);
border-radius: var(--border-radius-xs);
border: 1px solid var(--border-color);
}
.nsfw-level-options {
display: flex;
flex-wrap: wrap;
gap: 8px;
}
.nsfw-level-btn {
flex: 1 0 calc(33% - 8px);
padding: 8px;
border-radius: var(--border-radius-xs);
background: var(--bg-color);
border: 1px solid var(--border-color);
color: var(--text-color);
cursor: pointer;
transition: all 0.2s ease;
}
.nsfw-level-btn:hover {
background: var(--lora-border);
}
.nsfw-level-btn.active {
background: var(--lora-accent);
color: white;
border-color: var(--lora-accent);
}
/* Mobile optimizations */
@media (max-width: 768px) {
.selected-thumbnails-strip {

View File

@@ -1,14 +1,17 @@
/* Card grid layout */
.card-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(260px, 1fr)); /* Adjusted from 320px */
gap: 12px; /* Reduced from var(--space-2) for tighter horizontal spacing */
grid-template-columns: repeat(auto-fill, minmax(260px, 1fr)); /* Base size */
gap: 12px; /* Consistent gap for both row and column spacing */
row-gap: 20px; /* Increase vertical spacing between rows */
margin-top: var(--space-2);
padding-top: 4px; /* Top padding to give the hover animation room */
padding-bottom: 4px; /* Bottom padding to give the hover animation room */
max-width: 1400px; /* Container width control */
width: 100%; /* Ensure it takes full width of container */
max-width: 1400px; /* Base container width */
margin-left: auto;
margin-right: auto;
box-sizing: border-box; /* Include padding in width calculation */
}
.lora-card {
@@ -17,13 +20,14 @@
border-radius: var(--border-radius-base);
backdrop-filter: blur(16px);
transition: transform 160ms ease-out;
aspect-ratio: 896/1152;
max-width: 260px; /* Adjusted from 320px to fit 5 cards */
aspect-ratio: 896/1152; /* Preserve aspect ratio */
max-width: 260px; /* Base size */
width: 100%;
margin: 0 auto;
cursor: pointer; /* Added from recipe-card */
display: flex; /* Added from recipe-card */
flex-direction: column; /* Added from recipe-card */
overflow: hidden; /* Add overflow hidden to contain children */
cursor: pointer;
display: flex;
flex-direction: column;
overflow: hidden;
}
.lora-card:hover {
@@ -36,6 +40,30 @@
outline-offset: 2px;
}
/* Responsive adjustments for 1440p screens (2K) */
@media (min-width: 2000px) {
.card-grid {
max-width: 1800px; /* Increased for 2K screens */
grid-template-columns: repeat(auto-fill, minmax(270px, 1fr));
}
.lora-card {
max-width: 270px;
}
}
/* Responsive adjustments for 4K screens */
@media (min-width: 3000px) {
.card-grid {
max-width: 2400px; /* Increased for 4K screens */
grid-template-columns: repeat(auto-fill, minmax(280px, 1fr));
}
.lora-card {
max-width: 280px;
}
}
/* Responsive adjustments */
@media (max-width: 1400px) {
.card-grid {
@@ -58,6 +86,42 @@
min-height: 0; /* Fix for potential flexbox sizing issue in Firefox */
}
/* Smaller text for medium density */
.medium-density .model-name {
font-size: 0.95em;
max-height: 3em; /* Increased from 2.6em */
}
.medium-density .base-model-label {
font-size: 0.85em;
max-width: 120px;
}
.medium-density .card-actions i {
font-size: 0.98em;
padding: 4px;
}
/* Smaller text for compact mode */
.compact-density .model-name {
font-size: 0.9em;
max-height: 2.8em; /* Increased from 2.4em */
}
.compact-density .base-model-label {
font-size: 0.8em;
max-width: 110px;
}
.compact-density .card-actions i {
font-size: 0.95em;
padding: 3px;
}
.compact-density .model-info {
padding-bottom: 2px;
}
.card-preview img,
.card-preview video {
width: 100%;
@@ -103,6 +167,38 @@
text-shadow: 1px 1px 1px rgba(0, 0, 0, 0.5);
}
/* NSFW warning adjustments for medium density */
.medium-density .nsfw-warning {
padding: calc(var(--space-2) * 0.85);
max-width: 70%;
}
.medium-density .nsfw-warning p {
font-size: 0.95em;
margin-bottom: calc(var(--space-1) * 0.85);
}
.medium-density .show-content-btn {
font-size: 0.85em;
padding: 3px calc(var(--space-1) * 0.85);
}
/* NSFW warning adjustments for compact density */
.compact-density .nsfw-warning {
padding: calc(var(--space-2) * 0.7);
max-width: 60%;
}
.compact-density .nsfw-warning p {
font-size: 0.85em;
margin-bottom: calc(var(--space-1) * 0.7);
}
.compact-density .show-content-btn {
font-size: 0.8em;
padding: 2px var(--space-1);
}
.toggle-blur-btn {
position: absolute;
left: var(--space-1);
@@ -156,6 +252,18 @@
z-index: 3;
}
/* New styles for hover reveal mode */
.hover-reveal .card-header,
.hover-reveal .card-footer {
opacity: 0;
transition: opacity 0.2s ease;
}
.hover-reveal .lora-card:hover .card-header,
.hover-reveal .lora-card:hover .card-footer {
opacity: 1;
}
.card-footer {
position: absolute;
bottom: 0;
@@ -192,12 +300,43 @@
margin-left: var(--space-1);
cursor: pointer;
color: white;
transition: opacity 0.2s;
font-size: 0.9em;
transition: opacity 0.2s, transform 0.15s ease;
font-size: 1.0em; /* Increased from 0.9em for better visibility */
width: 16px; /* Fixed width for consistent spacing */
height: 16px; /* Fixed height for larger touch target */
display: flex;
align-items: center;
justify-content: center;
border-radius: 50%;
padding: 4px; /* Add padding to increase clickable area */
box-sizing: content-box; /* Ensure padding adds to dimensions */
position: relative; /* For proper positioning */
margin: 0; /* Reset margin */
}
.card-actions i::before {
position: absolute; /* Position the icon glyph */
top: 50%;
left: 50%;
transform: translate(-50%, -50%); /* Center the icon */
}
.card-actions {
display: flex;
gap: var(--space-1); /* Use gap instead of margin for spacing between icons */
align-items: center;
}
.card-actions i:hover {
opacity: 0.8;
opacity: 0.9;
transform: scale(1.1);
background-color: rgba(255, 255, 255, 0.1);
}
/* Style for active favorites */
.favorite-active {
color: #ffc107 !important; /* Gold color for favorites */
text-shadow: 0 0 5px rgba(255, 193, 7, 0.5);
}
/* Responsive design */
@@ -236,21 +375,24 @@
text-decoration: none;
}
/* Updated model name to fix text cutoff issues */
.model-name {
font-weight: bold;
text-shadow: 1px 1px 2px rgba(0, 0, 0, 0.5);
font-size: 0.95em;
word-break: break-word;
display: block;
max-height: 2.8em;
max-height: 3em; /* Increased to ensure two full lines */
overflow: hidden;
/* Add line height for consistency */
line-height: 1.4;
}
.model-info {
flex: 1;
min-width: 0;
overflow: hidden;
padding-bottom: 4px;
padding-bottom: 6px; /* Increased from 4px to give more room for text */
}
.base-model {
@@ -282,28 +424,23 @@
font-size: 0.85em;
}
/* Recipe specific elements - migrated from recipe-card.css */
.recipe-indicator {
position: absolute;
top: 6px;
left: 8px;
width: 24px;
height: 24px;
background: var(--lora-primary);
border-radius: 50%;
display: flex;
align-items: center;
justify-content: center;
color: white;
font-weight: bold;
z-index: 2;
}
.base-model-wrapper {
display: flex;
align-items: center;
gap: 8px;
margin-left: 32px; /* For accommodating the recipe indicator */
/* Prevent text selection on cards and interactive elements */
.lora-card,
.lora-card *,
.card-actions,
.card-actions i,
.toggle-blur-btn,
.show-content-btn,
.card-preview img,
.card-preview video,
.card-footer,
.card-header,
.model-name,
.base-model-label {
-webkit-user-select: none;
-moz-user-select: none;
-ms-user-select: none;
user-select: none;
}
.lora-count {
@@ -331,4 +468,84 @@
padding: 2rem;
background: var(--lora-surface-alt);
border-radius: var(--border-radius-base);
}
}
/* Virtual scrolling specific styles - updated */
.virtual-scroll-item {
position: absolute;
box-sizing: border-box;
transition: transform 160ms ease-out;
margin: 0; /* Remove margins, positioning is handled by VirtualScroller */
width: 100%; /* Allow width to be set by the VirtualScroller */
}
.virtual-scroll-item:hover {
transform: translateY(-2px); /* Keep hover effect */
z-index: 1; /* Ensure hovered items appear above others */
}
/* When using virtual scroll, adjust container */
.card-grid.virtual-scroll {
display: block;
position: relative;
margin: 0 auto;
padding: 4px 0; /* Add top/bottom padding equivalent to card padding */
height: auto;
width: 100%;
max-width: 1400px; /* Keep the max-width from original grid */
box-sizing: border-box; /* Include padding in width calculation */
overflow-x: hidden; /* Prevent horizontal overflow */
}
/* For larger screens, allow more space for the cards */
@media (min-width: 2000px) {
.card-grid.virtual-scroll {
max-width: 1800px;
}
}
@media (min-width: 3000px) {
.card-grid.virtual-scroll {
max-width: 2400px;
}
}
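For context, .virtual-scroll-item is absolutely positioned because the JavaScript scroller computes each card's offset itself instead of letting the grid lay cards out. A rough sketch of that idea follows; the card dimensions and function name are assumptions, and the project's actual VirtualScroller is more involved.

// Minimal sketch: position cards by transform, as the CSS above expects.
// Card size, gap, and the helper name are illustrative assumptions.
function positionCards(cards, containerWidth, cardW = 260, cardH = 334, gap = 12) {
  const cols = Math.max(1, Math.floor((containerWidth + gap) / (cardW + gap)));
  cards.forEach((card, i) => {
    const col = i % cols;
    const row = Math.floor(i / cols);
    // Offsets are computed here, which is why the CSS zeroes out margins.
    card.style.transform =
      `translate(${col * (cardW + gap)}px, ${row * (cardH + gap)}px)`;
  });
}

// Usage: positionCards(document.querySelectorAll('.virtual-scroll-item'), 1400);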
/* Add after the existing .lora-card:hover styles */
@keyframes update-pulse {
0% { box-shadow: 0 0 0 0 var(--lora-accent-transparent); }
50% { box-shadow: 0 0 0 4px var(--lora-accent-transparent); }
100% { box-shadow: 0 0 0 0 var(--lora-accent-transparent); }
}
/* Add semi-transparent version of accent color for animation */
:root {
--lora-accent-transparent: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.6);
}
.lora-card.updated {
animation: update-pulse 1.2s ease-out;
}
/* Add a subtle updated tag that fades in and out */
.update-indicator {
position: absolute;
top: 8px;
right: 8px;
background: var(--lora-accent);
color: white;
border-radius: var(--border-radius-xs);
padding: 3px 6px;
font-size: 0.75em;
opacity: 0;
transform: translateY(-5px);
z-index: 4;
animation: update-tag 1.8s ease-out forwards;
}
@keyframes update-tag {
0% { opacity: 0; transform: translateY(-5px); }
15% { opacity: 1; transform: translateY(0); }
85% { opacity: 1; transform: translateY(0); }
100% { opacity: 0; transform: translateY(0); }
}

View File

@@ -95,7 +95,7 @@
flex: 1;
}
.version-info {
.version-content .version-info {
display: flex;
flex-wrap: wrap;
flex-direction: row !important;
@@ -104,7 +104,7 @@
font-size: 0.9em;
}
.version-info .base-model {
.version-content .version-info .base-model {
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
color: var(--lora-accent);
padding: 2px 8px;
@@ -190,14 +190,6 @@
border-color: var(--lora-border);
}
/* Add disabled button styles */
.primary-btn.disabled {
background-color: var(--border-color);
color: var(--text-color);
opacity: 0.7;
cursor: not-allowed;
}
/* Enhance the local badge to make it more noticeable */
.version-item.exists-locally {
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.05);

View File

@@ -0,0 +1,608 @@
/* Duplicates Management Styles */
/* Duplicates banner */
.duplicates-banner {
position: sticky; /* Keep the sticky position */
top: var(--space-1);
width: 100%;
background-color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1); /* Use accent color with low opacity */
color: var(--text-color);
border-top: 1px solid oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.3); /* Add top border with accent color */
border-bottom: 1px solid oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.4); /* Make bottom border stronger */
z-index: var(--z-overlay);
padding: 12px 0;
box-shadow: 0 3px 10px rgba(0, 0, 0, 0.2); /* Stronger shadow */
transition: all 0.3s ease;
margin-bottom: 20px;
}
.duplicates-banner .banner-content {
position: relative;
max-width: 1400px;
margin: 0 auto;
display: flex;
align-items: center;
gap: 12px;
padding: 0 16px;
}
/* Responsive container for larger screens - match container in layout.css */
@media (min-width: 2000px) {
.duplicates-banner .banner-content {
max-width: 1800px;
}
}
@media (min-width: 3000px) {
.duplicates-banner .banner-content {
max-width: 2400px;
}
}
.duplicates-banner i.fa-exclamation-triangle {
font-size: 18px;
color: oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h));
}
.duplicates-banner .banner-actions {
margin-left: auto;
display: flex;
gap: 8px;
align-items: center;
}
/* Improved exit button in banner */
.duplicates-banner button.btn-exit-mode {
min-width: 120px;
background-color: var(--card-bg);
color: var(--text-color);
border: 1px solid var(--border-color);
padding: 6px 12px;
border-radius: var(--border-radius-xs);
font-size: 0.85em;
cursor: pointer;
display: flex;
align-items: center;
justify-content: center;
gap: 6px;
transition: all 0.2s ease;
}
.duplicates-banner button.btn-exit-mode:hover {
background-color: var(--bg-color);
border-color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
transform: translateY(-1px);
}
.duplicates-banner button {
min-width: 100px;
display: flex;
align-items: center;
justify-content: center;
gap: 4px;
border-radius: var(--border-radius-xs);
padding: 4px 10px;
border: 1px solid var(--border-color);
background: var(--card-bg);
color: var(--text-color);
font-size: 0.85em;
transition: all 0.2s ease;
cursor: pointer;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05);
}
.duplicates-banner button:hover {
border-color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
background: var(--bg-color);
transform: translateY(-1px);
box-shadow: 0 3px 5px rgba(0, 0, 0, 0.08);
}
.duplicates-banner button.btn-exit {
min-width: unset;
width: 28px;
height: 28px;
padding: 0;
display: flex;
align-items: center;
justify-content: center;
border-radius: 50%;
}
.duplicates-banner button.disabled {
opacity: 0.5;
cursor: not-allowed;
}
/* Duplicate groups */
.duplicate-group {
position: relative;
border: 2px solid oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h));
border-radius: var(--border-radius-base);
padding: 16px;
margin-bottom: 24px;
background: var(--card-bg);
box-shadow: 0 2px 6px rgba(0, 0, 0, 0.12); /* Add subtle shadow to groups */
/* Add responsive width settings to match banner */
max-width: 1400px;
margin-left: auto;
margin-right: auto;
}
/* Add responsive container adjustments for duplicate groups - match container in banner */
@media (min-width: 2000px) {
.duplicate-group {
max-width: 1800px;
}
}
@media (min-width: 3000px) {
.duplicate-group {
max-width: 2400px;
}
}
.duplicate-group-header {
background-color: var(--bg-color);
color: var(--text-color);
border: 1px solid var(--border-color);
padding: 10px 16px; /* Slightly increased padding */
border-radius: var(--border-radius-xs);
margin-bottom: 16px;
display: flex;
justify-content: space-between;
align-items: center;
border-left: 4px solid oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h)); /* Add accent border on the left */
}
.duplicate-group-header span:last-child {
display: flex;
gap: 8px;
align-items: center;
}
.duplicate-group-header button {
min-width: 80px;
display: flex;
align-items: center;
justify-content: center;
gap: 4px;
border-radius: var(--border-radius-xs);
padding: 4px 8px;
border: 1px solid var(--border-color);
background: var(--card-bg);
color: var(--text-color);
font-size: 0.85em;
transition: all 0.2s ease;
cursor: pointer;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.05);
margin-left: 8px;
}
.duplicate-group-header button:hover {
border-color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
background: var(--bg-color);
transform: translateY(-1px);
box-shadow: 0 3px 5px rgba(0, 0, 0, 0.08);
}
.card-group-container {
display: flex;
flex-wrap: wrap;
gap: 16px;
justify-content: flex-start;
align-items: flex-start;
}
/* Make cards in duplicate groups have consistent width */
.card-group-container .lora-card {
flex: 0 0 auto;
width: 240px;
margin: 0;
cursor: pointer; /* Indicate the card is clickable */
}
/* Ensure the grid layout is only applied to the main recipe grid, not duplicate groups */
.duplicate-mode .card-grid {
display: block;
}
/* Scrollable container for large duplicate groups */
.card-group-container.scrollable {
max-height: 450px;
overflow-y: auto;
padding-right: 8px;
}
/* Add a toggle button to expand/collapse large duplicate groups */
.group-toggle-btn {
position: absolute;
right: 16px;
bottom: -12px;
background: var(--card-bg);
color: var(--text-color);
border: 1px solid var(--border-color);
border-radius: 50%;
width: 24px;
height: 24px;
display: flex;
align-items: center;
justify-content: center;
cursor: pointer;
z-index: 1;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
transition: all 0.2s ease;
}
.group-toggle-btn:hover {
border-color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
transform: translateY(-1px);
box-shadow: 0 3px 5px rgba(0, 0, 0, 0.08);
}
/* Duplicate card styling */
.lora-card.duplicate {
position: relative;
transition: all 0.2s ease;
}
.lora-card.duplicate:hover {
border-color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
}
.lora-card.duplicate.latest {
border-style: solid;
border-color: oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h));
}
.lora-card.duplicate-selected {
border: 2px solid oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
box-shadow: 0 0 8px rgba(0, 0, 0, 0.2);
}
.lora-card .selector-checkbox {
position: absolute;
top: 10px;
right: 10px;
z-index: 10;
width: 20px;
height: 20px;
cursor: pointer;
}
/* Latest indicator */
.lora-card.duplicate.latest::after {
content: "Latest";
position: absolute;
top: 10px;
left: 10px;
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
color: white;
font-size: 12px;
padding: 2px 6px;
border-radius: var(--border-radius-xs);
z-index: 5;
}
/* Model tooltip for duplicates mode */
.model-tooltip {
position: absolute;
background-color: var(--card-bg);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-sm);
box-shadow: 0 2px 10px rgba(0,0,0,0.2);
padding: 10px;
z-index: 1000;
max-width: 350px;
min-width: 250px;
color: var(--text-color);
font-size: 0.9em;
pointer-events: none; /* Don't block mouse events */
}
.model-tooltip .tooltip-header {
font-weight: bold;
font-size: 1.1em;
margin-bottom: 8px;
padding-bottom: 5px;
border-bottom: 1px solid var(--border-color);
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.model-tooltip .tooltip-info div {
margin-bottom: 4px;
display: flex;
flex-wrap: wrap;
word-break: break-all; /* Ensure long hashes wrap properly */
}
.model-tooltip .tooltip-info div strong {
margin-right: 5px;
min-width: 70px;
}
/* Hash mismatch info in tooltip */
.hash-mismatch-info {
margin-top: 8px;
padding-top: 8px;
border-top: 1px dashed var(--border-color);
color: oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h));
font-weight: bold;
word-break: break-all; /* Ensure long hashes wrap properly */
}
/* Verification Badge Styles */
.verification-badge {
display: inline-flex;
align-items: center;
margin-left: 8px;
padding: 2px 6px;
font-size: 0.8em;
border-radius: var(--border-radius-xs);
font-weight: normal;
}
.verification-badge.metadata {
background-color: var(--bg-color);
border: 1px solid var(--border-color);
color: var(--text-color);
}
.verification-badge.verified {
background-color: oklch(70% 0.2 140); /* Green for verified */
color: white;
}
.verification-badge.mismatch {
background-color: oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h));
color: white;
}
.verification-badge i {
margin-right: 4px;
}
/* Hash Mismatch Styling */
.lora-card.duplicate.hash-mismatch {
border: 2px dashed oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h));
opacity: 0.85;
position: relative;
}
.lora-card.duplicate.hash-mismatch::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: repeating-linear-gradient(
45deg,
oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h) / 0.05),
oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h) / 0.05) 10px,
transparent 10px,
transparent 20px
);
z-index: 1;
pointer-events: none;
}
.lora-card.duplicate.hash-mismatch .card-preview {
filter: grayscale(20%);
}
/* Mismatch Badge */
.mismatch-badge {
position: absolute;
top: 10px;
left: 10px; /* Changed from right:10px to left:10px */
background: oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h));
color: white;
font-size: 12px;
padding: 3px 8px;
border-radius: var(--border-radius-xs);
z-index: 5;
}
/* Disabled checkbox style */
.lora-card.duplicate.hash-mismatch .selector-checkbox {
opacity: 0.5;
cursor: not-allowed;
}
/* Verify hash button styling */
.btn-verify-hashes {
display: flex;
align-items: center;
gap: 6px;
padding: 4px 10px;
background: var(--card-bg);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-xs);
font-size: 0.85em;
cursor: pointer;
transition: all 0.2s ease;
}
.btn-verify-hashes:hover {
background: var(--bg-color);
border-color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
transform: translateY(-1px);
}
.btn-verify-hashes i {
font-size: 0.9em;
}
/* Badge Styles */
.badge {
display: inline-flex;
align-items: center;
justify-content: center;
min-width: 16px; /* Reduced from 20px */
height: 16px; /* Reduced from 20px */
border-radius: 8px; /* Adjusted for smaller size */
background-color: var(--lora-error);
color: white;
font-size: 10px; /* Smaller font size */
font-weight: bold;
padding: 0 4px; /* Reduced padding */
position: absolute;
top: -8px; /* Moved closer to button */
right: -8px; /* Moved closer to button */
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.15); /* Softer shadow */
transition: transform 0.2s ease, opacity 0.2s ease;
}
.badge:empty {
display: none;
}
/* Make the pulse animation more subtle */
.badge.pulse {
animation: badge-pulse 2s infinite; /* Slower animation */
}
@keyframes badge-pulse {
0% {
transform: scale(1);
}
50% {
transform: scale(1.1); /* Less expansion */
}
100% {
transform: scale(1);
}
}
/* Help icon styling */
.help-icon {
color: var(--text-color);
opacity: 0.7;
cursor: help;
font-size: 16px;
margin-left: 8px;
transition: all 0.2s ease;
}
.help-icon:hover {
opacity: 1;
color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h));
}
/* Help tooltip */
.help-tooltip {
display: none;
position: absolute;
max-width: 400px;
background: var(--card-bg);
color: var(--text-color);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-sm);
padding: 12px 16px;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);
z-index: var(--z-overlay);
font-size: 0.9em;
margin-top: 10px;
text-align: left;
pointer-events: none;
}
.help-tooltip:after {
content: "";
position: absolute;
top: -8px;
left: 10px; /* Position the arrow near the left instead of center */
border-width: 0 8px 8px 8px;
border-style: solid;
border-color: transparent transparent var(--card-bg) transparent;
}
/* Responsive adjustments */
@media (max-width: 768px) {
.duplicates-banner .banner-content {
flex-direction: column;
align-items: flex-start;
gap: 8px;
}
.duplicates-banner .banner-actions {
width: 100%;
margin-left: 0;
justify-content: space-between;
}
.duplicate-group-header {
flex-direction: column;
gap: 8px;
align-items: flex-start;
}
.duplicate-group-header span:last-child {
display: flex;
gap: 8px;
width: 100%;
}
.duplicate-group-header button {
margin-left: 0;
flex: 1;
}
.help-tooltip {
max-width: calc(100% - 40px);
}
/* Remove the fixed positioning adjustments for mobile since we're now using dynamic positioning */
.help-tooltip:after {
left: 10px;
}
}
/* In dark mode, add additional distinction */
html[data-theme="dark"] .duplicates-banner {
box-shadow: 0 3px 12px rgba(0, 0, 0, 0.4); /* Stronger shadow in dark mode */
background-color: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.15); /* Slightly stronger background in dark mode */
}
html[data-theme="dark"] .duplicate-group {
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.25); /* Stronger shadow in dark mode */
}
html[data-theme="dark"] .help-tooltip {
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);
}
/* Styles for disabled controls during duplicates mode */
.disabled-during-duplicates {
opacity: 0.5 !important;
pointer-events: none !important;
cursor: not-allowed !important;
user-select: none !important;
filter: grayscale(50%) !important;
}
/* Make the active duplicates button more prominent */
#findDuplicatesBtn.active {
background: var(--lora-accent);
color: white;
border-color: var(--lora-accent);
box-shadow: 0 0 0 2px oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.25);
position: relative;
z-index: 5;
}
#findDuplicatesBtn.active:hover {
background: oklch(calc(var(--lora-accent-l) - 5%) var(--lora-accent-c) var(--lora-accent-h));
}

View File

@@ -79,6 +79,50 @@
flex: 1;
max-width: 400px;
margin: 0 1rem;
transition: opacity 0.2s ease;
}
/* Disabled state for header search */
.header-search.disabled {
opacity: 0.5;
pointer-events: none;
}
.header-search.disabled input {
background-color: var(--input-disabled-bg, #f5f5f5);
color: var(--text-muted);
cursor: not-allowed;
}
.header-search.disabled button {
background-color: var(--button-disabled-bg, #e0e0e0);
color: var(--text-muted);
cursor: not-allowed;
}
.header-search.disabled .search-icon {
color: var(--text-muted);
}
/* Dark theme specific styles for disabled header search */
[data-theme="dark"] .header-search.disabled input {
background-color: #3a3a3a;
color: #888888;
border-color: #555555;
}
[data-theme="dark"] .header-search.disabled button {
background-color: #3a3a3a;
color: #888888;
border-color: #555555;
}
[data-theme="dark"] .header-search.disabled .search-icon {
color: #888888;
}
[data-theme="dark"] .header-search.disabled .fas {
color: #888888;
}
/* Header controls (formerly corner controls) */
@@ -115,7 +159,8 @@
}
.theme-toggle .light-icon,
.theme-toggle .dark-icon {
.theme-toggle .dark-icon,
.theme-toggle .auto-icon {
position: absolute;
top: 50%;
left: 50%;
@@ -124,15 +169,62 @@
transition: opacity 0.3s ease;
}
/* Default state shows dark icon */
.theme-toggle .dark-icon {
opacity: 1;
}
[data-theme="light"] .theme-toggle .light-icon {
/* Light theme shows light icon */
.theme-toggle.theme-light .light-icon {
opacity: 1;
}
[data-theme="light"] .theme-toggle .dark-icon {
.theme-toggle.theme-light .dark-icon,
.theme-toggle.theme-light .auto-icon {
opacity: 0;
}
/* Dark theme shows dark icon */
.theme-toggle.theme-dark .dark-icon {
opacity: 1;
}
.theme-toggle.theme-dark .light-icon,
.theme-toggle.theme-dark .auto-icon {
opacity: 0;
}
/* Auto theme shows auto icon */
.theme-toggle.theme-auto .auto-icon {
opacity: 1;
}
.theme-toggle.theme-auto .light-icon,
.theme-toggle.theme-auto .dark-icon {
opacity: 0;
}
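These three toggle states are driven from JavaScript by swapping a theme-* class on the button and setting data-theme on the root element. A hedged sketch of that wiring, under the assumption of a single .theme-toggle button and a localStorage key (both illustrative, not necessarily the project's code):

// Illustrative three-state toggle: light -> dark -> auto.
const THEME_MODES = ['light', 'dark', 'auto'];

function applyTheme(mode) {
  // 'auto' resolves to the OS preference; light/dark are applied directly.
  const resolved = mode === 'auto'
    ? (window.matchMedia('(prefers-color-scheme: dark)').matches ? 'dark' : 'light')
    : mode;
  document.documentElement.setAttribute('data-theme', resolved);
  const btn = document.querySelector('.theme-toggle');
  btn.classList.remove('theme-light', 'theme-dark', 'theme-auto');
  btn.classList.add('theme-' + mode); // drives which icon the CSS shows
}

document.querySelector('.theme-toggle').addEventListener('click', () => {
  const current = localStorage.getItem('theme') || 'auto';
  const next = THEME_MODES[(THEME_MODES.indexOf(current) + 1) % THEME_MODES.length];
  localStorage.setItem('theme', next);
  applyTheme(next);
});

applyTheme(localStorage.getItem('theme') || 'auto');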
/* Badge styling */
.update-badge {
position: absolute;
top: -3px;
right: -3px;
width: 8px;
height: 8px;
background-color: var(--lora-error);
border-radius: 50%;
border: 2px solid var(--card-bg);
transition: all 0.2s ease;
pointer-events: none;
opacity: 0;
}
.update-badge.visible {
opacity: 1;
}
.update-badge.hidden,
.update-badge:not(.visible) {
opacity: 0;
}

View File

@@ -291,7 +291,7 @@
gap: 8px;
padding: var(--space-1);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-sm);
border-radius: var(--border-radius-sm);
background: var(--lora-surface);
}
@@ -733,3 +733,150 @@
font-size: 0.9em;
line-height: 1.4;
}
/* Duplicate Recipes Styles */
.duplicate-recipes-container {
margin-bottom: var(--space-3);
border-radius: var(--border-radius-sm);
overflow: hidden;
animation: fadeIn 0.3s ease-in-out;
}
@keyframes fadeIn {
from { opacity: 0; transform: translateY(-10px); }
to { opacity: 1; transform: translateY(0); }
}
.duplicate-warning {
display: flex;
align-items: flex-start;
gap: 12px;
padding: 12px 16px;
background: oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h) / 0.1);
border: 1px solid var(--lora-warning);
border-radius: var(--border-radius-sm) var(--border-radius-sm) 0 0;
color: var(--text-color);
}
.duplicate-warning .warning-icon {
color: var(--lora-warning);
font-size: 1.2em;
padding-top: 2px;
}
.duplicate-warning .warning-content {
flex: 1;
}
.duplicate-warning .warning-title {
font-weight: 600;
margin-bottom: 4px;
}
.duplicate-warning .warning-text {
font-size: 0.9em;
line-height: 1.4;
display: flex;
justify-content: space-between;
align-items: center;
flex-wrap: wrap;
gap: 8px;
}
.toggle-duplicates-btn {
background: none;
border: none;
color: var(--lora-warning);
cursor: pointer;
font-size: 0.9em;
display: flex;
align-items: center;
gap: 6px;
padding: 4px 8px;
border-radius: var(--border-radius-xs);
}
.toggle-duplicates-btn:hover {
background: oklch(var(--lora-warning-l) var(--lora-warning-c) var(--lora-warning-h) / 0.1);
}
.duplicate-recipes-list {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
gap: 12px;
padding: 16px;
border: 1px solid var(--border-color);
border-top: none;
border-radius: 0 0 var(--border-radius-sm) var(--border-radius-sm);
background: var(--bg-color);
max-height: 300px;
overflow-y: auto;
transition: max-height 0.3s ease, padding 0.3s ease;
}
.duplicate-recipes-list.collapsed {
max-height: 0;
padding: 0 16px;
overflow: hidden;
}
.duplicate-recipe-card {
position: relative;
border-radius: var(--border-radius-sm);
overflow: hidden;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
transition: transform 0.2s ease;
}
.duplicate-recipe-card:hover {
transform: translateY(-2px);
}
.duplicate-recipe-preview {
width: 100%;
position: relative;
aspect-ratio: 2/3;
background: var(--bg-color);
}
.duplicate-recipe-preview img {
width: 100%;
height: 100%;
object-fit: cover;
}
.duplicate-recipe-title {
position: absolute;
bottom: 0;
left: 0;
right: 0;
padding: 8px;
background: rgba(0, 0, 0, 0.7);
color: white;
font-size: 0.85em;
line-height: 1.3;
max-height: 50%;
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
}
.duplicate-recipe-details {
padding: 8px;
background: var(--bg-color);
font-size: 0.75em;
display: flex;
justify-content: space-between;
align-items: center;
color: var(--text-color);
opacity: 0.8;
}
.duplicate-recipe-date,
.duplicate-recipe-lora-count {
display: flex;
align-items: center;
gap: 4px;
}

View File

@@ -0,0 +1,96 @@
/* Keyboard navigation indicator and help */
.keyboard-nav-hint {
display: inline-flex;
align-items: center;
justify-content: center;
position: relative;
width: 32px;
height: 32px;
border-radius: 50%;
background: var(--card-bg);
border: 1px solid var(--border-color);
color: var(--text-color);
cursor: help;
transition: all 0.2s ease;
margin-left: 8px;
}
.keyboard-nav-hint:hover {
background: var(--lora-accent);
color: white;
transform: translateY(-2px);
box-shadow: 0 3px 5px rgba(0, 0, 0, 0.08);
}
.keyboard-nav-hint i {
font-size: 14px;
}
/* Tooltip styling */
.tooltip {
position: relative;
}
.tooltip .tooltiptext {
visibility: hidden;
width: 240px;
background-color: var(--lora-surface);
color: var(--text-color);
text-align: center;
border-radius: var(--border-radius-xs);
padding: 8px;
position: absolute;
z-index: 9999; /* Ensure the tooltip renders above the cards */
left: 120%; /* Show the tooltip to the right of the icon */
top: 50%; /* Vertically centered */
transform: translateY(-50%); /* Vertically centered */
opacity: 0;
transition: opacity 0.3s;
box-shadow: 0 3px 8px rgba(0, 0, 0, 0.15);
border: 1px solid var(--lora-border);
font-size: 0.85em;
line-height: 1.4;
}
.tooltip .tooltiptext::after {
content: "";
position: absolute;
top: 50%; /* Vertically center the arrow */
right: 100%; /* Arrow sits on the left side */
margin-top: -5px;
border-width: 5px;
border-style: solid;
border-color: transparent var(--lora-border) transparent transparent; /* Arrow points left */
}
.tooltip:hover .tooltiptext {
visibility: visible;
opacity: 1;
}
/* Keyboard shortcuts table */
.keyboard-shortcuts {
width: 100%;
border-collapse: collapse;
margin-top: 5px;
}
.keyboard-shortcuts td {
padding: 4px;
text-align: left;
}
.keyboard-shortcuts td:first-child {
font-weight: bold;
width: 40%;
}
.key {
display: inline-block;
background: var(--bg-color);
border: 1px solid var(--border-color);
border-radius: 3px;
padding: 1px 5px;
font-size: 0.8em;
box-shadow: 0 1px 2px rgba(0, 0, 0, 0.08);
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,100 @@
/* Model Description Styling */
.model-description-container {
background: var(--lora-surface);
border-radius: var(--border-radius-sm);
overflow: hidden;
min-height: 200px;
position: relative;
/* Remove the max-height and overflow-y to allow content to expand naturally */
}
.model-description-loading {
display: flex;
align-items: center;
justify-content: center;
padding: var(--space-3);
color: var(--text-color);
opacity: 0.7;
font-size: 0.9em;
}
.model-description-loading .fa-spinner {
margin-right: var(--space-1);
}
.model-description-content {
padding: var(--space-2);
line-height: 1.5;
overflow-wrap: break-word;
font-size: 0.95em;
}
.model-description-content h1,
.model-description-content h2,
.model-description-content h3,
.model-description-content h4,
.model-description-content h5,
.model-description-content h6 {
margin-top: 1em;
margin-bottom: 0.5em;
font-weight: 600;
}
.model-description-content p {
margin-bottom: 1em;
}
.model-description-content img {
max-width: 100%;
height: auto;
border-radius: var(--border-radius-xs);
display: block;
margin: 1em 0;
}
.model-description-content pre {
background: rgba(0, 0, 0, 0.05);
border-radius: var(--border-radius-xs);
padding: var(--space-1);
white-space: pre-wrap;
margin: 1em 0;
overflow-x: auto;
}
.model-description-content code {
font-family: monospace;
font-size: 0.9em;
background: rgba(0, 0, 0, 0.05);
padding: 0.1em 0.3em;
border-radius: 3px;
}
.model-description-content pre code {
background: transparent;
padding: 0;
}
.model-description-content ul,
.model-description-content ol {
margin-left: 1.5em;
margin-bottom: 1em;
}
.model-description-content li {
margin-bottom: 0.5em;
}
.model-description-content blockquote {
border-left: 3px solid var(--lora-accent);
padding-left: 1em;
margin-left: 0;
margin-right: 0;
font-style: italic;
opacity: 0.8;
}
/* Adjust dark mode for model description */
[data-theme="dark"] .model-description-content pre,
[data-theme="dark"] .model-description-content code {
background: rgba(255, 255, 255, 0.05);
}

View File

@@ -0,0 +1,489 @@
/* Lora Modal Header */
.modal-header {
display: flex;
flex-direction: column;
justify-content: flex-start;
align-items: flex-start;
margin-bottom: var(--space-3);
padding-bottom: var(--space-2);
border-bottom: 1px solid var(--lora-border);
}
/* Info Grid */
.info-grid {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: var(--space-2);
margin-bottom: var(--space-3);
}
.info-item {
padding: var(--space-2);
background: rgba(0, 0, 0, 0.03);
border: 1px solid rgba(0, 0, 0, 0.1);
border-radius: var(--border-radius-sm);
}
/* Adjust styles for the dark theme */
[data-theme="dark"] .info-item {
background: rgba(255, 255, 255, 0.03);
border: 1px solid var(--lora-border);
}
.info-item.full-width {
grid-column: 1 / -1;
}
.info-item label {
display: block;
font-size: 0.85em;
color: var(--text-color);
opacity: 0.8;
margin-bottom: 4px;
}
.info-item span {
color: var(--text-color);
word-break: break-word;
}
.info-item.usage-tips,
.info-item.notes {
grid-column: 1 / -1 !important; /* Make notes section full width */
}
/* Add specific styles for notes content */
.info-item.notes .editable-field [contenteditable] {
min-height: 60px; /* Increase height for multiple lines */
max-height: 150px; /* Limit maximum height */
overflow-y: auto; /* Add scrolling for long content */
white-space: pre-wrap; /* Preserve line breaks */
line-height: 1.5; /* Improve readability */
padding: 8px 12px; /* Slightly increase padding */
}
.file-path {
font-family: monospace;
font-size: 0.9em;
}
.description-text {
line-height: 1.5;
max-height: 100px;
overflow-y: auto;
}
/* Editable Fields */
.editable-field {
position: relative;
display: flex;
gap: 8px;
align-items: flex-start;
}
.editable-field [contenteditable] {
flex: 1;
min-height: 24px;
padding: 4px 8px;
background: var(--bg-color);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-xs);
font-size: 0.9em;
line-height: 1.4;
color: var(--text-color);
transition: border-color 0.2s;
word-break: break-word;
}
.editable-field [contenteditable]:focus {
outline: none;
border-color: var(--lora-accent);
background: var(--bg-color);
}
.editable-field [contenteditable]:empty::before {
content: attr(data-placeholder);
color: var(--text-color);
opacity: 0.5;
}
.notes-hint {
font-size: 0.8em;
color: var(--text-color);
opacity: 0.7;
margin-left: 5px;
cursor: help;
position: relative; /* Add positioning context */
}
@media (max-width: 640px) {
.info-item.usage-tips,
.info-item.notes {
grid-column: 1 / -1;
}
}
/* Adjust the back-to-top button so it stays pinned inside the modal */
.modal-content .back-to-top {
position: sticky; /* Use sticky positioning */
float: right; /* Float keeps the button on the right side */
bottom: 20px; /* Distance from the bottom edge */
margin-right: 20px; /* Right-hand spacing */
margin-top: -56px; /* Negative margin so the button takes no extra vertical space */
width: 36px;
height: 36px;
border-radius: 50%;
background: var(--card-bg);
border: 1px solid var(--border-color);
color: var(--text-color);
display: flex;
align-items: center;
justify-content: center;
cursor: pointer;
opacity: 0;
visibility: hidden;
transform: translateY(10px);
transition: all 0.3s ease;
z-index: 10;
}
.modal-content .back-to-top.visible {
opacity: 1;
visibility: visible;
transform: translateY(0);
}
.modal-content .back-to-top:hover {
background: var(--lora-accent);
color: white;
transform: translateY(-2px);
}
/* File name copy styles */
.file-name-wrapper {
display: flex;
align-items: center;
gap: 8px;
padding: 4px;
border-radius: var(--border-radius-xs);
transition: background-color 0.2s;
position: relative;
}
.file-name-content {
padding: 2px 4px;
border-radius: var(--border-radius-xs);
border: 1px solid transparent;
flex: 1;
}
.file-name-wrapper.editing .file-name-content {
border: 1px solid var(--lora-accent);
background: var(--bg-color);
outline: none;
}
.edit-file-name-btn {
background: transparent;
border: none;
color: var(--text-color);
opacity: 0;
cursor: pointer;
padding: 2px 5px;
border-radius: var(--border-radius-xs);
transition: all 0.2s ease;
margin-left: var(--space-1);
}
.edit-file-name-btn.visible,
.file-name-wrapper:hover .edit-file-name-btn {
opacity: 0.5;
}
.edit-file-name-btn:hover {
opacity: 0.8 !important;
background: rgba(0, 0, 0, 0.05);
}
[data-theme="dark"] .edit-file-name-btn:hover {
background: rgba(255, 255, 255, 0.05);
}
/* Base Model and Size combined styles */
.info-item.base-size {
display: flex;
gap: var(--space-3);
}
.base-wrapper {
flex: 2; /* Give the base model more space */
}
/* Base model display and editing styles */
.base-model-display {
display: flex;
align-items: center;
position: relative;
}
.base-model-content {
padding: 2px 4px;
border-radius: var(--border-radius-xs);
border: 1px solid transparent;
color: var(--text-color);
flex: 1;
}
.edit-base-model-btn {
background: transparent;
border: none;
color: var(--text-color);
opacity: 0;
cursor: pointer;
padding: 2px 5px;
border-radius: var(--border-radius-xs);
transition: all 0.2s ease;
margin-left: var(--space-1);
}
.edit-base-model-btn.visible,
.base-model-display:hover .edit-base-model-btn {
opacity: 0.5;
}
.edit-base-model-btn:hover {
opacity: 0.8 !important;
background: rgba(0, 0, 0, 0.05);
}
[data-theme="dark"] .edit-base-model-btn:hover {
background: rgba(255, 255, 255, 0.05);
}
.base-model-selector {
width: 100%;
padding: 3px 5px;
background: var(--bg-color);
border: 1px solid var(--lora-accent);
border-radius: var(--border-radius-xs);
color: var(--text-color);
font-size: 0.9em;
outline: none;
margin-right: var(--space-1);
}
.size-wrapper {
flex: 1;
border-left: 1px solid var(--lora-border);
padding-left: var(--space-3);
}
.base-wrapper label,
.size-wrapper label {
display: block;
margin-bottom: 4px;
}
.size-wrapper span {
font-family: monospace;
font-size: 0.9em;
opacity: 0.9;
}
/* New Model Name Header Styles */
.model-name-header {
display: flex;
align-items: center;
width: calc(100% - 40px); /* Avoid overlap with close button */
position: relative;
}
.model-name-content {
margin: 0;
padding: var(--space-1);
border-radius: var(--border-radius-xs);
font-size: 1.5em !important;
font-weight: 600;
line-height: 1.2;
color: var(--text-color);
border: 1px solid transparent;
outline: none;
flex: 1;
}
.model-name-content:focus {
border: 1px solid var(--lora-accent);
background: var(--bg-color);
}
.edit-model-name-btn {
background: transparent;
border: none;
color: var(--text-color);
opacity: 0;
cursor: pointer;
padding: 2px 5px;
border-radius: var(--border-radius-xs);
transition: all 0.2s ease;
margin-left: var(--space-1);
}
.edit-model-name-btn.visible,
.model-name-header:hover .edit-model-name-btn {
opacity: 0.5;
}
.edit-model-name-btn:hover {
opacity: 0.8 !important;
background: rgba(0, 0, 0, 0.05);
}
[data-theme="dark"] .edit-model-name-btn:hover {
background: rgba(255, 255, 255, 0.05);
}
/* Tab System Styling */
.showcase-tabs {
display: flex;
border-bottom: 1px solid var(--lora-border);
margin-bottom: var(--space-2);
position: relative;
z-index: 2;
}
.tab-btn {
padding: var(--space-1) var(--space-2);
background: transparent;
border: none;
border-bottom: 2px solid transparent;
color: var(--text-color);
cursor: pointer;
font-size: 0.95em;
transition: all 0.2s;
opacity: 0.7;
position: relative;
}
.tab-btn:hover {
opacity: 1;
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.05);
}
.tab-btn.active {
border-bottom: 2px solid var(--lora-accent);
opacity: 1;
font-weight: 600;
}
.tab-content {
position: relative;
min-height: 100px;
}
.tab-pane {
display: none;
}
.tab-pane.active {
display: block;
}
.view-all-btn {
display: flex;
align-items: center;
gap: 5px;
padding: 6px 12px;
background-color: var(--lora-accent);
color: var(--lora-text);
border: none;
border-radius: var(--border-radius-sm);
cursor: pointer;
transition: background-color 0.2s;
font-size: 13px;
}
.view-all-btn:hover {
opacity: 0.9;
}
/* Loading, error and empty states */
.recipes-loading,
.recipes-error,
.recipes-empty {
display: flex;
flex-direction: column;
align-items: center;
justify-content: center;
padding: 40px;
text-align: center;
min-height: 200px;
}
.recipes-loading i,
.recipes-error i,
.recipes-empty i {
font-size: 32px;
margin-bottom: 15px;
color: var(--lora-accent);
}
.recipes-error i {
color: var(--lora-error);
}
/* Creator Information Styles */
.creator-info {
display: flex;
align-items: center;
gap: 10px;
margin-bottom: var(--space-1);
padding: 6px 10px;
background: rgba(0, 0, 0, 0.03);
border: 1px solid rgba(0, 0, 0, 0.1);
border-radius: var(--border-radius-sm);
max-width: fit-content;
}
[data-theme="dark"] .creator-info {
background: rgba(255, 255, 255, 0.03);
border: 1px solid var(--lora-border);
}
.creator-avatar {
width: 28px;
height: 28px;
border-radius: 50%;
overflow: hidden;
flex-shrink: 0;
display: flex;
align-items: center;
justify-content: center;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
}
.creator-avatar img {
width: 100%;
height: 100%;
object-fit: cover;
}
.creator-placeholder {
background: var(--lora-accent);
color: white;
display: flex;
align-items: center;
justify-content: center;
}
.creator-username {
font-size: 0.9em;
font-weight: 500;
color: var(--text-color);
}
/* Optional: add hover effect for creator info */
.creator-info:hover {
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
border-color: var(--lora-accent);
}

View File

@@ -0,0 +1,68 @@
/* Update Preset Controls styles */
.preset-controls {
display: flex;
gap: var(--space-2);
margin-bottom: var(--space-2);
}
.preset-controls select,
.preset-controls input {
padding: var(--space-1);
background: var(--bg-color);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-xs);
color: var(--text-color);
}
.preset-tags {
display: flex;
flex-wrap: wrap;
gap: var(--space-1);
}
.preset-tag {
display: flex;
align-items: center;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-xs);
padding: calc(var(--space-1) * 0.5) var(--space-1);
gap: var(--space-1);
transition: all 0.2s ease;
}
.preset-tag span {
color: var(--lora-accent);
font-size: 0.9em;
}
.preset-tag i {
color: var(--text-color);
opacity: 0.5;
cursor: pointer;
transition: all 0.2s ease;
}
.preset-tag:hover {
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
border-color: var(--lora-accent);
}
.preset-tag i:hover {
color: var(--lora-error);
opacity: 1;
}
.add-preset-btn {
padding: calc(var(--space-1) * 0.5) var(--space-2);
background: var(--lora-accent);
color: var(--lora-text);
border: none;
border-radius: var(--border-radius-xs);
cursor: pointer;
transition: opacity 0.2s;
}
.add-preset-btn:hover {
opacity: 0.9;
}

View File

@@ -0,0 +1,478 @@
/* Showcase Section */
.showcase-section {
position: relative;
margin-top: var(--space-4);
}
.carousel {
transition: max-height 0.3s ease-in-out;
overflow: hidden;
}
.carousel.collapsed {
max-height: 0;
}
.carousel-container {
display: flex;
flex-direction: column;
gap: var(--space-2);
}
.media-wrapper {
position: relative;
width: 100%;
background: var(--lora-surface);
margin-bottom: var(--space-2);
overflow: hidden; /* Ensure metadata panel is contained */
}
.media-wrapper:last-child {
margin-bottom: 0;
}
.media-wrapper img,
.media-wrapper video {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
object-fit: contain;
}
.no-examples {
text-align: center;
padding: var(--space-3);
color: var(--text-color);
opacity: 0.7;
}
/* Adjust the media wrapper for tab system */
#showcase-tab .carousel-container {
margin-top: var(--space-2);
}
/* Add styles for blurred showcase content */
.nsfw-media-wrapper {
position: relative;
}
.media-wrapper img.blurred,
.media-wrapper video.blurred {
filter: blur(25px);
}
.media-wrapper .nsfw-overlay {
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
display: flex;
align-items: center;
justify-content: center;
z-index: 2;
pointer-events: none;
}
/* Position the toggle button at the top left of showcase media */
.showcase-toggle-btn {
position: absolute;
z-index: 3;
}
/* Add styles for showcase media controls */
.media-controls {
position: absolute;
display: flex;
gap: 6px;
z-index: 4;
opacity: 0;
transform: translateY(-5px);
transition: opacity 0.2s ease, transform 0.2s ease;
pointer-events: none;
}
.media-controls.visible {
opacity: 1;
transform: translateY(0);
pointer-events: auto;
}
.media-control-btn {
width: 28px;
height: 28px;
border-radius: 50%;
background: var(--bg-color);
border: 1px solid var(--border-color);
color: var(--text-color);
display: flex;
align-items: center;
justify-content: center;
cursor: pointer;
transition: all 0.2s ease;
box-shadow: 0 2px 5px rgba(0, 0, 0, 0.15);
padding: 0;
position: relative;
overflow: hidden;
}
.media-control-btn:hover {
transform: translateY(-2px);
box-shadow: 0 3px 7px rgba(0, 0, 0, 0.2);
}
.media-control-btn.set-preview-btn:hover {
background: var(--lora-accent);
color: white;
border-color: var(--lora-accent);
}
.media-control-btn.example-delete-btn:hover:not(.disabled) {
background: var(--lora-error);
color: white;
border-color: var(--lora-error);
}
/* Disabled state for delete button */
.media-control-btn.example-delete-btn.disabled {
opacity: 0.5;
cursor: not-allowed;
}
/* Two-step confirmation for delete button */
.media-control-btn.example-delete-btn .confirm-icon {
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
display: flex;
align-items: center;
justify-content: center;
background: var(--lora-error);
color: white;
font-size: 1em;
opacity: 0;
transition: opacity 0.2s ease;
}
.media-control-btn.example-delete-btn.confirm .fa-trash-alt {
opacity: 0;
}
.media-control-btn.example-delete-btn.confirm .confirm-icon {
opacity: 1;
}
.media-control-btn.example-delete-btn.confirm {
background: var(--lora-error);
color: white;
border-color: var(--lora-error);
}
@keyframes pulse {
0% {
box-shadow: 0 0 0 0 rgba(220, 53, 69, 0.7);
}
70% {
box-shadow: 0 0 0 5px rgba(220, 53, 69, 0);
}
100% {
box-shadow: 0 0 0 0 rgba(220, 53, 69, 0);
}
}
/* Image Metadata Panel Styles */
.image-metadata-panel {
position: absolute;
bottom: 0;
left: 0;
right: 0;
background: var(--bg-color);
border-top: 1px solid var(--border-color);
padding: var(--space-2);
transform: translateY(100%);
transition: transform 0.3s cubic-bezier(0.175, 0.885, 0.32, 1.275), opacity 0.25s ease;
z-index: 5;
max-height: 50%; /* Reduced to take less space */
overflow-y: auto;
box-shadow: 0 -2px 8px rgba(0, 0, 0, 0.1);
opacity: 0;
pointer-events: none;
}
/* Show metadata panel only when the 'visible' class is added */
.media-wrapper .image-metadata-panel.visible {
transform: translateY(0);
opacity: 0.98;
pointer-events: auto;
}
/* Adjust to dark theme */
[data-theme="dark"] .image-metadata-panel {
background: var(--card-bg);
box-shadow: 0 -2px 8px rgba(0, 0, 0, 0.3);
}
.metadata-content {
display: flex;
flex-direction: column;
gap: 10px;
}
/* Styling for parameters tags */
.params-tags {
display: flex;
flex-wrap: wrap;
gap: 6px;
margin-bottom: var(--space-1);
padding-bottom: var(--space-1);
border-bottom: 1px solid var(--lora-border);
}
.param-tag {
display: inline-flex;
align-items: center;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-xs);
padding: 2px 6px;
font-size: 0.8em;
line-height: 1.2;
white-space: nowrap;
}
.param-tag .param-name {
font-weight: 600;
color: var(--text-color);
margin-right: 4px;
opacity: 0.8;
}
.param-tag .param-value {
color: var(--lora-accent);
}
/* Special styling for prompt row */
.metadata-row.prompt-row {
flex-direction: column;
padding-top: 0;
}
.metadata-row.prompt-row + .metadata-row.prompt-row {
margin-top: var(--space-2);
}
.metadata-label {
font-weight: 600;
color: var(--text-color);
opacity: 0.8;
font-size: 0.85em;
display: block;
margin-bottom: 4px;
}
.metadata-prompt-wrapper {
position: relative;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-xs);
padding: 6px 30px 6px 8px;
margin-top: 2px;
max-height: 80px; /* Reduced from 120px */
overflow-y: auto;
word-break: break-word;
width: 100%;
box-sizing: border-box;
}
.metadata-prompt {
color: var(--text-color);
font-family: monospace;
font-size: 0.85em;
white-space: pre-wrap;
}
.copy-prompt-btn {
position: absolute;
top: 6px;
right: 6px;
background: transparent;
border: none;
color: var(--text-color);
opacity: 0.6;
cursor: pointer;
padding: 3px;
transition: all 0.2s ease;
}
.copy-prompt-btn:hover {
opacity: 1;
color: var(--lora-accent);
}
/* Scrollbar styling for metadata panel */
.image-metadata-panel::-webkit-scrollbar {
width: 6px;
}
.image-metadata-panel::-webkit-scrollbar-track {
background: transparent;
}
.image-metadata-panel::-webkit-scrollbar-thumb {
background-color: var(--border-color);
border-radius: 3px;
}
/* For Firefox */
.image-metadata-panel {
scrollbar-width: thin;
scrollbar-color: var(--border-color) transparent;
}
/* No metadata message styling */
.no-metadata-message {
display: flex;
align-items: center;
justify-content: center;
padding: var(--space-2);
color: var(--text-color);
opacity: 0.7;
text-align: center;
font-style: italic;
gap: 8px;
}
.no-metadata-message i {
font-size: 1.1em;
color: var(--lora-accent);
opacity: 0.8;
}
/* Scroll Indicator */
.scroll-indicator {
cursor: pointer;
padding: var(--space-2);
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
display: flex;
align-items: center;
justify-content: center;
gap: 8px;
margin-bottom: var(--space-2);
transition: background-color 0.2s, transform 0.2s;
}
.scroll-indicator:hover {
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
transform: translateY(-1px);
}
.scroll-indicator span {
font-size: 0.9em;
color: var(--text-color);
}
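/* Lazy-loaded images start hidden and fade in once the loader sets a src attribute */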
.lazy {
opacity: 0;
transition: opacity 0.3s;
}
.lazy[src] {
opacity: 1;
}
/* Example Import Area */
.example-import-area {
margin-top: var(--space-4);
padding: var(--space-2);
}
.example-import-area.empty {
margin-top: var(--space-2);
padding: var(--space-4) var(--space-2);
}
.import-container {
border: 2px dashed var(--border-color);
border-radius: var(--border-radius-sm);
padding: var(--space-4);
text-align: center;
transition: all 0.3s ease;
background: var(--lora-surface);
cursor: pointer;
}
.import-container.highlight {
border-color: var(--lora-accent);
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
transform: scale(1.01);
}
.import-placeholder {
display: flex;
flex-direction: column;
align-items: center;
gap: var(--space-1);
padding-top: var(--space-1);
}
.import-placeholder i {
font-size: 2.5rem;
/* color: var(--lora-accent); */
opacity: 0.8;
margin-bottom: var(--space-1);
}
.import-placeholder h3 {
margin: 0 0 var(--space-1);
font-size: 1.2rem;
font-weight: 500;
color: var(--text-color);
}
.import-placeholder p {
margin: var(--space-1) 0;
color: var(--text-color);
opacity: 0.8;
}
.import-placeholder .sub-text {
font-size: 0.9em;
opacity: 0.6;
margin: var(--space-1) 0;
}
.import-formats {
font-size: 0.8em !important;
opacity: 0.6 !important;
margin-top: var(--space-2) !important;
}
.select-files-btn {
background: var(--lora-accent);
color: var(--lora-text);
border: none;
border-radius: var(--border-radius-xs);
padding: var(--space-2) var(--space-3);
cursor: pointer;
font-size: 0.9em;
display: flex;
align-items: center;
gap: 8px;
transition: all 0.2s;
}
.select-files-btn:hover {
opacity: 0.9;
transform: translateY(-1px);
}
/* For dark theme */
[data-theme="dark"] .import-container {
background: rgba(255, 255, 255, 0.03);
}


@@ -0,0 +1,148 @@
/* Model Tags styles */
.model-tags {
display: none;
}
.model-tag {
display: none;
}
/* Updated Model Tags styles - improved visibility in light theme */
.model-tags-container {
position: relative;
}
.model-tags-compact {
display: flex;
flex-wrap: nowrap;
gap: 6px;
align-items: center;
}
.model-tag-compact {
/* Updated styles to match info-item appearance */
background: rgba(0, 0, 0, 0.03);
border: 1px solid rgba(0, 0, 0, 0.1);
border-radius: var(--border-radius-xs);
padding: 2px 8px;
font-size: 0.75em;
color: var(--text-color);
white-space: nowrap;
}
/* Style for empty tags placeholder */
.model-tag-empty {
background: rgba(0, 0, 0, 0.02);
border: 1px dashed rgba(0, 0, 0, 0.1);
border-radius: var(--border-radius-xs);
padding: 2px 8px;
font-size: 0.75em;
color: var(--text-color);
white-space: nowrap;
opacity: 0.7;
font-style: italic;
}
/* Adjust dark theme tag styles */
[data-theme="dark"] .model-tag-compact {
background: rgba(255, 255, 255, 0.03);
border: 1px solid var(--lora-border);
}
/* Dark theme for empty tags */
[data-theme="dark"] .model-tag-empty {
background: rgba(255, 255, 255, 0.02);
border: 1px dashed var(--lora-border);
}
.model-tag-more {
background: var(--lora-accent);
color: var(--lora-text);
border-radius: var(--border-radius-xs);
padding: 2px 8px;
font-size: 0.75em;
cursor: pointer;
white-space: nowrap;
font-weight: 500;
}
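/* Tooltip anchored just below the tag row; .model-tags-container (position: relative) provides the positioning context */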
.model-tags-tooltip {
position: absolute;
top: calc(100% + 8px);
left: 0;
background: var(--card-bg);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-sm);
box-shadow: 0 3px 8px rgba(0, 0, 0, 0.15);
padding: 10px 14px;
max-width: 400px;
z-index: 10;
opacity: 0;
visibility: hidden;
transform: translateY(-4px);
transition: all 0.2s ease;
pointer-events: none;
}
.model-tags-tooltip.visible {
opacity: 1;
visibility: visible;
transform: translateY(0);
pointer-events: auto;
}
.tooltip-content {
display: flex;
flex-wrap: wrap;
gap: 6px;
max-height: 200px;
overflow-y: auto;
}
.tooltip-tag {
/* Updated styles to match info-item appearance */
background: rgba(0, 0, 0, 0.03);
border: 1px solid rgba(0, 0, 0, 0.1);
border-radius: var(--border-radius-xs);
padding: 3px 8px;
font-size: 0.75em;
color: var(--text-color);
}
/* Adjust dark theme tooltip tag styles */
[data-theme="dark"] .tooltip-tag {
background: rgba(255, 255, 255, 0.03);
border: 1px solid var(--lora-border);
}
/* Model Tags Edit Mode */
.model-tags-header {
display: flex;
justify-content: space-between;
align-items: center;
}
.edit-tags-btn {
background: transparent;
border: none;
color: var(--text-color);
opacity: 0;
cursor: pointer;
padding: 2px 5px;
border-radius: var(--border-radius-xs);
transition: all 0.2s ease;
margin-left: var(--space-1);
}
.edit-tags-btn.visible,
.model-tags-container:hover .edit-tags-btn {
opacity: 0.5;
}
/* Edit mode active state */
.model-tags-container.edit-mode {
width: 100%;
display: block;
flex-basis: 100%;
grid-column: 1 / -1;
}


@@ -0,0 +1,112 @@
/* Update Trigger Words styles */
.info-item.trigger-words {
padding: var(--space-2);
background: rgba(0, 0, 0, 0.03);
border: 1px solid rgba(0, 0, 0, 0.1);
border-radius: var(--border-radius-sm);
}
/* Dark theme adjustments for trigger words */
[data-theme="dark"] .info-item.trigger-words {
background: rgba(255, 255, 255, 0.03);
border: 1px solid var(--lora-border);
}
/* New header style for trigger words */
.trigger-words-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 6px;
}
.trigger-words-content {
margin-bottom: var(--space-1);
}
.trigger-words-tags {
display: flex;
flex-wrap: wrap;
gap: 8px;
align-items: flex-start;
}
/* No trigger words message */
.no-trigger-words {
color: var(--text-color);
opacity: 0.7;
font-style: italic;
font-size: 0.9em;
}
/* Trigger word tags in display mode */
.trigger-word-tag {
display: inline-flex;
align-items: center;
background: var(--bg-color);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-xs);
padding: 4px 8px;
cursor: pointer;
transition: all 0.2s ease;
gap: 6px;
position: relative;
}
.trigger-word-content {
color: var(--lora-accent) !important;
font-size: 0.85em;
line-height: 1.4;
word-break: break-word;
}
.trigger-word-tag:hover {
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1);
border-color: var(--lora-accent);
}
.trigger-word-copy {
display: flex;
align-items: center;
color: var(--text-color);
opacity: 0.5;
flex-shrink: 0;
transition: opacity 0.2s;
}
.trained-word-freq {
color: var(--text-color);
font-size: 0.75em;
background: rgba(0, 0, 0, 0.05);
border-radius: 10px;
min-width: 20px;
padding: 1px 5px;
text-align: center;
line-height: 1.2;
}
[data-theme="dark"] .trained-word-freq {
background: rgba(255, 255, 255, 0.05);
}
/* Class tokens styling */
.class-tokens-container {
padding: 10px;
display: flex;
flex-wrap: wrap;
gap: 8px;
}
.class-token-item {
background: oklch(var(--lora-accent-l) var(--lora-accent-c) var(--lora-accent-h) / 0.1) !important;
border: 1px solid var(--lora-accent) !important;
}
.token-badge {
background: var(--lora-accent);
color: white;
font-size: 0.7em;
padding: 2px 5px;
border-radius: 8px;
white-space: nowrap;
}


@@ -39,4 +39,182 @@
.context-menu-item i {
width: 16px;
text-align: center;
}
/* NSFW Level Selector */
.nsfw-level-selector {
position: fixed;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
background: var(--card-bg);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-base);
padding: 16px;
box-shadow: 0 4px 20px rgba(0, 0, 0, 0.2);
z-index: var(--z-modal);
width: 300px;
display: none;
}
.nsfw-level-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 16px;
}
.nsfw-level-header h3 {
margin: 0;
font-size: 16px;
font-weight: 500;
}
.close-nsfw-selector {
background: transparent;
border: none;
color: var(--text-color);
cursor: pointer;
padding: 4px;
border-radius: var(--border-radius-xs);
}
.close-nsfw-selector:hover {
background: var(--border-color);
}
.current-level {
margin-bottom: 12px;
padding: 8px;
background: var(--bg-color);
border-radius: var(--border-radius-xs);
border: 1px solid var(--border-color);
}
.nsfw-level-options {
display: flex;
flex-wrap: wrap;
gap: 8px;
}
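/* flex-basis of calc(33% - 8px) plus the 8px gap lays the buttons out roughly three per row */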
.nsfw-level-btn {
flex: 1 0 calc(33% - 8px);
padding: 8px;
border-radius: var(--border-radius-xs);
background: var(--bg-color);
border: 1px solid var(--border-color);
color: var(--text-color);
cursor: pointer;
transition: all 0.2s ease;
}
.nsfw-level-btn:hover {
background: var(--lora-border);
}
.nsfw-level-btn.active {
background: var(--lora-accent);
color: white;
border-color: var(--lora-accent);
}
/* Node Selector */
.node-selector {
position: fixed;
background: var(--lora-surface);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-xs);
padding: 4px 0;
min-width: 200px;
max-width: 350px;
max-height: 400px;
overflow-y: auto;
box-shadow: 0 2px 10px rgba(0, 0, 0, 0.2);
z-index: 1000;
display: none;
backdrop-filter: blur(10px);
}
.node-item {
padding: 10px 15px;
cursor: pointer;
display: flex;
align-items: center;
gap: 10px;
color: var(--text-color);
background: var(--lora-surface);
transition: background-color 0.2s;
border-bottom: 1px solid var(--border-color);
}
.node-item:last-child {
border-bottom: none;
}
.node-item:hover {
background-color: var(--lora-accent);
color: var(--lora-text);
}
.node-icon-indicator {
width: 24px;
height: 24px;
border-radius: 4px;
display: flex;
align-items: center;
justify-content: center;
flex-shrink: 0;
}
.node-icon-indicator i {
color: white;
font-size: 12px;
text-shadow: 0 1px 2px rgba(0, 0, 0, 0.3);
}
.node-icon-indicator.all-nodes {
background: linear-gradient(45deg, #4a90e2, #357abd);
}
/* Remove old node-color-indicator styles */
.node-color-indicator {
display: none;
}
.send-all-item {
border-top: 1px solid var(--border-color);
font-weight: 500;
background: var(--card-bg);
}
.send-all-item:hover {
background-color: var(--lora-accent);
color: var(--lora-text);
}
.send-all-item i {
width: 16px;
text-align: center;
}
/* Node Selector Header */
.node-selector-header {
padding: 10px 15px;
border-bottom: 1px solid var(--border-color);
background: var(--card-bg);
display: flex;
flex-direction: column;
gap: 4px;
}
.selector-action-type {
font-weight: 600;
font-size: 14px;
color: var(--lora-accent);
}
.selector-instruction {
font-size: 12px;
color: var(--text-muted);
font-style: italic;
}


@@ -44,26 +44,12 @@ body.modal-open {
}
/* Delete Modal specific styles */
.delete-modal-content {
max-width: 500px;
text-align: center;
}
.delete-message {
color: var(--text-color);
margin: var(--space-2) 0;
}
.delete-model-info {
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
padding: var(--space-2);
margin: var(--space-2) 0;
color: var(--text-color);
word-break: break-all;
}
/* Update delete modal styles */
.delete-modal {
display: none; /* Set initial display to none */
@@ -92,7 +78,8 @@ body.modal-open {
animation: modalFadeIn 0.2s ease-out;
}
.delete-model-info,
.exclude-model-info {
/* Update info display styling */
background: var(--lora-surface);
border: 1px solid var(--lora-border);
@@ -123,7 +110,7 @@ body.modal-open {
margin-top: var(--space-3);
}
.cancel-btn, .delete-btn, .exclude-btn, .confirm-btn {
padding: 8px var(--space-2);
border-radius: 6px;
border: none;
@@ -143,6 +130,12 @@ body.modal-open {
color: white;
}
/* Style for exclude button - different from delete button */
.exclude-btn, .confirm-btn {
background: var(--lora-accent, #4f46e5);
color: white;
}
.cancel-btn:hover {
background: var(--lora-border);
}
@@ -151,9 +144,14 @@ body.modal-open {
opacity: 0.9;
}
.exclude-btn:hover, .confirm-btn:hover {
opacity: 0.9;
background: oklch(from var(--lora-accent, #4f46e5) l c h / 85%);
}
.modal-content h2 {
color: var(--text-color);
margin-bottom: var(--space-1);
font-size: 1.5em;
}
@@ -496,6 +494,114 @@ input:checked + .toggle-slider:before {
filter: blur(8px);
}
/* Example Images Settings Styles */
.download-buttons {
justify-content: flex-start;
gap: var(--space-2);
}
.primary-btn {
display: flex;
align-items: center;
gap: 8px;
padding: 8px 16px;
background-color: var(--lora-accent);
color: var(--lora-text);
border: none;
border-radius: var(--border-radius-sm);
cursor: pointer;
transition: background-color 0.2s;
font-size: 0.95em;
}
.primary-btn:hover {
background-color: oklch(from var(--lora-accent) l c h / 85%);
color: var(--lora-text);
}
/* Secondary button styles */
.secondary-btn {
display: flex;
align-items: center;
gap: 8px;
padding: 8px 16px;
background-color: var(--card-bg);
color: var(--text-color);
border: 1px solid var(--border-color);
border-radius: var(--border-radius-sm);
cursor: pointer;
transition: all 0.2s;
font-size: 0.95em;
}
.secondary-btn:hover {
background-color: var(--border-color);
color: var(--text-color);
}
/* Disabled button styles */
.primary-btn.disabled {
opacity: 0.5;
cursor: not-allowed;
background-color: var(--lora-accent);
color: var(--lora-text);
pointer-events: none;
}
.secondary-btn.disabled {
opacity: 0.5;
cursor: not-allowed;
pointer-events: none;
}
.restart-required-icon {
color: var(--lora-warning);
margin-left: 5px;
font-size: 0.85em;
vertical-align: text-bottom;
}
/* Dark theme specific button adjustments */
[data-theme="dark"] .primary-btn:hover {
background-color: oklch(from var(--lora-accent) l c h / 75%);
}
[data-theme="dark"] .secondary-btn {
background-color: var(--lora-surface);
}
[data-theme="dark"] .secondary-btn:hover {
background-color: oklch(35% 0.02 256 / 0.98);
}
.path-control {
display: flex;
gap: 8px;
align-items: center;
width: 100%;
}
.path-control input[type="text"] {
flex: 1;
padding: 6px 10px;
border-radius: var(--border-radius-xs);
border: 1px solid var(--border-color);
background-color: var(--lora-surface);
color: var(--text-color);
font-size: 0.95em;
height: 32px;
}
/* Add styles for delete preview image */
.delete-preview {
max-width: 150px;
@@ -573,4 +679,406 @@ input:checked + .toggle-slider:before {
.changelog-item a:hover {
text-decoration: underline;
}
/* Add warning text style for settings */
.warning-text {
color: var(--lora-warning, #e67e22);
font-weight: 500;
}
[data-theme="dark"] .warning-text {
color: var(--lora-warning, #f39c12);
}
/* Add styles for density description list */
.density-description {
margin: 8px 0;
padding-left: 20px;
font-size: 0.9em;
}
.density-description li {
margin-bottom: 4px;
}
/* Help Modal styles */
.help-modal {
max-width: 850px;
}
.help-header {
display: flex;
align-items: center;
margin-bottom: var(--space-2);
}
.modal-help-icon {
font-size: 24px;
color: var(--lora-accent);
margin-right: var(--space-2);
vertical-align: text-bottom;
}
/* Tab navigation styles */
.help-tabs {
display: flex;
border-bottom: 1px solid var(--lora-border);
margin-bottom: var(--space-2);
gap: 8px;
}
.tab-btn {
padding: 8px 16px;
background: transparent;
border: none;
border-bottom: 2px solid transparent;
color: var(--text-color);
cursor: pointer;
font-weight: 500;
transition: all 0.2s;
opacity: 0.7;
}
.tab-btn:hover {
background-color: rgba(0, 0, 0, 0.05);
opacity: 0.9;
}
.tab-btn.active {
color: var(--lora-accent);
border-bottom: 2px solid var(--lora-accent);
opacity: 1;
}
/* Tab content styles */
.help-content {
padding: var(--space-1) 0;
overflow-y: auto;
}
.tab-pane {
display: none;
}
.tab-pane.active {
display: block;
}
.help-text {
margin: var(--space-2) 0;
}
.help-text ul {
padding-left: 20px;
margin-top: 8px;
}
.help-text li {
margin-bottom: 8px;
}
/* Documentation link styles */
.docs-section {
margin-bottom: var(--space-3);
}
.docs-section h4 {
display: flex;
align-items: center;
gap: 8px;
margin-bottom: var(--space-1);
}
.docs-links {
list-style-type: none;
padding-left: var(--space-3);
}
.docs-links li {
margin-bottom: var(--space-1);
position: relative;
}
.docs-links li:before {
content: "•";
position: absolute;
left: -15px;
color: var(--lora-accent);
}
.docs-links a {
color: var(--lora-accent);
text-decoration: none;
transition: color 0.2s;
}
.docs-links a:hover {
text-decoration: underline;
}
/* Update video list styles */
.video-list {
display: flex;
flex-direction: column;
gap: var(--space-3);
}
.video-item {
display: flex;
flex-direction: column;
}
.video-info {
padding: var(--space-1);
}
.video-info h4 {
margin-bottom: var(--space-1);
}
.video-info p {
font-size: 0.9em;
opacity: 0.8;
}
/* Dark theme adjustments */
[data-theme="dark"] .tab-btn:hover {
background-color: rgba(255, 255, 255, 0.05);
}
/* Update date badge styles */
.update-date-badge {
display: inline-flex;
align-items: center;
font-size: 0.75em;
font-weight: 500;
background-color: var(--lora-accent);
color: var(--lora-text);
padding: 4px 8px;
border-radius: 12px;
margin-left: 10px;
vertical-align: middle;
animation: fadeIn 0.5s ease-in-out;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
.update-date-badge i {
margin-right: 5px;
font-size: 0.9em;
}
@keyframes fadeIn {
from { opacity: 0; transform: translateY(-5px); }
to { opacity: 1; transform: translateY(0); }
}
/* Dark theme adjustments */
[data-theme="dark"] .update-date-badge {
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.3);
}
/* Re-link to Civitai Modal styles */
.warning-box {
background-color: rgba(255, 193, 7, 0.1);
border: 1px solid rgba(255, 193, 7, 0.5);
border-radius: var(--border-radius-sm);
padding: var(--space-2);
margin-bottom: var(--space-3);
}
.warning-box i {
color: var(--lora-warning);
margin-right: var(--space-1);
}
.warning-box ul {
padding-left: 20px;
margin: var(--space-1) 0;
}
.warning-box li {
margin-bottom: 4px;
}
.input-group {
display: flex;
flex-direction: column;
margin-bottom: var(--space-2);
}
.input-group label {
margin-bottom: var(--space-1);
font-weight: 500;
}
.input-group input {
padding: 8px 12px;
border-radius: var(--border-radius-xs);
border: 1px solid var(--border-color);
background-color: var(--lora-surface);
color: var(--text-color);
}
.input-error {
color: var(--lora-error);
font-size: 0.9em;
min-height: 20px;
margin-top: 4px;
}
[data-theme="dark"] .warning-box {
background-color: rgba(255, 193, 7, 0.05);
border-color: rgba(255, 193, 7, 0.3);
}
/* Privacy-friendly video embed styles */
.video-container {
position: relative;
width: 100%;
padding-bottom: 56.25%; /* 16:9 aspect ratio */
height: 0;
margin-bottom: var(--space-2);
border-radius: var(--border-radius-sm);
overflow: hidden;
background-color: rgba(0, 0, 0, 0.05);
}
.video-thumbnail {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
display: flex;
justify-content: center;
align-items: center;
}
.video-thumbnail img {
width: 100%;
height: 100%;
object-fit: cover;
transition: filter 0.2s ease;
}
.video-play-overlay {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-color: rgba(0, 0, 0, 0.5);
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
transition: opacity 0.2s ease;
}
/* External link button styles */
.external-link-btn {
display: flex;
align-items: center;
gap: 8px;
padding: 10px 20px;
border-radius: var(--border-radius-sm);
font-weight: 500;
cursor: pointer;
transition: all 0.2s ease;
background-color: var(--lora-accent);
color: white;
text-decoration: none;
border: none;
}
.external-link-btn:hover {
background-color: oklch(from var(--lora-accent) l c h / 85%);
}
.video-thumbnail i {
font-size: 1.2em;
}
/* Smaller video container for the updates tab */
.video-item .video-container {
padding-bottom: 40%; /* Shorter height for the playlist */
}
/* Dark theme adjustments */
[data-theme="dark"] .video-container {
background-color: rgba(255, 255, 255, 0.03);
}
/* Example Access Modal */
.example-access-modal {
max-width: 550px;
text-align: center;
}
.example-access-options {
display: flex;
flex-direction: column;
gap: var(--space-2);
margin: var(--space-3) 0;
}
.example-option-btn {
display: flex;
flex-direction: column;
align-items: center;
padding: var(--space-2);
border-radius: var(--border-radius-sm);
border: 1px solid var(--lora-border);
background-color: var(--lora-surface);
cursor: pointer;
transition: all 0.2s;
}
.example-option-btn:hover {
transform: translateY(-2px);
box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
border-color: var(--lora-accent);
}
.example-option-btn i {
font-size: 2em;
margin-bottom: var(--space-1);
color: var(--lora-accent);
}
.option-title {
font-weight: 500;
margin-bottom: 4px;
font-size: 1.1em;
}
.option-desc {
font-size: 0.9em;
opacity: 0.8;
}
.example-option-btn.disabled {
opacity: 0.5;
cursor: not-allowed;
}
.example-option-btn.disabled i {
color: var(--text-color);
opacity: 0.5;
}
.modal-footer-note {
font-size: 0.9em;
opacity: 0.7;
margin-top: var(--space-2);
display: flex;
align-items: center;
justify-content: center;
gap: 8px;
}
/* Dark theme adjustments */
[data-theme="dark"] .example-option-btn:hover {
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.25);
}


@@ -0,0 +1,217 @@
/* Progress Panel Styles */
.progress-panel {
position: fixed;
bottom: 20px;
right: 20px;
width: 350px;
background: var(--lora-surface);
border: 1px solid var(--lora-border);
border-radius: var(--border-radius-sm);
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
z-index: calc(var(--z-modal) - 1);
transition: transform 0.3s ease, opacity 0.3s ease;
opacity: 0;
transform: translateY(20px);
pointer-events: none; /* Ignore mouse events when invisible */
}
.progress-panel.visible {
opacity: 1;
transform: translateY(0);
pointer-events: auto; /* Capture mouse events when visible */
}
.progress-panel.collapsed .progress-panel-content {
display: none;
}
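/* Extra bottom padding leaves room for the progress-percent label that sits below the pause button when collapsed */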
.progress-panel.collapsed .progress-panel-header {
border-bottom: none;
padding-bottom: calc(var(--space-2) + 12px);
}
.progress-panel-header {
padding: var(--space-2);
display: flex;
justify-content: space-between;
align-items: center;
border-bottom: 1px solid var(--lora-border);
}
.progress-panel-title {
font-weight: 500;
color: var(--text-color);
display: flex;
align-items: center;
gap: 8px;
}
.progress-panel-actions {
display: flex;
gap: 6px;
}
.icon-button {
background: none;
border: none;
color: var(--text-color);
width: 24px;
height: 24px;
border-radius: 50%;
cursor: pointer;
display: flex;
align-items: center;
justify-content: center;
opacity: 0.6;
transition: all 0.2s;
position: relative;
}
.icon-button:hover {
opacity: 1;
background: rgba(0, 0, 0, 0.05);
}
[data-theme="dark"] .icon-button:hover {
background: rgba(255, 255, 255, 0.1);
}
.progress-panel-content {
padding: var(--space-2);
}
.download-progress-info {
margin-bottom: var(--space-2);
}
.progress-status {
display: flex;
justify-content: space-between;
margin-bottom: 8px;
font-size: 0.9em;
color: var(--text-color);
}
/* Use specific selectors to avoid conflicts with loading.css */
.progress-panel .progress-container {
width: 100%;
background-color: var(--lora-border);
border-radius: 4px;
overflow: hidden;
height: var(--space-1);
}
.progress-panel .progress-bar {
width: 0%;
height: 100%;
background-color: var(--lora-accent);
transition: width 0.5s ease;
}
.current-model-info {
background: var(--bg-color);
border-radius: var(--border-radius-xs);
padding: 8px;
margin-bottom: var(--space-2);
font-size: 0.95em;
}
.current-label {
font-size: 0.85em;
color: var(--text-color);
opacity: 0.7;
margin-bottom: 4px;
}
.current-model-name {
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
color: var(--text-color);
}
.download-stats {
display: flex;
justify-content: space-between;
margin-bottom: var(--space-2);
}
.stat-item {
font-size: 0.9em;
color: var(--text-color);
}
.stat-label {
opacity: 0.7;
margin-right: 4px;
}
.download-errors {
background: oklch(from var(--lora-warning) l c h / 0.1);
border: 1px solid var(--lora-warning);
border-radius: var(--border-radius-xs);
padding: var(--space-1);
max-height: 100px;
overflow-y: auto;
font-size: 0.85em;
}
.error-header {
color: var(--lora-warning);
font-weight: 500;
margin-bottom: 4px;
}
.error-list {
color: var(--text-color);
opacity: 0.85;
}
.hidden {
display: none !important;
}
/* Mini progress indicator on pause button when panel collapsed */
.mini-progress-container {
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
border-radius: 50%;
pointer-events: none;
opacity: 0; /* Hide by default */
transition: opacity 0.2s ease;
}
/* Show mini progress when panel is collapsed */
.progress-panel.collapsed .mini-progress-container {
opacity: 1;
}
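/* SVG progress ring: completion is conveyed by animating stroke-dashoffset, which is expected to be updated from script */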
.mini-progress-circle {
stroke: var(--lora-accent);
fill: none;
stroke-width: 2.5;
stroke-linecap: round;
transform: rotate(-90deg);
transform-origin: center;
transition: stroke-dashoffset 0.3s ease;
}
.mini-progress-background {
stroke: var(--lora-border);
fill: none;
stroke-width: 2;
}
.progress-percent {
position: absolute;
top: 100%;
left: 50%;
transform: translateX(-50%);
font-size: 0.65em;
color: var(--text-color);
opacity: 0.8;
white-space: nowrap;
}

Some files were not shown because too many files have changed in this diff.